\section{Introduction} The first observation of circumstellar gas around a star with a debris disc dates back to 1975, even before the discovery of debris discs themselves. A spectrum of $\beta$ Pictoris \citep[located at 19.44 $\pm$ 0.05 pc,][]{2007A&A...474..653V} revealed the presence of Ca II in absorption, presumably of circumstellar origin \citep{1975ApJ...197..137S}. The discovery of circumstellar dust around $\beta$ Pic and a handful of other stars by IRAS \citep{1985PASP...97..885A} provided a strong indication of the origin of the gas, and ever since, gas has been searched for around stars with debris discs. Since the $\beta$ Pictoris disc is observed edge-on, it is a good target in which to detect UV absorption lines in the star's spectrum that are due to circumstellar material along the line-of-sight. Elements such as C, O, Na, Mg, Al, Si, S, Ca, Cr, Mn, Fe, and Ni have been detected through their UV lines in $\beta$ Pictoris \citep{2006Natur.441..724R}, while observations of hydrogen have provided upper limits on the column density of H I \citep{1995A&A...301..231F} and H$_2$ \citep{2001Natur.412..706L}. One puzzling observation is that metals that should be strongly affected by radiation pressure (such as Na I) seem to be on Keplerian orbits \citep{2001ApJ...563L..77O}. It was proposed that an overabundance of carbon could explain this behaviour, as carbon is a strong braking agent \citep{2006ApJ...643..509F}. This is because the metals spend the majority of their time ionised, and while ionised, they couple to the ionised carbon, which is in Keplerian rotation as it does not feel radiation pressure. This prediction was confirmed by a Herschel/HIFI C II emission spectrum, which was used to infer an overabundance of carbon and oxygen by a factor of up to 400 relative to solar abundances \citep{2014A&A...563A..66C}. However, the origin of this overabundance was unclear.
This suggests that either C and O are produced preferentially, or other elements are preferentially depleted \citep{2013ApJ...762..114X}. A clue to the origin of the overabundance of C and O in the disc comes from spatially resolved ALMA observations of CO gas orbiting the star. The emission is highly asymmetric, with 30\% of the emission located in a single clump on one side of the star, with kinematics giving the exact position of the gas clump at 85 AU from the star \citep{2014Sci...343.1490D}, possibly colocated with a dust clump detected in the mid-IR \citep{2005Natur.433..133T}. The radial distribution of CO is similar to that of the planetesimal belt, inferred from the continuum emission to extend from 50 to 150 AU, which is thus thought to be the source of the gas. We have now reached the point where enough observations are available for $\beta$ Pic to start tackling some important conundrums. The main questions concern the origin of the gas and its subsequent evolution. The molecular gas observed around $\beta$ Pic cannot be primordial as the photodissociation timescale of the observed CO is $\sim 120$ years \citep{2014Sci...343.1490D} whereas the system is much older \citep[21$\pm$4 Myr,][]{2014MNRAS.438L..11B}. Since the molecular gas is colocated with the dust, it is thought to be secondary, i.e.\ released from volatile-rich solid bodies. CO can be retained on solid bodies at 85 AU (even though the sublimation temperature of CO is 20 K) as it can be trapped in water ice up to 140 K \citep{2003ApJ...583.1058C}. When CO is ejected from solid bodies, it is not expected to recondense onto grains as the dust density is too low in this disc. Photodesorption \citep{2007A&A...475..755G}, vaporisation of dust grains during high-velocity collisions \citep{2007ApJ...660.1541C}, collisions between volatile-rich comets \citep{2012ApJ...758...77Z}, and sublimating comets \citep{1990A&A...236..202B} are the main processes proposed to release gas from solid bodies.
While the photodissociation of CO must release C and O into the gas disc, it has yet to be determined what this implies for the atomic species, and whether, for example, it is possible to explain all observations within one self-consistent model. The atomic gas is expected to evolve viscously; however, \citet{2014A&A...563A..66C} concluded that the C II spectrum, which is sufficiently resolved to give some information on the radial distribution of ionised carbon, is inconsistent with an accretion disc profile. Nevertheless, their model included several simplifying assumptions about the thermodynamic state of the gas disc and about the profile expected for an accretion disc. In this paper, we develop a thermodynamical model of gas in debris discs and apply it to explain the $\beta$ Pictoris gas observations. In this new model, we assume that atomic C and O are produced through photodissociation of CO. We then follow the temporal evolution of atomic carbon and oxygen assuming that they diffuse viscously. The viscosity is parameterised with an $\alpha$ coefficient, as is typical when studying gas evolution in protoplanetary discs \citep{1973A&A....24..337S}. The temperature and ionisation state of C and O are worked out using Cloudy \citep{2013RMxAA..49..137F}, which is a PDR-like model. The radiation impinging on the gas disc is composed of the stellar and interstellar radiation fields. The second section details the numerical model used to model the gas observations of $\beta$ Pictoris. The third section presents our results and shows how this model is able to fit the recent C II and O I Herschel observations as well as a new C I non-detection by APEX in a self-consistent way. Lastly, we discuss our results in section four. \section{Numerical Model for gas evolution}\label{model} To understand the distribution of gas in $\beta$ Pictoris, we develop a numerical model of viscous diffusion of gas in the low density regime expected in debris discs.
We assume that CO is produced in the main belt and quickly photodissociates, leading to an input of gaseous C and O at that radius. We model the subsequent evolution of this gaseous component using standard accretion disc physics. The modelled gas sits in a dust disc, which extends from $\sim$ 50 to $\sim$ 150 AU \citep{2001A&A...370..447A,2014Sci...343.1490D} and might affect the thermal state of the gas. Throughout this paper, we apply our general model to $\beta$ Pic, taking the gas injection location to be $R_0=85$ AU, where the bulk of the CO is located. The CO mass input rate is estimated to be $1.4 \times 10^{18}$ kg/yr \citep{2014Sci...343.1490D}. This corresponds to a carbon input rate of $\dot{M}=0.1\,\mathrm{M}_\oplus$/Myr. We also input the corresponding amount of oxygen, taking C/O = 1 in number density. We take the radiation field $F$ to be composed of the flux of an A6V star $F_\star$ plus the interstellar radiation field $F_i$, which is assumed to be that derived by \citet{1983A&A...128..212M}, $F_0$, multiplied by a constant $X$ to change the amount of UV radiation impinging on the gas disc, so that $F=F_\star+F_i$, where $F_i=X F_0$. The other free parameter is the viscosity parameter $\alpha$ (defined in subsection \ref{fiduc}). \subsection{The accretion disc}\label{fiduc} The gas evolution model developed here is very general and can be applied to a range of situations where gas is injected into a system at a specific location. The model treats the radial evolution of the gas using the standard evolution equation for the surface density, but at each time-step solves for the local vertical structure as a one-zone model. This is a gross simplification, but is adequate for an initial investigation.
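As a quick check of the quoted conversion, the carbon input rate follows from the CO photodissociation rate and the fraction of a CO molecule's mass carried by carbon (a back-of-the-envelope sketch; the 12/28 mass ratio is the only input beyond the numbers above):

```python
M_earth = 5.972e24                 # kg
mdot_CO = 1.4e18                   # kg/yr, CO mass input rate (Dent et al. 2014)
mdot_C = mdot_CO * 12.0 / 28.0     # carbon carries 12/28 of the CO mass
print(round(mdot_C * 1e6 / M_earth, 2))   # -> 0.1 (M_earth/Myr)
```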
The evolution of the gas under viscous diffusion is followed using the equation \citep{1974MNRAS.168..603L,1981ARA&A..19..137P} \begin{equation} \label{eqdif} \frac{\partial \Sigma}{\partial t} = \frac{3}{R} \frac{\partial}{\partial R} \left( \sqrt{R} \frac{\partial}{\partial R} (\nu \Sigma \sqrt{R}) \right) + \dot{\Sigma}_0 \, \delta\!\left(\frac{R}{R_0}-1\right), \end{equation} where $\nu$ is the kinematic viscosity, $\Sigma(R,t)$ the surface density, $R$ the radial variable and $\dot{\Sigma}_0$ the surface density input rate at radius $R_0$. Here $\delta$ is the usual Dirac delta-function. In practice, gas input takes place over a range of radii $\Delta R_0$ and thus $\dot{\Sigma}_0$ is related to the mass input rate $\dot{M}$ in the following manner: $\dot{M}=2 \pi R_0 \Delta R_0 \dot{\Sigma}_0$. The value of $\Delta R_0$ is taken to be one radial size bin and is not important for our model as long as $\Delta R_0/R_0 \ll 1$, since the value of $\dot{\Sigma}_0$ is worked out over the same bin size. We treat $\dot{M}$ as a free parameter. In the absence of any other information we use the standard \citet{1973A&A....24..337S} $\alpha$-parametrisation for the viscosity. Thus we write \begin{equation} \label{nualpha} \nu_\alpha = \alpha c_s H, \end{equation} where $c_s$ is the sound speed and $H$ the local one-zone disc scale height. The value of $\alpha$ and its dependence on disc properties is one of the main uncertainties of current accretion disc theory. Observational evidence from X-ray outbursts and dwarf novae, where discs are fully, or sufficiently, ionised, suggests high $\alpha$ values ranging from 0.1 to 0.4 \citep{2007MNRAS.376.1740K}. However, much lower values are expected to prevail in regions of low ionisation \citep{1998ApJ...492L..75G}.
Depending on the mechanism producing this viscosity in cool protoplanetary discs, the $\langle \alpha \rangle$ produced in numerical simulations can vary between $10^{-4}$ and 1 \citep{1998RvMP...70....1B,2013ApJ...767...30B}, or can even be essentially zero in the so-called dead zones \citep{2012MNRAS.424.1977L}. In order to compute the vertical disc scale-height we make the approximation that there is a single temperature $T(R,t)$ which adequately describes the local disc structure. We then obtain the disc scale-height $H(R,t)$ as $H=c_s/\Omega$, $c_s$ being the sound speed and $\Omega$ the orbital frequency. The sound speed $c_s$ is fixed by the gas temperature $T$ as $c_s=\sqrt{R_g T / \mu}$, with $R_g$ the ideal gas constant and $\mu$ the mean molecular mass, which can be substantially different from that of protoplanetary discs as we expect most of the mass to be in carbon and oxygen rather than hydrogen. The gas density $\rho_\mathrm{g}(R,z)$ is assumed to be constant over a distance $H$ in the vertical direction. Hence, $\Sigma=2 \rho H$. To convert between $\Sigma$ and the particle number density $n$, we use $n = \rho/(\mu m_p)$, where $m_p$ is the proton mass, so that \begin{equation} \label{eqn} n = \frac{ \Omega \Sigma}{2 \mu m_p c_s}. \end{equation} In our numerical simulations, we need to set the boundary conditions. We choose an inner radius of $R_\mathrm{min} = 5$ AU, sufficiently close to the host star that we can follow the disc radii of interest to us. We choose an outer disc radius $R_\mathrm{max} = 1000$ AU, large enough that it has no significant effect on the results. We assume that all material reaching the inner boundary is accreted onto the star or onto the planet $\beta$ Pic b, and thus set $\Sigma(R_\mathrm{min})=0$. For convenience, at the outer boundary, we assume that the material is lost at that point and set $\Sigma(R_\mathrm{max})=0$.
If we know the radial temperature profile of the disc, we are in a position to compute its evolution. In the next subsection we compute the evolution for a given power law temperature profile. Later, in contrast, we compute the local temperature structure in a more physical manner. \subsection{Evolution to the steady state of an $\alpha$ disc with fixed temperature profile}\label{ssalpha} To understand the evolution of an $\alpha$ disc, we first assume that the temperature profile is a power law defined by $T=T_0 (R/R_0)^{-\gamma}$, where $T_0$ = 60 K and $\gamma=0.5$. \begin{figure} \includegraphics[width=8.5cm]{gasevol3.jpeg} \caption{\label{figA} Density profile at different epochs logarithmically spaced between 2 years and $5 \times 10^5$ years (each curve is separated by a factor $\sim$ 8 in time). The temperature is assumed to follow the imposed power law (see subsection \ref{ssalpha}). The last time-step, at $5 \times 10^5$ years, shows the expected steady state accretion and decretion profiles.} \end{figure} Fig.~\ref{figA} shows the evolution resulting from gas injected at $R_0=85$ AU at a constant rate $\dot{M} = 0.1\,\mathrm{M}_\oplus$/Myr. We take $\alpha=0.5$. For this computation we have taken $R_{\rm min} = 5$ AU and $R_{\rm max} = 1056$ AU. We use a radial grid of 400 points equally spaced in $\sqrt{R}$ \citep{1981MNRAS.194..967B}. From time $t=0$, mass is injected steadily and the number density starts growing, creating a spike around 85 AU. At the same time, the gas evolves viscously, creating an accretion disc inwards and a decretion disc outwards. By $t \sim 5 \times 10^5$ years, steady state is reached for the accretion and decretion discs. We note that the fraction $f$ of matter injected at radius $R_0$ that is lost at the outer radius $R_{\rm max}$ is given, for $R_{\rm min} \ll R_0$, by $f = (R_0/R_{\rm max})^{1/2} \sim 0.3$. Thus, for the parameters we have chosen, 70 percent of the input material is accreted onto the central star.
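Both the outward-lost fraction and the timescale on which this simulation settles (a few viscous times at $R_0$) can be verified quickly. The sketch below assumes a stellar mass of 1.75 M$_\odot$ for $\beta$ Pic, a value not stated in this subsection:

```python
import math

G, M_sun = 6.674e-11, 1.989e30
AU, yr = 1.496e11, 3.156e7
k_B, m_p = 1.381e-23, 1.673e-27

# Fraction of injected mass lost through the outer boundary:
f = math.sqrt(85.0 / 1056.0)
print(round(f, 2))                 # -> 0.28, i.e. ~70 percent accreted inward

# Viscous timescale at R0 for T = 60 K, mu = 14, alpha = 0.5:
M_star, mu, alpha, T = 1.75 * M_sun, 14.0, 0.5, 60.0
R = 85.0 * AU
cs = math.sqrt(k_B * T / (mu * m_p))
H = cs / math.sqrt(G * M_star / R**3)
t_visc = R**2 / (alpha * cs * H) / yr
print(f"{t_visc:.0e}")             # -> 1e+05 (years)
```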
The timescale to reach steady state for the disc should be close to the viscous timescale at $R_0$, i.e.\ $t_\nu \sim R_0^2/\nu(R_0)$, which is $\sim 10^5$ years for the parameters of the simulation in Fig.~\ref{figA}. \subsection{The full model} To obtain a more realistic model of the radial temperature behaviour we need to consider the thermal equilibrium of the gas at each radius. We model the thermal state of the gas using Cloudy \citep{2013RMxAA..49..137F}, a spectral synthesis code used to study gas clouds under different conditions. We are then able to follow the viscous evolution of the gas within the disc with a time-dependent, and locally determined, temperature profile, which depends on $R$ with a more complex dependence than a simple power law. Hence $\nu$ also depends on $R$ and is reinjected into Eq.~\ref{eqdif} to compute the diffusion of the gas for the next time step. To compute the lines of interest at a given timestep, we use RADMC-3D when LTE can be assumed, or LIME otherwise (see subsection \ref{emlinec} for more details). Fig.~\ref{figdiagram} explains schematically how the model works. We recall that the model has four free parameters: the input radial location $R_0$, the mass input rate $\dot{M}$, the viscosity parameter $\alpha$, taken to be constant, and the radiation field impinging on the disc $F$, whose main components are the radiation from the central star $F_\star$ and the external interstellar radiation field $F_i$. Comparing these, we shall find that $F_\star$ dominates close to the centre of the disc at $R < 30$ AU if the medium is optically thin in the continuum (and if a standard interstellar radiation field is considered, $F_i=F_0$), while $F_i$ dominates outside that radius, where most of the emission lines of interest to us are produced. \begin{figure} \includegraphics[width=8.5cm]{schemalime.jpeg} \caption{\label{figdiagram} Diagram explaining the coupling between the dynamical and thermal models.
The upper box shows the hydro model that evolves the density $\Sigma(R)$ by solving the diffusion equation in time over $\Delta t$. The new density is then passed to Cloudy, the thermal model, which solves for the new temperature $T(R)$ and ionisation fractions in each cell shown on the diagram in blue. It takes account of the photons coming from the central star (and their extinction along $R$) as well as the interstellar radiation field coming from the top. Hence, emission lines can be predicted using either RADMC-3D in LTE or LIME in NLTE. The viscosity $\nu(R)$ can be worked out from the temperature profile and is input back into the hydro model to start the next timestep.} \end{figure} \subsubsection{A more realistic temperature profile} The temperature profile depends on the local heating and cooling mechanisms, which in turn depend on the gas composition, density, radiation field and density of colliders. To work out the temperature self-consistently and take into account all the physics at play, we use Cloudy, a PDR-like spectral synthesis code that works out the gas state depending on its density, composition and incoming radiation. Cloudy is a 1D model, but as the interstellar radiation field comes from every direction, it would not be correct to model our gas disc as a horizontal slab. Indeed, if the interstellar radiation field were imposed only from the inner parts of the system, where the star is located, some radiation would be blocked due to the continuum optical thickness of the gas disc in the inner parts, and would not reach the outer parts, which is not physical. To have a good representation of the physical problem, we sliced our disc into $N_b = 35$ annuli, equally spaced in the radial variable $X = \sqrt{R}$ in the range 5 AU $ < R < $ 1000 AU (thus $\Delta X = 0.84$ and $\Delta R/R < 0.75$), and impose the flux from the top (see Fig.~\ref{figdiagram}).
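The quoted grid numbers can be reproduced directly (a minimal check of the $\sqrt{R}$ spacing):

```python
import math

Nb = 35
x_in, x_out = math.sqrt(5.0), math.sqrt(1000.0)   # X = sqrt(R), R in AU
dX = (x_out - x_in) / Nb
print(round(dX, 2))               # -> 0.84
# The relative bin width Delta R / R ~ 2 dX / X is largest at the inner edge:
print(round(2 * dX / x_in, 2))    # -> 0.75
```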
Then, the thermal state of each slab is worked out at a specific $R$ by the Cloudy model along the $z$ direction. We assume a constant density profile along $z$ and iterate a few times with Cloudy to adjust the scale height to the actual temperature. The stellar flux is added to each annulus by combining it with the incoming interstellar radiation field (i.e.\ in the model it appears to arrive from the top). That stellar flux has been corrected for attenuation in interior annuli by summing up optical depths; the stellar flux also has a $1/R^2$ dependence due to the geometry. The optical thickness to FUV radiation in the vertical direction could be a problem, as it would unphysically block stellar radiation. However, the optical depth in the vertical direction is always smaller than in the horizontal direction, as the gas density extends over $H$ in the $z$ direction and over $R>H$ in the radial direction, and the density increases inwards. This ensures that if the continuum optical thickness is greater than 1 in the vertical direction, it will always be greater in the radial direction, and the stellar UV flux would never have made it to the midplane location anyway. The different slabs are then gathered and the solution is reinjected into the dynamical model (Eq.~\ref{eqdif}) to work out the viscous evolution for the next timestep. \subsubsection{Composition of the gas}\label{compnum} The gas in debris disc systems does not seem to be primordial and to some extent reflects the composition of the solid bodies composing the debris disc from which the gas was released. In addition to carbon and oxygen, metals have also been observed around $\beta$ Pic \citep[e.g.,][]{2012A&A...544A.134N}. We include these metals in our Cloudy thermal model using solar abundances. These metals are found not to affect the thermal state of the gas in our $\beta$ Pic model but are included in case they have thermal effects in a different regime of the parameter space.
Hydrogen and helium are found to be significantly depleted relative to metals when compared to solar abundances \citep{2010ApJ...720..923Z}. As we assume that carbon and oxygen come from photodissociation of CO, we fix $\mathrm{O}/\mathrm{C} = 1$. In $\beta$ Pic, carbon and oxygen are overabundant by a factor $\sim$ 400 with respect to other species \citep{2013ApJ...762..114X,2014A&A...563A..66C}. Thus, for the metals $X$ (other than C and O), we assume that [$\mathrm{C}/\mathrm{X}$]=$\log_{10}(N_C/N_X)-\log_{10}(N_C/N_X)_{\rm solar}$=2.6, which corresponds to $N_C/N_X=400\,(N_C/N_X)_{\rm solar}$. The gas disc is assumed to be depleted in hydrogen such that [$\mathrm{H}/\mathrm{X}$]=-3, and helium is absent. We note that we tested that adding hydrogen to the system up to $\mathrm{H}/\mathrm{C}$=3 in number density, when C is 400 times solar, had no impact on heating/cooling processes. Since carbon and oxygen are by far the dominant species in the disc, the mean molecular weight is $\mu = 14$. Later, we consider that additional oxygen or hydrogen could arise from water photodissociation. At present, no observations fix the amount of water in this system. However, we find in section \ref{Hline} that increasing the amount of H and O in this way provides a better fit to the observations. \subsubsection{The dust disc}\label{dustdisc} As $\beta$ Pic hosts a debris disc, dust is added to our model to analyse the thermal effects it might have. We determine the effect of photoelectric heating \citep[assuming a standard $q=-3.5$ size distribution, e.g.,][]{2013A&A...558A.121K} on the temperature within the disc using Cloudy.
The dust optical depth profile is taken to be the same as in \citet{2010ApJ...720..923Z}, \begin{equation} \label{taudust} \tau_d(R)=\frac{\sqrt{2} \, \tau_0}{\sqrt{\left(\frac{R}{R_1}\right)^{-\gamma_1}+\left(\frac{R}{R_1}\right)^{\gamma_2}}}, \end{equation} \noindent where $\gamma_1=4$, $\gamma_2=6$, and $\tau_0=2 \times 10^{-3}$ is the optical depth value at $R_1=120$ AU. These values are empirically determined to fit $\beta$ Pic dust scattered light observations with HST/STIS \citep{2000ApJ...539..435H}. We also use this profile to compute the dust radiation field when using NLTE calculations of population levels. \subsubsection{Radiation field}\label{radnum} The main mechanism that heats the gas is the incoming UV flux. The UV radiation field is fairly strong and can affect the electronic states within atoms, ionise them or even photodissociate molecules. The radiation field in our model consists of: \begin{itemize} \item $F_\star$, the stellar flux, for which a \citet{2004astro.ph..5087C} ATLAS stellar atmosphere model is used, \item $F_i$, the interstellar radiation field (IRF) $F_0$ with a \citet{1983A&A...128..212M} prescription, multiplied by a constant $X$, \item the dust radiation field, using the prescription given in subsection \ref{dustdisc} above (used for NLTE calculations). \end{itemize} \noindent Other possible components, such as cosmic ray heating and the cosmic microwave background, are inconsequential. The stellar flux and IRF both provide UV photons that are energetic enough to ionise some atomic species in the gas, in particular neutral carbon, which has an ionisation potential of 11.26 eV. The ionisation fraction of the modelled gas is thus very sensitive to the incoming UV flux at energies higher than 10 eV. Also, due to the Lyman break, there is a strong cut-off in both the stellar and interstellar radiation flux at energies greater than 13.6 eV (912 \AA). \subsubsection{Emission lines}\label{emlinec} To calculate observables (e.g.
images, spectra) we combined our Cloudy models with the line radiative transfer codes RADMC-3D \citep{2012ascl.soft02015D} and LIME \citep{2010A&A...523A..25B}. Both codes take account of the optical thickness of lines in each direction, and for both codes we use the LAMDA database \citep{2005A&A...432..369S} to set the Einstein coefficients, transition energies and collision rates. For C I and O I, the optical depth can be much higher than unity, whilst for C II it only reaches values close to unity in the innermost parts. In RADMC-3D we interpolate Cloudy outputs, and use spherical coordinates in 3D on a grid logarithmic in $r$ with $(r,\theta,\phi) = (300 \times 50 \times 50)$. We assume that the axisymmetric Keplerian gas disc axis is inclined at 88 degrees to the line-of-sight to the observer and has a position angle of 29 degrees \citep{2014Sci...343.1490D}. LTE is a good assumption for C I and C II as their critical densities are equal to $3.9 (T/100K)^{-0.13}$ cm$^{-3}$ and $8.7 (T/100K)^{0.5}$ cm$^{-3}$, whilst the electron densities (electrons being the main collider) derived by our model (see section \ref{predi}) or by previous studies \citep[e.g.][]{2014A&A...563A..66C} are greater than 100 cm$^{-3}$. For the O I line, the critical density is higher, equal to $6.3 \times 10^3 (T/100K)^{-0.03}$ cm$^{-3}$ \citep{1989ApJ...342..306H}, so that an NLTE approach is necessary because our model predicts an electron number density that is always much below this critical density (see Fig.~\ref{figdens}). Instead of using approximations in RADMC-3D (such as the large velocity gradient method) we decided to perform the full calculation using LIME; RADMC-3D also does not yet include radiative exchange with the dust continuum radiation field, which proves to be important for the O I line. We used 61 channels with a resolution of 0.63 km/s and 600 pixels (along the x and y-axes) to produce the data cube, with a spatial resolution of 0.05'' at the distance of $\beta$ Pic.
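The LTE/NLTE choice can be summarised numerically: with the critical-density fits quoted above and an electron density of at least 100 cm$^{-3}$, C II thermalises while O I does not (a sketch using only the scalings from the text, over a few plausible gas temperatures):

```python
def n_crit_CII(T):                 # cm^-3, fit quoted in the text
    return 8.7 * (T / 100.0) ** 0.5

def n_crit_OI(T):                  # cm^-3 (Hollenbach & McKee 1989)
    return 6.3e3 * (T / 100.0) ** -0.03

n_e = 100.0                        # cm^-3, lower bound on the electron density
for T in (20.0, 100.0, 300.0):     # plausible gas temperatures
    assert n_e > n_crit_CII(T)     # C II safely in LTE
    assert n_e < n_crit_OI(T)      # O I well below critical density -> NLTE
```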
The LIME simulations included emission from the dust (see subsection \ref{dustdisc}) as it emits in the far-IR at a level of $\sim$ 17 Jy. We find that the continuum optical depth in the FUV is dominated by C I ionisation. The ionisation cross-section of carbon is fairly simple, as it does not vary with frequency over the restricted range from 11.26 eV (the ionisation potential of carbon) to 13.6 eV (the Lyman break), where it is equal to $\sigma_{\rm ionC}=1.6 \times 10^{-17}$ cm$^2$ \citep{1988ASSL..146...49V}. \subsubsection{Main thermal mechanisms}\label{mechanism} The main ionising radiation in our simulations is the UV flux from the IRF, which yields a high ionisation fraction for the carbon. The rate of heating compared to cooling sets the gas temperature, which determines both the dynamics of the gas and the emission line intensities. The main heating process in the Cloudy simulations is photoionisation of atoms (mainly carbon), with a small contribution from photoelectric heating on dust grains. The main heating mechanism is different from that assumed in previous studies \citep[e.g.][]{2001A&A...373..641K,2007ApJ...655..528B,2010ApJ...720..923Z} because the overabundance of carbon was not known before the Herschel observations (see subsection \ref{dustbp} for more details). The main coolant in our model is the C II fine structure line at 157.7 $\mu$m \citep[as also found by][]{2010ApJ...720..923Z}. Other coolants are negligible. For instance, cooling by CO and CH vibrational/rotational transitions is negligible because these molecules are quickly photodissociated; cooling via Ly $\alpha$ emission is also possible, but in our model the gas disc is partially depleted in hydrogen and, moreover, this effect becomes important only at high temperatures ($T > 5000$ K).
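Given the constant cross-section quoted above, one can estimate the neutral-carbon column at which the disc becomes optically thick to ionising FUV photons (an illustrative number, not quoted in the paper):

```python
sigma_C = 1.6e-17          # cm^2, C I photoionisation cross-section (11.26-13.6 eV)
N_tau1 = 1.0 / sigma_C     # C I column giving unit FUV continuum optical depth
print(f"{N_tau1:.2e}")     # -> 6.25e+16 (cm^-2)
```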
\section{Modelling $\beta$ Pic gas observations}\label{modellingbetapic} The model described in section \ref{model} is applied to the $\beta$ Pic observations described in subsection \ref{obsc} to see whether we can fit these in a self-consistent manner. \subsection{Observations used to fit our model}\label{obsc} In this subsection we detail the four main observations that are used within the paper to fit our model; they are summarised in Table~\ref{tab1}. Two of them are new observations presented here for the first time. \begin{table} \caption{Integrated emission flux for the observations used in this paper.} \begin{center} \begin{tabular}{|l|c|c|} \hline \hline Element & Central & Flux \\ & Wavelength ($\mu$m) & (Jy km/s) \\ \hline CO & 867.5 & 6.6$\pm$0.7 \\ C II & 157.7 & 372$\pm$10 \\ C I & 609.7 & $<$ 14 \\ O I & 63.18 & 110$\pm$15 \\ \hline \label{tab1} \end{tabular} \end{center} \end{table} \begin{itemize} \item We use the ALMA J=3-2 $^{12}$CO observations \citep[observed at 867.5 $\mu$m,][]{2014Sci...343.1490D} to fix two of our free parameters. The resolution was $\sim$ 12 AU (using 27 antennas with projected baseline lengths from 15 to 380 m) and the spectral resolution 0.85 km/s. The resulting CO image shows a broad belt of CO gas from 50 to 150 AU (colocated with the parent belt of solid bodies) and a peak around 85 AU. No gas emission is observed inside 50 AU. From the estimate of the total mass of CO, and assuming that CO is destroyed by photodissociation in 120 years, the authors derive a CO input rate in the system of $\sim 1.4\times 10^{18}$ kg/yr. Accordingly, we define $\dot{M}_0 = 0.1$ M$_\oplus$/Myr and $R_0=$ 85 AU, but allow $\dot{M}$ to vary from $\dot{M}_0$ by a factor of 10. \item \textit{Herschel} HIFI \citep{2010A&A...518L...6D} observations of the C II $^2$P$_{3/2}$-$^2$P$_{1/2}$ transition at 157.7 $\mu$m were discussed in detail in \citet{2014A&A...563A..66C}, and the spectrum (presented in red in Fig.~\ref{figCII}) was kindly provided to us by the authors.
We use the horizontal polarisation beam and binned the channels to a width of 0.63 km/s. The total emission line flux is estimated to be $(2.36 \pm 0.06) \times 10^{-14}$ erg/s/cm$^2$ ($\sim$ 372 Jy km/s), with an additional calibration uncertainty of $\sim$ 10\% \citep{2012A&A...537A..17R}. \begin{figure} \centering \includegraphics[width=8.5cm]{plotc2b.jpeg} \caption{\label{figCII} C II emission line profile predicted for our best-fit model (black) compared to the Herschel observation (red).} \end{figure} \item The C I $^3$P$_1$-$^3$P$_0$ transition at 492.16 GHz was observed on 31 August 2015 with the APEX-3 Swedish Heterodyne Facility Instrument (SHeFI) receiver mounted on the Atacama Pathfinder EXperiment (APEX) telescope, as part of observing program 096.F-9328. Absolute calibration was performed using the chopper wheel method \citep{1976ApJS...30..247U}, and standard pointing measurements were carried out to ensure a pointing accuracy of $<$1.5" RMS. The telescope beam at 492 GHz is 12.7" in size (FWHM), and the spectral resolution obtained with the XFFTS spectrometer was 0.046 km/s. We spent a total of 20.1 minutes on source in good observing conditions (PWV $\sim$0.5 mm), allowing us to reach an RMS noise level of $\sim$139 mK per 0.046 km/s channel. The post-processing simply consisted of visual inspection and flagging of individual spectra, which were then averaged together with a weight equal to the reciprocal of the system temperature in each channel. We then applied a simple polynomial baseline subtraction around the spectral region of interest to remove any remaining background signal, and applied spectral smoothing to increase the signal-to-noise ratio (SNR) of the C I line. The resulting spectrum is shown in red in Fig.~\ref{figCI}. The APEX non-detection implies that the C I integrated flux at 609 $\mu$m is smaller than 14 Jy km/s.
\begin{figure} \centering \includegraphics[width=7.5cm]{CIApex.png} \caption{\label{figCI} APEX C I emission line profile observed (non-detection in red) and predicted for our best-fit model (in black).} \end{figure} \item We also retrieved archival \textit{Herschel} \citep{2010A&A...518L...1P} observations of the O I $^3$P$_1$-$^3$P$_2$ transition at 63.18 $\mu$m from the \textit{Herschel} Science Archive. These were carried out on 22 December 2009 using the PACS instrument \citep{2010A&A...518L...2P} in single pointed, chop/nod spectroscopic mode. A 1D spectrum was extracted from the central 9.4" pixel of the archival fully reduced, \textit{Level 2} rebinned PACS data cube, following the procedure outlined in the PACS Spectroscopy Data Reduction Guide. For each spectral channel, this includes a point source correction to take into account the shape of the PACS spatial beam, and a further rescaling by the total flux in the central 3x3 pixels to avoid missing any slightly extended disc emission. The continuum level measured through polynomial baseline fitting at 63.18 $\mu$m is 16.6$\pm$1.8 Jy, in line with the 70 $\mu$m measurement (16.0$\pm$0.8 Jy) from PACS photometry \citep{2010A&A...518L.133V}. The final baseline-subtracted spectrum, displayed in Fig.~\ref{figherschel}, has a channel width of 0.0023 $\mu$m and a spectral response profile well-approximated by a Gaussian of FWHM $\sim$0.018 $\mu$m (87.5 km/s at the wavelength of the O I line). The emission line is not spectrally resolved, but a clear excess is present. By fitting a Gaussian with a width that matches the instrumental resolution, we obtain an integrated line flux of 110$\pm$15 Jy km/s ($(17.4 \pm 2.3) \times 10^{-15}$ erg/s/cm$^2$), where the error bars include the 11\% flux calibration accuracy.
We note that this value is consistent with that reported by \citet{Bran2016} (Table 1), once we take into account that the authors extracted the 1D spectrum directly from the central PACS spaxel, without applying any point source correction (hence their per beam units) nor our rescaling to account for extended emission outside the central spaxel. \end{itemize} \begin{figure} \centering \includegraphics[width=7.5cm]{OIline.jpeg} \caption{\label{figherschel} Herschel/PACS observation of the O I emission line profile at 63 microns centred on the star's velocity. The continuum flux of $\sim$ 16.6 Jy has been subtracted in this plot.} \end{figure} \subsection{$\chi^2$ analysis to fit the C II Herschel spectrum}\label{xhi2} \begin{figure*} \centering \begin{minipage}{.5\textwidth} \centering \includegraphics[width=0.9\linewidth]{xhi2mod2.jpeg} \end{minipage}% \begin{minipage}{0.5\textwidth} \centering \includegraphics[width=0.9\linewidth]{xhi2isv2mod2.png} \end{minipage} \caption{\label{figchi2} Reduced $\chi^2$ map: \textit{Left}: $F_i/F_0=5$, $0.0023<\alpha<3$ and $0.1<\dot{M}/\dot{M}_0<10$. \textit{Right}: $\dot{M}/\dot{M}_0=1$, $0.0023<\alpha<13.9$ and $0.1<F/F_0<100$. Models to the right of the green line respect the C I non-detection by APEX (see subsection \ref{CIfit}). The blue points show the different models used to compute Fig.~\ref{figionisat}. The red dot shows the best-fit model described in subsection \ref{predi}.} \end{figure*} Simulations that cover the parameter space of $\dot{M}$, $\alpha$ and $F_i$ were run from $t=0$, where the disc is devoid of gas, until steady state is reached. We do not present the transient evolution as the age of $\beta$ Pic is much greater than the typical viscous timescale that we find for our best-fit model, and so we expect the gas disc to be at steady state (unless gas production has only started very recently). 
We recall that in our fiducial model the carbon gas is input at $R_0$=85 AU at a rate $\dot{M}_0 = 0.1$ M$_\oplus$/Myr, but given uncertainties in the CO mass determination, $\dot{M}$ can vary by a factor of a few from $\dot{M}_0$ and is thus left as a free parameter \citep{2014Sci...343.1490D}. The three free parameters $\dot{M}$, $\alpha$ and $F_i$ are constrained by comparison with the C II observation in Fig.~\ref{figCII}. We compare our numerically evaluated synthetic spectrum (using RADMC-3D) to the C II HIFI spectrum using a $\chi^2$ analysis. We first probe the parameter space in Fig.~\ref{figchi2} (left) for a fixed $F_i=5F_0$ and varying $\alpha$ and $\dot{M}$ with $0.0023<\alpha<3$ and $0.1<\dot{M}/\dot{M}_0<10$ (15 and 10 logarithmically spaced bins respectively). The reduced $\chi^2_r$ for a given model was calculated as $\chi^2_r=1/N_\textrm{\tiny dof} \times \sum_i (o_i-c_i)^2/\sigma^2$, where $o_i$ is the observed flux and $c_i$ the flux given by the model being tested, evaluated at each point $i$ along the spectrum's $x$-axis in Fig.~\ref{figCII}. For this observation, $\sigma=6\times 10^{-16}$ erg/s/cm$^2$ and the number of degrees of freedom $N_\textrm{\tiny dof}=N-N_\textrm{\tiny fp}=20$, $N$ being the number of points used on the spectrum and $N_\textrm{\tiny fp}=3$ the number of free parameters. We find that there are many good fits to the data as there is a degeneracy between $\alpha$ and $\dot{M}$. This is expected, since in steady state $\Sigma \propto \dot{M}/\alpha$. Assuming that $\dot{M}=0.1$ M$_\oplus$/Myr within a factor of two (from ALMA observations), the best fits are obtained for $0.2 < \alpha < 1$ (when $F_i/F_0$ is fixed equal to 5). However, the best fit depends on the UV radiation field $F_i$ impinging on the disc, which is not well known. 
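The reduced $\chi^2$ statistic above is straightforward to evaluate; a minimal Python sketch (the arrays below are illustrative, not the actual spectra):

```python
import numpy as np

def reduced_chi2(observed, model, sigma, n_free_params=3):
    """chi^2_r = sum((o_i - c_i)^2 / sigma^2) / (N - N_fp)."""
    observed = np.asarray(observed, dtype=float)
    model = np.asarray(model, dtype=float)
    n_dof = observed.size - n_free_params  # here N = 23, N_fp = 3 -> N_dof = 20
    return np.sum((observed - model) ** 2 / sigma ** 2) / n_dof

# Illustrative example: a model matching the data to within the noise
# should give chi^2_r of order unity.
rng = np.random.default_rng(0)
sigma = 6e-16                        # per-channel uncertainty [erg/s/cm^2]
model = np.linspace(0.0, 5e-15, 23)  # hypothetical model fluxes
observed = model + rng.normal(0.0, sigma, model.size)
print(reduced_chi2(observed, model, sigma))  # of order 1 for a good fit
```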
To test the dependence on the IRF, we fix $\dot{M}/\dot{M}_0=1$ and create a second $\chi^2$ map, shown in Fig.~\ref{figchi2} (right), probing models with $0.0023<\alpha<13.9$ and $F_i/F_0$ varying from 0.16 to 100. We do not explore the full parameter space for high values of $\alpha$ and $F_i$ but restrict the study to the relevant parameters (195 simulations, as seen in Fig.~\ref{figchi2}, right). We find that our results are strongly dependent on the amount of UV photons impinging on the $\beta$ Pic disc. We note that the best fits are for $3<F_i/F_0<100$. The excess of UV flux compared to the standard IRF could come from the star or the environment close to the star (see discussion). To explain the location of the best-fit models in Fig.~\ref{figchi2} (right), note that we are keeping $\dot{M}$ constant but changing $\alpha$, so the density is lower for higher $\alpha$. Despite the lower density, it can still be possible to fit the C II line if the ionisation fraction is increased, which can be achieved by increasing the UV radiation. This is indeed what happens in the models, as shown in Fig.~\ref{figionisat}. Very large values of $\alpha$ cannot fit the C II line because the ionisation fraction cannot exceed one. The lack of good fits for $F_i/F_0 < 4$ has a different origin: the higher densities cause the disc to become optically thick to FUV photons in the inner regions. The absence of C II at high velocities then also changes the shape of the line, which otherwise fits very well (see Fig.~\ref{figCII}). We emphasise that these two maps provide sufficient information to cover the whole parameter space of $\dot{M}/\dot{M}_0$, $F_i/F_0$, and $\alpha$ due to the degeneracy between $\alpha$ and $\dot{M}$, i.e., we do not need to create a 3D $\chi^2$ map. 
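The degeneracy between $\alpha$ and $\dot{M}$ invoked above follows directly from the steady-state accretion disc relations; schematically, using the standard $\alpha$-prescription (with $c_s$ the sound speed, $H$ the scale height, and a fixed temperature structure),
\begin{equation}
\dot{M} = 3\pi\nu\Sigma, \qquad \nu = \alpha c_s H
\quad\Rightarrow\quad
\Sigma = \frac{\dot{M}}{3\pi\alpha c_s H} \propto \frac{\dot{M}}{\alpha},
\end{equation}
so that rescaling $\dot{M}$ and $\alpha$ by the same factor leaves the surface density, and hence the predicted line fluxes, essentially unchanged.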
In steady state, a decrease in $\alpha$ is equivalent to an increase in $\dot{M}$, so that one could create a new map for $\dot{M}/\dot{M}_0=2$ (the upper limit derived from the ALMA CO observation) by shifting the map in Fig.~\ref{figchi2} (right) to the left by a factor 2 in $\alpha$. Note that $\dot{M}$ could be even greater if some other molecules such as CH$_4$ or CH$_3$OH are also present, in which case the required $\alpha$ value would be smaller (see the discussion). We are left with a range of models that still fit the C II observation, and more observations must be fed into the model to single out the best fit. The next two subsections show how the APEX C I non-detection and the Herschel/PACS O I spectrum help to constrain it. \begin{figure} \centering \includegraphics[width=7.5cm]{ionisa.jpeg} \caption{\label{figionisat} Carbon ionisation fraction for different $\alpha$ and impinging UV flux for models represented with a blue dot in Fig.~\ref{figchi2} (right), for which the parameters equal [$\alpha$,$F_i/F_0$]=[(0.14,1.6);(0.39,4);(1.1,10);(1.8,25);(3,63)] from bottom to top lines.} \end{figure} \subsection{How to also fit the APEX C I non-detection?}\label{CIfit} In Fig.~\ref{figchi2} (right), one can see that the best fits for the C II line still allow two directions: towards a smaller impinging UV flux and smaller $\alpha$, or towards a greater UV flux and higher $\alpha$. As discussed in subsection \ref{xhi2}, the best fits are achieved by changing the ionisation fraction, so while the C II content remains fixed along the black region of Fig.~\ref{figchi2} (right), the higher $\alpha$, $F_i/F_0$ models have lower overall surface density and higher ionisation, and so less C I. For each best-fit model in Fig.~\ref{figchi2} (right), we compute the C I line and compare the total integrated flux to the limit imposed by the non-detection (14 Jy km/s). The non-detection implies that only models to the right of the green line in Fig.~\ref{figchi2} (right) are allowed. 
We find that the models most consistent with the C I non-detection are those with lower C I densities, i.e.\ those with higher UV flux impinging on the disc and greater $\alpha$. We are left with only 4 models that fit both the C I non-detection and the C II spectrum. We choose as best fit the model with the smallest $\alpha$ value that satisfies both constraints. The chosen best fit is shown as a red dot in Fig.~\ref{figchi2} (right). We thus find that our best-fit parameters are $\dot{M}/\dot{M}_0=1$, $\alpha=1.5$, $F_i/F_0=60$. In Figs.~\ref{figCII} and \ref{figCI} we overplot our best-fit model on the Herschel C II line and the APEX non-detection respectively. Our predicted emission lines are in black and observations in red. For C II, the total emission line flux computed by our model for this best fit is 365 Jy km/s, which is within 2\% of the observed value of 372 Jy km/s. The best fit not only takes the total integrated flux into consideration but also the shape of the emission line and its peak value through our $\chi^2$ analysis. As for C I, we find a total flux of 11 Jy km/s, below the 3$\sigma$ upper limit of 14 Jy km/s imposed by the APEX non-detection. \citet{2014A&A...563A..66C} predicted a C I total flux of 55 Jy km/s (their multiple ring model), which would have been observed by APEX if true. This non-detection and best fit will enable a much more constrained prediction for ALMA observations of C I (see subsection \ref{predi} for details). The O I spectrum can now be used to check the consistency of our results or even to derive new quantities, as explained below. \subsection{How to also fit the Herschel O I spectrum and derive the hydrogen content?}\label{Hline} Fig.~\ref{figOIline} shows the predicted O I emission line at 63 microns for our best-fit model. The critical density of O I is of the order of $10^{4}$ cm$^{-3}$, which is much higher than the electron density derived with our model everywhere in the gas disc. 
Hence, LIME was used to make a full NLTE calculation of the O I line. The O I line observed by Herschel is not spectrally (nor spatially) resolved, so only the total emission flux predicted by our model should be compared, not the shape of the line. As noted in Table \ref{tab1}, the total integrated flux observed is equal to 110 $\pm$ 15 Jy km/s. While the density suggests that NLTE calculations are required, such a calculation is complicated by the lack of collision coefficients for collisions of O with species other than e$^-$ and H, while our fiducial model contains no H. For our best-fit model, we first calculate the O I line in NLTE with LIME using electrons as the only colliders with oxygen. We use the electron density predicted for our best-fit model as well as the dust-to-gas mass ratio (see Fig.~\ref{figdens}a,f). The NLTE calculation with LIME gives a total integrated flux of $-2$ Jy km/s (net absorption), far below the observed value. The resulting line is shown in Fig.~\ref{figOIline} (dotted). The line shows both emission at large velocities and absorption in the centre of the line. This can be understood by looking at Fig.~\ref{figdens}a: most of the O I emission comes from the inner part of the system, where the medium is optically thick. We also check that the continuum value derived is close to the observed value (see subsection \ref{obsc}). However, we can expect that the addition of more colliders in the calculation will allow the observation to be fitted, since an LTE calculation gives a flux 22 times that observed. Since collision rates with C I, C II and O I are unknown, here we ask what amount of H is required to fit the observation. We consider this hydrogen component within the context of a model in which it arises from water released in the same process as the release of CO. 
H$_2$O photodissociates even faster than CO \citep[e.g.][]{2015MNRAS.447.3936M}, and so this would result in hydrogen and some more oxygen being released into the gas disc. These atomic species have ionisation potentials of 13.6 eV and so stay neutral. H I is not pushed by radiation pressure and should have a similar evolution to O I. With the help of the Herschel/PACS O I line, one can then assess the amount of H$_2$O that must be released along with CO to fit the line. We add the hydrogen into LIME as a new collisional partner along with the electrons and quantify the amount of H$_2$O required to fit the observation. In the process, we also add the right amount of oxygen released from water to the amount of oxygen released from CO. We find that fixing a ratio H/C $\sim$ 3 gives the best fit to the data, which translates into an H$_2$O/CO ratio of $\sim$ 1.5. The resulting line is shown in Fig.~\ref{figOIline} (solid) and the total flux found is equal to 110 Jy km/s (i.e.\ close to the observed value). Although the line shown in Fig.~\ref{figOIline} cannot be observed with currently available instruments, future missions such as SPICA are expected to have a higher resolution and sensitivity than Herschel to measure the O I and C II lines \citep{2009ExA....23..193S}, for which our model can be used to make predictions. Assuming this scenario, one should add the oxygen coming from water to the original oxygen number density coming from CO. Overall, the oxygen number density must be multiplied by a factor $\sim$ 2.5 to obtain the final amount of oxygen. This translates into O/C $\sim$ 2.5 and O/H $\sim$ 1 (see Table~\ref{tab4}). Using the total CO mass derived by \citet{2014Sci...343.1490D} and the photodissociation timescales for H$_2$O and CO, one finds that the total H$_2$O mass in the gas phase is $\sim$ $2 \times 10^{-9}$ M$_\oplus$. 
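The conversion between the fitted H/C ratio and the implied H$_2$O/CO ratio is simple stoichiometry (each CO contributes one C and one O; each H$_2$O contributes two H and one O); a quick check in Python:

```python
def gas_ratios(h2o_per_co):
    """Number ratios in the gas, given the H2O/CO ratio in the parent bodies.

    Each CO releases 1 C and 1 O; each H2O releases 2 H and 1 O
    (assuming full photodissociation of both molecules).
    """
    h_per_c = 2.0 * h2o_per_co   # all the H comes from water
    o_per_c = 1.0 + h2o_per_co   # O from CO plus O from water
    return h_per_c, o_per_c, o_per_c / h_per_c

h_c, o_c, o_h = gas_ratios(1.5)
print(h_c, o_c, round(o_h, 2))  # 3.0 2.5 0.83  (i.e. O/H ~ 1)
```

The factor by which the oxygen density must be multiplied, $1+1.5=2.5$, matches the value quoted above.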
This does not change our other results, as we checked that the temperature (which fixes the viscosity) does not vary assuming this new oxygen (and hydrogen) content. If this model is correct in that the extra colliders needed to explain the O I line come from H$_2$O, the gas disc is depleted in hydrogen compared to what would be expected from comets with Solar System-like compositions, where H$_2$O/CO varies from 3 to 250 \citep{2011ARA&A..49..471M}. Thus, we deduce that a typical Solar System-like comet composition can be ruled out for $\beta$ Pic. Even if one moves towards higher UV flux in Fig.~\ref{figchi2} (right), the best fits (in black, above the green line) do not have a different amount of oxygen, as $\alpha$ remains constant when the UV flux increases (and oxygen stays neutral). Moreover, adding more colliders with oxygen, such as C I, C II or O I, would only lower the total amount of hydrogen (i.e., H$_2$O in the parent bodies) required to fit the OI line. \begin{figure} \centering \includegraphics[width=7.5cm]{OIlime.png} \caption{\label{figOIline} O I emission line profile at 63 microns for our best-fit model. The dotted line is when only electrons are considered as collisional partners. The solid line is when a water component with H/C $\sim$ 3 is added. In the latter case, the model fits the total integrated flux of the observed line shown in Fig.~\ref{figherschel}. The continuum flux, equal to $\sim$ 16.6 Jy, is not subtracted in this figure, unlike in Fig.~\ref{figherschel}.} \end{figure} \subsection{First measurement of $\alpha$ in a debris disc}\label{firstm} As explained in subsection \ref{xhi2}, in steady state a decrease in $\alpha$ is equivalent to an increase in $\dot{M}$. Thus, if $\dot{M}/\dot{M}_0=2$, the black region of Fig.~\ref{figchi2} (right) would shift to the left by a factor 2 in $\alpha$. 
$\dot{M}_0$ is known within a factor 2 from the ALMA observation, but some carbon could come from other molecules such as CH$_4$ or CH$_3$OH, although these are thought to represent only a small percentage of the CO abundance in comets. However, $\dot{M}$ could vary by another factor $\sim$ 1.5 assuming an extreme comet composition. For a fixed $F_i/F_0$, this provides a way to constrain $\alpha$, as a factor 3 higher $\dot{M}$ would have the same effect as a factor 3 smaller $\alpha$. Hence, for the highest mass input rate, $\alpha$ could go as low as 0.5 and still agree with observations. $F_i/F_0$ could also be lower if the C I spatial distribution is affected by the presence of planets. Overall, taking into account the observational uncertainties and the inherent model uncertainties, we estimate that $\alpha$ needs to be greater than $\sim$ 0.1 in $\beta$ Pic to explain all the observations. \subsection{Details of Best-Fit Model}\label{predi} \begin{table} \caption{Parameters of the best-fit model.} \begin{center} \begin{tabular}{|l|c|} \hline \hline $\alpha$ & 1.5 \\ $\dot{M}$ & 0.1 M$_\oplus$/Myr \\ $F_i/F_0$ & 60 \\ Star & A6V \\ Dust & See Eq.~\ref{taudust}\\ \hline \label{tab2} \end{tabular} \end{center} \end{table} Fig.~\ref{figdens} provides a summary of the structure of the gas disc predicted by our model in the case without extra oxygen coming from water. The best-fit parameters are $\alpha=1.5$, $F_i/F_0=60$ and an injection rate equal to $\dot{M}_0 = 0.1$ M$_\oplus$/Myr (see Table~\ref{tab2}), which is the value given by the ALMA CO observation \citep{2014Sci...343.1490D}. Fig.~\ref{figdens}a gives the spatial profiles of C I (red), C II (black) and oxygen (yellow). The electron density is superimposed on the C II density, as all electrons come from the photoionisation of C I in our model. The C I density is smaller than that of C II when $R>20$ AU due to the increasing ionisation fraction with $R$. 
The O I density is the sum of the C I and C II densities, as almost no oxygen is ionised and C/O=1. In the case with extra oxygen coming from water, the O I density shown in Fig.~\ref{figdens}a should be multiplied by $\sim$ 2.5. One notable feature of the observed C II emission line is that the velocity gradient is very steep and does not show the broad wings that would be expected from a naive accretion disc profile. Instead, the C II density within 80 AU scales as $R^{-1.15}$ (i.e.\ $\Sigma \propto nH \propto R^{-0.15}$), as can be seen in Fig.~\ref{figdens}a, which is shallower than the usual $\Sigma \propto R^{-1}$ assumed in other studies \citep[e.g.][]{2014A&A...563A..66C}. This shallow profile arises because of the decrease in ionisation and increase in temperature towards smaller radii. The reduced ionisation fraction in the inner regions (see Fig.~\ref{figdens}c) is a consequence of the gas disc becoming optically thicker to FUV radiation as the total C I density increases (since C I absorbs the UV flux before it reaches the midplane), which in turn reduces the amount of C II in the inner regions. Thus, while \citet{2014A&A...563A..66C} concluded that the shape of the C II line is inconsistent with an accretion disc profile, we find that the profile from an accretion disc is a good fit to that observed. The C II densities obtained with our best-fit model can be compared to those derived from a previous, simpler model. Fitting the C II HIFI emission line with a series of four rings, \citet{2014A&A...563A..66C} obtained a best fit to the total carbon mid-plane density (see their Table 3 and the associated large error bars). They obtained very high ionisation fractions (higher than 0.5), similar to those derived from our model. 
The most reliable comparison is within their 30-150 AU annulus, where their error bars are smaller: there they find a density of $\sim 100$ cm$^{-3}$, in good agreement with our best fit, where the density is equal to $\sim$ 110 cm$^{-3}$ at $\sim$ 100 AU. Our model extends these values to a range of densities as a function of radius, giving an electron and a C II density varying as $R^{-1.15}$ when $R<100$ AU. The density then falls off more quickly, reaching 10 cm$^{-3}$ at R $\sim$ 230 AU. Fig.~\ref{figdens}b gives the temperature profile expected for the gas. From 10 to 20 AU, the temperature scales as $R^{-1/3}$ and then falls off as $R^{-0.8}$ up to 200 AU before reaching a plateau at $\sim$ 20 K. This profile can be explained by looking at the ionisation profile, which also shows three different regimes: inside 20 AU, between 20 and 200 AU, and beyond 200 AU. The decrease in temperature is mainly due to the ionisation fraction becoming higher towards the outer region, so that cooling by C II increases. In the inner part, another complication comes into play, since the increased temperature means that O I also contributes to the cooling, reaching up to 40\% of the total cooling rate. The corresponding scale height is given as a function of $R$ in Fig.~\ref{figdens}d. Here also three regimes can be distinguished. Between 20 and 200 AU, $H/R$ is shallow and scales $\propto R^{0.1}$. A similar almost linear variation of $H$ with $R$ was observed for Fe I with the VLT, giving an indication that the gas could be well mixed \citep{2012A&A...544A.134N}. However, the $H/R$ observed for Fe I is of the order of 0.2, which is higher than expected from our model. This difference might imply that the temperature of Fe is totally decoupled from that of the main gas disc. In the inner region, as the temperature drops, $H$ gets smaller, and so does $H/R$, reaching 0.028 at 10 AU. 
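As an order-of-magnitude check of these aspect ratios, $H/R = c_s/v_K$ with $c_s=\sqrt{k_B T/\mu m_H}$; a short Python sketch (the temperature, mean molecular weight and stellar mass below are rough assumptions on our part, not the model's exact values):

```python
import math

# Physical constants (cgs)
K_B = 1.380649e-16   # Boltzmann constant [erg/K]
M_H = 1.6726e-24     # hydrogen mass [g]
G = 6.674e-8         # gravitational constant [cm^3 g^-1 s^-2]
M_SUN = 1.989e33     # solar mass [g]
AU = 1.496e13        # astronomical unit [cm]

def aspect_ratio(r_au, temp_k, mu, m_star_msun):
    """H/R = c_s / v_K for a vertically isothermal disc."""
    c_s = math.sqrt(K_B * temp_k / (mu * M_H))               # sound speed
    v_k = math.sqrt(G * m_star_msun * M_SUN / (r_au * AU))   # Keplerian speed
    return c_s / v_k

# Assumed values: T ~ 150 K at 10 AU, mu ~ 14 (atomic C/O gas),
# and M_star ~ 1.75 M_sun for beta Pic.
print(round(aspect_ratio(10.0, 150.0, 14.0, 1.75), 3))  # ~0.024
```

The result is comparable to the model's $H/R \approx 0.028$ at 10 AU, given the crude assumptions.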
Unfortunately, the regions inwards of 40 AU in the spatially resolved observation of Fe I are very noisy due to the PSF subtraction, and a reliable $H/R$ value could not be extracted there. \begin{figure*} \centering \includegraphics[width=17.5cm]{densnew.jpeg} \caption{\label{figdens} From top to bottom and left to right, a) Densities of C I (red), C II or electrons (black) and O I (yellow); b) Temperature as a function of $R$; c) Ionisation fraction as a function of $R$; d) $H/R$ as a function of $R$; e) Cumulative mass of C I (red), C II (black) and O I (yellow) as a function of $R$; f) Dust-to-gas ratio as a function of $R$.} \end{figure*} The electron density in $\beta$ Pic is represented by the black line in Fig.~\ref{figdens}a and follows the C II density, since in the model all electrons are found to come from the photoionisation of carbon. Thus, the electron density falls off as $R^{-1.15}$. An alternative, independent method to derive the electron density involves using the ratio of the CO J=2-1 \citep{2016MNRAS} and CO J=3-2 line fluxes, since this ratio is set by the density of colliders. If these are assumed to be electrons, the radial dependence of the electron density from our model agrees with that derived using this alternative method within the error bars of the observations \citep{2016MNRAS}. However, the absolute electron density predicted by these two methods differs by a factor $\sim$ 2, which could suggest that electrons are not the only colliders with CO, and that C II and O I, for instance, could have an impact on the overall CO excitation. The cumulative masses of C I, C II and O I over the whole disc are plotted in Fig.~\ref{figdens}e. The total mass observed for a given field of view (FOV) or FWHM can be obtained by reading off the value where $R$ roughly equals the maximum FOV, which is useful for comparison with masses derived from observations. 
The total mass over the whole gas disc is $2 \times 10^{-3}$ M$_\oplus$ for C I, $1.3 \times 10^{-2}$ M$_\oplus$ for C II, and $2 \times 10^{-2}$ M$_\oplus$ for O I (see Table~\ref{tab3}). This should not be compared directly to the millimetre dust mass of $6 \times 10^{-2}$ M$_\oplus$ \citep{2009A&A...508.1057N}, since the bulk of C I is in the inner region, whereas the dust mass is located mainly between 50-130 AU. The radial dependence of the dust-to-gas mass ratio is plotted in Fig.~\ref{figdens}f, where we assumed a standard size distribution with $q=-3.5$, from $s_{\rm min}$=5 $\mu$m (blow-out size) to $s_{\rm max}=1$ mm, and the optical depth profile described in subsection \ref{dustdisc}. We used the model to make predictions for ALMA observations of C I. The setup is analogous to that employed for the ALMA prediction in \citet{2014A&A...563A..66C}, namely 1.24 h on-source time spread between 3 pointings across the disc midplane (6" apart), in compact configuration, and standard weather conditions for 492.16 GHz observations (0.472 mm precipitable water vapour). We used the \textit{simobserve} task within the CASA software version 4.5.0 \citep{2007ASPC..376..127M} to produce the visibility dataset, then used the CLEAN algorithm (with natural weighting of the visibilities) to produce a synthetic data cube with a final channel width of 0.63 km/s and a synthesised beam of 0.84''$\times$0.83''. Fig.~\ref{figalmaC1} shows the resulting spectrally integrated (i.e.\ moment-0) image of the CI $^3$P$_1$-$^3$P$_0$ emission from the disc, for which the recovered integrated line flux is only slightly lower than predicted by our model (12 as opposed to 13.5 Jy km/s). Comparison with the observed total flux and radial/vertical brightness distribution will provide a strong test of the model predictions. 
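For the $q=-3.5$ size distribution assumed above, the dust mass is dominated by the largest grains, since $dM \propto s^{-3.5}\,s^{3}\,ds \propto s^{-0.5}\,ds$, i.e.\ $M(<s) \propto \sqrt{s}$; a quick illustration (grain-size bounds as in the text):

```python
import math

def mass_fraction_below(s, s_min, s_max):
    """Fraction of dust mass in grains smaller than s, for n(s) ~ s^-3.5.

    Integrating s^-3.5 * s^3 ds gives M(<s) ~ sqrt(s) - sqrt(s_min).
    Sizes in any consistent unit (microns here).
    """
    return (math.sqrt(s) - math.sqrt(s_min)) / (math.sqrt(s_max) - math.sqrt(s_min))

# s_min = 5 um (blow-out size), s_max = 1 mm, as in the text.
frac = mass_fraction_below(100.0, 5.0, 1000.0)  # mass in grains < 100 um
print(round(frac, 2))  # 0.26: most of the mass sits near s_max
```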
\begin{figure} \centering \includegraphics[width=7.5cm]{betapicALMACIimage.jpeg} \caption{\label{figalmaC1} Synthesised ALMA (Cycle 3 in compact configuration) moment-0 image of the C I emission at 609 microns for our best-fit model. The beam size is 0.84''$\times$0.83'' and the total flux equals 12 Jy km/s (see the text for details).} \end{figure} In subsection \ref{Hline}, we computed that fitting the O I Herschel line requires an H/C ratio of $\sim$ 3. Thus, we can make predictions for the total amount of hydrogen within the system. Fig.~\ref{figHIline} shows the derived density of hydrogen for our best-fit model, from which we derive a total H I column density along the line of sight of $\sim 3\times 10^{18}$ cm$^{-2}$ (which may be observable as hydrogen absorption lines in the UV). We also derive the total H I mass to be $3.1 \times 10^{-3}$ M$_\oplus$. \begin{figure} \centering \includegraphics[width=7.5cm]{hydrogen.png} \caption{\label{figHIline} H I number density predicted for our best-fit model. The total H I column density along the line of sight is $\sim 3\times 10^{18}$ cm$^{-2}$.} \end{figure} Another model prediction is that there should still be accretion in $\beta$ Pic, at a rate roughly equal to the CO mass input rate, evaluated to be $\sim 1.4\times 10^{18}$ kg/yr (within a factor 2). This value could be lower if $\beta$ Pic b accretes some of the gas before it reaches the star. However, this accretion rate assumes that only CO contributes to the production of C and O, and so it could also be higher if other molecules are produced at the same time as CO is released. Indeed, if H$_2$O is also released, as inferred in subsection \ref{Hline} from the O I line, the accretion rate would increase to $\sim 2.55\times 10^{18}$ kg/yr. 
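The quoted CO rate in kg/yr follows from the carbon input rate $\dot{M}_0=0.1$ M$_\oplus$/Myr scaled by the CO-to-C mass ratio of 28/12; a quick conversion (assuming, as we read it, that $\dot{M}_0$ refers to the carbon mass carried by the photodissociated CO):

```python
M_EARTH_KG = 5.972e24  # Earth mass in kg

def co_rate_kg_per_yr(carbon_rate_mearth_per_myr):
    """CO mass input rate [kg/yr] from the carbon input rate [M_earth/Myr].

    Every 12 amu of carbon is delivered by 28 amu of CO.
    """
    carbon_kg_per_yr = carbon_rate_mearth_per_myr * M_EARTH_KG / 1e6
    return carbon_kg_per_yr * 28.0 / 12.0

print(f"{co_rate_kg_per_yr(0.1):.2e}")  # 1.39e+18, i.e. ~1.4e18 kg/yr as quoted
```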
Usually, mid-A to mid-B type stars are known to be X-ray dark but $\beta$ Pic shows some weak X-ray emission that could be coming from the thermal emission of a cool corona or from some remnant accretion onto the star \citep{2012ApJ...750...78G}. \citet{2005A&A...440..727H} estimates that an accretion rate between $2\times 10^{18}$ kg/yr and $2\times 10^{20}$ kg/yr can explain the X-ray observation, which fits our model predictions. \begin{table} \caption{Total gas mass for different species.} \begin{center} \begin{tabular}{|l|c|c|} \hline \hline Element & Best-fit & Best-fit with extra hydrogen \\ \hline C I & $2 \times 10^{-3}$ M$_\oplus$ & $2 \times 10^{-3}$ M$_\oplus$ \\ C II & $1.3 \times 10^{-2}$ M$_\oplus$ & $1.3 \times 10^{-2}$ M$_\oplus$ \\ O I & $2 \times 10^{-2}$ M$_\oplus$ & $5 \times 10^{-2}$ M$_\oplus$\\ H I & $\sim$ 0 & $3.1 \times 10^{-3}$ M$_\oplus$\\ H$_2$O & $\sim$ 0 & $2 \times 10^{-9}$ M$_\oplus$\\ \hline \label{tab3} \end{tabular} \end{center} \end{table} \begin{table} \caption{Gas number density ratio when assuming extra hydrogen in the disc.} \begin{center} \begin{tabular}{|l|l|c|} \hline \hline & H/C & $\sim$ 3 \\ In the gas disc & O/C & $\sim$ 2.5 \\ & O/H & $\sim$ 1 \\ \hline In planetesimals &H$_2$O/CO & 1.5 \\ \hline \label{tab4} \end{tabular} \end{center} \end{table} \subsection{Dust in $\beta$ Pic}\label{dustbp} We use the dust density profile described in subsection \ref{dustdisc} to compute the effect of photoelectric heating on the temperature within the disc using Cloudy. When dust is added, we see no effects on the density profile nor on the emission line predictions. This can be explained in the following way. 
Using the definitions of $\Gamma_\mathrm{PE}$ and $\Gamma_\mathrm{ionC}$ (photoelectric and photoionisation heating rates) and the fact that the photoelectric charging current per unit area on each dust grain is equal to the thermal electron collection current \citep{2010ApJ...720..923Z}, one finds that to have $\Gamma_\mathrm{PE} > \Gamma_\mathrm{ionC}$ requires that \begin{equation} \label{dustcond} \tau_d > \tau_\mathrm{crit} = \frac{H n_{C_\mathrm{I}} \langle E_\mathrm{ion} \rangle \sqrt{2 \pi k_B T m_e}}{4 t_\mathrm{ionC} \langle E_\mathrm{dust} \rangle s_e n_e e \phi}, \end{equation} \noindent where $n_{C_\mathrm{I}}$ and $n_e$ are the C I and electron densities respectively, $m_e$ is the electron mass, $s_e \sim 0.5$ is the electron sticking coefficient, $\langle E_\mathrm{dust} \rangle$ is the mean energy imparted by the electrons to the gas (about a few eV), $\langle E_\mathrm{ion} \rangle$ is the mean initial energy of the ejected electron after a photoionisation and $t_\mathrm{ionC}=n_{C_\mathrm{I}}/R_\mathrm{ionC}$ is the ionisation timescale. We also assumed that $e\phi \gg k_B T$, $\phi$ being the charging potential of the grain, as $e \phi$ is generally a few $k_B T_\star$. This formula gives a good order of magnitude for the amount of dust required for the photoelectric heating to be more efficient than the photoionisation heating. All the values within Eq.~\ref{dustcond} are given by our model so that for a given $R$, one can work out the effect of photoelectric heating on the disc. At $R \sim 120$ AU, where the dust density is maximum, $H \sim 5$ AU, $n_{C_\mathrm{I}} \sim 10$ cm$^{-3}$, $n_e \sim 80$ cm$^{-3}$, $T \sim 40$ K, $t_\mathrm{ionC} \sim 2$ years and $e \phi \sim 8000 k_B$. 
$\langle E_\mathrm{dust} \rangle$, the mean energy carried away by each electron ejected from a dust grain, is about a factor 2 greater than that of electrons ejected from C I, because the ionisation potential of carbon is higher than the energy needed to escape the grain (i.e.\ $e\phi+W$, where $W$ is the work function of the grains). The result is that $\tau_d > 10^{-2}$ is necessary for the photoelectric effect to dominate over the photoionisation of carbon. This value is never reached in $\beta$ Pic, and in debris discs in general, explaining why the inclusion of the photoelectric effect has no effect on our results. Previous models had concluded that photoelectric heating was dominant in most of the $\beta$ Pic disc \citep{2010ApJ...720..923Z}. However, the density of carbon used in such models was much lower than is now known to be the case. Had we used the same densities as assumed in \citet{2010ApJ...720..923Z}, we would also have concluded that photoelectric heating dominates photoionisation heating (as seen in their Fig.~1). \begin{figure} \centering \includegraphics[width=8.5cm]{stop.jpeg} \caption{\label{figstop} Dimensionless stopping time versus $R$ for 3 different grain sizes: 0.1 (solid), 1 (dotted) and 10 microns (dashed). The thin solid line shows where the stopping time equals 1.} \end{figure} Although there is no thermal effect from the dust, the gas could have a dynamical effect on dust grains in the inner regions, where the gas density is rather high. To quantify this, we worked out the dimensionless stopping time for our best-fit model. Fig.~\ref{figstop} shows the stopping time as a function of $R$ for three different grain sizes: 0.1 (solid), 1 (dotted) and 10 microns (dashed). In $\beta$ Pic, the blow-out size due to radiation pressure is close to 5 microns. For the case with extra oxygen coming from water, the stopping times shown in Fig.~\ref{figstop} should be divided by a factor $\sim$ 3. One can see that the grains need to be very small for the gas to brake them. 
Submicron grains feel a drag from $\sim$ 200 AU inwards. This means that grains produced in the main belt that are expected to be on unbound trajectories (i.e.\ those below the blow-out limit) can instead become coupled to the gas and be affected by gas drag, so that they drift inwards instead of being ejected. However, bound grains just above the blow-out limit should not be significantly affected by gas drag, even in the inner region. The latest observations by GPI \citep{2015ApJ...811...18M} show a dust density profile that falls off as $R^{-0.85}$, with small grains as close in as 23 AU. Since the ALMA observations show that the main belt is beyond $\sim$ 50 AU, these small grains cannot be produced in situ. The high gas density we predict and the resulting drag on sub-micron grains may help to explain this observation. This may also explain mid-IR observations which show material within 20 AU \citep{1997A&A...327.1123P}. \subsection{$\alpha$ model explained with MRI?} In Figure \ref{Knudsenplot} (left), we plot the Knudsen number Kn=$\lambda_{i,j}/H$ for our best-fit disc, where $\lambda_{i,j}$ is the mean free path of element $i$ with $j$. We see that throughout most of the disc (where densities are higher than 10 cm$^{-3}$), taking the mean free path between C$^+$/C$^+$ (solid line), C$^+$/C (dotted) or C/C (dashed), Kn $ < 1$, and thus we conclude that modelling this disc as a fluid should give an adequate description. The value of the viscosity parameter that gives our best fit is $\alpha \approx 1.5$, but if the spatial distribution of C I is affected by interactions with the planet, or if the input rate is higher than the value used in our study (e.g.\ because of uncertainty in the conversion of the CO flux to a CO mass and the possibility that other molecules contribute to the production of C, O or even H), it could be as low as 0.1 (see subsection \ref{firstm}). This value is comparable to those found in highly ionised accretion discs \citep[e.g. 
dwarf novae in outburst;][]{2007MNRAS.376.1740K} in which the magnetorotational instability is thought to be operational. The condition for the ideal MRI to operate is usually taken to be that the magnetic Reynolds number $Re_M$, given by \begin{equation} \label{Rem} Re_M = \frac{c_s H}{\eta}, \end{equation} \noindent where $\eta$ is the magnetic diffusivity, exceeds some critical value $Re_M({\rm crit})$ (\citet{2000ApJ...530..464F} or \citet{2012MNRAS.420.3139M} suggest that $Re_M({\rm crit}) \approx 10^4$). In Figure \ref{Knudsenplot} (right), we plot $Re_M$ against radius for our best-fit accretion disc, which shows that the magnetic Reynolds number exceeds this critical value by a large margin throughout the disc. We conclude that the value of $\alpha$ that we obtain is in line with the idea that the viscosity is provided by MHD turbulence (MRI). \begin{figure*} \centering \includegraphics[width=17.5cm]{knuremlast.jpeg} \caption{\label{Knudsenplot} \textit{Left:} Knudsen number Kn in $\beta$ Pictoris for C$^+$/C$^+$ collisions (solid line), C$^+$/C collisions (dashed), C/C collisions (dotted). \textit{Right:} Magnetic Reynolds number $Re_M$ in $\beta$ Pictoris (see Eq.~\ref{Rem}).} \end{figure*} \section{Discussion} This paper presents a new model for gas in debris discs. Previous models had considered a static gas disc, whereas the model presented here couples the gas's dynamical and thermal evolution, as well as taking into account its optical thickness to ionising radiation. This allows for a better understanding of gas in debris discs. One of the main outcomes is that the various observations of gas around $\beta$ Pic can be explained within the framework of a viscous evolution model. The model assumes that CO is produced from solid bodies and quickly photodissociates into C and O, which then evolve viscously.
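For intuition, Eq.~\ref{Rem} can be evaluated with representative numbers. The sketch below uses the commonly quoted electron--neutral magnetic diffusivity $\eta \approx 234\,\sqrt{T}/x_e$ cm$^2$ s$^{-1}$ (Blaes \& Balbus 1994); all input values are illustrative assumptions rather than our best-fit profile:

```python
import numpy as np

G, kB, mH = 6.674e-11, 1.381e-23, 1.673e-27  # SI constants

# Illustrative (assumed) inputs -- not the best-fit disc profile
M_star = 1.75 * 1.989e30    # stellar mass [kg]
R      = 100 * 1.496e11     # radius, 100 AU [m]
T      = 50.0               # gas temperature [K]
mu     = 14.0               # mean molecular weight
x_e    = 0.5                # electron fraction (carbon largely ionised)

c_s   = np.sqrt(kB * T / (mu * mH))   # isothermal sound speed
Omega = np.sqrt(G * M_star / R**3)    # Keplerian frequency
H     = c_s / Omega                   # gas scale height

# Electron-neutral diffusivity eta ~ 234 sqrt(T)/x_e cm^2/s, converted to SI
eta = 234.0 * np.sqrt(T) / x_e * 1e-4   # [m^2 s^-1]

Re_M = c_s * H / eta
print(f"Re_M ~ {Re_M:.1e}")   # many orders of magnitude above Re_M(crit) ~ 1e4
```

Even with conservative choices for $T$ and $x_e$, $Re_M$ lands far above the $\sim 10^4$ MRI threshold, which is the qualitative point of Figure~\ref{Knudsenplot} (right).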
To explain the $\beta$ Pic carbon and CO observations in a self-consistent way, our gas model requires a CO mass input rate equal to $\dot{M} \sim 1.4 \times 10^{18}$ kg, an $\alpha \sim 1.5$ for the viscous evolution, and an impinging UV flux $F_i/F_0 \sim$ 60 times greater than the interstellar radiation field. The required value of $\dot{M}$ is consistent with the CO input rate derived from observations \citep{2014Sci...343.1490D}. The $\alpha$ value needed to fit the carbon observations is rather high. We suggest that the MRI is at work within this highly ionised medium, which is consistent with the high magnetic Reynolds number found by our model. Hence, this work may provide the first indirect measurement of $\alpha$ in a debris disc. We note that the $\alpha$ value derived here is degenerate with $\dot{M}$ (as seen in subsection \ref{xhi2}), so that if $\dot{M}$ were greater by a factor 3, $\alpha$ would go down to $\sim$ 0.5. This value depends largely on the C I non-detection in our model and can be refined once the C I flux is measured. Given the uncertainties on C I and in our model, for now we place a lower limit on $\alpha$, which must be greater than 0.1. The high $\alpha$ value found in our study is consistent with those inferred for dwarf novae and X-ray binary outbursts, where discs are fully ionised and $\alpha$ values ranging from 0.1 to 0.4 are suggested \citep{2007MNRAS.376.1740K}, and also with the value of $\alpha=1\pm0.2$ measured for the fully ionised disc of a Be star \citep{2012ApJ...744L..15C}. Note that the conditions required for the MRI to be active can involve complicated non-linear (and non-ideal) effects, a discussion of which exceeds the scope of this paper \citep[see][for more details]{kral16c}.
We find that the overall UV flux (star + external radiation) impinging on the gas disc required to explain the observations is of the order of 60 times greater than the UV flux expected from the local interstellar radiation field (IRF). This is surprising, but $\beta$ Pic is still young and seems very active. Mid-A to mid-B type stars are usually known to be X-ray dark, but $\beta$ Pic shows some weak X-ray emission that could come from the thermal emission of a cool corona \citep{2012ApJ...750...78G}. Also, FUSE observations of $\beta$ Pic show the presence of highly ionised elements such as C III (977 \AA) and O VI (1032 \AA), pointing to the possible presence of an extended chromosphere or accretion around $\beta$ Pic \citep{2001ApJ...557L..67D}. We recall that in protoplanetary discs, the main ionisation source is the X-rays produced by T Tauri stars due to coronal activity \citep{2011ApJ...739...50B}. These X-rays are not present around main-sequence stars, but there could be some remnant activity in the youngest systems \citep{2015ApJ...801...31R}. Indeed, the FUV radiation field of a typical classical T Tauri star at 1 AU from the central star is $10^7$ times the interstellar radiation field \citep{2014ApJ...784..127F}. Also, the IRF impinging on the $\beta$ Pic system is not well known. The IRF consists of four components that are described in Table A1 of \citet{1983A&A...128..212M}, based on the well-observed spectrum of the IRF in the solar neighbourhood. The close-by environment of $\beta$ Pic could be modelled to take account of the presence of nearby O and B stars, but this represents an arduous task \citep[e.g.][]{1973ApJ...181..363W} and was not attempted in this study. The IRF can also differ from that assumed in our model, as the extinction curves of grains in the galaxy vary from place to place \citep{1994ASPC...58..319V}. Another uncertainty comes from the stellar spectrum.
Overall, $\beta$ Pic shows a very active environment, which in our model translates into an increased ionising flux from the star or a higher value of $F_i/F_0$. A star more active than typical main-sequence A stars could explain why we need such a high radiation flux to fit the observations. Previous work explaining the C II emission line observed towards $\beta$ Pic discarded the possibility of viscous evolution because the emission line shows a very steep velocity gradient, implying an absence of material in the inner region \citep{2014A&A...563A..66C}. We showed that this is not the case: viscous evolution does not create a continuous C II profile but rather one with a break, resulting in a lower amount of C II in the inner regions. Indeed, the continuum optical thickness to ionising radiation in the inner regions blocks photoionisation, which in turn lowers the ionisation fraction and prevents C II from accumulating too much in the inner regions (its radial profile scales as $R^{-1.15}$). Thus, in our model, viscous evolution naturally creates the steep velocity gradient observed for the C II emission line. \section{Summary-Conclusions} We have developed a gas evolution model that couples the dynamics of gas particles to their thermal state through a viscous evolution that is modelled by an $\alpha$ prescription. Using a self-consistent thermodynamical model, we are able to follow the evolution of the gas from its production site to its steady-state location. This gives interesting insights into $\beta$ Pic and into the magnetorotational instability in general, which can be summed up as follows: \begin{itemize} \item The model developed in this paper indicates that the dynamical evolution of gas in $\beta$ Pic is well represented by an $\alpha$ model in which CO photodissociation produces C and O, which then diffuse viscously.
\item The $\beta$ Pic gas disc is well explained by a viscous evolution with an $\alpha$ value greater than 0.1, a mass input rate of 0.1 M$_\oplus$/Myr (as found by the ALMA CO observation) and a high impinging UV flux (see Table~\ref{tab2}). \item The $\beta$ Pic carbon observations are reproduced by our model assuming that the carbon comes from CO. The model also explains the APEX non-detection of C I presented here and is in agreement with the O I detection by Herschel and the electron density derived from the CO J=2-1 and J=3-2 line ratio. \item We make predictions for the hydrogen content of the $\beta$ Pictoris gas disc. We suggest that the H$_2$O/CO ratio in the colliding planetesimals is $\sim$ 1.5 (giving a predicted total H$_2$O mass present in the system of $\sim$ $2 \times 10^{-9}$ M$_\oplus$) and that the total H I column density along the line-of-sight is $\sim 3\times 10^{18}$ cm$^{-2}$, with a total H I mass of $3.1 \times 10^{-3}$ M$_\oplus$ (see Tables~\ref{tab3} and \ref{tab4}). \item The unexpected X-ray flux observed in $\beta$ Pic may also be explained by our model, which provides the right amount of accretion to account for it. \item The ionisation fraction of carbon is high in $\beta$ Pic ($>0.3$). The ionisation fraction drops closer to the host star as the carbon density increases, which blocks more FUV flux from reaching the midplane. \item Owing to the high ionisation fraction and high magnetic Reynolds number throughout the disc, we suggest that the magnetorotational instability is likely to be the physical mechanism that sets the viscosity in the disc \citep{kral16c}. \item We show that gas drag may be strong enough to affect the small unbound dust grains in $\beta$ Pic. This may help to explain the presence of small grains in the inner regions of $\beta$ Pic observed with GPI, or the mid-IR emission within 20 AU. \item Photoelectric heating never dominates over carbon photoionisation heating anywhere in the gas disc.
The conditions under which dust heating becomes important compared to carbon photoionisation heating can be estimated from Eq.~\ref{dustcond}. \end{itemize} Our model could be applied to other debris discs to make predictions for the C I, C II, O I and H I line fluxes and so assess their observability. \section*{Acknowledgments} We thank the referee for his/her helpful review. QK, MW and LM acknowledge support from the European Union through ERC grant number 279973. A.J. acknowledges the support of the DISCSIM project, grant agreement 341137, funded by the European Research Council under ERC-2013-ADG. QK wishes to thank Gianni Cataldi for providing the C II emission line spectrum of $\beta$ Pictoris. This publication is based on data acquired with the Atacama Pathfinder Experiment (APEX), under program ID 096.F-9328(A). APEX is a collaboration between the Max-Planck-Institut f\"ur Radioastronomie, the European Southern Observatory, and the Onsala Space Observatory. {\it Herschel} is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA.
\section{Introduction} Impressive results have been achieved in 3D pose estimation of objects from images during the last decade. However, current approaches cannot scale to large-scale problems because they rely on one classifier per object, or on multi-class classifiers such as Random Forests, whose complexity grows with the number of objects. So far, the only recognition approaches that have been demonstrated to work on large-scale problems are based on Nearest Neighbor~(NN) classification~\cite{Nister06,Jegou11,Dean13}, because extremely efficient methods for NN search exist with an average complexity of $O(1)$~\cite{Norouzi14,Muja14}. Moreover, NN classification also offers the possibility to trivially add new objects, or remove old ones, which is not directly possible with neural networks, for example. However, to the best of our knowledge, such an approach has not been applied to the 3D pose estimation problem, even though it can potentially scale to many objects seen under large ranges of poses. For example, \cite{Dean13} only focuses on object recognition without considering the 3D pose estimation problem. For NN approaches to perform well, a compact and discriminative description vector is required. Representations that capture the appearance of an object under a certain pose have already been proposed~\cite{Dalal05,Hinterstoisser12b}; however, they were handcrafted. Our approach is motivated by the success of recent work on feature point descriptor learning~\cite{Brown10,Trzcinski13a,Masci14}, which shows that it is possible to learn compact descriptors that significantly outperform handcrafted methods such as SIFT or SURF. \input{fig_one.tex} However, the problem we tackle here is more complex: while feature point descriptors are used only to retrieve the points' identities, we want to find both the object's identity and its pose.
We therefore seek to learn a descriptor with the two following properties: a) The Euclidean distance between descriptors from two different objects should be large; b) The Euclidean distance between descriptors from the same object should be representative of the similarity between their poses. This way, given a new object view, we can recognize the object and get an estimate of its pose by matching its descriptor against a database of registered descriptors. New objects can also be added, and existing ones removed, easily. To the best of our knowledge, our method is the first one that learns to compute descriptors for object views. Our approach is related to manifold learning, but the key advantage of learning a direct mapping to descriptors is that we can use efficient and scalable Nearest Neighbor search methods. This is not possible for previous methods relying on geodesic distances on manifolds. Moreover, while previous approaches already considered properties similar to a) and b), to the best of our knowledge they never considered both simultaneously, even though this is critical for efficiency. Combining these two constraints in a principled way is far from trivial, but we show it can be done by training a Convolutional Neural Network~\cite{LeCun98} with simple constraints to compute the descriptors. As shown in Fig.~\ref{fig:one}, this results in a method that nicely untangles the views of different objects into descriptors that capture the identities and poses of the objects. We evaluate our approach on instance recognition and pose estimation data with accurate ground truth and show significantly improved results over related methods. Additionally, we perform experiments assessing the ability of the method to generalize to unseen objects, with promising results. \section{Related Work} Our work is related to several aspects of Computer Vision, and we focus here on the most relevant and representative work.
Our approach is clearly in the framework of 2D view-specific templates~\cite{Hoiem11}, which is conceptually simple, supported by psychophysical experiments~\cite{Tarr89}, and has been successfully applied to various problems and datasets over the last years~\cite{Nayar96b,Malisiewicz11,Hinterstoisser12,Gu10,Dean13,RiosCabrera14}. However, most of these works rely on handcrafted representations of the templates, for example HOG~\cite{Dalal05} or LineMOD~\cite{Hinterstoisser12b}. In particular, LineMOD was designed explicitly in the context of object detection and pose estimation. However, these handcrafted representations are suboptimal compared to statistically learned features. \cite{Malisiewicz11,Gu10,RiosCabrera14} show how to build discriminative models based on these representations using SVMs or boosting applied to training data. \cite{Malisiewicz11,RiosCabrera14} do not consider the pose estimation problem, while \cite{Gu10} focuses on this problem only, with a discriminatively trained mixture of HOG templates. Exemplars were recently used for 3D object detection and pose estimation in~\cite{Aubry14}, but still rely on a handcrafted representation. As mentioned in the introduction, our work is influenced by work developed for keypoint descriptor learning. Some of these methods are applied to existing descriptors to make them more discriminative, as in~\cite{Gong12,Strecha12}, but others are trained directly on image data. \cite{Brown10} introduces datasets made of ``positive pairs'' of patches corresponding to the same physical points and ``negative pairs'' of patches corresponding to different points. It is used, for example, in~\cite{Trzcinski13a} to learn a binary descriptor with boosting. \cite{Masci14} uses a ``siamese'' architecture~\cite{Chopra05} to train a neural network to compute discriminative descriptors. Our approach is related to this last work, but the notion of pose is absent in their case.
We show how to introduce this notion by using triplets of training examples in addition to pairs only. Instead of relying on rigid templates as we do, many works on category recognition and pose estimation rely on part-based models. \cite{Savarese07} pioneered this approach, and learned canonical parts connected by a graph for object recognition and pose estimation. \cite{Pepik12} extends the Deformable Part Model to 3D object detection and pose estimation. \cite{Payet11} uses contours as parts. One major drawback of such approaches is that their complexity is typically linear in the number of objects. It is also not clear how important the ``deformable'' property really is for recognition, and rigid templates seem to be sufficient~\cite{Divvala12}. Our approach is also related to manifold learning~\cite{Pless09}. For example, \cite{Salakhutdinov07} learns an embedding that separates extremely well the classes from the MNIST dataset of digit images, but the notion of pose is absent. \cite{Hadsell06} learns either for different classes, also on the MNIST dataset, or for varying pose and illumination, but not the two simultaneously. More recently, \cite{Bakry14} proposed a method that separates manifolds from different categories while being able to predict the object poses, and that does not require solving an inference problem, which is important for efficiency. However, it relies on a discretisation of the pose space into a few classes, which limits the achievable accuracy. It also relies on HOG for the image features, whereas we learn the relevant image features. Finally, many works focus, as we do, on instance recognition and pose estimation, as it has important applications in robotics, for example. \cite{Hinterstoisser12b} introduced LineMOD, a fast but handcrafted template representation for dealing with poorly textured objects. The very recent \cite{Brachmann14,Tejani14} do not use templates but rely on the recognition of local patches instead.
However, they were demonstrated on RGB-D images, and local recognition is likely to be much more challenging on poorly textured objects when depth information is not available. \cite{Lai11} also expects RGB-D images, and uses a tree for object recognition, which however still scales linearly with the numbers of objects, categories, and poses. \input{method} \input{experiments} \section{Conclusion} We have shown how to train a CNN to map raw input images from different input modalities to very compact output descriptors using pair-wise and triplet-wise constraints over training data and template views. Our descriptors significantly outperform LineMOD and HOG, which are widely used for object recognition and 3D pose estimation, both in terms of accuracy and descriptor length. Our representation therefore replaces them advantageously. Tests of the capability to generalize to unseen objects have also shown promising results. To facilitate further investigation, we will make our code available upon request. {\small \bibliographystyle{ieee} \section{Evaluation} We compare our approach to LineMOD and HOG on the LineMOD dataset~\cite{Hinterstoisser12}. This dataset contains training and test data for object recognition and pose estimation of 15 objects, with accurate ground truth. It comes with a 3D mesh for each of the objects. Additionally, it provides sequences of RGB images and depth maps recorded with a Kinect{} sensor. \subsection{Dataset Compilation} \label{sec:dataset} We train a CNN using our method on a mixture of synthetic and real-world data. As in~\cite{Hinterstoisser12b}, we create synthetic training data by rendering the mesh available for each of the objects in the dataset from positions on a half-dome over the object, as shown in Fig.~\ref{fig:one} on the left. The viewpoints are defined by starting with a regular icosahedron and recursively subdividing each triangle into 4 sub-triangles. For the template positions, the subdivision is applied twice.
After removing the lower half-sphere, we end up with 301 evenly distributed template positions. Additional training data are created by subdividing one more time, resulting in 1241 positions. From each pose we render the object standing on a plane over an empty background using Blender\footnote{http://www.blender.org}. We parameterize the object pose by the azimuth and elevation of the camera relative to the object. We store the RGB image as well as the depth map. For the real-world data, we split the provided sequences captured with the Kinect{} randomly into a training and a test set. We ensure an even distribution of the samples over the viewing hemisphere by taking two real-world images close to each template, which results roughly in a 50/50 split of the data into training and test. Preliminary experiments showed very little to no variance over the different train/test splits and, thus, all results presented here report runs on one random split, fixed for each experiment. The whole training data set is augmented by making multiple copies with added noise. On both the RGB and depth channels we add a small amount of Gaussian noise. Additionally, for the synthetic images, we add larger fractal noise on the background, to simulate diverse backgrounds. Note that the template views, which are ultimately used in the classification, are purely synthetic, noise-free renderings on clean backgrounds. The algorithm thus has to learn to map the noisy, real-world input data to the same location in descriptor space as the clean templates. As pointed out in~\cite{Hinterstoisser12b}, some of the objects are rotationally invariant to different degrees. Thus, the measure of similarity of poses used for the evaluation and, in our case, to define pairs and triplets, should not consider the azimuth of the viewing angle for those objects. We treat the \emph{bowl} object as fully rotationally invariant.
The \emph{eggbox} and \emph{glue} classes are treated as symmetric, meaning that a rotation by $180^\circ$ around the z-axis shows the same pose again. The \emph{cup} is a special case because it looks the same over a small range of poses, but from a sufficient elevation, such that the handle is visible, the exact pose could be estimated. We also treat it as rotationally invariant, mainly to keep the comparison to LineMOD fair. We extract a patch centered on the object, capturing a fixed-size window in 3D at the distance of the object's center. In order to also address the detection part in a sliding-window manner, it would be necessary to extract and test several scales. However, only a small range of scales needs to be considered, starting with a maximal one, defined by the depth at the center point, and going down until the center of the object is reached. Before applying the CNN, we normalize the input images. RGB images are normalized to the usual zero mean and unit variance. For depth maps, we subtract the depth at the center of the object, scale such that $20$~cm in front of and behind the object's center are mapped to the range $[-1,1]$, and clip everything beyond that range. The test sequences captured with the Kinect{} are very noisy. In particular, there are many regions with undefined depth, introducing very large jumps for which the convolutional filters with ReLU activation functions might produce overly strong outputs. Therefore, we pre-process the test data by iteratively applying a $3\times 3$ median filter that fills each pixel with missing depth from the defined values in its neighborhood, until all gaps are closed. \subsection{Network Optimization} For the optimization we use the following protocol: we initially train the network on the initial dataset for 400 epochs, with an initial learning rate of $0.01$ and a momentum of $0.9$. Every 100 epochs, the learning rate is multiplied by $0.9$.
Then we perform two rounds of bootstrapping of the triplet indices, as explained in Section~\ref{sec:impl_aspects}, and for each round we train the CNN for another 200 epochs on the augmented training set. Finally, we train for another 300 epochs with the learning rate divided by $10$ for a final fine-tuning. The regularization weight $\lambda$ is set to $10^{-6}$ in all our experiments. \subsection{LineMOD and HOG} We compare our learned descriptors to the LineMOD descriptor and to HOG as baselines, as they are widely used representations in related work. For LineMOD, we use the publicly available source code in OpenCV. We run it on the same data as our method, except for the median-filter depth inpainting and normalization: LineMOD handles the missing values internally and performed better without these pre-processing operations. For HOG, we also use the publicly available implementation in OpenCV. We extract the HOG descriptors from the same data we use with our CNN. We use a standard setup of a $64 \times 64$ window size, $8 \times 8$ cells, $2\times 2$ cells per block and a block stride of $8$, giving a 1764-dimensional descriptor per channel ($7 \times 7$ block positions $\times$ 4 cells $\times$ 9 orientation bins). We compute descriptors on each RGB and depth channel individually and stack them. For evaluation, we normalize all descriptors to length $1$ and take the dot product between test and template descriptors as the similarity measure. \input{fig_sim_vs_dst} \input{fig_class_sep_o15} \subsection{Manifolds} \label{sec:manifolds} Figure~\ref{fig:one} plots the views of three objects after being mapped into 3-dimensional descriptors, for visualization purposes. As can be seen, not only are the descriptors from the different objects very well separated, but they also capture the geometry of the corresponding poses. This means that the distances between descriptors are representative of the distances between the corresponding poses, as desired.
For longer descriptors, we show an evaluation of the relation between the distances of descriptors and the similarity between poses in Figure~\ref{fig:sim_vs_dst}. For each object, we computed the distances between every sample in the test set and every template for the same object in the training set, as well as the angles between their poses. We then plot a two-dimensional histogram over these angle/distance pairs. Correlation between small angles and large distances indicates the risk of missed target templates, and correlation between large angles and small distances the risk of incorrect matches. Ideally, the histograms should therefore have large values only on the diagonal. The histograms for the descriptors computed with our method clearly show that the distance between descriptors increases with the angle between the views, as desired, while the histograms for LineMOD and HOG show that these descriptors are much more ambiguous. Additionally, the ability of the descriptors to separate the different classes is evaluated in Figure~\ref{fig:class_sep}. For every test sample descriptor, we compute the distance to the closest template descriptor of the same object and to the closest one from any other object, and plot a histogram over those ratios. Clearly, descriptors obtained with our method exhibit a larger ratio for most samples and thus separate the objects better. \subsection{Retrieval Performance} What we ultimately want from the descriptors is that nearest neighbors are from the same class and have similar poses. In order to evaluate the performance, we thus perform the following comparisons. The scores reported for LineMOD in \cite{Hinterstoisser12b} represent the accuracy of the output of the whole processing pipeline, including the descriptor calculation, the retrieval of similar templates, the pruning of the set with heuristics and the refinement of the pose for a set of candidate matches by aligning a voxel model of the object.
The contribution of this work is to replace the descriptors used for the retrieval of templates with similar poses. Thus, we evaluate and compare this step in separation from the rest of the pipeline. \paragraph*{Evaluation Metric} For each test sample, we consider the $k$-nearest neighbors according to the descriptors and similarity metric of each method: the Euclidean distance in our case, the dot product for HOG, and the matching score of LineMOD. Among those $k$ nearest templates, we search for the one whose pose is closest to the test sample's pose, assuming that this one would perform best in the subsequent refinement process and thus finally be selected. The pose error is measured by the angle between the two positions on the viewing half-sphere. We define the accuracy as the percentage of test images for which the best angle error is below a certain threshold. The minimum angle error for which perfect accuracy can theoretically be reached is $5^\circ$, because that is the maximal distance of a test image to its closest template. \paragraph*{Descriptor Length} In Figure~\ref{fig:descr_len}, we evaluate the influence of the length of the descriptors learned on depth data. As can be seen, the maximal performance is already reached with a 16-dimensional descriptor, while the length of the HOG descriptor is 1764. Thus, we use a 16-dimensional descriptor for all of the following experiments, including for the RGB and RGB-D data. \input{fig_descr_len} \paragraph*{Results} \input{fig_compare_o15} \input{table_linemod_angleerror_o15_dpt_rgb_rgbd} We evaluate all three approaches on depth, RGB, and RGB-D data. Figure~\ref{fig:compare_o15} and Table~\ref{tab:angleerror_o15} summarize the results. For \emph{depth maps}, results are shown in Figure~\ref{fig:compare_o15}~\subref{fig:compare_o15_dpt}.
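The evaluation metric defined above can be sketched as follows; the function names and the unit-viewpoint-vector pose representation are our own simplifications for illustration:

```python
import numpy as np

def angular_error(p, q):
    """Angle in degrees between two unit viewpoint vectors."""
    return np.degrees(np.arccos(np.clip(np.dot(p, q), -1.0, 1.0)))

def knn_pose_accuracy(test_desc, test_pose, tmpl_desc, tmpl_pose,
                      k=1, thresh_deg=20.0):
    """Fraction of test samples whose best pose error among the k nearest
    templates (by Euclidean descriptor distance) is below thresh_deg."""
    hits = 0
    for d, p in zip(test_desc, test_pose):
        dist = np.linalg.norm(tmpl_desc - d, axis=1)  # distance to every template
        nearest = np.argsort(dist)[:k]                # k nearest template indices
        best = min(angular_error(p, tmpl_pose[i]) for i in nearest)
        hits += best < thresh_deg
    return hits / len(test_desc)
```

With $k=1$ this reduces to scoring the single nearest neighbor; increasing $k$ can only improve the reported accuracy, since the best pose among a larger candidate set is taken.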
When considering only the single nearest neighbor, we achieve a recognition rate of $98.1\%$, as opposed to the $69.5\%$ achieved by the LineMOD descriptor, and a pose error of less than $20^\circ$ for $94.7\%$ of the test samples ($59.3\%$ for LineMOD). Figure~\ref{fig:compare_o15}~\subref{fig:compare_o15_rgb} shows results for training and testing on \emph{color images}. While neither LineMOD nor HOG can reach on RGB alone the performance they obtain on the depth data, our descriptor performs almost identically in this setup. Finally, Figure~\ref{fig:compare_o15}~\subref{fig:compare_o15_rgbd} shows results for training and testing on the combination of color images and depth maps. While LineMOD takes advantage of the combination of the two modalities, it is clearly outperformed by our descriptor, which for the single nearest neighbor exhibits a pose error below $20^\circ$ for $96.2\%$ of the test samples and an overall recognition rate of $99.8\%$, an almost perfect score. \subsection{Generalization} As a last experiment, we show that our descriptor can generalize to unseen objects. This evaluation was performed using depth only. To do so, we train the CNN on 14 of the 15 objects. We then perform the evaluation just as above, computing descriptors for the new object. As can be seen from the histogram of Fig.~\ref{fig:o14_plusduck}-left, our method generalizes well to this unseen object. The overall performance is slightly reduced, since the network could not learn the subtle differences between the unseen object and the others. Most of the misclassifications are with the ape, whose shape looks similar to the duck's under some viewpoints, as shown in Fig.~\ref{fig:o14_plusduck}-right. \input{fig_o14_plusduck} \section{Method} Given a new input image $x$ of an object, we want to correctly predict the object's class and 3D pose.
Because of the benefits discussed above, such as scalability and ability to easily add and remove objects, we formulate the problem as a k-nearest neighbor search in a descriptor space: For each object in the database, descriptors are calculated for a set of template views and stored along with the object's identity and 3D pose of the view. In order to get an estimate for the class and pose of the object depicted in the new input image, we can compute a descriptor for $x$ and search for the most similar descriptors in the database. The output is then the object and pose associated with them. We therefore introduce a method to efficiently map an input image to a compact and discriminative descriptor that can be used in the nearest neighbor search according to the Euclidean distance. For the mapping, we use a Convolutional Neural Network (CNN) that is applied to the raw image patch as input and delivers the descriptor as activations of the last layer in one forward pass. We show below how to train such a CNN to enforce the two important properties already discussed in the introduction: a) The Euclidean distance between descriptors from two different objects should be large; b) The Euclidean distance between descriptors from the same object should be representative of the similarity between their poses. \newcommand{\mathcal{S}_\text{train}}{\mathcal{S}_\text{train}} \newcommand{\mathcal{S}_\text{db}}{\mathcal{S}_\text{db}} \newcommand{\mathcal{P}}{\mathcal{P}} \newcommand{\mathcal{T}}{\mathcal{T}} \subsection{Training the CNN} In order to train the network we need a set $\mathcal{S}_\text{train}$ of training samples, where each sample $s = (x, c, p)$ is made of an image $x$ of an object, which can be a color or grayscale image or a depth map, or a combination of the two; the identity $c$ of the object; and the 3D pose $p$ of the object relative to the camera. 
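A minimal sketch of this nearest-neighbor formulation, with a brute-force linear scan standing in for the scalable approximate search mentioned in the introduction (all names and toy values below are ours):

```python
import numpy as np

def nn_classify(query_desc, db_desc, db_class, db_pose):
    """Return (class, pose) of the template whose descriptor is closest
    to query_desc in Euclidean distance (brute-force scan)."""
    i = np.argmin(np.linalg.norm(db_desc - query_desc, axis=1))
    return db_class[i], db_pose[i]

# Toy database: two objects, two template poses each (illustrative values)
db_desc  = np.array([[0., 0.], [0., 1.], [5., 0.], [5., 1.]])
db_class = ["ape", "ape", "duck", "duck"]
db_pose  = [(0, 45), (90, 45), (0, 45), (90, 45)]   # (azimuth, elevation)

c, p = nn_classify(np.array([0.2, 0.9]), db_desc, db_class, db_pose)
print(c, p)   # closest template: "ape" at (90, 45)
```

In a real system, the linear scan would be replaced by an efficient (approximate) nearest-neighbor index, which is precisely what makes this formulation scale to many objects.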
Additionally, we define a set $\mathcal{S}_\text{db}$ of templates, where each element is defined in the same way as a training sample. Descriptors for these templates are calculated and stored with the classifier for k-nearest neighbor search. The template set can be a subset of the training set, the whole training set, or a separate set. Details for the creation of the training and template data are given in the implementation section. \subsection{Defining the Cost Function} We argue that a good mapping from images to descriptors should be such that the Euclidean distance between two descriptors of the same object under similar poses is small, while in every other case (either different objects or different poses) the distance is large. In particular, each descriptor of a training sample should have a small distance to the one template descriptor from the same class with the most similar pose, and a larger distance to all descriptors of templates from other classes or of the same class but with a less similar pose. \newcommand{f_{\!_w\!}}{f_{\!_w\!}} \newcommand{\mathcal{L}_\text{pairs}}{\mathcal{L}_\text{pairs}} \newcommand{\mathcal{L}_\text{triplets}}{\mathcal{L}_\text{triplets}} We enforce these requirements by minimizing the following objective function over the parameters $w$ of the CNN: \begin{equation} \mathcal{L} = \mathcal{L}_\text{triplets} + \mathcal{L}_\text{pairs} + \lambda {||w'||}_2^2 \;\; . \end{equation} The last term is a regularization term over the parameters of the network: $w'$ denotes the vector made of all the weights of the convolutional filters and of the fully connected layers, except the bias terms. We describe the first two terms $\mathcal{L}_\text{triplets}$ and $\mathcal{L}_\text{pairs}$ below. \subsubsection{Triplet-wise terms} \label{sec:triplet_cost} We first define a set $\mathcal{T}$ of triplets $(s_i, s_j, s_k)$ of training samples.
Each triplet in $\mathcal{T}$ is selected such that one of the two following conditions is fulfilled: \begin{itemize} \item either $s_i$ and $s_j$ are from the same object and $s_k$ from another object, or \item the three samples $s_i$, $s_j$, and $s_k$ are from the same object, but the poses $p_i$ and $p_j$ are more similar than the poses $p_i$ and $p_k$. \end{itemize} These triplets can therefore be seen as made of a pair of similar samples ($s_i$ and $s_j$) and a pair of dissimilar ones ($s_i$ and $s_k$). We introduce a cost function for such a triplet: \begin{align} c(s_i, s_j, s_k) = \max \left( 0, 1 - \frac{{||f_{\!_w\!}(x_i) - f_{\!_w\!}(x_k)||}_2}{{||f_{\!_w\!}(x_i) - f_{\!_w\!}(x_j)||}_2 + m} \right) \;\; , \label{eq:triplet_cost} \end{align} where $f_{\!_w\!}(x)$ is the output of the CNN for an input image $x$ and thus our descriptor for $x$, and $m$ is a margin. We can now define the term $\mathcal{L}_\text{triplets}$ as the sum of this cost function over all the triplets in $\mathcal{T}$: \begin{equation} \mathcal{L}_\text{triplets} = \sum_{(s_i, s_j, s_k) \in \mathcal{T}} c(s_i, s_j, s_k) \;\; . \end{equation} It is easy to check that minimizing $\mathcal{L}_\text{triplets}$ enforces our two desired properties in one common framework. The margin $m$ serves two purposes. First, it introduces a margin for the classification. Second, it defines a minimum ratio between the Euclidean distance of the dissimilar pair of samples and that of the similar one. This counterbalances the weight regularization term, which naturally contracts the output of the network and thus the descriptor space. We set $m$ to $0.01$ in all our experiments. The concept of forming triplets from similar and dissimilar pairs is adopted from the field of metric learning, in particular the method of~\cite{Weinberger2008}, where it is used to learn a Mahalanobis distance metric.
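As a minimal numeric sketch of Eq.~(\ref{eq:triplet_cost}) (assuming the descriptors $f_{\!_w\!}(x)$ have already been evaluated, so the inputs are plain vectors):

```python
import numpy as np

def triplet_cost(d_i, d_j, d_k, m=0.01):
    # d_i, d_j: descriptors of the similar pair (same object, similar pose);
    # d_i, d_k: descriptors of the dissimilar pair.
    dist_similar = np.linalg.norm(d_i - d_j)
    dist_dissimilar = np.linalg.norm(d_i - d_k)
    return max(0.0, 1.0 - dist_dissimilar / (dist_similar + m))

d_i, d_j = np.array([0.0, 0.0]), np.array([0.1, 0.0])
print(triplet_cost(d_i, d_j, np.array([1.0, 0.0])))   # well-separated triplet: cost 0
print(triplet_cost(d_i, d_j, np.array([0.05, 0.0])))  # dissimilar pair too close: positive cost
```

Because the cost depends only on the ratio of the two distances (up to the margin), rescaling all descriptors by a common factor leaves it essentially unchanged.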
Note also that our definition of the cost is slightly different from the one in~\cite{Wang14}, which uses $c(s_i, s_j, s_k) = \max \left( 0, m + {||f_{\!_w\!}(x_i) - f_{\!_w\!}(x_j)||}_2^2 - {||f_{\!_w\!}(x_i) - f_{\!_w\!}(x_k)||}_2^2 \right)$, where $m$ is set to 1. Our formulation does not suffer from a vanishing gradient when the distance of the dissimilar pair is very small (see suppl.\ material). Also, the increase of the cost with the distance of the similar pair is bounded, thus putting more focus on local interactions. In practice, however, with proper initialization and selection of $m$, both formulations deliver similar results. \subsubsection{Pair-wise terms} In addition to the triplet-wise terms, we also use pair-wise terms. These terms make the descriptor robust to noise and other distracting artifacts such as changing illumination. We consider the set $\mathcal{P}$ of pairs $(s_i, s_j)$ of samples from the same object under very similar poses, ideally the same, and we define the $\mathcal{L}_\text{pairs}$ term as the sum of the squared Euclidean distances between the descriptors of these samples: \begin{equation} \mathcal{L}_\text{pairs} = \sum_{(s_i, s_j) \in \mathcal{P}} {||f_{\!_w\!}(x_i) - f_{\!_w\!}(x_j)||}_2^2 \;\; . \end{equation} This term therefore enforces the fact that for two images of the same object under the same pose, we want to obtain two descriptors that are as close as possible to each other, even if the images were acquired under different imaging conditions: Ideally, we want the same descriptors even if the two images have different backgrounds or different illuminations, for example. As will be discussed in more detail in Section~\ref{sec:dataset}, this also allows us to use a mixture of real and synthetic images for training. Note that, unlike work on learning keypoint descriptors, for example, we do not consider dissimilar pairs.
With dissimilar pairs, the problem arises of how strongly to penalize a given distance between the two samples, given their individual labels. Using triplets instead makes it possible to consider only relative dissimilarity. \subsection{Implementation Aspects} \label{sec:impl_aspects} The exact structure of the network we train to compute the descriptors is shown in Figure~\ref{fig:network_structure}. It consists of two layers that perform convolution of the input with a set of filters, max-pooling and sub-sampling over a $2\times 2$ area, and a rectified linear (ReLU) activation function, followed by two fully connected layers. The first fully connected layer also employs a ReLU activation; the last layer has linear output and delivers the final descriptor. \input{fig_network_structure} We optimize the parameters $w$ of the CNN by Stochastic Gradient Descent on mini-batches with Nesterov momentum~\cite{Sutskever2013ICML}. Our implementation is based on Theano~\cite{bergstra2010scipy}. The implementation of the optimization needs some special care: Since we are working with mini-batches, the data corresponding to each pair or triplet has to be organized so that it resides within one mini-batch. The most straightforward implementation would be to place the data for each pair and triplet after each other, calculate the resulting gradients with respect to the network's parameters individually, and sum them up over the mini-batch. However, this would be inefficient since descriptors for templates would be calculated multiple times if they appear in more than one pair or triplet. To assemble a mini-batch, we start by randomly taking one training sample from each object. Additionally, for each of them we add its template with the most similar pose, unless it was already included in this mini-batch. This is iterated until the mini-batch is full.
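The basic assembly loop just described can be sketched as follows (a simplified illustration: samples and templates are hypothetical string identifiers, and `closest_template` stands in for the precomputed most-similar-pose lookup):

```python
import random

def assemble_minibatch(samples_by_object, closest_template, batch_size):
    # samples_by_object: {object id: [training sample ids]}
    # closest_template:  {training sample id: template id with the most similar pose}
    batch = []
    while len(batch) < batch_size:
        for obj, samples in samples_by_object.items():
            if len(batch) >= batch_size:
                break
            batch.append(random.choice(samples))  # one training sample per object
            t = closest_template[batch[-1]]       # its most similar template
            if t not in batch and len(batch) < batch_size:
                batch.append(t)
    return batch

by_obj = {"ape": ["ape_s0", "ape_s1"], "duck": ["duck_s0"]}
closest = {"ape_s0": "ape_t0", "ape_s1": "ape_t0", "duck_s0": "duck_t0"}
batch = assemble_minibatch(by_obj, closest, batch_size=6)
print(batch)
```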
However, this procedure can lead to very unequal numbers of templates per object if, for instance, all of the selected training samples have the same most similar template. We make sure that for each object at least two templates are included by adding a random one if necessary. Pairs are then formed by associating each training sample with its closest template. Additionally, for each training sample in the mini-batch we initially create three triplets. In each of them, the similar template is set to be the one with the closest pose and the dissimilar sample is either another, less similar template from the same object or any template of a different object. During the optimization, after the first set of epochs, we perform bootstrapping of triplets within each mini-batch to focus on the difficult samples: For each training sample we add two additional triplets. The similar template is again the closest one. The dissimilar ones are those templates that currently have the closest descriptors, one from the same object but with a different pose and one from all the other objects. Another aspect to take care of is that the objective function must be differentiable with respect to the parameters of the CNN, while the derivative of the square root---used in the triplet-wise cost---is not defined for a distance of 0. Our solution is to add a small constant $\epsilon$ before taking the square root. Another possible approach~\cite{Wang14} is to take the square of the norm. However, this induces the problem (mentioned in Section~\ref{sec:triplet_cost}) that for very small distances of the dissimilar pair, the gradient becomes very small and vanishes for zero distance. \section{Additional Samples} Figures~\ref{fig_samples_dpt}, \ref{fig_samples_rgb} and \ref{fig_samples_rgbd} show additional examples of templates retrieved for random sets of test samples. The first column shows the test sample.
To the right, each row shows the first 10 templates, sorted by descriptor distance. Note how most of the closest templates show very similar views of the correct object that all give a good estimate of the object's pose. \begin{figure}[h] \center \includegraphics[width=0.7\textwidth]{figures/o15_dpt/2014-11-04_19-25-05-pid9289_test_samples} \caption{Templates with closest descriptors for samples. Network trained on depth data.} \label{fig_samples_dpt} \end{figure} \begin{figure}[h] \center \includegraphics[width=0.7\textwidth]{figures/o15_rgb/2014-11-08_15-07-25-pid31488_test_samples} \caption{Templates with closest descriptors for samples. Network trained on RGB color data.} \label{fig_samples_rgb} \end{figure} \begin{figure}[h] \center \includegraphics[width=0.9\textwidth]{figures/o15_rgbd/2014-11-10_10-19-20-pid1481_test_samples} \caption{Templates with closest descriptors for samples. Network trained on RGBD data.} \label{fig_samples_rgbd} \end{figure} \clearpage \section{Triplet Cost} In Sections 3.2.1 and 3.3 we discuss our definition and implementation of the cost of a triplet in contrast to the definition in related work. Figure~\ref{fig_triplet_cost} shows the value of the cost of one triplet given the distances between similar and dissimilar samples on the x- and y-axis, respectively. On top is our definition, on the lower left our definition but with the distances squared, and on the lower right the definition of Wang \etal\ in CVPR'14. As can be seen, in our definition the value of the cost does not depend on the total scale of the triplet. This allows us to define triplets over arbitrary ranges. A triplet reaching across the whole template dome does not dominate small local triplets and does not contract the similar pair more than it pushes apart the dissimilar one.
Additionally, like our definition, the other two versions correctly assign a high cost to triplets that have a very low distance between the descriptors of the dissimilar samples. However, since the square of the distances is taken, when the distance of the dissimilar pair approaches zero, the derivative w.r.t.\ the distance of the dissimilar pair goes to zero as well, thus not pushing apart dissimilar pairs when they are violating the constraints the most. \begin{figure}[h] \center \subfloat[our definition]{ \includegraphics[width=0.51\textwidth]{figures/triplet_cost/ours_3d.pdf} } \\ \subfloat[ours with squared distances]{ \includegraphics[width=0.49\textwidth]{figures/triplet_cost/ours_sq_3d.pdf} } \subfloat[Wang et al. CVPR'14]{ \includegraphics[width=0.49\textwidth]{figures/triplet_cost/wang14_3d.pdf} } \caption{Cost of a triplet for different definitions.} \label{fig_triplet_cost} \end{figure} \end{document}
\section{INTRODUCTION} \label{sec:intro} The upcoming Hobby-Eberly Telescope Dark Energy eXperiment (HETDEX; Ref. \citenum{Hill08a}) will amass a sample of $\sim0.8$ million Ly$\alpha$\ emitting galaxies (LAEs) that will be used as tracers of large-scale structure for constraining dark energy and measuring its evolution from $1.9 < z < 3.5$. To carry out the 120 night blind spectroscopic survey covering a 420 square degree field (9 Gpc$^{3}$), a revolutionary new multiplexed instrument called VIRUS (the Visible Integral field Replicable Unit Spectrograph; Ref. \citenum{Hill14a}) is being constructed\cite{Tuttle14} for the upgraded 9.2 m Hobby-Eberly Telescope (HET\footnote{The Hobby-Eberly Telescope is operated by McDonald Observatory on behalf of the University of Texas at Austin, the Pennsylvania State University, Ludwig-Maximilians-Universit\"{a}t M\"{u}nchen, and Georg-August-Universit\"{a}t G\"{o}ttingen.}; Ref. \citenum{Hill14b}). The VIRUS array consists of at least 150 copies of a simple fiber-fed integral field spectrograph and is the first optical astronomical instrument to leverage the economies of scale associated with large-scale replication to significantly reduce overall costs. The spectrographs are mechanically built into unit pairs and are fed by dense-pack fiber bundle integral field units (IFUs) with 1/3 fill factor, each consisting of 448 fiber optic elements with a core diameter of 266 $\mu$m (1.5$^{\prime\prime}$\ on the sky). Thus, each individual spectrograph images 224 fibers. At least 75 IFUs will be arrayed on the 22$^{\prime}$\ diameter focal plane of the upgraded HET, yielding $\sim33,000$ individual spectra per exposure. Each spectrograph consists of a double-Schmidt optical design with a volume phase holographic (VPH) diffraction grating at the pupil between an $f$/3.33 folded collimator and an $f$/1.25 cryogenic camera.
The spectral coverage of VIRUS is $350 < \lambda \mathrm{(nm)} < 550$ at $R = \lambda / \Delta\lambda \approx 700$, which is optimal for measuring the baryonic acoustic oscillation via the Ly$\alpha$\ emission of star-forming galaxies from $1.9 < z < 3.5$. Fig. \ref{fig:VIRUS} shows a rendering of VIRUS and its optical design. \begin{figure}[t] \begin{center} \begin{tabular}{c} \includegraphics[width=0.95\textwidth]{f1.png} \end{tabular} \end{center} \caption[example] { \label{fig:VIRUS} \textit{a}) A rendering of the upgraded HET showing the large enclosures mounted on either side of the telescope structure that contain VIRUS spectrographs. The green cables extending from the prime-focus instrument package at the top of the telescope structure to the enclosures are large bundles of fiber optics. \textit{b}) Close view of two enclosures, each containing an 8$\times$3 array of VIRUS units (48 spectrographs). \textit{c}) Section view of a single VIRUS pair. \textit{d}) Ray trace for a single VIRUS spectrograph.} \end{figure} The VIRUS concept was proven by the Mitchell Spectrograph (formerly known as VIRUS-P; Ref. \citenum{Hill08b}), a single prototype VIRUS spectrograph that has been in use at the McDonald Observatory 2.7 m Harlan J. Smith telescope since 2007. The instrument has excellent throughput in the blue down to 350 nm ($\sim30$\%, excluding the telescope and atmosphere). For VIRUS, better throughput is required to maximize the number of LAE detections in order to achieve the goals of HETDEX. This is especially true at the lowest redshifts of the survey (i.e., at wavelengths in the near-ultraviolet) because the surveyed volume is smaller, the number density of bright LAEs is diminished (e.g., Ref. \citenum{Ciardullo12}), and the atmospheric transmission decreases rapidly. One optical component whose efficiency can be improved at these wavelengths is the VPH diffraction grating used as the instrument's dispersing element.
VPH gratings have become the standard in astronomical spectroscopy as they provide higher diffraction efficiency and greater versatility than classic surface relief gratings\cite{Barden98}. For an overview of the physics of VPH gratings, we refer the reader to Refs. \citenum{Arns99}, \citenum{Barden00}, and \citenum{Baldry04}. Ref. \citenum{Adams08} discusses the performance of VPH gratings developed for the Mitchell Spectrograph, which at that time pushed the technology to the highest diffraction efficiency achieved at 350 nm ($\sim60$\%). More recently, as shown in Ref. \citenum{Chonis12}, we have developed prototype VPH gratings for VIRUS that have achieved diffraction efficiencies of $\gtrsim70$\% at 350 nm. For VIRUS, a key technological challenge will be to achieve consistency in this high performance standard over a large production suite of 170 gratings. In this paper, we present the mass production of VPH gratings for VIRUS with a focus on the acceptance testing methodologies and the as-built performance of the suite of 170 gratings. We begin in $\S$\ref{sec:gratingspec} by describing the production design of the gratings. In $\S$\ref{sec:evaluation}, we discuss the performance metrics by which the gratings are judged, and present the design of a custom apparatus that ensures the standardization of our acceptance tests. In $\S$\ref{sec:production}, the grating mass production at Syzygy Optics, LLC is discussed, and the resulting performance of the grating suite is presented in $\S$\ref{sec:performance}. Our final thoughts and a summary are provided in $\S$\ref{sec:conclusions}. \section{VIRUS VPH GRATING PRODUCTION DESIGN} \label{sec:gratingspec} \begin{figure}[t!] \begin{center} \begin{tabular}{c} \includegraphics[width=0.92\textwidth]{f2.png} \end{tabular} \end{center} \caption[example] { \label{fig:GratingSchem} \textit{a}) A schematic drawing of the VIRUS VPH diffraction grating, showing a face-on view of the grating and an edge-on view.
The substrate diameter is 148 mm and the total thickness of the grating assembly is 16 mm. Note that the grating and epoxy layer thicknesses are exaggerated and are not to scale. The incident angle $\alpha$ and angle of diffraction $\beta$ are shown in addition to the direction of the fringe tilt, indicated in red by $\phi$. \textit{b}) A photograph of a production VIRUS VPH grating.\vspace{5mm} } \end{figure} \input{t1.tex} A schematic drawing and photograph of a VIRUS production grating can be seen in Fig. \ref{fig:GratingSchem}. The grating assembly has physical dimensions of 148 mm (diameter) $\times$ 16 mm (total thickness). The VPH layer has a 138 mm diameter clear aperture (CA) and is sandwiched between two 8 mm thick, anti-reflection (AR) coated fused silica substrates using an optical grade adhesive. The grating has a fringe density of $930\pm2$ lines mm$^{-1}$, which provides a level of dispersion that is sufficient to cover $350<\lambda\;\mathrm{(nm)}<550$ at spectral order $m=1$. The grating will operate in transmission for unpolarized light. The key properties of the gratings are high diffraction efficiency (especially for the bluer wavelengths) and repeatability of the grating properties from unit to unit. Due to the large number of units required for VIRUS (170 science-grade gratings, plus four witness samples of lesser quality to monitor environmental degradation over the lifetime of the gratings), the gratings were delivered in batch sizes of up to 50 units (but no smaller than 10) over a 12 month time period. To accommodate the large number of units and the expected variation of performance from unit to unit, the required external diffraction efficiency for unpolarized light was defined as a mean over a given delivery batch. Uniformity from unit to unit is promoted by establishing a minimum allowable efficiency for any grating. The batch mean and minimum external diffraction efficiencies are summarized in Table \ref{tab:Efficiency}. 
For HETDEX, the ideal peak diffraction efficiency is between $350 < \lambda\; \mathrm{(nm)} < 400$, and we have focused on maximizing the diffraction efficiency over this wavelength range while maintaining a sufficiently wide bandwidth to retain acceptable efficiency towards 550 nm. Given the parameters listed in this section, the Bragg condition (e.g., see Ref. \citenum{Baldry04}) dictates that the grating angle of incidence $\alpha$ is $\sim10$$^{\circ}$\ to maximize the diffraction efficiency over the desired wavelength range. However, Ref. \citenum{Burgh07} has shown that the location in detector space where the wavelength satisfying the Bragg condition is imaged is also the location of the ``Littrow recombination ghost''. This optical ghost can have a wavelength-integrated strength that dominates the signal in a given resolution element of the direct spectrum and can masquerade as a solitary emission line source\cite{Adams08}. For HETDEX, this could contribute significantly to sample contamination since normal LAE detections in our redshift range do not include any bright emission lines other than Ly$\alpha$\ itself. To mitigate this issue, the fringes are tilted by $\phi = -1$$^{\circ}$\ to decouple the Bragg condition from the Littrow configuration. Note that we have adopted the sign convention of Ref. \citenum{Burgh07} for $\phi$, where negative tilts move the plane of the fringes away from the incident beam, as depicted in Fig. \ref{fig:GratingSchem}$a$. By including $\phi$, we reduce the angle of incidence on the grating substrate to 9$^{\circ}$. This retains a diffraction efficiency curve similar to that of an $\alpha=10$$^{\circ}$\ grating with unslanted fringes while pushing the Littrow ghost off the CCD detector as a result of the change in the physical grating angle.
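These choices can be checked numerically with the standard first-order transmission grating equation, $m\lambda\nu = \sin\alpha + \sin\beta$ (e.g., Ref. \citenum{Baldry04}). The sketch below is a back-of-the-envelope check that neglects the small fringe tilt and refraction at the substrates:

```python
import math

NU = 930.0e-6  # fringe density in lines per nm (930 lines per mm)
M = 1          # spectral order

def bragg_wavelength_nm(alpha_deg):
    # Littrow/Bragg condition for unslanted fringes (alpha = beta):
    # m * lambda * nu = 2 * sin(alpha)
    return 2.0 * math.sin(math.radians(alpha_deg)) / (M * NU)

def beta_deg(wavelength_nm, alpha_deg):
    # General grating equation: m * lambda * nu = sin(alpha) + sin(beta)
    return math.degrees(math.asin(M * wavelength_nm * NU - math.sin(math.radians(alpha_deg))))

print(round(bragg_wavelength_nm(10.0)))  # ≈ 373 nm, inside the targeted 350-400 nm window
print(round(beta_deg(350.0, 9.0), 1))    # ≈ 9.7 degrees
print(round(beta_deg(550.0, 9.0), 1))    # ≈ 20.8 degrees
```

At $\alpha = 9$$^{\circ}$, the same relation gives a diffraction angle near 15.3$^{\circ}$\ at a wavelength of 452.3 nm, consistent with the VGT geometry described in $\S$\ref{subsec:testerdesign}.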
\section{EVALUATING THE VPH GRATING SUITE} \label{sec:evaluation} \subsection{Evaluation of Prototype Gratings}\label{subsec:prototypegratings} Prior to engaging in the full-scale production of the VIRUS VPH gratings as described in $\S$\ref{sec:production}, we carried out multiple design studies with several vendors to tune the VPH grating prescription sufficiently to meet the requirements of the HETDEX survey. These efforts have been documented extensively in Ref. \citenum{Adams08} for the Mitchell Spectrograph and Ref. \citenum{Chonis12} for VIRUS. Both of these publications include tests carried out with a custom automated test facility that allowed the full characterization of a grating over a range of $\alpha$, angle of diffraction $\beta$, and $m$. This test-bench was essential in the determination of the final specifications to which the VIRUS production gratings were built. \subsection{Acceptance Test Requirements} \label{subsec:testrequirements} Efficiently and consistently testing 170 gratings to validate acceptance metrics requires a different approach than the detailed characterization efforts described in Refs. \citenum{Adams08} and \citenum{Chonis12}. The following is a brief list of requirements for the acceptance tests we have performed on the mass-produced gratings: \begin{itemize} \itemsep1pt \parskip0pt \parsep0pt \item The testing method must provide a standardized reference for comparison to specifications. \item Characterization and testing must take no longer than 10 minutes per grating. \item Diffraction efficiency measurements at $\alpha=9$$^{\circ}$\ and $m=1$ must be made for $\geq3$ wavelengths within $350 < \lambda \mathrm{(nm)} < 550$. \item Tests must provide an estimation of the spatial uniformity of the diffraction efficiency across the CA. \item Tests must provide an estimation of the scattered light in the VPH layer for at least one wavelength in the near-ultraviolet (UV). 
\item The test apparatus must be transportable to the vendor's facility and be operable in an office environment. \item The test apparatus' calibration and data reduction must be transparent to the operator. \end{itemize} The test-bench discussed in the previous subsection is too large to be easily transported, and a test of a grating for multiple subapertures to provide a measure of the spatial uniformity is time-consuming. Additionally, the ability to test a grating for a range of $\alpha$, $\beta$, and $m$ is not necessary for the acceptance tests. Indeed, this flexibility is undesirable as it may increase the probability of user error over a large number of grating tests. As we describe in the following subsection, we have designed a new apparatus to meet these requirements and efficiently check out a grating directly on the production line. \subsection{Design and Operation of the VIRUS Grating Tester}\label{subsec:testerdesign} The design, operation, and validation of the prototype of the new grating test apparatus were described in Ref. \citenum{Chonis12}. We constructed a second, more refined copy of the apparatus and sent it to Syzygy Optics, LLC to be integrated into the production line they set up for the VIRUS VPH grating contract. Hereafter, we refer to this device as the VIRUS Grating Tester (VGT). The original prototype apparatus has remained at the University of Texas at Austin and has been used to check the results from the VGT and to characterize witness samples. \begin{figure}[t] \begin{center} \begin{tabular}{c} \includegraphics[width=0.8\textwidth]{f3.png} \end{tabular} \end{center} \caption[example] { \label{fig:design} The optomechanical design of the VGT. $a$) Ray trace of the VGT optics with the major components labeled. $b$) A cross-section of the VGT mechanical model shown at the same scale and orientation as the ray trace in panel $a$ with rough dimensions indicated for scale. } \end{figure} Fig.
\ref{fig:design} shows the optomechanical design of the VGT. For direct comparison to the external diffraction efficiency specification in Table \ref{tab:Efficiency}, the VGT performs measurements only at 350, 450, and 550 nm. The light from the LED sources first passes through an engineered diffuser, followed by a 300 $\mu$m diameter pinhole that is placed at the focus of a 25 mm diameter $f$/3 singlet. The collimated beam is then stopped down to 12.5 mm in diameter. The 12.5 mm beam size was chosen to avoid the differential vignetting of the dispersed collimated beam after transmission through the grating since the collimator and camera lenses have the same physical diameter (see below). With an emitted FWHM of approximately 25, 25, and 60 nm for the 350, 450, and 550 nm LEDs, respectively, the light sources are far from monochromatic. To narrow the spectral emission, we filter each LED with a 10 nm FWHM narrow-band filter in the collimated beam. The final, effective measurement wavelengths after filtering the LEDs' output are 353.9, 452.3, and 549.5 nm. After the filter, the collimated beam is split into two paths. The first path is refocused with a second $f$/3 singlet onto a silicon photodiode, which is used to monitor and correct for instabilities in the LED output. The active area of the photodiode is 1.6 mm in diameter, which is large enough to accommodate the chromatic aberration attributed to the singlet lenses. The second path of the collimated beam is incident on the diffraction grating at $\alpha = 9$$^{\circ}$. The diffracted light ($\beta = 15.3$$^{\circ}$\ at 450 nm) is then focused onto a 2/3''-format, 5 megapixel CCD (3.45 $\mu$m square pixels) by a stock 25 mm diameter $f$/1 aspheric lens. The use of a CCD (rather than multiple single pixel detectors for each wavelength, such as photodiodes) is highly beneficial for the VGT.
First, the detector alignment and the alignment of the grating in the system (see below) are simplified since the active sensing area is larger than the individual pinhole images and the angular range of the dispersion. Additionally, the fine sampling of the chosen CCD enables accurate centroid determination for verifying the dispersion of each grating, and allows for the use of custom photometric apertures. Due to imperfections in the grating fabrication process, the diffraction efficiency of a grating may be spatially variable across the CA\cite{Chonis12}. Ideally, one would measure the diffraction efficiency with a collimated beam matched to the CA of the grating. However, such an apparatus for our 138 mm diameter gratings would be too large to be portable and would significantly increase costs. To maximize the tested area and provide an empirical estimate of the spatial variability of the diffraction efficiency with our small optical system, we have designed the VGT to easily measure up to 9 subapertures of a grating. Fig. \ref{fig:gratingcell} details the design of a special mounting cell that makes these multiple measurements possible. A marking is placed on the edge of each grating by the vendor to indicate the fringe tilt and fringe direction within $\pm1$$^{\circ}$\ so that the grating can easily be placed in the VGT mounting cell in the correct orientation. A corresponding mark on the edge of the mounting cell is used for visual alignment of the grating and ensures that the placement of the focused spot is on the CCD chip. The cell contains two press-fit locating pins that mate to a series of holes and slots on the custom tester base for constraining the rotational alignment of the grating as it is moved between the series of 9 test positions.
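As a quick numeric check of the area sampled by this scheme (using the 12.5 mm beam diameter above and the 138 mm clear aperture from $\S$\ref{sec:gratingspec}):

```python
# Fraction of the 138 mm clear aperture covered by nine 12.5 mm diameter
# subaperture beams; circular areas scale as the diameter squared.
beam_d_mm, ca_d_mm, n_subapertures = 12.5, 138.0, 9
fraction = n_subapertures * (beam_d_mm / ca_d_mm) ** 2
print(f"{100 * fraction:.1f}% of the clear aperture area is directly tested")
# → 7.4%
```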
\begin{figure}[t] \begin{center} \begin{tabular}{c} \includegraphics[width=\textwidth]{f4.png} \end{tabular} \end{center} \caption[example] { \label{fig:gratingcell} The mechanical design of the VGT grating mounting cell. \textit{a}) A rendering of the grating cell with major components labeled. \textit{b}) The engraving on the edge of one of the prototype VIRUS gratings that indicates the fringe orientation, including the direction of the fringe tilt. This engraving is aligned with a matching feature on the grating cell's frame for rapid initial alignment of the grating. \textit{c}) A rendering of the VGT base showing the series of holes and slots used for maintaining the rotational alignment of the grating when switching between the 9 subapertures. The positions of the subapertures are indicated by the thin blue circles on the grating face. } \end{figure} The optomechanical design of the tester has been simplified by utilizing off-the-shelf components for 1'' diameter optics, and modifications to these stock parts were made where necessary to meet our needs. Custom aluminum hardware seamlessly mates with these stock components to fix the collimator and camera angles in the same configuration as a VIRUS spectrograph to an accuracy of $\pm0.1$$^{\circ}$. Including its light-tight enclosure that allows accurate testing in a fully illuminated room, the VGT is compact and can easily fit on an office desk. It is approximately 485 mm tall with a 350$\times$320 mm footprint and weighs $\sim12$ kg. The LEDs and the comparison photodiode are controlled through a data acquisition unit requiring a single USB connection to a host computer. The CCD camera interfaces with the computer through a Gigabit Ethernet port and is powered by an external 12 V DC source. Both the CCD and the comparison photodiode were verified for linearity over the relevant signal levels.
The operation of the VGT is controlled through custom Python software that provides near ``push-button'' simplicity for the tester's operation in a command shell environment, and includes the automated reduction of the CCD images, photometry, background subtraction of the photodiode signal, and calculation of diffraction efficiencies. Before shipment to the vendor, an absolute calibration of the VGT was provided by assembling the camera lens and CCD in a ``straight-through'' configuration without the grating and performing CCD photometry on the direct pinhole images of the LEDs. This initial calibration is then refined as needed over time automatically through the continuous monitoring of the LED output by the comparison photodiode, and manually through regular checks of a standardized 930 line mm$^{-1}$ reference grating whose absolute external diffraction efficiency is well known from measurements made with the flexible test-bench facility discussed in $\S$\ref{subsec:prototypegratings}. The statistical uncertainties in the diffraction efficiency measurements by the VGT are $\pm0.8$\%, $\pm0.8$\%, and $\pm0.1$\% at 350, 450, and 550 nm, respectively. \subsection{Measurements}\label{subsec:measurements} \begin{figure}[t] \begin{center} \begin{tabular}{c} \includegraphics[width=0.9\textwidth]{f5.png} \end{tabular} \end{center} \caption[example] { \label{fig:constructed} \textit{a}) A photo of the completed VGT. When in use, a curtain drops over the open face of the enclosure to make the apparatus light-tight. \textit{b}) An image constructed from the sum of three dark-subtracted CCD images, each taken with one of three LEDs turned on. Each CCD image was normalized to the peak flux within each indicated circular aperture before the summation. The three circular apertures are used for the photometric measurements from which the diffraction efficiency at each wavelength is calculated.
The red elliptical annulus around the 350 nm pinhole image has a maximum angular width of 0.5$^{\circ}$ and was used to measure the near-UV scattering of the gratings. } \end{figure} A photo of the completed VGT can be seen in panel $a$ of Fig. \ref{fig:constructed}. As we demonstrated in Ref. \citenum{Chonis12}, a full test of a grating requires $\lesssim10$ minutes to complete. Thanks to the fixed and rugged nature of the VGT design, the grating acceptance measurements are tailored to verifying the most critical specifications and have become standardized to avoid the unnecessary confusion that results from inconsistent measurement methods. For each VPH grating on the production line, the VGT provides the following key measurements: \begin{itemize} \itemsep1pt \parskip0pt \parsep0pt \item \textbf{Average External Diffraction Efficiency:} Each grating is tested at 350, 450, and 550 nm in each of the 9 different subaperture positions across the grating CA. With a 12.5 mm diameter beam size, this allows 7.4\% of the total CA area to be directly tested. The reported external diffraction efficiency for each wavelength that is to be compared with the specification in Table \ref{tab:Efficiency} is calculated as the average over these 9 subapertures. Fig. \ref{fig:constructed}$b$ shows an example of the pinhole images for each wavelength on the VGT CCD and the location of the over-sized apertures used for performing the photometry. \item \textbf{External Diffraction Efficiency Spatial Uniformity:} Using the data gathered above for the 9 subapertures, an estimate of the spatial uniformity of the external diffraction efficiency can be calculated. For example, a simple metric for estimating the spatial diffraction efficiency uniformity is the difference between the maximum and minimum measured efficiencies at each wavelength. 
\item \textbf{Near-UV Scattered Light:} Scattered light within the processed grating layer can be the result of aberrations, reflections, and other imperfections in the optical system used to expose the holographic material in which the grating is formed\cite{Barden01}. Additional sources of scattering are the epoxy layer used to bond the substrates and the surface roughness of the substrates themselves. An example of a grating with particularly bad scattering properties can be found in Fig. 18 of Ref. \citenum{Barden01}. In VIRUS, a grating that scatters a high amount of the incident beam can adversely affect the image quality of the spectrograph leading to an increase in the cross-talk between imaged fibers at the focal plane. Scattering is most pronounced at short wavelengths, so we quantify the worst-case scenario by making the measurement at 350 nm. To increase the signal-to-noise ratio for measuring faint scattered light, the images taken at all 9 subaperture positions are coadded. Two custom photometric apertures are used. First, an inner elliptical aperture is centered on the 350 nm pinhole image with major and minor axes that correspond to the ideal image size as modeled with Zemax, given the dispersion resulting from the non-monochromatic LED output. A second elliptical aperture is also used with the same center and major axis as the first, but with a minor axis that extends an additional angular distance of 0.5$^{\circ}$\ on either side of the inner elliptical aperture. The total flux within the large circular aperture used in the 350 nm diffraction efficiency calculation is first measured, followed by a measurement of the flux contained within the elliptical annulus formed between the two elliptical apertures (see Fig. \ref{fig:constructed}$b$). 
The goal for science-grade gratings is to have $\lesssim3$\% of the total flux scattered into the elliptical annulus for the grating to be accepted\footnote{The original specification for scattered light by the gratings stated that $\lesssim3$\% of light at $\lambda=350$ nm in a point source can be scattered into a 0.5$^{\circ}$\ solid angle cone around the $m=1$ beam at the design $\alpha$. However, as stated in $\S$\ref{subsec:testerdesign}, the narrowband-filtered LED light sources in the VGT are not monochromatic. As a result, the pinhole images on the CCD are elongated in the dispersion direction, which motivates the use of an elliptical annulus as the scattering aperture.}. \end{itemize} The most notable optical property that the VGT does not verify is the transmitted wavefront error (TWE). TWE measurements are made on selected gratings to verify consistency from batch to batch using a Zygo interferometer with a 6'' diameter beam (stopped down to a 138 mm diameter) at $\lambda=632$ nm in a double-pass configuration using a reference mirror. The TWE for a science-grade grating should be $<2$ waves peak-to-valley at 632 nm within the CA, including any spherical wavefront error. \section{MASS PRODUCTION OF VPH GRATINGS} \label{sec:production} In this section, the production line process that was used to fabricate the VIRUS VPH gratings at Syzygy Optics, LLC is summarized. For an individual VPH grating, the process begins by preparing a solution of ammonium dichromate and gelatin (i.e., dichromated gelatin; DCG) that serves as the holographic medium. This solution is poured between a glass mold and the base fused silica substrate. To achieve a gelatin layer that is initially $\sim100$ $\mu$m thick, adhesive shims are attached around the perimeter of the substrate to set the spacing between the substrate and mold. The gelatin is then cooled until it congeals, and the substrate is removed from the mold using a releasing agent.
The substrate is then dried and subsequently cured in an incubator. To form a holographic image in the gelatin layer, each cured substrate is exposed to a 457.5 nm coherent light source that has interference in a plane that forms a 1$^{\circ}$\ angle with the gelatin layer. After the exposure, the grating is submerged in photographic fixer and then dehydrated with graded alcohol. The amount of time in the fixer and in each of the alcohol baths can be varied to produce different modulations of the gelatin layer's index of refraction. Upon removal from the final alcohol bath, any remaining liquid is dried from the grating by placing it in an oven for approximately five minutes. To obtain a preliminary assessment of the optical properties of the uncapped grating\footnote{On average, the exposed and processed DCG layer has approximately the same index of refraction as the glass that is used to cap the grating\cite{Barden00}. As a result, the VGT with its fixed 9$^{\circ}$\ angle of incidence can be used to measure a grating with or without the cap substrate in place. The only difference in the measurement without the cap substrate is the lack of an AR coated incident surface, and the lack of losses due to the internal transmittance of the fused silica. Both of these effects can be accounted for to estimate the final diffraction efficiency of the capped grating.}, a series of rapid measurements is taken with the VGT to determine whether the grating meets the minimum required efficiency at 350, 450, and 550 nm. If the grating does not meet the minimum diffraction efficiency specification, the process of fixing and dehydration is immediately modified for the following grating. If necessary, gratings can be reprocessed through the alcohol baths to further modulate the gelatin layer's refractive index. If a grating is not uniform across the CA, however, it is not eligible for reprocessing.
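The in-process triage described above can be summarized as a small decision function. The sketch below is illustrative only: the threshold values are hypothetical placeholders, since the actual minimum-efficiency specifications are those of Table \ref{tab:Efficiency}:

```python
def triage_grating(eff, min_eff, uniform):
    """Sketch of the in-process VGT triage described in the text.

    `eff` and `min_eff` map wavelength (nm) -> measured / minimum
    external diffraction efficiency; `uniform` flags whether the
    grating is uniform across the CA."""
    meets_minimum = all(eff[w] >= min_eff[w] for w in min_eff)
    if meets_minimum:
        return "accept for drying and retest"
    if uniform:
        return "reprocess through alcohol baths"
    return "reject (non-uniform; not eligible for reprocessing)"

# Hypothetical example values (not the real specification):
measured = {350: 0.62, 450: 0.70, 550: 0.55}
minimum  = {350: 0.60, 450: 0.65, 550: 0.50}
decision = triage_grating(measured, minimum, uniform=True)
```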
Using the VGT to determine the level of uniformity in situ prevented the reprocessing of gratings that could not consistently reach the minimum required diffraction efficiency across the CA. Acceptable gratings are stored in a dry box for 2-3 days before being retested with the VGT to confirm that the grating properties have not varied significantly from the initial measurements. If there are no significant changes after drying, a 4-5 mm ring of the gelatin around the edge of the grating is removed and the grating is capped with the second fused silica substrate using an optical grade glue. The ring around the edge of the grating that is devoid of gelatin fills with the adhesive and encapsulates the diffractive medium. This seals the grating and prevents moisture from entering and altering the material. Once the adhesive sets, the VGT is used for the final measurements at each of the 9 subapertures before the grating is approved as science-grade. In the following section, we present the final VGT measurements for the 170 science-grade gratings that were fabricated using this production process. \section{PERFORMANCE OF THE VPH GRATING SUITE} \label{sec:performance} To be accepted as a science-grade grating, each of the 170 units must meet the basic assembly specifications. These include having a total thickness of $16.0\pm0.5$ mm, a physical diameter of $148.0\pm0.5$ mm, a radial mismatch of the two fused-silica substrates of $<0.5$ mm, and a total wedge of $<30$$^{\prime}$\ and $<10$$^{\prime}$\ perpendicular and parallel to the fringes, respectively. The VPH layer must also have a CA with a diameter $>138.0$ mm, be centered on the base substrate to within $\pm1$ mm, and be free of major bubbles and point defects. Averaged over all 170 science-grade gratings, we have measured the mean defect area to be 1.13 mm$^{2}$ (standard deviation $\sigma = 0.53$ mm$^{2}$).
The maximum defect area of any individual science-grade grating is 2.40 mm$^{2}$, which corresponds to only 2.0\% of the area of a single VGT subaperture. As a result, our characterization of the average external diffraction efficiency will not be significantly affected by measuring a subaperture that contains a large bubble or point defect. Each science-grade grating must have no chips within the CA on either substrate, be able to meet a surface finish specification of 60/40 scratch/dig, and have a surface roughness of $<2$ nm within the CA. Finally, each VPH grating assembly was supplied with a unique serial number that can be traced back to a specific manufacturing date and process during production. \subsection{Average External Diffraction Efficiency}\label{subsec:efficiency} \begin{figure}[t] \begin{center} \begin{tabular}{c} \includegraphics[width=0.99\textwidth]{f6.png} \end{tabular} \end{center} \caption[example] { \label{fig:Results} The average delivered external diffraction efficiency of the VIRUS VPH grating suite. The shaded gray region in both panels indicates the external diffraction efficiency specification outlined in Table \ref{tab:Efficiency}. \textit{a}) The average VGT diffraction efficiency measurements for each grating are shown as the small colored data points (teal for 350 nm, blue for 450 nm, and green for 550 nm), which have been scattered randomly about the tested wavelengths for visual clarity. The large black data points show the average external diffraction efficiency of the entire grating suite at each of the three measured wavelengths, while the error bars show the standard deviation of the distributions. 
The black curves show quadratic spline fits to the diffraction efficiency measurements for notable individual gratings (the heavy solid and heavy dashed curves correspond to the best and worst overall gratings, respectively, while the light dashed curve is the grating that performs best at 350 nm with a peak external diffraction efficiency of 77.4\%). \textit{b}) A comparison of the delivered grating suite to 10 gratings simulated with RCWA. The blue curves are quadratic spline fits to the individual VGT grating measurements, while the red curves are quadratic spline fits to the simulated gratings. The large black (red) data points and error bars are the mean diffraction efficiency and standard deviation for the delivered (simulated) grating suites. The dashed red curve indicates a simulated grating that would be rejected as science-grade. The vertical blue (red) lines indicate the average wavelength of peak diffraction efficiency for the delivered (simulated) grating suites. See the text of $\S$\ref{subsec:efficiency} for more details. } \end{figure} \begin{figure}[t] \begin{center} \begin{tabular}{c} \includegraphics[width=0.99\textwidth]{f7.png} \end{tabular} \end{center} \caption[example] { \label{fig:EfficiencyHist} Histograms showing the distribution of the external diffraction efficiency for the delivered VIRUS VPH grating suite, as measured by the VGT. Each histogram represents a cross-section cut along the $y$-axis at each wavelength from Figure \ref{fig:Results}$a$. For each panel, the solid and dashed black vertical line indicates the batch mean and minimum requirements on the external diffraction efficiency, respectively, while the solid red vertical line indicates the measured mean external diffraction efficiency for the entire grating suite.\vspace{5mm} } \end{figure} The external diffraction efficiency results for the delivered science-grade VPH gratings for VIRUS are shown in Fig. \ref{fig:Results}$a$. 
For each of the three tested wavelengths, we show the individual, spatially averaged VGT diffraction efficiency measurements along with the suite mean and standard deviation averaged over all 170 gratings. In addition, the distribution of the diffraction efficiency for each wavelength is shown in Fig. \ref{fig:EfficiencyHist}. The histograms in that figure represent a cross-section at each wavelength along the $y$-axis of Fig. \ref{fig:Results}$a$. These results are also summarized in Table \ref{tab:Results}. In general, the measured external diffraction efficiency of the VPH grating suite very closely meets or exceeds our batch mean requirement at each of the three tested wavelengths on average. Additionally, the majority of the delivered gratings greatly exceed the minimum external diffraction efficiency specification, with only a single grating dropping below the minimum requirement at 350 nm by 0.2\%. To quickly compare the individual gratings amongst each other quantitatively, we calculate $\int\!\eta(\lambda)\:d\lambda$ for each grating between $350 < \lambda \mathrm{(nm)} < 550$, where $\eta(\lambda)$ is the function describing the external diffraction efficiency curve. Since we only have measurements at three discrete wavelengths, we interpolate the diffraction efficiency between the measurements using a quadratic spline to determine $\eta(\lambda)$ for each grating. In Fig. \ref{fig:Results}$a$, the heavy solid curve shows the quadratic spline fit to the measurements of the best overall performing grating according to this merit function while the heavy dashed curve represents the worst grating. Despite the large difference in performance of these two gratings at longer wavelengths, both have similar efficiency at 350 nm. This is consistent with the measurements in general (see Table \ref{tab:Results}), as will also be seen in the following subsection on the spatial uniformity of the diffraction efficiency. 
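The merit function $\int\!\eta(\lambda)\:d\lambda$ used above to rank gratings can be made concrete: with only three measured wavelengths, the quadratic spline reduces to the unique parabola through the points, whose integral is available in closed form. A minimal sketch (the efficiency values are illustrative, not actual measurements):

```python
import numpy as np

def efficiency_merit(wavelengths, efficiencies):
    """Integral of eta(lambda) over 350-550 nm, with eta taken as
    the unique quadratic through the three VGT measurements."""
    coeffs = np.polyfit(wavelengths, efficiencies, 2)
    antideriv = np.polyint(coeffs)
    return np.polyval(antideriv, 550.0) - np.polyval(antideriv, 350.0)

# Hypothetical measurements for two gratings (fractions, not %):
lam = [350.0, 450.0, 550.0]
grating_a = efficiency_merit(lam, [0.70, 0.75, 0.60])
grating_b = efficiency_merit(lam, [0.69, 0.68, 0.55])
better = "A" if grating_a > grating_b else "B"
```

For three equally spaced points, this integral is identical to Simpson's rule, $(h/3)(\eta_{350} + 4\eta_{450} + \eta_{550})$ with $h = 100$ nm.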
This consistency at 350 nm reflects our emphasis to the contractor that high near-UV efficiency was the primary performance goal, since multiple combined effects in the near-UV make detecting LAEs difficult (e.g., see $\S$\ref{sec:intro}). Of all the gratings in the suite, the best-performing unit at 350 nm has a spatially averaged external diffraction efficiency of 77.4\%. The quadratic spline fit to this unit's VGT data has been highlighted in Fig. \ref{fig:Results}$a$ in addition to that of the best and worst performing gratings mentioned above. Overall, the production process was very consistent over time, as there is no statistical dependence of the external diffraction efficiency at any wavelength on the date of manufacture. The range of variation from grating to grating seen in Figs. \ref{fig:Results} and \ref{fig:EfficiencyHist} is due to small differences in the properties of the processed DCG layer in which the grating is formed. A VPH grating can be fully described and its diffraction efficiency modeled at a given $\alpha$ and $m$ (e.g., with a Rigorous Coupled Wave Analysis; RCWA\cite{Gaylord85}) given the following properties: the fringe density, the fringe tilt $\phi$, the DCG layer refractive index $n_{\mathrm{DCG}}$ and its sinusoidal modulation $\Delta n_{\mathrm{DCG}}$, and the DCG layer thickness $d$. Of these properties at a fixed $\alpha$ and $m$, those that significantly affect the diffraction efficiency of the grating are $d$, $\Delta n_{\mathrm{DCG}}$, and $\phi$. To estimate the range of variation in these parameters that matches the measured unit-to-unit diffraction efficiency variation in the delivered grating suite, we ran a series of $m=1$ RCWA models based around the targeted parameters for VIRUS. As described in Ref. \citenum{Chonis12}, those parameters are 930 line mm$^{-1}$ fringe density, $\phi=-1$$^{\circ}$, $\alpha=9$$^{\circ}$, $d=5.5$ $\mu$m, and $\Delta n_{\mathrm{DCG}}=0.037$.

We assume that $n_{\mathrm{DCG}} = 1.5$ (e.g., Ref. \citenum{Barden00}), and apply factors to the RCWA modeled diffraction efficiency to take into account the transmission through the AR coated fused-silica substrates, the epoxy layer, and the transmittance of the DCG layer itself for typical physical thicknesses\cite{Barden00}. The predicted diffraction efficiency was calculated at 350, 450 and 550 nm to mimic the VGT measurements. In Fig. \ref{fig:Results}$b$, we show a sample suite of 10 RCWA modeled gratings (red quadratic splines) compared to the delivered VIRUS grating suite (blue quadratic splines). The exact values of $d$, $\Delta n_{\mathrm{DCG}}$, and $\phi$ for each RCWA model were chosen from a uniform distribution with a range about the targeted value of $\pm1.0$ $\mu$m, $\pm0.01$, and $\pm0.5$$^{\circ}$, respectively. As can be seen, the range of diffraction efficiency at each measured wavelength in the RCWA model suite qualitatively matches that of the delivered grating suite relatively well. Similar to what we observe for the delivered gratings, the wavelength with the least variation from unit to unit is 350 nm. Additionally, the overall shape of the average modeled diffraction efficiency curve matches that of the delivered grating suite well, with a difference in the average diffraction efficiency peak of only 10.8 nm (see the vertical lines in Fig. \ref{fig:Results}$b$). Of the modeled gratings, one was not considered in the discussion above due to not meeting the diffraction efficiency requirements for a science-grade grating (see the dashed red curve; the fall-off at 350 nm was due to an extremely large $\Delta n_{\mathrm{DCG}}$ coupled with $d$ that was also larger than the targeted value). In general, however, the average diffraction efficiency of the delivered gratings falls systematically short of the RCWA models at each wavelength (also, see Ref. \citenum{Chonis12} and $\S$\ref{subsec:uniformity} below). 
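A full RCWA treatment is beyond a short sketch, but the flavor of this parameter-sampling exercise can be illustrated with the much simpler Kogelnik two-wave approximation for a Bragg-matched, unslanted transmission phase grating, $\eta = \sin^{2}[\pi\,\Delta n\,d/(\lambda\cos\theta)]$. The sketch below samples $d$ and $\Delta n_{\mathrm{DCG}}$ uniformly about the targeted values as in the text, but it is only a crude stand-in: it ignores the fringe tilt, off-Bragg dephasing away from the Bragg wavelength, and the substrate/AR/epoxy loss factors:

```python
import math
import random

def kogelnik_bragg_efficiency(wavelength_um, d_um, dn, n=1.5, alpha_deg=9.0):
    """Kogelnik first-order internal efficiency for a Bragg-matched,
    unslanted transmission phase grating.  Applied here at every
    wavelength, so off-Bragg dephasing is deliberately ignored."""
    # Refraction of the incident beam into the DCG layer (Snell's law).
    theta = math.asin(math.sin(math.radians(alpha_deg)) / n)
    nu = math.pi * dn * d_um / (wavelength_um * math.cos(theta))
    return math.sin(nu) ** 2

random.seed(1)
suite = []
for _ in range(10):
    d = 5.5 + random.uniform(-1.0, 1.0)        # layer thickness (um)
    dn = 0.037 + random.uniform(-0.01, 0.01)   # index modulation
    suite.append([kogelnik_bragg_efficiency(w, d, dn)
                  for w in (0.350, 0.450, 0.550)])
```

Even this simplified model reproduces the qualitative point of the exercise: modest spreads in $d$ and $\Delta n_{\mathrm{DCG}}$ translate into a noticeable unit-to-unit spread in efficiency.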
The exercise outlined above gives an estimate of the precision to which the DCG layer can be processed for modern VPH gratings. \input{t2.tex} \subsection{Spatial Uniformity} \label{subsec:uniformity} From the measurement of the 9 individual subapertures across a grating CA, the VGT provides a measure of the spatial uniformity of the external diffraction efficiency. As labeled in Table \ref{tab:Results}, the ``Total'' spatial variation is a simple measure of uniformity calculated by taking the difference between the maximum and the minimum measured diffraction efficiencies between the 9 subapertures. In Fig. \ref{fig:UniformityHist}, we show the distributions of the total spatial variation for the VIRUS VPH grating suite at each of the three measured wavelengths. As was the case with the average external diffraction efficiency discussed in the previous subsection, the most consistent performance is at 350 nm, where the mean total spatial variation is $\sim2\times$ smaller than at 450 or 550 nm. The 350 nm distribution's standard deviation is also significantly smaller than that at the other two wavelengths. \begin{figure}[t] \begin{center} \begin{tabular}{c} \includegraphics[width=0.99\textwidth]{f8.png} \end{tabular} \end{center} \caption[example] { \label{fig:UniformityHist} Histograms showing the distribution of different measures of the uniformity of the external diffraction efficiency for the delivered VIRUS VPH grating suite, as measured by the VGT. Each panel corresponds to one of the three measured wavelengths and contains three histograms. The shaded histograms represent the difference between the maximum and minimum diffraction efficiency measured over the 9 VGT subapertures (``Total''). The black histograms represent the difference between the maximum diffraction efficiency measured over the 9 subapertures and the average (``High'').
Finally, the red histograms represent the difference between the minimum diffraction efficiency measured over the 9 subapertures and the average (``Low''). To ease the comparison of the ``High'' and ``Low'' distributions, the ``Low'' distribution (which consists entirely of negative values) is shown as positive. } \end{figure} \begin{figure}[ht] \begin{center} \begin{tabular}{c} \includegraphics[width=0.99\textwidth]{f9.png} \end{tabular} \end{center} \caption[example] { \label{fig:UniformityCorrelation} Scatter plots of the ``High'' and ``Low'' spatial variation measures as a function of the spatially averaged diffraction efficiency for each VPH grating in the VIRUS suite at each of the three wavelengths measured by the VGT. As described in $\S$\ref{subsec:uniformity}, the ``High'' (``Low'') spatial variation is simply the difference between the maximum (minimum) diffraction efficiency measured over the 9 subapertures and the spatially averaged diffraction efficiency that is plotted on the abscissa. Thus, there are two colored data points plotted for each grating: one above the thick dashed line at $y=0$, and one below. The larger black data points in each panel correspond to the average spatial variation measure calculated for four equally spaced bins of average diffraction efficiency, while the error bars correspond to the standard error of the mean. } \end{figure} In addition to the total spatial variation, we also look at the spatial variation above and below the average external diffraction efficiency. As labeled in Table \ref{tab:Results}, the ``High'' spatial variation is simply the difference between the maximum measured diffraction efficiency among the 9 subapertures and the spatially averaged diffraction efficiency for a given grating. Similarly, the ``Low'' spatial variation is the difference between the minimum measured diffraction efficiency among the 9 subapertures and the spatially averaged diffraction efficiency, and is a negative quantity. 
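The three uniformity measures defined above follow directly from the 9 subaperture measurements at a given wavelength; a minimal sketch (the efficiency values are made up for illustration):

```python
import numpy as np

def uniformity_metrics(subaperture_eff):
    """Compute the 'Total', 'High', and 'Low' spatial-variation
    measures defined in the text from the 9 subaperture
    diffraction-efficiency measurements at one wavelength."""
    eff = np.asarray(subaperture_eff, dtype=float)
    mean = eff.mean()
    return {"total": eff.max() - eff.min(),
            "high": eff.max() - mean,   # always >= 0
            "low": eff.min() - mean}    # always <= 0 by construction

# Hypothetical 9-subaperture measurements (fractions, not %):
metrics = uniformity_metrics([0.70, 0.71, 0.69, 0.72, 0.70,
                              0.68, 0.70, 0.71, 0.69])
```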
The ``High'' (``Low'') distributions for the VIRUS VPH grating suite are also shown in the panels of Fig. \ref{fig:UniformityHist} as the black (red) histograms. In Fig. \ref{fig:UniformityHist}, the ``Low'' distributions are shown as positive to facilitate visual comparison with the ``High'' distributions. From the mean and standard deviation of the two distributions for each wavelength (see Table \ref{tab:Results}), it is clear that the distributions differ significantly. This is confirmed by running a two-sample Kolmogorov-Smirnov test on the ``High'' and ``Low'' distributions, which indicates that the null hypothesis (i.e., that the two distributions are drawn from the same parent distribution) can be rejected at each wavelength with very high certainty. At each wavelength, the ``High'' distribution is narrower and has a lower mean than the corresponding ``Low'' distribution. This is likely due to the fact that the maximal diffraction efficiency at a given wavelength for a fixed fringe density and $\alpha$ is achieved only for exactly the right combination of properties that describe the processed DCG layer (i.e., primarily $\Delta n_{\mathrm{DCG}}$ and $d$)\cite{Barden00,Baldry04}. Given some distribution of achieved values for these processed DCG layer properties about the targeted values, it is more likely that a combination of non-optimal values will be drawn than the exact set of values that maximizes the diffraction efficiency. As a result, non-uniformities in the DCG layer processing that cause the diffraction efficiency to vary across the grating CA (e.g., small changes in effective gelatin thickness) are more likely to reduce the diffraction efficiency than to boost it above the average in a given subaperture. The result is that the ``High'' distribution is narrower and smaller on average than the ``Low'' distribution.
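A two-sample KS test of this kind is straightforward with SciPy. The sketch below uses synthetic stand-ins for the ``High'' and $|$``Low''$|$ samples rather than the actual VGT measurements; only their qualitative properties (the ``Low'' sample broader, with a larger mean) mirror the text:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
# Illustrative samples, one value per grating at one wavelength.
high = rng.normal(loc=0.02, scale=0.005, size=170)
low = rng.normal(loc=0.04, scale=0.012, size=170)

# Two-sample KS test of the null hypothesis that both samples are
# drawn from the same parent distribution.
stat, p_value = ks_2samp(high, low)
reject_null = p_value < 0.01
```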
Since the spatial variations in DCG layer properties tend to decrease the measured average diffraction efficiency for a given grating, it should not be surprising that the modeled RCWA predictions discussed in the previous subsection and shown in Fig. \ref{fig:Results}$b$ (which do not consider the spatial variation of grating parameters) are systematically higher than the measurements for the delivered gratings. Fig. \ref{fig:UniformityCorrelation} also supports the aforementioned hypothesis that small changes in the DCG properties as a function of position across the CA tend to scatter individual measurements towards lower efficiency rather than higher efficiency. In this figure, we plot both the ``High'' and ``Low'' spatial variation measures for each grating as a function of the spatially averaged diffraction efficiency for each respective wavelength. In addition, we have calculated the Spearman Rank Correlation Coefficient $\rho_{\mathrm{S}}$ and associated $p$-value for the ``High'' and ``Low'' variation measures separately at each wavelength. As can be seen in Fig. \ref{fig:UniformityCorrelation}, the ``High'' and ``Low'' variation measures are correlated with high statistical significance at 350 and 450 nm such that more uniform gratings have higher spatially averaged diffraction efficiency. However, this trend is not seen at 550 nm. For 350 and 450 nm, the fact that the ``Low'' spatial variation measure increases with increasing average diffraction efficiency is not surprising. What is more interesting is that the ``High'' spatial variation measure \textit{decreases} with increasing average diffraction efficiency. This is likely a result of the highest average diffraction efficiency gratings already having close to the optimal DCG properties that maximize the diffraction efficiency. As a result, non-uniformities in the DCG layer processing are increasingly less likely to result in a better combination of layer properties.
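The rank-correlation analysis can be sketched with SciPy's `spearmanr`. The data below are synthetic, constructed only to mimic the negative ``High''-versus-average trend described in the text; they are not the actual suite measurements:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(42)
# 170 gratings: 'High' excursions shrink as average efficiency rises
# (illustrative slope and noise level, not fitted to real data).
avg_eff = rng.uniform(0.55, 0.80, size=170)
high_var = 0.05 - 0.05 * avg_eff + rng.normal(0.0, 0.003, size=170)

# Spearman rank correlation and its p-value.
rho_s, p_value = spearmanr(avg_eff, high_var)
```

A strongly negative $\rho_{\mathrm{S}}$ with a tiny $p$-value is the signature of the monotonic anticorrelation seen at 350 and 450 nm.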
These trends are not seen at 550 nm because it is the tested wavelength furthest from the Bragg condition (from $\S$\ref{sec:gratingspec}, recall that the VIRUS gratings were designed such that the Bragg wavelength is between $350 < \lambda\; \mathrm{(nm)} < 400$; from Fig. \ref{fig:Results}$a$, the average wavelength of the peak diffraction efficiency for the suite is 412.3 nm). As a result, the diffraction efficiency at 550 nm is not maximal, yielding a more equal likelihood that a slight change in DCG properties could scatter the diffraction efficiency either above or below the average. \subsection{Near-UV Scattering}\label{subsec:scattering} Fig. \ref{fig:Scattering}$a$ shows the distribution of the scattering measurements made by the VGT at 350 nm. On average, the fraction of light in the 350 nm pinhole image that is scattered into the VGT scattering aperture (see $\S$\ref{subsec:measurements}) is 3.2\%, which is slightly worse than our desired $\lesssim3$\% specification. With a standard deviation of only 0.37\%, all gratings in the VIRUS VPH grating suite perform comparably, and the amount of observed scattering in the worst performing grating (4.3\%) should not significantly increase the fiber-to-fiber cross-talk on the CCD detector beyond what is acceptable for HETDEX. \begin{figure}[t] \begin{center} \begin{tabular}{c} \includegraphics[width=0.7\textwidth]{f10.png} \end{tabular} \end{center} \caption[example] { \label{fig:Scattering} The scattered light properties of the VIRUS VPH grating suite. $a$) The distribution of the scattered light at 350 nm as measured by the VGT. The teal histogram represents the original measurements, while the red histogram represents the measurements after correction for the scattering time dependence that is suspected to be due to an increasing amount of dust on the VGT optics (see panel $b$).
The dashed black vertical line indicates our original specification on the fraction of the 350 nm pinhole image's total flux that could be scattered into the VGT scattering aperture ($\lesssim3$\%; see $\S$\ref{subsec:measurements}). The solid black vertical line indicates the mean of the entire VIRUS VPH grating suite from the original measurements, and the solid red vertical line indicates the mean after the correction. $b$) The measured scattered light at 350 nm as a function of manufacturing date. The manufacturing date is represented as $\Delta t$ in days, which is the time since the completion of the first science-grade grating. The gray line represents a linear fit to the data and is used as an estimation of the increased scattering effect due to the increasing dust deposited on the VGT optics with time. The slope of this linear fit is used to correct the original measurements, and the corrected data are shown as the red histogram in panel $a$. } \end{figure} \begin{figure}[t] \begin{center} \begin{tabular}{c} \includegraphics[width=0.8\textwidth]{f11.png} \end{tabular} \end{center} \caption[example] { \label{fig:twe} Surface/wavefront maps showing the TWE at $\lambda=632$ nm for a typical grating in the VIRUS VPH grating suite. The map on the left shows the $m=1$ TWE while the map on the right shows the $m=0$ TWE. The typical $m=1$ TWE is $1/2$ wave peak-to-valley at $\lambda=632$ nm, which is significantly better than the $\leq2$ wave peak-to-valley requirement for VIRUS. } \end{figure} Of all the properties that were measured by the VGT, the near-UV scattering is the only one that is significantly correlated with the manufacturing date ($\rho_{\mathrm{S}}=0.61$, $p=1.07\times10^{-18}$; see Fig. \ref{fig:Scattering}$b$). Recall that the VGT operates in an office environment rather than a cleanroom. 
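The dust correction indicated by the gray line in Fig. \ref{fig:Scattering}$b$ amounts to a linear detrend of the scattering measurements against manufacturing date. A minimal sketch with illustrative numbers (a flat 3\% baseline plus an artificial linear rise, not the real data):

```python
import numpy as np

def detrend_scattering(days, scatter_pct):
    """Remove the linear time trend attributed to dust on the VGT
    optics: fit scatter vs. days since the first science-grade
    grating was completed and subtract slope * days, so that the
    t = 0 measurements are assumed dust-free."""
    slope, _ = np.polyfit(days, scatter_pct, 1)
    return np.asarray(scatter_pct) - slope * np.asarray(days)

# Illustrative data: a 3% baseline plus a slow dust-driven rise.
days = np.arange(0, 300, 10, dtype=float)
raw = 3.0 + 0.002 * days
corrected = detrend_scattering(days, raw)
```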
Since clean air was not circulated around the VGT during the year over which these tests were carried out, we suspect that the slow increase in the measured scattered light with time is a systematic effect caused by the steady increase of dust deposited on the VGT's lenses and filters, rather than an increase in the actual grating layer scattering. To estimate the average scattering in this scenario, we assume that the VGT optics were clean as of the time of the first science-grade grating's completion, and simply subtract the slope of a linear fit to the scattering data as a function of time. The linear fit can be seen in Fig. \ref{fig:Scattering}$b$ and the resulting histogram of corrected scattering measurements can be found in Fig. \ref{fig:Scattering}$a$. The mean and standard deviation of the corrected distribution are 2.84\% and 0.30\%, respectively. After the correction, 48 gratings ($\sim28$\% of the suite) have a scattering measurement of $>3$\%. \subsection{Transmitted Wavefront Error}\label{subsec:twe} Fig. \ref{fig:twe} shows surface/wavefront maps for a typical VIRUS VPH grating as measured at $m=0$ and $m=1$. The surface errors do not appear to be correlated between the two measured spectral orders. The typical TWE at $\lambda=632$ nm for $m=1$ ($m=0$) is $\sim360$ nm ($\sim240$ nm) peak-to-valley. For VIRUS, this $1/2$ wave peak-to-valley performance at $m=1$ is excellent and is significantly better than our $\leq2$ wave peak-to-valley requirement. This is the result of a high-performance holographic exposure system in addition to the use of thick substrates to reduce the effect of warping after the cap substrate is finally glued over the processed grating layer. \section{SUMMARY} \label{sec:conclusions} In this paper, we have presented the design of the VPH diffraction gratings that have been mass-produced for use in the new VIRUS array of spectrographs for the HET.
The grating design was optimized to have high external diffraction efficiency in the near-UV. This is required for VIRUS to maintain sufficient throughput for the HETDEX survey, which aims to constrain dark energy and measure its evolution over $1.9 < z < 3.5$ using LAEs as tracers of large scale structure. One of the principal challenges involved in the production of the suite of 170 gratings is maintaining consistency in the high-performance standard required for HETDEX. With such a large number of units, we are also faced with the challenge of efficiently and consistently validating the performance of each of the 170 gratings to ensure that the best quality units are delivered. To perform these tests, we have developed an apparatus that is very effective at providing robust acceptance test results in which measurements of the average external diffraction efficiency, spatial uniformity of the diffraction efficiency, and near-UV scattered light are provided in $\lesssim10$ minutes per grating. We have tested the suite of 170 science-grade gratings and determined that they individually meet or exceed our specifications. At near-UV wavelengths, the average grating in the suite achieves an external diffraction efficiency of $\sim70$\%. As the first optical astronomical instrument to be replicated on such a large scale, the VIRUS project has provided a useful platform on which the production of large aperture VPH gratings for astronomy can be evaluated in a statistical manner. \acknowledgments HETDEX is run by the University of Texas at Austin McDonald Observatory and Department of Astronomy with participation from the Ludwig-Maximilians-Universit\"{a}t M\"{u}nchen, Max-Planck-Institut f\"{u}r extraterrestrische Physik (MPE), Leibniz-Institut f\"{u}r Astrophysik Potsdam (AIP), Texas A\&M University (TAMU), Pennsylvania State University, Institut f\"{u}r Astrophysik G\"{o}ttingen (IAG), University of Oxford, and Max-Planck-Institut f\"{u}r Astrophysik (MPA).
In addition to institutional support, HETDEX is funded by the National Science Foundation (grant AST-0926815), the State of Texas, the US Air Force (AFRL FA9451-04-2-0355), the Texas Norman Hackerman Advanced Research Program under grants 003658-0005-2006 and 003658-0295-2007, and generous support from private individuals and foundations. We thank the staffs of McDonald Observatory, AIP, MPE, TAMU, Oxford University Department of Physics, and IAG for their contributions to the development of VIRUS. We also acknowledge Jim Arns of Kaiser Optical Systems, Inc. for useful discussions during the development phase of the VPH gratings for the Mitchell Spectrograph and VIRUS. T.S.C. acknowledges the support of a National Science Foundation Graduate Research Fellowship.
\section{Introduction} Wire media are structured materials formed by many conducting wires embedded in a host medium. The wires are normally considered to be very long compared to the wavelength in the host medium, but the diameter of the wires is only a small fraction of the lattice constant. The known analytical models of wire media \cite{Pendry_plasmons_PRL_1996,Pendry_plasmons_JPCM_1998, Belov_wiremedium_JEWA_2002,Maslovski_quasistatic_MOTL_2002,Efros_WM_PRB_2002, Shvets_wires_PSPIE_2003,Belov_dispersion_PRB_2003,Constantin_WM, Silveirinha_3dconnected_I3EMTT_2005,Silveirinha_ENG_Plasmonic_2006, Silveirinha_crosswires_PRB_2009} treat them as crystals of infinitely long conducting cylinders. The cylinders may be arranged in different types of lattices resulting in different types of anisotropy of the wire crystals. It is known that wire media may exhibit strong spatial dispersion, so that the permittivity dyadic $\=\varepsilon(\omega, \_k)$ in such media depends on both frequency and wave vector. For instance, the permittivity dyadic of a uniaxial wire medium with one set of thin ideally conducting wires oriented along $\_z_0$ reads \cite{Belov_dispersion_PRB_2003} \begin{equation}{\=\varepsilon(\omega, \_k)\over \varepsilon_0} = \=I_{\rm t} + \left(1 - {k_{\rm p}^2 \over k_0^2 - k_z^2}\right)\_z_0\_z_0, \l{epsilon} \end{equation} where $k_0=\omega\sqrt{\varepsilon_0\mu_0}$, $\varepsilon_0$ and $\mu_0$ are the permittivity and the permeability of the host medium, $k_{\rm p}$ is the plasma wavenumber, $k_z$ is the wave vector component along $\_z_0$, and $\=I_{\rm t}$ is the unit dyadic in the plane orthogonal to~$\_z_0$. It is well known that the wire medium supports propagation of transverse electromagnetic (TEM) modes, which are basically the modes of a multi-wire transmission line.\cite{Belov_dispersion_PRB_2003, Silveirinha_ENG_Plasmonic_2006} Such modes propagate along the wires with a velocity equal to the speed of light in the host medium.
The distribution of the microscopic $\_E$ and $\_H$ fields associated with the TEM modes is static-like in the planes orthogonal to the wires, with the electric force lines emerging from and ending at the surfaces of the wires. It can be easily proven that there is electric charge accumulated on the wires associated with these modes. In Ref.~\onlinecite{Maslovski_disser_2004} it was shown (for the uniaxial wire medium case) that when this charge and the related potential are taken into account it is possible to obtain Eq.~\r{epsilon} from simple quasi-static considerations similar to those used in Ref.~\onlinecite{Maslovski_quasistatic_MOTL_2002}. Thus, it was shown that the strong spatial dispersion in wire media can be correctly described in a quasi-static approximation. In this paper we extend these considerations to a wide class of wire media, and propose an analytical model based on the effective inductance and capacitance per unit length of a wire. The other motivation for this study is the suppression of the nonlocal effects in wire media. In a recent paper by Demetriadou {\it et al.}\cite{Demetriadou_taming_JPCM_2008} the charge accumulated on the wires together with the rather small capacitance of thin wires were identified as the reasons for the spatial dispersion in the wire mesh: a metamaterial formed by three sets of wires oriented along three Cartesian coordinate axes and joined at the crossing points. A rigorous analytical model of such a medium was developed in Refs.~\onlinecite{Silveirinha_3dconnected_I3EMTT_2005,Silveirinha_crosswires_PRB_2009}. The authors of Ref.~\onlinecite{Demetriadou_taming_JPCM_2008} make use of this model and full wave simulations to justify their main claims. They also propose certain ways to decrease the spatial dispersion effects.
The basic idea is to increase the capacitance of the wires by periodically loading them with metallic bodies or patches, or alternatively to increase the inductance per unit length by coating the wires with a magnetic material. Somewhat related to this work, it was shown in Refs.~\onlinecite{Alex_Mushrooms_MTT_2009, Olli_Mushrooms_MTT_2009} that for a substrate formed by a wire medium slab capped with an array of patches (the so-called mushroom substrate \cite{Sievenpiper_Mushrooms_MTT_1999}) the response of the wire medium is essentially local. A different strategy to reduce the spatial dispersion was reported in Ref.~\onlinecite{Silveirinha_crosswires_PRB_2009}, where it was shown that at infrared frequencies the plasmonic properties of metals may enable the design of artificial plasmas that mimic more closely a continuous local isotropic medium with negative permittivity. In this work, we generalize the theories reported in previous studies, \cite{Pendry_plasmons_PRL_1996,Pendry_plasmons_JPCM_1998,Belov_wiremedium_JEWA_2002,Maslovski_quasistatic_MOTL_2002,Shvets_wires_PSPIE_2003,Belov_dispersion_PRB_2003, Efros_WM_PRB_2002, Silveirinha_ENG_Plasmonic_2006, Silveirinha_3dconnected_I3EMTT_2005, Constantin_WM, Silveirinha_crosswires_PRB_2009} and propose a quasi-static homogenization model that accurately characterizes the nonlocal dielectric function of a wide class of wire media (both arrays of parallel wires and arrays of connected wires), including the case where the wires are periodically loaded with conducting metallic bodies. In particular, we demonstrate that our analytical theory accurately models the electric response of a uniaxial wire medium loaded with patches, and we discuss the physics of the suppression of spatial dispersion in such structures. \section{Uniaxial wire medium \label{SecWires}} \label{uniaxial} We will start with the simplest possible case of the uniaxial wire medium with one set of wires oriented along the $z$-axis.
We will follow the treatment presented in Ref.~\onlinecite{Maslovski_disser_2004}. We are interested in the longitudinal (\emph{zz}) component of the permittivity dyadic. To get an expression for it in the quasi-static limit we assume that the radius of the wires $r_0$ and the distance between the wires (the lattice period) $a$ are much less than the wavelength in the medium. Let us note that for the model we are going to develop the exact arrangement of the wires is not important; it is enough to know the average distance between a pair of neighboring wires in the structure. \begin{figure}[htb] \centering \epsfig{file=fig1.eps,width=0.8\textwidth} \caption{\label{fig1} (Color online) A pair of wires of the uniaxial wire medium without (on the left) and with patches (on the right). The integration path used to define Eq.~\r{circulation} is shown by the blue rectangular contour.} \end{figure} Denoting the average (macroscopic) electric field along the $z$ axis in the medium by $\langle E_z\rangle$, one can write the following relation between this field component and the current in the wires~$I_z$: \begin{equation} \langle E_z \rangle = (j\omega L + Z_w) I_z + {\partial\varphi\over \partial z}, \l{main_uni} \end{equation} where $L$ is the effective inductance per unit length of the wire, $Z_w$ is the self-impedance of the wire per unit length which accounts for the finite conductivity of metallic wires at microwave frequencies or plasmonic behavior at optical frequencies, and $\varphi$ is the additional potential due to charges on the wires. This relation can be obtained by integrating the microscopic electric field over the path shown in Fig.~\ref{fig1}. The path goes first along the surface of a wire, then to the middle line between a pair of neighboring wires, then along this middle line, and, finally, back to the surface of the wire.
The circulation of the microscopic electric field $\_E(x,z)$ over this path reads \begin{multline} \oint\_E\.\_{dl} = \int\limits_{z}^{z+\Delta z}\!\!E_z(r_0, z')\,dz' -\!\!\int\limits_{z}^{z+\Delta z}\!\!E_z(a/2, z')\,dz' +\int\limits_{r_0}^{a/2}E_x(x,z+\Delta z)\,dx -\int\limits_{r_0}^{a/2}E_x(x,z)\,dx. \l{circint} \end{multline} The first integral in this relation represents the voltage drop along the surface of the wire and, therefore, can be expressed in terms of the wire current and the wire self-impedance per unit length. The second integral is the voltage drop along the symmetry line shown in Fig.~\ref{fig1}. In the same manner as it was done in Ref.~\onlinecite{Maslovski_quasistatic_MOTL_2002} we relate this voltage drop to the macroscopic electric field in the medium. After doing this the circulation of the electric field reads (when $\Delta z$ is small enough) \begin{equation} \l{circulation} \oint\_E\.\_{dl} = (Z_wI_z - \langle E_z\rangle)\Delta z +\varphi(z + \Delta z) - \varphi(z), \quad\mbox{where } \varphi(z) = \int\limits_{r_0}^{a/2}\!E_x(x,z)\, dx. \end{equation} The electric field circulation equals minus the time derivative of the magnetic flux that penetrates the area bounded by the integration path: $\oint \_E\.\_{dl} = -j\omega\Phi = -j\omega L I_z \Delta z$, from which we immediately get \r{main_uni} when $\Delta z\rightarrow 0$. In general, the effective inductance $L$ depends on the specific microstructure of the system (e.g. if the wires are coated or not with some material). In the particular case in which the wires are conducting cylinders (with no material coating), it was shown,\cite{Maslovski_quasistatic_MOTL_2002} by calculating the magnetic flux of a pair of neighboring wires in the quasi-static approximation, that $L$ verifies: \begin{equation} L = {\mu_0\over 2\pi}\log{a^2\over 4r_0(a-r_0)}.
\l{inductance}\end{equation} It may be verified that the above formula also applies to the case where the wires are loaded with metallic patches (Fig. \ref{fig1}, right). The additional potential caused by the charges on the wires can be found by placing a linear charge density $\rho$ on the wires and by calculating the corresponding electrostatic potential $\varphi$ created by the fluctuating part of the microscopic electric field. Thus $\rho$ is responsible for the electric field component orthogonal to the wires. We introduce an effective capacitance $C$ per unit length, such that it verifies: \begin{equation} \varphi(z) = {\rho(z)\over C}. \l{defcapacitance} \end{equation} Notice that the considered capacitance is calculated by placing an \emph{identical} linear charge density over the wires (unlike the traditional definition of capacitance, which assumes that the charge densities on the two conductors are antisymmetric). In the same manner as the inductance, the capacitance depends on the microstructure of the system. In the quasi-static limit a pair of charged wires (with no attached conducting bodies) induces the field (see Fig.~\ref{fig1}) \begin{equation} E_x = {\rho\over 2\pi\varepsilon_0}\left[{1\over x} - {1\over a-x}\right]. \end{equation} This expression has the same form as the one used in Ref.~\onlinecite{Maslovski_quasistatic_MOTL_2002} for the quasi-static magnetic field of a pair of lines of current. Therefore, for this particular case the capacitance is given by \begin{equation} {1\over C} = {1\over 2\pi\varepsilon_0}\log{a^2\over 4r_0(a-r_0)}. \l{capacitance} \end{equation} The capacitance for a system of wires loaded with conducting patches (Fig.~\ref{fig1}, right) is calculated in Appendix A.
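As a quick sanity check of Eq.~\r{capacitance}, the potential $\varphi$ obtained by numerically integrating the quasi-static field $E_x$ from the wire surface to the symmetry line can be compared with the closed form. A minimal Python sketch (the geometry values are illustrative and not taken from the paper):

```python
import numpy as np

eps0 = 8.854187817e-12  # vacuum permittivity, F/m

# Illustrative geometry: lattice period a and wire radius r0 (arbitrary units)
a, r0 = 1.0, 0.05
rho = 1.0  # unit linear charge density

# Quasi-static field of a pair of charged wires, integrated from x = r0 to x = a/2
x = np.linspace(r0, a / 2, 200001)
Ex = rho / (2 * np.pi * eps0) * (1.0 / x - 1.0 / (a - x))
phi_numeric = np.sum(0.5 * (Ex[:-1] + Ex[1:]) * np.diff(x))  # trapezoidal rule

# Closed form: phi = rho / C with 1/C = log(a^2 / (4 r0 (a - r0))) / (2 pi eps0)
phi_closed = rho * np.log(a**2 / (4 * r0 * (a - r0))) / (2 * np.pi * eps0)

print(phi_numeric / phi_closed)  # ~1 to within the quadrature error
```

The antiderivative of $1/x - 1/(a-x)$ is $\log x + \log(a-x)$, which evaluated between $r_0$ and $a/2$ reproduces the logarithm in Eq.~\r{capacitance}.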
Considering now a monochromatic plane wave of current excited in the crystal, the currents in the wires can be written in the form \begin{equation} I_z(z) = I_0 e^{-j k_z z}, \end{equation} and thus the linear density of the charge associated with the currents verifies \begin{equation} \rho(z) = -{1\over j\omega} {dI_z(z)\over dz} = {k_z\over\omega}I_z(z). \end{equation} These charges are responsible for the electric field component orthogonal to the wires. Hence, the relation \r{main_uni} can be rewritten in terms of the effective inductance and of the effective capacitance per unit length of the wire as \begin{equation} \langle E_z\rangle = \left(j\omega L + Z_w + {k_z^2\over j\omega C}\right)I_z. \end{equation} Already in this expression one can identify the spatial dispersion term proportional to the square of the $z$-component of the wave vector. The macroscopic polarization current in wire media is the average of the currents in separate wires. Let $A_{\rm cell}$ be the average area in the $xy$ plane per one wire of the crystal. Then the macroscopic polarization current is $J_z = I_z/A_{\rm cell}$. The macroscopic displacement field is $D_z = \varepsilon_0 \left\langle {E_z } \right\rangle + J_z/(j\omega)$. Therefore, after some algebra we find that the longitudinal component of the permittivity dyadic is given by \begin{equation} {\varepsilon_{zz}\over \varepsilon_0} = 1 - {k_{\rm p}^2\over k_0^2 - j\xi k_0 - k_z^2 / n^2}, \l{uniaxial_permittivity} \end{equation} where $k_{\rm p}^2 = \mu_0/(A_{\rm cell}L)$, $n^2 = LC/(\varepsilon_0\mu_0)$, $\xi = (Z_w/L)\sqrt{\varepsilon_0\mu_0}$. It may be easily checked that the above formula reduces to Eq.~\r{epsilon} in the case of perfectly conducting straight wires ($Z_w=0$) [also, for unloaded wires $n = 1$ as is seen from Eqs.~\r{inductance} and \r{capacitance}]. 
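The reduction noted above is easy to verify numerically. The following Python sketch (illustrative geometry, lossless unloaded wires with $Z_w = 0$) builds $L$, $C$, $k_{\rm p}$, and $n$ from Eqs.~\r{inductance} and \r{capacitance} and confirms that $n = 1$, so that $\varepsilon_{zz}$ from Eq.~\r{uniaxial_permittivity} coincides with the $zz$-component of Eq.~\r{epsilon}:

```python
import numpy as np

mu0 = 4e-7 * np.pi            # vacuum permeability, H/m
eps0 = 8.854187817e-12        # vacuum permittivity, F/m

# Illustrative square lattice: period a, wire radius r0, one wire per unit cell
a, r0 = 1.0e-2, 0.05e-2
A_cell = a**2

log_term = np.log(a**2 / (4 * r0 * (a - r0)))
L = mu0 / (2 * np.pi) * log_term   # Eq. (inductance)
C = 2 * np.pi * eps0 / log_term    # Eq. (capacitance)

n2 = L * C / (eps0 * mu0)          # slow-wave factor squared; equals 1 for unloaded wires
kp = np.sqrt(mu0 / (A_cell * L))   # plasma wavenumber

def eps_zz(k0, kz):
    """Longitudinal permittivity, Eq. (uniaxial_permittivity), lossless case (xi = 0)."""
    return 1.0 - kp**2 / (k0**2 - kz**2 / n2)

# With n = 1 this is exactly the zz-component of Eq. (epsilon)
print(n2, eps_zz(2.0 * kp, 0.5 * kp))
```

The logarithms in $L$ and $1/C$ cancel in the product, so $LC = \varepsilon_0\mu_0$ exactly for unloaded straight wires.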
More generally, when the wires are characterized by the complex permittivity $\varepsilon_0 \varepsilon_{\rm m}$ (e.g., thin plasmonic rods at optical frequencies), the impedance $Z_w$ is given by \begin{equation} Z_w = \frac{1}{j\omega \pi r_0^2\varepsilon_0(\varepsilon_{\rm m} - 1)}, \l{Zw}\end{equation} where $r_0$ is the radius of the rods. It may be easily verified that in this scenario Eq. \r{uniaxial_permittivity} reduces to formula (16) of Ref.~\onlinecite{Silveirinha_ENG_Plasmonic_2006}, which was calculated using a local field based approach. Thus, Eq. \r{uniaxial_permittivity} generalizes the previous homogenization models of the uniaxial wire medium. Nevertheless, it is worth noting that the expression for the plasma wavenumber obtained in the present paper differs from the one derived in previous works.~\cite{Belov_wiremedium_JEWA_2002, Silveirinha_ENG_Plasmonic_2006} Namely, under the approach developed above we have \begin{equation} (k_{\rm p}a)^2 = {2\pi \over \log{a^2\over 4r_0(a - r_0)}}. \l{myplasm} \end{equation} In Refs.~\onlinecite{Belov_wiremedium_JEWA_2002, Silveirinha_ENG_Plasmonic_2006}, under a thin wire approximation, it was obtained that \begin{equation} (k_{\rm p}a)^2 \approx {2\pi \over 0.5275 + \log{a\over 2\pi r_0}}. \l{belovplasm} \end{equation} One can notice that \r{belovplasm} gives unphysical results for any ${r_0/a} \ge (2\pi)^{-1}\exp(0.5275) \approx 0.27$. In contrast, Eq.~\r{myplasm} gives a physically sound result in the limit $r_0 \rightarrow a/2$ when the surfaces of two wires touch: It predicts an infinite growth in the magnitude of $k_{\rm p}$ in this limit. It can also be checked numerically that the accuracy of \r{myplasm} is better than that of \r{belovplasm} when $r_0 \approx 0.1a$ or larger, whereas the opposite behavior is observed for $r_0 < 0.05a$. Nevertheless, both formulas have the same asymptotic behavior when $r_0\rightarrow 0$.
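The behavior of the two estimates can be illustrated with a short Python sketch (written for this comparison, not taken from the paper):

```python
import numpy as np

def kpa2_quasistatic(r):
    """(k_p a)^2 from Eq. (myplasm); r = r0/a."""
    return 2 * np.pi / np.log(1.0 / (4 * r * (1 - r)))

def kpa2_thinwire(r):
    """(k_p a)^2 from Eq. (belovplasm), thin-wire approximation; r = r0/a."""
    return 2 * np.pi / (0.5275 + np.log(1.0 / (2 * np.pi * r)))

# Same asymptotic behavior for very thin wires
print(kpa2_quasistatic(1e-4), kpa2_thinwire(1e-4))

# The thin-wire formula loses meaning (its denominator becomes <= 0) above
# r0/a = exp(0.5275)/(2*pi) ~ 0.27 ...
r_crit = np.exp(0.5275) / (2 * np.pi)
print(r_crit, kpa2_thinwire(0.3) < 0)

# ... while Eq. (myplasm) diverges only as the wire surfaces touch, r0 -> a/2
print(kpa2_quasistatic(0.49) > 100 * kpa2_quasistatic(0.1))
```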
At $r_0/a = 0.05$ (this ratio has been used in our numerical simulations that are discussed in Section~\ref{supression}) the formulas \r{myplasm} and \r{belovplasm} overestimate the plasma frequency by about 3\%. Another often-cited asymptotic expression for the normalized plasma frequency was obtained in Refs.~\onlinecite{Pendry_plasmons_PRL_1996,Pendry_plasmons_JPCM_1998}, but even for rather small wire radii its accuracy is worse than that of \r{myplasm} and \r{belovplasm}. Also, it does not predict the infinite growth of $k_{\rm p}$ when $r_0\rightarrow a/2$. It should be emphasized that Eq.~\r{uniaxial_permittivity} is in principle valid for a wide class of wire media (e.g. wires with attached conducting bodies). The parameters $C$ and $L$ depend on the specific microstructure of the system. The magnitude of the spatial dispersion term $k_z^2/n^2$ in \r{uniaxial_permittivity} can be reduced by increasing the value of $n = \sqrt{LC/(\varepsilon_0\mu_0)}$. This quantity has the meaning of a slow-wave factor for quasi-TEM waves propagating along the wires. As mentioned before, for unloaded straight wires $n = 1$. As discussed in Ref.~\onlinecite{Demetriadou_taming_JPCM_2008}, the capacitance $C$ can be increased by loading the wires with metallic patches, and the inductance $L$ can be increased by placing the wires in ferromagnetic shields. An alternative way to increase the inductance is to use helices instead of straight wires. The associated bi-anisotropy in a helix medium can be compensated if both right- and left-handed helices are used. Attaching metallic or dielectric bodies to the wires also changes the transversal components of the permittivity dyadic. We will study this effect in more detail in Section~\ref{sect_uniax_supr}. \section{Wire mesh} \label{mesh} The (3D) wire mesh is a wire crystal formed by three mutually orthogonal sets of wires joined at the intersection points.
The electromagnetics of such a metamaterial has been studied in several recent works. \cite{Hudlicka_WM3D_PIER_2006,Shapiro_WM3D_OL_2006,Silveirinha_3dconnected_I3EMTT_2005,Silveirinha_crosswires_PRB_2009} In the following derivation we assume a cubic lattice, but after a straightforward generalization the same method can be applied to structures of more complex geometries. Similar to the case studied in Section \ref{SecWires}, metallic or dielectric bodies may be attached to the wires. In the wire mesh we get three components of the polarization current related to the currents in the three orthogonal sets of wires. The currents in the wires are related to the average electric field in the medium in a manner similar to the uniaxial case: \begin{eqnarray} \l{wiremesh1} \left\langle {E_x } \right\rangle &=& (j\omega L + Z_w) I_x + {\partial\varphi\over \partial x},\\ \left\langle {E_y } \right\rangle &=& (j\omega L + Z_w) I_y + {\partial\varphi\over \partial y},\\ \l{wiremesh3} \left\langle {E_z } \right\rangle &=& (j\omega L + Z_w) I_z + {\partial\varphi\over \partial z}. \end{eqnarray} Because the wires are joined at the crossing points, they are locally at the same potential; that is why the same $\varphi$ appears in all three equations. But the currents in the three sets of wires can differ, and this is taken into account by the variables $I_x$, $I_y$, and $I_z$. Let us consider a unit cell of the wire mesh with three intersecting connected wires. The total charge $q$ accumulated on these three wires per unit cell can be found as \begin{equation} q = -{a\over j\omega}\left({dI_x\over dx} + {dI_y\over dy} + {dI_z\over dz}\right). \end{equation} Because the wires are electrically connected and their effective capacitance per unit length is the same, this charge is equally distributed among the three wires in the unit cell.
Therefore, for the linear charge densities on the wires we have, in the vicinity of the unit cell, \begin{equation} \rho_x = \rho_y = \rho_z = {q\over 3a} = -{1\over 3j\omega}\left({dI_x\over dx} + {dI_y\over dy} + {dI_z\over dz}\right). \end{equation} Using the same notation for the effective capacitance of a wire as above we can write the potential $\varphi$ as \begin{equation} \varphi = -{1\over 3j\omega C}\left({dI_x\over dx} + {dI_y\over dy} + {dI_z\over dz}\right) = {1\over 3\omega C}\left(k_x I_x + k_y I_y + k_z I_z\right), \end{equation} where we have taken into account that the currents on the wires change on average as \begin{equation} I_{n}=I_{n}^0 e^{-jk_{n} n}, \quad n=x,y,z. \end{equation} Now we can substitute this expression for the additional potential into \r{wiremesh1}--\r{wiremesh3}. Doing so, we obtain the following system of equations: \begin{eqnarray} \left\langle {E_x } \right\rangle &=& (j\omega L + Z_w + {k_x^2\over 3j\omega C}) I_x + {k_x\over 3j\omega C}(k_yI_y + k_zI_z),\\ \left\langle {E_y } \right\rangle &=& (j\omega L + Z_w + {k_y^2\over 3j\omega C}) I_y + {k_y\over 3j\omega C}(k_zI_z + k_xI_x),\\ \left\langle {E_z } \right\rangle &=& (j\omega L + Z_w + {k_z^2\over 3j\omega C}) I_z + {k_z\over 3j\omega C}(k_xI_x + k_yI_y). \end{eqnarray} By introducing a vector of currents $\_I = I_x\_x_0+I_y\_y_0+I_z\_z_0$ we rewrite this system in a more compact form using dyadics: \begin{equation} \left\langle { \_E } \right\rangle = \left[(j\omega L +Z_w)\=I + {\_k\_k\over 3j\omega C}\right]\.\_I, \l{electric_field} \end{equation} where $\=I$ is the unit dyadic and ${\bf{kk}} \equiv {\bf{k}} \otimes {\bf{k}}$ is the dyadic (tensor) product of two vectors. Now it is only a matter of inverting the dyadic in brackets in \r{electric_field} to get the permittivity dyadic of the wire mesh.
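The inversion can be done in closed form with the Sherman--Morrison identity, $(\alpha\=I + \beta\,\_k\_k)^{-1} = \=I/\alpha - \beta\,\_k\_k/[\alpha(\alpha + \beta k^2)]$, with $\alpha = j\omega L + Z_w$ and $\beta = 1/(3j\omega C)$. A small numerical sketch of this step (the values of $\alpha$, $\beta$, and $\_k$ are arbitrary illustrative choices):

```python
import numpy as np

# Arbitrary illustrative values standing in for alpha = j*w*L + Z_w, beta = 1/(3*j*w*C)
alpha = 0.1 + 2.0j
beta = -0.7j
k = np.array([0.3, -0.5, 0.8])

# The bracketed dyadic of Eq. (electric_field): alpha*I + beta*kk
A = alpha * np.eye(3) + beta * np.outer(k, k)

# Sherman-Morrison: (alpha I + beta kk)^{-1} = I/alpha - beta kk / (alpha (alpha + beta k^2))
k2 = k @ k
A_inv = np.eye(3) / alpha - beta * np.outer(k, k) / (alpha * (alpha + beta * k2))

print(np.allclose(A_inv, np.linalg.inv(A)))  # True
```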
The average polarization in the medium is $\_P = \_I/(j\omega A_{\rm cell}) + \_P_{\rm t}$, where $\_P_{\rm t}$ accounts for additional polarization due to finite thickness of the wires or metallic bodies attached to the wires. For a crystal of cubic symmetry we can write $\_P_{\rm t} = \varepsilon_0(\varepsilon_{\rm t}-1) \left\langle { \_E } \right\rangle$, therefore the displacement vector $\_D = \varepsilon_0\varepsilon_{\rm t}\langle \_E\rangle + \_I/(j\omega A_{\rm cell})$, and \begin{equation} {\=\varepsilon(\omega,\_k)\over \varepsilon_0} = \varepsilon_{\rm t}\=I + {1\over j\omega\varepsilon_0 A_{\rm cell}}\left[(j\omega L +Z_w)\=I + {\_k\_k\over 3j\omega C}\right]^{-1}, \end{equation} or, after some dyadic algebra, \begin{equation} {\=\varepsilon(\omega,\_k)\over \varepsilon_0} = \left(\varepsilon_{\rm t} - {k_{\rm p}^2\over k_0^2 - j\xi k_0}\right)\=I - {k_{\rm p}^2\,\_k\_k\over 3n^2[k_0^2-j\xi k_0][k_0^2-j\xi k_0 - k^2/(3n^2)]}, \end{equation} where we use the same notations as in \r{uniaxial_permittivity}, and $k^2 = k_x^2+k_y^2+k_z^2$. The obtained permittivity dyadic can be also rewritten as \begin{equation} {\=\varepsilon(\omega,\_k)\over \varepsilon_0} = \varepsilon_{\rm tr}(\omega)\left(\=I - {\_k\_k\over k^2}\right)+ \varepsilon_{\rm lo}(\omega,k){\_k\_k\over k^2}, \end{equation} where \begin{eqnarray} \l{eps_t} \varepsilon_{\rm tr}(\omega) &=& \varepsilon_{\rm t} - {k_{\rm p}^2\over k_0^2-j\xi k_0},\\ \l{eps_l} \varepsilon_{\rm lo}(\omega,k) &=& \varepsilon_{\rm t} - {k_{\rm p}^2\over k_0^2-j\xi k_0 - k^2/(3n^2)}. \end{eqnarray} It can be verified that for the mesh of thin plasmonic rods without loading [for which $Z_w$ is given by Eq. \r{Zw}], the relations \r{eps_t}--\r{eps_l} transform to the ones presented in Ref.~\onlinecite{Silveirinha_crosswires_PRB_2009} with the parameters $\varepsilon_{\rm t} = 1$, $k_{\rm p} = \beta_{\rm p}$, and identifying the numerical coefficient $l_0$ from the same reference with $l_0 = 3n^2$. 
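As a consistency check, one can assemble the dyadic from the transverse and longitudinal permittivities and verify that $\_k$ is an eigenvector with the longitudinal eigenvalue, while directions orthogonal to $\_k$ see the transverse permittivity. A short Python sketch (lossless case, $\xi = 0$, with illustrative parameter values):

```python
import numpy as np

# Illustrative lossless parameters in units where k_p = 1
eps_t, n2 = 1.0, 1.0
kp = 1.0
k0 = 1.8
k = np.array([0.4, 0.2, 0.7])
k2 = k @ k

eps_tr = eps_t - kp**2 / k0**2                     # Eq. (eps_t), xi = 0
eps_lo = eps_t - kp**2 / (k0**2 - k2 / (3 * n2))   # Eq. (eps_l), xi = 0

# Assemble eps_tr (I - kk/k^2) + eps_lo kk/k^2
kk = np.outer(k, k) / k2
eps_dyadic = eps_tr * (np.eye(3) - kk) + eps_lo * kk

# k is an eigenvector with the longitudinal eigenvalue ...
print(np.allclose(eps_dyadic @ k, eps_lo * k))
# ... and any direction orthogonal to k sees the transverse permittivity
t = np.cross(k, np.array([1.0, 0.0, 0.0]))
print(np.allclose(eps_dyadic @ t, eps_tr * t))
```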
\section{Uniaxial wire medium loaded with patches and suppression of spatial dispersion \label{sect_uniax_supr}} \label{supression} Recently\cite{Demetriadou_taming_JPCM_2008} it was proposed to load the wire mesh with metal patches to increase the effective capacitance of the wires per unit length and decrease the related spatial dispersion effects. This proposal was supported by numerical simulations. Here, we will apply our general analytical model to the particular case of a uniaxial wire medium loaded with metal patches. For this purpose we just need to determine the effective capacitance $C$, introduced in Section~\ref{uniaxial}, in the presence of patches. The details of the calculation of this capacitance are described in Appendix A. Here we give the result: $C = C_{\rm wire} + C_{\rm patch}$, where $C_{\rm wire}$ is the wire capacitance given by \r{capacitance} and \begin{equation} C_{\rm patch} = {2\pi\varepsilon_0w\over h\log\left(\sec{\pi d\over 2a}\right)}, \l{cpatch} \end{equation} where $w$ is the width of the square patches periodically attached to the wires and separated by the distance $h$ along $z$, and $d = a - w$ is the gap between two adjacent patches on a pair of neighboring wires. Thus, the permittivity dyadic of the uniaxial wire medium loaded with patches is given by \begin{equation} {\=\varepsilon\over \varepsilon_0} = \varepsilon_{\rm t}\=I_{\rm t}+\left(1 - {k_{\rm p}^2\over k_0^2 - j\xi k_0 - k_z^2 / n^2}\right)\_z_0\_z_0, \l{unipatchperm} \end{equation} where we keep the same notations as in Section~\ref{uniaxial}. The transverse permittivity $\varepsilon_{\rm t}$ is mostly determined by the patches when $w\gg r_0$, and it can be found as the permittivity of a stack of capacitive grids separated by $h$ one from another. With the help of the known theory of such grids~\cite{Tretyakov_modelling_2003} it can be found that \begin{equation} \varepsilon_{\rm t} = 1 + {2w\over \pi h}\log\left(\csc{\pi d\over 2a}\right).
\l{epst} \end{equation} The accuracy of \r{cpatch} and \r{epst} is better for small gaps and for large values of $h/a$. In the limit $d\rightarrow 0$ the effective capacitance behaves as $C \approx {16\varepsilon_0wa^2\over \pi h d^2}$ and, therefore, can be arbitrarily large if the gap between two adjacent patches is made small enough. On the other hand, the transverse permittivity $\varepsilon_{\rm t}$ grows in the same limit as $\varepsilon_{\rm t}\approx {2w\over \pi h}\log\left({2a\over\pi d}\right)$. The square of the slow-wave factor $n^2$ is proportional to the effective capacitance; therefore, by increasing the width of the patches one can discard the spatial dispersion term in the right-hand side of \r{unipatchperm} while keeping $\varepsilon_{\rm t}$ at a reasonable level (this is possible because $\varepsilon_{\rm t}$ grows more slowly when $d\rightarrow 0$). An explicit expression for the slow-wave factor in this limit reads \begin{equation} n^2 = {LC\over\varepsilon_0\mu_0} \approx 1 + {16 w\over \pi h (k_{\rm p}d)^2}. \end{equation} In fact, we have numerically checked that this simple expression works quite well for gaps of width $d \le 0.2a$. To illustrate the suppression of the spatial dispersion in the considered wire media, we have calculated the dispersion diagrams for several configurations using our quasi-static model, the transfer matrix method described in Appendix B, and the eigenmode solver of CST Microwave Studio. The structure was assumed lossless in the simulations (all metallic components are perfectly conducting, so that $Z_w=0$). The transfer matrix formalism developed in Appendix B is based on the assumption that in between two patch grids the electric field is a superposition of TEM and TM modes.
\cite{Belov_dispersion_PRB_2003} The fields on the interfaces of each patch grid are linked by a grid impedance and by an additional boundary condition,\cite{ABCtilted} consistent with the formalism described in Refs.~\onlinecite{Olli_Mushrooms_MTT_2009, Alex_Mushrooms_MTT_2009}. The obtained results are presented in Fig.~\ref{model_vs_cst}. \begin{figure}[htb] \centering \epsfig{file=fig2.eps,width=\textwidth} \caption{\label{model_vs_cst} (Color online) Dispersion diagrams for a uniaxial wire medium loaded with patches obtained using two analytical models and numerical simulations for different propagation angles $\alpha$ with respect to the $z$-axis. Only the branches associated with the quasi-TEM and TM modes are shown. Panels (a) and (c): quasi-static model vs. numerical simulations: (a) $w = 0.5a$, (c) $w = 0.9a$. Panels (b) and (d): transfer matrix model vs. numerical simulations: (b) $w = 0.5a$, (d) $w = 0.9a$. On all 4 panels the solid lines represent the analytical results and the symbols correspond to the results of numerical simulations; the values of the propagation angles are coded in color: $\alpha=0$: blue lines and circles; $\alpha=30^\circ$: magenta lines and triangles; $\alpha=60^\circ$: red lines and crosses. The other parameters in all 4 cases: $r_0 = 0.05a$, $h = a$.} \end{figure} In Fig.~\ref{model_vs_cst}(a) and Fig.~\ref{model_vs_cst}(c) the dispersion diagrams obtained from the quasi-static model and the numerical simulations are shown for a set of the propagation angles with respect to the axis of the structure: $\alpha = 0, 30^\circ, 60^\circ$ [for the other parameters of the structure refer to Fig.~\ref{fig1}; in these plots the wave vector is ${\bf{k}} = k\left( {\sin \alpha \,{\bf{x}}_0 + \cos \alpha \,{\bf{z}}_0 } \right)$]. The dispersion curves predicted by the quasi-static model are depicted with solid lines while the results of the numerical simulations are represented by symbols. 
In the example of Fig.~\ref{model_vs_cst}(a) the patch width has been set equal to $w = 0.5 a$, while in Fig.~\ref{model_vs_cst}(c) the patch width is $w = 0.9a$. In both cases the theory and the simulations predict the existence of two dispersion branches associated with extraordinary waves, i.e., with the quasi-TEM and TM modes, as well as a branch associated with the ordinary (TE) wave, which is not depicted in Fig.~\ref{model_vs_cst} (there are also higher-order modes at higher frequencies, but we are not interested in them here). We call the high-frequency branch ``the plasmon mode'' because for $\alpha = 0$ this branch corresponds to the longitudinal plasmon-type wave propagating along the axis of the structure. On the other hand, the low-frequency branch for $\alpha = 0$ belongs to an ordinary transverse wave which is not affected by the wires (but it is affected by the transverse permittivity $\varepsilon_{\rm t}$ of the medium). From Fig.~\ref{model_vs_cst}(a) one can see that for moderate-size patches the quasi-static model works surprisingly well even when $ka$ approaches $\pi$. The small difference in the frequencies of the plasmon-type modes predicted by the theory and the simulations at $ka = 0$ is due to the asymptotic nature of the formula for the plasma wavenumber that we use (see the discussion in Section~\ref{uniaxial}). For larger patches (Fig.~\ref{model_vs_cst}(c)) the quasi-static model does not predict the appearance of a band gap at $\alpha = 0$ and $ka = \pi$. This is expected since in the model the capacitive loading on the wires is assumed to be effectively uniform along the wires. Fig.~\ref{model_vs_cst}(b) and Fig.~\ref{model_vs_cst}(d) display the same dispersion diagrams but with the quasi-static model replaced by the transfer matrix model described in Appendix B.
One can see that this model wrongly predicts a completely flat dispersion for the plasmon mode propagating along the $z$ axis ($\alpha = 0$), independently of the patch size. This is in disagreement with the numerical simulations, as is seen from Fig.~\ref{model_vs_cst}(b). Indeed, the formalism developed in Refs.~\onlinecite{Olli_Mushrooms_MTT_2009, Alex_Mushrooms_MTT_2009} is only valid when the gap between the patches is small, because otherwise higher-order modes can be excited near the connections of the wires to the patch grid, and in such conditions it is not possible to consider that the microscopic field in the vicinity of the connection points is a superposition of TM and TEM modes of the unloaded wire medium, as assumed in Refs.~\onlinecite{Olli_Mushrooms_MTT_2009, Alex_Mushrooms_MTT_2009}. Consistent with this observation, it is seen in Fig.~\ref{model_vs_cst}(d) that for larger patches and (or) larger angles of propagation the disagreement is less pronounced. Another characteristic feature of the transfer matrix model is that it is able to predict the existence of the above-mentioned band gap. This is because the transfer matrix model takes into account the granularity of the structure along the $z$ axis. \begin{figure}[htb] \centering \epsfig{file=fig3.eps,width=0.5\textwidth} \caption{\label{slow_wave_factor} (Color online) The square of the slow-wave factor as a function of $a/d$ (logarithmic scale). The lines represent the result of the quasi-static model, the symbols correspond to the values of $n^2$ extracted from the numerical simulations. Blue dotted line and crosses: $r_0 = 0.05a$, $h = a/3$; red solid line and circles: $r_0 = 0.05a$, $h = a$.} \end{figure} The suppression of the spatial dispersion effects is evident if we compare Fig.~\ref{model_vs_cst}(a) with Fig.~\ref{model_vs_cst}(c).
Indeed, the latter case corresponds to a larger patch width ($w = 0.9a$), and consequently the slope of the dispersion curve associated with the longitudinal mode (the plasmon mode at $\alpha = 0$) is very small. To verify this effect and also to check the accuracy of the quasi-static model near the origin of the Brillouin zone for a wide range of values of the gap, we have extracted the values of the slow-wave factor $n$ from the results of the numerical simulations slightly above the point $ka = 0$ and compared them with the value of $n$ given by the analytical model. The results of this extraction are presented in Fig.~\ref{slow_wave_factor}. From this figure we see that despite its simplicity, the quasi-static model predicts very well the trend in the growth of $n^2$ when the gap between the patches decreases. The agreement tends to improve for larger values of $h/a$. \section{Conclusions} In this paper we have developed a quasi-static analytical model of wire media applicable to a wide class of structures, and in particular we have considered uniaxial and isotropic wire crystals, which may be loaded with metallic patches. Because the developed model is defined in simple physical terms of the effective inductance and capacitance per unit length of a wire, it can be readily extended to other wire structures of more complex geometries. The model accounts for the finite conductivity of the wires, so that it can be applied when the metallic wires become plasmonic (consistent with the results reported in Refs.~\onlinecite{Silveirinha_crosswires_PRB_2009, Silveirinha_ENG_Plasmonic_2006}) or when the wires are uniformly loaded with arbitrary complex impedances. In particular, we have studied in detail the electrodynamics of uniaxial wire media loaded with patches, and demonstrated with full-wave simulations that the proposed quasi-static model accurately describes the properties of the system in the long-wavelength limit.
Consistent with the analysis of Refs.~\onlinecite{Demetriadou_taming_JPCM_2008, Olli_Mushrooms_MTT_2009, Alex_Mushrooms_MTT_2009}, it was shown that the presence of the patches may result in a dramatic reduction of the nonlocal effects. For the case of unloaded wire media, we have demonstrated that the quasi-static model yields the same expressions for the dielectric permittivity tensors as those obtained by much more sophisticated methods.\cite{Shvets_wires_PSPIE_2003, Efros_WM_PRB_2002, Silveirinha_ENG_Plasmonic_2006, Silveirinha_3dconnected_I3EMTT_2005, Constantin_WM, Silveirinha_crosswires_PRB_2009} Thus, we have proven that the strong spatial dispersion in wire media is a quasi-static effect. Although this fact has already been noticed,~\cite{Maslovski_disser_2004} the presented research extends the results obtained in Ref.~\onlinecite{Maslovski_disser_2004} and allows for analytical and quantitative studies of the possibilities to control the spatial dispersion in wire media. \begin{acknowledgments} This work is supported in part by Funda\c{c}\~ao para a Ci\^encia e a Tecnologia under project PDTC/EEA-TEL/71819/2006. \end{acknowledgments}
\section{Introduction} As pointed out in many introductory references devoted to quantum mechanics and quantum field theory, the natural appearance of noncommutativity in string theories has increasingly led to attempts to study physical problems in noncommutative spaces \cite{doplicher}-\cite{douglas}. Although noncommuting coordinates are operators even at the classical level, one can treat them as commuting by replacing operator products by $*$-products \cite{omer-jellal1}. This approach allows one to generalize classical as well as quantum mechanics without altering their main physical interpretations and to recover the usual results when the noncommutativity is switched off. In some recent works, the quantum Hall system has attracted considerable attention from the point of view of noncommutative quantum mechanics and quantum field theory (see e.g. \cite{horvathy-duval}, \cite{horvathy}, \cite{omer-jellal}, \cite{pasquier}), as it is probably the simplest physical realization of a noncommutative spatial geometry. Note that a noncommutative model valid for a constant magnetic field, with respect to the geometrical aspect of the problem, has also been investigated; for more details, see \cite{nair-polychronakos}. The description of such a system \cite{goerbig, lederer} is adequately provided by the well-known Landau model. The latter describes the motion of an electron in a static uniform magnetic field, studied for the first time by Landau \cite{landau}, which in $2D$ can be assimilated to a harmonic oscillator. Since this discovery, the quantum states of a particle in magnetic and electromagnetic fields on the noncommutative plane have been attracting considerable attention, see for instance \cite{horvathy, omer-jellal, jellal, gamboa-loewe-mendez-rojas, geloun-jan-hounkonnou, dulat-li, alvarez, ben-sunandan-scholtz} and more recently \cite{zhang-horvathy} (and references listed therein).
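A minimal illustration of this correspondence, in the two-dimensional case relevant below, is the Moyal $*$-product, through which the operator algebra $[\hat x, \hat y] = \imath \theta$ is realized on ordinary commuting variables: \begin{eqnarray} (f * g)(x, y) = f(x, y)\, e^{\frac{\imath \theta}{2}\left(\overleftarrow{\partial}_{x}\overrightarrow{\partial}_{y} - \overleftarrow{\partial}_{y}\overrightarrow{\partial}_{x}\right)}\, g(x, y), \qquad x * y - y * x = \imath \theta, \end{eqnarray} so that the commutator of the noncommuting coordinates is reproduced while $x$ and $y$ remain ordinary functions.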
In \cite{jan-scholtz}, the thermodynamics of an ideal fermion gas in a noncommutative well in two dimensions \cite{scholtz-chakraborty-jan-vaidya} has been investigated. The authors have shown that the thermodynamical properties of the fermion gas in the commutative and noncommutative cases agree at low densities, while at high densities they start diverging strongly due to the excluded area implied by the noncommutativity. In \cite{gazeau-hsiao-jellal}, the possible occurrence of orbital magnetism for two-dimensional electrons confined by a harmonic potential \cite{ishikawa-fukuyama} in various regimes of temperature and magnetic field has been studied. Standard coherent states (CS) have been used for calculating symbols of the various observables involved, such as the thermodynamical potential, the magnetic moment or the spatial distribution of the current. In \cite{jellal}, an analogous treatment in a noncommutative framework has been achieved and the results of \cite{gazeau-hsiao-jellal} in the commutative case have been recovered by switching off the $\theta$-parameter. In the noncommutative quantum mechanics formulation, a major role is played by the CS on the quantum Hilbert space denoted by $\mathcal H_{q}$ (the space of Hilbert-Schmidt operators on the classical configuration space denoted by $\mathcal H_{c}$), which are expressed in terms of a projection operator on the usual Glauber-Klauder-Sudarshan CS in the classical configuration space. Based on the approach developed in \cite{gouba-scholtz}, Gazeau-Klauder CS have been constructed in noncommutative quantum mechanics \cite{ben-scholtz}. These states share similar properties with those of ordinary canonical CS in the sense that they saturate the related position uncertainty relation, obey a Poisson distribution and possess a flat geometry.
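For later reference, we recall the standard Glauber-Klauder-Sudarshan CS on a boson Fock space, in terms of which the above constructions are phrased: \begin{eqnarray} |z\rangle = e^{-\frac{|z|^{2}}{2}} \sum_{n = 0}^{\infty} \frac{z^{n}}{\sqrt{n !}}|n\rangle, \qquad a|z\rangle = z|z\rangle, \qquad \frac{1}{\pi}\int_{\mathbb C} d^{2}z\, |z\rangle \langle z| = 1 \! \! {\rm I}, \end{eqnarray} which saturate the uncertainty relation and yield a Poisson distribution for the number operator, the two properties quoted above for their noncommutative counterparts.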
This work deals with the Landau problem, particularly the study of the electron motion in an external uniform electromagnetic field coupled with a harmonic potential in a two-dimensional noncommutative space. The thermodynamics of this physical system is investigated, not along the same lines as in \cite{jan-scholtz} for an ideal fermion gas, but rather following the method established in \cite{gazeau-hsiao-jellal}, by first formulating CS on the quantum Hilbert space $\mathcal H_{q}$. Then the thermodynamical potential, together with the magnetic moment, is evaluated with the help of deduced inequalities. The vector coherent states (VCS) are derived; they fulfill a resolution of the identity on a suitable Hilbert space, which is consistent with the general formulation of \cite{ali-englis-gazeau}. We extend the VCS construction used in \cite{ben-scholtz} to a formal tensor product of quantum Hilbert spaces (using the primary formulation of \cite{thirulo}), including complex matrices and quaternions as CS variables. The physical features of the quaternionic VCS (QVCS) are discussed. The paper is organized as follows. In Section $2$, we describe the physical model as well as the associated matrix formulation. The Hamiltonian spectrum and its spectral decomposition are provided. The definition of the passage operators from one orthonormal basis to another is also supplied. Section $3$ deals with the thermodynamical aspects of the studied model. In Section $4$, relevant VCS and QVCS for Landau levels are constructed and discussed. Finally, concluding remarks follow in Section $5$.
\section{The electron in noncommutative plane} \subsection{Quantum model} The physics of an electron in crossed constant uniform electric ${\bf E}$ and magnetic ${\bf B}$ fields coupled with a confining harmonic potential in a noncommutative space, is described, in the gauge ${\bf A} = \left(-\frac{B}{2}y, \frac{B}{2}x \right),$ by the Hamiltonian: \begin{eqnarray}{\label{exp00}} H_{\theta} = \frac{1}{2M}\left(\hat P_{i} - \frac{eB}{2c}\epsilon_{ij}\hat X_{j}\right)^{2} + \frac{M \omega^{2}_{0}}{2}\hat X^{2}_{i} - e E_{i}\hat X_{i}, \; \; \epsilon_{ji} = -\epsilon_{ij}, \; \epsilon_{12} = +1, \end{eqnarray} where the position and momentum operators $\hat X_{i} = \hat X, \, \hat Y$ and $\hat P_{i} = \hat P_{X}, \, \hat P_{Y}, \, i=1,2$, satisfy the following commutation relations of the noncommutative Heisenberg algebra \cite{gouba-scholtz}: \begin{eqnarray} [\hat X, \hat Y] = \imath \theta, \quad [\hat X, \hat P_{X}] = \imath \hbar = [\hat Y, \hat P_{Y}] , \quad [\hat P_{X}, \hat P_{Y}] = 0. \end{eqnarray} The position operators $\hat X_{i}$ and their corresponding canonically conjugate momenta $\hat P_{i}$ can be combined in the operators $\hat \Pi_{i} = \hat P_{i} - \frac{eB}{2c}\epsilon_{ij}\hat X_{j}$ to yield the relations: \begin{eqnarray} [\hat X_{i}, \hat \Pi_{j}] = \imath \left( \hbar- \frac{eB}{2c}\theta\right)\delta_{ij}, \quad [\hat \Pi_{i}, \hat \Pi_{j}] = -i \frac{eB}{c}\left(\hbar - \frac{eB}{4c}\theta\right)\epsilon_{ij}. 
\end{eqnarray} From the latter, we define the complex canonically conjugate momenta, denoted by $\hat \Pi_{Z}$ and $\hat \Pi_{\bar Z}$, corresponding to $\hat Z = \hat X + \imath \hat Y$ and $\hat{\bar Z} = \hat X - \imath \hat Y$, by \begin{eqnarray} \hat \Pi_{Z} = \hat \Pi_{X} - \imath \hat \Pi_{Y}, \qquad \hat \Pi_{\bar Z} = \hat \Pi_{X} + \imath \hat \Pi_{Y}, \end{eqnarray} respectively, such that the quantum operators $\hat Z, \hat{\bar Z} ,\hat \Pi_{Z}, \hat \Pi_{\bar Z}$ act on the quantum Hilbert space $\mathcal H_{q}$ \cite{ben-sunandan-scholtz, gouba-scholtz}, i.e. the space of Hilbert-Schmidt operators acting on the noncommutative configuration (Hilbert) space $\mathcal H_{c},$ defined as: \begin{eqnarray} \mathcal H_{q} = \left\{\psi(\hat z, \hat{\bar z}): \psi(\hat z, \hat{\bar z}) \in \mathcal B(\mathcal H_{c}),\, tr_{c}(\psi(\hat z, \hat{\bar z})^{\dag} \psi(\hat z, \hat{\bar z})) < \infty \right \}, \end{eqnarray} endowed with the following inner product \begin{eqnarray} (\psi(\hat x_{1}, \hat x_{2}), \phi(\hat x_{1}, \hat x_{2})) = tr_{c}(\psi(\hat x_{1}, \hat x_{2})^{\dag} \phi(\hat x_{1}, \hat x_{2})) \end{eqnarray} where $tr_{c}$ stands for the trace over $\mathcal H_{c}$ and $\mathcal B(\mathcal H_{c})$ is the set of bounded operators on $\mathcal H_{c}$. \begin{rmk} For a harmonic oscillator, the two-dimensional noncommutative coordinate algebra is given by \begin{eqnarray} [\hat x, \hat y] = \imath \theta; \end{eqnarray} $\theta$ is referred to as the noncommutativity parameter. The annihilation and creation operators $a = 1/{\sqrt{2\theta}}(\hat x + \imath \hat y),\, a^{\dag} = 1/{\sqrt{2\theta}}(\hat x - \imath \hat y)$ obey a Heisenberg-Fock algebra $[a,a^{\dag}] = 1 \! \! {\rm I}_{c}$, where $1 \! \! {\rm I}_{c}$ is the identity operator on the Hilbert space $\mathcal H_{c},$ i.e.
the noncommutative configuration space, which itself becomes a Hilbert space isomorphic to the boson Fock space \cite{gouba-scholtz} $\mathcal H_{c} = span\{|n\rangle, n \in \mathbb N\}$, with $|n\rangle = 1/\sqrt{n !}(a^{\dag})^{n}|0\rangle$. \end{rmk} As mentioned in \cite{ben-scholtz}, a well defined representation with self-adjoint properties with respect to the quantum Hilbert space $\mathcal H_{q}$ inner product is provided by the following relations \begin{eqnarray} \hat X \psi = \hat x \psi, \quad \hat Y \psi = \hat y \psi, \quad \hat P_{X}\psi = \frac{\hbar}{\theta}[\hat y, \psi], \quad \hat P_{Y}\psi = -\frac{\hbar}{\theta}[\hat x, \psi]. \end{eqnarray} On the Hilbert space $\mathcal H_{q}$, the following commutation relations are satisfied: \begin{eqnarray}{\label{delt00}} [\hat Z, \hat{\bar Z}] &=& 2\theta, \qquad [\hat Z, \hat \Pi_{Z}] = 2\imath\left( \hbar - \frac{eB}{2 c}\theta\right) = [\hat{\bar Z}, \hat \Pi_{\bar Z}], \crcr [\hat \Pi_{Z}, \hat \Pi_{\bar Z}] &=& 2 \frac{eB}{c} \left(\hbar -\frac{eB}{4c}\theta\right). \end{eqnarray} For the purpose of the analysis, defining a diagonal matrix $\mathcal D $ and adopting the notations $E = (E_{1}, E_{2},0,0), {\mathcal X_{0}} = ( x_{0}, y_{0},0,0)^{t}$, where $ x_{0} = \frac{eE_{1}}{M \omega^{2}_{0}}, \; y_{0} = \frac{eE_{2}}{M \omega^{2}_{0}}$, the Hamiltonian $H_{\theta}$ of the physical model can be rewritten in the compact form: \begin{eqnarray} H_{q} = \frac{1}{4M} \hat{\mathcal Z}^{\ddag}\hat{\mathcal Z} - \frac{1}{2} e E.{\mathcal X_{0}} = \frac{1}{4M} A^{\ddag}\mathcal D A - \frac{1}{2} e E. {\mathcal X_{0}}, \quad A = (B_{+}, B^{\ddag}_{+}, B_{-}, B^{\ddag}_{-}). \end{eqnarray} The symbol $t$ denotes the transpose operation.
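As a consistency check of the representation introduced above, note that it indeed reproduces the noncommutative Heisenberg algebra: for any $\psi \in \mathcal H_{q}$, \begin{eqnarray} [\hat X, \hat P_{X}]\psi = \frac{\hbar}{\theta}\left(\hat x[\hat y, \psi] - [\hat y, \hat x \psi]\right) = \frac{\hbar}{\theta}[\hat x, \hat y]\psi = \imath \hbar \psi, \end{eqnarray} while $[\hat P_{X}, \hat P_{Y}]\psi = -\frac{\hbar^{2}}{\theta^{2}}\left([\hat y, [\hat x, \psi]] - [\hat x, [\hat y, \psi]]\right) = -\frac{\hbar^{2}}{\theta^{2}}[[\hat y, \hat x], \psi] = 0$ by the Jacobi identity, $[\hat y, \hat x] = -\imath\theta$ being a multiple of the identity.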
Now introduce the operators \begin{eqnarray} A^{+} &=& (B^{\ddag}_{+}, B_{+}, B^{\ddag}_{-}, B_{-})^{t} = \Lambda A, \crcr \hat{\mathcal Z}^{+} &=& (\hat{\bar Z'} - {\bar Z'_{0}}, \hat Z' - Z'_{0}, \hat \Pi_{\bar Z} - \Pi_{\bar Z_{0}}, \hat \Pi_{Z} - \Pi_{Z_{0}})^{t} = \Lambda \hat{\mathcal Z} \end{eqnarray} where $\hat Z' - Z'_{0} = M\omega_{0} (\hat Z - Z_{0})$, with the permutation matrix $\Lambda$ defined by \begin{eqnarray} \Lambda = \left(\begin{array}{cccc} 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{array} \right) \end{eqnarray} and reserving the notation $\ddag$ to denote the Hermitian conjugation on the quantum Hilbert space. Then, consider the matrix $\mathfrak g$ with entries ${\mathfrak g}_{lk} = [\hat{\mathcal Z}_{l}, \hat{\mathcal Z}^{+}_{k}], \; l, k = 1,\dots, 4,$ obtained from the commutation relations (\ref{delt00}) as follows: \begin{eqnarray} \mathfrak g &=& \left(\begin{array}{cccc} 2M^{2}\omega^{2}_{0}\theta & 0 & 0 & 2\imath \hbar M\omega_{0}\left(1-\frac{M\omega_{c}}{2\hbar}\theta\right) \\ 0 & -2M^{2}\omega^{2}_{0}\theta & 2\imath \hbar M\omega_{0}\left(1-\frac{M\omega_{c}}{2\hbar}\theta\right) & 0 \\ 0 & -2\imath \hbar M\omega_{0}\left(1-\frac{M\omega_{c}}{2\hbar}\theta\right) & 2\hbar M\omega_{c}\left(1-\frac{M\omega_{c}}{4\hbar}\theta\right) & 0 \\ -2\imath \hbar M\omega_{0}\left(1-\frac{M\omega_{c}}{2\hbar}\theta\right) & 0 & 0 & -2\hbar M\omega_{c}\left(1-\frac{M\omega_{c}}{4\hbar}\theta\right) \end{array} \right) \end{eqnarray} with the eigenvalues $\tilde \lambda_{\pm}, -\tilde \lambda_{\pm}$ supplied by the expressions \begin{eqnarray} \tilde \lambda_{\pm} &=& M\hbar \left\{\Omega \sqrt{1-\frac{M\omega_{c}}{2\hbar}\theta + \left(\frac{M \Omega}{2\hbar}\theta\right)^{2}} \pm \omega_{c}\left(1 - \left(\frac{\omega_{c}}{4\hbar} + \frac{\omega^{2}_{0}}{\hbar
\omega_{c}}\right)M\theta\right)\right \} \end{eqnarray} where $\Omega^{2} = 4\omega^{2}_{0} + \omega^{2}_{c}$. The matrix $\mathcal S^{\dag}$, the eigenvector matrix of $\mathfrak g$, is given by \begin{eqnarray} \mathcal S^{\dag} = \left(\frac{1}{\sqrt{|\lambda_{+}|}}u'_{1}, \frac{1}{\sqrt{|\lambda_{+}|}}(\Lambda u^{*}_{1})', \frac{1}{\sqrt{|\lambda_{-}|}}u'_{2}, \frac{1}{\sqrt{|\lambda_{-}|}}(\Lambda u^{*}_{2})'\right) \end{eqnarray} where the normalized eigenvectors $(u'_{1}, (\Lambda u^{*}_{1})')$ and $(u'_{2}, (\Lambda u^{*}_{2})')$ associated with $(\tilde \lambda_{+}, -\tilde \lambda_{+})$ and $( \tilde \lambda_{-}, - \tilde \lambda_{-})$, respectively, are given by \begin{eqnarray} u'_{1} = \frac{1}{||u_{1}||}u_{1}, \; (\Lambda u^{*}_{1})' = \frac{1}{||\Lambda u^{*}_{1}||}[\Lambda u^{*}_{1}], \; u'_{2} = \frac{1}{||u_{2}||}u_{2}, \; (\Lambda u^{*}_{2})' = \frac{1}{||\Lambda u^{*}_{2}||}[\Lambda u^{*}_{2}] \end{eqnarray} with \begin{eqnarray} u_{1} = \left(\begin{array}{c} 0 \\ \imath \frac{B_{\hbar}}{\kappa_{+}} \\ 1 \\ 0 \end{array} \right), \qquad u_{2} = \left(\begin{array}{c} \imath \frac{B_{\hbar}}{\kappa_{-}} \\ 0 \\ 0 \\ 1 \end{array} \right); \end{eqnarray} $u^{*}_{j}, j = 1,2,$ are the vectors whose entries are the complex conjugates of those of $u_{j}$; $ B_{\hbar} = 2 \hbar M\omega_{0}\left(1-\frac{M\omega_{c}}{2\hbar}\theta\right)$ and \begin{eqnarray} \kappa_{\pm} = M\hbar \left\{\Omega \sqrt{1-\frac{M\omega_{c}}{2\hbar}\theta + \left(\frac{M \Omega}{4\hbar}\theta\right)^{2}} \pm \omega_{c}\left(1 - \left(\frac{\omega_{c}}{4\hbar} - \frac{\omega^{2}_{0}}{\hbar \omega_{c}}\right)M\theta\right)\right \}. \end{eqnarray} Then, the Hamiltonian is obtained as \begin{eqnarray} H_{q} = \frac{1}{4M} A^{\ddag}\mathcal D A - \frac{1}{2} e E. {\mathcal X_{0}} = \frac{1}{4M} A^{\ddag} \mathbb J_{4}\mathcal S\mathfrak g^{2}\mathcal S^{\dag}\mathbb J_{4} A - \frac{1}{2} e E.
{\mathcal X_{0}} \end{eqnarray} where $\mathbb J_{4}$ is given by \begin{eqnarray} \mathbb J_{4} = \mbox{diag}(\sigma_{3}, \sigma_{3}), \qquad \sigma_{3} = \left( \begin{array}{cc} 1 & 0 \\ 0 & -1 \\ \end{array} \right) . \end{eqnarray} Setting $\tilde \Omega_{\pm} = \frac{\tilde \Omega \pm \tilde \omega_{c}}{2}$, where the $\theta$-dependent quantities $\tilde \Omega$ and $\tilde \omega_{c}$ are given by \begin{eqnarray} \tilde \Omega = \Omega \sqrt{1-\frac{M\omega_{c}}{2}\theta + \left(\frac{M \Omega}{4}\theta\right)^{2}}, \quad \tilde \omega_{c} = \omega_{c}\left(1 - \left(\frac{\omega_{c}}{4} + \frac{\omega^{2}_{0}}{ \omega_{c}}\right)M\theta\right), \end{eqnarray} then we can re-express the Hamiltonian $H_q$ in terms of positive quantities $\tilde \Omega_\pm$ as follows: \begin{eqnarray} H_{q} = \frac{\hbar}{2} \left(\tilde \Omega_{+}\tilde N_{+} + \tilde \Omega_{-}\tilde N_{-} + \tilde \Omega \right) - \frac{1}{2}e(E_{1}x_{0} + E_{2}y_{0}), \end{eqnarray} where $\tilde N_{\pm} = B^{\ddag}_{\pm}B_{\pm}$ denote the number operators on the quantum Hilbert space; $B^{\ddag}_{\pm}, B_{\pm}$ are the corresponding creation and annihilation operators. 
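Note that in the commutative limit the expected Fock-Darwin-type quantities are recovered: setting $\theta = 0$ in the definitions above gives \begin{eqnarray} \tilde \Omega\big|_{\theta = 0} = \Omega = \sqrt{4\omega^{2}_{0} + \omega^{2}_{c}}, \qquad \tilde \omega_{c}\big|_{\theta = 0} = \omega_{c}, \qquad \tilde \Omega_{\pm}\big|_{\theta = 0} = \frac{\Omega \pm \omega_{c}}{2}, \end{eqnarray} so that $H_{q}$ then describes an ordinary two-dimensional charged oscillator in a uniform magnetic field.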
Further defining the quantities \begin{eqnarray} \zeta = \sqrt{\frac{M \Omega}{\hbar}} \frac{1}{\mu_{\theta}} = \sqrt[4]{\frac{(M \Omega/\hbar)^{2}}{1-\frac{M\omega_{c}}{2}\theta + \left(\frac{M \Omega}{4}\theta\right)^{2}}}, \qquad \mu_{\theta} = \sqrt[4]{1 - \frac{M\omega_{c}}{2}\theta + \left(\frac{M\Omega}{4} \theta\right)^{2}} \end{eqnarray} which are also $\theta$-dependent functions, the annihilation and creation operators are deduced as \begin{eqnarray} B_{+} &=& \zeta \frac{\hat{\bar{Z}} - {\bar{Z}}_{0}}{2} + \frac{\imath}{\zeta \hbar}(\hat P_{Z} - P_{Z_{0}}), \qquad B^{\ddag}_{+} = \zeta \frac{\hat Z - Z_{0}}{2} - \frac{\imath}{\zeta \hbar}(\hat P_{\bar Z} - P_{\bar Z_{0}}) \crcr B_{-} &=& \zeta \frac{\hat Z - Z_{0}}{2} + \frac{\imath}{\zeta \hbar}(\hat P_{\bar Z} - P_{\bar Z_{0}}), \qquad B^{\ddag}_{-} = \zeta \frac{\hat{\bar{Z}} - {\bar{Z}}_{0}}{2} - \frac{\imath}{\zeta \hbar}(\hat P_{Z} - P_{Z_{0}}) \end{eqnarray} satisfying the commutation relations: \begin{eqnarray}{\label{commtation}} [B_{\pm}, B^{\ddag}_{\pm}] = 1 \! \! {\rm I}_{q}, \;\;\; [B_{\pm}, B^{\ddag}_{\mp}] = 0, \;\;\; [B_{+}, B_{-}] = 0, \;\;\; [B^{\ddag}_{+}, B^{\ddag}_{-}] = 0.
\end{eqnarray} Finally, the eigenvalues of the Hamiltonian $H_{q}$, expressed in the helicity Fock representation $|\tilde n_{+}, \tilde n_{-})$, are given by \begin{eqnarray} E_{\tilde n_{+}, \tilde n_{-}} &=& \frac{\hbar}{2} \left(\tilde \Omega_{+}\tilde n_{+} + \tilde \Omega_{-}\tilde n_{-} + \tilde \Omega \right) - \frac{1}{2}e(E_{1}x_{0} + E_{2}y_{0}) \end{eqnarray} with the corresponding eigenvectors on the quantum Hilbert space given by \begin{eqnarray} |\tilde n_{+}, \tilde n_{-}) = \frac{1}{\sqrt{\tilde n_{+} !\tilde n_{-} !}}\left(B^{\ddag}_{+}\right)^{\tilde n_{+}} \left(B^{\ddag}_{-}\right)^{\tilde n_{-}}|0\rangle \langle 0| , \end{eqnarray} where $|0\rangle \langle 0|$ stands for the vacuum state on $\mathcal H_{q}$, on which $B^{\ddag}_{-}$ may act from the right through $B_{-}$, and $|||\tilde n_{+}, \tilde n_{-})|| = 1$. The annihilation and creation operators act on the states $|\tilde n_{+}, \tilde n_{-}) = |\tilde n_{+} \rangle \langle \tilde n_{-}| , \; \tilde n_{\pm} = 0, 1, 2, \dots,$ as follows: \begin{eqnarray} B_{+}|\tilde n_{+}, \tilde n_{-}) &=& \sqrt{\tilde n_{+}}|\tilde n_{+}-1, \tilde n_{-}), \qquad B^{\ddag}_{+}|\tilde n_{+}, \tilde n_{-}) = \sqrt{\tilde n_{+} + 1 }|\tilde n_{+}+1, \tilde n_{-}), \end{eqnarray} \begin{eqnarray} B_{-}|\tilde n_{+}, \tilde n_{-}) &=& \sqrt{\tilde n_{-}}|\tilde n_{+}, \tilde n_{-}-1), \qquad B^{\ddag}_{-}|\tilde n_{+}, \tilde n_{-}) = \sqrt{\tilde n_{-} + 1 } |\tilde n_{+}, \tilde n_{-}+1). \end{eqnarray} \subsection{Spectral decomposition} Let us consider the dimensionless shifted quantum Hamiltonian \begin{eqnarray} H^{dim}_{q} = \frac{1}{\hbar \tilde \Omega} \left[H_{q} + \frac{1}{2}e(E_{1}x_{0} + E_{2}y_{0}) \right] \end{eqnarray} with associated eigenvalues \begin{eqnarray} \tilde E_{\tilde n_{+}, \tilde n_{-}} = \frac{1}{2}\left(\frac{\tilde \Omega_{+}}{\tilde \Omega}\tilde n_{+} + \frac{\tilde \Omega_{-}}{\tilde \Omega}\tilde n_{-} + 1\right) .
\end{eqnarray} Take $\{|\tilde n_{+}, \tilde n_{-}), \tilde n_{\pm} \in \mathbb N\}$ as the orthonormal eigenstate basis associated with the quantum Hamiltonian $ H_{q}$ in the helicity Fock algebra representation. With respect to the inner product on $\mathcal H_{q}$, we have $(\tilde n_{+}, \tilde n_{-}|\tilde n'_{+}, \tilde n'_{-}) = {\mbox tr}_{c}[(|\tilde n_{+} \rangle \langle \tilde n_{-}|)^{\ddag}|\tilde n'_{+} \rangle \langle\tilde n'_{-}|] = \delta_{\tilde n_{+},\tilde n'_{+}} \delta_{\tilde n_{-},\tilde n'_{-}}$. On this basis, the Hamiltonian $H^{dim}_{q}$ admits the following spectral decomposition \begin{eqnarray} H^{dim}_{q} = \sum_{\tilde n_{\pm}=0}^{\infty} |\tilde n_{+}, \tilde n_{-}) \tilde E_{\tilde n_{+}, \tilde n_{-}}(\tilde n_{+}, \tilde n_{-}|. \end{eqnarray} Let $\{|n\rangle \langle m| := |n,m), n, m \in \mathbb N \}$ be the orthonormal basis associated with the quantum Hilbert space $\mathcal H_{q}$. Introduce the passage operators from $\{|n,m), n, m \in \mathbb N \}$ to $\{|\tilde n_{+}, \tilde n_{-}), \tilde n_{\pm} \in \mathbb N \}$ and vice versa given by \begin{eqnarray} \mathcal U|n,m) = |\tilde n_{+}, \tilde n_{-}), \qquad \mathcal V|\tilde n_{+}, \tilde n_{-}) = |n,m) \end{eqnarray} where their expansions are given by \begin{eqnarray} \mathcal U = \sum_{n,m=0}^{\infty} |\tilde n_{+}, \tilde n_{-})(n,m|, \qquad \mathcal V = \sum_{\tilde n_{\pm}=0}^{\infty} |n,m)(\tilde n_{+}, \tilde n_{-}|, \end{eqnarray} respectively. $\mathcal U, \mathcal V$ are obtained as mutually adjoint through the following identities satisfied on $\mathcal H_{q}:$ \begin{eqnarray} \mathcal U \mathcal V = \sum_{\tilde n_{\pm}=0}^{\infty}|\tilde n_{+}, \tilde n_{-})(\tilde n_{+}, \tilde n_{-}| = \mathbb I_{q}, \qquad \mathcal V \mathcal U = \sum_{n,m=0}^{\infty}|n,m)(n,m| = \mathbb I_{q}, \end{eqnarray} where $\mathbb I_{q}$ stands for the identity on $\mathcal H_{q}$. 
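For completeness, the defining action of $\mathcal U$ follows directly from its expansion and the orthonormality of the basis, with $(\tilde n'_{+}, \tilde n'_{-})$ denoting the image of the pair $(n', m')$ under the correspondence implicit in the expansion: \begin{eqnarray} \mathcal U|n, m) = \sum_{n', m' = 0}^{\infty} |\tilde n'_{+}, \tilde n'_{-})(n', m'|n, m) = \sum_{n', m' = 0}^{\infty} |\tilde n'_{+}, \tilde n'_{-})\,\delta_{n', n}\delta_{m', m} = |\tilde n_{+}, \tilde n_{-}), \end{eqnarray} and similarly for $\mathcal V$.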
Then, the Hamiltonian $H^{dim}_{q}$ can be rewritten in a diagonal form as below: \begin{eqnarray}{\label{hq00}} \mathbb H^{dim} = \mathcal V H^{dim}_{q} \mathcal U = \sum_{n,m=0}^{\infty}|n,m)\tilde E_{n,m}(n,m|, \qquad \tilde E_{n,m} = \frac{1}{2}\left(\frac{\tilde \Omega_{+}}{\tilde \Omega}n + \frac{\tilde \Omega_{-}}{\tilde \Omega}m + 1 \right). \end{eqnarray} \section{Coherent states and thermodynamics of the model} For the Hamiltonian $H_{q}$ with eigenvalues $E_{\tilde{n}_{+}, \tilde{n}_{-}}= \frac{\hbar}{2} \left(\tilde \Omega_{+}\tilde n_{+} + \tilde \Omega_{-}\tilde n_{-} + \tilde \Omega \right) + k_{e,E}, \, k_{e,E}= - \frac{1}{2}e\left(E_{1}x_{0} + E_{2}y_{0} \right), $ the coherent states denoted by $|z_{\pm},\tau)$ are defined on the quantum Hilbert space $\mathcal H_{q}$, as follows: \begin{eqnarray}{\label{vect00}} |z_{\pm},\tau) &=& \mathbb U(\tau)|z_{+}\rangle \langle z_{-}| \cr &=& e^{-\frac{1}{2}(|z_{+}|^{2} + |z_{-}|^{2})} \sum_{\tilde{n}_{+}, \tilde{n}_{-} = 0}^{\infty} \frac{z_{+}^{\tilde{n}_{+}}\bar z_{-}^{\tilde{n}_{-}}}{\sqrt{\tilde{n}_{+} !\tilde{n}_{-} !}} e^{-i \tau E_{\tilde n_{+}, \tilde n_{-}}} |\tilde{n}_{+}\rangle \langle \tilde{n}_{-}|. \end{eqnarray} The parameter $\tau$ is introduced such that the states (\ref{vect00}) fulfill the Gazeau-Klauder axiom of temporal stability relative to the classical time evolution operator $\mathbb U(\tau) = e^{-\imath \left[H_{q}\right]\tau }$. Indeed, we have the following. 
\begin{pro} These vectors satisfy the following properties: \begin{itemize} \item [-] Temporal stability \begin{eqnarray} \mathbb U(t)|z_{\pm}, \tau) = e^{-\imath \left[H_{q}\right]t }|z_{\pm}, \tau) = |z_{\pm}, \tau + t) , \end{eqnarray} \item [-] Action identity, also called lower symbol of $H_{q}$, \begin{eqnarray}{\label{ncident}} \mbox{\v{H}}_{q}(z_{\pm}) = (z_{\pm}, \tau|H_{q}|z_{\pm}, \tau) = \frac{\hbar}{2} \left(\tilde \Omega_{+}|z_{+}|^{2} + \tilde \Omega_{-}|z_{-}|^{2} + \tilde \Omega\right) + k_{e,E}, \end{eqnarray} \item [-] Resolution of the identity \begin{eqnarray}{\label{resolv}} \frac{1}{\pi^{2}}\int_{\mathbb C^{2}}|z_{\pm},\tau) (z_{\pm}, \tau|d^{2}z_{+}d^{2}z_{-} \equiv \mathbb I_{q}, \end{eqnarray} where $\mathbb I_{q}$ is the identity operator on $ \mathcal H_{q}$ provided by \begin{eqnarray}{\label{resolv01}} \mathbb I_{q} = \frac{1}{\pi}\int_{\mathbb C}dzd\bar{z}|z)e^{\overleftarrow{\partial_{\bar z}}\overrightarrow{\partial_{z}}} (z|. \end{eqnarray} \end{itemize} \end{pro} {\bf Proof:} We have from the definition (\ref{vect00}) the following relation \begin{eqnarray} \frac{1}{\pi^{2}}\int_{\mathbb C^{2}}d^{2}z_{+}d^{2}z_{-}|z_{+}\rangle \langle z_{+}| |z_{-}\rangle \langle z_{-}| \equiv \frac{1}{\pi^{2}}\int_{\mathbb C^{2}}d^{2}z_{+}d^{2}z_{-}|z_{\pm}) (z_{\pm}| \end{eqnarray} where from the definition (\ref{vect00}), we have \begin{eqnarray} |z_{\pm} ) &=& |z_{+}\rangle \langle z_{-}| \cr &=& e^{-\frac{1}{2}(|z_{+}|^{2} + |z_{-}|^{2})} \sum_{\tilde{n}_{+}, \tilde{n}_{-} = 0}^{\infty} \frac{z_{+}^{\tilde{n}_{+}}\bar z_{-}^{\tilde{n}_{-}}}{\sqrt{\tilde{n}_{+} !\tilde{n}_{-} !}} |\tilde{n}_{+}\rangle \langle \tilde{n}_{-}|. 
\end{eqnarray} By taking a state $|\psi)$ on $ \mathcal H_{q}$, we obtain \begin{eqnarray} \frac{1}{\pi^{2}}\int_{\mathbb C^{2}}d^{2}z_{+}d^{2}z_{-}|z_{\pm}) (z_{\pm}|\psi) &=& \frac{1}{\pi^{2}}\int_{\mathbb C^{2}}d^{2}z_{+}d^{2}z_{-} |z_{+}\rangle \langle z_{-}| \sum_{\tilde n_{\pm}=0}^{\infty}|\tilde n_{-}\rangle\langle \tilde n_{+} |[|z_{+} \rangle \langle z_{-}|]^{\ddag} \psi|\tilde n_{+} \rangle \langle \tilde n_{-}| \cr &=& \frac{1}{\pi^{2}}\int_{\mathbb C^{2}}d^{2}z_{+}d^{2}z_{-} |z_{+}\rangle \langle z_{-}| \langle z_{+}|\psi|z_{-} \rangle \cr &=& |\psi) \end{eqnarray} such that \begin{eqnarray}{\label{ncres01}} \frac{1}{\pi^{2}}\int_{\mathbb C^{2}}d^{2}z_{+}d^{2}z_{-}|z_{\pm}) (z_{\pm}| \equiv \mathbb I_{q}. \end{eqnarray} In order to provide an equivalence between (\ref{resolv}) and (\ref{resolv01}), let us consider the following relations \begin{eqnarray}{\label{resolv02}} \mathbb I_{q}|\psi) &=& \frac{1}{\pi^{2}}\int_{\mathbb C^{2}}dzd\bar{z}dwd\bar{w} |z\rangle \langle w| \langle z|\psi|w \rangle \cr &=& \frac{1}{\pi^{2}}\int_{\mathbb C^{2}}dzd\bar{z}dud\bar{u} |z\rangle \langle z+u| \langle z|\psi|z+u \rangle \cr &=& \frac{1}{\pi}\int_{\mathbb C}dzd\bar{z}\frac{1}{\pi}\int_{\mathbb C}d^{2}ue^{-|u|^{2}}|z\rangle\langle z| e^{\bar u\overleftarrow{\partial_{\bar z}} + u\overrightarrow{\partial_{z}}}\langle z|\psi|z\rangle, \end{eqnarray} where $w = z + u$ with $d^{2}w = d^{2}u $, and $e^{u \partial_{z}} f(z) = f(z+u)$. Then, set \begin{eqnarray} \frac{1}{\pi}\int_{\mathbb C}d^{2}ue^{-|u|^{2}}|z\rangle\langle z| e^{\bar u\overleftarrow{\partial_{\bar z}} + u\overrightarrow{\partial_{z}}}\langle z|\psi|z\rangle = \frac{1}{\pi}\int_{\mathbb C}d^{2}ue^{-|u|^{2}}|z\rangle\langle z|e^{\bar u\overleftarrow{\partial_{\bar z}}}e^{u \overrightarrow{\partial_{z}}} \langle z|\psi|z\rangle \end{eqnarray} and \begin{eqnarray} I = |z\rangle\langle z|e^{\bar u\overleftarrow{\partial_{\bar z}}}e^{u \overrightarrow{\partial_{z}}} \langle z|\psi|z\rangle. 
\end{eqnarray} We have \begin{eqnarray} I &=& \left[\sum_{n', m' = 0}^{\infty}|n'\rangle\langle m'|e^{-{\bar z}z}\frac{{\bar z}^{m'}}{\sqrt{m' !}}\frac{{ z}^{n'}}{\sqrt{n' !}}\right] e^{\bar u\overleftarrow{\partial_{\bar z}}}e^{u \overrightarrow{\partial_{z}}} \left[\sum_{n, m = 0}^{\infty}|n\rangle\langle m|e^{-{\bar z}z}\frac{{\bar z}^{n}}{\sqrt{n !}}\frac{{ z}^{m}}{\sqrt{m !}}\right] \crcr &=& \left[ \sum_{n', m' = 0}^{\infty}\frac{z^{n'}}{\sqrt{n' !}}\frac{{\bar z}^{n}}{\sqrt{n !}}|n'\rangle\langle m'| \right] \left(e^{-{\bar z}z} \frac{{\bar z}^{m'}}{\sqrt{m' !}}\right) e^{\bar u\overleftarrow{\partial_{\bar z}}}e^{u \overrightarrow{\partial_{z}}}\left(e^{-{\bar z}z} \frac{{z}^{m}}{\sqrt{m !}}\right). \end{eqnarray} Let \begin{eqnarray} K(z) = \left(e^{-{\bar z}z} \frac{{\bar z}^{m'}}{\sqrt{m' !}}\right) e^{\bar u\overleftarrow{\partial_{\bar z}}}e^{u \overrightarrow{\partial_{z}}}\left(e^{-{\bar z}z} \frac{{z}^{m}}{\sqrt{m !}}\right). \end{eqnarray} We obtain \begin{eqnarray} K(z) &=& \frac{1}{\sqrt{m' !}} \frac{1}{\sqrt{m !}}\sum_{k=0}^{\infty}\sum_{l=0}^{\infty}\frac{1}{k !} \left({\bar u}^{k}\partial^{k}_{{\bar z}}[{\bar z}^{m'} e^{-{\bar z}z}]\right)\frac{1}{l !}\left({u}^{l}\partial^{l}_{{z}}\left[{ z}^{m} e^{-{\bar z}z}\right]\right) \end{eqnarray} which supplies, by performing a radial parametrization, that \begin{eqnarray} &&\frac{1}{\pi}\int_{\mathbb C}d^{2}ue^{-|u|^{2}}K(z) \cr &=& \frac{1}{\sqrt{m' !}} \frac{1}{\sqrt{m !}}\sum_{k=0}^{\infty}\sum_{l=0}^{\infty} \frac{1}{\pi}\int_{\mathbb C}d^{2}ue^{-|u|^{2}}\frac{{\bar u}^{k}}{k !}\frac{u^{l}}{l !}\partial^{k}_{{\bar z}}[{\bar z}^{m'} e^{-{\bar z}z}]\partial^{l}_{{z}}\left[{ z}^{m} e^{-{\bar z}z}\right] \cr &=& \frac{1}{\sqrt{m' !}} \frac{1}{\sqrt{m !}}\sum_{k=0}^{\infty}\sum_{l=0}^{\infty} \frac{1}{\pi}\int_{0}^{\infty}rdre^{-r^{2}}\frac{r^{k+l}}{k ! 
l !}\int_{0}^{2\pi}e^{-\imath(l-k)\phi}d\phi \cr && \times \partial^{k}_{{\bar z}}[{\bar z}^{m'} e^{-{\bar z}z}]\partial^{l}_{{z}}\left[{ z}^{m} e^{-{\bar z}z}\right]\cr &=& \frac{1}{\sqrt{m' !}} \frac{1}{\sqrt{m !}}\sum_{k=0}^{\infty}\left[\frac{1}{k !}\int_{0}^{\infty}2r^{2k+1}e^{-r^{2}}dr\right] \left[\frac{1}{k !}\partial^{k}_{{\bar z}}[{\bar z}^{m'} e^{-{\bar z}z}]\partial^{k}_{{z}}\left[{ z}^{m} e^{-{\bar z}z}\right]\right] \cr &=& \frac{1}{\sqrt{m' !}} \frac{1}{\sqrt{m !}}\sum_{k=0}^{\infty}\left[\frac{1}{k !}\partial^{k}_{{\bar z}}[{\bar z}^{m'} e^{-{\bar z}z}]\partial^{k}_{{z}}\left[{ z}^{m} e^{-{\bar z}z}\right]\right]. \end{eqnarray} Besides, \begin{eqnarray} \left(e^{-{\bar z}z} \frac{{\bar z}^{m'}}{\sqrt{m' !}}\right) e^{\overleftarrow{\partial_{\bar z}}\overrightarrow{\partial_{z}}}\left(e^{-{\bar z}z} \frac{{z}^{m}}{\sqrt{m !}}\right) = \frac{1}{\sqrt{m' !}} \frac{1}{\sqrt{m !}}\sum_{k=0}^{\infty}\left[\frac{1}{k !}\partial^{k}_{{\bar z}}[{\bar z}^{m'} e^{-{\bar z}z}]\partial^{k}_{{z}}\left[{ z}^{m} e^{-{\bar z}z}\right]\right] \end{eqnarray} which implies that \begin{eqnarray} \frac{1}{\pi}\int_{\mathbb C}d^{2}ue^{-|u|^{2}}K(z) = \left(e^{-{\bar z}z} \frac{{\bar z}^{m'}}{\sqrt{m' !}}\right) e^{\overleftarrow{\partial_{\bar z}}\overrightarrow{\partial_{z}}}\left(e^{-{\bar z}z} \frac{{z}^{m}}{\sqrt{m !}}\right). 
\end{eqnarray} Then, \begin{eqnarray} &&\frac{1}{\pi}\int_{\mathbb C}d^{2}ue^{-|u|^{2}}|z\rangle\langle z| e^{\bar u\overleftarrow{\partial_{\bar z}} + u\overrightarrow{\partial_{z}}}\langle z|\psi|z\rangle \cr &=& \left[ \sum_{n', m' = 0}^{\infty}\frac{z^{n'}}{\sqrt{n' !}}\frac{{\bar z}^{n}}{\sqrt{n !}}|n'\rangle\langle m'| \right] \left(e^{-{\bar z}z} \frac{{\bar z}^{m'}}{\sqrt{m' !}}\right) e^{\overleftarrow{\partial_{\bar z}}\overrightarrow{\partial_{z}}}\left(e^{-{\bar z}z} \frac{{z}^{m}}{\sqrt{m !}}\right)\cr &=& \left[\sum_{n', m' = 0}^{\infty}|n'\rangle\langle m'|e^{-{\bar z}z}\frac{{\bar z}^{m'}}{\sqrt{m' !}}\frac{{ z}^{n'}}{\sqrt{n' !}}\right] e^{\overleftarrow{\partial_{\bar z}}\overrightarrow{\partial_{z}}} \left[\sum_{n, m = 0}^{\infty}|n\rangle\langle m|e^{-{\bar z}z}\frac{{\bar z}^{n}}{\sqrt{n !}}\frac{{ z}^{m}}{\sqrt{m !}}\right] \cr &=& |z)e^{\overleftarrow{\partial_{\bar z}}\overrightarrow{\partial_{z}}} (z|. \end{eqnarray} Thus, (\ref{resolv02}) becomes for a given operator $|\psi)$ \begin{eqnarray} \mathbb I_{q}|\psi) &=& \frac{1}{\pi}\int_{\mathbb C}dzd\bar{z}\frac{1}{\pi}\int_{\mathbb C}d^{2}ue^{-|u|^{2}}|z\rangle\langle z| e^{\bar u\overleftarrow{\partial_{\bar z}} + u\overrightarrow{\partial_{z}}}\langle z|\psi|z\rangle \cr &=& \frac{1}{\pi}\int_{\mathbb C}dzd\bar{z}|z)e^{\overleftarrow{\partial_{\bar z}}\overrightarrow{\partial_{z}}} (z|\psi) \end{eqnarray} which completes the proof. $\hfill{\square}$ Provided the definition of the upper (or covariant) symbol \cite{gazeau-hsiao-jellal, jellal} of an appropriate observable $\mathcal O$, given by \begin{eqnarray} \mathcal O = \frac{1}{\pi^{2}}\int_{\mathbb C^{2}} \hat{\mathcal O}|z_{\pm},\tau) (z_{\pm}, \tau| d^{2}z_{+}d^{2}z_{-}, \end{eqnarray} the upper symbol of the Hamiltonian $H_{q}$ is furnished by the formula \begin{eqnarray}{\label{ncident0}} \mbox{\^{H}}_{q}(z_{\pm}) = \frac{\hbar}{2} \left(\tilde \Omega_{+}|z_{+}|^{2} + \tilde \Omega_{-}|z_{-}|^{2} - \tilde \Omega\right) + k_{e,E}. 
\end{eqnarray} Assume that the term $ k_{e,E} = - \frac{1}{2}e(E_{1} x_{0} + E_{2} y_{0})$ is a mere constant, and set $H_{q} = \mathcal H_{OSC} + \frac{\tilde \omega_{c}}{2}L_{z} + k_{e,E}$, such that $\mathcal H_{OSC}$ and $L_{z}$ are given by \begin{eqnarray}{\label{exp07}} \mathcal H_{OSC} &=& \frac{1}{2M}(\tilde p_{x} - \tilde p_{x_{0}})^{2} + \frac{1}{2M}(\tilde p_{y} - \tilde p_{y_{0}})^{2} + \frac{M\Omega^{2}}{8}\left[\left(x - x_{0} \right)^{2} + \left(y - y_{0} \right)^{2} \right], \crcr L_{z} &=& (x - x_{0})(p_{y} - p_{y_{0}}) - (y - y_{0})(p_{x}-p_{x_{0}}), \end{eqnarray} with $p_{x_{0}} = -\frac{eB}{2}y_{0}, \quad p_{y_{0}} = \frac{eB}{2}x_{0}$ and $\tilde p^{2}_{\nu} = \left(1 - \frac{M\omega_{c}}{2}\theta + \left(\frac{M\Omega}{4} \theta\right)^{2}\right) p^{2}_{\nu}, \quad \nu = x, y.$ Then, setting $\Psi(r,\varphi) = R(r)e^{\imath \rho \varphi}$, where the polar coordinates $(x,y) = (r \sin{\varphi}, r\cos{\varphi})$ with $0<r<\infty $ and $0 \leq \varphi < 2\pi$ are introduced, the stationary Schr\"{o}dinger equation $\mathcal H \Psi = \mathcal E \Psi$, where $\mathcal H = \mathcal H_{OSC} + \frac{\tilde \omega_{c}}{2}L_{z}$, reads \begin{eqnarray} &&\left[-\frac{\hbar^{2}}{2M}\left(1-\frac{M\omega_{c}}{2}\theta + \left(\frac{M \Omega}{4}\theta\right)^{2}\right) \left(\partial^{2}_{r} + \frac{1}{r}\partial_{r} + \frac{1}{r^{2}}\partial^{2}_{\varphi}\right) -\imath \frac{\hbar}{2}\tilde \omega_{c}\partial_{\varphi} + \frac{M\Omega^{2}}{8}r^{2} \right]\Psi(r,\varphi) \cr &&= \mathcal E\Psi(r,\varphi), \end{eqnarray} which yields the eigenstates and eigenvalues \begin{eqnarray}{\label{eigens00}} \Psi_{n,\rho}(r,\varphi) = (-1)^{n} \sqrt{\frac{\xi}{\pi}} \sqrt{\frac{n !}{(n+|\rho|) !}} \exp\left\{-\frac{\xi r^{2}}{2}\right\} \left(\sqrt{\xi}r\right)^{|\rho|} L^{(|\rho|)}_{n,\theta}(\xi r^{2}) e^{\imath \rho \varphi}, \end{eqnarray} and \begin{eqnarray}{\label{eigens01}} \mathcal E_{n,\rho, \theta} = \hbar \tilde \Omega \left(n +
\frac{|\rho| + 1}{2}\right) + \frac{\hbar \tilde \omega_{c}}{2} \rho - \frac{1}{2} e(E_{1}x_{0} + E_{2}y_{0}), \end{eqnarray} respectively. Here the \begin{eqnarray} L^{(|\rho|)}_{n,\theta}(\xi r^{2}) = \sum_{m=0}^{n} (-1)^{m} \left(\begin{array}{c} n + |\rho| \\ n-m \end{array} \right) \frac{(\xi r^{2})^{m}}{m !} \end{eqnarray} are the associated Laguerre polynomials; $\xi$ is given by \begin{eqnarray} \xi = \sqrt{\frac{(M \Omega/2\hbar)^{2}}{1-\frac{M\omega_{c}}{2}\theta + \left(\frac{M \Omega}{4}\theta\right)^{2}}} \end{eqnarray} and $n = 0, 1, 2, \dots$ is the principal quantum number, while $\rho = 0, \pm 1, \pm 2, \dots$ stands for the angular momentum quantum number. As already mentioned in the Introduction, we deliberately study the thermodynamics of the system by closely following the analysis performed in \cite{gazeau-hsiao-jellal}. Although, owing to our reformulation of the system parameters, several main expressions are similar in form to those derived in that work, this approach has the advantage of making the two sets of relations easy to compare and of highlighting the contribution of the electric field, which was not considered there. On this basis, from (\ref{eigens01}), we derive the thermodynamical potential using the formula: \begin{eqnarray}{\label{eigens02}} \Gamma_{\theta} = -\frac{1}{\beta}\sum_{n=0}^{\infty}\sum_{\rho=-\infty}^{\infty} \log\left[1 + e^{-\beta(\mathcal E_{n,\rho, \theta} - \mu)}\right], \end{eqnarray} where $\beta = 1/k_{B}T$ and $\mu$ is the chemical potential. The resolution of the identity (\ref{resolv}) allows us to apply the Berezin-Lieb inequalities \cite{feng-klauder-staryer-jpg,gazeau-hsiao-jellal}: \begin{eqnarray} &&-\frac{1}{\beta \pi^{2}}\int_{\mathbb C^{2}} \log\left(1+ e^{-\beta(\mbox{\^{H}}_{q}- \mu)}\right)d^{2}z_{+}d^{2}z_{-} \leq \Gamma_{\theta} \cr && \Gamma_{\theta} \leq -\frac{1}{\beta \pi^{2}}\int_{\mathbb C^{2}} \log\left(1+ e^{-\beta(\mbox{\v{H}}_{q}- \mu)}\right)d^{2}z_{+}d^{2}z_{-}.
\end{eqnarray} By using the lower and upper symbols of the Hamiltonian $H_{q}$, and performing the angular integrations, where $u_{+} = |z_{+}|^{2}, v_{-} = |z_{-}|^{2}$ with $z_{+} = re^{i\varphi}, \, z_{-} = \rho e^{i \phi}, \ r, \rho \geq 0, \varphi, \phi \in [0,2\pi)$, we get \begin{eqnarray}{\label{ineq00}} && - \frac{1}{\beta} \int_{0}^{\infty}du_{+} \int_{0}^{\infty}dv_{-} \; \log (1+ e^{-\beta(\frac{\hbar}{2} \left(\tilde \Omega_{+}u_{+} + \tilde \Omega_{-}v_{-} - \tilde \Omega\right) - \mu_{e,E})}) \leq \Gamma_{\theta} \cr && \Gamma_{\theta} \leq - \frac{1}{\beta} \int_{0}^{\infty}du_{+} \int_{0}^{\infty}dv_{-} \; \log (1+ e^{-\beta(\frac{\hbar}{2} \left(\tilde \Omega_{+}u_{+} + \tilde \Omega_{-}v_{-} + \tilde \Omega\right) -\mu_{e,E})}) \end{eqnarray} where $\mu_{e,E} = \mu + \frac{1}{2}e(E_{1} x_{0} + E_{2} y_{0})$. Setting $u = \frac{\beta \hbar}{2}(\tilde \Omega_{+} u_{+} + \tilde \Omega_{-}v_{-}), \, v= \frac{\beta \hbar}{2}\tilde \Omega_{+} u_{+} $, performing an integration by parts, and introducing the control parameters $\tilde \kappa'_{\pm} = \exp(\beta(\mu_{e,E} \pm \frac{\hbar \tilde \Omega}{2})) = \exp(-\beta k) \cdot \tilde \kappa_{\pm}$, where $\tilde \kappa_{\pm} = \exp(\beta(\mu \pm \frac{\hbar \tilde \Omega}{2}))$, (\ref{ineq00}) is reduced to \begin{eqnarray}{\label{ineq01}} \phi(\tilde \kappa'_{+}) \leq \Gamma_{\theta} \leq \phi(\tilde \kappa'_{-}) \end{eqnarray} where $\phi(\tilde \kappa')$ takes the form \begin{eqnarray} \phi(\tilde \kappa') &=& -\frac{2 \tilde \kappa'}{\beta (\beta \hbar \omega_{0})^{2} } \int_{0}^{\infty}\frac{u^{2} e^{-u}}{1 + \tilde \kappa' e^{-u}} du \cr &=& \cases{ \begin{array}{lll} \frac{4 }{\beta (\beta \hbar \omega_{0})^{2} }F_{3}(-\tilde \kappa') \quad \qquad \qquad \qquad \qquad \qquad \quad \mbox{for} \quad \tilde \kappa' \leq 1 \\ \frac{4 }{\beta (\beta \hbar \omega_{0})^{2} } \left[ - \frac{(\log \tilde \kappa' )^{3}}{6} - \frac{\pi^{2}\log \tilde \kappa'}{6} + F_{3}(-\tilde
\kappa'^{-1}) \right] \quad \mbox{for} \quad \tilde \kappa' > 1 \end{array}}. \end{eqnarray} The function $F_{s}$ is of the Riemann-Fermi-Dirac type \cite{gazeau-hsiao-jellal}: \begin{eqnarray} F_{s}(z)= \sum_{n=1}^{\infty}\frac{z^{n}}{n^{s}}. \end{eqnarray} In the high-temperature limit, the condition $\beta|\mu_{e,E} \pm \frac{\hbar \tilde \Omega}{2}| \ll 1$ gives $\tilde \kappa'_{\pm} \approx 1$, so that using (\ref{ineq00}) and (\ref{ineq01}) the thermodynamical potential $\Gamma_{\theta}$ can be approximated by \begin{eqnarray} \Gamma_{\theta} \approx \frac{4}{\beta^{3}\hbar^{2} }\frac{ F_{3}(-1)}{\omega^{2}_{0}} \approx -0.901543 \times \frac{4}{\beta} \left(\frac{1}{\beta \hbar \omega_{0}}\right)^{2}. \end{eqnarray} Following \cite{gazeau-hsiao-jellal}, the function $\phi$ can be written as \begin{eqnarray} \phi(\tilde \kappa'_{\pm}) = A \mp \frac{\Delta}{2} + S_{\pm} \end{eqnarray} where, for the physical model considered here, \begin{eqnarray} A = -2\mu_{e,E} \left[\frac{1}{3}\left(\frac{\mu_{e,E}}{\hbar \omega_{0}}\right)^{2} + \frac{1}{4}\left(\frac{\tilde \Omega}{\omega_{0}}\right)^{2} + \frac{\pi^{2}}{3}\left(\frac{1}{\beta \hbar \omega_{0}}\right)^{2}\right], \end{eqnarray} \begin{eqnarray} \frac{\Delta}{2} = 2 \hbar \tilde \Omega \left[\frac{1}{2}\left(\frac{\mu_{e,E}}{\hbar \omega_{0}}\right)^{2} + \frac{1}{24}\left(\frac{\tilde \Omega}{\omega_{0}}\right)^{2} + \frac{\pi^{2}}{6}\left(\frac{1}{\beta \hbar \omega_{0}}\right)^{2}\right], \end{eqnarray} \begin{eqnarray} S_{\pm} = \frac{4 }{\beta (\beta \hbar \omega_{0})^{2} } F_{3}\left(-e^{-\beta(\mu_{e,E} \pm \frac{\hbar \tilde \Omega}{2})}\right). \end{eqnarray} At low temperature, $S_{\pm}$ can be approximated by \begin{eqnarray} S_{0} = \frac{4 }{\beta (\beta \hbar \omega_{0})^{2} } F_{3}\left(-e^{-\beta \mu_{e,E}}\right).
\end{eqnarray} Considering the following ratio \begin{eqnarray} \frac{\Delta}{| A + S_{0}|} = \frac{\hbar \tilde \Omega}{\mu_{e,E}}\left[\frac{3 + \pi^{2}\left(\frac{1}{\beta \mu_{e,E}}\right)^{2} + \frac{1}{4}\left(\frac{\hbar \tilde \Omega}{\mu_{e,E}}\right)^{2}} {1 + \pi^{2}\left(\frac{1}{\beta \mu_{e,E}}\right)^{2} + \frac{3}{4}\left(\frac{\hbar \tilde \Omega}{\mu_{e,E}}\right)^{2}- \left(\frac{1}{\beta \mu_{e,E}}\right)^{3} F_{3}\left(-e^{-\beta \mu_{e,E}}\right)} \right] \end{eqnarray} which tends to zero at low temperature, namely $\mu_{e,E} \gg \hbar \tilde \Omega / 2$ and $\mu_{e,E} \gg 1 / \beta$, the thermodynamical potential can be obtained as \begin{eqnarray} \Gamma_{\theta} & \approx & A + S_{0} \cr &=& -2\mu_{e,E} \left[\frac{1}{3}\left(\frac{\mu_{e,E}}{\hbar \omega_{0}}\right)^{2} + \frac{1}{4}\left(\frac{\tilde \Omega}{\omega_{0}}\right)^{2} + \frac{\pi^{2}}{3}\left(\frac{1}{\beta \hbar \omega_{0}}\right)^{2} - \frac{2}{\beta \mu_{e,E} (\beta \hbar \omega_{0})^{2} } F_{3}\left(-e^{-\beta \mu_{e,E}}\right)\right]. \nonumber \\ \cr \end{eqnarray} The average number of electrons is given by \begin{eqnarray} \langle N_{e} \rangle &\approx & -\partial_{\mu} (A + S_{0}) \cr &= & 4\left(\frac{\mu_{e,E}}{\hbar \omega_{0}}\right)^{2}\left[\frac{1}{2} + \frac{1}{8}\left(\frac{\hbar \tilde \Omega}{\mu_{e,E}}\right)^{2} + \frac{\pi^{2}}{6}\left(\frac{1}{\beta \mu_{e,E}}\right)^{2} + \left(\frac{1}{\beta \mu_{e,E}}\right)^{2}F_{2}\left(-e^{-\beta \mu_{e,E}}\right)\right] \cr &\approx & 2\left(\frac{\mu_{e,E}}{\hbar \omega_{0}}\right)^{2} \quad \mbox{for} \quad \mu_{e,E} \gg \hbar \tilde \Omega / 2 \quad \mbox{and} \quad \mu_{e,E} \gg 1 / \beta. 
\end{eqnarray} The magnetic moment $\mathcal M_{\theta} = -\left(\frac{\partial \Gamma_{\theta}}{\partial B}\right)_{\mu}$ is derived as follows: \begin{eqnarray}{\label{magn00}} \mathcal M_{\theta} &=& \frac{e \mu}{Mc}\left[\frac{\omega_{c}}{\omega^{2}_{0}} - \frac{M \theta}{4\omega^{2}_{0}}(2\omega^{2}_{c} + \Omega^{2}) + 2 \frac{\omega_{c}}{\omega^{2}_{0}} \left(\frac{M \Omega \theta}{4}\right)^{2} \right] \cr && + \frac{e^{2}}{ 2Mc}(E_{1} x_{0} + E_{2} y_{0}) \left[\frac{\omega_{c}}{\omega^{2}_{0}} - \frac{M \theta}{4\omega^{2}_{0}}(2\omega^{2}_{c} + \Omega^{2}) + 2 \frac{\omega_{c}}{\omega^{2}_{0}} \left(\frac{M \Omega \theta}{4}\right)^{2} \right] \end{eqnarray} which yields the susceptibility $\chi_{\theta} = \frac{\partial \mathcal M_{\theta}}{\partial B}$ in the form \begin{eqnarray}{\label{magn01}} \chi_{\theta} &=& \left(\frac{e}{Mc\omega_{0}}\right)^{2} \mu\left[1- \frac{3}{2}M\theta \omega_{c} - (M\theta \omega_{0})^{2} + 6\left(\frac{M \Omega \theta}{4}\right)^{2} \right] \cr && + \frac{e^{3}}{2(Mc\omega_{0})^{2}} (E_{1} x_{0} + E_{2} y_{0}) \left[1- \frac{3}{2}M\theta \omega_{c} - (M\theta \omega_{0})^{2} + 6\left(\frac{M \Omega \theta}{4}\right)^{2} \right]. \end{eqnarray} In both expressions (\ref{magn00}) and (\ref{magn01}), the second terms represent the contributions generated by the electric field. \begin{rmk} In the absence of the electric field, this model reduces to the one described by the Fock-Darwin Hamiltonian investigated in the study of the orbital magnetism of a two-dimensional noncommutative confined system \cite{jellal}. In that work, it has been shown that the degeneracy of the Landau levels can be lifted via the $\theta$-term in the weak magnetic field limit, i.e. for $\omega_{c} \ll \omega_{0}$.
\end{rmk} By use of the Poisson summation formula \cite{landau3} in the sum over $n$ and $\rho$ in (\ref{eigens02}), we obtain \begin{eqnarray} \Gamma_{\theta} = \Gamma^{0}_{\theta} + \Gamma^{L}_{\theta} + \Gamma^{OSC}_{\theta}, \end{eqnarray} where \begin{eqnarray}{\label{gamma0}} \Gamma^{0}_{\theta} &=& -\frac{1}{\beta(\hbar \omega_{0})^{2}} \int_{0}^{\infty} d\varepsilon \int_{0}^{\infty} d\eta \log{(1 + e^{-\beta(\varepsilon + \eta - \mu_{e, E})})} \crcr && + \frac{1}{12}\mu_{e,E}, \end{eqnarray} \begin{eqnarray}{\label{gamma1}} \Gamma^{L}_{\theta} = \frac{\mu_{e,E}}{24} \left(\frac{\omega_{c} -\Theta_{M, \omega_{c}, \theta}}{\omega_{0}}\right)^{2}, \end{eqnarray} \begin{eqnarray}{\label{gamma2}} \Gamma^{OSC}_{\theta} &=& \frac{1}{2 \pi \beta}\sum_{k=1}^{\infty}(-1)^{k} \left[\left(\frac{\tilde \Omega}{\omega_{0}}\right)^{2} \frac{1}{k^{2}} - \frac{\pi^{2}}{3} \right] \times \frac{\sin{\left\{2\pi_{\tilde \Omega}k\mu_{e,E}\right \}}} {\mbox{Sinh}_{k,\tilde \Omega}} \crcr && + \frac{1}{2\pi \beta}\sum_{\sigma = \pm}\sum_{l=1}^{\infty}\frac{\tilde \Omega_{\sigma}}{\tilde \Omega}\frac{1}{l^{2}} \frac{\sin{\left\{2\pi_{\tilde \Omega_{\sigma}}l \mu_{e,E}\right \}}} {\mbox{Sinh}_{l,\tilde \Omega_{\sigma}}} \crcr && + \frac{1}{\pi \beta} \sum_{\sigma = \pm} \sum_{k=1}^{\infty} \sum_{l=1}^{\infty}\frac{(-1)^{k}}{l} \crcr && \times [ \frac{\sin{\left\{\pi_{\tilde \Omega}\mu_{e, E} K_{k,l,\tilde \Omega, \sigma;-}\right \}} \cos{\left\{\pi_{\tilde \Omega}\mu_{e, E} K_{k,l,\tilde\Omega, \sigma;+}\right \}}} {K_{k,l,\tilde \Omega, \sigma;-} \mbox{Sinh}_{l,\tilde \Omega_{\sigma}}} + \crcr && \frac{\sin{\left\{\pi_{\tilde \Omega}\mu_{e, E} K_{k,l,\tilde \Omega, \sigma;+}\right \}} \cos{\left\{\pi_{\tilde \Omega}\mu_{e, E} K_{k,l,\tilde \Omega, \sigma;-}\right \}}} {K_{k,l,\tilde \Omega, \sigma;+}\mbox{Sinh}_{l,\tilde \Omega_{\sigma}}}] \nonumber \\ \end{eqnarray} with the relation $\mu \gg T$ assumed. 
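The numerical constant appearing in the high-temperature approximation of $\Gamma_{\theta}$ above, $F_{3}(-1) \approx -0.901543$, can be checked directly from the series definition of the Riemann-Fermi-Dirac type function $F_{s}$. A minimal stdlib-Python sketch (the truncation cutoff is an arbitrary choice):

```python
import math

def F(s, z, terms=10_000):
    """Riemann-Fermi-Dirac type series F_s(z) = sum_{n>=1} z^n / n^s,
    truncated after `terms` terms (convergent for |z| <= 1, s > 1)."""
    return sum(z**n / n**s for n in range(1, terms + 1))

# F_3(-1) = -(3/4) * zeta(3): the coefficient in the high-temperature
# approximation of the thermodynamical potential Gamma_theta.
print(round(F(3, -1.0), 6))  # -0.901543
```

The alternating series converges fast, so a modest cutoff already reproduces all quoted digits.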
From the thermodynamical potential $\Gamma_{\theta}$, the magnetic moment $\mathcal M_{\theta} = -\left(\frac{\partial \Gamma_{\theta}}{\partial B}\right)_{\mu}$ can be derived to give the following contributions: \begin{eqnarray} \mathcal M_{\theta} = \mathcal M^{L}_{\theta} + \mathcal M^{OSC}_{\theta,1} + \mathcal M^{OSC}_{\theta,2} + \mathcal M^{OSC}_{\theta,3}, \end{eqnarray} where \begin{itemize} \item \begin{eqnarray} \mathcal M^{0}_{\theta} &=& -\left(\frac{\partial \Gamma^{0}_{\theta}}{\partial B}\right)_{\mu} =0, \end{eqnarray} \item \begin{eqnarray} \mathcal M^{L}_{\theta} = -\left(\frac{\partial \Gamma^{L}_{\theta}}{\partial B}\right)_{\mu} = -\frac{\mu_{e,E}}{12}\left[\frac{\omega_{c} -\Theta_{M, \omega_{c}, \theta}}{\omega^{2}_{0}} \right]\mathcal I_{e, B, \theta, M}, \end{eqnarray} \item \begin{eqnarray} &&\mathcal M^{OSC}_{\theta,1} = -\frac{1}{2 \pi \beta} \sum_{k=1}^{\infty}(-1)^{k}[\quad \frac{1}{k^{2}\omega^{2}_{0}} \left(\frac{2e\omega_{c}}{Mc}\left(\frac{\tilde \Omega}{\Omega}\right)^{2} + \Omega^{2}B_{\theta}\right)\times \cr && \times \frac{\sin{\left\{2\pi_{\tilde \Omega}k\mu_{e,E}\right \}}} {\mbox{Sinh}_{k,\tilde \Omega}} - \frac{2\pi k} {\hbar \, \mbox{Sinh}_{k,\tilde \Omega}} \left[\left(\frac{\tilde \Omega}{\omega_{0}}\right)^{2} \frac{1}{k^{2}} - \frac{\pi^{2}}{3}\right] \times \frac{\mathcal K_{\omega_{c}, e, \theta, B, M}}{\tilde \Omega^{2}} \times \cr && \times\left[\mu_{e, E} \cos{\left\{2\pi_{\tilde \Omega}k\mu_{e,E}\right \}} - \frac{\pi}{\beta} \mbox{Coth}_{k, \tilde \Omega} \times \sin{\left\{2\pi_{\tilde \Omega}k\mu_{e,E}\right \}} \right]\quad ], \end{eqnarray} \item \begin{eqnarray} \mathcal M^{OSC}_{\theta,2} &=& -\frac{1}{2 \pi \beta} \sum_{\sigma = \pm}\sum_{l=1}^{\infty}\frac{1}{l^{2}} [\quad [\pm \frac{1}{\tilde \Omega^{2}}[\mathcal L_{e,B, \theta, M} - \frac{\tilde \omega_{c}}{2}\mathcal K_{\omega_{c}, e, \theta, B, M}]] \times \frac{\sin\left\{2\pi_{\tilde \Omega_{\sigma}}l \mu_{e,E}\right \}} {\mbox{Sinh}_{l,\tilde 
\Omega_{\sigma}}} \cr && - \frac{\tilde \Omega_{\sigma}}{\tilde \Omega} \frac{2\pi l} {\hbar \, \mbox{Sinh}_{l,\tilde \Omega_{\sigma}}} \times \frac{1}{2\tilde \Omega_{\sigma}^{2}} \left[\mathcal K_{\omega_{c}, e, \theta, B, M} \pm \mathcal I_{e,B, \theta, M} \right]\times \cr &&\times \left[\mu_{e, E} \cos{\left\{2\pi_{\tilde \Omega_{\sigma}}l \mu_{e,E}\right \}} - \frac{\pi}{\beta} \mbox{Coth}_{l, \tilde \Omega_{\sigma}} \times \sin{\left\{2\pi_{\tilde \Omega_{\sigma}}l \mu_{e,E}\right \}} \right]\quad ], \end{eqnarray} \item \begin{eqnarray} &&\mathcal M^{OSC}_{\theta,3} = - \frac{1}{\pi \beta} \sum_{\sigma = \pm} \sum_{k=1}^{\infty} \sum_{l=1}^{\infty}\frac{(-1)^{k}}{l} \crcr &&\times[\quad [\frac{1} {\left[K_{k,l,\tilde \Omega, \sigma;-}\mbox{Sinh}_{l,\tilde \Omega_{\sigma}}\right]} \times \pi_{\tilde \Omega}\mu_{e, E} [\left[k\cos\left\{2\pi_{\tilde \Omega}k\right \}-l \frac{\tilde \Omega}{\tilde \Omega_{\sigma} } \cos{\left\{2\pi_{\tilde \Omega_{\sigma}}l \mu_{e,E} \right \}} \right] \times \cr && \times \left[-\frac{\mathcal K_{\omega_{c}, e, \theta, B, M}}{\tilde \Omega} \right] \pm l \cos{\left\{2\pi_{\tilde \Omega_{\sigma}}l \mu_{e,E}\right \}} \times \frac{1}{\tilde \Omega^{2}_{\sigma}}\left(\mathcal L_{\omega_{c}, e, \theta, B, M} - \frac{\tilde \omega_{c}}{2} \mathcal K_{\omega_{c}, e, \theta, B, M}\right)] \; + \cr && \; + \frac{1} {\left[K_{k,l,\tilde \Omega, \sigma;+}\mbox{Sinh}_{l,\tilde \Omega_{\sigma}}\right]} \times \pi_{\tilde \Omega}\mu_{e, E} [\left[k\cos\left\{2\pi_{\tilde \Omega}k\right \}-l \frac{\tilde \Omega}{\tilde \Omega_{\sigma} } \cos{\left\{2\pi_{\tilde \Omega_{\sigma}}l \mu_{e,E}\right \}} \right] \times \cr && \times \left[-\frac{\mathcal K_{\omega_{c}, e, \theta, B, M}}{\tilde \Omega} \right] \mp l \cos{\left\{2\pi_{\tilde \Omega_{\sigma}}l \mu_{e,E}\right \}} \times \frac{1}{\tilde \Omega^{2}_{\sigma}} \left(\mathcal L_{\omega_{c}, e, \theta, B, M} - \frac{\tilde \omega_{c}}{2} \mathcal K_{\omega_{c}, e, \theta, B, M}\right)] 
\; - \cr &&-\frac{\sin{\left\{\pi_{\tilde \Omega}\mu_{e, E} K_{k,l,\tilde \Omega, \sigma;+}\right \}}\cos{\left\{\pi_{\tilde \Omega}\mu_{e, E} K_{k,l,\tilde \Omega, \sigma;-}\right \}}}{\left[K_{k,l,\tilde \Omega, \sigma;+}\mbox{Sinh}_{l,\tilde \Omega_{\sigma}} \right]^{2}} \times \cr &&\times [\;\; \mp \frac{l}{(\tilde \Omega_{\sigma})^{2}}\left(\mathcal L_{e,B, \theta, M} - \frac{\tilde \omega_{c}}{2}\mathcal K_{\omega_{c}, e, \theta, B, M} \right)\mbox{Sinh}_{l,\tilde \Omega_{\sigma}} -\cr && - K_{k,l,\tilde \Omega, \sigma;+} \frac{2\pi^{2}l} {\beta \hbar }\mbox{Cosh}_{l, \tilde \Omega_{\sigma}} \times \frac{1}{2(\tilde \Omega_{\sigma})^{2}} \left(\mathcal K_{\omega_{c}, e, \theta, B, M} \pm \mathcal I_{e, B, \theta, M}\right) \; \; ] - \cr && - \frac{\sin{\left\{\pi_{\tilde \Omega}\mu_{e, E} K_{k,l,\tilde \Omega, \sigma;-}\right \}}\cos{\left\{\pi_{\tilde \Omega}\mu_{e, E} K_{k,l,\tilde \Omega, \sigma;+}\right \}}}{\left[K_{k,l,\tilde \Omega, \sigma;-}\mbox{Sinh}_{l,\tilde \Omega_{\sigma}} \right]^{2}} \times \cr &&\times [\; \; \pm \frac{l}{(\tilde \Omega_{\sigma})^{2}}\left(\mathcal L_{e,B, \theta, M} - \frac{\tilde \omega_{c}}{2}\mathcal K_{\omega_{c}, e, \theta, B, M}\right)\mbox{Sinh}_{l,\tilde \Omega_{\sigma}} - \cr && - K_{k,l,\tilde \Omega, \sigma;-} \frac{2\pi^{2}l} {\beta \hbar } \mbox{Cosh}_{l, \tilde \Omega_{\sigma}} \times \frac{1}{2(\tilde \Omega_{\sigma})^{2}} \left(\mathcal K_{\omega_{c}, e, \theta, B, M} \pm \mathcal I_{e, B, \theta, M}\right)\; \; ] \quad ] ; \end{eqnarray} \end{itemize} with \begin{eqnarray}{\label{quant00}} \Theta_{M, \omega_{c}, \theta} = \frac{M\omega^{2}_{c} \theta}{4} + M \omega^{2}_{0}\theta,\qquad B_{\theta} = \frac{e\theta}{2c}\left(\frac{eB}{4c}\theta - 1\right), \end{eqnarray} \begin{eqnarray}{\label{quant02}} \pi_{\tilde \Omega} = \frac{\pi}{\hbar \tilde \Omega}, \qquad \pi_{\tilde \Omega_{\sigma}} = \frac{\pi}{\hbar \tilde \Omega_{\sigma}},\qquad k \pm \frac{\tilde \Omega}{ \tilde \Omega_{\sigma}}l = K_{k,l,\tilde 
\Omega, \sigma;\pm}, \end{eqnarray} \begin{eqnarray}{\label{quant04}} \sinh\left\{\frac{2\pi^{2}l} {\beta \hbar \tilde \Omega_{\sigma}}\right\} = \mbox{Sinh}_{l,\tilde \Omega_{\sigma}} , \qquad \sinh\left\{\frac{2\pi^{2}k} {\beta \hbar \tilde \Omega}\right\} = \mbox{Sinh}_{k,\tilde \Omega}, \end{eqnarray} \begin{eqnarray}{\label{quant05}} \cosh\left\{\frac{2\pi^{2}l} {\beta \hbar \tilde \Omega_{\sigma}}\right\} = \mbox{Cosh}_{l,\tilde \Omega_{\sigma}} , \qquad \cosh\left\{\frac{2\pi^{2}k} {\beta \hbar \tilde \Omega}\right\} = \mbox{Cosh}_{k,\tilde \Omega}, \end{eqnarray} \begin{eqnarray}{\label{quant06}} \mathcal I_{e, B, \theta, M} = \frac{e}{Mc}\left(1 - \frac{eB}{2c}\theta\right),\qquad \mathcal K_{\omega_{c}, e, \theta, B, M} = \frac{\omega_{c} e\tilde \Omega}{Mc\Omega^{2}} + \frac{\Omega^{2}}{\tilde \Omega}\frac{e\theta}{4c}\left(\frac{eB}{4c}\theta - 1\right), \end{eqnarray} \begin{eqnarray}{\label{quant08}} \mathcal L_{e,B, \theta, M} = \frac{e\tilde \Omega}{2Mc}\left(1 - \frac{eB}{2c}\theta\right). \end{eqnarray} \begin{rmk} In the situation where the $\theta-$ parameter is switched off, we obtain: \begin{eqnarray} \Theta_{M, \omega_{c}, \theta = 0} = 0, \qquad B_{\theta = 0} = 0, \end{eqnarray} \begin{eqnarray} \mathcal I_{e, B, \theta=0, M} = \frac{e}{Mc}, \qquad \mathcal K_{\omega_{c}, e, \theta=0, B, M} = \frac{e\omega_{c} }{Mc\Omega}, \qquad \mathcal L_{e,B, \theta= 0, M} = \frac{e \Omega}{2Mc}, \end{eqnarray} where we get $\tilde \Omega \equiv \Omega, \quad \tilde \omega_{c} \equiv \omega_{c}$ and $\tilde \Omega_{\sigma} \equiv \Omega_{\sigma} = \frac{\Omega \pm \omega_{c}}{2}$. The term $k_{e,E} = - \frac{1}{2}e(E_{1} x_{0} + E_{2} y_{0})$ giving $\mu_{e,E}$ contributes to the chemical potential $\mu$. 
For $k_{e, E} = 0$, the expressions (\ref{gamma0})-(\ref{gamma2}) coincide with those derived in \cite{ishikawa-fukuyama} for the Hamiltonian describing two-dimensional electrons confined by an isotropic harmonic potential in a perpendicular magnetic field, which is linked to the Landau problem in the commutative case and for which coherent states have been constructed on the Fock Hilbert space \cite{gazeau-hsiao-jellal}. Moreover, the analysis performed in \cite{jan-scholtz} for an ideal fermion gas may also help in understanding the behavior of the physical system studied here. \end{rmk} \section{Matrix vector coherent states for $\mathbb H^{dim}$} \subsection{Construction} Consider the set of continuous mappings $F_{n}(\mathfrak Z): \mathcal M_{4}(\mathbb C)\rightarrow \mathcal B(\mathcal H_{c})$ satisfying \begin{eqnarray}{\label{bound00}} 0 < \mathcal N(\mathfrak Z) = \sum_{n \in \mathbb N} tr_{c} [|F_{n}(\mathfrak Z)|^{2}] < \infty, \end{eqnarray} where $\mathcal M_{4}(\mathbb C)$ is the space of $4 \times 4$ complex matrices. It then follows that the linear map given by \begin{eqnarray} T(\mathfrak Z): \mathbb C^{4} &\rightarrow& \mathbb C^{4} \otimes \mathcal H_{c} \cr \chi^{j} &\mapsto& T(\mathfrak Z)\chi^{j} = (\mathcal N(\mathfrak Z))^{-1/2} \sum_{n \in \mathbb N} F_{n}(\mathfrak Z) |\chi^{j}, n \rangle, \qquad j=1,2,3,4 \end{eqnarray} is bounded. Set \begin{eqnarray}{\label{bound02}} F_{n}(\mathfrak Z)|\chi^{j}, \tilde n\rangle &=& \frac{\mathfrak Z^{n} \bar{\mathfrak Z}^{\tilde n} }{\sqrt{R(n)R(\tilde n)}}|\chi^{j}, \tilde n\rangle \end{eqnarray} where $\mathfrak Z = diag(z_{1}, z_{2}, z_{3}, z_{4})$, $z_{j} = r_{j}e^{\imath \theta_{j}}$ with $r_{j} \geq 0, \theta_{j} \in [0,2\pi)$ and $R(n) = n !\mathbb I_{4}$. Let $\mathfrak W = diag(w_{1}, w_{2}, w_{3}, w_{4}), w_{j} = \rho_{j}e^{\imath \varphi_{j}}$ where $\rho_{j} \geq 0, \varphi_{j} \in [0,2\pi)$ and set $R(m) = m!\mathbb I_{4}$.
With this setup and by analogy with the constructions provided in \cite{ali-englis-gazeau, thirulo, ben-scholtz}, the set of vectors formally given by \begin{eqnarray}{\label{ncvcs00}} |\mathfrak Z, \mathfrak W, \tau, j, \tilde n, \tilde m) &=& (\mathcal N(\mathfrak Z, \mathfrak W))^{-1/2}\sum_{n,m=0}^{\infty} \frac{\mathfrak Z^{n} \bar{\mathfrak Z}^{\tilde n} }{\sqrt{R(n)R(\tilde n)}} \frac{\mathfrak W^{m} \bar{\mathfrak W}^{\tilde m} }{\sqrt{R(m)R(\tilde m)}} e^{-\imath \tau \tilde E_{n,m}}\cr &&|\chi^{j}\rangle \otimes |\tilde n \rangle \langle\tilde m| \otimes |m\rangle \langle n| \end{eqnarray} forms a set of VCS on $\mathbb C^{4} \otimes \mathcal H_{q} \otimes \mathcal H_{q}$. These states are normalized to unity, \begin{eqnarray}{\label{normnvcs00}} \sum_{j=1}^{4}\sum_{\tilde n, \tilde m = 0}^{\infty} (\mathfrak Z, \mathfrak W, \tau, j, \tilde n, \tilde m|\mathfrak Z, \mathfrak W, \tau, j, \tilde n, \tilde m) = 1 \end{eqnarray} with \begin{eqnarray} \mathcal N(\mathfrak Z, \mathfrak W) = e^{2(r^{2}_{1} + \rho^{2}_{1})} + e^{2(r^{2}_{2} + \rho^{2}_{2})} + e^{2(r^{2}_{3} + \rho^{2}_{3})} + e^{2(r^{2}_{4}+ \rho^{2}_{4})}. \end{eqnarray} Let $D = \{(z_{1}, z_{2}, z_{3}, z_{4}) \in \mathbb C^4 \,| \; \, |z_{j}|< \infty, j=1,2,3,4 \}$, $\mathcal D = \{(w_{1}, w_{2}, w_{3}, w_{4}) \in \mathbb C^4 \,| \; \, |w_{j}|< \infty, j=1,2,3,4 \}$.
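The closed form of $\mathcal N(\mathfrak Z, \mathfrak W)$ can be cross-checked numerically: summing the squared moduli of the components of (\ref{ncvcs00}) over $n, m, \tilde n, \tilde m$ and $j$ must reproduce (\ref{normnvcs00}). A stdlib-Python sketch, with arbitrarily chosen sample radii $(r_{j}, \rho_{j})$:

```python
import math

def component_norm_sq(r, rho, nmax=40):
    # Sum_{n, nt} r^(2(n+nt))/(n! nt!) * Sum_{m, mt} rho^(2(m+mt))/(m! mt!)
    # = e^(2 r^2) e^(2 rho^2), up to truncation of the series at nmax.
    s = sum(r**(2 * n) / math.factorial(n) for n in range(nmax))
    t = sum(rho**(2 * m) / math.factorial(m) for m in range(nmax))
    return (s * s) * (t * t)

# sample radii (r_j, rho_j) for the four diagonal entries -- arbitrary values
radii = [(0.3, 0.7), (0.5, 0.2), (1.0, 0.4), (0.8, 0.9)]
norm_factor = sum(math.exp(2 * (r * r + p * p)) for r, p in radii)  # N(Z, W)
total = sum(component_norm_sq(r, p) for r, p in radii) / norm_factor
print(total)  # ≈ 1.0
```

With the normalization factor $\sum_{j} e^{2(r_{j}^{2} + \rho_{j}^{2})}$, the sum of squared components returns $1$ for any choice of radii, as the truncated series confirms.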
\begin{pro} The VCS (\ref{ncvcs00}) satisfy on the quantum Hilbert space $\mathbb C^{4} \otimes \mathcal H_{q} \otimes \mathcal H_{q}$ a resolution of the identity as follows: \begin{eqnarray}{\label{ncres03}} &&\sum_{j=1}^{4}\sum_{\tilde m = 0}^{\infty}\sum_{\tilde n = 0}^{\infty}\frac{1}{\tilde m !\tilde n !} \cr &&\int_{D \times \mathcal D} d\mu(\mathfrak Z, \mathfrak W)(\overrightarrow{\partial_{\bar z_{j}}})^{\tilde n} (\overrightarrow{\partial_{\bar w_{j}}})^{\tilde m}[\mathcal N(\mathfrak Z, \mathfrak W)|\mathfrak Z, \mathfrak W, \tau, j, \tilde n, \tilde m) (\mathfrak Z, \mathfrak W, \tau, j, \tilde n, \tilde m|] (\overleftarrow{\partial_{z_{j}}})^{\tilde n}(\overleftarrow{\partial_{w_{j}}})^{\tilde m} \cr && = \mathbb I_{4} \otimes \mathbb I_{q} \otimes \mathbb I_{q} \end{eqnarray} where the measure $d\mu(\mathfrak Z, \mathfrak W)$ is given on $D \times \mathcal D$ by \begin{eqnarray}{\label{ncres05}} d\mu(\mathfrak Z, \mathfrak W) = \frac{1}{(2\pi)^{8}} \prod_{j=1}^{4}\lambda(r_{j}) \varpi(\rho_{j})dr_{j}d\rho_{j} d\theta_{j}d\varphi_{j}. 
\end{eqnarray} \end{pro} {\bf Proof:} In order to prove (\ref{ncres03}), let us first expand the integrand as \begin{eqnarray}{\label{bound01}} &&\sum_{j=1}^{4}(\overrightarrow{\partial_{\bar{z}_{j}}})^{\tilde n} (\overrightarrow{\partial_{\bar{w}_{j}}})^{\tilde m}[\mathcal N(\mathfrak Z, \mathfrak W)|\mathfrak Z, \mathfrak W, \tau, j, \tilde n, \tilde m) (\mathfrak Z, \mathfrak W, \tau, j, \tilde n, \tilde m|] (\overleftarrow{\partial_{z_{j}}})^{\tilde n}(\overleftarrow{\partial_{w_{j}}})^{\tilde m} \cr &=& \sum_{j=1}^{4}\sum_{n,n', m, m' =0}^{\infty}(\overrightarrow{\partial_{\bar{z}_{j}}})^{\tilde n}(\overrightarrow{\partial_{\bar{w}_{j}}})^{\tilde m} (F_{n}(\mathfrak Z)F_{m}(\mathfrak W)|\chi^{j}\rangle \otimes |\tilde n \rangle \langle\tilde m| \otimes |m\rangle \langle n| )\cr && \times (F_{n'}(\mathfrak Z)F_{m'}(\mathfrak W)|\chi^{j}\rangle \otimes |\tilde n\rangle \langle\tilde m| \otimes |m'\rangle \langle n'| )^{\dag} (\overleftarrow{\partial_{z_{j}}})^{\tilde n}(\overleftarrow{\partial_{w_{j}}})^{\tilde m} \cr &:=& \sum_{j=1}^{4}\sum_{n,n', m, m'=0 }^{\infty}[(\partial_{\bar{z}_{j}})^{\tilde n}(F_{n}(\mathfrak Z)(\partial_{\bar{w}_{j}})^{\tilde m}F_{m}(\mathfrak W) |\chi^{j}, \tilde n, \tilde m) (\chi^{j}, \tilde n, \tilde m|\cr && \times (\partial_{z_{j}})^{\tilde n}((F_{n'}(\mathfrak Z))^{*}(\partial_{w_{j}})^{\tilde m}(F_{m'}(\mathfrak W))^{*}]\otimes | m \rangle \langle n|n'\rangle \langle m'|, \end{eqnarray} call $\mathcal I$ the operator on the left hand side of (\ref{ncres03}) and choose arbitrary vectors $\Psi, \Psi', \Phi, \Phi'$ on the Hilbert space $\mathcal H_{q} \otimes \mathcal H_{q}$. 
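The moment problems solved at the end of this proof amount to the Gaussian moment identity $\int_{0}^{\infty} 2 r^{2n+1} e^{-r^{2}}\, dr = n!$, which fixes the weights $\lambda(r_{j}) = 2e^{-r_{j}^{2}}$ and $\varpi(\rho_{j}) = 2e^{-\rho_{j}^{2}}$. A quick stdlib-Python sketch of this identity (midpoint rule; the cutoff and step count are arbitrary choices):

```python
import math

def gaussian_moment(n, rmax=10.0, steps=50_000):
    """Midpoint-rule approximation of int_0^inf 2 r^(2n+1) e^(-r^2) dr."""
    h = rmax / steps
    total = 0.0
    for i in range(steps):
        r = (i + 0.5) * h
        total += 2.0 * r ** (2 * n + 1) * math.exp(-r * r)
    return total * h

for n in range(6):
    print(n, gaussian_moment(n), math.factorial(n))  # the two values agree
```

The substitution $t = r^{2}$ turns the integral into $\int_{0}^{\infty} t^{n} e^{-t}\, dt = \Gamma(n+1) = n!$, which the quadrature reproduces.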
Then, using (\ref{bound01}), we have \begin{eqnarray} (\Psi, \Psi'|\mathcal I|\Phi,\Phi') &=& \sum_{j=1}^{4}\sum_{\tilde m = 0}^{\infty}\sum_{\tilde n = 0}^{\infty}\frac{1}{\tilde m !\tilde n !} \int_{D \times \mathcal D} d\mu(\mathfrak Z, \mathfrak W) \times \cr && \times \sum_{n,n', m, m'=0}^{\infty} (\Psi|[(\partial_{\bar{z}_{j}})^{\tilde n} (F_{n}(\mathfrak Z)(\partial_{\bar{w}_{j}})^{\tilde m}F_{m}(\mathfrak W) |\chi^{j}, \tilde n, \tilde m) \cr && (\chi^{j}, \tilde n, \tilde m| (\partial_{z_{j}})^{\tilde n}((F_{n'}(\mathfrak Z))^{*}(\partial_{w_{j}})^{\tilde m}(F_{m'}(\mathfrak W))^{*}]|\Phi) \otimes (\Psi'| m \rangle \langle n| n'\rangle \langle m'|\Phi'). \nonumber \\ \end{eqnarray} The use of the boundedness of the operator $T$ and of the fact that $\sum_{j=1}^{4}|\chi^{j}\rangle \langle \chi^{j}| = \mathbb I_{4}$ allows to interchange the sum over $j$ with the integral and the four sums over $n,n'$ and $m,m'$, respectively. Thus, \begin{eqnarray}{\label{ncres04}} && (\Psi, \Psi'|\mathcal I|\Phi,\Phi') =\sum_{\tilde m = 0}^{\infty}\sum_{\tilde n = 0}^{\infty}\frac{1}{\tilde m !\tilde n !}\int_{D \times \mathcal D} d\mu(\mathfrak Z, \mathfrak W) \times \cr &&\times \sum_{n,n', m, m'=0}^{\infty} [\sum_{j=1}^{4} (\Psi|\chi^{j}\rangle[(\partial_{\bar{z}_{j}})^{\tilde n}(F_{n}(\mathfrak Z)(\partial_{\bar{w}_{j}})^{\tilde m}F_{m}(\mathfrak W) |\tilde n, \tilde m) \cr && (\tilde n, \tilde m| (\partial_{z_{j}})^{\tilde n}((F_{n'}(\mathfrak Z))^{*}(\partial_{w_{j}})^{\tilde m}(F_{m'}(\mathfrak W))^{*}]\langle \chi^{j}|\Phi)] \otimes (\Psi'| m \rangle \langle n|n'\rangle \langle m'|\Phi') \cr &=& \sum_{\tilde m = 0}^{\infty}\sum_{\tilde n = 0}^{\infty}\frac{1}{\tilde m !\tilde n !} \int_{D \times \mathcal D} d\mu(\mathfrak Z, \mathfrak W) \sum_{n,n', m, m'=0}^{\infty}\cr &&diag(\frac{ \tilde n! z^{n}_{1}}{\sqrt{n! \tilde n!}} \frac{\tilde m!w^{m}_{1}}{\sqrt{m! \tilde m!}}, \dots, \frac{\tilde n!z^{n}_{4}}{\sqrt{n! \tilde n!}} \frac{\tilde m! w^{m}_{4}}{\sqrt{m! 
\tilde m!}}) \times diag(\frac{\tilde n!\bar z^{n'}_{1}}{\sqrt{n'! \tilde n!}} \frac{\tilde m!\bar w^{m'}_{1}}{\sqrt{m'! \tilde m!}}, \dots, \frac{\tilde n!\bar z^{n'}_{4}}{\sqrt{n'! \tilde n!}} \frac{\tilde m!\bar w^{m'}_{4}}{\sqrt{m'! \tilde m!}})\cr && (\Psi|\tilde n, \tilde m) (\tilde n, \tilde m|\Phi)\otimes (\Psi'|n',m') (n,m|\Phi') \cr &=& \sum_{m,\tilde m = 0}^{\infty}\sum_{n,\tilde n = 0}^{\infty} (\Psi|diag(2\pi \int_{0}^{\infty}r_{1}dr_{1}W(r_{1})\frac{r^{2 n}_{1}}{ n !} 2\pi \int_{0}^{\infty}\rho_{1}d\rho_{1}\tilde W(\rho_{1})\frac{\rho^{2 m}_{1}}{ m !},\cr && \dots, 2\pi \int_{0}^{\infty}r_{4}dr_{4}W(r_{4})\frac{r^{2 n}_{4}}{n !} 2\pi \int_{0}^{\infty}\rho_{4}d\rho_{4}\tilde W(\rho_{4})\frac{\rho^{2 m}_{4}}{ m !}) \langle \tilde n|\tilde n\rangle \langle\tilde m|\tilde m \rangle |\Phi) \cr &&\otimes (\Psi'|\langle n |n\rangle \langle m|m\rangle |\Phi') \cr \cr &=& (\Psi,\Psi'|\Phi,\Phi') \end{eqnarray} where the moment problems are solved by $W(r_{j}) = (1/2\pi) \lambda(r_{j}), \tilde W(\rho_{j}) = (1/2\pi)\varpi(\rho_{j})$ with $\lambda(r_{j}) = 2e^{-r^{2}_{j}}, \, \varpi(\rho_{j}) = 2e^{-\rho^{2}_{j}}$, respectively. $\hfill{\square}$ There also results this: \begin{pro} These states fulfill the following properties: \begin{itemize} \item [i-]Temporal stability \begin{eqnarray} \mathbb U(t)|\mathfrak Z, \mathfrak W, \tau, j, \tilde n, \tilde m) = |\mathfrak Z, \mathfrak W, \tau + t, j, \tilde n, \tilde m), \qquad \mathbb U(t) = e^{-\imath t \mathbb H^{dim}}. \end{eqnarray} \item [ii-]Action identity \begin{eqnarray} \sum_{j=1}^{4}\sum_{\tilde n, \tilde m = 0}^{\infty} (\mathfrak Z, \mathfrak W, \tau, j, \tilde n, \tilde m|\mathbb H^{dim}|\mathfrak Z, \mathfrak W, \tau, j , \tilde n, \tilde m) = \frac{1}{2}\left(\frac{\tilde \Omega_{+}}{\tilde \Omega}|\mathfrak Z|^{2} + \frac{\tilde \Omega_{-}}{\tilde \Omega} |\mathfrak W|^{2}+ 1\right). 
\end{eqnarray} \end{itemize} \end{pro} \subsection{Quaternionic vector coherent states} \subsubsection{Construction} We briefly discuss now the QVCS construction and their connection with the studied VCS. In (\ref{ncvcs00}), set $\mathfrak Z = diag(z,\bar z, z, \bar z)$ and $\mathfrak W = diag(w, \bar w, w, \bar w)$ where $z = re^{-\imath \tilde \phi}, w = \rho e^{-\imath \tilde \varphi}$ with $r, \rho \geq 0, \, \tilde \phi, \tilde \varphi \in [0, 2\pi)$. Consider $u, v \in SU(2)$ and take $\mathcal Z = U \mathfrak Z U^{\dag}, \mathcal W = V \mathfrak W V^{\dag}$ where $U = diag(u,u), \, V = diag(v,v)$. Introduce the quaternions $\mathfrak q = A(r)e^{\imath \vartheta \Theta(\hat n )}, \mathfrak Q = B(\rho)e^{\imath \gamma \tilde \Theta(\hat k) }$ with $\Theta(\hat n ) = diag(\sigma(\hat n ), \sigma(\hat n )) , \, \tilde\Theta(\hat k ) = diag(\tilde \sigma(\hat k ), \tilde \sigma(\hat k ))$, where $A(r) = r\mathbb I_{4}, \, B(\rho) = \rho\mathbb I_{4}$ and \begin{eqnarray} \sigma(\hat n ) = \left(\begin{array}{cc} \cos{\phi} & e^{\imath \eta}\sin{\phi} \\ e^{-\imath \eta}\sin{\phi} & -\cos{\phi} \end{array} \right), \quad \tilde \sigma(\hat k) = \left(\begin{array}{cc} \cos{\varphi} & e^{\imath \varrho}\sin{\varphi} \\ e^{-\imath \varrho}\sin{\varphi} & -\cos{\varphi} \end{array} \right) \end{eqnarray} where $\phi, \varphi \in [0, \pi]$ and $\vartheta, \gamma, \eta,\varrho \in [0, 2\pi)$. 
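Note that both matrices above are involutive; a direct computation gives \begin{eqnarray} (\sigma(\hat n ))^{2} = \left(\begin{array}{cc} \cos^{2}{\phi} + \sin^{2}{\phi} & 0 \\ 0 & \sin^{2}{\phi} + \cos^{2}{\phi} \end{array} \right) = \mathbb I_{2} \end{eqnarray} and likewise $(\tilde \sigma(\hat k))^{2} = \mathbb I_{2}$, so that $(\Theta(\hat n ))^{2} = \mathbb I_{4}$ and $(\tilde\Theta(\hat k ))^{2} = \mathbb I_{4}$. Expanding the exponential series then yields the Euler-type forms $e^{\imath \vartheta \Theta(\hat n )} = \mathbb I_{4}\cos{\vartheta} + \imath \Theta(\hat n )\sin{\vartheta}$ and $e^{\imath \gamma \tilde\Theta(\hat k )} = \mathbb I_{4}\cos{\gamma} + \imath \tilde\Theta(\hat k )\sin{\gamma}$, which underlie the identifications made below.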
From the scheme developed in \cite{thirulogasanthar-ali}, since $u, v$ are given as $u = u_{\xi_{1}}u_{\phi_{1}}u_{\xi_{2}}, \, v = v_{\zeta_{1}}v_{\phi_{2}}v_{\zeta_{2}}$ with $ u_{\xi_{1}} = diag(e^{\imath \xi_{1}/2}, e^{-\imath \xi_{1}/2}), \, u_{\xi_{2}} = diag(e^{\imath \xi_{2}/2}, e^{-\imath \xi_{2}/2}), \, v_{\zeta_{1}} = diag(e^{\imath \zeta_{1}/2}, e^{-\imath \zeta_{1}/2}), v_{\zeta_{2}} = diag(e^{\imath \zeta_{2}/2}, e^{-\imath \zeta_{2}/2})$, and \begin{eqnarray} u_{\phi_{1}} = \left(\begin{array}{cc} \cos{\frac{\phi_{1}}{2}} & \imath \sin{\frac{\phi_{1}}{2}} \\ \imath \sin{\frac{\phi_{1}}{2}} & \cos{\frac{\phi_{1}}{2}} \end{array} \right), \qquad v_{\phi_{2}} = \left(\begin{array}{cc} \cos{\frac{\phi_{2}}{2}} & \imath \sin{\frac{\phi_{2}}{2}} \\ \imath \sin{\frac{\phi_{2}}{2}} & \cos{\frac{\phi_{2}}{2}} \end{array} \right), \quad \xi_{1}, \xi_{2}, \zeta_{1}, \zeta_{2} \in [0,2\pi), \end{eqnarray} for $ \xi_{1}= \xi_{2} = \eta$ and $\zeta_{1}= \zeta_{2} = \varrho$, we get the following identifications: $\mathcal Z = r(\mathbb I_{4}\cos{\vartheta} + \imath \Theta(\hat n )\sin{\vartheta}) = \mathfrak q, \, \mathcal W = \rho(\mathbb I_{4}\cos{\gamma} + \imath \tilde\Theta(\hat k )\sin{\gamma}) = \mathfrak Q$. Then, the QVCS obtained as $|U\mathfrak Z U^{\dag}, V \mathfrak W V^{\dag}, \tau, j, \tilde n, \tilde m) = |\mathfrak q, \mathfrak Q, \tau, j, \tilde n, \tilde m)$ can be written as \begin{eqnarray}{\label{nqvcs00}} |\mathfrak q, \mathfrak Q, \tau, j, \tilde n, \tilde m) = (\mathcal N(r, \rho))^{-1/2}\sum_{n,m=0}^{\infty} \frac{\mathfrak q^{n} \bar{\mathfrak q}^{\tilde n} }{\sqrt{n !\tilde n !}} \frac{\mathfrak Q^{m} \bar{\mathfrak Q}^{\tilde m} }{\sqrt{m !\tilde m !}} e^{-\imath \tau \tilde E_{n,m}} |\chi^{j}\rangle \otimes | \tilde n \rangle \langle \tilde m| \otimes |m\rangle \langle n|. 
\end{eqnarray} They satisfy a normalization condition to unity given by \begin{eqnarray} \sum_{j=1}^{4}\sum_{\tilde n, \tilde m = 0}^{\infty} (\mathfrak q, \mathfrak Q, \tau, j, \tilde n, \tilde m|\mathfrak q, \mathfrak Q, \tau, j, \tilde n, \tilde m) = 1 \end{eqnarray} which provides $\mathcal N(r, \rho) = 4e^{2(r^{2} + \rho^{2})}$. \begin{pro} The QVCS (\ref{nqvcs00}) fulfill a resolution of the identity property on $\mathbb C^{4} \otimes \mathcal H_{q} \otimes \mathcal H_{q}$ given by \begin{eqnarray}{\label{qvcsresolu}} &&\sum_{j=1}^{4}\sum_{\tilde m = 0}^{\infty}\sum_{\tilde n = 0}^{\infty}\frac{1}{\tilde m !\tilde n !} \times \cr &&\times \int_{D_{1} \times D_{2}} d\mu(\mathfrak q, \mathfrak Q)(\overrightarrow{\partial_{r}})^{\tilde n} (\overrightarrow{\partial_{\rho}})^{\tilde m}[W(r,\rho)|\mathfrak q, \mathfrak Q, \tau, j, \tilde n, \tilde m) (\mathfrak q, \mathfrak Q, \tau, j, \tilde n, \tilde m|] (\overleftarrow{\partial_{r}})^{\tilde n}(\overleftarrow{\partial_{\rho}})^{\tilde m} \cr && = \mathbb I_{4} \otimes \mathbb I_{q} \otimes \mathbb I_{q} \end{eqnarray} where $d\mu(\mathfrak q, \mathfrak Q) = \frac{1}{16 \pi^{2}}rdr \rho d\rho (\sin{\phi})d\phi d\eta d\vartheta (\sin{\varphi})d\varphi d\varrho d\gamma$ on $D_{1} \times D_{2};$ \end{pro} $D_{1} = \{(r,\phi, \eta, \vartheta)| 0 \leq r < \infty, 0 \leq \phi \leq \pi, 0 \leq \eta, \vartheta < 2\pi\}$ and $D_{2} = \{(\rho,\varphi, \varrho, \gamma)| 0 \leq \rho < \infty, 0 \leq \varphi \leq \pi, 0 \leq \varrho, \gamma < 2\pi\}$. The moment problem issued from (\ref{qvcsresolu}), stated as follows: \begin{eqnarray} \int_{0}^{\infty} \int_{0}^{\infty} \frac{ 4 \pi^{2} W(r, \rho)}{ \mathcal N(r, \rho)} \frac{r^{2 n}}{ n !} \frac{\rho^{2 m}}{ m !} rdr \rho d\rho = 1 \end{eqnarray} is solved with \begin{eqnarray} W(r, \rho) = \frac{1}{\pi^{2}} \mathcal N(r, \rho)e^{-(r^{2}+\rho^{2})}. 
\end{eqnarray} A connection with the Weyl-Heisenberg group is realized by considering the unitary operators given by \begin{eqnarray} U_{R}(0, \mathfrak q) &=& e^{\left[-\mathfrak q \otimes a^{\dag}_{R} + \mathfrak q^{\dag} \otimes a_{R}\right]} = e^{-1/2\left[-\mathfrak q \otimes a^{\dag}_{R}, \mathfrak q^{\dag} \otimes a_{R}\right]} e^{\mathfrak q^{\dag} \otimes a_{R}} e^{- \mathfrak q \otimes a^{\dag}_{R}} \cr U_{L}(0, \mathfrak Q) &=& e^{\left[\mathfrak Q \otimes d^{\dag}_{L} - \mathfrak Q^{\dag} \otimes d_{L}\right]} = e^{-1/2\left[\mathfrak Q \otimes d^{\dag}_{L}, -\mathfrak Q^{\dag} \otimes d_{L}\right]} e^{\mathfrak Q \otimes d^{\dag}_{L}} e^{-\mathfrak Q^{\dag} \otimes d_{L}} \end{eqnarray} such that \begin{eqnarray} |\mathfrak q, \mathfrak Q, \tau, j, \tilde n, \tilde m) = \frac{e^{-(r^{2} + \rho^{2})/2}}{2}\mathbb U(\tau)\left[ \frac{\bar{\mathfrak q}^{\tilde n} \bar{\mathfrak Q}^{\tilde m} }{\sqrt{\tilde n !\tilde m !}} |\chi^{j}\rangle \otimes |\tilde n\rangle \langle\tilde m| \otimes U_{L}(0, \mathfrak Q)|0 \rangle \langle 0|U_{R}(0, \mathfrak q)\right], \end{eqnarray} where the operators $a_{R}$ and $d_{L}$ act on a given state $| \tilde n \rangle \langle \tilde m| \otimes |m\rangle \langle n|$ as follows: \begin{eqnarray}{\label{vec}} a_{R}| \tilde n \rangle \langle \tilde m| \otimes |m\rangle \langle n| &:=& | \tilde n \rangle \langle \tilde m| \otimes |m\rangle \langle n|a \cr &=& \sqrt{n+1}|\tilde n \rangle \langle \tilde m| \otimes |m \rangle \langle n+1|, \cr d_{L}| \tilde n \rangle \langle \tilde m| \otimes |m\rangle \langle n| &:=& | \tilde n \rangle \langle \tilde m| \otimes d|m\rangle \langle n| \cr &=& \sqrt{m}|\tilde n \rangle \langle \tilde m| \otimes |m-1 \rangle \langle n|. 
\end{eqnarray} \subsubsection{QVCS statistical properties} Let us consider the operators given on $\mathbb C^{4} \otimes \mathcal H_{q} \otimes \mathcal H_{q}$ by \begin{eqnarray} \hat P_{X} = \mathbb I_{4} \otimes \frac{-\imath \hbar }{\sqrt{2 \theta}}[a_{R} - a^{\dag}_{R}, \ .], \qquad \hat P_{Y} = \mathbb I_{4} \otimes \frac{-\hbar }{\sqrt{2 \theta}}[a_{R} + a^{\dag}_{R}, \ .], \end{eqnarray} \begin{eqnarray} \hat X = \mathbb I_{4} \otimes \sqrt{\frac{\theta}{2}}[a_{R}+a^{\dag}_{R}], \qquad \hat Y = \mathbb I_{4} \otimes \imath \sqrt{\frac{\theta}{2}}[a^{\dag}_{R} - a_{R}]. \end{eqnarray} From (\ref{vec}), we obtain \begin{eqnarray} [a_{R} - a^{\dag}_{R}, \ |\tilde n \rangle \langle \tilde m| \otimes | m \rangle\langle n|] = \sqrt{n+1}|\tilde n \rangle \langle \tilde m| \otimes |m \rangle \langle n+1| - \sqrt{n}|\tilde n \rangle \langle \tilde m| \otimes |m \rangle \langle n|. \end{eqnarray} Denote the expectation value of an operator by $\langle \cdot \rangle = \sum_{\tilde n, \tilde m = 0}^{\infty} (\mathfrak q, \mathfrak Q, j, \tilde n, \tilde m|\cdot|\mathfrak q, \mathfrak Q, j, \tilde n, \tilde m)$. We get the following expressions: \begin{eqnarray}{\label{quad00}} \langle \hat P_{X} \rangle = \pm\frac{\hbar }{2\sqrt{2 \theta}}r\cos{(\phi) \sin{(\eta)}}, \qquad \langle \hat P^{2}_{X} \rangle = \frac{\hbar^{2} }{2 \theta}[r^{2}\sin^{2}(\eta) +\frac{1}{4}], \end{eqnarray} \begin{eqnarray}{\label{quad02}} \langle \hat P_{Y} \rangle = - \frac{\hbar }{2\sqrt{2 \theta}}[r \cos (\eta) ], \qquad \langle \hat P^{2}_{Y} \rangle = \frac{\hbar^{2} }{2 \theta}[r^{2}\cos^{2}(\eta) + \frac{1}{4}] \end{eqnarray} from which result the relations \begin{eqnarray}{\label{quad03}} (\Delta \hat P_{X} )^{2} &=& \frac{1}{4}\left(\frac{\hbar^{2} }{2 \theta}\right) [4r^{2}\sin^{2}(\eta) - r^{2}\cos^{2}(\phi)\sin^{2}(\eta) +1], \cr (\Delta \hat P_{Y} )^{2} &=& \frac{1}{4}\left(\frac{\hbar^{2} }{2 \theta}\right) [3r^{2}\cos^{2}(\eta) +1]. 
\end{eqnarray} In the same way, we obtain \begin{eqnarray} (\Delta \hat Y)^{2} &=& \frac{1}{4}\left(\frac{\theta}{2}\right) [4r^{2}\sin^{2}(\eta) - r^{2}\cos^{2}(\phi)\sin^{2}(\eta) +1], \cr (\Delta \hat X)^{2} &=& \frac{1}{4}\left(\frac{\theta}{2}\right) [3r^{2}\cos^{2}(\eta) +1] \end{eqnarray} and the following uncertainties \begin{eqnarray} [\Delta \hat X \Delta \hat Y]^{2} &=& \frac{1}{16}\left(\frac{\theta^{2}}{4}\right) F(r,\eta,\phi) = \frac{1}{16} \left[\frac{1}{4}|\langle [\hat X, \hat Y] \rangle|^{2} \right]F(r,\eta,\phi), \cr [\Delta \hat X \Delta \hat P_{X}]^{2} &=& \frac{1}{16} \left(\frac{\hbar^{2} }{4}\right) F(r,\eta,\phi) \geq \frac{1}{16} \left[ \frac{1}{4}|\langle [\hat X, \hat P_{X}] \rangle|^{2}\right], \cr [\Delta \hat Y \Delta \hat P_{Y}]^{2} & = & \frac{1}{16} \left(\frac{\hbar^{2} }{4}\right) F(r,\eta,\phi) \geq \frac{1}{16} \left[ \frac{1}{4}|\langle [\hat Y, \hat P_{Y}] \rangle|^{2}\right], \cr [\Delta \hat P_{X} \Delta \hat P_{Y}]^{2} &= & \frac{1}{16} \left(\frac{\hbar^{4} }{4\theta^{2}}\right) F(r,\eta,\phi) \geq \frac{1}{16} \left[ \frac{1}{4}|\langle [\hat P_{X}, \hat P_{Y}] \rangle|^{2}\right] = 0, \end{eqnarray} where \begin{eqnarray} F(r,\eta,\phi) = [3r^{2}\cos^{2}(\eta) +1] [4r^{2}\sin^{2}(\eta) - r^{2}\cos^{2}(\phi)\sin^{2}(\eta) +1]. 
\end{eqnarray} \begin{rmk} As a matter of result checking, let us draw the reader's attention to the fact that, in \cite{ben-scholtz}, a factor \textquotedblleft 2 \textquotedblright has been omitted in the authors' expressions corresponding to (\ref{quad00})-(\ref{quad03}) for the dispersions of the momentum operators; one should read, in the denominator, the quantity $\hbar^{2}/2\theta$ instead of $\hbar^{2}/\theta.$ Furthermore, in the mentioned work, a sign \textquotedblleft - \textquotedblright should also be added in the expression of the operator $\hat P_{Y}.$ \end{rmk} \section{Concluding remarks} A matrix formulation of a Hamiltonian describing the motion of an electron in an electromagnetic field with a confining harmonic potential in a two-dimensional noncommutative space has been provided in this work. Relevant thermodynamical and statistical properties of the physical system have been studied and discussed. In this analysis, some $\theta$-modified quantities have been obtained. In the limit $\theta \rightarrow 0$, these quantities can be identified with those derived in the commutative context related to the standard Landau problem. Then, the MVCS have been constructed and analyzed with respect to the required properties. Finally, the QVCS, their connection with the VCS, and their statistical properties have been investigated and discussed. \section*{Acknowledgements} This work is partially supported by the Abdus Salam International Centre for Theoretical Physics (ICTP, Trieste, Italy) through the Office of External Activities (OEA) - \mbox{Prj-15}. The ICMPA is in partnership with the Daniel Iagolnitzer Foundation (DIF), France.
\section{Introduction} Machine learning and especially deep convolutional neural networks have become the de facto standard in a variety of computer vision-related tasks~\cite{he2016deep,krizhevsky2012imagenet}. However, the vulnerability of machine learning classifiers to carefully crafted attack points has been known since at least 2004~\cite{dalvi2004adversarial} and has gained a lot of attention in the research community lately~\cite{biggio2018wild}. Generally, attacks against machine learning classifiers can be divided into poisoning and evasion attacks~\cite{biggio2018wild}. In the former, the adversary can already tamper with the training procedure, while in the latter, the adversary tries to evade the classifier at inference time. In this paper, we deal with evasion attacks only, as especially the vulnerability of neural networks to so-called adversarial examples~\cite{Szegedy13} has caused a stir in the machine learning community. Adversarial examples are benign input points to which tiny, maliciously crafted perturbations are added. For humans, these perturbed images are often indistinguishable from their unmodified counterparts~\cite{madry2017towards}, but they trick a neural network into misclassification. In the light of modern applications, including, but not limited to, facial recognition, self-driving cars, or spam filtering, adversarial examples pose an obvious security concern. Yet, to the best of our knowledge, most of the papers on either new attack methods or new countermeasures deal with unrealistic conditions: the cost of creating adversarial examples is disregarded; the cost of training or defending the model is not considered; and the proportion of adversarial examples to be expected is either 100\% or 0\%, but nothing in between.
Taking into account the observation that all countermeasures against adversarial examples proposed so far strictly decrease the accuracy on benign images, we argue that, depending on the expected proportion of adversarial examples faced, defending a neural network classifier by one of these countermeasures might not be worth it. As a motivating example, we use the clean and robust accuracies from the famous paper~\cite{madry2017towards} to create Figure~\ref{fig:Madry}. Here, we show the expected proportion of adversarial examples ($x$-axis) and an evaluation metric (formally introduced in Sec.~\ref{sec:formalization}) that balances the classifier's adversarial robustness against its accuracy on benign inputs ($y$-axis). This shows that when expecting less than 17\% adversarial examples, the undefended model (solid line) yields a higher correct classification rate than the defended model (dashed line) and is thus favorable for the defender -- assuming that misclassifying a benign or an adversarial example induces the same penalty. \begin{figure}[t] \centering \newcommand{20}{20} \input{fig/CCR_madry} \caption{Correct classification rate on CIFAR-10 test images based on the original data in \cite{madry2017towards}, as mentioned in Sec.~\ref{sec:relatedwork}. We consider the ``wide'' architecture.} \label{fig:Madry} \end{figure} Given this drawback of all countermeasures proposed so far against adversarial examples, we raise the question of whether a rationally acting defender would actually deploy these countermeasures when facing a rationally acting adversary. We do so by proposing a game-theoretical model and a thorough analysis of a minimal instantiation. Our contributions are as follows: \begin{enumerate} \item We propose the \emph{Advanced Adversarial Classification Game} that captures all relevant properties in the competition between adversary and defender in adversarial machine learning.
Our game can be instantiated with all possible defender and adversary strategies (meaning classification models on the defender's side and attack algorithms on the adversary's side). \item We thoroughly analyze the game and identify situations where both players play pure (or mixed) strategies. \item We define two new metrics, the correct classification rate (CCR) for the defender and the attack success rate (ASR) for the adversary, and show their role in the analysis of the game. \item By starting with a rigorous mathematical formulation of an economic model and identifying all simplifications made, we justify the sufficiency and practical importance of CCR and ASR. \end{enumerate} The remainder of the paper is structured as follows: In Sec.~\ref{sec:relatedwork} we review the most relevant related work before introducing our game-theoretical model in Sec.~\ref{sec:general}. We instantiate this general model in Sec.~\ref{sec:analysis} and analyze the case where both defender and adversary have two possible strategies. We report best responses and Nash equilibria before ending with a discussion and conclusion. \section{Related Work}\label{sec:relatedwork} In the last couple of years, stronger and stronger attacks against machine learning classifiers have been developed, e.g.,~\cite{brendel2017decision,papernot2016transferability,papernot2017practical}, with new papers on this subject appearing virtually every day\footnote{More than 3500 papers can be found at: \url{https://preview.tinyurl.com/yxenrc4k}}. Naturally, with the increased interest in attacks, an increasing number of countermeasures has been proposed as well. Already the seminal work that first identified the existence of adversarial examples~\cite{Szegedy13} suggested hardening the underlying convolutional neural networks (CNN) against these adversarial examples by incorporating them into the training procedure. This method is by now commonly called \emph{adversarial training}.
It is one of the most prominent approaches in the research of CNNs that are robust against adversarial examples~\cite{madry2017towards}. Another approach to avert the danger of adversarial examples is to try to detect them either inside the CNN itself~\cite{Grosse17}, or by applying detection methods to every input object and sorting out adversarial examples before they even enter the neural network~\cite{Xu18}. Here, one important observation is that all countermeasures against adversarial examples proposed so far strictly decrease the accuracy on benign images. For example, adversarial training decreases clean accuracy from $95.2\%$ to $87.3\%$ in the ``wide'' model considered in~\cite{madry2017towards} and from $95.6\%$ to $90.0\%$ with the approach proposed in~\cite{zhang2019you}. Even the randomized defence mechanism proposed in~\cite{pinot2020randomization} decreases the accuracy on benign inputs from $88\%$ to $80\%$. (All results are for CIFAR-10.) Most of the literature on attacks against machine learning classifiers and countermeasures against these assumes that an adversary will always attack and a defender will always (try to) defend, irrespective of whether it actually pays off to do so. As soon as we assume both, adversary and defender to act rationally, and only attack and defend when it pays off, we enter the realm of game theory. Game-theoretical analysis of adversarial machine learning dates back to 2004, when Dalvi et al. analyzed the security of a machine learning-based spam detector against a strategic adversary~\cite{dalvi2004adversarial}. Here, the spam detector is a binary classifier and the adversary creates adversarial examples by perturbing some well chosen features of legitimate emails. The simultaneous move game is solved for a Nash equilibrium~\cite{nash1951non} by formulating the problem as a constrained optimization problem and solving this with a mixed linear program. 
Following up on this, researchers have framed \emph{adversarial classification} with zero-sum~\cite{globerson2006nightmare} and non-zero-sum games~\cite{dritsoula2017game} and as simultaneous move and sequential move (Stackelberg) games~\cite{bruckner2011stackelberg}. With the increasing interest of the machine learning community in deep neural networks, game-theoretical analyses of these machine learning frameworks~\cite{schuurmans2016deep}, as well as of their security properties~\cite{grosshans2015solving,pinot2020randomization}, have appeared. Interestingly, to the best of our knowledge, there is no work that incorporates costs on both the adversary's and the defender's side as well as the decreased accuracy on benign inputs mentioned above, and then analyzes the optimal strategies of both players. Probably closest to our work is Gilmer et al.'s paper titled ``Motivating the Rules of the Game for Adversarial Example Research''~\cite{gilmer2018}. Besides the title, it does not deal with game theory, although many of its arguments follow game-theoretical considerations, such as the question of who moves first, the adversary or the defender (like in a Stackelberg game), or allowing the adversary a strategy between never attacking and always attacking (as in a mixed strategy). We adapt the following definitions from the game theory literature, e.g.,~\cite{leyton2008essentials}: \begin{definition}[Mixed strategy] A \textbf{mixed strategy} is a strategy which assigns a positive probability to two or more pure strategies. A \textbf{fully} mixed strategy assigns a positive probability to all pure strategies. \end{definition} \begin{definition}[Best response] A defender's \textbf{best response} $s^* \in \StrategySetDEF$ to an adversary's strategy $r \in \StrategySetADV$ satisfies $\ensuremath{\mathrm{Utility}}^\ensuremath{\mathrm{def}\phantom{.}}\!\!(s^*, r) \geq \ensuremath{\mathrm{Utility}}^\ensuremath{\mathrm{def}\phantom{.}}\!\!(s, r)$ for all $s \in \StrategySetDEF$.
Likewise, an adversary's \textbf{best response} $r^* \in \StrategySetADV$ to a defender's strategy $s\in \StrategySetDEF$ satisfies $\ensuremath{\mathrm{Utility}}^\ensuremath{\mathrm{adv}}\!(s, r^*) \geq \ensuremath{\mathrm{Utility}}^\ensuremath{\mathrm{adv}}\!(s, r)$ for all $r \in \StrategySetADV$. \end{definition} \begin{definition}[Nash equilibrium] A \textbf{Nash equilibrium} is a strategy profile ($s^*,r^*$) where both strategies are mutual best responses. In a Nash equilibrium, neither of the two actors has an incentive to unilaterally change her strategy. \noindent A game can have zero, one, or multiple Nash equilibria. \end{definition} \section{The Advanced Adversarial Classification Game}\label{sec:general} This section introduces our game model. It starts by identifying and justifying the costs of both parties, namely those of the adversary (cf. Sec.~\ref{sub:costs_of_adversary}) and those of the defender (cf. Sec.~\ref{sub:costs_of_defender}). Once these costs are identified, we use them to formulate the game characterizing the interplay between the parties. As is usual in economic studies, we partition the utility into \textit{initial costs} ($\ensuremath{\mathrm{I}}$), which are mandatory and independent of the use of attack (or defence), \textit{ongoing costs} ($\ensuremath{\mathrm{O}}$), which scale with how many times the attack (or defence) is used, and the \textit{total reward} ($\textrm{TR}$), which is likewise assumed to be proportional to the number of uses. Hence, the utility of each actor is given by \begin{equation} \label{eq:UtilityDEFGeneral} \ensuremath{\mathrm{Utility}}^k = -\ensuremath{\mathrm{I}}^k - \ensuremath{\mathrm{O}}^k + \textrm{TR}^k, \end{equation} where $k \in \{\ensuremath{\mathrm{adv}}, \ensuremath{\mathrm{def}\phantom{.}}\!\}.$ As will become clear below, the ongoing costs $\ensuremath{\mathrm{O}}$ and the total reward $\textrm{TR}$ can be represented by a single term for both adversary and defender.
We started by denoting them separately in order to clearly communicate their origins and factors. \subsection{Adversary} \label{sub:costs_of_adversary} The adversary's \textbf{action} set is defined by the attack methods she can use, her perturbation budget, and the fraction of samples she can attack. Each action has associated properties with regard to its cost structure and the attack success rate, depending on the model to be attacked (which is in turn the defender's chosen action). \textbf{Initial costs} of an adversary, $\ensuremath{\mathrm{I}}^\ensuremath{\mathrm{adv}}$, may include gathering intelligence about the victim, stealing the targeted model, acquiring suitable data to train a surrogate model, the computational costs of training a surrogate model, hardware or software, and human capital. Notice that some costs occur even though no attack is carried out. For example, as soon as an adversary contemplates attacking a model, time is spent investigating the options, gathering information on the target, or prototyping an attack. \textbf{Ongoing costs} of an adversary, $\ensuremath{\mathrm{O}}^\ensuremath{\mathrm{adv}}$, are mainly characterized by the computational costs of the attack, i.e. of calculating the perturbations. They depend on the attack method(s), but other factors might contribute, such as a fee charged by the model provider per prediction if the adversary does not own the model (and has not trained a surrogate model). \textbf{Total rewards} of an adversary, $\textrm{TR}^\ensuremath{\mathrm{adv}}$, are the rewards the adversary obtains for attacking the defender's classifier. They can be positive when the target is successfully deceived, or negative for failed attacks.
While often close to zero, negative rewards are conceivable if every failed attack allows the defender to learn something about the adversary's strategies which helps her detect the adversary more successfully. \subsection{Defender} \label{sub:costs_of_defender} The defender has many \textbf{actions} to choose from, where each action is a combination of choices with regard to the architecture of the model, the data used to train the model, the training algorithm itself, which can already implement defence mechanisms such as adversarial training~\cite{madry2017towards}, and the inference mode~\cite{cohen2019certified}. Each unique combination of the above leads to a model, which is considered to be a pure strategy. Hence, in the context of the defender, the terms model and action are used interchangeably. Each of them has associated properties with regard to its cost structure, its accuracy on clean inputs, and its robust accuracy on adversarial examples crafted with a given attack method and strength, where the last two are the adversary's actions. The \textbf{initial costs} of the defender, $\ensuremath{\mathrm{I}}^\ensuremath{\mathrm{def}\phantom{.}}\!,$ include, but are not limited to, gathering and labeling training data, training the model (computational costs) and human capital, e.g. hiring an expert to instantiate the classification pipeline. Some of these costs, such as data acquisition, are mostly independent of the defender's strategy, while others, like the number of trained models or the complexity of the training procedure, clearly depend on it. \textbf{Ongoing costs} of the defender, $\ensuremath{\mathrm{O}}^\ensuremath{\mathrm{def}\phantom{.}}\!,$ occur constantly per classified input. They mainly correspond to the computational costs for inference of a specific model.
Ongoing costs for models in the same complexity class can be considered equivalent, but some defences~\cite{salman2019provably} recommend classifying each sample many times (subject to some randomness), which increases the ongoing costs by orders of magnitude. \textbf{Total reward} of the defender. A rational defender will only train and deploy the model when she can draw a positive reward $\textrm{TR}^\ensuremath{\mathrm{def}\phantom{.}}\!\!$ (not necessarily monetary) from its operation. As mentioned above, we assume these rewards to be described on a per-sample basis, specifically for correct outputs only. Further, it is reasonable to expect a negative reward, i.e. some penalty term, when the model's output differs from the ground truth. The extent of this penalty might differ from sample to sample and might be smaller for benign samples, but larger for samples manipulated by the adversary. Following the above considerations, we model the game as a non-zero-sum game, as the two actors' actions are clearly interdependent and one actor's positive reward is not necessarily equal to the other actor's negative reward. We further choose a simultaneous move game in which both actors decide on their strategy at the same time without having certainty about the opponent's chosen strategy. \subsection{Cost of pure strategies} \label{sec:formalization} The defender classifies a finite number of samples $n\in\mathbb{N}$, out of which a fraction $\ensuremath{r_{\max}}\in[0,1]$ is under the control of the adversary, which means she can attack them by any method of her choice.\footnote{Note that this is already a simplification, since in practice none of the parties knows how many samples the adversary can influence.} The adversary may use up to $M-1$ different attacks (we refer to the $j$-th attack as attack $j$) and may also choose to leave a sample untouched (not to attack), which is denoted as attack $M$.
The defender may choose one of $N$ different models to classify samples (we refer to the $i$-th model as model $i$). The accuracy of the $i$-th model is given by $\ensuremath{\mathrm{acc}}_{i}$ and its robustness against the $j$-th attack is given by $\ensuremath{\mathrm{rob}}_{ij}$. For expressing the costs, it will be useful to introduce two metrics: the attack success rate (ASR) and the correct classification rate (CCR). The first quantifies how successful the $j$-th attack is against the $i$-th classifier, while the second quantifies the probability of correctly classifying samples with model $i$, taking into account that samples might be attacked by attack $j$ with probability $\rho.$ Though both quantities seem similar at first glance, we show below that they allow us to compactly represent the strategies and that they play a pivotal role in the analysis of the game in Sec.~\ref{sec:analysis}. \begin{definition}[Attack Success Rate]\label{def:ASRij} We define the attack success rate $\ensuremath{\mathrm{ASR}}$ of attack $j$ against model $i$ as \begin{equation}\label{eq:ASRij} \ensuremath{\mathrm{ASR}}_{ij} = 1 - \ensuremath{\mathrm{rob}}_{ij}. \end{equation} \end{definition} \begin{definition}[Correct Classification Rate]\label{def:CCRij} We define the correct classification rate $\ensuremath{\mathrm{CCR}}$ of model $i$, where only a fraction $\rho\in[0,\ensuremath{r_{\max}}]$ of all samples is perturbed by the $j$-th adversarial attack, as \begin{equation}\label{eq:CCRij} \ensuremath{\mathrm{CCR}}_{ij}(\rho) = (1-\rho)\ensuremath{\mathrm{acc}}_i + \rho\ \ensuremath{\mathrm{rob}}_{ij}. \end{equation} \end{definition} Figure \ref{fig:Madry} shows the correct classification rate for the ``wide'' architecture in \cite{madry2017towards} with $\ensuremath{r_{\max}}=1$.
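To make Definitions~\ref{def:ASRij} and~\ref{def:CCRij} concrete, the following sketch computes both metrics and the break-even fraction at which two CCR lines, as in Figure~\ref{fig:Madry}, intersect. The clean accuracies are the ones quoted in Sec.~\ref{sec:relatedwork} for the ``wide'' model; the robust accuracies are illustrative placeholders, not values taken from the cited papers.

```python
# Illustration of the ASR and CCR definitions. Clean accuracies follow
# the numbers quoted in the text for the "wide" model (95.2% undefended,
# 87.3% adversarially trained); the robust accuracies rob_u and rob_d
# below are made-up placeholders, NOT values from the cited papers.

def asr(rob: float) -> float:
    """Attack success rate: ASR_ij = 1 - rob_ij."""
    return 1.0 - rob

def ccr(acc: float, rob: float, rho: float) -> float:
    """Correct classification rate: CCR_ij(rho) = (1 - rho) * acc_i + rho * rob_ij."""
    return (1.0 - rho) * acc + rho * rob

def break_even(acc_u: float, rob_u: float, acc_d: float, rob_d: float) -> float:
    """Fraction rho at which the defended model's CCR catches up with the
    undefended one, obtained by equating the two CCR lines and solving for rho."""
    return (acc_u - acc_d) / ((acc_u - acc_d) + (rob_d - rob_u))

rho_star = break_even(acc_u=0.952, rob_u=0.035, acc_d=0.873, rob_d=0.470)
print(f"defending pays off once more than {rho_star:.1%} of inputs are adversarial")
```

Equating the two CCR expressions gives $\rho^{*} = (\mathrm{acc}_{u} - \mathrm{acc}_{d})/((\mathrm{acc}_{u} - \mathrm{acc}_{d}) + (\mathrm{rob}_{d} - \mathrm{rob}_{u}))$; with the placeholder robustness values above, the result lands in the same ballpark as the $\approx 17\%$ intersection visible in Figure~\ref{fig:Madry}, though the exact value depends on the robust accuracies of the two models.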
The lines of the two models intersect at $\approx 17\%$, clearly indicating that the defender should use the undefended model if the proportion of adversarial examples is below this value, and the defended model when it is above. \subsubsection{Adversary} Below, we state and discuss our assumptions on the economic factors of the adversary, based on the adversarial machine learning literature. \begin{axioms}{A} \item \label{item:A1} The initial costs $\ensuremath{\mathrm{I}}^\ensuremath{\mathrm{adv}}\ge0$ are constant and non-negative. This corresponds to a case where all attacks are similar in nature. That is, all attacks require a similar level of intelligence about the victim's model, use the same surrogate model (if applicable) and demand the same hardware, software and development costs. Moreover, this assumes that the adversary has to instantiate each attack to assess its quality. This incurs the initial costs, even if the adversary does not create a single adversarial example. \item \label{item:A2}Generating an adversarial example with attack $j$, where $j<M$, inflicts constant ongoing costs $\ensuremath{\mathrm{O}}_j^\ensuremath{\mathrm{adv}}\ge0$. \item \label{item:A3}Successful attacks yield revenue $R_+^\ensuremath{\mathrm{adv}}\ge0$, while unsuccessful attacks cost $R_-^\ensuremath{\mathrm{adv}}\ge0$. \item \label{item:A4}Attack $j<M$ against model $i$ succeeds with probability $\ensuremath{\mathrm{ASR}}_{ij} = 1 - \ensuremath{\mathrm{rob}}_{ij}$ and fails with probability $1 - \ensuremath{\mathrm{ASR}}_{ij} = \ensuremath{\mathrm{rob}}_{ij}$. \item \label{item:A5}Attack $M$ corresponds to refraining from attacking. No adversarial examples are generated and the ongoing costs $\ensuremath{\mathrm{O}}_M^\ensuremath{\mathrm{adv}} = 0$ vanish, though we assume the adversary still has to pay the initial costs.
Furthermore, $\ensuremath{\mathrm{ASR}}_{iM}$ is undefined and $\ensuremath{\mathrm{CCR}}_{iM} \equiv \ensuremath{\mathrm{acc}}_i$ for all models $i$. \end{axioms} With the above assumptions, we define the expected payoff (revenue) per sample (EPPS) of the adversary when using the $j$-th attack against the $i$-th model as: \begin{align} \ensuremath{\mathrm{EPPS}}_{ij}^\ensuremath{\mathrm{adv}} &= -\ensuremath{\mathrm{O}}_j^\ensuremath{\mathrm{adv}} - R_-^\ensuremath{\mathrm{adv}}(1-\ensuremath{\mathrm{ASR}}_{ij}) + R_+^\ensuremath{\mathrm{adv}}\ensuremath{\mathrm{ASR}}_{ij} \enspace, \end{align} where the minus in front of $R_-^\ensuremath{\mathrm{adv}}(1-\ensuremath{\mathrm{ASR}}_{ij})$ indicates that the adversary has to pay for failed attacks (or simply gains no revenue if $R_-^\ensuremath{\mathrm{adv}}$ is equal to zero). The total revenue (utility) of the adversary after perturbing $n\ensuremath{r_{\max}}$ samples is \begin{equation}\label{eq:UtilityADV} U_{ij}^\ensuremath{\mathrm{adv}} = \begin{cases} -\ensuremath{\mathrm{I}}^\ensuremath{\mathrm{adv}} + n\ensuremath{r_{\max}}\ensuremath{\mathrm{EPPS}}_{ij}^\ensuremath{\mathrm{adv}} & \text{if $j<M$}\\ -\ensuremath{\mathrm{I}}^\ensuremath{\mathrm{adv}} & \text{if $j=M$} \end{cases}\enspace, \end{equation} where we have used assumption~\eqref{item:A1} (initial costs are not influenced by the strategy of either party) and~\eqref{item:A5} (refraining from attacking does not cost anything). \subsubsection{Defender} Next, we state and discuss our assumptions on the economic factors of the defender, again based on the literature on adversarial machine learning. \begin{axioms}{D} \item \label{item:D1} The defender has to classify all $n\in\mathbb{N}$ samples. This models a situation where the defender cannot distinguish between any sample a priori and therefore does not exclude any of them. Note that adversarial examples are indistinguishable from benign examples by definition.
\item \label{item:D2} The initial costs $\ensuremath{\mathrm{I}}^\ensuremath{\mathrm{def}\phantom{.}}\!\ge0$ are constant and do not depend on the defender's choice of model. This covers all cases where the costs for training each model are either the same or are dominated by the costs for setting up the training pipeline. \item \label{item:D3} Classifying a sample with the $i$-th model inflicts ongoing costs $\ensuremath{\mathrm{O}}_i^\ensuremath{\mathrm{def}\phantom{.}}\!>0$, which only depend on the model itself. \item \label{item:D4} Defender rewards only depend on the correct classification rate. That is, correctly classifying a sample yields $R_+^\ensuremath{\mathrm{def}\phantom{.}}\!\ge0$, while a misclassification incurs cost $R_-^\ensuremath{\mathrm{def}\phantom{.}}\!\ge0$. In particular, errors on benign samples are as expensive as errors on adversarial examples. \end{axioms} Based on these assumptions we construct the utilities of the defender for the $i$-th model and the $j$-th attack. Remember that the adversary perturbs at most a fraction $\ensuremath{r_{\max}}\in[0,1]$ of all samples. By choosing any attack $j<M$ the adversary generates $n\ensuremath{r_{\max}}$ adversarial examples. The expected payoff (revenue) per classified sample (EPPS) for the defender is given by \begin{equation}\label{eq:EPPSij} \ensuremath{\mathrm{EPPS}}_{ij}^\ensuremath{\mathrm{def}\phantom{.}}\! = \begin{cases} -\ensuremath{\mathrm{O}}_{i}^\ensuremath{\mathrm{def}\phantom{.}}\!\! - R_-^\ensuremath{\mathrm{def}\phantom{.}}\!\!(1-\ensuremath{\mathrm{CCR}}_{ij}(\ensuremath{r_{\max}})) + R_+^\ensuremath{\mathrm{def}\phantom{.}}\!\ensuremath{\mathrm{CCR}}_{ij}(\ensuremath{r_{\max}}) & \text{if $j<M$}\\ -\ensuremath{\mathrm{O}}_{i}^\ensuremath{\mathrm{def}\phantom{.}}\!\!
- R_-^\ensuremath{\mathrm{def}\phantom{.}}\!\!(1-\ensuremath{\mathrm{acc}}_i) + R_+^\ensuremath{\mathrm{def}\phantom{.}}\!\ensuremath{\mathrm{acc}}_i & \text{if $j=M$} \end{cases}\enspace, \end{equation} where we have used assumption \eqref{item:D3} (classifying a sample incurs constant costs) and~\eqref{item:D4} (reward depends only on correct classification rate). By assumption~\eqref{item:A5} no adversarial examples are generated if $j=M$. By further incorporating assumption~\eqref{item:D1} (defender has to classify all samples) and \eqref{item:D2} (initial costs are constant), we get the defender's utility \begin{equation}\label{eq:UtilityDEF} \mathbf{U}_{ij}^\ensuremath{\mathrm{def}\phantom{.}}\! = -\ensuremath{\mathrm{I}}^\ensuremath{\mathrm{def}\phantom{.}}\! + n \ \ensuremath{\mathrm{EPPS}}_{ij}^\ensuremath{\mathrm{def}\phantom{.}}\! \enspace. \end{equation}\noindent \begin{table}[t] \centering \caption{Overview of Game parameters} \resizebox{\textwidth}{!}{ \begin{tabular}{lll@{~~~}ll} \toprule \multirow{5}{*}{Description} & \multicolumn{2}{c}{Total number of samples} & \multicolumn{2}{c}{Fraction of samples that can be} \\ & \multicolumn{2}{c}{to classify $n \in \mathbb{N}$} & \multicolumn{2}{c}{adversarially perturbed $\ensuremath{r_{\max}}\in[0,1]$} \\[1ex] & \multicolumn{2}{c}{Accuracy values of all models} & \multicolumn{2}{c}{Robustness values of all models} \\ & \multicolumn{2}{c}{$\ensuremath{\mathrm{acc}}\in[0,1]^N$} & \multicolumn{2}{c}{ $\ensuremath{\mathrm{rob}}\in[0,1]^{N\times (M-1)}$} \\ \cmidrule(r){2-5} & \multicolumn{2}{c}{Defender's side} & \multicolumn{2}{c}{Adversary's side} \\ \cmidrule(r){2-3}\cmidrule(r){4-5} Positive reward & correct classification & $R_+^\ensuremath{\mathrm{def}\phantom{.}}\!\in[0,\infty)$ & successful attack & $R_+^\ensuremath{\mathrm{adv}}\in[0,\infty)$ \\ Negative reward & misclassification & $R_-^\ensuremath{\mathrm{def}\phantom{.}}\!\in[0,\infty)$ & failed attack & $R_-^\ensuremath{\mathrm{adv}}\in[0,\infty)$ 
\\ Initial Costs & & ~$\ensuremath{\mathrm{I}}^\ensuremath{\mathrm{def}\phantom{.}}\!\in[0,\infty)$ & & ~$\ensuremath{\mathrm{I}}^\ensuremath{\mathrm{adv}}\in[0,\infty)$ \\ Ongoing costs & & $\ensuremath{\mathrm{O}}^\ensuremath{\mathrm{def}\phantom{.}}\!\in[0,\infty)^N$& & $\ensuremath{\mathrm{O}}^\ensuremath{\mathrm{adv}}\in[0,\infty)^{M-1}$ \\ \cmidrule(r){2-3}\cmidrule(r){4-5} Choice Parameter & & $s\in\StrategySetDEF$ & & $r\in\StrategySetADV$ \\ Performance & \multirow{2}{*}{correct classification rate} & \multirow{2}{*}{$\ensuremath{\mathrm{CCR}}(r)^\dagger$} & \multirow{2}{*}{attack success rate}& \multirow{2}{*}{$\ensuremath{\mathrm{ASR}}(s)^\dagger$} \\ Measure &&&&\\ Expected Payoff & \multirow{2}{*}{classified} &\multirow{2}{*}{$\ensuremath{\mathrm{EPPS}}^\ensuremath{\mathrm{def}\phantom{.}}\!\!(r)^\dagger$} & \multirow{2}{*}{adversarially perturbed} &\multirow{2}{*}{$\ensuremath{\mathrm{EPPS}}^\ensuremath{\mathrm{adv}}\!(s)^\dagger$} \\ per Sample &&&&\\ \bottomrule \multicolumn{3}{l}{\scriptsize $~^\dagger:$ Note that these entries depend on the opponent's choice} \end{tabular} } \label{tab:GameInstantiation} \end{table}\noindent \subsection{Utility of mixed strategies} Let $\StrategySetADV = \Delta^M$ and $ \StrategySetDEF = \Delta^N$ denote the strategy space\footnote{$\Delta^d = \left\{v\in[0,1]^d \ \colon v_1 + \dots + v_d = 1 \right \}$ is the $d-1$ dimensional probability simplex.} of the adversary and defender, respectively. Each strategy $r \in \StrategySetADV$ / $s \in \StrategySetDEF$ corresponds to a probability distribution, where the adversary / defender chooses action $j$ / $i$ with probability $r_j$ / $s_i$, respectively. 
The adversary's and defender's utility functions are given by \begin{align*} \ensuremath{\mathrm{Utility}}^\ensuremath{\mathrm{adv}}\!(s,r) &= \sum_{i=1}^N\sum_{j=1}^M s_i \mathbf{U}_{ij}^\ensuremath{\mathrm{adv}} r_j = s^T \mathbf{U}^\ensuremath{\mathrm{adv}} r \\ \ensuremath{\mathrm{Utility}}^\ensuremath{\mathrm{def}\phantom{.}}\!\!(s,r) &= \sum_{i=1}^N\sum_{j=1}^M s_i \mathbf{U}_{ij}^\ensuremath{\mathrm{def}\phantom{.}}\! r_j = s^T \mathbf{U}^\ensuremath{\mathrm{def}\phantom{.}}\! r \enspace, \end{align*} respectively. For the analysis of the game, it is convenient to represent the utility functions as \begin{align} \ensuremath{\mathrm{Utility}}^\ensuremath{\mathrm{adv}}\!(s,r) &= -\ensuremath{\mathrm{I}}^\ensuremath{\mathrm{adv}} + n\ \ensuremath{r_{\max}} \ r^T \ \ensuremath{\mathrm{EPPS}}^\ensuremath{\mathrm{adv}}\!(s)\label{eq:UtilityADV_with_EPPS} \\ \ensuremath{\mathrm{Utility}}^\ensuremath{\mathrm{def}\phantom{.}}\!\!(s,r) &= -\ensuremath{\mathrm{I}}^\ensuremath{\mathrm{def}\phantom{.}}\! + n\ s^T \ \ensuremath{\mathrm{EPPS}}^\ensuremath{\mathrm{def}\phantom{.}}\!\!(r)\label{eq:UtilityDEF_with_EPPS} \enspace, \end{align} with the expected payoff per classified sample depending only on the respective opponent's strategy, $\ensuremath{\mathrm{EPPS}}^\ensuremath{\mathrm{adv}}\!(s)\defeq(s^T\ensuremath{\mathrm{EPPS}}^\ensuremath{\mathrm{adv}})^T$ and $\ensuremath{\mathrm{EPPS}}^\ensuremath{\mathrm{def}\phantom{.}}\!\!(r)\defeq\ensuremath{\mathrm{EPPS}}^\ensuremath{\mathrm{def}\phantom{.}}\!\!\ r$.
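As a sanity check of this factorization, the sketch below (all parameter values invented for illustration) builds the adversary's utility matrix for $N=M=2$ and confirms that the bilinear form $s^T \mathbf{U}^\ensuremath{\mathrm{adv}} r$ coincides with the representation in Equation~\eqref{eq:UtilityADV_with_EPPS}.

```python
import numpy as np

# Invented parameters for N = 2 models and M = 2 attacks (attack 2 = "no attack").
n, r_max = 1000, 0.3
I_adv, O_adv = 5.0, 0.1
Rp_adv, Rm_adv = 1.0, 0.2

rob = np.array([0.1, 0.5])   # rob_{i1}: robustness of model i against attack 1
ASR = 1 - rob                # attack success rates (Definition ASR)

# Per-sample adversary payoff for the real attack j = 1 (Eq. EPPS^adv).
EPPS_adv = -O_adv - Rm_adv * (1 - ASR) + Rp_adv * ASR

# Utility matrix U^adv: column 1 uses EPPS, column 2 is "do not attack".
U_adv = np.full((2, 2), -I_adv)
U_adv[:, 0] += n * r_max * EPPS_adv

s = np.array([0.6, 0.4])     # mixed defender strategy over the two models
r = np.array([0.7, 0.3])     # mixed adversary strategy (attack / don't)

bilinear = s @ U_adv @ r                               # s^T U^adv r
factored = -I_adv + n * r_max * r[0] * (s @ EPPS_adv)  # Eq. (UtilityADV_with_EPPS)
assert abs(bilinear - factored) < 1e-9
```

The check exploits that the "no attack" column contributes only the constant $-\ensuremath{\mathrm{I}}^\ensuremath{\mathrm{adv}}$, so the whole bilinear form collapses to the factored expression.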
By this, the advanced adversarial classification game is fully described as a non-zero-sum normal form game with the adversary and defender as players, pure strategy sets $\{e_1^\ensuremath{\mathrm{def}\phantom{.}}\!,\dots, e_N^\ensuremath{\mathrm{def}\phantom{.}}\!\}$ and $\{e_1^\ensuremath{\mathrm{adv}},\dots, e_M^\ensuremath{\mathrm{adv}}\}$, and utility functions $\ensuremath{\mathrm{Utility}}^\ensuremath{\mathrm{adv}}\!$ and $\ensuremath{\mathrm{Utility}}^\ensuremath{\mathrm{def}\phantom{.}}\!\!$, where $e_i^\ensuremath{\mathrm{def}\phantom{.}}\!\in\mathbb{R}^N$ and $e_j^\ensuremath{\mathrm{adv}}\in\mathbb{R}^M$ are the $i$-th and $j$-th standard unit vectors, respectively. The mixed strategy sets are given by $\StrategySetADV$ and $\StrategySetDEF$. All parameters of the game are shown in Table~\ref{tab:GameInstantiation}. \subsection{Expected payouts for mixed strategies} In this subsection we introduce two performance measures for both players which depend only on non-choice parameters and their respective opponent's strategy. These performance measures are generalizations of the attack success rate (\ensuremath{\mathrm{ASR}}) and correct classification rate (\ensuremath{\mathrm{CCR}}) in Definitions~\ref{def:ASRij} and \ref{def:CCRij}. \begin{definition}[ASR for mixed strategies] Given a defender's strategy $s\in\StrategySetDEF$, we define the attack success rate of attack $j$ as \[\ensuremath{\mathrm{ASR}}_j(s) \defeq s^T (\vec{1} -\ensuremath{\mathrm{rob}}_{\cdot j}) = 1-\sum_{i=1}^N s_i\ensuremath{\mathrm{rob}}_{ij} \enspace. \] \end{definition} This is a direct extension of the attack success rate from Definition~\ref{def:ASRij} to mixed strategies, since $\ensuremath{\mathrm{ASR}}_j(e_i^\ensuremath{\mathrm{def}\phantom{.}}\!)=\ensuremath{\mathrm{ASR}}_{ij}$. As the name implies, it quantifies the probability of a successful attack given a possibly mixed defender's strategy. 
\begin{definition}[CCR for mixed strategies]\label{def:CCR} We define the correct classification rate with respect to an adversary's strategy $r\in\StrategySetADV$ against model $i$ as \[\ensuremath{\mathrm{CCR}}_{i}(r;\ensuremath{r_{\max}}) \defeq r^T \left(\ensuremath{\mathrm{CCR}}_{ij}(\ensuremath{r_{\max}})\right)_{1\le j\le M} = \sum_{j=1}^M r_j\ensuremath{\mathrm{CCR}}_{ij}(\ensuremath{r_{\max}})\enspace, \] where $\ensuremath{\mathrm{CCR}}_{iM}\equiv \ensuremath{\mathrm{acc}}_i$ by assumption \eqref{item:A5}. \end{definition} Similarly to the above, this generalizes the correct classification rate from Definition~\ref{def:CCRij} to mixed strategies. Unlike the ASR, this metric carries an additional parameter: the proportion of adversarial examples is explicitly fixed at $\ensuremath{r_{\max}}\in[0,1]$. For pure strategies of the adversary we get $\ensuremath{\mathrm{CCR}}_i(e_j^\ensuremath{\mathrm{adv}};\ensuremath{r_{\max}})=\ensuremath{\mathrm{CCR}}_{ij}(\ensuremath{r_{\max}})$, the probability of correctly classifying a sample if the adversary chooses attack $j$. \subsubsection{Expected payoff per sample} Both performance measures allow us to concisely describe each entry of the $\ensuremath{\mathrm{EPPS}}$-functions in Equations~\eqref{eq:UtilityADV_with_EPPS} and \eqref{eq:UtilityDEF_with_EPPS}: \begin{align} \nonumber \ensuremath{\mathrm{EPPS}}_j^\ensuremath{\mathrm{adv}}\!(s) &= -\ensuremath{\mathrm{O}}_j^\ensuremath{\mathrm{adv}} - R_-^\ensuremath{\mathrm{adv}}(1-\ensuremath{\mathrm{ASR}}_j(s)) + R_+^\ensuremath{\mathrm{adv}}\ensuremath{\mathrm{ASR}}_j(s) \\ &= -\ensuremath{\mathrm{O}}_j^\ensuremath{\mathrm{adv}} - R_-^\ensuremath{\mathrm{adv}} + (R_+^\ensuremath{\mathrm{adv}}+R_-^\ensuremath{\mathrm{adv}})\ \ensuremath{\mathrm{ASR}}_j(s) \label{eq:EPPSadv}\\ \nonumber \ensuremath{\mathrm{EPPS}}_i^\ensuremath{\mathrm{def}\phantom{.}}\!\!(r) &= -\ensuremath{\mathrm{O}}_i^\ensuremath{\mathrm{def}\phantom{.}}\!
- R_-^\ensuremath{\mathrm{def}\phantom{.}}\!(1-\ensuremath{\mathrm{CCR}}_i(r;\ensuremath{r_{\max}})) + R_+^\ensuremath{\mathrm{def}\phantom{.}}\!\ensuremath{\mathrm{CCR}}_i(r;\ensuremath{r_{\max}}) \\ &= -\ensuremath{\mathrm{O}}_i^\ensuremath{\mathrm{def}\phantom{.}}\! - R_-^\ensuremath{\mathrm{def}\phantom{.}}\! + (R_+^\ensuremath{\mathrm{def}\phantom{.}}\!+R_-^\ensuremath{\mathrm{def}\phantom{.}}\!)\ \ensuremath{\mathrm{CCR}}_i(r;\ensuremath{r_{\max}})\enspace. \label{eq:EPPSdef} \end{align} $\ensuremath{\mathrm{EPPS}}_j^\ensuremath{\mathrm{adv}}\!(s)$ corresponds to the expected payoff per adversarially perturbed sample for attack $j$ given the strategy $s\in\StrategySetDEF$. Similarly, $\ensuremath{\mathrm{EPPS}}_i^\ensuremath{\mathrm{def}\phantom{.}}\!\!(r)$ corresponds to the expected payoff per classified sample given model $i$ and strategy $r\in\StrategySetADV$. \section{Game Instantiation and Analysis} \label{sec:analysis} Below, we analyse the simplest realization of the game, where $N=2$ and $M=2$. The defender chooses from two classifiers (typically one of them trained as usual and the other using some form of adversarial training increasing robustness) and the adversary may attack or not. Hence their strategy spaces can be specified by a single scalar parameter $r = (r_1, 1-r_1)$ and $s = (s_1, 1-s_1)$ with $r_1, s_1\in[0,1]$ for the adversary and defender, respectively. By this, ``not attacking'', i.e., attack $M=2$ is represented by the second entry ($1-r_1$). We further simplify the notation by writing $\ensuremath{\mathrm{rob}}_{i}$ instead of $\ensuremath{\mathrm{rob}}_{i1}$ and $\ensuremath{\mathrm{ASR}}(s)$ instead of $\ensuremath{\mathrm{ASR}}_1(s)$. As a convention, motivated by experimental results of state of the art methods for adversarial training, we assume the first model (trained normally) to have higher accuracy on benign samples but lower on adversarial samples (robustness) compared to the second model. 
Furthermore, accuracy on attacked samples of both models is assumed to be lower than that on clean samples, since the adversary aims to cause a misclassification and not to help the defender. This means the accuracy and robustness are ordered as \begin{equation} \ensuremath{\mathrm{acc}}_1 > \ensuremath{\mathrm{acc}}_2 > \ensuremath{\mathrm{rob}}_2 > \ensuremath{\mathrm{rob}}_1 \enspace. \label{eq:ordering} \end{equation} This order also makes sense from a game-theoretical point of view. If the first model had both higher accuracy and higher robustness, choosing it would be a strictly dominant strategy and the defender would never use the second model, assuming equal ongoing costs as justified above. Conversely, if the first inequality did not hold, choosing model 2 would be strictly dominant. In order to simplify the equations in this section, we define \begin{align*} \Dacc &\defeq \ensuremath{\mathrm{acc}}_1 - \ensuremath{\mathrm{acc}}_2 > 0\\ \Drob &\defeq \ensuremath{\mathrm{rob}}_2 - \ensuremath{\mathrm{rob}}_1 > 0 \end{align*} \subsection{Best response analysis of the adversary}\label{sec:BRAadv} Assuming a fixed strategy of the defender $s\in\StrategySetDEF$, a best response of the adversary maximizes the utility $r^* \in \arg\max_{r\in\StrategySetADV}\ensuremath{\mathrm{Utility}}^\ensuremath{\mathrm{adv}}\!(s,r)$ defined in Equation~\eqref{eq:UtilityADV_with_EPPS}.
Since the initial costs $\ensuremath{\mathrm{I}}^\ensuremath{\mathrm{adv}}$ are constant (by assumption~\eqref{item:A1}), the utility is linear in $r_1$, hence the maximization is trivial and depends only on the sign of her expected payout per sample, $\ensuremath{\mathrm{EPPS}}^\ensuremath{\mathrm{adv}}\!(s),$ as follows: \begin{equation}\label{eq:casesEPPS_ADV} r_1^* \in \begin{cases} \{0\} & \text{ iff } \ensuremath{\mathrm{EPPS}}^\ensuremath{\mathrm{adv}}\!(s) < 0\\ \{1\} & \text{ iff } \ensuremath{\mathrm{EPPS}}^\ensuremath{\mathrm{adv}}\!(s) > 0\\ [0,1] & \text{ iff } \ensuremath{\mathrm{EPPS}}^\ensuremath{\mathrm{adv}}\!(s) = 0 \end{cases} \enspace. \end{equation} The last line means that any $r^*\in\StrategySetADV$ is a best response. We refer to these cases as Case 1 (never attack), Case 2 (always attack), and Case 3 (indifferent) in that order. The definition of $\ensuremath{\mathrm{EPPS}}^\ensuremath{\mathrm{adv}}\!$ from Equation~\eqref{eq:EPPSadv} can be reformulated as \begin{equation*} \ensuremath{\mathrm{EPPS}}^\ensuremath{\mathrm{adv}}\!(s) = \left(R_+^\ensuremath{\mathrm{adv}} + R_-^\ensuremath{\mathrm{adv}}\right)(\ensuremath{\mathrm{ASR}}(s) - \mu^\ensuremath{\mathrm{adv}})\enspace, \end{equation*} where \begin{equation*} \mu^\ensuremath{\mathrm{adv}} \defeq \frac{\ensuremath{\mathrm{O}}^\ensuremath{\mathrm{adv}} + R_-^\ensuremath{\mathrm{adv}}}{R_+^\ensuremath{\mathrm{adv}} + R_-^\ensuremath{\mathrm{adv}}}\enspace, \end{equation*} which leads to an alternative characterization of the best responses from Equation~\eqref{eq:casesEPPS_ADV} \begin{equation}\label{eq:casesASR} r_1^* \in \begin{cases} \{0\} & \text{ iff } \ensuremath{\mathrm{ASR}}(s) < \mu^\ensuremath{\mathrm{adv}}\\ \{1\} & \text{ iff } \ensuremath{\mathrm{ASR}}(s) > \mu^\ensuremath{\mathrm{adv}}\\ [0,1] & \text{ iff } \ensuremath{\mathrm{ASR}}(s) = \mu^\ensuremath{\mathrm{adv}} \end{cases} \enspace.
\end{equation} Equation~\eqref{eq:casesASR} relates the attack success rate $\ensuremath{\mathrm{ASR}}$ to the economic factors of the adversary, since $\mu^\ensuremath{\mathrm{adv}}$ is defined in terms of her rewards and ongoing costs. Assuming the penalty for a failed attack $R^\ensuremath{\mathrm{adv}}_-$ is negligible (presently, most crimes of this type are left unpunished due to lack of legislation, law enforcement, and forensic tools) and the reward $R^\ensuremath{\mathrm{adv}}_+$ dominates the ongoing costs, $\mu^\ensuremath{\mathrm{adv}}$ is in practice going to be close to zero. Consequently, if an attack has even a slight chance of succeeding, a rational adversary will always attack. \begin{table}[t] \centering \caption{All possible best responses $r^*=(r_1^*,1-r_1^*)\in\StrategySetADV$ of the adversary depending on the defender's strategy $s=(s_1,1-s_1)\in\StrategySetDEF$. The first, second and third row summarize Case 1, 2, and 3, respectively. If the precondition does not hold, then the condition on $s\in\StrategySetDEF$ cannot be satisfied.} \begin{tabular}{@{~~~}c@{~~~}l@{~~~~~~~}c@{~~~~~}r} \toprule Case & Best responses & Condition & Precondition \\ \midrule 1 & $r^*=(0,1)$ & $\ensuremath{\mathrm{ASR}}(s) < \mu^\ensuremath{\mathrm{adv}}$ & $\hphantom{\ensuremath{\mathrm{rob}}_1<{}}1-\mu^\ensuremath{\mathrm{adv}}<\ensuremath{\mathrm{rob}}_2$ \\ 2 & $r^*=(1,0)$ & $\ensuremath{\mathrm{ASR}}(s) > \mu^\ensuremath{\mathrm{adv}}$ & $\ensuremath{\mathrm{rob}}_1<1-\mu^\ensuremath{\mathrm{adv}}\hphantom{{}<\ensuremath{\mathrm{rob}}_2}$ \\ 3 & $r^*\in\StrategySetADV$ & $\ensuremath{\mathrm{ASR}}(s) = \mu^\ensuremath{\mathrm{adv}}$ & $\ensuremath{\mathrm{rob}}_1\le1-\mu^\ensuremath{\mathrm{adv}}\le\ensuremath{\mathrm{rob}}_2$ \\ \bottomrule \end{tabular} \label{tab:AdversaryBestResponse} \end{table} We summarize our results in Table~\ref{tab:AdversaryBestResponse}. We specify the case in the left-most column.
The best responses for the given case are shown in the second column. For each case, we have a corresponding equivalent condition on the $\ensuremath{\mathrm{ASR}}(s)$, which is shown in the third column. The fourth and last column shows for each case whether it is satisfiable at all by any $s\in\StrategySetDEF$. These preconditions depend only on non-choice parameters in Table~\ref{tab:GameInstantiation} and not on the defender's strategy itself. \subsection{Best response analysis of the defender}\label{sec:BRAdef} For a given adversary strategy $r\in\StrategySetADV$, a best response of the defender maximizes the utility $s^*\in\arg\max_{s\in\StrategySetDEF}\ensuremath{\mathrm{Utility}}^\ensuremath{\mathrm{def}\phantom{.}}\!\!(s,r)$, see Equation~\eqref{eq:UtilityDEF_with_EPPS}. This is equivalent to maximizing the expected payout per sample $s^T \ensuremath{\mathrm{EPPS}}^\ensuremath{\mathrm{def}\phantom{.}}\!\!(r)$ as defined in Equation~\eqref{eq:EPPSdef}, which we rewrite as \begin{equation}\label{eq:EPPSdef_frac} \ensuremath{\mathrm{EPPS}}_i^\ensuremath{\mathrm{def}\phantom{.}}\!(r) = \left(R_+^\ensuremath{\mathrm{def}\phantom{.}}\!+R_-^\ensuremath{\mathrm{def}\phantom{.}}\!\right)\left(\ensuremath{\mathrm{CCR}}_i(r;\ensuremath{r_{\max}}) - \mu_i^\ensuremath{\mathrm{def}\phantom{.}}\!\right)\enspace, \end{equation} where \[\ensuremath{\mathrm{CCR}}_i(r;\ensuremath{r_{\max}}) = (1-r_1 \ensuremath{r_{\max}})\ensuremath{\mathrm{acc}}_i + r_1\ensuremath{r_{\max}}\ \ensuremath{\mathrm{rob}}_i\] and \[\mu_i^\ensuremath{\mathrm{def}\phantom{.}}\! = \frac{\ensuremath{\mathrm{O}}_i^\ensuremath{\mathrm{def}\phantom{.}}\! + R_-^\ensuremath{\mathrm{def}\phantom{.}}\!}{R_+^\ensuremath{\mathrm{def}\phantom{.}}\! + R_-^\ensuremath{\mathrm{def}\phantom{.}}\!}\enspace.\] We observe that, similarly to the adversary's case, the defender chooses a model $i$ with maximal $\ensuremath{\mathrm{CCR}}_i(r;\ensuremath{r_{\max}}) - \mu_i^\ensuremath{\mathrm{def}\phantom{.}}\!$.
Here again the performance metric $\ensuremath{\mathrm{CCR}}_i(r;\ensuremath{r_{\max}})$ is related to the economic threshold term $\mu_i^\ensuremath{\mathrm{def}\phantom{.}}\!$. Importantly, and unlike the adversary's case, the choice of the defender's strategy depends on $\ensuremath{r_{\max}}$, the maximal fraction of samples the adversary can influence. As will be seen below, this means that for sufficiently small $\ensuremath{r_{\max}}$ (which occurs in many practical situations), the defender has no incentive to use the robust classifier.\\ For the analysis of Nash equilibria in the next section, it is useful to analyze the difference $\ensuremath{\mathrm{EPPS}}_1^\ensuremath{\mathrm{def}\phantom{.}}\! - \ensuremath{\mathrm{EPPS}}_2^\ensuremath{\mathrm{def}\phantom{.}}\!$, \begin{equation}\label{eq:EPPSdiff} \DCCR - \Delta\mu^\ensuremath{\mathrm{def}\phantom{.}}\! = \frac{\ensuremath{\mathrm{EPPS}}_1^\ensuremath{\mathrm{def}\phantom{.}}\!\!(r)-\ensuremath{\mathrm{EPPS}}_2^\ensuremath{\mathrm{def}\phantom{.}}\!\!(r)}{R_+^\ensuremath{\mathrm{def}\phantom{.}}\!+R_-^\ensuremath{\mathrm{def}\phantom{.}}\!} \enspace, \end{equation} where \begin{align*} \DCCR &\defeq \ensuremath{\mathrm{CCR}}_1(r;\ensuremath{r_{\max}})-\ensuremath{\mathrm{CCR}}_2(r;\ensuremath{r_{\max}}),\text{ and}\\ \Delta\mu^\ensuremath{\mathrm{def}\phantom{.}}\! &\defeq \mu_1^\ensuremath{\mathrm{def}\phantom{.}}\! - \mu_2^\ensuremath{\mathrm{def}\phantom{.}}\! = \frac{\ensuremath{\mathrm{O}}_1^\ensuremath{\mathrm{def}\phantom{.}}\!-\ensuremath{\mathrm{O}}_2^\ensuremath{\mathrm{def}\phantom{.}}\!}{R_+^\ensuremath{\mathrm{def}\phantom{.}}\! + R_-^\ensuremath{\mathrm{def}\phantom{.}}\!}\enspace.
\end{align*} Notice that in many practical situations, the difference in ongoing costs of the two classifiers is almost zero (non-robust and robust versions differ mainly in training, not in the architecture of the model, which makes inference costs comparable), therefore $\Delta\mu^\ensuremath{\mathrm{def}\phantom{.}}\!\approx 0$. The best responses for the defender are given by \begin{equation}\label{eq:casesCCR} s_1^* \in \begin{cases} \{0\} & \text{ iff } \DCCR < \Delta\mu^\ensuremath{\mathrm{def}\phantom{.}}\!\\ \{1\} & \text{ iff } \DCCR > \Delta\mu^\ensuremath{\mathrm{def}\phantom{.}}\!\\ [0,1] & \text{ iff } \DCCR = \Delta\mu^\ensuremath{\mathrm{def}\phantom{.}}\! \end{cases} \enspace. \end{equation} The last line means that any $s^*\in\StrategySetDEF$ is a best response. We refer to these cases as Case A (always defend), Case B (never defend), and Case C (indifferent) in that order. The defender's best responses are summarized in Table~\ref{tab:DefenderBestResponse}. The case is specified in the left-most column, the best responses for the given case are shown in the second column, and the corresponding equivalent condition on $r$ is shown in the third column. The fourth and last column shows prerequisites for the given case that only depend on non-choice parameters. \begin{table}[t] \centering \caption{All possible best responses $s^*=(s_1^*,1-s_1^*)\in\StrategySetDEF$ of the defender depending on the adversary's strategy $r=(r_1,1-r_1)\in\StrategySetADV$. The first, second and third row summarize Case A, B, and C, respectively.
If the precondition does not hold, then the condition on $r\in\StrategySetADV$ cannot be satisfied.} \begin{tabular}{@{~~~}c@{~~~}l@{~~~~~~~}cr} \toprule Case & Best responses & Condition & Precondition \\ \midrule A & $s^*=(0,1)$ & $\DCCR < \Delta\mu^\ensuremath{\mathrm{def}\phantom{.}}\!$ & \phantom{as}$\phantom{0<{}}\ensuremath{\frac{\Dacc - \Delta\mu^\DEF}{\Dacc + \Drob}}<\ensuremath{r_{\max}}$ \\ B & $s^*=(1,0)$ & $\DCCR > \Delta\mu^\ensuremath{\mathrm{def}\phantom{.}}\!$ & \phantom{as}$\Delta\mu^\ensuremath{\mathrm{def}\phantom{.}}\!<\Dacc$\\ C & $s^*\in\StrategySetDEF$ & $\DCCR = \Delta\mu^\ensuremath{\mathrm{def}\phantom{.}}\!$ & \phantom{as}$0\le\ensuremath{\frac{\Dacc - \Delta\mu^\DEF}{\Dacc + \Drob}}\le\ensuremath{r_{\max}}$ \\ \bottomrule \end{tabular} \label{tab:DefenderBestResponse} \end{table} \subsection{(Fully) Mixed Nash equilibria} \label{sec:mixednash} In Subsections~\ref{sec:BRAadv} and \ref{sec:BRAdef} we discussed and listed all possible best responses for both adversary and defender. The results are summarized in Tables~\ref{tab:AdversaryBestResponse} and \ref{tab:DefenderBestResponse}. Now we investigate whether and when mixed strategy Nash equilibria exist at all. For this we consider Case 3 and Case C, that is \[\ensuremath{\mathrm{ASR}}(s) = \mu^\ensuremath{\mathrm{adv}} ~~\text{and}~~ \DCCR = \Delta\mu^\ensuremath{\mathrm{def}\phantom{.}}\!\] or, equivalently, \begin{equation}\label{eq:explicit_s1_r1} s_1 = \ensuremath{\frac{\rob_2 -1 + \mu^\ADV}{\Drob}} ~~\text{and}~~ r_1\ensuremath{r_{\max}} = \ensuremath{\frac{\Dacc-\Delta\mu^\DEF}{\Dacc + \Drob}}, \end{equation} respectively.
Obviously, $s\in\StrategySetDEF$ and $r\in\StrategySetADV$ are fully mixed strategies, i.e.\ $s_1,r_1\in(0,1)$, if and only if \begin{equation}\label{eq:fullymixedcond} \ensuremath{\mathrm{rob}}_1<1-\mu^\ensuremath{\mathrm{adv}}<\ensuremath{\mathrm{rob}}_2 ~~\text{and}~~0<\ensuremath{\frac{\Dacc-\Delta\mu^\DEF}{\Dacc + \Drob}}<\ensuremath{r_{\max}}\enspace, \end{equation} respectively. \newtheorem{thm}{Theorem} \begin{thm} Let $\ensuremath{\mathrm{rob}}_1<1-\mu^\ensuremath{\mathrm{adv}}<\ensuremath{\mathrm{rob}}_2$ and $0<\ensuremath{\frac{\Dacc - \Delta\mu^\DEF}{\Dacc + \Drob}}<\ensuremath{r_{\max}}$. Then the fully mixed strategy Nash equilibrium $(s^*,r^*)$, given by \begin{equation}\label{eq:mixednash} s_1^* = \ensuremath{\frac{\rob_2 -1 + \mu^\ADV}{\Drob}} ~~\text{and}~~ r_1^*\ensuremath{r_{\max}}= \ensuremath{\frac{\Dacc-\Delta\mu^\DEF}{\Dacc + \Drob}}\enspace, \end{equation} is unique. \end{thm} \begin{proof} Consider Tables~\ref{tab:AdversaryBestResponse} and \ref{tab:DefenderBestResponse}. By the analysis above, $s_1^*$ and $r_1^*$ are given as in Equation~\eqref{eq:explicit_s1_r1} and therefore the conditions for Case 3 and Case~C are satisfied. That is, all $r\in\StrategySetADV$ are a best response to $s^*$ and all $s\in\StrategySetDEF$ are a best response to $r^*$. Furthermore, the conditions in Equation~\eqref{eq:fullymixedcond} are fulfilled and therefore both strategies are mixed strategies. In conclusion, $(s^*,r^*)$ is a fully mixed strategy Nash equilibrium.\\ In order to prove uniqueness of the Nash equilibrium, we first assume there exists another Nash equilibrium $(\hat{s},\hat{r})$. That is, either $\hat{s}_1\neq s_1^*$ or $\hat{r}_1\neq r_1^*$. We consider only the case $\hat{s}_1<s_1^*$, as all other cases are handled analogously.
Observe that \[s\mapsto \ensuremath{\mathrm{ASR}}(s)=1-\ensuremath{\mathrm{rob}}_2+s_1\Drob\] strictly increases in $s_1$ and \[r\mapsto \DCCR=\Dacc - r_1\ensuremath{r_{\max}}(\Dacc + \Drob)\] strictly decreases in $r_1$. Since $\hat{s}$ and $\hat{r}$ are mutual best responses, we have \begin{align*} \hat{s}_1<s_1^*&\implies \ensuremath{\mathrm{ASR}}(\hat{s})<\ensuremath{\mathrm{ASR}}(s^*)=\mu^\ensuremath{\mathrm{adv}}\\ &\implies \hat{r}=(0,1)\\ &\implies \Delta\ensuremath{\mathrm{CCR}}(\hat{r};\ensuremath{r_{\max}}) > \Delta\ensuremath{\mathrm{CCR}}(r^*;\ensuremath{r_{\max}}) = \Delta\mu^\ensuremath{\mathrm{def}\phantom{.}}\!\\ &\implies \hat{s}=(1,0) \implies \hat{s}_1>s_1^* \enspace, \end{align*} which is a contradiction. All in all, $(s^*,r^*)$ is the unique Nash equilibrium. \qed \end{proof} \subsection{Results} \begin{figure}[t] \centering \input{fig/adv-case-analysis} \caption{Adversary's preferences with regard to $\ensuremath{\mathrm{rob}}_1$ and $\ensuremath{\mathrm{rob}}_2$ for a given $\mu^\ensuremath{\mathrm{adv}}$. Decreasing $\mu^\ensuremath{\mathrm{adv}}$ shifts the horizontal and vertical lines (captioned ``$1 -\mu^\ensuremath{\mathrm{adv}}$'') up and right. This means that the area of Case 2 (always attack) grows and the area of Case 3 (where a mixed strategy is possible) shrinks. As discussed earlier, in practice $\mu^\ensuremath{\mathrm{adv}} \approx 0.$ This would mean the adversary will always attack. Note that the area above the minor diagonal line is unreachable, because of the assumption in Eq.~\eqref{eq:ordering}. } \label{fig:adv_case} \end{figure} In the previous subsections, we have identified best responses and their preconditions for both actors given the opponent's strategy (see Tables~\ref{tab:AdversaryBestResponse} and~\ref{tab:DefenderBestResponse}). These allow us to identify the set of strategies available to each rationally behaving actor for the given economic factors.
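The equilibrium is also easy to evaluate numerically. The sketch below uses invented parameter values that satisfy the ordering of Equation~\eqref{eq:ordering} and both preconditions of the theorem, computes $(s_1^*, r_1^*)$ from Equation~\eqref{eq:mixednash}, and verifies the indifference conditions $\ensuremath{\mathrm{ASR}}(s^*)=\mu^\ensuremath{\mathrm{adv}}$ and $\DCCR=\Delta\mu^\ensuremath{\mathrm{def}\phantom{.}}\!$ at $r^*$.

```python
# Invented parameters satisfying the ordering acc1 > acc2 > rob2 > rob1.
acc1, acc2 = 0.95, 0.90
rob1, rob2 = 0.60, 0.80
O_adv, Rm_adv, Rp_adv = 0.1, 0.2, 1.0  # adversary costs and rewards
d_mu_def = 0.0                         # equal ongoing costs for both models
r_max = 0.5
assert acc1 > acc2 > rob2 > rob1

d_acc, d_rob = acc1 - acc2, rob2 - rob1
mu_adv = (O_adv + Rm_adv) / (Rp_adv + Rm_adv)        # = 0.25

# Preconditions of the theorem (a fully mixed equilibrium exists).
assert rob1 < 1 - mu_adv < rob2
assert 0 < (d_acc - d_mu_def) / (d_acc + d_rob) < r_max

# Equilibrium strategies from Eq. (mixednash).
s1 = (rob2 - 1 + mu_adv) / d_rob                     # P(defender plays model 1)
r1 = (d_acc - d_mu_def) / ((d_acc + d_rob) * r_max)  # P(adversary attacks)

# Indifference: ASR(s*) = mu_adv and Delta CCR(r*) = Delta mu_def.
asr = 1 - rob2 + s1 * d_rob
d_ccr = d_acc - r1 * r_max * (d_acc + d_rob)
assert abs(asr - mu_adv) < 1e-12 and abs(d_ccr - d_mu_def) < 1e-12
print(f"s1* = {s1:.2f}, r1* = {r1:.2f}")  # s1* = 0.25, r1* = 0.40
```

At this equilibrium neither player can gain by deviating: the defender mixes so the attack breaks even, and the adversary attacks just often enough to equalize the two models' payoffs.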
We visualize the adversary's options in Figure~\ref{fig:adv_case}, where we can see that $\mu^\ensuremath{\mathrm{adv}}$ (see lines captioned ``$1 - \mu^\ensuremath{\mathrm{adv}}$'') determines the areas where the adversary will always attack (Case 2), will never attack (Case 1), and where we cannot say without considering the defender's actions, so she might play a mixed strategy (Case 3). Note that the area above the minor diagonal is unreachable (by Eq.~\eqref{eq:ordering}). We can observe that most of the area is covered by Case 2, which means that the adversary is incentivized to always attack, especially if the value of $\mu^\ensuremath{\mathrm{adv}}$ is low, which happens when the cost of being caught $R_-^\ensuremath{\mathrm{adv}}$ and her ongoing costs $\ensuremath{\mathrm{O}}^\ensuremath{\mathrm{adv}}$ are low and the potential reward $R_+^\ensuremath{\mathrm{adv}}$ is high. Note that the black area, where neither Case 1 nor Case 2 (and thus also not Case 3) holds, never fulfills the ordering in Equation~\eqref{eq:ordering} and is thus undefined in our setting. \begin{figure}[t] \centering \newcommand{20}{20} \input{fig/mu-0-def-case-analysis} \caption{Visualization of the defender cases with regard to $\Dacc$ and $\Drob$ for $\Delta\mu^\ensuremath{\mathrm{def}\phantom{.}}\! = 0$ and $\ensuremath{r_{\max}} = 0.45$. Note that lower (negative) $\Delta\mu^\ensuremath{\mathrm{def}\phantom{.}}\!$ values shift the horizontal line that indicates the area of $\neg$ Case B downwards, and the diagonal line that indicates $\neg$ Case A to the right.} \label{fig:def_case} \end{figure} The same visualization for the defender is shown in Figure~\ref{fig:def_case} for a given $\Delta\mu^\ensuremath{\mathrm{def}\phantom{.}}\! = 0$ and $\ensuremath{r_{\max}} = 0.45$. The reachable areas are Case B (between the solid and dotted line), where the defender will never defend, and Case C, where possibly a mixed strategy occurs.
Note that while the slope of the solid line is fixed (by the ordering in Eq.~\eqref{eq:ordering}), the slope of the dotted line depends only on the value of $\ensuremath{r_{\max}}$.\footnote{An alternative formulation of the linear equation for the dotted line in Fig.~\ref{fig:def_case} is $\Dacc = \frac{\ensuremath{r_{\max}}\Drob}{1-\ensuremath{r_{\max}}}$ (for $\Delta\mu^\ensuremath{\mathrm{def}\phantom{.}}\!=0$).} As $\ensuremath{r_{\max}}$ decreases, the slope also decreases, which means that the defender has less incentive to use the robust classifier (the area of Case B increases). Therefore, if the proportion $\ensuremath{r_{\max}}$ of samples the adversary can influence (or attack) is small, a rational defender might not have an incentive to use the robust model, regardless of the strategy of the adversary. The case where the defender always uses the robust classifier is not considered, as it would correspond to a case where the robust model has lower costs than the non-robust one, even though it might have a lower $\ensuremath{\mathrm{CCR}}$ (but such a pure strategy can still be a solution of Case C). The solid diagonal line depicts the condition that $\Dacc + \Drob < 1$, so all points to the right (the striped area) are not valid.\footnote{Note that $\Dacc+\Drob=\ensuremath{\mathrm{acc}}_1 - (\ensuremath{\mathrm{acc}}_2 - \ensuremath{\mathrm{rob}}_2) - \ensuremath{\mathrm{rob}}_1 < \ensuremath{\mathrm{acc}}_1 \le 1$, by Equation~\eqref{eq:ordering}.} Further, the ordering (Eq.~\eqref{eq:ordering}) ensures that $\ensuremath{\mathrm{acc}}_1 > \ensuremath{\mathrm{acc}}_2$ and thus $\Dacc > 0$. Therefore, all points below the horizontal line at $\Dacc = 0$ are invalid. We include this area in the figure to illustrate when the defender would purely deploy the second model (Case A) without considering the adversary's strategy.
This is only the case when $\Delta\mu^\ensuremath{\mathrm{def}\phantom{.}}\!$ is positive, which in turn is only true if $\ensuremath{\mathrm{O}}_1^\ensuremath{\mathrm{def}\phantom{.}}\! > \ensuremath{\mathrm{O}}_2^\ensuremath{\mathrm{def}\phantom{.}}\!$, i.e., if in our scenario the defended model has lower ongoing costs than the undefended model. Colored dots and a star in Figure~\ref{fig:adv_case} and Figure~\ref{fig:def_case} illustrate the CIFAR-10 models proposed in \cite{madry2017towards} and \cite{shafahi2019adversarial}, where the accuracy and robustness values are taken from the corresponding publications. We see in Figure~\ref{fig:adv_case} that for these values and the chosen $\mu^\ensuremath{\mathrm{adv}}$, an adversary would always attack, independent of the strategy the defender chooses (Case 2). Similarly, in Figure~\ref{fig:def_case} we can see that for the given values and the chosen $\mu^\ensuremath{\mathrm{def}\phantom{.}}\!$ and $\ensuremath{r_{\max}}$, the defender is always in Case C, meaning that she might use both models. Keep in mind that the values of $\ensuremath{r_{\max}}, \mu^\ensuremath{\mathrm{adv}}$ and $\mu^\ensuremath{\mathrm{def}\phantom{.}}\!$ are chosen arbitrarily, and the defender will play the model with the higher $\ensuremath{\mathrm{CCR}}$ for the given attack rate (recall that the adversary will likely attack). Needless to say, our chosen value $\ensuremath{r_{\max}} = 0.45$ is very high, as it means that the adversary can influence up to $45\%$ of samples. Realistically, $\ensuremath{r_{\max}}$ will be much lower, possibly as low as one percent, which means that the defender might opt to use the non-robust model even though the adversary will always attack.
\section{Discussion} \label{sec:discussion} \begin{figure}[t] \centering \input{fig/CCR_Free} \caption{Correct classification rate on CIFAR-10 based on the original data in \cite{shafahi2019adversarial}.} \label{fig:Free} \end{figure} First of all, we extend the analysis of the $\ensuremath{\mathrm{CCR}}$, as already shown in Figure~\ref{fig:Madry}, to the results reported in~\cite{shafahi2019adversarial}; see Figure~\ref{fig:Free}. Here, we can see that of all the countermeasures proposed (for CIFAR-10 data), only $m=2$ and $m=8$ would be considered by the defender, again depending on the strategy of the adversary. If the adversary chooses to attack in less than $\approx 10\%$ of the cases, or if $\ensuremath{r_{\max}} \leq 0.1$, the undefended model is strictly preferable. The proposed ideal solution from~\cite{shafahi2019adversarial}, $m=8$, is only optimal if the adversary chooses to attack with a probability of more than $\approx 30\%$ (or if $\ensuremath{r_{\max}} > 0.3$). In between, the countermeasure with $m=2$ is the defender's optimal strategy, and all other strategies ($m=4, m=10$) are strictly dominated and thus never optimal (under assumption~\ref{item:D4}). A similar figure was also shown in~\cite{gilmer2018}, but with a completely different focus and without a theoretical foundation. By analysing our advanced adversarial classification game with the help of $\ensuremath{\mathrm{ASR}}$ and $\ensuremath{\mathrm{CCR}}$, we underpin these figures with a solid theory, and we encourage researchers to incorporate this evaluation method when reporting results about new attack methods or countermeasures. Then, as mentioned in Sec.~\ref{sec:BRAdef}, in many cases $\Delta\mu^\ensuremath{\mathrm{def}\phantom{.}}\! \approx 0$. This holds especially true if we consider the setting where the defender can choose between any standard trained model (model $i=1$) and its adversarially trained counterpart (model $i=2$).
Since the architecture of both models is identical, they incur the same ongoing costs to operate. By Table~\ref{tab:DefenderBestResponse}, the defender's best response is determined by the sign of \DCCR. This can be interpreted geometrically: $\DCCR=0$ (Case C) corresponds to an intersection of the functions \begin{align*} \rho &\mapsto \ensuremath{\mathrm{CCR}}_{11}(\rho)=(1-\rho)\ensuremath{\mathrm{acc}}_1 + \rho\ \ensuremath{\mathrm{rob}}_{11}\\ \rho &\mapsto \ensuremath{\mathrm{CCR}}_{21}(\rho)=(1-\rho)\ensuremath{\mathrm{acc}}_2 + \rho\ \ensuremath{\mathrm{rob}}_{21}\enspace, \end{align*} where $\rho=r_1\ensuremath{r_{\max}}\in[0,\ensuremath{r_{\max}}]$. Visually, this intersection can be seen in Figure~\ref{fig:Madry} and Figure~\ref{fig:Free}. All $\rho$ values before and after the point of intersection correspond to Cases B and A, respectively. Finally, if $\ensuremath{r_{\max}}=1$, then the conditions of all Cases A, B and C are satisfiable by some $r\in\StrategySetADV$. Similarly, if $\mu^\ensuremath{\mathrm{adv}}$ is close to zero, as motivated above, the adversary will always attack at her maximum. Thus, in such a situation, the defender will face a proportion $\ensuremath{r_{\max}}$ of adversarial examples among all samples. Alternatively, the defender could aim at increasing the value of $\mu^\ensuremath{\mathrm{adv}}$. This means either increasing the adversary's ongoing costs (e.\,g., by specifically designed countermeasures), decreasing her positive reward, or increasing the negative reward (e.\,g., by law enforcement and legal frameworks that harm an adversary).
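The intersection point of the two lines above has a simple closed form, which can be checked with a short script; the accuracy and robustness numbers below are illustrative assumptions, not values from the cited publications:

```python
# Crossover of the two CCR lines; setting CCR_11(rho) = CCR_21(rho)
# and solving for rho gives rho* = d_acc / (d_acc + d_rob).
# All numbers below are illustrative, not taken from the paper.
acc1, acc2 = 0.95, 0.87   # clean accuracies
rob1, rob2 = 0.00, 0.45   # robust accuracies (rob_11 and rob_21)

def ccr(acc, rob, rho):
    """Expected correct classification rate at attack proportion rho."""
    return (1 - rho) * acc + rho * rob

d_acc, d_rob = acc1 - acc2, rob2 - rob1
rho_star = d_acc / (d_acc + d_rob)   # about 0.151 for these numbers

# Both models achieve the same CCR at rho*; model 1 wins below it
# (Case B) and model 2 wins above it (Case A).
assert abs(ccr(acc1, rob1, rho_star) - ccr(acc2, rob2, rho_star)) < 1e-9
assert ccr(acc1, rob1, rho_star / 2) > ccr(acc2, rob2, rho_star / 2)
assert ccr(acc1, rob1, 2 * rho_star) < ccr(acc2, rob2, 2 * rho_star)
```

Note that this crossover $\rho^* = \frac{\Dacc}{\Dacc+\Drob}$ is exactly the threshold on $\ensuremath{r_{\max}}$ that appears in the conclusion.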
Furthermore, some of our simplifying assumptions might be subject to discussion, such as that the adversary has to pay the initial costs even when she does not attack, that misclassifying adversarial examples costs the defender exactly the same as misclassifying benign samples, that each successful adversarial example gives the same reward to the adversary, or that the ongoing costs of the adversary do not depend on the defender's strategy. Our intention was to advance the theory of adversarial classification games with a first, simple cost/reward structure, and we thus leave the mentioned extensions for future work. Finally, we want to mention that although interesting from a game-theoretical point of view, the situations where adversary and defender play a mixed strategy Nash equilibrium might not be particularly relevant in practice. First of all, as mentioned above, in practice it seems unrealistic to reach one of these states at all, and even if we reach such a state, these Nash equilibria are very unstable: small changes to the defender's strategy lead to a pure strategy best response of the adversary, and small changes to the adversary's strategy lead to a pure strategy best response of the defender. \section{Conclusion} We started this paper with the question: when should you defend your classifier? To answer this, we present the \emph{advanced adversarial classification game}, which captures all relevant aspects of the interplay between an adversary and a defender in adversarial machine learning. We introduce two new metrics, the attack success rate for the adversary and the correct classification rate for the defender, which enable us to capture both players' expected payoffs when faced with any, possibly mixed, opponent strategy.
By analyzing in detail the most common case in the literature, where the adversary has one possible attack and the defender may choose to implement one countermeasure, we are able to identify pure and mixed strategy equilibria for our game. By taking into consideration that in realistic scenarios both cost parameters $\mu^\ensuremath{\mathrm{adv}}$ and $\Delta\mu^\ensuremath{\mathrm{def}\phantom{.}}\!$ will be close to zero, we can conclude that the most important parameter of the game is $\ensuremath{r_{\max}}$, i.\,e., the proportion of samples an adversary can perturb. As shown in Figures~\ref{fig:Madry} and~\ref{fig:Free}, no rational defender would implement any of these proposed countermeasures if she expects less than $\approx 17\%$, respectively $\approx 10\%$, adversarial examples. Putting this into a universal answer to the question we set out to answer, it means: \begin{center} Do not defend your classifier when $\ensuremath{r_{\max}} \leq \frac{\Dacc}{\Dacc+\Drob}$. \end{center} \bibliographystyle{splncs04}
\section{Introduction} Non-linear dimensionality reduction algorithms are a powerful tool in data analysis and manifold learning. Many of these algorithms require the calculation of the eigendecomposition of some data-dependent kernel matrix. Examples of such algorithms include Laplacian eigenmaps~\cite{belkin2003laplacian}, LLE~\cite{roweis2000nonlinear}, Isomap~\cite{balasubramanian2002isomap}, MDS~\cite{buja2008data}, Spectral clustering~\cite{shi2000normalized}, and more. One of the key issues with these non-linear methods is how to map new unseen data to the previously learnt embedding. This process is called out-of-sample extension. Naively, one could repeat the eigendecomposition calculation on the entire data from scratch. However, since in most of these methods the dimension of the kernel matrix grows with the number of data points, the computation of the full or even the partial eigendecomposition of a large kernel matrix is impractical due to its runtime and space requirements. For example, algorithms for partial eigendecomposition such as the Lanczos algorithm and some variants of SVD require $O(n^2m)$ floating point operations, where $n$ is the dimension of the matrix (the number of data points) and $m$ is the number of components calculated. Randomized algorithms~\cite{halko2011algorithm, halko2011finding} use random projections of the data to reduce the time complexity of the decomposition to $O(n^2 \log m)$, which is still impractical for large $n$. Moreover, all eigendecomposition algorithms require storing the $n \times n$ kernel matrix either in RAM or on disk. Various methods for out-of-sample extension have been proposed (see for example~\cite{strange2011generalised,jansen2017scalable,aizenbud2015pca,mitz2019symmetric}), with the most prominent one being the Nystr{\"o}m method~\cite{bengio2004out,drineas2005nystrom}. We will describe the Nystr{\"o}m method in detail in the next section.
Due to the prominence and relevance of the Nystr{\"o}m method to this work, we wish to further discuss some of its aspects. An important application of the Nystr{\"o}m method is kernel approximation. In this task, we are mainly interested in approximating the kernel matrix of the entire data itself, and the approximation of the individual eigenvalues and eigenvectors of the kernel plays a secondary role. In this case, we use the eigendecomposition approximation obtained by the Nystr{\"o}m method to produce a low-rank approximation of the kernel. This allows speeding up kernel-related calculations~\cite{williams2001using}. The error analysis of the Nystr{\"o}m method and its variants has been widely investigated; see~\cite{wang2013improving, gittens2011spectral} and the references therein. To the best of our knowledge, all error bounds obtained in the literature focus on the error of the kernel matrix approximation, rather than on the error of the individual extended eigenvectors. These bounds may be less useful when the Nystr{\"o}m method is used for out-of-sample extension (usually as part of dimensionality reduction), where the individual eigenvectors of the kernel are of great importance. The performance of the Nystr{\"o}m method depends on the sampled subset of the data points, and many methods for sampling this subset have been proposed; see~\cite{kumar2012sampling,sun2015review} and the references therein. Our results are independent of the methodology used to obtain the subset of samples, and hence we do not discuss this issue in detail. Further improvements of the Nystr{\"o}m method that are not sampling-related have also been proposed in the literature.
The most notable ones include the ensemble Nystr{\"o}m method~\cite{kumar2009ensemble}, which averages several Nystr{\"o}m extensions in order to improve performance, the spectrum-shifted Nystr{\"o}m method~\cite{wang2014improving}, which provides superior performance in cases where the spectrum of the matrix decays slowly, and the modified Nystr{\"o}m method~\cite{wang2013improving}. More recently, works that use the structure of the kernel matrix were proposed. For example, the MEKA algorithm~\cite{si2017memory} provides superior kernel approximation for kernels that admit a block-diagonal structure. We will describe some of these methods in detail in the next section. A problem related to eigendecomposition approximation that is relevant to this paper is updating a known eigendecomposition of a matrix following a ``small'' perturbation, without calculating the entire decomposition from scratch. Classical perturbation results~\cite{stewart1990matrix} exist for a general symmetric perturbation, and will be described in detail in the next section. Other related works consider perturbations that have some structure; see, for example,~\cite{bunch1978rank,mitz2019symmetric} for the case where the perturbation is of rank one, and~\cite{oh2018multiple,brand2006fast} for a general low-rank perturbation. Other approaches to updating a known eigendecomposition include restarting the power method~\cite{langville2006updating} or the inverse iteration algorithm~\cite{trefethen1997numerical}, both of which require applying the updated matrix several times until convergence, which may be expensive if the matrix is large. The contribution of the current paper is threefold. First, we derive eigendecomposition perturbation formulas, accompanied by error bounds, for matrices of which only part of the spectrum is known. Second, we use these perturbation formulas to derive a new framework for out-of-sample extension of eigenvectors.
Unlike some of the existing extension methods, we show explicit error bounds for our approach for the individual extended eigenvectors. Third, we prove that the Nystr{\"o}m method and its generalizations are in fact special cases of our framework. This reveals the essence behind existing Nystr{\"o}m methods, allows us to analyze their accuracy, and provides means to derive new Nystr{\"o}m-type extensions that utilize the structure of the kernel matrix. It also allows our approach to be used for kernel approximation, analogously to the Nystr{\"o}m method. The rest of this paper is organized as follows. In Section~\ref{sec:prem}, we describe classical perturbation results, along with the Nystr{\"o}m method and some of its variants. In Section~\ref{sec:trunc_pert}, we extend the classical perturbation formulas to the case where only part of the spectrum of the perturbed matrix is known, and derive their error terms. In Section~\ref{sec:the_extension_framework}, we use these formulas to develop a perturbation based extension framework. In Section~\ref{sec:nys_is_pert}, we prove that our extension framework generalizes the Nystr{\"o}m method. In Section~\ref{sec:applications}, we suggest methods to improve the accuracy of our extension, and prove that some of them are related to variants of the Nystr{\"o}m method. In Section~\ref{sec:numerical}, we provide numerical results to support our theory and show the advantages of our extension framework. In Section~\ref{sec:summary}, we summarize our work. \section{Preliminaries} \label{sec:prem} In this section, we describe two methods for approximating the eigendecomposition of a matrix that are relevant to our work. We first describe the perturbation method in Section~\ref{sec:pert_eig}, and then describe the Nystr{\"o}m method in Section~\ref{sec:preliminaries_nystrom}.
\subsection{Perturbation of eigenvalues and eigenvectors} \label{sec:pert_eig} Let $A' \in \mathbb{R}^{n \times n}$ be a real symmetric positive definite matrix with distinct eigenvalues $\{t_i\}_{i=1}^n$ and their corresponding orthonormal eigenvectors $\{v_i\}_{i=1}^n$. Assume that $t_1 > t_2 > \cdots > t_n$. Let $E \in \mathbb{R}^{n \times n}$ be a real symmetric matrix. Consider a perturbation $A$ of $A'$ given by $A=A'+E$, with the eigenpairs of~$A$ denoted by $\{(s_i,w_i)\}_{i=1}^n$. We wish to find an approximation to the eigenpairs of $A$. The classical perturbation solution to this problem~\cite{stewart1998perturbation} is as follows. The approximated eigenvectors of $A$ are given by \begin{equation} \label{eqn:pert_org_vecs} \widetilde{w}_{i} = v_{i} + \sum_{k=1, k \neq i}^{n} \frac{(Ev_i,v_k)}{t_i-t_k}v_k + O(\norm{E}_2^2), \quad 1 \leq i \leq n , \end{equation} and the approximated eigenvalues of $A$ are given by \begin{equation} \label{eqn:pert_org_vals} \widetilde{s}_i = t_i + v_i^TEv_i + O(\norm{E}_2^2), \quad 1 \leq i \leq n . \end{equation} We note that the eigenvalue update formula~\eqref{eqn:pert_org_vals} depends only on the updated eigenvalue and its corresponding eigenvector, whereas the eigenvector update formula~\eqref{eqn:pert_org_vecs} depends on all eigenvalues and eigenvectors of $A'$. \begin{remark} There exist perturbation results for matrices with non-simple eigenvalues~\cite{byron2012mathematics}. However, since non-simple eigenvalues are highly unlikely in data-dependent matrices, we do not discuss this case and leave it for future work. \end{remark} \subsection{The Nystr{\"o}m method and its variants}\label{sec:preliminaries_nystrom} Let $K \in \mathbb{R}^{n \times n}$ be a symmetric positive-definite matrix. We wish to find the $k$ leading eigenpairs $\{(\lambda_i,u_i)\}_{i=1}^k$ of $K$.
The Nystr{\"o}m method~\cite{sun2015review,williams2001using} finds an approximation $\{(\widetilde{\lambda}_i,\widetilde{u}_i)\}_{i=1}^k$ to these eigenpairs as follows. First, $k$ columns of $K$ are sampled (typically uniformly at random without replacement). We assume, without loss of generality, that the columns and rows of~$K$ were rearranged so that the first~$k$ columns of~$K$ were sampled. Denote by $K'$ the $k \times k$ matrix consisting of the first~$k$ rows and columns of~$K$, and by~$C$ the $n \times k$ matrix consisting of the first~$k$ columns of~$K$. Then, we calculate the~$k$ eigenpairs of~$K'$ and denote them by $\{(\lambda_i', u_i' )\}_{i=1}^{k}$. Finally, the Nystr{\"o}m extension approximates the $k$ leading eigenvectors of $K$ by \begin{equation} \label{eq:nystrom} \widetilde{u}_i = \sqrt{\frac{k}{n}} \frac{1}{\lambda_i'}Cu_i', \quad i=1,\ldots,k. \end{equation} Moreover, the $k$ leading eigenvalues of $K$ are approximated by \begin{equation} \label{eq:nystrom_vals} \widetilde{\lambda}_i = \frac{n}{k} \lambda_i', \quad i=1,\ldots,k. \end{equation} The runtime complexity of the Nystr{\"o}m method is $O(nk^2 + k^3)$. In some applications, we are interested in an approximation of $K$ itself rather than its $k$ leading eigenpairs. In this case, the Nystr{\"o}m approximation of $K$ is \begin{equation} \label{eqn:nys_k_app} \widetilde{K}_{\text{nys}} = \sum_{i=1}^{k} \widetilde{\lambda}_i \widetilde{u}_i \widetilde{u}_i^{T} . \end{equation} A straightforward generalization of the Nystr{\"o}m method is the following. Let $l \geq k$ and choose $K'$ to be the $l \times l$ top-left submatrix of $K$. We calculate the $k$ leading eigenpairs of $K'$, and then extend them using~\eqref{eq:nystrom} and~\eqref{eq:nystrom_vals}. This form of extension is a generalization of the Nystr{\"o}m method, since choosing $l = k$ recovers the original method. If we choose $l = n$, we get the exact eigenvectors of $K$.
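As an illustration, a minimal NumPy sketch of this generalized extension follows. This is our own sketch, not code from the cited works; it assumes that for $l > k$ the scaling factors in~\eqref{eq:nystrom} and~\eqref{eq:nystrom_vals} become $\sqrt{l/n}$ and $n/l$, which reduces to the plain method for $l = k$:

```python
import numpy as np

def nystrom(K, k, l):
    """Extend the k leading eigenpairs of the top-left l x l block of a
    PSD kernel K (assuming the first l columns were sampled)."""
    n = K.shape[0]
    C = K[:, :l]                            # n x l sampled columns
    lam, U = np.linalg.eigh(K[:l, :l])      # eigh returns ascending order
    lam, U = lam[::-1][:k], U[:, ::-1][:, :k]
    u_ext = np.sqrt(l / n) * (C @ U) / lam  # extended eigenvectors
    lam_ext = (n / l) * lam                 # extended eigenvalues
    return lam_ext, u_ext

# Sanity check on an exactly rank-k kernel, where the method is exact
# whenever the sampled block is invertible
rng = np.random.default_rng(1)
n, k = 50, 5
X = rng.standard_normal((n, k))
K = X @ X.T
lam_ext, u_ext = nystrom(K, k, k)
K_nys = (u_ext * lam_ext) @ u_ext.T         # rank-k kernel approximation
assert np.allclose(K_nys, K, atol=1e-6)
```

For $l = k$, the reconstruction above equals $C K'^{-1} C^T$, so a rank-$k$ kernel is recovered exactly whenever $K'$ is invertible.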
Intuitively, the larger $l$ is, the better the approximation will be, at the cost of greater computational complexity. The runtime complexity of this method is $O(nk^2 + lk^2)$. We will use this generalization of the Nystr{\"o}m method in the numerical experiments in Section~\ref{sec:numerical}. Since the Nystr{\"o}m approximation~\eqref{eqn:nys_k_app} of the kernel matrix $K$ is a low-rank approximation, it may provide poor results when $K$ is not low-rank. This might occur, for example, when its spectrum decays slowly. A possible approach to overcome this problem is the spectrum-shifted Nystr{\"o}m extension~\cite{wang2014improving}. This method essentially applies the classical Nystr{\"o}m extension to a shifted kernel matrix, i.e., applies the Nystr{\"o}m extension to \begin{equation} \label{eq:shift} K_{\text{shift}} = K - \mu I, \end{equation} for some $\mu \geq 0$. The updated eigenvalues~\eqref{eq:nystrom_vals} are then shifted back by $\mu$. It is suggested in~\cite{wang2014improving} to set $\mu$ to be the mean of the smallest $n-k$ eigenvalues of $K$, that is, \begin{equation} \label{eq:mu_of_ss} \mu = \frac{\sum_{j = k+1}^n \lambda_j}{n - k} = \frac{\text{trace}(K) - \sum_{j = 1}^k \lambda_j}{n - k} . \end{equation} If we denote by $\widetilde{K}_{\text{shift}}$ the kernel approximation~\eqref{eqn:nys_k_app} obtained using the shifted Nystr{\"o}m extension, it is shown in~\cite{wang2014improving} that \begin{equation} \norm{K - \widetilde{K}_{\text{shift}}}_F \leq \norm{K - \widetilde{K}_{\text{nys}}}_F . \end{equation} Alternatively, the kernel $K$ may admit a block-diagonal structure. As demonstrated in~\cite{si2017memory}, this may happen for some kernel functions when the data consist of several clusters. In this case, the MEKA algorithm~\cite{si2017memory} essentially performs a Nystr{\"o}m approximation on each cluster of data.
Each such approximation corresponds to a block on the diagonal of the kernel matrix, and the resulting approximation is block-diagonal. A related approach to the MEKA algorithm for improving the Nystr{\"o}m approximation~\eqref{eqn:nys_k_app} is the ensemble Nystr{\"o}m method~\cite{kumar2009ensemble}. The idea behind this method is to perform $q$ independent Nystr{\"o}m kernel approximations on random subsets of the data, and then average them. Formally, given $q$ independent Nystr{\"o}m approximations $\{ \widetilde{K}_i \}_{i=1}^q$, the ensemble Nystr{\"o}m approximation is given by \begin{equation} \widetilde{K}_{\text{ens}} = \sum_{i=1}^q \mu_i \widetilde{K}_i , \end{equation} for some weights $\{\mu_i\}_{i=1}^q$. It is suggested in~\cite{kumar2009ensemble} to use $\mu_i = \frac{1}{q}$ for $1 \leq i \leq q$. Better error bounds for this method compared to the classical Nystr{\"o}m method are proven in~\cite{kumar2009ensemble}. The difference between the ensemble Nystr{\"o}m method and the MEKA algorithm is that in the former, the individual Nystr{\"o}m approximations are chosen at random rather than by clusters, and the resulting approximation is their average rather than their concatenation in a block-diagonal matrix. \section{Truncating the perturbation formulas} \label{sec:trunc_pert} In this section, we consider a variant of the problem presented in Section~\ref{sec:pert_eig}, in which only the $m$ leading eigenpairs $\{(t_i,v_i)\}_{i=1}^m$ of the unperturbed matrix $A'$ are known, and we wish to approximate the $m$ leading eigenpairs of $A$. To this end, we introduce a parameter $\mu \in \mathbb{R}$ whose purpose is to approximate the unknown eigenvalues $\{t_i\}_{i={m+1}}^n$ of $A'$. We derive two approximation formulas based on the classical perturbation formula~\eqref{eqn:pert_org_vecs}. These two approximation formulas differ in their order of approximation as well as in their computational complexity. 
The first formula, which we refer to as the first order truncated perturbation formula, provides a first order approximation to the eigenvectors of $A$, as detailed in the following proposition. \begin{proposition} \label{prop:pert_partial_1} Let $A' \in \mathbb{R}^{n \times n}$ be a real symmetric matrix with $m$ leading eigenpairs $\{(t_i,v_i)\}_{i=1}^m$. Assume that $t_1 > t_2 > \cdots > t_m$. Let $E \in \mathbb{R}^{n \times n}$ be a real symmetric matrix. Let $A=A'+E$ be a perturbation of $A'$, and denote the $m$ leading eigenpairs of $A$ by $\{(s_i,w_i)\}_{i=1}^m$. Denote by $V^{(m)}$ the $n \times m$ matrix consisting of the $m$ leading eigenvectors of $A'$. Let $\mu \in \mathbb{R}$ and $1 \leq i \leq m$. Denote \begin{equation} \label{eqn:r} r_i = \Big( I - V^{(m)}V^{(m)T} \Big) Ev_i . \end{equation} Then, $w_i$ is approximated by the first order truncated perturbation formula \begin{equation} \label{eq:pert_expansion} \widetilde{w}_{i}^{(1)} = v_{i} + \sum_{k=1, k \neq i}^{m} \frac{(Ev_i,v_k)}{t_i-t_k}v_k + \frac{1}{t_i- \mu} r_i , \end{equation} with an error satisfying \begin{equation} \label{eqn:error_trunc_1} \norm{w_i - \widetilde{w}_i^{(1)}} \leq \frac{ \sum_{k=m+1}^n \abs{t_k - \mu} }{\abs{t_i - t_m}\abs{t_i - \mu}} \norm{E}_2 + O \big( \norm{E}_2^2 \big). \end{equation} \end{proposition} The proof for Proposition~\ref{prop:pert_partial_1} is given in Appendix~\ref{app1}. The second formula, which we refer to as the second order truncated perturbation formula, provides a second order approximation to the eigenvectors of $A$, as detailed in the following proposition. \begin{proposition} \label{prop:pert_partial_2} Let $A' \in \mathbb{R}^{n \times n}$ be a real symmetric matrix with $m$ leading eigenpairs $\{(t_i,v_i)\}_{i=1}^m$. Assume that $t_1 > t_2 > \cdots > t_m$. Let $E \in \mathbb{R}^{n \times n}$ be a real symmetric matrix. Let $A=A'+E$ be a perturbation of $A'$, and denote the $m$ leading eigenpairs of $A$ by $\{(s_i,w_i)\}_{i=1}^m$. 
Denote by $V^{(m)}$ the $n \times m$ matrix consisting of the $m$ leading eigenvectors of $A'$. Let $\mu \in \mathbb{R}$ and $1 \leq i \leq m$. Denote \begin{equation} \label{eqn:r2} r_i = \Big( I - V^{(m)}V^{(m)T} \Big) Ev_i . \end{equation} Then, $w_i$ is approximated by the second order truncated perturbation formula \begin{equation} \label{eq:pert_expansion2} \widetilde{w}_{i}^{(2)} = v_{i} + \sum_{k=1, k \neq i}^{m} \frac{(Ev_i,v_k)}{t_i-t_k}v_k + \frac{1}{t_i- \mu} r_i - \frac{\mu}{(t_i- \mu)^2} r_i + \frac{1}{(t_i- \mu)^2} A'r_i, \end{equation} with an error satisfying \begin{equation} \label{eqn:error_trunc_2} \norm{w_i - \widetilde{w}_i^{(2)}} \leq \frac{ \sum_{k=m+1}^n \abs{t_k - \mu}^2 }{\abs{t_i - t_m}\abs{t_i - \mu}^2} \norm{E}_2 + O \big( \norm{E}_2^2 \big). \end{equation} \end{proposition} The proof for Proposition~\ref{prop:pert_partial_2} is given in Appendix~\ref{app2}. We note that formula~\eqref{eq:pert_expansion2} requires applying $A'$ and is therefore computationally more expensive. We discuss in detail the runtime and memory requirements of formulas~\eqref{eq:pert_expansion} and~\eqref{eq:pert_expansion2} in Appendix~\ref{app:runtime}. The update formulas~\eqref{eq:pert_expansion} and~\eqref{eq:pert_expansion2} depend on a parameter $\mu$, whose choice is discussed in~\cite{mitz2019symmetric}. If $A'$ is known to be low-rank, we use $\mu = 0$. When $A'$ is not low-rank, and especially if its spectrum is known to decay slowly, we follow~\cite{mitz2019symmetric} and suggest using \begin{equation} \label{eqn:mu_mean} \mu_{\text{mean}} = \frac{\text{trace}(A') - \sum_{i=1}^{m}t_i}{n-m}, \end{equation} which is the mean of the unknown eigenvalues. We conclude this section by proving that under a certain assumption on $A'$, the first order truncated perturbation formula~\eqref{eq:pert_expansion} and the second order truncated perturbation formula~\eqref{eq:pert_expansion2} are equal.
Furthermore, in this case, the $O \big( \norm{E}_2 \big) $ term in the error bound of both approximations cancels out, as stated in the following proposition. \begin{proposition} \label{prop:order1_is_order2} Let $\delta \geq 0$ and assume that $A'$ can be written in the form of a low-rank matrix plus a spectrum shift, that is $A' = V^{(m)}TV^{(m)T} + \delta I$ for some diagonal matrix $T \in \mathbb{R}^{m \times m}$. Then, for $\mu = \delta$, the first order truncated perturbation formula~\eqref{eq:pert_expansion} and the second order truncated perturbation formula~\eqref{eq:pert_expansion2} are equal, that is \begin{equation} \widetilde{w}_{i}^{(1)} = \widetilde{w}_{i}^{(2)}, \end{equation} and the approximation errors satisfy \begin{equation} \norm{w_i - \widetilde{w}_{i}^{(1)}} = \norm{w_i - \widetilde{w}_{i}^{(2)}} = O \big( \norm{E}_2^2 \big), \end{equation} for all $1 \leq i \leq m$. \end{proposition} The proof of Proposition~\ref{prop:order1_is_order2} is given in Appendix~\ref{app3}. \begin{corollary} \label{col:first_is_second} If $A'$ is of rank $m$, and $\mu = 0$ in~\eqref{eq:pert_expansion} and~\eqref{eq:pert_expansion2}, then the first order and second order truncated perturbation formulas give rise to the same approximation. The error of this approximation is $ O \big( \norm{E}_2^2 \big)$. \end{corollary} \section{Perturbation based extension framework} \label{sec:the_extension_framework} In this section, we derive our perturbation based extension framework based on Proposition~\ref{prop:pert_partial_1}. Let $K \in \mathbb{R}^{n \times n}$ be a symmetric positive semidefinite matrix whose $m$ leading eigenpairs are denoted by $\{ (\lambda_i, u_i ) \}_{i=1}^m$. Let $K^s\in \mathbb{R}^{n \times n}$ be a symmetric matrix consisting of any subset of entries of $K$, with the rest of its entries being $0$, as illustrated in Figure~\ref{fig:K_star}. 
Our extension framework enables us to ``extend'' the eigenvectors of any such $K^s$ to the eigenvectors of $K$, as follows. Let $\{ (\lambda_i^s, u_i^s ) \}_{i=1}^m$ be the leading eigenpairs of $K^s$, let $U^{s(m)} \in \mathbb{R}^{n \times m}$ be the matrix consisting of the~$m$ eigenvectors corresponding to the~$m$ largest eigenvalues of $K^s$, and let $\mu \geq 0$ be a parameter. Let $1 \leq i \leq m$. By the first order approximation in Proposition~\ref{prop:pert_partial_1}, the eigenvector $u_i$ is approximated by \begin{equation} \label{eqn:pert_extension} \widetilde{u}_{i} = u^s_{i} + \sum_{k=1, k \neq i}^{m} \frac{((K - K^s)u^s_i,u^s_k)}{\lambda^s_i-\lambda^s_k}u^s_k + \frac{1}{\lambda^s_i- \mu} \big( I - U^{s(m)}U^{s(m)T} \big) (K - K^s)u^s_i , \end{equation} with an error satisfying \begin{equation} \label{eqn:ext_error} \norm{u_i - \widetilde{u}_i} \leq \frac{ \sum_{k=m+1}^n \abs{\lambda^s_k - \mu} }{\abs{\lambda^s_i - \lambda^s_m}\abs{\lambda^s_i - \mu}} \norm{K - K^s}_2 + O \big( \norm{K - K^s}_2^2 \big). \end{equation} Furthermore, by~\eqref{eqn:pert_org_vals}, the eigenvalue $\lambda_i$ is approximated by \begin{equation} \label{eqn:pert_ext_vals} \widetilde{\lambda}_i = \lambda_i^s + u_i^{sT}(K - K^s)u^s_i, \end{equation} with an error of magnitude $\abs{\lambda_i - \widetilde{\lambda}_i } = O(\norm{K - K^s}^2_2)$. Equations~\eqref{eqn:pert_extension} and~\eqref{eqn:pert_ext_vals} are our perturbation based extension method. We will refer to this extension as the \emph{perturbation extension of the eigenpairs of $K^s$ to the eigenpairs of $K$}. Note that this framework is quite general, and enables us to perform extensions over any symmetric submatrix of $K$. We will propose and discuss several methods for choosing $K^s$ in Section~\ref{sec:applications}. An extension framework analogous to~\eqref{eqn:pert_extension} that is based on the second order approximation in Proposition~\ref{prop:pert_partial_2} can also be obtained.
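Equations~\eqref{eqn:pert_extension} and~\eqref{eqn:pert_ext_vals} translate directly into code. The sketch below is our own illustrative implementation (the thresholded choice of $K^s$ used to exercise it is only an example, not one of the extensions proposed later).

```python
import numpy as np

# Our sketch of the perturbation extension (eqn:pert_extension) together with
# the eigenvalue update (eqn:pert_ext_vals), for a generic symmetric K^s.
def perturbation_extension(K, Ks, m, mu=0.0):
    n = K.shape[0]
    s_all, U_all = np.linalg.eigh(Ks)
    lead = np.argsort(s_all)[::-1][:m]
    s, Us = s_all[lead], U_all[:, lead]      # m leading eigenpairs of K^s
    P = np.eye(n) - Us @ Us.T                # I - U^{s(m)} U^{s(m)T}
    D = K - Ks                               # the perturbation K - K^s
    lam = np.empty(m)
    U = np.empty((n, m))
    for i in range(m):
        Du = D @ Us[:, i]
        u = Us[:, i].copy()
        for k in range(m):
            if k != i:
                u += (Us[:, k] @ Du) / (s[i] - s[k]) * Us[:, k]
        u += (P @ Du) / (s[i] - mu)
        U[:, i] = u / np.linalg.norm(u)
        lam[i] = s[i] + Us[:, i] @ Du        # eigenvalue update (eqn:pert_ext_vals)
    return lam, U
```

When $\norm{K - K^s}_2$ is small, the extended eigenvalues agree with those of $K$ up to second order in the perturbation.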
Our extension framework can also be used to obtain a low-rank approximation of the kernel matrix of the entire data $K$. Analogously to the Nystr{\"o}m method~\cite{williams2001using}, this low-rank kernel approximation is defined by \begin{equation} \widetilde{K}_{\text{pert}} = \sum_{i=1}^m \widetilde{\lambda}_i \widetilde{u}_i \widetilde{u}_i^T . \end{equation} \begin{figure} \centering \begin{subfigure}[b]{0.3\linewidth} \includegraphics[width=\linewidth]{grid_all.png} \caption{$K$.} \end{subfigure} \begin{subfigure}[b]{0.3\linewidth} \includegraphics[width=\linewidth]{grid_general2.png} \caption{Possible $K^s$.} \end{subfigure} \begin{subfigure}[b]{0.3\linewidth} \includegraphics[width=\linewidth]{grid_general.png} \caption{Another possible $K^s$.} \end{subfigure} \caption{Illustration of the submatrix $K^s$. Blank entries indicate $0$.} \label{fig:K_star} \end{figure} \section{Equivalence with the Nystr{\"o}m method} \label{sec:nys_is_pert} In this section, we prove that our perturbation extension framework~\eqref{eqn:pert_extension} is in fact a generalization of the Nystr{\"o}m method described in Section~\ref{sec:preliminaries_nystrom}, by showing that the Nystr{\"o}m method arises from our extension framework by a specific choice of $K^s$. Let $K \in \mathbb{R}^{n \times n}$ be a kernel matrix, and let $m < n$. Assume, without loss of generality, that we sample the first $m$ columns of $K$ to perform the Nystr{\"o}m extension, and denote by $\{ (\hat{\lambda}_i, \hat{u}_i) \}_{i=1}^m$ the approximation to eigenpairs of $K$ obtained by the Nystr{\"o}m method as defined in~\eqref{eq:nystrom} and~\eqref{eq:nystrom_vals}. The following proposition states that a specific choice of the matrix $K^s$ for the perturbation extension~\eqref{eqn:pert_extension} gives rise, up to a multiplicative constant, to the Nystr{\"o}m method eigenpairs defined above.
\begin{proposition} \label{prop:nys_is_pert} Using the above notation, let $K^s$ be the $n\times n$ matrix whose top left $m \times m$ submatrix is the top left $m \times m$ submatrix of $K$, and the rest of its entries are 0. Denote by $\{ (\lambda_i^s, u_i^s) \}_{i=1}^{m}$ the eigenpairs of $K^s$. Set $\mu$ of~\eqref{eqn:pert_extension} to be~$0$, and denote by $\{ (\widetilde{\lambda}_i, \widetilde{u}_i) \}_{i=1}^m$ the perturbation extension of the eigenpairs $\{ (\lambda_i^s, u_i^s) \}_{i=1}^{m}$ of $K^s$ to the eigenpairs of $K$. Denote by $K'$ the top left $m \times m$ submatrix of $K$ and by $\{ (\lambda_i',u_i') \}_{i=1}^m$ its eigenpairs. Denote by $\{ (\hat{\lambda}_i,\hat{u}_i) \}_{i=1}^m$ the Nystr{\"o}m extension of $\{ (\lambda_i',u_i') \}_{i=1}^m$ (see Section~\ref{sec:preliminaries_nystrom}). Then, \begin{equation}\label{eqn:ev equivalence} \hat{u}_i = \sqrt{\frac{m}{n}} \widetilde{u}_i \quad \text{and} \quad \hat{\lambda}_i = \frac{n}{m} \widetilde{\lambda}_i \end{equation} for all $1 \leq i \leq m$. \end{proposition} The proof of Proposition~\ref{prop:nys_is_pert} is given in Appendix~\ref{app4}. The formulation of the Nystr{\"o}m method as a perturbation based extension using Proposition~\ref{prop:nys_is_pert} enables us to provide an error analysis based on Propositions~\ref{prop:pert_partial_1} and~\ref{prop:pert_partial_2}. Contrary to previous works that provide an error bound for the kernel approximation itself, our error analysis is for the individual approximated eigenvectors, as stated in the following proposition. \begin{proposition} [Vector-wise error for the Nystr{\"o}m method] \label{prop:pert_error_for_nys} Using the above notation, the error induced by the Nystr{\"o}m method satisfies \begin{equation} \label{eqn:ext_error_as_nys} \norm{u_i - \hat{u}_i} = O \big( \norm{K - K^s}_2^2 \big), \quad 1 \leq i \leq m . \end{equation} \end{proposition} \begin{proof} Follows directly from the equivalence stated in Proposition~\ref{prop:nys_is_pert} by noting that in the Nystr{\"o}m method setting, the requirements of Corollary~\ref{col:first_is_second} hold. \end{proof} \section{New extensions based on the perturbation framework} \label{sec:applications} The perturbation extension framework derived in Section~\ref{sec:the_extension_framework} allows for various extensions that depend on the choice of the matrix $K^s$. In this section, we propose several types of extensions corresponding to different choices of $K^s$. The key idea is to choose a matrix $K^s$ whose eigendecomposition is ``easy'' to compute, in order to make the extension computationally attractive.
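The equivalence stated in Proposition~\ref{prop:nys_is_pert} is also easy to verify numerically. The following sketch (our own code; the kernel and the sizes are arbitrary choices) computes both sides of~\eqref{eqn:ev equivalence}, up to the usual sign ambiguity of eigenvectors.

```python
import numpy as np

# Numerical check (ours) of Proposition prop:nys_is_pert.
np.random.seed(3)
n, m = 40, 6
X = np.random.randn(n, 5)
K = np.exp(-0.5 * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))

# Classical Nystrom extension from the first m columns of K.
lp_all, Up_all = np.linalg.eigh(K[:m, :m])
lead = np.argsort(lp_all)[::-1]
lp, Up = lp_all[lead], Up_all[:, lead]
U_nys = np.sqrt(m / n) * (K[:, :m] @ Up) / lp    # \hat{u}_i
l_nys = (n / m) * lp                             # \hat{\lambda}_i

# Perturbation extension with K^s the zero-padded top left block and mu = 0.
Ks = np.zeros_like(K)
Ks[:m, :m] = K[:m, :m]
s_all, Us_all = np.linalg.eigh(Ks)
lead = np.argsort(s_all)[::-1][:m]
s, Us = s_all[lead], Us_all[:, lead]
P = np.eye(n) - Us @ Us.T
D = K - Ks
U_pert = np.empty((n, m))
l_pert = np.empty(m)
for i in range(m):
    Du = D @ Us[:, i]
    u = Us[:, i].copy()
    for k in range(m):
        if k != i:
            u += (Us[:, k] @ Du) / (s[i] - s[k]) * Us[:, k]
    u += (P @ Du) / s[i]
    U_pert[:, i] = u
    l_pert[i] = s[i] + Us[:, i] @ Du
```

Up to machine precision, $\hat{u}_i = \pm\sqrt{m/n}\,\widetilde{u}_i$ and $\hat{\lambda}_i = (n/m)\widetilde{\lambda}_i$, as the proposition asserts.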
In addition, we prove that similarly to the classical Nystr{\"o}m method, the spectrum shifted Nystr{\"o}m method and the ensemble Nystr{\"o}m method described in Section~\ref{sec:prem} are in fact special cases of our general extension framework, for suitable choices of~$K^s$. \subsection{$\mu$-shifted extension} \label{sec:shifted_ext} In this type of extension, we choose the matrix $K^s$ to be the top left $m \times m$ submatrix of $K$ padded with zeros, similarly to the Nystr{\"o}m method (see Figure~\ref{fig:app_illus}). The difference from the Nystr{\"o}m method lies in the parameter $\mu$ of~\eqref{eqn:pert_extension}. In Proposition~\ref{prop:nys_is_pert}, we set the parameter $\mu$ to $\mu=0$. This might be a reasonable choice when the kernel matrix $K$ is low-rank, or when its spectrum decays fast. When that is not the case, it might be beneficial to choose a parameter $\mu$ that approximates the unknown eigenvalues of $K$. A reasonable choice for $\mu$ in such a case is $\mu_{\text{mean}}$ of~\eqref{eqn:mu_mean}. We now prove that given a parameter $\mu \geq 0$, the spectrum shifted Nystr{\"o}m method with parameter $\mu$ coincides with the perturbation extension method with the same $\mu$, as detailed in the following proposition. \begin{proposition} \label{prop:ss_nys_is_pert} Using the notation of this section, the eigenpairs approximated by the spectrum shifted Nystr{\"o}m method and the $\mu$-shifted extension method are equal. \end{proposition} The proof of Proposition~\ref{prop:ss_nys_is_pert} is given in Appendix~\ref{app5}. The runtime complexity of the $\mu$-shifted extension is the same as that of the Nystr{\"o}m method, that is $O(nm^2 + m^3)$. \subsection{Block diagonal extension} \label{sec:block_ext} In this type of extension, we choose $K^s$ to be a block diagonal matrix (see Figure~\ref{fig:app_illus}). The block sizes can be arbitrary, but for simplicity of notation, we choose $k$ blocks of an identical size $l \geq m$.
For each block, we pad the block with zeros to obtain an $n \times n$ matrix and calculate its $m$ leading eigenpairs, and then extend them using~\eqref{eqn:pert_extension}. Denote by $\{ (\widetilde{\lambda}_i^{(j)}, \widetilde{u}_i^{(j)}) \}_{i=1}^m$ the eigenpairs extension of block~$j$, and by $\widetilde{K}_j \in \mathbb{R}^{n \times n}$ the resulting kernel approximation. To combine the $k$ approximations $\{ \widetilde{K}_j \}_{j=1}^k$, one might use a weighted mean, that is \begin{equation} \widetilde{K} = \sum_{j=1}^{k} \mu_j \widetilde{K}_j . \end{equation} It easily follows from Proposition~\ref{prop:nys_is_pert} that the block diagonal extension is identical to the ensemble Nystr{\"o}m method. The runtime complexity of this method is $O\big( k(nm^2 + lm^2) \big)$. However, the eigendecomposition of the blocks can be done in parallel. \subsection{$p$-band extension} \label{sec:bp_ext} In this type of extension, we choose $K^s$ to be a band matrix of width $p$ (see Figure~\ref{fig:app_illus}). This extension may provide superior results when the kernel $K$ has most of its energy concentrated around the diagonal. We demonstrate the advantage of this extension method in Section~\ref{sec:num_p_band}. \begin{figure}[ht] \centering \begin{subfigure}[b]{0.3\linewidth} \includegraphics[width=\linewidth]{grid_m.png} \caption{$K^s$ for $\mu$-shifted extension. \\ \hfill} \end{subfigure} \label{fig:1} \begin{subfigure}[b]{0.3\linewidth} \includegraphics[width=\linewidth]{grid_block.png} \caption{$K^s$ for block diagonal extension.} \end{subfigure} \begin{subfigure}[b]{0.3\linewidth} \includegraphics[width=\linewidth]{tridiag.png} \caption{$K^s$ for $p$-band extension. \\ \hfill} \end{subfigure} \caption{Illustration of the submatrix $K^s$ for each of the discussed extensions.
Blank entries indicate $0$.} \label{fig:app_illus} \end{figure} \subsection{Sparse extension} \label{sec:sparse_ext} In this type of extension, we assume that the kernel matrix $K$ is sparse, and choose $K^s$ to be some sparse submatrix of it, as illustrated in Figure~\ref{fig:app_sparse}. More concretely, denoting by $\text{nnz}(K)$ the number of non-zero entries of $K$, in the sparse extension framework, we need to choose $q \cdot \text{nnz}(K)$ entries of $K$ to define~$K^s$, for some $0 < q \leq 1$. While this extension can be applied to any such subset, motivated by the $\norm{E}_2$ term in the error bounds~\eqref{eqn:error_trunc_1} and~\eqref{eqn:error_trunc_2}, we suggest to choose the $q \cdot \text{nnz}(K)$ largest entries of $K$. We demonstrate the advantage of this extension method in Section~\ref{sec:num_sparse}. \begin{figure} \centering \begin{subfigure}[b]{0.3\linewidth} \includegraphics[width=\linewidth]{k_spase.png} \caption{Sparse $K$.} \end{subfigure} \begin{subfigure}[b]{0.3\linewidth} \includegraphics[width=\linewidth]{k_spase_samp.png} \caption{Corresponding $K^s$.} \end{subfigure} \caption{Illustration of a sparse extension. Blank entries indicate $0$.} \label{fig:app_sparse} \end{figure} \section{Numerical examples} \label{sec:numerical} In this section, we demonstrate numerically the results obtained in the previous sections. We start by demonstrating numerically the error bounds derived in Propositions~\ref{prop:pert_partial_1} and~\ref{prop:pert_partial_2}. Then, we demonstrate the advantages of the extensions proposed in Section~\ref{sec:applications} for both real and synthetic datasets. \subsection{Perturbation error bounds} In this section, we demonstrate numerically the behavior of the error bounds in Propositions~\ref{prop:pert_partial_1} and~\ref{prop:pert_partial_2}. 
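A condensed version of such an experiment (our own Python sketch, much smaller than the experiments reported below) measures how the error of the first order formula scales with the size of the perturbation.

```python
import numpy as np

# Error scaling check (ours): by (eqn:error_trunc_1), the error of the first
# order truncated formula (eq:pert_expansion) is linear in ||E|| to leading order.
def first_order_update(A1, E, m, mu=0.0):
    n = A1.shape[0]
    t_all, V_all = np.linalg.eigh(A1)
    lead = np.argsort(t_all)[::-1][:m]
    t, V = t_all[lead], V_all[:, lead]
    P = np.eye(n) - V @ V.T
    W = np.empty((n, m))
    for i in range(m):
        Ev = E @ V[:, i]
        w = V[:, i].copy()
        for k in range(m):
            if k != i:
                w += (V[:, k] @ Ev) / (t[i] - t[k]) * V[:, k]
        w += (P @ Ev) / (t[i] - mu)
        W[:, i] = w
    return W

np.random.seed(8)
n, m = 30, 10
A1 = np.random.randn(n, n); A1 = (A1 + A1.T) / 2
A1 /= np.linalg.norm(A1, 2)
E = np.random.randn(n, n); E = (E + E.T) / 2
E /= np.linalg.norm(E, 2)
cs, errs = [1e-6, 1e-5], []
for c in cs:
    t_c, V_c = np.linalg.eigh(A1 + c * E)
    v = V_c[:, np.argmax(t_c)]               # leading eigenvector of A1 + cE
    w = first_order_update(A1, c * E, m)[:, 0]
    errs.append(min(np.linalg.norm(v - w), np.linalg.norm(v + w)))
# log-log slope of error versus c; close to 1 for small c by (eqn:error_trunc_1)
slope = np.log(errs[1] / errs[0]) / np.log(cs[1] / cs[0])
```

This is the same measurement, in miniature, as the one plotted in~\Cref{fig:pert_1}.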
In our first example, we demonstrate the linear dependence of the error on the norm of the matrix $E$ in Propositions~\ref{prop:pert_partial_1} and~\ref{prop:pert_partial_2}. To that end, we generate a random symmetric matrix $A'$, normalize it to have a unit norm, and then calculate its 10 leading eigenpairs. We then generate a random symmetric matrix $E$ and normalize it to have a unit norm. Then, for various values of $c$, we approximate the 10 leading eigenpairs of $A_c = A' + cE$ by the first and second order approximations~\eqref{eq:pert_expansion} and~\eqref{eq:pert_expansion2} using $\mu = 0$. Denote by $v_c$ the leading eigenvector of $A_c$, and by $u^1_c$ and $u^2_c$ its approximations by~\eqref{eq:pert_expansion} and~\eqref{eq:pert_expansion2}, respectively. For each~$c$, we measure the errors $\norm{v_c - u^1_c}$ and $\norm{v_c - u^2_c}$. In~\Cref{fig:pert_1}, we plot $\log{\norm{v_c - u^1_c}}$ and $\log{\norm{v_c - u^2_c}}$ versus $\log \norm{cE}$. As predicted by theory, there is a linear dependence between the error in the eigenvector approximation and the norm of the perturbation matrix. Furthermore, the errors achieved by the first and second order formulas are comparable since the dominant term in~\eqref{eqn:error_trunc_1} and~\eqref{eqn:error_trunc_2} is the $\norm{E}$ term. In our second example, we demonstrate the linear and quadratic dependence of the error on the $\sum_{j=m+1}^n \abs{\lambda_j -\mu}$ term. We generate a random symmetric matrix $A'$ of rank 10, so that its 10 leading eigenvalues are between 1 and 2. We then generate a random symmetric matrix $E$ and normalize it to have a norm of $10^{-6}$. We choose $\norm{E}_2$ to be relatively small, so that its contribution to the error will not mask the effect of $\sum_{j=m+1}^n \abs{\lambda_j -\mu}$. Then, for various values of $c$, we generate a matrix $A'_c$, whose leading 10 eigenvalues are the same as of $A'$, and the rest are exactly $c$. 
We approximate the 10 leading eigenpairs of $A_c = A'_c + E$ by the first and second order approximations~\eqref{eq:pert_expansion} and~\eqref{eq:pert_expansion2} using $\mu = 0$, and measure the error in the same way as in the previous example. In~\Cref{fig:pert_2} we plot $\log{\norm{v_c - u^1_c}}$ and $\log{\norm{v_c - u^2_c}}$ versus $\log{\abs{\lambda_j -\mu}} = \log c$. As predicted by theory, there is a linear dependence between the error in the eigenvector approximation and $c$ for the first order approximation, and a quadratic dependence for the second order formula. \begin{figure} \centering \begin{subfigure}[b]{0.4\linewidth} \includegraphics[width=\linewidth]{pert_ex_1.eps} \caption{Dependence on $\norm{E}$.} \label{fig:pert_1} \end{subfigure} \begin{subfigure}[b]{0.4\linewidth} \includegraphics[width=\linewidth]{pert_ex_2.eps} \caption{Dependence on $\sum_{j=m+1}^n \abs{\lambda_j -\mu}$. } \label{fig:pert_2} \end{subfigure} \caption{Numerical demonstration of the error terms in the approximations~\eqref{eq:pert_expansion} and~\eqref{eq:pert_expansion2}. (\subref{fig:pert_1})~$\log{(\text{error})}$ vs. $\log{\norm{cE}}$. The slope of both curves is~1, demonstrating the linear dependence of the error terms~\eqref{eqn:error_trunc_1} and~\eqref{eqn:error_trunc_2} on $\norm{E}$ for both the first and second order approximations. (\subref{fig:pert_2})~$\log{(\text{error})}$ vs. $\log{c}$. The slope of the linear curve of the first order approximation is 1, whereas the slope of the linear curve of the second order approximation is 2, demonstrating the linear and quadratic dependence of the first and second order error terms on $\sum_{j=m+1}^n \abs{\lambda_j -\mu}$, respectively.} \label{fig:numerical_pert} \end{figure} \subsection{Perturbation extension for synthetic and real-world data} In this section, we demonstrate the advantages of the various extension methods proposed in Section~\ref{sec:applications} for both synthetic and real-world data.
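The following helpers (our own sketch; the parameter values in the test are placeholders) set up the kernel matrices and the subspace comparison metric used throughout this subsection.

```python
import numpy as np

# Our helpers for this subsection: Gaussian and polynomial kernel matrices, and
# principal angles between subspaces via the SVD of the product of orthonormal
# bases.
def gaussian_kernel(X, gamma):
    """K[i, j] = exp(-gamma * ||x_i - x_j||^2)."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def polynomial_kernel(X, d):
    """K[i, j] = (1 + x_i^T x_j)^d."""
    return (1.0 + X @ X.T) ** d

def principal_angles(U1, U2):
    """Principal angles between the column spans of U1 and U2."""
    Q1 = np.linalg.qr(U1)[0]
    Q2 = np.linalg.qr(U2)[0]
    s = np.linalg.svd(Q1.T @ Q2, compute_uv=False)
    return np.arccos(np.clip(s, -1.0, 1.0))
```

The largest principal angle is the quantity we report when comparing an approximated leading eigenspace against the exact one.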
We do not include in this section the $\mu$-shifted extension and the block diagonal extension, as they were proven to be identical to variants of the Nystr{\"o}m method that were already discussed in the literature (see \cite{kumar2009ensemble,wang2014improving}). The kernel functions we use in this section are the Gaussian kernel, resulting in a kernel matrix whose $(i,j)$ entry is $\exp(-\gamma \norm{x_i - x_j}^2)$ for some parameter $\gamma > 0$, and the polynomial kernel, resulting in the kernel matrix whose $(i,j)$ entry is $(1 + x_i^Tx_j)^d$ for some integer $d$. As our metric for comparing the performance of the various methods we use the principal angle~\cite{knyazev2012principal} between the exact subspace spanned by the kernel's top eigenvectors and the subspace spanned by their approximations. The real-world datasets we use are taken from the UCI Machine Learning Repository~\cite{Dua:2019} and are described in Table~\ref{tbl:datasets}. \begin{table} \centering \begin{tabular}{|l|l|p{10cm}|} \hline Name & Dimension & Description \\ \hline MNIST & 784 & Each sample is a grey scale image of a handwritten digit between zero and nine. \\ \hline Superconductivity & 81 & Each sample contains 81 features extracted from one of 21263 superconductors. \\ \hline Poker & 10 & Each sample is a hand consisting of five playing cards drawn from a standard deck of 52 cards. Each card is described using two attributes (suit and rank). \\ \hline Wine quality & 11 & Each sample corresponds to a variant of a Portuguese wine, where the 11 attributes are numerical characteristics of the wine such as acidity, pH, residual sugar etc. 
\\ \hline \end{tabular} \caption{Real-world datasets used.} \label{tbl:datasets} \end{table} \subsubsection{$p$-band extension} \label{sec:num_p_band} To demonstrate the advantage of the $p$-band extension method, we generate a matrix $K$ whose $(i,j)$ and $(j,i)$ entries are $X^{\frac{|i-j|}{0.1}}$, where $X$ is drawn uniformly between 0 and 1 (see~\Cref{fig:p_band_image}). We also set entries of $K$ that are smaller than $10^{-10}$ to $0$. Then, for various values of $p$, we compute a $p$-band extension with $m = 10$, and compare the approximation error of the $p$-band extension to the best rank~$10$ approximation of $K$ in the $\norm{\cdot}_2$ norm. For comparison, we also compute Nystr{\"o}m extensions for several values of $l$ and measure their error in the same way. We repeat this experiment 20 times. In~\Cref{fig:p_band_result}, we plot the approximation error of each repetition versus the percentage of non-zero entries selected in $K^s$ out of the total number of non-zero entries of $K$. We can see that the error graphs of the $p$-band extension decay to 0 much faster than the error graphs of the Nystr{\"o}m extension. We conclude that when the kernel admits the structure discussed in this section, the $p$-band extension results in superior performance and converges to the optimal solution much faster than the Nystr{\"o}m extension.
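For concreteness, generating such a banded kernel and extracting the width-$p$ band $K^s$ can be sketched as follows (our own code; we assume the intended kernel decays away from the diagonal, i.e., entries $X^{|i-j|/0.1}$ with $X$ uniform in $(0,1)$).

```python
import numpy as np

# Our sketch: a kernel concentrated around the diagonal, and the band K^s.
def band_subset(K, p):
    """Keep the entries of K within a band of width p around the diagonal."""
    n = K.shape[0]
    i, j = np.indices((n, n))
    return np.where(np.abs(i - j) <= p, K, 0.0)

np.random.seed(7)
n = 80
X = np.random.rand(n, n)
X = np.triu(X) + np.triu(X, 1).T                 # symmetric uniform draws
i, j = np.indices((n, n))
K = X ** (np.abs(i - j) / 0.1)                   # decays quickly off the diagonal
K[K < 1e-10] = 0.0                               # as in the experiment above
```

Increasing $p$ keeps strictly more entries, so the residual $\norm{K - K^s}$ is monotonically non-increasing in $p$.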
\begin{figure} \centering \begin{subfigure}[b]{0.4\linewidth} \includegraphics[width=\linewidth]{p_band_image.eps} \caption{The kernel matrix $K$.} \label{fig:p_band_image} \end{subfigure} \begin{subfigure}[b]{0.4\linewidth} \includegraphics[width=\linewidth]{p_band_result.eps} \caption{Subspace error.} \label{fig:p_band_result} \end{subfigure} \caption{$p$-band extension for simulated data.} \label{fig:app_p_band} \end{figure} \subsubsection{Sparse extension} \label{sec:num_sparse} To demonstrate the advantage of the sparse extension, we randomly choose $n=1000$ points from each tested dataset, and normalize each of its features to have 0 mean and unit variance. We then calculate the corresponding kernel matrices using a Gaussian kernel, a linear kernel, and a quadratic kernel. To sparsify the kernel, we set its 90\% smallest entries to 0. We then compute several sparse extensions for various values of $q$ with $m$ depending on the dataset and kernel chosen. We compare the error of the sparse extensions to the best rank-$m$ approximation of $K$ in the $\norm{\cdot}_2$ norm. For comparison, we also compute Nystr{\"o}m extensions for several values of $l$, and measure their error in the same way. We repeat this experiment 20 times. We then plot the approximation error of each extension method versus the percentage of non-zero entries selected in $K^s$ out of the total number of non-zero entries of $K$. The results of this procedure are shown in Figure~\ref{fig:app_mnist} for the MNIST dataset, in Figure~\ref{fig:app_semi} for the superconductivity dataset, in Figure~\ref{fig:app_poker} for the poker dataset and in Figure~\ref{fig:app_wine} for the wine quality dataset. We can see that in most scenarios, the error graphs of the sparse extension decay to 0 much faster than those of the Nystr{\"o}m extension. Additionally, the error decay of the sparse extension has smaller variance.
We conclude that when the kernel is sparse, the sparse extension usually provides superior performance, and converges to the optimal solution much faster than the Nystr{\"o}m extension. \begin{figure}[H] \centering \captionsetup{justification=centering} \begin{subfigure}[b]{0.3\linewidth} \includegraphics[width=\linewidth]{mnist_gauss_001_m_5.eps} \caption{Gaussian kernel \\ ($\gamma = 0.01, m = 5$) } \end{subfigure} \begin{subfigure}[b]{0.3\linewidth} \includegraphics[width=\linewidth]{mnist_poly_1_m_20.eps} \caption{Linear kernel \\ ($m = 20$)} \end{subfigure} \begin{subfigure}[b]{0.3\linewidth} \includegraphics[width=\linewidth]{mnist_poly_2_m_5.eps} \caption{Quadratic kernel \\ ($m = 5$)} \end{subfigure} \caption{Extension of the MNIST dataset.} \label{fig:app_mnist} \end{figure} \begin{figure}[H] \centering \captionsetup{justification=centering} \begin{subfigure}[b]{0.3\linewidth} \includegraphics[width=\linewidth]{superconductor_gauss_01_m_5.eps} \caption{Gaussian kernel \\ ($\gamma = 0.1, m = 5$) } \end{subfigure} \begin{subfigure}[b]{0.3\linewidth} \includegraphics[width=\linewidth]{superconductor_poly_1_m_10.eps} \caption{Linear kernel \\ ($m = 10$)} \end{subfigure} \begin{subfigure}[b]{0.3\linewidth} \includegraphics[width=\linewidth]{superconductor_poly_2_m_10.eps} \caption{Quadratic kernel \\ ($m = 10$)} \end{subfigure} \caption{Extension of the superconductivity dataset.} \label{fig:app_semi} \end{figure} \begin{figure}[H] \centering \captionsetup{justification=centering} \begin{subfigure}[b]{0.3\linewidth} \includegraphics[width=\linewidth]{poker_gauss_01_m_5.eps} \caption{Gaussian kernel \\ ($\gamma = 0.1, m = 5$) } \end{subfigure} \begin{subfigure}[b]{0.3\linewidth} \includegraphics[width=\linewidth]{poker_poly_1_m_10.eps} \caption{Linear kernel \\ ($m = 10$)} \end{subfigure} \begin{subfigure}[b]{0.3\linewidth} \includegraphics[width=\linewidth]{poker_poly_2_m_5.eps} \caption{Quadratic kernel \\ ($m = 5$)} \end{subfigure} \caption{Extension of
the poker dataset.} \label{fig:app_poker} \end{figure} \begin{figure}[H] \centering \captionsetup{justification=centering} \begin{subfigure}[b]{0.3\linewidth} \includegraphics[width=\linewidth]{wine_gauss_05_m_10.eps} \caption{Gaussian kernel \\ ($\gamma = 0.5, m = 10$) } \end{subfigure} \begin{subfigure}[b]{0.3\linewidth} \includegraphics[width=\linewidth]{wine_poly_1_m_10.eps} \caption{Linear kernel \\ ($m = 10$)} \end{subfigure} \begin{subfigure}[b]{0.3\linewidth} \includegraphics[width=\linewidth]{wine_poly_2_m_10.eps} \caption{Quadratic kernel \\ ($m = 10$)} \end{subfigure} \caption{Extension of the wine quality dataset.} \label{fig:app_wine} \end{figure} \section{Summary} \label{sec:summary} In this paper, we propose an eigenvector extension framework that is based on perturbation theory. We prove that this framework is a generalization of the popular Nystr{\"o}m method and some of its variants. Furthermore, contrary to existing error bounds for the Nystr{\"o}m method, our framework provides error bounds for the individual eigenvectors. This is useful when the extension is used as part of a dimensionality reduction procedure. Our extension framework is quite flexible, and can thus take advantage of the structure of the kernel matrix. We demonstrate our theoretical derivations numerically for kernel matrices that are either sparse or concentrated around the diagonal. \section*{Acknowledgements} This research was supported by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement 723991 - CRYOMATH). \bibliographystyle{plain}
\section{Introduction} The last years have seen a revolution in ground-based {$\gamma$}-ray detectors. We can now detect the spectra of nearby TeV blazars like Mrk 421 and 501 out to $\sim 20$ TeV, and during the strongest flares (with observed TeV fluxes up to 10 times that of the Crab), we can follow fluctuations in these spectra on timescales down to the shortest ones likely in these objects. This represents a unique opportunity. Using these detectors in combination with X-ray satellites like ASCA, SAX, and RXTE, we can now begin to simultaneously follow all significant X-ray/{$\gamma$}-ray variations in a blazar's emission (e.g., see the contribution by Takahashi in this proceedings). This will provide the most stringent test yet for the synchrotron-Compton (SC) blazar emission model (see, e.g., \cite{sikora} and \cite{cracow} for reviews of current emission models and controversies). In this paper, we will argue that such a test is crucial in helping us reach one of the ``holy grails'' of TeV astronomy: the detection of absorption in blazar {$\gamma$}-ray spectra due to {$\gamma$}-ray pair production on the low energy diffuse extragalactic background radiation (DEBRA). As discussed in detail by several authors in this proceedings (see the contributions of Primack, Stecker, and Biller and references therein), a strong constraint on the amount of absorption is very exciting because it constrains the density of the target infrared/optical DEBRA photons responsible for it. The DEBRA at these energies is most probably redshifted stellar and dust emission and thus contains important information on galaxy evolution and cosmology. It is hard to measure by other means, especially in the $\sim 5-50\mu$ range, because of the large Galactic and solar system foregrounds present. In \S 2 below, we briefly review what sort of absorption effects one should expect to see in objects like Mrk 421 and Mrk 501. 
We then show that it is difficult to constrain absorption at $\sim 1-10$ TeV without knowing in detail the shape of the intrinsic, unabsorbed {$\gamma$}-ray spectrum. (Note that since blazars are so variable, one really needs to know the shape of the {\it instantaneous} {$\gamma$}-ray spectrum.) In particular, even though Mrk 421/501 are nearby, their spectra may already be significantly absorbed (by a factor up to 2!) at 3 TeV. Therefore, the lack of a sharp cutoff in the spectra of both Mrk 421 \cite{Zw97} and Mrk 501 \cite{Ah9799,Sam98,TA,CAT} up to 10 TeV does not allow us to unambiguously extract information on DEBRA, although these data are probably enough to rule out some of the more exotic DEBRA models \cite{Biller}. We then note that the extension of the HEGRA measurement of the Mrk 501 spectrum to at least $\sim 20$ TeV (Konopelko, this proceedings) may be extremely important. Making rather minimal assumptions about the intrinsic {$\gamma$}-ray spectrum at these energies, we obtain a strong constraint on the DEBRA intensity at $\sim 10-60 \mu.$ All recently published DEBRA models either run into trouble with this constraint or significantly underpredict the flux detected by COBE at $140 \mu.$ If the COBE detection is correct, this implies the sources responsible for the far IR DEBRA are qualitatively different from the typical ones we see today. Most of the star formation in the early Universe must occur in highly obscured, dusty environments. In \S 3, we show that if the SC model works during a large flare (where the emission from a single region may dominate), then we can use the observed X-ray spectrum to robustly predict the intrinsic TeV spectrum. Then, and only then, we can try to look for absorption below $\sim$ 10 TeV.
\section{ Gamma-Ray Pair Production on Diffuse Background Radiation } To obtain the mean free path for a {$\gamma$}-ray of energy $E_\gamma,$ one must in general convolve the DEBRA photon number distribution, $n(\epsilon),$ with the pair production cross-section. However, this cross-section is peaked, and for nearby ($z\ll1$) HBLs and almost all plausible DEBRA shapes, over half the interactions occur on DEBRA target photons with energies $\epsilon = 0.5-1.5 \epsilon_\ast,$ where $\epsilon_\ast = 4m_e^2c^4/E_\gamma\approx 1.04 (E_\gamma/1{\rm TeV})^{-1}{\rm\ eV}.$ To accuracy better than $\sim 40\%,$ we can thus approximate the absorption optical depth as $$\tau_{\gamma\gamma}(E_\gamma) \approx 0.24 ({E_\gamma \over \rm 1 TeV})({u(\epsilon_\ast) \over 10^{-3} \rm eV cm^{-3}})({z_s \over 0.1}) h_{60}^{-1}.$$ Here $u(\epsilon_\ast)=\epsilon_\ast^2 n(\epsilon_\ast)$ is the typical energy density in a energy band centered on $\epsilon_\ast,$ $h_{60}$ is the Hubble constant in units of $60 {\rm km}{\rm s}^{-1} {\rm Mpc}^{-1},$ and $z_s$ is the source redshift. If $I_0 (E_\gamma)$ is the intrinsic source spectrum, the corresponding observed spectrum is then $I(E_\gamma)=I_0(E_\gamma)\exp(-\tau_{\gamma\gamma}).$ Note that if the DEBRA spectrum near $\epsilon_{\ast}$ can be approximated by a power law, $n(\epsilon) \propto \epsilon^{-\alpha_\ast},$ then $\tau_{\gamma\gamma}$ goes as $E_\gamma^{\alpha_{\ast}-1}.$ Connecting the COBE far IR measurements to the latest UV background estimates, one gets a crude DEBRA spectral index $\alpha \sim 2$ (e.g., see \cite{dwek} for a good compilation of the latest DEBRA observations and models). To zeroth order, then, $\tau_{\gamma\gamma} \propto E_\gamma,$ and the observed spectrum should be $\sim I_0(E_\gamma)\exp(-E_\gamma/E_c)$ where the cutoff energy $E_c$ is set by $\tau_{\gamma\gamma}(E_c)=1.$ Interestingly, this is exactly the type of shape seen in HEGRA observations of Mrk 501 (Konopelko, this proceedings). 
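For orientation, a back-of-the-envelope evaluation of this scaling (our own, using the Mrk 501 numbers quoted in the text, $z_s = 0.034$ and $u(\epsilon_\ast) \sim 2\times 10^{-3}{\rm\ eV\ cm^{-3}}$ at $\epsilon_\ast \sim 3\mu$, and taking $h_{60}=1$) gives, at $E_\gamma = 3$ TeV, $$\tau_{\gamma\gamma} \approx 0.24 \times 3 \times \frac{2\times 10^{-3}}{10^{-3}} \times \frac{0.034}{0.1} \approx 0.49,$$ i.e., an attenuation $e^{-\tau_{\gamma\gamma}} \approx 0.6$, so the intrinsic flux exceeds the observed one by a factor $I_0/I \approx 1.6$ already at these energies.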
Does this mean we are seeing absorption? No. An intrinsic blazar {$\gamma$}-ray spectrum of this type is exactly what one expects (e.g., see next section) in an SC emission model. To next order, the DEBRA is better described as the sum of two emission components: starlight from galaxies peaking at $\sim 1$ eV, and dust re-emission peaking at $\sim 100 \mu$ (see Fig. 1a). The $1-10 \mu$ side of the ``valley'' between the DEBRA emission peaks is typically a power law with $\alpha \sim 1.$ At the corresponding $E_\gamma\sim 1-10$ TeV, roughly the energy range of current TeV detectors, $\tau_{\gamma\gamma}$ is thus $\sim$ constant! The shape of the spectrum is unchanged, and if we only had the $\sim 1-10$ TeV data, we again cannot infer absorption (even if it is strong). We demonstrate this explicitly in Fig. 1b by computing the intrinsic source spectrum corrected for absorption effects. (The numerical calculations shown use the exact energy dependent cross-section for $\gamma$-$\gamma$ pair production.) Between $\sim 1-10$ TeV, both the corrected and uncorrected spectra are very reasonable looking SC spectra, and without more information (next section), we have no constraint. Note that recent results at $\epsilon_\ast \sim 3 \mu$ \cite{dwekb} give a high DEBRA energy density, $u(3\mu)\sim 2\times 10^{-3}{\rm\ eV\ cm^{-3}}.$ Even for Mrk 501 ($z_s = 0.034$), $\tau_{\gamma\gamma} \approx 0.5$ at $E_\gamma \sim 3$ TeV, i.e., absorption corrections may in fact be important ($I_0/I \sim 2$)! \begin{figure*} \centerline{\psfig{file=fig1a.ps,height=8cm}\psfig{file=fig1b.ps,height=8cm}} \caption{ {\it(a-left panel)} The DEBRA assumed for the absorption calculations shown in the adjoining panel. The two rightmost data points (black squares) are the COBE detections with 2$\sigma$ errorbars shown. The leftmost point with errorbars (2$\sigma$) is the recent 3.5 $\mu$ result \cite{dwekb}. The remaining data points are various lower limits taken from the compilation of Dwek et al.
\cite{dwek}, see their Fig. 8. Curve $a$ shows the DEBRA model of Franceschini et al. \cite{fran}. Curve $b$ shows the same model, but multiplied by a factor 2. Curves $c$ and $d$ show two modifications to the DEBRA prediction in the 10-140 $\mu$ range which are normalized respectively to match the COBE 140 $\mu$ best fit value and the 2$\sigma$ lower limit. {\it (b-right panel)} The {\it heavy (lower) solid line} shows the best fit spectrum reported by HEGRA for Mrk 501 (Konopelko, these proceedings). The other curves show the intrinsic blazar spectra obtained by correcting this observed spectrum for absorption caused by the corresponding backgrounds in the left panel. The dashed vertical line at 20 TeV represents a conservative estimate for the maximum detected photon energy. } \label{f1} \end{figure*} Without detailed spectral information, the strongest DEBRA constraints may in fact come from energies $E_\gamma \sim 10-30$ TeV, which probe DEBRA energies on the ``other'' side of the valley ($\epsilon_\ast\sim10-60\mu$). Here, $\alpha > 2$ and absorption should grow {\it super-exponentially} with {$\gamma$}-ray energy. HEGRA does {\it not} show such a rapidly falling spectrum. The implications of this for the background models of Fig. 1a are shown in Fig. 1b. As a representative example of current DEBRA models, we took that of Franceschini et al. \cite{fran}. This model significantly underpredicts the COBE flux points, yet it is still marginally ruled out by the fact that the ``unabsorbed'' spectrum begins to curve up at $\sim$ 18 TeV, and at 25 TeV exceeds a power law extrapolation from lower energies by a factor 3. If the SC model applies, this is a serious problem since the shape of the intrinsic Compton {$\gamma$}-ray spectrum is generically concave down. Arbitrarily increasing the Franceschini et al. \cite{fran} DEBRA by a factor 2, we obtain a model that fits the COBE data well and appears compatible with current upper limits at other energies. 
However, the unabsorbed spectrum then explodes above $\sim$ 10 TeV and almost certainly is not compatible with the intrinsic {$\gamma$}-ray spectrum of Mrk 501. Most of the DEBRA models shown in the compilation of Dwek et al. \cite{dwek} have similar problems. To avoid them, the predicted $10-40\mu$ DEBRA flux must be low (a few nW ${\rm m}^{-2} {\rm sr}^{-1}$). To match the COBE points, the DEBRA at longer wavelengths must then increase rapidly with wavelength. For example (see curves $c$ and $d$ in Fig. 1), if we assume the DEBRA spectrum shortward of 140 $\mu$ is a power law $n(\lambda) \propto \lambda^{\alpha}$, then $\alpha$ must be $\mathrel{\mathpalette\simov >} 4.$ Since the DEBRA is an integral (smoothed) quantity, the individual spectra of the objects that dominate the DEBRA must be at least as steep. Either the standard DEBRA/galaxy evolution scenario is correct and the COBE and/or HEGRA measurements are wrong (favored by Stecker, this proceedings), or the far IR DEBRA is produced by objects that are not typical of what we see in our local Universe. An increasingly discussed possibility (see Primack, these proceedings) is that much of the star formation in the Universe in fact occurs in heavily obscured regions, e.g., in ultraluminous IR galaxies like Arp 220 which are relatively rare today. If the COBE and TeV data are both correct, this conclusion becomes inescapable. Note that if the IR DEBRA sources are like Arp 220 (with an IR emission peak at $\sim 60\mu$), the 20 TeV absorption constraint tells us they must evolve very strongly with redshift ($\mathrel{\mathpalette\simov >} (1+z)^{3}$) and emit the bulk of their light at $z\sim 3.$ \section{Predicting the Intrinsic Gamma-Ray Spectrum of a TeV Blazar} If the SC model works for TeV blazars, we may be able to robustly predict their intrinsic {$\gamma$}-ray spectra. The key is that the Compton scattering responsible for the TeV {$\gamma$}-rays probably occurs in the Klein-Nishina limit. 
In this regime, the target photon comes away with essentially all the energy of the incident electron. Also, the cooling of the electrons is dominated by synchrotron radiation. In an external acceleration scenario (where electrons are injected into the source region), this means the {\it only} way to change the shape of the cooled electron distribution is to change the shape of the electron injection function, i.e., the cooled electron spectrum may be rather insensitive to changes in the source. Since the scattering electron is effectively replaced by a photon of the same energy, this also means the observed TeV {$\gamma$}-ray spectrum is essentially the cooled GeV/TeV electron distribution (see dotted line in Fig. 2) and, hence, may be similarly insensitive to source changes. In particular, it does not depend on the target photon distribution, as shown in Fig. 2, where a completely different (non-synchrotron) target photon distribution gives the same Compton upscattered spectrum. This may explain the remarkable stability of the Mrk 501 TeV spectrum despite the large changes seen in source luminosity (and may mean that coadding spectra to improve statistics, e.g., Konopelko, this proceedings, is justified). In short, if we can ``invert'' the observed synchrotron X-ray spectrum to obtain the underlying electron distribution (e.g., as in Fig. 2), we have all we need to predict the shape of the upscattered TeV spectrum. Extrapolating from the spectrum observed at low energies where intergalactic absorption should not be important (e.g., 500-700 GeV), we then predict the unabsorbed flux at TeV energies. Our accuracy is limited by uncertainties in $B$ and $\delta$ (the region's characteristic magnetic field and Doppler boost factor) and the presence of external IR target photons (too many low energy ones take the model out of the Klein-Nishina regime). 
However, bad estimates of $B$ and $\delta$ only cause an overall energy shift of the predicted {$\gamma$}-ray spectrum by a factor $({\delta \over B})^{1/2},$ i.e., a fairly weak dependence. Also, the rest frame energy density of external IR photons must exceed the synchrotron photon energy density to cause significant deviations in the predicted spectrum. This is possible, but not likely since objects like Mrk 421/501 have underluminous accretion disks and do not have a {$\gamma$}-ray (Compton) luminosity that significantly exceeds the X-ray (synchrotron) luminosity. \begin{figure*} \centerline{\psfig{file=fig2.ps,height=8cm}} \caption{ The {\it solid} line is the time-integrated photon spectrum from a variable SSC model chosen to give spectra similar to those seen in the April 1997 Mrk 501 flare \cite{catan,pian}. (In this model, the variable parameter is total electron luminosity; electrons are always injected into the source with the same energy spectrum.) The {\it dashed} line shows the synchrotron and Compton fluxes produced by the electron distribution reconstructed from the ``observed'' 0.1-300 keV model X-ray spectrum. The target soft photon distribution used to compute the Compton spectrum was $n(\epsilon) \propto \epsilon^{-2}$ between $0.2<\epsilon<5$ eV (as measured in the source frame). The {\it dotted} line shows the electron distribution in a steady-state SSC model with the same mean parameters as the variable SSC model. The distribution is plotted in the same way as the photon distribution, i.e., as $\gamma^2 N(\gamma)$ ($\gamma$ is the electron Lorentz factor). Above $\sim 1$ TeV, note the excellent agreement with the Compton {$\gamma$}-ray spectrum. The distribution has {\it not} been rescaled. \label{f2}} \end{figure*} \section{Conclusions} TeV blazars like Mrk 421 and 501 provide ideal laboratories to test in detail the emission models for these objects. 
If we can show that a simple SC model works during at least the strongest flares, then we can use good broadband X-ray spectra of these sources to infer their intrinsic TeV spectra. Then, and only then, can we look for evidence of {$\gamma$}-ray absorption below $\sim 10$ TeV and attempt to constrain the corresponding $\sim 1-20 \mu$ DEBRA. (Blazar modelers should also not forget that the Compton spectra they are trying to fit could be strongly attenuated!) Above $\sim 10$ TeV, absorption is expected to grow so rapidly with {$\gamma$}-ray energy that simply requiring the absorption-corrected spectrum to be concave down is sufficient to impose very interesting constraints on the DEBRA. If the COBE and HEGRA Mrk 501 data are correct, the DEBRA must rise very steeply ($n(\lambda) \propto \lambda^{\alpha}$, with $\alpha \sim 4$) longwards of $\sim 40 \mu.$ Unless we identify closer sources at 20-30 TeV, the finite energy resolution of detectors will prevent us from obtaining much stronger constraints than those presented for this wavelength region. In any single observation, absorption effects could be due both to intrinsic blazar IR/O photons as well as intergalactic ones. While these contributions can be difficult to disentangle (note, though, that internal absorption does not affect the concave down argument), Mrk 421 and 501 conveniently have the same redshift. Thus, we can require that any absorption attributed to intergalactic photons be exactly the same for {\it all} flares in {\it both} sources. These two sources alone may give us the first firm handle on DEBRA {$\gamma$}-ray absorption.
\section{Introduction} An accurate determination of the ages and distances of Globular Clusters (GCs) is an important constraint for the age of the Universe, and for the theory of galaxy formation. In particular it is important to compute very accurate relative ages to understand whether or not there is a spread in ages among the Galactic GCs. The use of the stellar luminosity function (LF) to compute ages of GCs was first proposed by Paczynski (1984). Later on, Jimenez \& Padoan (1996) and Padoan \& Jimenez (1996) developed a method to determine {\it the age and the distance of a GC simultaneously}, using the LF. The method is described in detail in Padoan \& Jimenez (1996), where it is concluded, on the basis of artificial data, that an uncertainty of about 0.6 Gyr in the age and 0.06 mag in the distance modulus can be achieved, if the number of stars, in 1 mag-wide luminosity bins, is known with an uncertainty of 3\%. In this paper we use recent observations of the Galactic Globular Clusters M5 (Sandquist et al. 1996) and M55 (Desidera \& Ortolani, private communication) to apply the LF method and compute accurate ages and distance moduli. These two clusters are well suited for this purpose since M55 is metal-poor and M5 has intermediate metallicity, so we can investigate the spread in ages (if any) in the formation of the GC system. In this letter we apply the LF method to real data for the first time, and we show that the method is far superior to traditional methods (isochrone fitting to the main sequence turn-off point, the $\Delta V$ method, or any other method that involves fitting the main sequence turn-off). The method is superior because it allows one to determine the age and the distance simultaneously and independently, and because the errors in the age and distance determinations are straightforward to calculate. Furthermore, it gives age determinations with sufficient accuracy to make cosmological predictions. 
This first application of our LF method to real data shows that our previous theoretical predictions were correct. \section{The data} In order to apply our LF method it is necessary to obtain the complete LF of the globular cluster from almost the tip of the red giant branch (RGB) down to the upper main sequence. The number of observed stars should be very large, in order to keep statistical errors sufficiently low. Recently, two LFs that fulfill these requirements have been obtained by Sandquist et al. (1996) for M5, and by Desidera \& Ortolani (private communication) for M55. M5 is a massive globular cluster, with an average metallicity of $[Fe/H]=-1.17\pm0.01$, according to Sneden et al. (1992), and $[Fe/H]=-1.4$, according to Zinn \& West (1984). Since it is a high galactic latitude cluster ($b=46.8^\circ$), it is not seriously affected by contamination and interstellar reddening. For the metallicity of this cluster we used $[Fe/H]=-1.3$, and adopt $Y=0.24$. The completeness of the LF is discussed in detail in Sandquist et al. (1996). M55 is among the metal-poor clusters. The main advantage of M55 is that it is not very concentrated and therefore it is possible to resolve its core into stars. Again, due to its high galactic latitude ($b=-23^\circ$), interstellar reddening and contamination are negligible. We adopt a metallicity of $[Fe/H]=-1.9$ (Briley et al. 1993) and $Y=0.24$. The LF was provided to us by Desidera \& Ortolani (private communication), who have performed extensive tests with artificial stars in order to compute the completeness of the sample. In this letter we use the data in the filter band $V$ for M55 and in $I$ for M5. This allows us to illustrate the robustness of the method in two very different bands. The other colours were used in both cases to remove the horizontal and asymptotic giant branch (AGB) stars. In the case of the horizontal branch it is quite easy to distinguish its stars from the RGB. 
Even if this is not the case for the AGB stars, the number of stars used in the LF method is so large in that bin that a small mistake in distinguishing RGB and AGB stars does not contribute significantly to the errors in the final age and distance determination. In producing LFs to study both GCs we have used $[\alpha/Fe]=0.4$ and computed this effect in our solar-scaled tracks using the approach by Chieffi, Straniero \& Salaris (1991). This value of alpha enhancement is well justified by spectroscopic observations of giants in GCs (Minniti et al. 1996), and is valid for the relevant metallicity range ($[Fe/H]=-1.5$ to $-2.0$). \section{The luminosity function method} To determine the age and the distance of a GC, two independent constraints are needed from the LF, that is to say the number of stars in two different bins. One more bin is needed for the normalization, and a fourth bin is useful to estimate the completeness of the data, but it is not used in this work, since the completeness was previously estimated performing experiments with artificial stars (nevertheless, we have checked that the fourth bin gives the same completeness estimation as the artificial star tests). We use a bin positioned at the RGB to normalize the LF, and two bins around the sub-giant region to constrain age and distance modulus. As discussed in Padoan \& Jimenez (1996), the precise position of the two bins at the sub-giant region is extremely important. In fact, our LF method is based on the careful optimization of those two bins. 
The result of the optimization process is a contour plot of the quantity $R(t,m-M)$, on the plane $(t,m-M)$, where: \begin{eqnarray} R^2(t,m-M)=[n_{2,\rm th}(t)-n_{2,\rm obs}(t,m-M)]^2 \nonumber \\ +[n_{3,\rm th}(t)-n_{3,\rm obs}(t,m-M)]^2 \end{eqnarray} where $t$ is the age, $m-M$ the distance modulus, $n_{i,\rm obs}$ the normalized ratio $N_{i,\rm obs}/N_{1,\rm obs}$, $N_{i,\rm obs}$ the number of stars in the $i$th observational bin and $n_{i,\rm th}$ the corresponding theoretical ratio. The values of $n_{i,\rm th}$ are only functions of the age of the GC, since the shape of the theoretical LF depends only on the age (for a given chemical composition), while the values of $n_{i,\rm obs}$ are also functions of the distance modulus, because the observational LF depends on the distance modulus, if the bins are defined in absolute magnitudes. If the set of bins is not optimal, the contour plot of $R$ shows only open lines, which define a relation between age and distance modulus. Once the set of bins is optimized, the contour plot also shows closed lines, that define both the age and the distance modulus of the GC at the same time. If the lines start to become closed only for $R\le0.1$, the age-distance modulus degeneracy is broken only if stellar counts are available with uncertainty smaller than 10\%. This is the reason why the method requires LFs with a large number of stars and excellent photometry. The optimization process, applied to M55 and M5, gave the following set of bins: \begin{eqnarray} M_{V,optimal,M55}=(t_{Gyr}-9.0)\times0.05 \nonumber \\ +[4.01,3.01,2.01,0.01]\;{\rm mag} \nonumber \\ M_{I,optimal,M5}=(t_{Gyr}-8.0)\times0.05 \nonumber \\ +[2.87,2.07,1.27,-3.13]\;{\rm mag} \nonumber \end{eqnarray} The optimal bins shift by $0.05$ mag/Gyr, as determined in our previous work (Padoan \& Jimenez 1996). This is an essential point in the attempt to obtain the contour plot of $R$. 
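The statistic and the optimal bins above can be sketched schematically as follows (illustrative only: in practice the theoretical ratios $n_{i,\rm th}$ come from model LFs and the observed ratios from the binned stellar counts; the helper names are ours):

```python
def R(t, dist_mod, n_th, n_obs):
    # Eq. (1): R^2 sums the squared residuals between theoretical and
    # observed normalized counts in the two sub-giant bins (i = 2, 3).
    # n_th(i, t) and n_obs(i, t, dist_mod) are user-supplied callables.
    r2 = sum((n_th(i, t) - n_obs(i, t, dist_mod)) ** 2 for i in (2, 3))
    return r2 ** 0.5

def optimal_bins_M55(t_gyr):
    # Optimal V-band bin magnitudes for M55 quoted in the text; the
    # bins shift by 0.05 mag per Gyr relative to the 9 Gyr reference.
    return [(t_gyr - 9.0) * 0.05 + m for m in (4.01, 3.01, 2.01, 0.01)]

# At the reference age the bins sit at the quoted magnitudes:
print(optimal_bins_M55(9.0))   # [4.01, 3.01, 2.01, 0.01]
```

Mapping $R$ over a grid of $(t, m-M)$ values then yields the contour plots discussed below.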
Note that for M5 we could use two bins narrower than 1.0 mag, in order to improve the sensitivity of the method, thanks to the very large number of stars in the bins and to the good quality of the photometry. Our results for M55 are shown in Fig.~1, and for M5 in Fig.~2. In both cases, closed contour lines are obtained for $R\le 0.1$, and the uncertainty in age and $m-M$, obtained at any given level of $R$, is comparable for the two clusters, and similar to what we previously predicted with artificial LFs (Padoan \& Jimenez 1996). Given the photometric uncertainty and the statistical uncertainty in the stellar counts ($1/\sqrt{N}$, where $N$ is the number of stars), we estimate a global uncertainty of 6\% for the case of M55, and of 4\% for the case of M5. Entering Fig.~1 and Fig.~2 with $R=0.06$ and $R=0.04$ respectively, one gets the results listed in Table~1. \section{Discussion and Conclusions} The results listed in Table~1 show that the age and the distance modulus of M55 are in good agreement with previous determinations by Mandushev et al. (1996) and Alcaino et al. (1992). \begin{table} \begin{center} \begin{tabular}{ccc} & M5 & M55 \\ \hline\hline age (Gyr) & $ 11.1 \pm 0.7$ & $11.8 \pm 1.5$ \\ $m-M$ (mag) & $ 14.49 \pm 0.06$ & $14.13 \pm 0.11$ \\ \hline \end{tabular} \caption{The table gives the values for the age and distance modulus for M5 and M55. These values have been determined {\it simultaneously} using the luminosity function method described in the text.} \end{center} \end{table} In the case of M5 the results agree with the conclusions of Sandquist et al. They estimate an age of $13.5 \pm1$ Gyr, for $[Fe/H]=-1.17$, and they state that the age would be 11.5 Gyr, for $[Fe/H]=-1.4$. We use $[Fe/H]=-1.3$, and get an age of $11.1\pm0.7$ Gyr. They also estimate $m-M=14.50\pm0.07$ mag for $[Fe/H]=-1.17$, and $m-M=14.41\pm0.07$ mag, for $[Fe/H]=-1.4$, using the sub-dwarf fitting of the main sequence. We get $m-M=14.49\pm0.06$ mag, for $[Fe/H]=-1.3$. 
Note that in Padoan \& Jimenez (1996) we estimated a variation of 0.02 mag in $m-M$ for a shift of 0.1 in metallicity; so we would predict $m-M=14.47\pm0.06$ mag for $[Fe/H]=-1.4$. \begin{figure} \centering \leavevmode \epsfxsize=1.0 \columnwidth \epsfbox{fig1.eps} \caption[]{The figure shows the contour plots of $R(t,m-M)$ (see text) used to determine simultaneously the distance modulus and age of M55. Notice that the contours close around a central value, showing that the method works quite well in breaking the age-distance degeneracy.} \end{figure} The distance modulus measured with the LF method is therefore in excellent agreement with the distance modulus determined with the sub-dwarf fitting. The uncertainty of our estimates is very small ($\pm0.06$ mag), and no assumption on the age is required. The LF method is superior to the main sequence turn-off (MSTO) method (Chaboyer, Demarque \& Sarajedini 1996) for determining the absolute ages of globular clusters, because it is not affected by the three largest sources of theoretical uncertainty in the MSTO method, that is to say the determination of the value of the mixing length parameter, the morphology of the MSTO and the color-$T_{\rm eff}$ calibration (see Jimenez et al. 1996 for a detailed discussion of the main uncertainties in the MSTO method). Furthermore, the MSTO method needs to know the distance in order to determine the age, and it is unable to break this degeneracy. The absolute ages determined in this work for M55 and M5 seem to indicate that the oldest GCs are not older than 14 Gyr. The LF method is a very powerful tool to investigate relative ages, since most uncertainties of stellar evolution theories are in that case avoided. From the comparison of the ages of M5 and M55 we can conclude that the age of the two GCs is not significantly different. 
We conclude by remarking that most methods to determine ages and distance moduli of GCs share two common problems: some degree of dependence of age on distance modulus (or vice-versa), and a somewhat fuzzy procedure to estimate the uncertainty of the final result. Our LF method, instead, gives constraints for both age and distance modulus independently, and estimates both most probable values and uncertainties in a straightforward way. \begin{figure} \centering \leavevmode \epsfxsize=1.0 \columnwidth \epsfbox{fig2.eps} \caption[]{The same as before but for M5. The estimated uncertainty of 4\% is marked with dashed lines. } \end{figure} On the basis of the present work, we think that very high quality data for GCs, together with the LF method, may shed new light on the problems of the age of the oldest stars in the Universe and the formation of the Galaxy. \section*{acknowledgements} We are grateful to S. Desidera \& S. Ortolani for providing us with unpublished data of M55.
\section{Executive summary} In this white paper we summarise the construction and applications of lattice theories possessing exact supersymmetry, focusing in particular on ${\cal N}=4$ Yang-Mills theory. Lattice formulations of this theory allow for numerical simulation of the theory at strong coupling and hence give a window on non-perturbative physics away from the planar limit. This has important applications to our understanding of holographic approaches to quantum gravity and conformal field theories. In particular: \begin{itemize} \item We find that quantities scale with the 't Hooft coupling $\lambda$ in a way that is consistent with holography. In particular, Wilson loops scale as $\exp(- c \sqrt{\lambda})$, where $c$ is some constant. \item Success in this regime opens the door to other interesting studies at strong coupling and away from the planar limit including tests of S-duality, computations of the dimension of the Konishi operator and calculations of string loop corrections to classical supergravity. \item Such calculations can also help bridge to other theoretical efforts such as the scattering amplitudes and conformal bootstrap programs. \end{itemize} \section{Review} In recent years a new approach to the problem of formulating supersymmetric lattice theories has been developed, with the result that a certain class of supersymmetric theories can be discretized while preserving one or more supercharges at non-zero lattice spacing. These theories can be derived in two independent ways: by exploiting orbifold and deconstruction techniques or by careful discretization of a topologically twisted formulation of the target supersymmetric theory \cite{Catterall:2009it}~\footnote{Actually the orbifold methods only yield Yang-Mills theories while the topological constructions are also capable of describing Wess-Zumino models}. 
In the case of ${\cal N}=4$ Yang-Mills the resultant lattice action is \begin{equation} S=\frac{N}{4\lambda} {\cal Q} \sum_{x}{\rm Tr\;} \left(\chi_{ab}{\cal F}_{ab}+\eta {\overline{\cal D}}_a{\cal U}_a+\frac{1}{2}\eta d+ \kappa\,\eta\,\left({\rm Re\,det}\left[{\cal U}_a(x)\right]-1\right)\right)+S_{\rm closed} \end{equation} where the lattice field strength is \begin{equation}{\cal F}_{ab}(x)={\cal U}_a(x){\cal U}_b(x+\hat{a})-{\cal U}_b(x){\cal U}_a(x+\hat{b})\end{equation} where ${\cal U}_a(x)$ denotes a {\it complexified} gauge field living on the lattice link running from $x\to x+\hat{a}$ and where $\hat{a}$ denotes one of the five basis vectors of an underlying $A_4^*$ lattice. Similarly \begin{equation}{\overline{\cal D}}_a {\cal U}_a={\cal U}_a(x)\cUb_a(x)-\cUb_a(x-\hat{a}){\cal U}_a(x-\hat{a}).\end{equation} The five fermion fields $\psi_a$, being superpartners of the gauge fields, live on the corresponding links, while the ten fermion fields $\chi_{ab}(x)$ are associated with new face links running from $x+\hat{a}+\hat{b}\to x$. The scalar fermion $\eta(x)$ lives on the lattice site $x$ and is associated with a conserved supercharge ${\cal Q}$ which acts on the fields in the following way\footnote{One of the things that is learned from the orbifold construction is that the number of conserved supercharges is equal to the number of site fermions.} \begin{align} {\cal Q}\, {\cal U}_a&\to \psi_a\nonumber\\ {\cal Q}\, \psi_a&\to0\nonumber\\ {\cal Q}\, \eta&\to d\nonumber\\ {\cal Q}\, d&\to 0\nonumber\\ {\cal Q}\, \chi_{ab}&\to {\overline{\cal F}}_{ab}\nonumber\\ {\cal Q}\, \cUb_a&\to 0 \end{align} Notice that ${\cal Q}^2=0$, which guarantees the supersymmetric invariance of the first part of the lattice action. The auxiliary site field $d(x)$ is needed for nilpotency of ${\cal Q}$ off-shell. 
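Indeed, nilpotency can be checked field by field from these transformation rules: \begin{equation} {\cal Q}^2\,{\cal U}_a={\cal Q}\,\psi_a=0,\qquad {\cal Q}^2\,\eta={\cal Q}\,d=0,\qquad {\cal Q}^2\,\chi_{ab}={\cal Q}\,{\overline{\cal F}}_{ab}=0, \end{equation} where the last equality holds because ${\overline{\cal F}}_{ab}$ is constructed entirely from the fields $\cUb_a$, which are annihilated by ${\cal Q}$.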
The second term $S_{\rm closed}$ is given by \begin{equation} S_{\rm closed}=-\frac{N}{16\lambda}\sum_x {\rm Tr\;} \epsilon_{abcde}\chi_{ab}{\overline{\cal D}}_c\chi_{de}\end{equation} where the covariant difference operator acting on the fermion field $\chi_{de}$ takes the form \begin{equation} {\overline{\cal D}}_c\chi_{de}(x)=\cUb_c(x-\hat{c})\chi_{de}(x+\hat{a}+\hat{b})-\chi_{de}(x-\hat{d}-\hat{e})\cUb_c(x+\hat{a}+\hat{b})\end{equation} To retain exact supersymmetry all fields reside in the algebra of the gauge group -- taking their values in the adjoint representation of $U(N)$: $f(x)=\sum_{A=1}^{N^2} T^A f^A(x)$ with ${\rm Tr\;} (T^A T^B)=-\delta^{AB}$. The latter term can be shown to be supersymmetric via an exact lattice Bianchi identity $\epsilon_{abcde}{\overline{\cal D}}_c \chi_{de}=0$. This action is invariant under ${\cal Q}$, $SU(N)$ lattice gauge invariance and the $S^5$ point group symmetry of the $A_4^*$ lattice.\footnote{Notice that there are five lattice vectors, ${\hat a} = {\hat 1}, \ldots, {\hat 5}$, corresponding to the nearest-neighbor links of the $A_4^*$ lattice, and the fact that we have five complexified ``gauge fields.'' The $A_4^*$ lattice is four-dimensional, in spite of having five primitive vectors.} Carrying out the ${\cal Q}$ variation and integrating out the auxiliary field $d$ we obtain the supersymmetric lattice action $S=S_b+S_f$ where \begin{equation} S_b=\frac{N}{4 \lambda} \sum_x {\rm Tr\;} \left( {\cal F}_{ab} {\overline{\cal F}}_{ab} \right) + \frac{1}{2} {\rm Tr\;} \left( {\overline{\cal D}}_a {\cal U}_a-\kappa \left[ {\rm Re \, det} \left[{\cal U}_a(x)\right]-1\right]^2\right)\end{equation} and \begin{equation} S_f=-\frac{N}{4\lambda}\sum_x \left({\rm Tr\;}\chi_{ab}{\cal D}_{\left[a\right.}\psi_{\left. 
b\right]}+ \eta {\overline{\cal D}}_a\psi_a-\frac{\kappa}{2}{\rm Tr\;}(\eta){\rm det\,} ({\cal U}_a(x)){\rm Tr\;}({\cal U}_a^{-1}(x)\psi_a(x))\right)+S_{\rm closed}\end{equation} This action can be obtained by discretization of the continuum Marcus or GL twist of ${\cal N}=4$ Yang-Mills, which in flat space is completely equivalent to the untwisted theory. In the continuum the twist is done as a prelude to the construction of a topological quantum field theory, but in the context of lattice supersymmetry it is merely used as a change of variables that allows for discretization while preserving a single exact supersymmetry. The twisting removes the spinors from the theory, replacing them by the antisymmetric tensor fields $\eta,\psi_a,\chi_{ab}$ which appear as components of a K\"{a}hler-Dirac field. The latter is equivalent at zero coupling to a (reduced) staggered field and hence describes four physical Majorana fermions in the continuum limit -- as required for ${\cal N}=4$ Yang-Mills. The twisting procedure also packs the six scalar fields of the continuum theory together with the four gauge fields into five complex gauge fields corresponding to the lattice fields ${\cal U}_a$. The coupling $\kappa$ is needed to project the theory from $U(N)$ to $SU(N)$ and thereby evade instability issues that otherwise would arise at strong coupling. General arguments have been put forward that the theory should approach the continuum ${\cal N}=4$ theory after tuning a single marginal operator \cite{Catterall:2013roa}. 
The theory can be simulated using the same algorithms that are employed for lattice QCD \cite{Catterall:2011pd,Catterall:2012yq,Catterall:2014vka}.~\footnote{The theory does not appear to suffer from a sign problem although the exact reasons for this are not well understood \cite{Catterall:2020lsi}.} It has also been used to explore the physics of black holes and gauge-gravity duality in lower dimensions~ \cite{Anagnostopoulos:2007fw,Hanada:2008gy,Catterall:2008yz,Catterall:2009xn,Catterall:2010fx,Hanada:2016zxj,Berkowitz:2016jlq,Catterall:2017lub,Rinaldi:2017mjl, Catterall:2020nmn}. There is one final wrinkle that needs to be mentioned. To regulate the flat directions of the theory in simulations it is necessary to add a soft supersymmetry breaking term of the form \begin{equation} S_{\rm mass}=\mu^2\sum_x {\rm Tr\;}\left(\cUb_a(x){\cal U}_a(x)-I\right)^2\end{equation} While this breaks the exact supersymmetry softly, all counterterms induced by this breaking have couplings that are multiplicative in $\mu^2$ and hence vanish as $\mu^2\to 0$. \section{Conformal invariance and holography} ${\cal N}=4$ Yang-Mills is thought to be a non-trivial conformal field theory for any value of the 't Hooft coupling. Simulations are consistent with this and show a single-phase theory with vanishing string tension. Furthermore, the theory can be solved in the planar limit $N\to\infty$ and exhibits a non-trivial dependence on the 't Hooft coupling $\lambda$. Specifically, circular supersymmetric Wilson loops $W_{\rm susy}$ in the planar strong coupling limit are independent of size and depend only on $\sqrt{\lambda}$ \cite{Drukker:2000rr,Maldacena:1998im}: \[ \ln {W_{\rm susy}} = {\rm const} \sqrt{\lambda} \] This result was first derived by exploiting holography to relate this Yang-Mills theory to classical supergravity in five dimensional $AdS$ space. 
The characteristic $\sqrt{\lambda}$ dependence can also be seen in the results of numerical simulations at strong coupling {\it even for small numbers of colors} - see fig.~\ref{loops} which plots the logarithm of the square lattice Wilson loop constructed from ${\cal U}_a$ as a function of $\sqrt{\lambda}$ for $N=2$. \begin{figure}[htbp] \centering \includegraphics[width=0.75\textwidth]{wloop_l12_b0.025.pdf} \caption{\label{loops}Supersymmetric $n\times n$ Wilson loops on $12^4$ lattice at $\mu=0.025$ } \end{figure} The dependence on loop size $R$ reflects the presence of a constant perimeter term in the static potential arising from the (static) quark mass \cite{Erickson:2000af}. Indeed if this is subtracted out by normalizing the Wilson loops by appropriate powers of the Polyakov line one obtains the plot in fig.~\ref{wilson6} which exhibits both an insensitivity to loop size and also the $\sqrt{\lambda}$ behavior expected from holography. \begin{figure}[htbp] \centering \includegraphics[width=0.75\textwidth]{wloop_l12_6v3.pdf} \caption{\label{wilson6}Renormalized supersymmetric $6\times 6$ and $3\times 3$ Wilson loops on $12^4$ lattice at $\mu=0.025$ } \end{figure} The strange $\sqrt{\lambda}$ dependence {\it cannot} be seen in perturbation theory and this result is a very non-trivial test of the correctness of the lattice approach in a non-perturbative regime. \section{Future Directions - executive summary} Supersymmetric lattice actions can be formulated which conserve one or more continuum supersymmetries and flow to the continuum theory with minimal tuning as the lattice spacing is sent to zero. One of the most interesting examples that has been studied is ${\cal N}=4$ super Yang-Mills. Results that have been obtained so far are consistent with a single conformally invariant phase for any value of the 't Hooft coupling and agree with holographic predictions for Wilson loops even for small numbers of colors -- an unexpected and non-trivial result. 
Future work will focus on a variety of outstanding issues: \begin{itemize} \item Look for precise numerical agreement of the lattice and continuum results for supersymmetric Wilson loops in the planar limit at strong coupling. \item Explore whether fine tuning is indeed needed to restore the remaining supersymmetries in the continuum limit. \item Compute the Konishi operator and supergravity operator scaling dimensions that characterize the conformal behavior of the theory for arbitrary numbers of colors, comparing with bootstrap and planar calculations. Here it is important to take into account the impact of discretization on the $SU(4)_R \simeq SO(6)$ flavor symmetry. \item Search for evidence of S-duality in the lattice theory by measuring gauge boson and monopole masses in the Coulomb phase of the lattice theory. Here one has a precise, BPS-protected formula to compare to. \end{itemize} \acknowledgments This work was supported by the US Department of Energy (DOE), Office of Science, Office of High Energy Physics, under Award Numbers {DE-SC0009998} (SC) and {DE-SC0013496} (JG). Numerical calculations were carried out on the DOE-funded USQCD facilities at Fermilab. \bibliographystyle{JHEP3}
\section{Introduction} A perceptron is a mathematical model inspired by signal processing between neural cells that are assumed to be in either of the two states `active' or `resting'. It consists of $n$ input nodes called \textit{neurons} with values $x_k \in \{-1,1\}, \; k =1,...,n $, that feed signals into a single output neuron $y$ (Figure \ref{figure1} left). Each input neuron is connected to the output neuron with a certain strength denoted by a weight parameter $w_k \in [-1,1)$, and the input-output relation is governed by the activation function \begin{equation} y = \left\{ \begin{array}{l l} 1, & \quad \mathrm{if} \; \sum\limits_{k=1}^n w_{k} x_k \geq 0,\\ -1, & \quad \mathrm{else.} \end{array} \right. \label{af} \end{equation} In other words, the net input $h(\vec{w},\vec{x})= \sum_{k=1}^n w_{k} x_k$ decides if the step function activates the output neuron\footnote{Another frequent class of perceptrons uses values $x_k \in [-1,1], \; k =1,...,n $, and the logistic sigmoid activation function $y = \mathrm{sgm}(\sum\limits_{k=1}^n w_{k} x_k + \theta_y)$.}. With their introduction by Rosenblatt in 1958 \cite{rosenblatt58}, perceptrons were a milestone in both the fields of neuroscience and artificial intelligence. Just like biological neural networks, perceptrons can learn an input-output function from examples by subsequently initialising $x_1,...,x_n$ with a number of example inputs, comparing the resulting outputs with the target outputs and adjusting the weights accordingly \cite{rojas96}. The high expectations of their potential for image classification tasks were disappointed when a study by Minsky and Papert in 1969 \cite{minsky69} revealed that perceptrons can only classify linearly separable functions, i.e., there has to be a hyperplane in phase space that divides the input vectors according to their respective outputs (Figure \ref{figure2}). An important example of a non-separable function is the $\mathrm{XOR}$ function.
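As a concrete illustration, the step activation function above can be sketched in a few lines of Python (a minimal sketch; the weight and input values are illustrative, and numpy is assumed available):

```python
import numpy as np

def perceptron_output(w, x):
    """Step activation: y = 1 if the net input h = sum_k w_k * x_k >= 0, else -1."""
    h = np.dot(w, x)  # net input h(w, x)
    return 1 if h >= 0 else -1

# Illustrative weights and inputs (neuron values are in {-1, 1})
w = np.array([0.5, -0.3])
assert perceptron_output(w, np.array([1, -1])) == 1    # h = 0.8
assert perceptron_output(w, np.array([-1, 1])) == -1   # h = -0.8
```

Replacing the hand-picked weights with learned ones is exactly what the training procedure discussed later does.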
The combination of several layers of perceptrons into artificial neural networks (also called multi-layer perceptrons, see Figure \ref{figure1} right) later in the 1980s elegantly overcame this shortfall, and neural networks are to this day an exciting field of research with growing applications in the IT industry\footnote{Consider for example the latest developments in Google's image recognition algorithms \cite{lequoc13}.}.\\ For two decades, quantum information theory \cite{nielsen10, plenio01} has offered a fruitful extension to computer science by investigating how quantum systems and their specific laws of nature can be exploited in order to process information efficiently \cite{aharonov01, grover96}. Recent efforts investigate methods of artificial intelligence and machine learning from a quantum computational perspective, including the `quest for a quantum neural network' \cite{schuld14b}. Some approaches try to find a quantum equivalent for a perceptron, hoping to construct the building block for a more complex quantum neural network \cite{altaisky01,fei03,siomau14}. A relatively influential proposal to introduce a quantum perceptron is Altaisky's \cite{altaisky01} direct translation of Eq.~(\ref{af}) into the formalism of quantum physics, namely $\ket{y} = \hat{F}\sum_{k=1}^n \hat{w}_{k} \ket{x_k}$, where the neurons $y,x_1,...,x_n$ are replaced by qubits $\ket{y},\ket{x_1},...,\ket{x_n}$ and the weights $w_k$ become unitary operators $\hat{w}_k$. The step activation function is replaced by another unitary operator $\hat{F}$. Unfortunately, this proposal has not been extended to a full neural network model. A significant challenge is, for example, the learning procedure, since the suggested rule inspired by classical learning, $\hat{w}_{k}^{[t+1]} = \hat{w}_{k}^{[t]} + \eta(\ket{d} - \ket{y^{[t]}}) \bra{x_k}$ with target output $\ket{d}$ and learning rate $\eta \in [0,1]$, does not maintain the unitarity condition for the operators $\hat{w}_{k}$.
Other authors who pick up Altaisky's idea do not provide a solution to this severe violation of quantum theory \cite{fei03, sagheer13, zhou07} (or propose a corresponding open quantum systems framework, in which the operators still have to remain completely positive and trace-non-increasing). Further models of quantum perceptrons can be found in the literature on quantum neural networks, but they often remain vague in terms of the actual implementations \cite{gupta01, lewenstein94}, or do not apply quantum mechanics in a rigorous way \cite{kouda05, purushothaman97}. An interesting exception is Elizabeth Behrman's work introducing a perceptron as the time evolution of a single quantum object \cite{behrman00}, as well as Ricks and Ventura's ideas towards a superposition-based learning procedure built on Grover's search algorithm \cite{ricks03}.\\ This contribution introduces a unitary quantum circuit that, with only a small number of extra resources, simulates the nonlinear input-output function of a classical perceptron as given in Eq.~(\ref{af}). This quantum perceptron model has a high probability of reproducing the classical result upon measurement and can therefore be used as a classification device in quantum learning algorithms. The computational resources needed are comparable to those of the classical model, but the advantage lies in the fact that a quantum perceptron can process the entire learning set as a superposition, opening up new strategies for efficient learning. It can thus be seen as a building block of a more complex quantum neural network that harnesses the advantages of quantum information processing. \\ \begin{figure}[t] \centering \includegraphics[width=0.25\textwidth]{figure1a.pdf} \hspace{2cm} \includegraphics[width=0.32\textwidth]{figure1b.pdf} \caption{(Colour online) Left: Illustration of a perceptron model with input neurons $x_k \in \{-1,1\}$, weights $w_k \in [-1,1)$, $ k =1,...,n $, and output neuron $y \in \{-1,1\}$.
Right: Perceptrons are the basic unit of artificial neural networks (here a feed-forward neural network). The network has an input layer, one hidden layer and an output layer, which are updated in this order. Every node or neuron computes its value according to the perceptron activation function Eq.~(\ref{af}), so that the network maps an input ($x_1,...,x_4$) to an output ($o_1,...,o_4$). } \label{figure1} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.25\textwidth]{figure2.pdf} \caption{A dataset is linearly separable if its inputs can be divided according to their outputs by a hyperplane in phase space. } \label{figure2} \end{figure} \section{The quantum perceptron algorithm} The quantum perceptron circuit is based on the idea of writing the normalised net input $\bar{h}(\vec{w},\vec{x})= \varphi \in [0,1)$ into the phase of a quantum state $\ket{x_1,...,x_n}$, and applying the phase estimation algorithm with a precision of $\tau$ binary digits. This procedure returns a quantum state $\ket{J_1,...,J_{\tau}}$ which is the binary-fraction representation of $\theta$ (or, equivalently, the binary integer representation of $j$ in $\theta = \frac{j}{2^{\tau}}$), which is in turn a good approximation for $\varphi$. More precisely, the output encodes the phase via $\theta = J_1 \frac{1}{2} + ... + J_{\tau} \frac{1}{2^{\tau}}$ (or $j = J_1 2^{\tau-1} + ... + J_{\tau} 2^0$) \cite{nielsen10}. The first digit of the output state of the quantum phase estimation algorithm, $J_1$, thus indicates if $\theta$ (and therefore with a good chance also $\varphi$) is at least $\frac{1}{2}$.
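The binary-fraction readout $\theta = J_1 \frac{1}{2} + ... + J_{\tau} \frac{1}{2^{\tau}}$ can be made concrete with a tiny helper function (a sketch; the bit values below are illustrative):

```python
def phase_from_bits(bits):
    """Binary-fraction phase: theta = J_1/2 + J_2/4 + ... + J_tau/2^tau."""
    return sum(J / 2 ** (m + 1) for m, J in enumerate(bits))

theta = phase_from_bits([1, 0, 1])   # 1/2 + 0 + 1/8 = 0.625
assert abs(theta - 0.625) < 1e-12
# J_1 = 1 exactly when theta >= 1/2, which is what the classification uses
assert phase_from_bits([0, 1, 1]) < 0.5
```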
The quantum perceptron consequently maps $(\vec{x}, \vec{w}) \rightarrow J_1$, which, as we will see below, reproduces the step activation function of a classical perceptron with a high probability.\\ To give a more detailed impression of the quantum perceptron circuit (see also Figure \ref{figure3}), we assume an initial state $\ket{0,...,0}\ket{x_1,...,x_n} = \ket{0,...,0}\ket{\psi_0} $ composed of a register of $\tau$ qubits in state $0$ as well as an input register $\ket{\psi_0}$ with $n$ qubits encoding the binary states of the input neurons (note that in the quantum model, the $-1$ value is represented by a $0$ state). Hadamard transformations on the $\tau$ zeroes in the first register lead to the superposition $\frac{1}{\sqrt{2^{\tau}}} \sum_{j=0}^{2^{\tau}-1} \ket{J} \ket{x_1,...,x_n}$, in which $J$ is the binary representation of the integer $j$, and $\ket{J} = \ket{J_1,...,J_{\tau}}$. We apply an oracle $\mathcal{O}$ that, controlled on $\ket{J}$, applies $j$ copies of a unitary transformation parameterised with the weights to the input register, \begin{equation}\ket{J} \ket{\psi_0} \xrightarrow{\mathcal{O}} \ket{J} U(\vec{w})^j \ket{\psi_0}. \label{step1}\end{equation} The unitary $U$ writes the normalised input $\varphi$ into the phase of the quantum state. This can be done using the decomposition into single-qubit operators $U(\vec{w}) = U_n(w_n)...U_2(w_2)U_1(w_1)U_0$ with each \[U_k(w_k) = \begin{pmatrix} e^{-2\pi i w_k \Delta\phi}&0\\ 0&e^{2\pi i w_k \Delta\phi} \end{pmatrix},\] acting on the input register's qubit $x_k$, and $\Delta{\phi} = \frac{1}{2n}$. $U_0$ adds $\frac{1}{2}$ to the phase, so that the resulting phase of the state $\ket{J} \ket{x_1,...,x_n}$ is given by $\exp(2\pi i (\Delta{\phi}\, h(\vec{w},\vec{x}) + 0.5)) = \exp(2\pi i \varphi)$.
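As a classical sanity check of this phase encoding (not a simulation of the quantum circuit itself), the phase that $U(\vec{w})$ writes onto a computational basis state can be computed directly; the weights and bit string below are illustrative:

```python
import numpy as np

def encoded_phase(w, x_bits):
    """Phase (as a fraction of 2*pi) written onto |x_1...x_n> by U(w) = U_n...U_1 U_0.

    Each U_k contributes -w_k * dphi for x_k = 0 and +w_k * dphi for x_k = 1;
    U_0 shifts the phase globally by 1/2."""
    n = len(w)
    dphi = 1.0 / (2 * n)
    signs = np.where(np.asarray(x_bits) == 1, 1.0, -1.0)
    return 0.5 + dphi * np.dot(w, signs)  # the normalised net input phi

w = np.array([0.8, -0.4, 0.1])           # illustrative weights
phi = encoded_phase(w, [1, 0, 1])        # qubit value 0 encodes the neuron value -1
assert abs(phi - (0.5 + 1.3 / 6)) < 1e-12
```

Since $|h| \leq n$ for weights in $[-1,1)$, the factor $\Delta\phi = \frac{1}{2n}$ together with the shift of $\frac{1}{2}$ keeps the encoded phase inside $[0,1)$.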
For learning algorithms it might be useful to work with parameters represented in an additional register of qubits instead of parametrised unitaries, and below we will give a corresponding variation of the quantum perceptron algorithm. \\ The next step is to apply the inverse quantum Fourier transform \cite{watrous06, nielsen10}, $\mathrm{QFT}^{-1}$, resulting in \begin{equation*} \frac{1}{\sqrt{2^{\tau}}} \sum \limits_{j=0}^{2^{\tau}-1} e^{2\pi i j \varphi} \ket{J} \ket{\psi_0} \xrightarrow{\mathrm{QFT}^{-1}} \sum \limits_{j=0}^{2^{\tau}-1} \left( \frac{1}{2^{\tau}} \sum \limits_{k=0}^{2^{\tau}-1} e^{2\pi i k (\varphi - \frac{j}{2^{\tau}})}\right) \ket{J} \ket{\psi_0}. \end{equation*} In case the phase can be exactly expressed as $\varphi = \frac{j}{2^{\tau}}$ for an integer $j$, the amplitude of all states except $\ket{J}$ is zero and the algorithm simply results in $\ket{J}$. For cases $\varphi \neq \frac{j}{2^{\tau}}$, it can be shown that in order to obtain $\varphi$ accurately up to $m$ bits of precision with a success probability of $1-\epsilon$, one has to choose $\tau = m + \lceil \log_2{(2+\frac{1}{2 \epsilon})}\rceil $ \cite{nielsen10}. Since we are only interested in the value of the first qubit, we would naively choose a precision of only $\tau = 2$ to obtain an $85\%$ probability of success. This would allow us to compute the quantum Fourier transform with minimal resources. \\ \begin{figure}[t] \centering \includegraphics[width=0.55\textwidth]{figure3.pdf} \caption{Quantum circuit for the quantum perceptron model. See also \cite{nielsen10}.} \label{figure3} \end{figure} However, it is important to note that the required size of $\tau$ can depend on the number of neurons $n$. To show this, let us assume a random distribution of binary values for the entries of $\vec{x}$ as well as random real values in $[-1,1)$ for $\vec{w}$.
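This random-sampling assumption can be probed numerically; the following numpy sketch (sample sizes and seed are arbitrary) shows how the spread of the normalised net input shrinks as $n$ grows:

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility (arbitrary choice)

def hbar_samples(n, trials=10000):
    """Samples of the normalised net input 0.5 + h(w, x)/(2n)
    for random x in {-1, 1}^n and random w in [-1, 1)^n."""
    x = rng.choice([-1.0, 1.0], size=(trials, n))
    w = rng.uniform(-1.0, 1.0, size=(trials, n))
    return 0.5 + np.sum(w * x, axis=1) / (2 * n)

# The spread around 1/2 shrinks roughly like 1/sqrt(n),
# so a larger n demands a finer phase resolution tau
assert hbar_samples(1000).std() < hbar_samples(10).std()
```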
The higher the number of neurons $n$, the more sharply the probability distribution of $\bar{h}(\vec{w},\vec{x})$ peaks around the average value of $\frac{1}{2}$ (Figure \ref{figure4}). This means that a perceptron unit has to have a higher resolution around this value, and we consequently need to increase the precision parameter $\tau$. Simulations show that for $n=10$ we need $\tau \geq 4$ to get a probability of more than $85\%$ of reproducing the classical perceptron's result, while $n=100$ requires a precision of $\tau \geq 6$ and $n=1000$ a precision of $\tau \geq 8$. To quantify the relation between the number of binary digits $\tau$ and the number of neurons $n$, we assume (consistent with the simulations) that the standard deviation of the distribution of values for $\bar{h}(\vec{w},\vec{x})$ scales as $\sigma \sim \frac{1}{\sqrt{n}}$. We require a precision $\tau$ that allows for a resolution on the order (e.g., a tenth) of the standard deviation, so that $\sigma \approx \frac{10}{2^{\tau}}$. The precision consequently scales as $\tau \sim \log{\sqrt{n}}$. Of course, these considerations are only true for random input variables and parameters, and we would expect a realistic case of a neural network to have its input values $\bar{h}(\vec{w},\vec{x})$ not necessarily distributed around $0.5$. But since the quantum perceptron might find application in the training of quantum neural networks, it is desirable that it can deal with almost random initial distributions over these values. It is therefore good news that the precision only grows logarithmically with the square root of the number of neurons. \\ \begin{figure}[t] \centering \includegraphics[width=0.54\textwidth]{figure4.pdf} \caption{(Colour online) Histogram of the distribution of values $\bar{h}(\vec{w},\vec{x})$ using random values for $\vec{w},\vec{x}$ with $10000$ data points. For $n=1000$ neurons, the distribution is much narrower in terms of the standard deviation $\sigma$ than for $n=10$.
The precision of the algorithm consequently has to increase with the number of neurons.} \label{figure4} \end{figure} The computational complexity of the quantum perceptron algorithm is comparable to the resources for the $n$ multiplications and the single $\mathrm{IF}$-operation needed to implement a classical perceptron, which are in $\mathcal{O}(n)$. The quantum algorithm up to the inverse quantum Fourier transform requires $\tau + (n+1) \sum_{k=1}^{2^{\tau}-1} k$ elementary quantum gates\footnote{We can consider a set of elementary gates consisting of single-qubit operations as well as the $\mathrm{CNOT}$ gate \cite{barenco95}.}. An efficient implementation of the inverse quantum Fourier transform requires $\frac{\tau (\tau+1)}{2} + 3\frac{\tau}{2}$ gates \cite{nielsen10}. Taking $\tau$ as a fixed number we end up with a complexity of $\mathcal{O}(n)$. If we assume the above relationship between $\tau$ and $n$ derived from random sampling of $\vec{w}$ and $\vec{x}$, we still obtain $\mathcal{O}(n\; \mathrm{log}^2(\sqrt{n}))$, which is not a serious increase. A major advantage of the quantum perceptron is that it can process an arbitrary number of input vectors in quantum parallel if they are presented as a superposition $\sum_i \ket{x_i}$. The computation results in a superposition of outputs $\sum_i \ket{y_i}$ from which information can be extracted via quantum measurements, or which can be further processed, for example in superposition-based learning algorithms. The application of the quantum perceptron model will be discussed below. \\ As stated earlier, it can be useful to introduce a slight variation of the quantum perceptron algorithm, in which, instead of parametrised operators, the weights $w_k, k=1,...,n$ are written into (and read out from) an extra quantum register. The initial state $\ket{\psi_0}$ in Eq.
(\ref{step1}) thus becomes \[\ket{x_1,...,x_n; W^{(1)}_1,...,W^{(\delta)}_1 , \hdots , W^{(1)}_{n},...,W^{(\delta)}_n} = \ket{\vec{x}; \vec{w}}.\] Consistent with the above, $W^{(m)}_k$ is the $m$th digit of the binary-fraction representation that expresses $w_k$ as $w_k = W^{(1)}_k \frac{1}{2} + ... + W^{(\delta)}_k \frac{1}{2^{\delta}}$ with a precision $\delta$. To write the normalised net input $\bar{h}(\vec{w},\vec{x})$ into the phase of the quantum state $\ket{\vec{x}; \vec{w}}$, one has to replace the parameterised operator $U(\vec{w})$ in Eq. (\ref{step1}) with $\tilde{U} = U_0 \prod_{k=1}^n \prod_{m=1}^{\delta} U_{ W^{(m)}_k, x_k} $, where $U_0$ again adds $\sfrac{1}{2}$ to the phase and we introduce the controlled two-qubit operator \[U_{W^{(m)}_k, x_k} = \begin{pmatrix} 1&0&0&0\\ 0&1&0&0\\ 0&0&e^{-2\pi i \Delta\phi \frac{1}{2^m}}&0\\ 0&0&0&e^{2\pi i \Delta\phi \frac{1}{2^m}} \end{pmatrix}.\] The $m$th bit $W_k^{(m)}$ of the binary representation of $w_k$ controls the operation of shifting the phase by $-\Delta\phi \frac{1}{2^m}$ (for $x_k = 0$) or $\Delta\phi \frac{1}{2^m}$ (for $x_k = 1$), using $\Delta\phi$ from above. Note that this implementation restricts the weights to $[0,1)$, but a sign for each parameter can be stored in an additional qubit, and its inverse $\mathrm{XOR}$ with $x_k$ can be used to control the sign of the phase shift.\\ \section{Application in quantum learning algorithms} As mentioned before, perceptrons can be trained to compute a desired input-output relation by iteratively adjusting the weights when presented with training data (see Figure \ref{figure5}). The training data set $\mathcal{T} = \{(\vec{x}^p,d^p)\}_{p=1,...,P}$ consists of examples of input vectors $\vec{x}^p$ and their respective desired outputs $d^p$. The actual output $y^p$ is calculated for a randomly selected vector $\vec{x}^p$ from this training set, using the current weight vector $\vec{w}$.
The weights are adjusted according to the distance between $d^p$ and $y^p$, \begin{equation} \vec{w}'=\vec{w} + \eta (d^p-y^p)\vec{x}^p, \end{equation} where $\eta \in [0,1]$ is a given learning rate. By successively choosing random training examples, this procedure converges for linearly separable problems to a weight vector that classifies all training examples correctly and can process new inputs as learned from the training set \cite{rojas96}.\\ \begin{figure}[t] \centering \includegraphics[width=0.64\textwidth]{figure5.pdf} \caption{(Colour online) Illustration of one iteration in the classical perceptron training algorithm (the principle also holds for feed-forward neural networks constructed from perceptrons). A randomly selected training vector $\vec{x}^p = (x_1,...,x_n)^p$ from a training set is presented to the input layer (A) and the perceptron computes the actual output $y^p$ according to the perceptron activation function Eq.~(\ref{af}) using the current weights $w_1,...,w_n$ (B). The output is compared with the given desired output $d^p$ for $\vec{x}^p$ (C) and the weights are adjusted to decrease the distance between the two (D). The quantum perceptron model can be applied to execute step B in the quantum versions of this training algorithm. } \label{figure5} \end{figure} While training a perceptron simply consists of a number of classifications, each followed by a vector addition, training a feed-forward neural network consisting of many interconnected perceptron units (see Figure \ref{figure1} right) quickly grows in terms of computational complexity, since each output neuron indirectly depends on each weight of the previous layers.
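The classical training loop just described can be sketched as follows (a minimal illustration; the toy data set, learning rate and epoch count are all illustrative, and convergence is only guaranteed for linearly separable data):

```python
import numpy as np

def train_perceptron(samples, eta=0.5, epochs=20):
    """Perceptron learning rule: w' = w + eta * (d - y) * x."""
    w = np.zeros(len(samples[0][0]))
    for _ in range(epochs):
        for x, d in samples:
            y = 1 if np.dot(w, x) >= 0 else -1            # current output
            w = w + eta * (d - y) * np.asarray(x, float)  # update step
    return w

# Toy linearly separable set: the output follows the sign of the first input
data = [([1, 1], 1), ([1, -1], 1), ([-1, 1], -1), ([-1, -1], -1)]
w = train_perceptron(data)
assert all((1 if np.dot(w, x) >= 0 else -1) == d for x, d in data)
```

Note that when $y = d$ the update vanishes, so the weights stop changing once every training example is classified correctly.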
The most well-known training algorithm for feed-forward neural networks is based on gradient descent \cite{rumelhart86} and changes the weights $w_{kl}$ between nodes $k$ and $l$ according to a rule very similar to that for the perceptron, $w'_{kl} = w_{kl} - \eta \frac{\partial \mathrm{E}(\vec{o}^p - \vec{d}^p)}{\partial w_{kl}}$, where $\mathrm{E}$ is an error function depending on the computed output $\vec{o}^p$ for a given input vector $\vec{x}^p$ of the training set and its target value $\vec{d}^p$. In other words, each weight is changed towards the steepest descent of an error function comparing the actual result with a target value. This procedure is called \textit{backpropagation} as it is executed from the last to the first layer. There have recently been major improvements thanks to methods for efficient pre-training \cite{hinton06}, but the learning phase remains computationally costly for the dimensions of commonly applied neural networks.\\ A central goal of quantum neural network research is to improve the computing time of the training phase of artificial neural networks through a clever exploitation of quantum effects. Several training methods have been investigated, for example using a Grover search in order to find the optimal weight vector \cite{ricks03}, or using the classical perceptron training method to adjust a quantum perceptron's weight parameters\footnote{As mentioned in the introduction, an unresolved problem is to ensure that the operators remain unitary (or completely positive and trace-non-increasing).} \cite{altaisky01, zhou07}. Even though mature quantum learning algorithms are still a subject of ongoing research, from these examples it seems to be essential to generate an equivalent to the classical quality measure $\vec{d}^p-\vec{o}^p$ for the current weight vector $\vec{w}$. For this purpose a quantum perceptron unit is needed which maps input vectors $\ket{x^p}$ onto outputs $\ket{y^p}$ that can be compared with the target output.
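To make the gradient-descent rule concrete, here is a deliberately simplified sketch that replaces the full network by a single linear output $o = \vec{w}\cdot\vec{x}$ with squared error $\mathrm{E} = \frac{1}{2}(o-d)^2$; the data and learning rate are illustrative, not taken from the paper:

```python
import numpy as np

def gradient_descent_step(w, x, d, eta=0.1):
    """One update w' = w - eta * dE/dw for the squared error
    E = 0.5 * (o - d)^2 of a linear output o = w.x
    (a single-unit stand-in for the full backpropagated network)."""
    o = np.dot(w, x)
    return w - eta * (o - d) * x  # (o - d) * x is dE/dw for this model

w, x = np.array([0.0, 0.0]), np.array([1.0, -1.0])
for _ in range(100):
    w = gradient_descent_step(w, x, d=1.0)
assert abs(np.dot(w, x) - 1.0) < 1e-6  # the output has converged to the target
```

In a multi-layer network the same idea applies, except that the partial derivatives for the hidden layers are obtained by propagating the error backwards layer by layer.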
\\ The quantum perceptron is a model that is able to calculate $\ket{y^p}$ equivalently to the classical model and with only very few resources. The difference to the classical model, however, is that it processes quantum information. This provides not only a missing building block for the existing learning schemes mentioned above, but also a basis on which to develop new quantum learning algorithms. For example, we currently investigate superposition-based learning algorithms, in which the training set is presented to the quantum perceptron as a superposition of feature vectors, and the quantum perceptron calculates the outputs in quantum parallel, which can be further processed for learning. Such a scheme would be independent of the size of the training set. \section{Conclusion} The quantum perceptron model presented here offers a general procedure to simulate the step function characteristic of a perceptron on a quantum computer, with an efficiency equivalent to that of the classical model. This fills a void in quantum neural network research, especially for quantum learning methods which rely on an equivalent to classical classification using quantum information. As a future outlook, the quantum perceptron model could be used to develop superposition-based learning schemes, in which a superposition of training vectors is processed in quantum parallel. This would be a valuable contribution to current explorations of quantum machine learning.\\ \section*{Acknowledgements} This work is based upon research supported by the South African Research Chair Initiative of the Department of Science and Technology and National Research Foundation.
\section{Introduction} The fruitful concept of the \keyword{maximal globally hyperbolic development} of Cauchy data was introduced in 1969 by Choquet-Bruhat and Geroch \cite{Choquet1969}. These solutions to Einstein's field equations have, in particular, the property of being uniquely determined (up to isometries) by the Cauchy data on a Cauchy surface. Sometimes, however, such a maximal globally hyperbolic spacetime can be extended. The extension is not globally hyperbolic and hence there is a \keyword{Cauchy horizon} whose topology and smoothness may in general be very complicated. A famous example is the Taub solution \cite{Taub51}, a two-parameter family of spatially homogeneous cosmological models with spatial $\ensuremath{\mathbb S^3}\xspace$-topology. This solution can be extended through smooth complete Cauchy horizons with $\ensuremath{\mathbb S^3}\xspace$-topology to the Taub-NUT solutions \cite{NUT63,Misner1963,MisnerTaub1969}. However, there are several non-equivalent extensions \cite{chrusciel93}. Moreover, there exist \emph{closed causal curves} in the extended regions, which is a violation of causality. These unexpected properties have raised the question of whether such pathological phenomena occur only under very special circumstances, like the high symmetry of the Taub solution, or whether quite general solutions in general relativity always suffer from such defects. The former alternative is proposed in Penrose's famous strong cosmic censorship conjecture \cite{Penrose69,moncrief81a,Chrusciel91a,Rendall05,Ringstrom09}, according to which the maximal globally hyperbolic development of ``generic'' Cauchy data is inextendible. This means that models like the Taub solution would belong to a (in some sense) negligibly small subset of ``non-generic spacetimes''. However, this hypothesis is far from being proven in the general case.
The interesting features of the Taub-NUT models have motivated the investigation of larger classes of solutions with similar properties. In \cite{moncrief84}, Moncrief studies \keyword{generalized Taub-NUT spacetimes} with a $U(1)$ isometry group and spatial $\ensuremath{\mathbb S^3}\xspace$-topology under the assumption of analyticity. Without analyticity, just assuming smoothness, the existence of a class of solutions with $U(1)\times U(1)$ symmetry (and again $\ensuremath{\mathbb S^3}\xspace$-topology) was shown in \cite{beyer11}. These \keyword{smooth Gowdy-symmetric generalized Taub-NUT solutions} have two functional degrees of freedom, i.e., for any choice of two smooth functions (subject to a periodicity condition) a corresponding solution exists. {Like the Taub models, they always have a smooth past and (with the exception of special singular cases) a smooth future Cauchy horizon} of $\ensuremath{\mathbb S^3}\xspace$-topology, through which they can be extended. Properties of these spacetimes have been studied by means of Fuchsian methods and soliton methods --- however, without explicitly solving Einstein's vacuum equations. Nevertheless, it is desirable to have exact solutions that can be studied in more detail than is possible with abstract considerations alone. In this paper we derive and study such an exact solution: a three-parameter, spatially inhomogeneous generalization of the Taub solution. This solution is a particular case of the smooth Gowdy-symmetric generalized Taub-NUT solutions and can be derived with soliton methods.
The application of methods from soliton theory to the equations of general relativity has a long tradition, in particular for axisymmetric and stationary equilibrium configurations (see, e.g., \cite{BelinskiZakharov1979, KramerNeugebauer1968, KramerNeugebauer1980, Neugebauer1979, NeugebauerMeinel1995, Neugebauer2003, Ruiz1995, Varzugin1997}, to mention just a few of many interesting publications), but also for plane waves and inhomogeneous cosmologies (see, e.g., \cite{AlekseevGriffiths2000,AlekseevGriffiths2004,Lim2008,RendallWeaver2001}). These methods are based on the integrability of the Einstein equations in the case of symmetric spacetimes and make use of reformulations of the nonlinear field equations in terms of associated linear matrix problems. In particular, it is possible to reduce boundary or initial value problems to linear integral equations. Here we will apply a particular approach due to Sibgatullin \cite{Sibgatullin} in order to construct our exact solution. A particular motivation for a detailed study of the exact solution described in this paper is the following. Since the works of Berger and Moncrief \cite{berger93} on the singularity of Gowdy-symmetric solutions of the vacuum equations with spatial $\mathbb T^3$-topology, there has been increasing evidence that spiky phenomena are a general feature of singular solutions of Einstein's field equations \cite{Lim2009}. While in the $\mathbb T^3$-Gowdy case solutions with spikes can be ``manufactured'' \cite{RendallWeaver2001,Lim2008} using certain solution generating techniques, the existence and properties of spikes in the case of $\ensuremath{\mathbb S^3}\xspace$- or $\ensuremath{\mathbb S^1\!\times \mathbb S^2}\xspace$-Gowdy solutions are less well understood, in particular due to the degeneracy of the action of the symmetry group at its axes. There are only a few discussions of this in the literature, see \cite{garfinkle99, Stahl02,beyer08}.
{To the best of our knowledge, the family of solutions derived here is the first example of a family of exact $\ensuremath{\mathbb S^3}\xspace$-Gowdy solutions where spiky features develop \textit{on} the symmetry axes.} This is discussed in more detail in \Sectionref{sec:spikes}. This paper is organized as follows. In Sec.~\ref{sec:Gowdy} we summarize some properties of smooth Gowdy-symmetric generalized Taub-NUT solutions. Then we construct the exact solution in Sec.~\ref{sec:construction}. Afterwards, in Sec.~\ref{sec:prop}, we study various properties of this cosmological model. In particular, we show that the Taub spacetimes are a special case of our solution. Then we look at symmetries of the solution, we show that it is regular in the maximal globally hyperbolic region, and we visualize particular 2-surfaces by embedding them into 3-dimensional Euclidean space. {Moreover, we study singularities that are present for special parameter choices, we extend the solution beyond the Cauchy horizons, and we identify ``false'' and ``true spikes'' on the symmetry axes.} Finally, we discuss our results in Sec.~\ref{sec:discussion}. \section{Background} \label{sec:background} \subsection{Geroch's symmetry reduction and the wave map structure of the vacuum equations} Without going into the details, let us give a quick summary of Geroch's symmetry reduction \cite{Geroch1} with particular emphasis on the resulting wave map structure of the vacuum field equations; more details which are relevant for our particular case here can be found in \cite{beyer11}. Let $M=\mathds R\xspace\times H$ be an oriented and time-oriented globally hyperbolic $4$-dimensional Lorentzian manifold endowed with a metric $g_{{a}{b}}$ of signature $(-,+,+,+)$, a smooth global time function $t$ and a Cauchy surface $H$. We denote the chosen volume form associated with $g_{{a}{b}}$ by $\epsilon_{{a}{b}{c}{d}}$ and the hypersurfaces given by $t=t_0$ for any constant $t_0$ by $H_{t_0}\cong H$. 
Let $\xi^a$ be a smooth globally defined spacelike Killing vector field which is tangent to the hypersurfaces $H_t$. The flow generated by $\xi^a$ induces a map $\pi$ from $M$ to the space of orbits $S$, i.e., $\pi$ maps every $p\in M$ to the uniquely determined integral curve of $\xi^a$ starting at $p$. The quotient manifold $S$ has a canonical smooth structure and is hence a smooth manifold. Next, we denote the (square of the) norm of $\xi^a$ by \begin{equation} \label{eq:deflambda} \lambda:=g_{ab}\xi^a\xi^b, \end{equation} the twist $1$-form of $\xi^a$ as \begin{equation} \label{eq:deftwist} \Omega_{a}:=\epsilon_{{a}{b}{c} {d}}\xi^{b}\nabla^{c}\xi^{d}, \end{equation} where $\nabla$ is the covariant derivative compatible with $g_{ab}$, and the ``$3$-metric'' as \[h_{{a}{b}}:=g_{{a}{b}}-\frac 1\lambda \xi_{a}\xi_{{b}}. \] It turns out that there is a unique smooth Lorentzian metric on $S$ which pulls back to $h_{{a}{b}}$ along $\pi$, which we refer to with the same symbol $h_{ab}$. In the same way there are a unique function and $1$-form on $S$ which pull back to the function $\lambda$ and the $1$-form $\Omega_{a}$, respectively, on $M$; hence we also denote them by the same symbols. The quantities $\lambda$, $\Omega_a$ and $h_{{a}{b}}$ on $S$ completely characterize the local geometry of $(M,g)$. Geroch found that Einstein's vacuum field equations on $(M,g)$ imply that the $1$-form $\Omega_a$ is closed, $\mathrm d\Omega=0$. We therefore locally find a twist potential $\omega$ such that $\Omega=\mathrm d\omega$. Let us define a new smooth Lorentzian metric $\hat h$ on $S$ as \[\hat h_{{a}{b}}:=\lambda h_{{a}{b}}.\] We refer to the associated covariant derivative operator as $\hat D_a$, Ricci tensor as $\hat S_{{a}{b}}$, and raise and lower indices with $\hat h$. 
Geroch was able to show that the vacuum field equations for $(M,g)$ (and certain geometric identities) are equivalent to the following set of equations on $S$: \begin{eqnarray} \label{eq:Gerochevollambda} \hat D_a\hat D^a\lambda &=&\frac 1\lambda\left(\hat D^{a}\lambda\hat D_{a}\lambda -\hat D^{a}\omega\hat D_{a}\omega\right),\\ \label{eq:Gerochevolomega} \hat D_a\hat D^a\omega &=&\frac 2\lambda\hat D^{a}\lambda\hat D_{a}\omega,\\ \label{eq:GerochRicci3} \hat S_{{a}{b}}&=&\frac 1{2\lambda^2}\left( \hat D_{a}\lambda\hat D_{b}\lambda +\hat D_{a}\omega\hat D_{b}\omega\right). \end{eqnarray} These equations can be interpreted as $2+1$-dimensional gravity $(S,\hat h)$ coupled to the wave map \[u:S\rightarrow\mathcal H,\quad p\mapsto u(p)=(\lambda(p),\omega(p)),\] where $\mathcal H$ is the $2$-dimensional half-plane model of the hyperbolic space with coordinates $(\lambda,\omega)$ and metric \[l=\frac{\mathrm d\lambda^2+\mathrm d\omega^2}{\lambda^2}.\] The right-hand side of \Eqref{eq:GerochRicci3} can be interpreted as the energy-momentum tensor associated with this wave map. Given any smooth curve $\gamma(\tau)$ in $S$ and a solution $u=(\lambda,\omega)$ and $\hat h$ of the equations above, the quantity \begin{equation} \label{eq:hypspeed} s(\tau):=\sqrt{\frac{\left(\frac{\mathrm d}{\mathrm d\tau}\lambda(\gamma(\tau))\right)^2+\left(\frac{\mathrm d}{\mathrm d\tau}\omega(\gamma(\tau))\right)^2}{\lambda^2(\gamma(\tau))}} \end{equation} is referred to as the \keyword{hyperbolic speed}\footnote{In the literature, it is customary to give this quantity a sign, for example, the same sign as the term $\frac{\mathrm d}{\mathrm d\tau}\lambda(\gamma(\tau))$ and then to refer to it as \keyword{hyperbolic velocity}. In this paper, we refrain from doing this.} of $\gamma$. \subsection{Gowdy-symmetric spacetimes with spatial $3$-sphere topology} Now, we specialize to the case $H=\ensuremath{\mathbb S^3}\xspace$.
We think of \ensuremath{\mathbb S^3}\xspace as the submanifold of $\mathds R\xspace^4$ determined by $x_1^2+x_2^2+x_3^2+x_4^2=1$. We are interested in smooth effective actions of the group $U(1)\times U(1)$ on $\ensuremath{\mathbb S^3}\xspace$. From the smooth effective action of $U(1)$ on $\mathds R\xspace^2$ by rotations around the origin, we construct an action of $U(1)\times U(1)$ on $\mathds R\xspace^4$ by demanding that the first factor of $U(1)\times U(1)$ generates rotations in the $x_3,x_4=\mathrm{constant}$-planes around the origin, while the second factor generates rotations in the $x_1,x_2=\mathrm{constant}$-planes around the origin. Clearly, this action is well-defined also when it is restricted to the subset $\ensuremath{\mathbb S^3}\xspace$ of $\mathds R\xspace^4$. As summarized in \cite{Chrusciel1990}, all smooth effective actions of $U(1)\times U(1)$ on $\ensuremath{\mathbb S^3}\xspace$ are equivalent to this action. It is useful to introduce coordinates $(\theta,\lambda_1,\lambda_2)$ on $\ensuremath{\mathbb S^3}\xspace$ so that the $\theta=\mathrm{constant}$-surfaces (wherever they are defined) equal the orbits of the group action. The \keyword{Euler coordinates} $(\theta,\lambda_1,\lambda_2)$ on $\ensuremath{\mathbb S^3}\xspace$ are \begin{eqnarray} x_1&=\cos\frac\theta 2\cos\lambda_1, \quad x_2&=\cos\frac\theta 2\sin\lambda_1,\label{eq:euler1}\\ x_3&=\sin\frac\theta 2\cos\lambda_2, \quad x_4&=\sin\frac\theta 2\sin\lambda_2,\label{eq:euler2} \end{eqnarray} with $\theta\in (0,\pi)$ and $\lambda_1,\lambda_2\in (0,2\pi)$. Clearly, these coordinates break down at the points $\theta=0$ and $\pi$, which we refer to as ``poles'' or ``axes'' of \ensuremath{\mathbb S^3}\xspace in the following. We also make use of the coordinates $(\theta,{\rho_1},{\rho_2})$ (which we also call Euler coordinates) with $\theta$ as above and \begin{equation}\label{eq:eulerangleparm2} \lambda_1=:({\rho_1}+{\rho_2})/2,\quad \lambda_2=:({\rho_1}-{\rho_2})/2. \end{equation} Let us fix any value $\theta\in[0,\pi]$.
Then, the $2\pi$-periodicity of $\lambda_1$ and $\lambda_2$ implies that, for each choice of $\rho_{1*}\in\mathds R\xspace$, all conditions of the form $\rho_1+2\pi k=\rho_{1*}$ given by all integers $k$ yield the same subset of $\ensuremath{\mathbb S^3}\xspace$; in the same way, for each choice of $\rho_{2*}\in\mathds R\xspace$, all conditions of the form $\rho_2+2\pi k=\rho_{2*}$ given by all integers $k$ yield the same subset of $\ensuremath{\mathbb S^3}\xspace$. In this sense, the coordinates $\rho_1$ and $\rho_2$ are $2\pi$-periodic. However, each of these subsets is a closed curve which is $4\pi$-periodic in $\rho_2$ in the first case and $4\pi$-periodic in $\rho_1$ in the second case. The coordinate fields $\partial_{\rho_1}$ and $\partial_{\rho_2}$ (which can be characterized geometrically without making reference to any coordinates in terms of left- and right-invariant vector fields of the standard action of $SU(2)$ on \ensuremath{\mathbb S^3}\xspace) are smooth non-vanishing vector fields on \ensuremath{\mathbb S^3}\xspace. They are linearly independent everywhere except at the axes where they are parallel: $\partial_{\rho_1}=\partial_{\rho_2}$ at $\theta=0$ and $\partial_{\rho_1}=-\partial_{\rho_2}$ at $\theta=\pi$. The integral curves of both fields are closed circles. The smooth field $\partial_{\lambda_1}$ on the other hand vanishes at $\theta=\pi$, while $\partial_{\lambda_2}$ vanishes at $\theta=0$. Both of the two sets of vector fields span the algebra of generators of the $U(1)\times U(1)$-action on $\ensuremath{\mathbb S^3}\xspace$. Given this action $\Phi: G\times\ensuremath{\mathbb S^3}\xspace\rightarrow\ensuremath{\mathbb S^3}\xspace$ for $G=U(1)\times U(1)$, we find an action \[\tilde\Phi: G\times M\rightarrow M,\quad (u,(t,p))\mapsto (t,\Phi(p)),\] where $M=\mathds R\xspace\times\ensuremath{\mathbb S^3}\xspace$ is equipped with the global smooth time function $t$ above and where $p$ represents any point in $\ensuremath{\mathbb S^3}\xspace$. 
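The preceding coordinate statements can be verified mechanically. The following short \texttt{sympy} sketch (an independent check, not part of the construction) confirms that the Euler parametrization \eqref{eq:euler1}, \eqref{eq:euler2} indeed lies on the unit sphere and that the pushed-forward coordinate fields satisfy $\partial_{\rho_1}=\partial_{\rho_2}$ at $\theta=0$ and $\partial_{\rho_1}=-\partial_{\rho_2}$ at $\theta=\pi$:

```python
import sympy as sp

theta, r1, r2 = sp.symbols('theta rho1 rho2', real=True)

# Euler coordinates with lambda1 = (rho1+rho2)/2, lambda2 = (rho1-rho2)/2
l1 = (r1 + r2) / 2
l2 = (r1 - r2) / 2
X = sp.Matrix([sp.cos(theta/2)*sp.cos(l1), sp.cos(theta/2)*sp.sin(l1),
               sp.sin(theta/2)*sp.cos(l2), sp.sin(theta/2)*sp.sin(l2)])

# the image lies on the unit 3-sphere
assert sp.simplify(X.dot(X) - 1) == 0

# coordinate fields, pushed forward to vectors in R^4
d1 = X.diff(r1)   # d/d rho1
d2 = X.diff(r2)   # d/d rho2

# parallel on the axes: equal at theta = 0, opposite at theta = pi
assert sp.simplify((d1 - d2).subs(theta, 0)) == sp.zeros(4, 1)
assert sp.simplify((d1 + d2).subs(theta, sp.pi)) == sp.zeros(4, 1)
```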
As a consequence, the generators of this action are globally defined smooth spacelike vector fields. We assume that we have chosen coordinates $(t,\theta,\rho_1,\rho_2)$ so that $(\theta,\rho_1,\rho_2)$ are Euler coordinates on each $H_t$ and so that $\partial_{\rho_1}$ and $\partial_{\rho_2}$ generate the $U(1)\times U(1)$-action as before. We shall demand now in addition to the above that $U(1)\times U(1)$ acts \emph{by isometries} on $(M,g)$ and hence that $\partial_{\rho_1}$ and $\partial_{\rho_2}$ span the algebra of Killing vector fields. Let \begin{equation} \label{eq:transformedbasis} \xi_1=a \partial_{\rho_1}+b\partial_{\rho_2},\quad \xi_2=c \partial_{\rho_1}+d\partial_{\rho_2} \end{equation} be any other two generators of the algebra of Killing vector fields where $a$, $b$, $c$ and $d$ are real numbers with $ad-bc\not=0$. As long as $a\not=\pm b$ (which is assumed in all of what follows), it follows that $\xi_1$ never vanishes (in particular not at the axes). This allows us to perform the Geroch reduction with respect to $\xi_1$ globally and hence to define the corresponding projection map $\pi$, quotient manifold $S$, and objects $\lambda$, $\omega$ and $\hat h$ as above. Notice that the twist scalar $\omega$ is defined globally because $M$ and $S$ are simply connected. For example, in the case $\xi_1=\partial_{\rho_1}$ (i.e., for $a=1$, $b=0$, and $c,d\in\mathds R\xspace$ such that $ad-bc\not=0$), the quotient map is the special Hopf map in \cite{beyer11} and $S=\mathds R\xspace\times\ensuremath{\mathbb S^2}\xspace$. Notice that the $1$-parameter subgroup of $U(1)\times U(1)$ generated by $\xi_1$ is not necessarily isomorphic to $U(1)$ since its integral curves are not necessarily closed; indeed, they are closed if and only if $a/b\in\ensuremath{\mathds Q}\xspace$. Since $[\xi_1,\xi_2]=0$, it follows that the push-forward of $\xi_2$ along $\pi$ is a Killing vector field of $(S,h_{ab})$ and we can perform the Geroch reduction a second time.
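That $\xi_1$ is nonvanishing precisely when $a\not=\pm b$ can be made concrete on the round sphere; since the vanishing of a vector field does not depend on the choice of metric, this computation suffices for the spacetime case as well. The following \texttt{sympy} sketch computes the squared Euclidean length of the push-forward of $\xi_1$ in the ambient $\mathds R^4$ (which equals its squared length in the induced round metric):

```python
import sympy as sp

theta, a, b = sp.symbols('theta a b', real=True)
r1, r2 = sp.symbols('rho1 rho2', real=True)

l1 = (r1 + r2) / 2
l2 = (r1 - r2) / 2
X = sp.Matrix([sp.cos(theta/2)*sp.cos(l1), sp.cos(theta/2)*sp.sin(l1),
               sp.sin(theta/2)*sp.cos(l2), sp.sin(theta/2)*sp.sin(l2)])

xi1 = a*X.diff(r1) + b*X.diff(r2)   # push-forward of xi_1 = a d/drho1 + b d/drho2
norm2 = xi1.dot(xi1)                # squared Euclidean norm in R^4

# |xi_1|^2 = (a^2 + b^2 + 2 a b cos(theta))/4  (checked via exponential rewrite)
target = (a**2 + b**2 + 2*a*b*sp.cos(theta))/4
assert sp.expand((norm2 - target).rewrite(sp.exp)) == 0

# on the axes: |xi_1|^2 = (a+b)^2/4 at theta=0 and (a-b)^2/4 at theta=pi,
# so xi_1 vanishes somewhere if and only if a = b or a = -b
assert sp.simplify(norm2.subs(theta, 0) - (a + b)**2/4) == 0
assert sp.simplify(norm2.subs(theta, sp.pi) - (a - b)**2/4) == 0
```

The minimum of $|\xi_1|^2$ over $\theta$ is $(|a|-|b|)^2/4$, attained on an axis, which is exactly the statement in the text.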
Since the push-forward of $\xi_2$ vanishes at some points in $S$ (because $\xi_2$ itself must either vanish or must be parallel to $\xi_1$ at some points in $M$), the result is, however, not a smooth manifold, but rather a manifold with boundary. For the discussion in \Sectionref{sec:construction} and in some parts of \Sectionref{sec:prop}, this is not a problem. However, in \Sectionref{sec:spikes}, it is important to use the smooth manifold structure which is obtained by only \emph{one} Geroch reduction with respect to any choice of $\xi_1$ with $a$, $b$, $c$ and $d$ as above. \subsection{Smooth Gowdy-symmetric generalized Taub-NUT solutions\label{sec:Gowdy}} In this section, we summarize the definition and some properties of the class of inhomogeneous cosmological models that we have called \keyword{smooth Gowdy-symmetric generalized Taub-NUT solutions}. For details we refer to \cite{beyer11}. Such a spacetime is a Gowdy-symmetric, oriented, time-oriented maximally extended globally hyperbolic vacuum solution to Einstein's field equations whose spatial topology is that of the three-sphere $\ensuremath{\mathbb S^3}\xspace$. The main property is that it can be extended (not necessarily as a solution of the vacuum equations) to a non-globally hyperbolic Gowdy-symmetric spacetime in the past. The corresponding Cauchy horizon is supposed to be a smooth null surface with $\ensuremath{\mathbb S^3}\xspace$-topology and its null generator is parallel to a generator of one of the $U(1)$-factors of the symmetry group on the horizon; in particular, the orbits of the generators are therefore closed. 
{In terms of a time function $t\in(0,\pi)$ and the above described coordinates $\theta$, $\rho_1$ and $\rho_2$, one can achieve the following form of the metric \cite{beyer11},} \begin{equation}\label{eq:metric} g_{ab}=\mathrm e^M(-\mathrm d t^2+\mathrm d\theta^2)+R_0\left[\sin^2\!t\,\mathrm e^u (\mathrm d\rho_1+Q \mathrm d\rho_2)^2+\sin^2\!\theta\,\mathrm e^{-u} \mathrm d\rho_2^2\right], \end{equation} where $R_0$ is a positive constant and $u$, $Q$ and $M$ are smooth functions of $t$ and $\theta$. The past Cauchy horizon is located at $t=0$.\footnote{\label{fn:coordinates}{Strictly speaking, the time coordinate $t$ is not defined at $t=0$. However, it is possible to introduce ``regular coordinates'' $(x,y,\rho_1',\rho_2')$ with $x=\cos\theta$ and $y=\cos t$, in which the solution can be extended to and beyond that boundary (see Sec.~\ref{sec:extensions}). The past Cauchy horizon is then located at $y=1$, corresponding to $t=0$. For the sake of simplicity, we will in the following often talk about the surfaces $t=0$ or, in a similar manner, $t=\pi$, without always giving explicit reference to regularized coordinates.}} With respect to the choice $\xi_1=\partial_{\rho_1}$ and $\xi_2=\partial_{\rho_2}$ in \Eqref{eq:transformedbasis}, we therefore have \begin{equation*} \lambda=R_0\sin^2\!t\,\mathrm e^u, \end{equation*} \begin{equation*} \partial_t\omega = -R_0\frac{\sin^3 t}{\sin\theta}\mathrm e^{2u}\partial_\theta Q,\quad \partial_\theta\omega = -R_0\frac{\sin^3 t}{\sin\theta}\mathrm e^{2u}\partial_t Q, \end{equation*} and \begin{equation} \label{eq:3metric} h_{ab}=\mathrm e^M(-\mathrm d t^2+\mathrm d\theta^2)+R_0\sin^2\!\theta\,\mathrm e^{-u} \mathrm d\rho_2^2. \end{equation} It was not \emph{a priori} guaranteed that there are any solutions to Einstein's field equations that have all the above properties in the entire time interval {$(0,\pi)$}. In order to establish such a global existence result, we chose the following approach in \cite{beyer11}. 
In the first step we showed \emph{local} existence in a neighbourhood of the past Cauchy horizon. This was done with the Fuchsian methods developed in \cite{AmesA,AmesB}. With respect to the choice $\xi_1=\partial_{\rho_1}$ and $\xi_2=\partial_{\rho_2}$ in \Eqref{eq:transformedbasis}, the result can be summarized as follows. \begin{theorem}\label{Thm1} Let $S_{**}$ and $Q_{*}$ be axially symmetric functions in $C^\infty(\mathbb S^2)$ so that $S_{**}(0)=S_{**}(\pi)$ and $R_0$ a positive constant. Then there exists a unique smooth Gowdy-symmetric generalized Taub-NUT solution for all $t\in(0,\delta]$ (for a sufficiently small $\delta>0$) satisfying the following uniform expansions at $t=0$: \begin{eqnarray*} R_0\, \mathrm e^{u(t,\theta)} &=& \mathrm e^{S_{**}(\theta)}+O(t^2),\\ Q(t,\theta)&=&\cos\theta+Q_*(\theta)\sin^2\!\theta +O(t^2),\\ M(t,\theta)&=&S_{**}(\theta)-2S_{**}(0)+2\ln R_0+O(t^2). \end{eqnarray*} \end{theorem} Besides local existence, this theorem also shows what the available degrees of freedom are: the two \keyword{asymptotic data functions} $S_{**}$ and $Q_{*}$, which describe the behaviour of the solution in a vicinity of the past horizon. After local existence on the time interval $(0,\delta]$ was established, we used a global existence result due to Chru\'sciel \cite{Chrusciel1990} that guarantees existence and regularity in the entire time interval $0<t<\pi$. The only remaining question was what happens at $t=\pi$, where the above defined coordinates become singular. In order to answer this question, we applied methods from soliton theory and discussed the linear matrix problem that is equivalent to the essential part of the Einstein vacuum equations under Gowdy symmetry. In this way, we were able to find explicit expressions for the metric functions on the boundaries $\theta=0$, $\theta=\pi$ and $t=\pi$ in terms of the data at $t=0$. 
These expressions were used for an analysis of the solution at $t=\pi$, which strongly indicates the following: \begin{quote} In general, smooth Gowdy-symmetric generalized Taub-NUT solutions (with a past Cauchy horizon at $t=0$) develop a second Cauchy horizon at $t=\pi$. The only exceptions are special cases in which curvature singularities form. These cases occur when the imaginary part $b=\Im\mathcal E$ of the Ernst potential $\mathcal E$ [the Ernst potential is defined in \eqref{eq:deff}-\eqref{eq:defa} below] satisfies \begin{equation} b_B-b_A= \pm4, \end{equation} where $b_A=b(t=0,\theta=0)$ and $b_B=b(t=0,\theta=\pi)$ are the values at the poles $A$ and $B$ at $t=0$, see Fig.~\ref{fig:Gowdy} below. Then the solutions have a curvature singularity at $t=\pi, \theta=0$ (for a `$+\!$' sign) or at $t=\pi, \theta=\pi$ (for a `$-\!$' sign), respectively. Hence, whether the solution will be regular or singular at $t=\pi$ can be read off from the data at $t=0$. \end{quote} Note, however, that the analysis in \cite{beyer11} does not rule out the possibility that the metric potentials obtained \emph{at} $t=\pi$ do not connect sufficiently smoothly to the potentials at $t<\pi$ (which have not been obtained explicitly). Hence the solutions might develop additional defects as $t\to\pi$, even though we doubt that this can actually happen. Nevertheless, this uncertainty is an additional motivation for studying examples of exact solutions. And the solution presented here turns out to have all expected properties as described above. Finally, we note that it is assumed in \Theoremref{Thm1} that the past horizon is generated\footnote{{Note that, strictly speaking, $\partial_{\rho_1}$ is not defined at $t=0$ where the coordinates break down. However, similarly to the remark in footnote~\ref{fn:coordinates}, we can introduce regular coordinates $(x,y,\tilde\rho_1,\tilde\rho_2)$ that extend to the Cauchy horizon. 
In these coordinates, we find that $\partial_{\tilde\rho_1}=\partial_{\rho_1}$ becomes null on the horizon. Moreover, the integral curves of this vector field are autoparallel curves (i.e.\ ``geodesics in a non-affine parametrization''), which corresponds to a nontrivial surface gravity. Furthermore, an appropriate rescaling of the Killing field leads to a vector field whose integral curves are null geodesics. In the following we will nevertheless also refer to $\partial_{\rho_1}$ as a ``generator'' of the Cauchy horizon. The same remark applies to generators of the future Cauchy horizon.}} by the Killing vector field $\partial_{\rho_1}$ and hence its integral curves are closed. In the special case $b_A=b_B$, the future horizon is generated by $\partial_{\rho_1}$ as well. Otherwise, the future horizon is generated by $Q_\mathrm{f}\partial_{\rho_1}-\partial_{\rho_2}$, where $Q_\mathrm{f}$ is the (constant) value of the metric potential $Q$ at the future horizon. In general, this implies that the integral curves of the generator of the future horizon are not closed, except in the special case where $Q_\mathrm{f}$ is a rational number. Note also that the metric function $u$ might blow up in the limit $t\to\pi$ even if the spacetime is regular there. \section{Construction of the exact solution\label{sec:construction}} \subsection{Einstein's field equations and the Ernst formulation} The Einstein equations for the metric \eqref{eq:metric} lead to two second-order equations for $u$ and $Q$, which are independent of $M$. Hence one may calculate $u$ and $Q$ in a first step. Afterwards, the remaining Einstein equations provide formulae for $\partial_t M$ and $\partial_\theta M$ so that $M$ can immediately be obtained from a line integral (which turns out to be path independent as a consequence of the field equations for $u$ and $Q$).
The two equations for $u$ and $Q$ are equivalent to the \keyword{Ernst equation} \begin{equation}\label{eq:EE} \Re(\mathcal E)\left(-\partial_t^2\mathcal E-\cot t\,\partial_t\mathcal E +\partial_\theta^2\mathcal E+\cot\theta\,\partial_\theta\mathcal E\right) =-(\partial_t\mathcal E)^2+(\partial_\theta\mathcal E)^2 \end{equation} for the complex \keyword{Ernst potential} $\mathcal E=f+\mathrm i b$, which is constructed from the two Killing vectors $\partial_{\rho_1}$ and $\partial_{\rho_2}$. The real part $f$ of $\mathcal E$ is defined by \begin{equation}\label{eq:deff} f:=\frac{1}{R_0}g(\partial_{\rho_2},\partial_{\rho_2}) =Q^2\mathrm e^u\sin^2\! t+\mathrm e^{-u}\sin^2\!\theta \end{equation} and the imaginary part $b$ is given by \begin{equation}\label{eq:defb} \partial_t a=\frac{1}{f^2}\sin t\sin\theta\,\partial_\theta b,\quad \partial_\theta a=\frac{1}{f^2}\sin t\sin\theta\,\partial_t b \end{equation} with \begin{equation}\label{eq:defa} a:= \frac{g(\partial_{\rho_1},\partial_{\rho_2})} {g(\partial_{\rho_2},\partial_{\rho_2})} = \frac{Q}{f}\mathrm e^u\sin^2 t. \end{equation} Note that the Ernst equation was originally formulated in the context of \emph{axisymmetric and stationary} spacetimes \cite{Ernst1968,KramerNeugebauer1968}. These are characterized by the existence of a space\-like Killing vector (corresponding to axisymmetry) and a second Killing vector (corresponding to stationarity), which is timelike in a vicinity of spatial infinity. Since the Gowdy-symmetric solutions also admit two Killing vectors (which, however, are both spacelike), the mathematical formulation of the field equations and the solution methods are very similar in these two cases.
Indeed, we may even use the formal coordinate transformation \begin{equation}\label{eq:transform} \rho = \mathrm i \sin t\sin\theta,\quad \zeta = \cos t\cos\theta \end{equation} to coordinates ($\rho,\zeta,\rho_1,\rho_2$) in which the metric \eref{eq:metric} takes the Weyl-Lewis-Papapetrou form for axisymmetric and stationary spacetimes. (The two Killing variables $\rho_1$, $\rho_2$ would then play the role of an azimuthal angle and a stationary time coordinate.) In the following, we wish to solve an \emph{initial value problem} for the Ernst equation \eref{eq:EE}, where we prescribe the initial Ernst potential at $t=0$. However, in terms of the corresponding axisymmetric and stationary formulation, we obtain a \emph{boundary value problem} with prescribed axis values at $\rho=0$, $\zeta\in[-1,1]$ [cf.~\eref{eq:transform}] as illustrated in Fig.~\ref{fig:Gowdy}. Mathematically, initial and boundary value problems have, of course, completely different properties and we cannot expect to find solutions to arbitrary initial value problems from a discussion of a corresponding boundary value problem. On the other hand, in this paper we consider a particular family of solutions where this procedure can indeed be applied. (In any case, one may check afterwards whether the constructed solution really is a solution to the original time-evolution problem.) \begin{figure}\centering \includegraphics[scale=0.7]{GowdyX.eps} \caption{Illustration of an initial value problem for the Ernst equation of a Gowdy-symmetric generalized Taub-NUT solution (with initial data at $t=0$, left panel) and a boundary value problem for the axisymmetric and stationary Ernst equation (with boundary values in the interval $[-1,1]$ on the $\zeta$-axis, right panel). \label{fig:Gowdy}} \end{figure} A useful method for tackling axisymmetric and stationary boundary value problems is ``Sibgatullin's integral method'' \cite{Sibgatullin}, which we discuss in the next section. 
For more details on the axisymmetric and stationary Ernst equation and exact solution methods we refer the reader also to \cite{Neugebauer1996} and \cite{Neugebauer2003}. \subsection{Solution of the Ernst equation} As shown by Sibgatullin \cite{Sibgatullin}, a boundary value problem for the Ernst equation of an axisymmetric and stationary spacetime can be reformulated in terms of the linear integral equation \begin{equation}\label{eq:inteq} \Xint-_{-1}^1\frac{\mu(\xi;\rho,\zeta)[e(\xi)+\tilde e(\eta)]\,\mathrm d\sigma}{(\sigma-\tau)\sqrt{1-\sigma^2}}=0 \end{equation} for a complex function $\mu(\xi;\rho,\zeta)$, where $\Xint-$ denotes the principal value integral. We can fix a unique solution to this homogeneous problem by imposing the additional constraint \begin{equation}\label{eq:constraint} \int_{-1}^1\frac{\mu(\xi;\rho,\zeta)\,\mathrm d\sigma}{\sqrt{1-\sigma^2}}=\pi. \end{equation} Here, we have used the definitions $\xi:=\zeta+\mathrm i\rho\sigma$, $\eta:=\zeta+\mathrm i\rho\tau$ with $\sigma,\tau\in[-1,1]$. The boundary values $\mathcal E(\rho=0,\zeta)$ appear in the form of their analytical continuations \begin{equation} e(\xi):=\mathcal E(\rho=0,\zeta=\xi),\quad \tilde e(\xi):=\overline{e(\bar\xi)}, \end{equation} where the bar denotes complex conjugation. Once $\mu$ is calculated\footnote{For ease of notation, we will often simply write $\mu$ or $\mu(\xi)$ for $\mu(\xi;\rho,\zeta)$.}, the corresponding Ernst potential can be obtained from \begin{equation}\label{eq:EP} \mathcal E(\rho,\zeta)=\frac{1}{\pi}\int_{-1}^1\frac{e(\xi)\mu(\xi)\mathrm d\sigma}{\sqrt{1-\sigma^2}}. 
\end{equation} In the following, we intend to construct a family of generalized Taub-NUT solutions for which the initial Ernst potential is simple enough to allow for an exact solution of the integral equation \eref{eq:inteq}, but which contains enough parameters to describe both the regular solutions (with a second Cauchy horizon at $t=\pi$) and the singular cases (with scalar curvature singularities at the points $C$ or $D$ in Fig.~\ref{fig:Gowdy}), see Sec.~\ref{sec:Gowdy}. Before we can choose appropriate initial data, we derive some restrictions on the initial Ernst potential $\mathcal E_\mathrm{p}=f_\mathrm{p}+\mathrm i b_\mathrm{p}$ at the past horizon $t=0$ (or, equivalently, on the boundary values at $\rho=0$ in the corresponding boundary value problem). At $t=0$, the real part $f$ of $\mathcal E$ and the regular metric potential $u$ are related by \begin{equation} f(t=0,\theta)=\mathrm e^{-u(t=0,\theta)}\sin^2\!\theta, \end{equation} see \eqref{eq:deff}. As a consequence, $f_\mathrm{p}$ has to satisfy the conditions \begin{equation}\label{eq:condf1} f_\mathrm{p}(\zeta=\pm1)=0,\quad f_\mathrm{p}(\zeta)>0\textrm{ for } -1<\zeta<1, \end{equation} because $\theta=0,\pi$ corresponds to $\zeta=\pm1$ for $t=0$. A second restriction on $f_\mathrm{p}$ follows from the requirement that the first-order equations for the metric potential $M$ must have a regular solution. This led to the condition $S_{**}(0)=S_{**}(\pi)$ in Theorem~\ref{Thm1}, which translates into \begin{equation}\label{eq:condf2} \frac{\mathrm d f_\mathrm{p}}{\mathrm d\zeta}\Big|_{\zeta=1}=-\frac{\mathrm d f_\mathrm{p}}{\mathrm d\zeta}\Big|_{\zeta=-1}.
\end{equation} Finally, a condition for the imaginary part $b_\mathrm{p}$ follows from the relation between $b_\mathrm{p}$ and the metric potential $Q$ \cite{beyer11}, \begin{equation} b_\mathrm{p}(\theta)=b_A+2\int_0^\theta Q(0,\theta')\sin\theta'\,\mathrm d\theta', \end{equation} where $b_A=b(t=0,\theta=0)$ is the value of $b$ at the point $A$, see Fig.~\ref{fig:Gowdy}. In our setting, the function $Q$ takes on the boundary values $Q=1$ for $\theta=0$ and $Q=-1$ for $\theta=\pi$. Using $\zeta=\cos(\theta)$ for $t=0$ together with the latter equation, these boundary conditions lead to \begin{equation}\label{eq:condb} \frac{\mathrm d b_\mathrm{p}}{\mathrm d\zeta}\Big|_{\zeta=1}=-2,\quad \frac{\mathrm d b_\mathrm{p}}{\mathrm d\zeta}\Big|_{\zeta=-1}=2. \end{equation} As probably the simplest non-trivial possibility for the initial Ernst potential $\mathcal E_\mathrm{p}$, we choose a cubic imaginary part $b_\mathrm{p}=c_0+c_1\zeta+c_2\zeta^2+c_3\zeta^3$. The constant $c_0$, which plays the role of an integration constant in \eqref{eq:defb}, has no physical meaning. Hence we may set $c_0=0$. Now we must ensure that \eref{eq:condb} holds, which leads to $c_2=-1$ and $c_1=-3c_3$. Thus we arrive at \begin{equation}\label{eq:indatb} b_\mathrm{p}(\zeta)=c_3\zeta(\zeta^2-3)-\zeta^2. \end{equation} For the real part $f_\mathrm{p}$, subject to the conditions \eref{eq:condf1}, \eref{eq:condf2}, we choose a quadratic function \begin{equation}\label{eq:indatf} f_\mathrm{p}=c_1(1-\zeta^2) \end{equation} with $c_1>0$ (which is not related to the auxiliary quantity $c_1$ above). However, for our choice of a cubic function $b_\mathrm{p}$, it turns out that the method for solving the integral equation \eref{eq:inteq} as described in the following will only work if $f_\mathrm{p}$ is a cubic polynomial, too. 
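The elementary algebra leading from the cubic ansatz for $b_\mathrm p$ to \eqref{eq:indatb} can be reproduced mechanically; the following \texttt{sympy} sketch solves the two conditions \eqref{eq:condb} for $c_1$ and $c_2$ and recovers \eqref{eq:indatb}:

```python
import sympy as sp

zeta, c0, c1, c2, c3 = sp.symbols('zeta c0 c1 c2 c3', real=True)

# general cubic ansatz for the imaginary part b_p at the past horizon
b = c0 + c1*zeta + c2*zeta**2 + c3*zeta**3
db = sp.diff(b, zeta)

# boundary conditions (eq:condb): db/dzeta = -2 at zeta=+1 and +2 at zeta=-1
sol = sp.solve([sp.Eq(db.subs(zeta, 1), -2), sp.Eq(db.subs(zeta, -1), 2)],
               [c1, c2])
assert sol[c1] == -3*c3 and sol[c2] == -1

# with the inessential constant c0 = 0 this reproduces (eq:indatb)
b_p = b.subs({c0: 0, c1: sol[c1], c2: sol[c2]})
assert sp.expand(b_p - (c3*zeta*(zeta**2 - 3) - zeta**2)) == 0
```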
A possible way out is to start from the \emph{cubic} initial potential \begin{equation}\label{eq:indat} \mathcal E_\mathrm{p}=c_1(1-\zeta^2)\left(1-\frac{\zeta}{d}\right) +\mathrm i\zeta\left[c_3(\zeta^2-3)-\zeta\right] \end{equation} depending on the two parameters $c_1$ and $c_3$ and on an auxiliary parameter $d$. At the end, when we have constructed $\mathcal E$, we may take the limit $d\to\infty$ in which the real part of \eref{eq:indat} reduces to \eref{eq:indatf}. (Note that the condition \eqref{eq:condf2} is only satisfied in the limit $d\to\infty$, i.e.\ a finite $d$ cannot lead to a regular solution of our original initial value problem.) Now we have to solve the integral equation \eqref{eq:inteq} for our choice \eqref{eq:indat} of the initial potential. According to \cite{Sibgatullin}, it is not too difficult to find exact solutions of \eqref{eq:inteq} for \emph{rational} initial data. In that case, one needs to find the zeros $\xi_1,\dots,\xi_N$ of the equation $e(\xi)+\tilde e(\xi)=0$ together with their multiplicities $m_1,\dots,m_N$. The solution $\mu(\xi)$ should then have the form \begin{equation}\label{eq:ansatz} \mu(\xi)=A(\rho,\zeta)+\sum\limits_{k=1}^N \left[\frac{A^1_k(\rho,\zeta)}{\xi-\xi_k} +\frac{A^2_k(\rho,\zeta)}{(\xi-\xi_k)^2}+\dots +\frac{A^{m_k}_k(\rho,\zeta)}{(\xi-\xi_k)^{m_k}}\right]. \end{equation} The unknown functions $A$ and $A^n_k$ can be found from the algebraic system of equations that one obtains by plugging \eqref{eq:ansatz} into \eqref{eq:inteq}, \eqref{eq:constraint}. In our case we have to solve the equation \begin{equation} e(\xi)+\tilde e(\xi)\equiv 2c_1(1-\xi^2)\left(1-\frac{\xi}{d}\right)=0, \end{equation} which has the solutions $\xi=\pm 1,d$ (each with multiplicity one). Hence we start from the ansatz \begin{equation} \mu(\xi)=A(\rho,\zeta)+\frac{A_+(\rho,\zeta)}{\xi+1}+\frac{A_-(\rho,\zeta)}{\xi-1}+\frac{A_d(\rho,\zeta)}{\xi-d}.
\end{equation} In order to determine the functions $A$, $A_{\pm}$ and $A_d$ we need to evaluate the integrals in \eqref{eq:inteq} and \eqref{eq:constraint}, which can be done with the aid of the formulae \begin{equation}\fl \int_{-1}^1\frac{\mathrm d\sigma}{\sqrt{1-\sigma^2}}=\pi,\quad \int_{-1}^1\frac{\xi\mathrm d\sigma}{\sqrt{1-\sigma^2}}=\pi\zeta,\quad \int_{-1}^1\frac{\xi^2\mathrm d\sigma}{\sqrt{1-\sigma^2}}=\pi\left(\zeta^2-\frac{\rho^2}{2}\right) \end{equation} and \begin{equation}\fl \Xint-_{-1}^1\frac{\mathrm d\sigma}{\sqrt{1-\sigma^2}(\sigma-\tau)}=0,\quad \int_{-1}^1\frac{\mathrm d\sigma}{\sqrt{1-\sigma^2}(\xi-\alpha)}=\frac{\pi\,\mathrm{sgn}(\zeta-\alpha)}{\sqrt{\rho^2+(\zeta-\alpha)^2}}\quad\textrm{for}\quad \alpha\in\mathds R\xspace. \end{equation} As the first step, we find that the constraint \eqref{eq:constraint} leads to \begin{equation}\label{eq:con0} A+\frac{A_+}{r_+}-\frac{A_-}{r_-}-\frac{A_d}{r_d}=1 \end{equation} for $\zeta\in[-1,1]$, $d>1$, where \begin{equation} r_\pm:=\sqrt{\rho^2+(\zeta\pm1)^2},\quad r_d:=\sqrt{\rho^2+(\zeta-d)^2}. 
\end{equation} Similarly, we obtain from \eqref{eq:inteq} that \begin{equation}\label{eq:con1} T_0+\zeta T_1+\left(\zeta^2-\frac{\rho^2}{2}\right)T_2+\frac{T_+}{r_+}-\frac{T_-}{r_-}-\frac{T_d}{r_d}=0, \end{equation} where \begin{eqnarray*}\fl T_0 & = & -\left(\frac{c_1}{d} + 3 \mathrm i c_3\right) A - \left[\left(\frac1 d + 1\right) c_1 + \mathrm i (c_3 + 1)\right] A_+\\ \fl && + \left[\left(\frac1 d - 1\right) c_1 + \mathrm i (c_3 - 1)\right] A_- + \mathrm i (c_3 d - 1) A_d \\ \fl && - \left[(\mathrm i + c_1) A - \left(\frac{c_1}{d} + \mathrm i c_3\right) (A_+ + A_- + A_d)\right] \eta + \left(\frac{c_1}{d} + \mathrm i c_3\right) A \eta^2,\\ \fl T_1 & = & -(\mathrm i + c_1) A + \left(\frac{c_1}{d} + \mathrm i c_3\right) (A_+ + A_- + A_d) + \left(\frac{c_1}{d} + \mathrm i c_3\right) A \eta,\\ \fl T_2 & = & \left(\frac{c_1}{d} + \mathrm i c_3\right) A,\\ \fl T_+ & = & -\left[[c_1 + \mathrm i (2 c_3 - 1)] - \left(\left(\frac1 d + 1\right) c_1 - \mathrm i (c_3 + 1)\right) \eta + \left(\frac{c_1}{d} - \mathrm i c_3\right) \eta^2\right] A_+,\\ \fl T_- & = & \left[[c_1 - \mathrm i (2 c_3 + 1)] + \left(\left(-\frac1 d + 1\right) c_1 + \mathrm i (c_3 - 1)\right) \eta - \left(\frac{c_1}{d} -\mathrm i c_3\right) \eta^2\right] A_-,\\ \fl T_d & = & \left[\left(\frac{c_1}{d} - \mathrm i (3 c_3 + d - c_3 d^2)\right) - \mathrm i (1 - c_3 d) \eta - \left(\frac{c_1}{d} - \mathrm i c_3\right) \eta^2\right] A_d. \end{eqnarray*} The left hand side of \eqref{eq:con1}, which is quadratic in $\eta$, must vanish for all $\eta$. Hence, by separately equating the coefficients of $\eta^0$, $\eta^1$ and $\eta^2$ to zero, we find three further algebraic equations. Together with \eqref{eq:con0}, we arrive at a system of four algebraic equations for the four unknowns $A, A_\pm, A_d$. It is a lengthy but straightforward calculation to solve this system and to plug the solution into formula \eqref{eq:EP} for the Ernst potential. 
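Two ingredients of this reduction lend themselves to an independent spot-check: the factorization $e(\xi)+\tilde e(\xi)=2c_1(1-\xi^2)(1-\xi/d)$ that locates the poles of $\mu$, and the elementary $\sigma$-integrals quoted above. The following sketch (with arbitrary sample values for $\rho$, $\zeta$ and $d$; Gauss--Chebyshev quadrature absorbs the weight $1/\sqrt{1-\sigma^2}$) confirms both:

```python
import numpy as np
import sympy as sp

# --- factorization of e(xi) + etilde(xi) for the cubic data (eq:indat) ---
xi = sp.Symbol('xi')                          # complex
c1, c3, d = sp.symbols('c1 c3 d', real=True)
e = c1*(1 - xi**2)*(1 - xi/d) + sp.I*xi*(c3*(xi**2 - 3) - xi)
et = e.conjugate().subs(sp.conjugate(xi), xi)   # etilde(xi) = conj(e(conj(xi)))
assert sp.expand(e + et - 2*c1*(1 - xi**2)*(1 - xi/d)) == 0

# --- the sigma-integrals, checked by Gauss-Chebyshev quadrature ---
rho, zeta, dval = 0.7, 0.3, 2.5               # arbitrary sample with d > 1
s, w = np.polynomial.chebyshev.chebgauss(400) # weight 1/sqrt(1-s^2) built in
xin = zeta + 1j*rho*s                         # xi = zeta + i*rho*sigma

assert np.isclose(np.sum(w), np.pi)
assert np.isclose(np.sum(w*xin), np.pi*zeta)
assert np.isclose(np.sum(w*xin**2), np.pi*(zeta**2 - rho**2/2))
for alpha in (1.0, -1.0, dval):               # simple poles at xi = -1, +1, d
    val = np.sum(w/(xin - alpha))
    ref = np.pi*np.sign(zeta - alpha)/np.sqrt(rho**2 + (zeta - alpha)**2)
    assert np.isclose(val, ref)
```

The signs produced by the last formula for $\alpha=\mp1$ and $\alpha=d$ are exactly those appearing in \eqref{eq:con0}.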
Afterwards, we can proceed with our programme and take the limit $d\to\infty$. (As explained above, the parameter $d$ is only an auxiliary quantity introduced for technical reasons. At the end, however, we are only interested in the Ernst potential in the limit $d\to\infty$.) In a next step we ``transform'' the obtained solution of the axisymmetric and stationary Ernst equation into a solution of our original time-evolution problem by virtue of the coordinate transformation \eqref{eq:transform}. In particular, we replace $r_+$ with $\cos t+\cos\theta$ and $r_-$ with $\cos t-\cos\theta$. In this way, we arrive at the following Ernst potential, \begin{eqnarray}\fl \mathcal E & = & -\Big\{c_3^4 (x^2-1)^3 (y-1)^6 (y+1) + 16 c_1^4 (x^2-1) (y+1)^3 - 16 \mathrm i c_1^3 (y+1) \Big[y^2 + 4 y-5 \nonumber\\ \fl && \quad - x^2 (y^2 - 4 y+7) - c_3 x \Big(y^3+ 2 y^2-y + 10 - x^2 (y^3+2y^2-y+2)\Big)\Big] \nonumber\\ \fl && \quad +4 \mathrm i c_1 (x^2-1) (y-1)^3 \Big[4 - 4 c_3 x (3 y+2) + c_3^2 \Big(y^2+4 y +19 + x^2 (11 y^2+20 y +9)\Big) \nonumber\\ \fl && \quad + c_3^3 x \Big(y^3 + 10 y^2+ 31 y +18 + x^2 (3 y^3+ 14 y^2+ 17 y+6)\Big)\Big] \nonumber\\ \fl && \quad + 4 c_1^2 (y-1) \Big[4 \Big(- y^2+ 4 y-3 + x^2 ( y^2+4 y+7)\Big) \nonumber\\ \fl && \quad + 8 c_3 x \Big( y^3 + 2 y^2+ 5 y +10 - x^2 (y^3 + 6 y^2+ 7 y +2)\Big) \nonumber\\ \fl && \quad + c_3^2 \Big( 3 y^4+ 8 y^3 + 26 y^2+ 56 y +51 - 2 x^2 (5 y^4 + 16 y^3 + 32 y^2 + 24 y-5 ) \nonumber\\ \fl && \quad + x^4 (7 y^4 + 24 y^3 + 38 y^2+ 24 y +3 )\Big)\Big]\Big\} \nonumber\\ \fl &&\quad /\Big\{16 c_1 \Big[c_3^2 (x^2-1) (y-1)^3 + 4 c_1^2 (y+1) + 4 \mathrm i c_1 (y-1) \Big(1 - c_3 x (y+2)\Big)\Big]\Big\}\label{eq:E}, \end{eqnarray} where $x:=\cos\theta$, $y:=\cos t$. One can explicitly verify that the Ernst equation \eqref{eq:EE} is satisfied for this potential, i.e.\ we have indeed constructed the Ernst potential of a smooth Gowdy-symmetric generalized Taub-NUT solution. 
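The substitutions $r_+\to\cos t+\cos\theta$ and $r_-\to\cos t-\cos\theta$ used above rest on two trigonometric identities implied by the formal transformation \eqref{eq:transform}; up to the choice of square-root branch, they can be confirmed with a two-line \texttt{sympy} check:

```python
import sympy as sp

t, th = sp.symbols('t theta', real=True)
rho  = sp.I*sp.sin(t)*sp.sin(th)     # formal transformation (rho is imaginary here)
zeta = sp.cos(t)*sp.cos(th)

# r_pm^2 = rho^2 + (zeta +- 1)^2 reduce to (cos t +- cos theta)^2
assert sp.simplify(rho**2 + (zeta + 1)**2 - (sp.cos(t) + sp.cos(th))**2) == 0
assert sp.simplify(rho**2 + (zeta - 1)**2 - (sp.cos(t) - sp.cos(th))**2) == 0
```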
\subsection{Metric potentials} \label{sec:metricpotentials} In order to obtain the corresponding metric potentials $u$, $Q$ and $M$, we could proceed as follows. In a first step, we calculate the auxiliary quantity $a$ from $\mathcal E$ via line integration using \eqref{eq:defb}. Then we solve \eqref{eq:deff}, \eqref{eq:defa} for $u$ and $Q$ to obtain \begin{equation}\label{eq:uQ} \mathrm e^u=\frac{f a^2}{\sin^2\! t}+\frac{\sin^2\!\theta}{f},\quad Q=\frac{f^2 a}{f^2 a^2 + \sin^2\! t\sin^2\!\theta}. \end{equation} Finally, we compute $M$ from a line integral using \cite{beyer11} \pagebreak \begin{eqnarray}\fl\label{eq:M1} (\cos^2\! t-\cos^2\!\theta)\partial_t M & = & \frac{1}{2}\mathrm e^{2u}\frac{\sin^3\! t}{\sin\theta} \Big[\cos t\sin\theta[(\partial_t Q)^2+(\partial_\theta Q)^2] -2\sin t\cos\theta (\partial_t Q)(\partial_\theta Q)\Big] \nonumber\\ & & +\frac{1}{2}\sin t \sin\theta \Big[\cos t\sin\theta[(\partial_t u)^2+(\partial_\theta u)^2] -2\sin t\cos\theta (\partial_t u)(\partial_\theta u)\Big] \nonumber\\ & & -(2\cos^2\!t\,\cos^2\!\theta\,-\cos^2\!t-\cos^2\!\theta) \,\partial_t u \nonumber\\ & & -2\sin t\cos t\sin\theta\cos\theta(\partial_\theta u+\tan\theta), \end{eqnarray} \begin{eqnarray}\label{eq:M2}\fl (\cos^2\! t-\cos^2\!\theta)\partial_\theta M & = & -\frac{1}{2}\mathrm e^{2u}\frac{\sin^3\! t}{\sin\theta} \Big[\sin t\cos\theta[(\partial_t Q)^2+(\partial_\theta Q)^2] -2\cos t\sin\theta (\partial_t Q)(\partial_\theta Q)\Big] \nonumber\\ & & -\frac{1}{2}\sin t \sin\theta \Big[\sin t\cos\theta[(\partial_t u)^2+(\partial_\theta u)^2] -2\cos t\sin\theta (\partial_t u)(\partial_\theta u)\Big] \nonumber\\ & & -2\sin t\cos t\sin\theta\cos\theta(\partial_t u-\tan t) \nonumber\\ & & -(2\cos^2\!t\,\cos^2\!\theta\,-\cos^2\!t-\cos^2\!\theta) \,\partial_\theta u. \end{eqnarray} However, it turns out that the first step, i.e.\ the calculation of $a$ from \eqref{eq:defb}, leads to fairly complicated integrals which cannot easily be solved. 
Fortunately, as an alternative to \eqref{eq:defb}, the function $a$ may also be calculated directly from the solution $\mu(\xi)$ of the integral equation \eqref{eq:inteq}. As shown by Manko and Sibgatullin \cite{Manko93}\footnote{The quantity $\omega$ in Eq. (3.21) of \cite{Manko93} is the negative of our function $a$.}, $a$ is given by \begin{equation} a=\frac{2}{\pi f}\,\Im\int_{-1}^1\frac{\xi\mu(\xi)\,\mathrm d\sigma}{\sqrt{1-\sigma^2}}. \end{equation} Applying this formula to $\mu(\xi)$ as given in \eqref{eq:ansatz}, we obtain \begin{equation} a=\frac{2}{f}\Im\left[\zeta A+\left(1-\frac{1}{r_+}\right)A_++\left(1-\frac{1}{r_-}\right)A_-+\left(1-\frac{1}{r_d}\right)A_d\right]. \end{equation} Here, we can replace $A$, $A_\pm$, $A_d$ by the solutions of the algebraic system of equations as discussed above. Afterwards, we again take the limit $d\to\infty$ and then transform the solution to the coordinates $t$, $\theta$ via \eqref{eq:transform}. In this way, we obtain the correct function $a$ for our time-evolution problem. Finally, we may calculate $u$ and $Q$ from $a$ and $f$ using \eqref{eq:uQ}. The results are the remarkably simple functions \begin{eqnarray}\fl \mathrm e^u & = & 16 c_1 [c_3^2 (1-x^2) (1-y)^3 + 4 c_1^2 (1 + y)] /\Big[(1 + y) \Big(c_3^4 (1-x^2)^2 (1-y)^6 + 16 c_1^4 (1+y)^2 \nonumber\\ \fl && + 8 c_1^2 (1-y)^2 [2 - 4 c_3 x (y+2) + c_3^2 (1 - y^2 + x^2 (3 y^2+ 8 y+7))]\Big)\Big],\label{eq:solu}\\ \fl Q & = & x+\frac{c_3}{8} (1-x^2) \Big[4 c_1^2 (y^3 + 5 y^2+ 11 y +7 )+(1-y)^3 \Big(4 - 8 c_3 x (y+2) \nonumber\\ \fl && + c_3^2 [y^2+ 4 y +7 + 3 x^2 (y^2 + 4 y+3 )]\Big)\Big] /[c_3^2 (1-x^2) (1-y)^3 + 4 c_1^2 (1+y)],\label{eq:solQ} \end{eqnarray} where we again used the abbreviations $x=\cos\theta$, $y=\cos t$. From these expressions for $u$ and $Q$, we may calculate the remaining metric potential $M$ with \eqref{eq:M1}, \eqref{eq:M2}. 
The corresponding integration can be done explicitly and we obtain \begin{eqnarray}\fl \mathrm e^M & = & c \Big[c_3^4 (x^2-1)^2 (y-1)^6 + 16 c_1^4 (y+1)^2 + 8 c_1^2 (y-1)^2 \Big(2 - 4 c_3 x (y+2) \nonumber\\ \fl && + c_3^2 [1 - y^2 + x^2 (3 y^2+8 y +7)]\Big)\Big],\label{eq:solM} \end{eqnarray} where $c>0$ is an integration constant. However, $c$ cannot be chosen freely but is fixed by axis regularity conditions. It follows from the analysis in \cite{beyer11} that a combination of the potentials $M$ and $u$ must be constant on the axes, \begin{equation} \theta=0,\pi:\quad \mathrm e^{M+u}=R_0. \end{equation} For the above functions $u$ and $M$ we find $\lim_{\theta\to0/\pi}\mathrm e^{M+u}=64cc_1^3$. Thus $c$ is given by \begin{equation}\label{eq:c} c=\frac{R_0}{64c_1^3}. \end{equation} We have now found all metric potentials corresponding to our initial data \eqref{eq:indatb}, \eqref{eq:indatf} and in this way constructed a family of smooth Gowdy-symmetric generalized Taub-NUT solutions depending on the three parameters $c_1>0$, $c_3\in\mathds R\xspace$ and $R_0>0$. Finally, we note that the metric potentials can be written in the concise form \begin{eqnarray}\label{eq:solnew1} \mathrm e^M & = & \frac{R_0}{64 c_1^3}(U^2+V^2),\quad \mathrm e^u = \frac{R_0 }{4c_1^2}\cdot\frac{U\mathrm e^{-M}}{1+y},\\ \label{eq:solnew2} Q & = & x+\frac{c_3}{8}(1-x^2)\left(7+4y+y^2+\frac{(1-y)V^2}{4c_1^2U}\right) \end{eqnarray} with \begin{equation}\fl\label{eq:defUV} U := c_3^2(1-x^2)(1-y)^3+4c_1^2(1+y),\quad V := 4c_1(1-y)[1-c_3x(2+y)]. \end{equation} \section{Properties of the solution\label{sec:prop}} \subsection{Taub solution\label{sec:Taub}} The solution derived above contains the Taub solution \cite{Taub51} as a special case. 
If we set \begin{equation} c_3=0 \end{equation} and replace the parameters $c_1$ and $R_0$ in terms of constants $l$ and $m$ via \begin{equation} c_1=\frac{1}{l}\left(\sqrt{l^2+m^2}+m\right),\quad R_0=2l\sqrt{l^2+m^2}, \end{equation} then the solution \eqref{eq:solnew1}-\eqref{eq:solnew2} simplifies to \begin{equation} \label{eq:taubsolutions} \mathrm e^M=l^2+\left(m+\sqrt{l^2+m^2}y\right)^2,\quad \mathrm e^u=2l\sqrt{l^2+m^2}\,\mathrm e^{-M},\quad Q=x. \end{equation} This is indeed the Taub solution in our coordinates \eqref{eq:metric}, see \cite{beyer11}. \subsection{Discrete symmetry} It follows immediately from \eqref{eq:solu}-\eqref{eq:solM} that $u$ and $M$ are invariant under the transformation \begin{equation} c_3\mapsto -c_3, \quad \theta\mapsto\pi-\theta \quad (\Leftrightarrow x\mapsto -x), \end{equation} whereas $Q$ changes into $-Q$. As a consequence, we see that the metric \eqref{eq:metric} is invariant under the simultaneous transformation \begin{equation}\label{eq:sym} c_3\mapsto -c_3,\quad \theta\mapsto \pi-\theta,\quad \rho_2\mapsto -\rho_2, \end{equation} which interchanges the axes $\theta=0$ and $\theta=\pi$. \subsection{Regularity\label{sec:regularity}} As discussed in Sec.~\ref{sec:Gowdy}, smooth Gowdy-symmetric generalized Taub-NUT solutions are regular for $t\in (0,\pi)$ and they can be smoothly extended through $t=0$. Moreover, it is expected that they can also be smoothly extended through $t=\pi$, provided $b_B-b_A\neq \pm4$ holds, where $b_A=b(t=0,\theta=0)$ and $b_B=b(t=0,\theta=\pi)$. If this condition is violated, then we expect scalar curvature singularities at the points $C$ or $D$, see Fig.~\ref{fig:Gowdy}. For our solution \eqref{eq:solu}-\eqref{eq:solM}, we find $b_A=-1-2c_3$, $b_B=-1+2c_3$ and hence \begin{equation} b_B-b_A=4c_3. \end{equation} Consequently, the solution should be regular as long as $c_3\neq\pm1$. 
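The sum-of-squares structure $\mathrm e^M\propto U^2+V^2$, on which this regularity argument rests, and the concise form of $Q$ can both be checked against the explicit expressions \eqref{eq:solM} and \eqref{eq:solQ}. The following Python/SymPy sketch is an illustration only; the symbols correspond to \eqref{eq:defUV}, \eqref{eq:solM} and \eqref{eq:solQ}:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
c1 = sp.symbols('c1', positive=True)
c3 = sp.symbols('c3', real=True)

# building blocks from eq. (defUV)
U = c3**2*(1 - x**2)*(1 - y)**3 + 4*c1**2*(1 + y)
V = 4*c1*(1 - y)*(1 - c3*x*(2 + y))

# bracket of the explicit expression (solM), i.e. e^M / c
bracket = (c3**4*(x**2 - 1)**2*(y - 1)**6 + 16*c1**4*(y + 1)**2
           + 8*c1**2*(y - 1)**2*(2 - 4*c3*x*(y + 2)
                                 + c3**2*(1 - y**2 + x**2*(3*y**2 + 8*y + 7))))

# e^M = c (U^2 + V^2): the two representations agree identically
assert sp.expand(U**2 + V**2 - bracket) == 0

# explicit Q from (solQ) versus the concise form (solnew2)
Q_explicit = x + sp.Rational(1, 8)*c3*(1 - x**2)*(
    4*c1**2*(y**3 + 5*y**2 + 11*y + 7)
    + (1 - y)**3*(4 - 8*c3*x*(y + 2)
                  + c3**2*(y**2 + 4*y + 7 + 3*x**2*(y**2 + 4*y + 3)))
    )/(c3**2*(1 - x**2)*(1 - y)**3 + 4*c1**2*(1 + y))
Q_concise = x + sp.Rational(1, 8)*c3*(1 - x**2)*(
    7 + 4*y + y**2 + (1 - y)*V**2/(4*c1**2*U))

assert sp.simplify(Q_explicit - Q_concise) == 0
```

Since $U^2+V^2$ is a sum of squares, this also makes the positivity of $\mathrm e^M$ away from common zeros of $U$ and $V$ manifest.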
That this is true can easily be verified by calculating the components $g_{ij}$ of the metric in terms of the functions $\mathrm e^M$, $\mathrm e^u$ and $Q$ in \eqref{eq:solnew1}, \eqref{eq:solnew2}. All components turn out to be analytic functions of $x=\cos\theta$ and $y=\cos t$ everywhere in the interior of the Gowdy square, i.e.\ for $\theta\in(0,\pi)$, $t\in(0,\pi)$, provided that $\mathrm e^M\neq0$ holds. Moreover, the determinant of the metric is \begin{equation}\label{eq:det} \det(g)=-R_0^2\,\mathrm e^{2M}\sin^2\!\theta\,\sin^2\! t, \end{equation} i.e.\ the metric is non-degenerate in the interior of the Gowdy square, again under the condition $\mathrm e^M\neq 0$. (Note, however, that the above representation of the metric in terms of Euler coordinates \emph{is} degenerate at the boundaries $\theta=0,\pi$ and $t=0,\pi$ as a consequence of coordinate singularities. At the axes $\theta=0,\pi$ we find the usual axes singularities, which can be removed by locally introducing Cartesian coordinates\footnote{In \ref{App2}, where geodesics at $\theta=0$ and $\theta=\pi$ are calculated, it is shown how the axes singularities can be removed.}. And the coordinate singularities at the Cauchy horizons at $t=0$ and $t=\pi$ can also be removed by introducing suitable ``regular'' coordinates, see Sec.~\ref{sec:extensions} below.) We conclude from the above discussion that the regularity of the solution is related to the zeros of $\mathrm e^M$. In order to find out whether $\mathrm e^M$ can vanish for $x\in[-1,1]$, $y\in[-1,1]$, we note that according to \eqref{eq:solnew1}, $\mathrm e^M=0$ is equivalent to $U=V=0$. This leads to the conditions \begin{equation} y+1=0,\quad 1-x^2=0,\quad 1-c_3(y+2)x=0 \end{equation} for vanishing $\mathrm e^M$, which have the two solutions \begin{eqnarray} c_3=\pm1,\quad x=c_3,\quad y=-1. 
\end{eqnarray} This shows that $\mathrm e^M$ can never vanish in the \emph{interior} of the Gowdy square, but in the singular cases $c_3=\pm1$, there are zeros at the \emph{boundary points} $C$ ($x=1$, $y=-1$) or $D$ ($x=-1$, $y=-1$), respectively. We may also calculate the Kretschmann scalar $K=R_{ijkl}R^{ijkl}$, which turns out to have the form \begin{equation}\label{eq:Kret} K(x,y)=\frac{P(x,y)}{\mathrm e^{6M(x,y)}}, \end{equation} where $P$ is a lengthy polynomial in $x$ and $y$. Obviously, also $K$ is regular wherever $\mathrm e^M\neq0$ holds. Hence we conclude that the Kretschmann scalar is bounded in the entire Gowdy square --- with exception of the two singular cases $c_3=\pm1$, in which $K$ diverges as expected at the points $C$ or $D$. \subsection{Embedding of 2-surfaces} In order to get a better idea of the geometric properties of the solution, it is interesting to visualize particular 2-surfaces by embedding them in Euclidean space. To find appropriate 2-surfaces\footnote{Suppose that we have at least one spacelike Killing vector field $\xi$. Then a more geometrical construction of 2-spheres on the basis of our discussion in \Sectionref{sec:background} (more details are given in \cite{beyer11}) is as follows. Since the Hopf map maps $M$ to the quotient manifold $\mathds R\xspace\times\ensuremath{\mathbb S^2}\xspace$ with a natural $2+1$-dimensional Lorentzian metric, the $t=\mathrm{constant}$-surfaces in the quotient manifold are naturally homeomorphic to $\ensuremath{\mathbb S^2}\xspace$ and their induced metric is Riemannian. A comparison with \Eqref{eq:3metric} reveals that this metric can be expressed explicitly in terms of the function $u$ and $M$ in analogy to \Eqref{eq:metrich}. 
In the Gowdy case, the result depends on the choice of the Killing vector field $\xi=\xi_1$.}, we start by considering the embedding of the 3-sphere \ensuremath{\mathbb S^3}\xspace in $\mathds R\xspace^4$ with Euler coordinates \eqref{eq:euler1}, \eqref{eq:euler2}, \begin{eqnarray}\label{eq:emb1} x_1=\cos\frac\theta 2 \cos\lambda_1,\quad x_2=\cos\frac\theta 2 \sin\lambda_1,\\ \label{eq:emb2} x_3=\sin\frac\theta 2 \cos\lambda_2,\quad x_4=\sin\frac\theta 2 \sin\lambda_2. \end{eqnarray} Here, $x_1,\dots,x_4$ are Cartesian coordinates in $\mathds R\xspace^4$ and $\theta$, $\lambda_1$, $\lambda_2$ are coordinates in \ensuremath{\mathbb S^3}\xspace. The relation between $\lambda_1$, $\lambda_2$ and our coordinates $\rho_1$, $\rho_2$ is [cf.~\eqref{eq:eulerangleparm2}] \begin{equation} \lambda_1=\frac{\rho_1+\rho_2}{2},\quad \lambda_2=\frac{\rho_1-\rho_2}{2}. \end{equation} It follows from \eqref{eq:emb1}, \eqref{eq:emb2} that $\lambda_2=0$ is a two-dimensional hemisphere (with $x_3\ge0$ and such that $\theta=\pi$ corresponds to the north pole and $\theta=0$ to the equator) in the three-dimensional space $x_4=0$. Similarly, the subspace $\lambda_2=\pi$ describes a hemisphere with $x_3\le 0$ (where the south pole and the equator are obtained for $\theta=\pi$ and $\theta=0$, respectively). Hence, a complete 2-sphere can be obtained by considering $\lambda_2=0$ and $\lambda_2=\pi$ together, which corresponds to $\rho_1=\rho_2$ and $\rho_1=\rho_2+2\pi$. Since slices $t=\textrm{constant}$ of smooth Gowdy-symmetric generalized Taub-NUT solutions have \ensuremath{\mathbb S^3}\xspace-topology, we may expect from the above discussion that the subspaces \begin{equation} \Sigma=\Sigma_1\cup\Sigma_2 \end{equation} with \begin{eqnarray} \Sigma_1:=\{\theta\in[0,\pi], t=t_0, \rho_1=\rho_2\in[0,2\pi)\},\\ \Sigma_2:=\{\theta\in[0,\pi], t=t_0, \rho_1=\rho_2+2\pi\in[0,2\pi)\}, \end{eqnarray} describe two-dimensional surfaces with \ensuremath{\mathbb S^2}\xspace-topology for any $t_0\in(0,\pi)$. 
In the following we try to embed $\Sigma$ isometrically into $\mathds R\xspace^3$. It is generally not guaranteed that such an embedding exists globally, but we will see that this is possible for some surfaces $\Sigma$. To this end we set $\mathrm d t=0$ and $\mathrm d\rho_1=\mathrm d\rho_2=:\mathrm d\varphi$ in \eqref{eq:metric} to obtain the metric $h$ in $\Sigma$, \begin{equation}\label{eq:metrich} h=\mathrm e^M\mathrm d\theta^2+R_0[\sin^2\! t\,\mathrm e^u(1+Q)^2+\sin^2\!\theta\,\mathrm e^{-u}]\mathrm d\varphi^2. \end{equation} In the next step, we perform a coordinate transformation $\theta=\theta(\alpha)$ and investigate whether the metric $h$ in these coordinates can be brought to the form \begin{equation} h=(r^2+r'^{\,2})\mathrm d\alpha^2+r^2\sin^2\!\alpha\,\mathrm d\varphi^2 \end{equation} for an appropriate function $r=r(\alpha)$ describing the embedded surface in spherical coordinates $(r,\alpha,\varphi)$, where a prime $'$ denotes a derivative with respect to $\alpha$. Hence we have to solve the two equations \begin{equation}\label{eq:emb} r^2+r'^{\,2}=\mathrm e^M\theta'^{\,2},\quad r^2\sin^2\!\alpha=R_0[\sin^2\! t\,\mathrm e^u(1+Q)^2+\sin^2\!\theta\,\mathrm e^{-u}], \end{equation} which can be done numerically\footnote{For a numerical solution, we use the second equation in \eqref{eq:emb} to eliminate $r$ and $r'$ from the first equation. This leads to an ODE of the form $\theta'(\alpha)=F(\theta,\alpha)$. Starting from the north pole $\alpha=0$, where we have the initial condition $\theta=\pi$ according to the above discussion, we solve the ODE with a fourth-order Runge-Kutta method until $\theta=0$ is reached (corresponding to the equator). This provides the upper ``hemisphere'' of the embedded figure --- the lower one is obtained from a reflection. 
A technical detail is a degeneracy of the equation at $\alpha=0$, which allows us to choose the initial derivative $\theta'$ in addition to the function value; the particular value $\theta'(0)$ is unimportant and just fixes the origin of the polar coordinates. At the end we shift the embedding diagram to obtain a symmetric picture.}. It turns out that the embedding in $\mathds R\xspace^3$ for surfaces $\Sigma$ near $t=\pi$ is only possible for negative values of $c_3$. On the other hand, we could consider slices $\lambda_1=0,\pi$ instead of $\lambda_2=0,\pi$, in which case embeddings for positive $c_3$ would be possible. However, because of the invariance of the solution under the transformation \eqref{eq:sym}, it is sufficient to consider $c_3\le 0$, which we will do in the remainder of this subsection. \begin{figure}\centering \vspace{2mm} \includegraphics[scale=1.0]{EinbettungX.eps} \caption{Embeddings of surfaces $t=\textrm{constant}$, $\rho_1-\rho_2=0,\pi$ in Euclidean space. The complete 2-surfaces are obtained by rotating the curves around the $\tilde y$-axis. Parameters: $R_0=1$, $c_1=2$.\label{fig:Einbettung}} \end{figure} Examples for several parameter values are given in Fig.~\ref{fig:Einbettung}, where the solution $r=r(\alpha)$ is plotted in Cartesian coordinates $\tilde x$, $\tilde y$ in the form $\tilde x(\alpha)=r(\alpha)\sin\alpha$, $\tilde y(\alpha)=r(\alpha)\cos\alpha$. The resulting curves represent cross sections $\varphi=\textrm{constant}$ of the cylindrically symmetric embedded surface. Panel (a) of Fig.~\ref{fig:Einbettung} shows the behaviour of the embedded surfaces in the limit $t\to 0$. For $t=1$ we obtain a 2-surface of spherical topology as expected. For smaller $t$, the ``equatorial'' circumference at $\tilde y=0$ decreases and finally reaches $0$ for $t=0$. Interestingly, we therefore observe that the embedding for $t=0$ corresponds to \emph{two} spheres instead of only one spherical surface. 
Indeed, \eqref{eq:emb} can be solved exactly for $t=0$ and leads to $\theta=\alpha$, $r=\sqrt{R_0 c_1}=\textrm{constant}$ independently of the value of $c_3$. Hence each of the components $\Sigma_1$ and $\Sigma_2$ corresponds to an entire sphere of radius $\sqrt{R_0 c_1}$ instead of only a hemisphere (as for $t>0$). The situation near $t=\pi$ for $c_3=0$ (which is the ``Taub case'', see Sec.~\ref{sec:Taub}) is shown in panel (b). We see that the qualitative behaviour near $t=\pi$ for $c_3=0$ is the same as the behaviour near $t=0$ for arbitrary $c_3$: we have surfaces of spherical topology which narrow down at the equator in the limit and finally divide into two spheres. Equations \eqref{eq:emb} can also be solved exactly for $t=\pi$ and $c_3=0$. The solution is $\theta=\alpha$, $r=\sqrt{R_0/c_1}=\textrm{constant}$. The behaviour near $t=\pi$ is slightly different for $c_3\neq 0$, see Fig.~\ref{fig:Einbettung}c. In this case we again obtain spherical surfaces that approach a surface with two spherical components. However, the limiting surface is badly behaved at $\theta=0$ (corresponding to $\tilde x=\tilde y=0$), where a \emph{conical singularity} is present, i.e.\ the curves are not orthogonal to the $\tilde y$-axis at this point. A special case is the ``singular case'' $c_3=-1$. As illustrated in Fig.~\ref{fig:Einbettung}d, the 2-surfaces contract to an interval on the $\tilde y$-axis for $t\to\pi$. This also follows from \eqref{eq:metrich}, because the coefficient of $\mathrm d\varphi^2$ tends to $0$ for $t\to\pi$ such that the two-metric degenerates to $h=\mathrm e^M\mathrm d\theta^2$. Obviously, this is the metric of a one-dimensional line. The reason is that the tangent vector $\partial_{\rho_1}+\partial_{\rho_2}$ on $\Sigma$ becomes a null vector for $t=\pi$, i.e.\ one of the two directions within $\Sigma$ becomes lightlike and does not contribute to the distance anymore. 
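The exact limiting radii quoted above can be confirmed symbolically. The following SymPy sketch, which assumes only the closed form \eqref{eq:solu}, checks that $\mathrm e^{-u}\to c_1$ as $t\to0$ (so that $r^2=R_0\,\mathrm e^{-u}=R_0c_1$) and that $\mathrm e^{-u}\to 1/c_1$ as $t\to\pi$ in the Taub case $c_3=0$ (so that $r^2=R_0/c_1$):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
c1 = sp.symbols('c1', positive=True)
c3 = sp.symbols('c3', real=True)

# e^u from eq. (solu), with x = cos(theta), y = cos(t)
num = 16*c1*(c3**2*(1 - x**2)*(1 - y)**3 + 4*c1**2*(1 + y))
den = (1 + y)*(c3**4*(1 - x**2)**2*(1 - y)**6 + 16*c1**4*(1 + y)**2
               + 8*c1**2*(1 - y)**2*(2 - 4*c3*x*(y + 2)
                                     + c3**2*(1 - y**2 + x**2*(3*y**2 + 8*y + 7))))
eu = num/den

# t -> 0 (y -> 1): e^u = 1/c1 independently of x and c3, hence r = sqrt(R0 c1)
assert sp.simplify(eu.subs(y, 1) - 1/c1) == 0

# t -> pi (y -> -1) in the Taub case c3 = 0: e^u = c1, hence r = sqrt(R0/c1)
eu_taub = sp.cancel(eu.subs(c3, 0))
assert sp.simplify(eu_taub.subs(y, -1) - c1) == 0
```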
Moreover, we observe the expected singular behaviour of the solution at $t=\pi$, $\theta=0$ (corresponding to the ``north'' and ``south poles'' of the embedded figures), where the curvature of the embedded surfaces diverges. Indeed, the Gaussian curvature at the poles turns out to be $c_1(1-3c_3)/[R_0(1+c_3)^3]$ and diverges for $c_3\to -1$. \subsection{The singular cases} \label{sec:singularcases} In our previous discussion we have mostly assumed that $c_3\neq\pm1$ holds, i.e.\ we have excluded the singular cases. But in the following we will have a closer look at them. As a consequence of the symmetry \eqref{eq:sym}, it is sufficient to discuss only the solutions with $c_3=1$. The models with $c_3=-1$ will differ from these only by a reflection at $\theta=\pi/2$, i.e.\ an interchange of the two axes, and a $\rho_2$-reflection. We have seen in Sec.~\ref{sec:regularity} that the Kretschmann scalar $K$ diverges at point $C$ ($\theta=0$, $t=\pi$, or, equivalently, $x=1$, $y=-1$) for $c_3=1$. It is interesting to study the behaviour of $K$ in a vicinity of the singularity in more detail. To this end, starting from the $x$-$y$-coordinates, we introduce polar coordinates $(r,\phi)$ centered at the point $C$, \begin{equation}\label{eq:polar} x=1-r\cos\phi,\quad y=-1+r\sin\phi,\quad r\ge0, \quad\phi\in\Big[0,\frac{\pi}{2}\Big]. \end{equation} In terms of these coordinates, the Kretschmann scalar becomes a rational function of $r$, $\sin\phi$ and $\cos\phi$, i.e.\ it has a simple structure even though the explicit expression is rather lengthy. 
The leading order behaviour close to the singularity at $r=0$ is given by \begin{equation}\label{eq:exp} K = \frac{g(\phi)}{r^6}+\mathcal O(r^{-5}) \end{equation} with \begin{equation} g(\phi)=\frac{768c_1^6(c_1^2-4)(1+T^2)^3}{R_0^2(c_1^2+4)^2(4+c_1^2T^2)^6}p_1(c_1T)p_2(c_1T),\quad T=\tan\phi, \end{equation} where \begin{equation}\fl p_{1/2}(x)=x^3-6\alpha_{1/2}x^2-12x+8\alpha_{1/2},\quad \alpha_1=\frac{c_1-2}{c_1+2},\quad \alpha_2=-\frac{c_1+2}{c_1-2}. \end{equation} Note that, as a consequence of the rational structure of the full expression for $K$, the expansion \eqref{eq:exp} is not only valid for constant $\phi$, but actually holds uniformly in $\phi$. Hence, if we approach the singularity along an arbitrary curve $r(s)$, $\phi(s)$, where $s$ is some curve parameter, then the divergent behaviour of $K$ is determined by the behaviour of $g(\phi(s))$. Of particular importance are the zeros of $g(\phi)$, which are identical with the zeros of the polynomial $p_1(x)p_2(x)$. $p_1$ and $p_2$ are polynomials of third degree, and they turn out to always have three real zeros --- if we exclude the special case $c_1=2$ for a moment. Moreover, the zeros of $p_1(x)$ and $p_2(x)$ are distinct, hence the product $p_1(x)p_2(x)$ has six distinct real zeros. However, since $c_1>0$ and $T=\tan\phi\ge0$ for $\phi\in[0,\pi/2]$, the argument $c_1T$ must be non-negative, i.e.\ we are only interested in non-negative zeros. Since precisely three of the six zeros turn out to be positive, we see that there are always three directions $\phi_1$, $\phi_2$, $\phi_3$, along which the leading order term $\propto r^{-6}$ of $K$ vanishes, such that $K$ may then diverge at most proportional to $r^{-5}$. Moreover, at these zeros, the sign of $p_1(c_1T)p_2(c_1T)$ changes, i.e.\ there are both regions in which $K$ diverges to $+\infty$ and regions where it diverges to $-\infty$. So far we have assumed $c_1\neq 2$. 
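The claimed root structure --- six distinct real zeros of $p_1p_2$, exactly three of them positive --- can be spot-checked numerically for sample values $c_1\neq2$. A small NumPy sketch (an illustration, not a proof):

```python
import numpy as np

def roots_p1p2(c1):
    """Real roots of p1(x) p2(x) for given c1 > 0, c1 != 2."""
    a1 = (c1 - 2)/(c1 + 2)
    a2 = -(c1 + 2)/(c1 - 2)
    # p_{1/2}(x) = x^3 - 6 a x^2 - 12 x + 8 a
    roots = np.concatenate([np.roots([1, -6*a, -12, 8*a]) for a in (a1, a2)])
    return np.sort(roots[np.abs(roots.imag) < 1e-8].real)

for c1 in (0.5, 1.0, 3.0, 10.0):
    r = roots_p1p2(c1)
    assert len(r) == 6                      # all six zeros are real
    assert len(np.unique(r.round(8))) == 6  # ...and pairwise distinct
    assert (r > 0).sum() == 3               # exactly three are positive
```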
Now we look at the special case $c_1=2$, in which $g(\phi)$ simplifies to \begin{equation} g(\phi)=\frac{192}{R_0^2(1+T^2)^3}T(T^2-3)(3T^2-1). \end{equation} The non-negative zeros are then located at $T=0, 1/\sqrt{3}, \sqrt{3}, \infty$, corresponding to $\phi=0,\pi/6,\pi/3, \pi/2$. Hence, for $c_1=2$, the function $g(\phi)$ has four non-negative zeros. The discussion so far shows that $K$ can diverge to $\pm\infty$, depending on the curve along which the singularity is approached. But could there even be curves along which $K$ remains finite? Such a curve would necessarily have to approach the singularity asymptotically along one of the directions given by the zeros of $g(\phi)$, since evidently the vanishing of the leading divergent term $\propto r^{-6}$ of $K$ is a necessary condition for $K$ to remain finite. And the behaviour of such a curve would need to be sufficiently ``fine-tuned'' near the singularity to achieve that also the other divergent terms $\propto r^{-5}$, $\propto r^{-4},\dots,\propto r^{-1}$ in $K$ vanish. Remarkably, this turns out to be possible, and we will illustrate this in the special case $c_1=2$, where the relevant formulae become simpler. By way of example, we give some curves with the desired properties in the form $x=x(y)$ or $y=y(x)$. The following four families of curves $\gamma_1,\dots,\gamma_4$, which depend on an additional constant parameter $z\in\mathds R\xspace$, indeed all lead to a bounded Kretschmann scalar. 
The limit of $K$ as $x\to1$, $y\to-1$, which depends on $z$, is also indicated: \begin{eqnarray}\fl\label{eq:curve1} \gamma_1:\quad && x = 1-\frac{1}{96}(y+1)^4-\frac{1}{48}(y+1)^5-\frac{13}{768}(y+1)^6+z(y+1)^7,\\ \fl\nonumber && K \to -\frac{3}{4R_0^2}(768z+5), \\ \fl \gamma_2:\quad && x=1-\frac{\sqrt{3}}{3}(y+1)+\frac{1-\sqrt{3}}{6}(y+1)^2+\frac{2-\sqrt{3}}{12}(y+1)^3\\ \fl\nonumber && \qquad +\left(\frac{5}{36}-\frac{53}{648}\sqrt{3}\right)(y+1)^4+\left(\frac{175}{1296}-\frac{19}{162}\sqrt{3}\right)(1+y)^5 \\\fl\nonumber && \qquad +\left(\frac{973}{5184}-\frac{781}{5184}\sqrt{3}\right)(y+1)^6 +z(y+1)^7,\\ \fl\nonumber && K\to \frac{1}{768R_0^2}(139968z-53496+19261\sqrt{3}),\\ \fl \gamma_3:\quad && x=1-\sqrt{3}(y+1)+\frac{3-\sqrt{3}}{2}(y+1)^2+\frac{6-5\sqrt{3}}{4}(y+1)^3\\ \fl \nonumber &&\qquad +\left(\frac{49}{12}-\frac{15}{8}\sqrt{3}\right)(y+1)^4 +\left(\frac{347}{48}-\frac{125}{24}\sqrt{3}\right)(y+1)^5\\ \fl\nonumber &&\qquad +\left(\frac{3155}{192}-\frac{2233}{192}\sqrt{3}\right)(y+1)^6+z(y+1)^7,\\ \fl\nonumber && K\to-\frac{1}{256R_0^2}(576z-25500+11695\sqrt{3}),\\ \fl \label{eq:curve4} \gamma_4:\quad && y=-1+\frac{1}{96}(x-1)^4-\frac{1}{192}(x-1)^5-\frac{5}{768}(x-1)^6 +z(x-1)^7,\\ \fl\nonumber && K\to-\frac{9}{8R_0^2}(512z-5). \end{eqnarray} Some of the curves in each family are illustrated in Fig.~\ref{fig:Curves}. One can clearly see that curves of the same family are almost indistinguishable close to the singularity, since they have to approach this point in a well-defined way to guarantee regularity of the Kretschmann scalar. Note that the directions, along which the four families approach the singularity, correspond to the four non-negative zeros of $g(\phi)$ in this case. 
\begin{figure}\centering \includegraphics[scale=.9]{CurvesX.eps} \caption{Illustration of the four families of curves $\gamma_1,\dots,\gamma_4$, cf.~\eqref{eq:curve1}-\eqref{eq:curve4}, along which the Kretschmann scalar in the case $c_1=2$ approaches a finite limit at the singularity. \label{fig:Curves}} \end{figure} Since the limit of the Kretschmann scalar is a linear function of $z$ in all four cases, the limit can be any real number. We conclude that we can approach the singularity at point $C$ either along curves such that $K\to\pm\infty$, or along curves such that $K$ has any prescribed finite limit. In other words, we observe a \emph{directional behaviour} of the Kretschmann scalar. This is similar to the behaviour of the Kretschmann scalar in the Curzon solution \cite{Curzon1925}, where it turned out that the singularity contains some ``hidden structure'', and it is actually possible to extend the solution beyond that singularity. The original construction of the extended Curzon spacetime by Scott and Szekeres can be found in \cite{ScottSzekeres1986a,ScottSzekeres1986b}, and for a detailed overview we refer to \cite{Whale2014}. The Curzon singularity was classified later on as a so-called \emph{directional singularity}. Roughly speaking, this means that it is possible to approach the singularity either along curves such that the curvature becomes singular (e.g.\ the Kretschmann scalar diverges) or along curves at which everything remains regular. Moreover, it is possible to extend the spacetime through the singularity and to reach further regular regions. For a precise definition and explanation of directional singularities from the point of view of abstract boundary constructions, we refer to \cite{ScottSzekeres1994} and to \cite{Ashley2002,Whale2010}. The directional behaviour described above for our smooth Gowdy-symmetric generalized Taub-NUT solution might lead to the conjecture that the singularity for $c_3=1$ is also a directional singularity. 
However, for this it is not enough that the Kretschmann scalar remains finite along some curves. Instead, the entire geometry must remain regular. In particular, there must be curves approaching the directional singularity along which \emph{every} curvature invariant is bounded. Hence, if we could only find one invariant that diverges, even though the Kretschmann scalar remains finite, we would not have a directional singularity. Interestingly --- or unfortunately, if one likes directional singularities --- such a quantity can indeed be provided, namely the invariant \begin{equation} J:=R^{ab}{}_{cd} R^{cd}{}_{ef}R^{ef}{}_{ab}, \end{equation} which is cubic in the Riemann tensor, in contrast to $K$, which is quadratic. The explicit calculation shows that \begin{equation} J=\frac{\tilde P(x,y)}{\mathrm e^{9M(x,y)}}, \end{equation} where $\tilde P$ is another (very lengthy) polynomial in $x$ and $y$ (of 24th degree in $x$ and 36th degree in $y$). In terms of the polar coordinates \eqref{eq:polar}, $J$ becomes \begin{equation} J=\frac{\tilde g(\phi)}{r^9}+\mathcal O(r^{-8}), \end{equation} with \begin{eqnarray} \tilde g(\phi) &=& -\frac{6144 c_1^9(1+T^2)^{9/2}}{R_0^3(c_1^2+4)^3(4+c_1^2T^2)^9}\tilde p_1(T)\tilde p_2(T),\\ \tilde p_1(T) &=& c_1^4T^3+12c_1^2T^2-12c_1^2T-16,\\ \tilde p_2(T) &=& c_1^6(c_1^2-12)T^6+96c_1^6T^5-12c_1^4(11c_1^2-36)T^4-1280c_1^4T^3\\ &&+48c_1^2(9c_1^2-44)T^2+1536c_1^2T-192c_1^2+256.\nonumber \end{eqnarray} Again, the expansion holds uniformly in $\phi$. The zeros of $\tilde g(\phi)$ correspond to directions in which $J$ diverges slower, or, potentially, remains finite --- similarly to the above mentioned properties of $K$ in relation to the zeros of $g(\phi)$. Therefore, a necessary condition for the existence of a curve along which both $K$ \emph{and} $J$ remain finite is a simultaneous zero of $\tilde g(\phi)$ and $g(\phi)$. 
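This condition can be spot-checked numerically: for sample values of $c_1$, one computes the positive zeros of $g$ (via the roots of $p_1p_2$) and evaluates $\tilde p_1$ and $\tilde p_2$ there. The following sketch only probes a few parameter values and is meant as an illustration of the general statement below:

```python
import numpy as np

def pos_zeros_of_g(c1):
    """Directions T = tan(phi) >= 0 with g(phi) = 0, from p1(c1 T) p2(c1 T) = 0."""
    a1 = (c1 - 2)/(c1 + 2)
    a2 = -(c1 + 2)/(c1 - 2)
    roots = np.concatenate([np.roots([1, -6*a, -12, 8*a]) for a in (a1, a2)])
    real = roots[np.abs(roots.imag) < 1e-8].real
    return real[real > 0]/c1   # the zeros are located at c1*T = root

def tilde_p1(c1, T):
    return c1**4*T**3 + 12*c1**2*T**2 - 12*c1**2*T - 16

def tilde_p2(c1, T):
    return (c1**6*(c1**2 - 12)*T**6 + 96*c1**6*T**5
            - 12*c1**4*(11*c1**2 - 36)*T**4 - 1280*c1**4*T**3
            + 48*c1**2*(9*c1**2 - 44)*T**2 + 1536*c1**2*T
            - 192*c1**2 + 256)

for c1 in (0.5, 1.0, 3.0, 10.0):
    for T in pos_zeros_of_g(c1):
        # tilde g stays bounded away from zero where g vanishes
        assert abs(tilde_p1(c1, T)) > 1e-3 and abs(tilde_p2(c1, T)) > 1e-3
```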
However, as appropriate combinations of the polynomials $p_1$ or $p_2$ with $\tilde p_1$ or $\tilde p_2$ reveal, there are no values of the parameter $c_1$ for which such simultaneous zeros exist. Hence, $J$ necessarily diverges along all curves on which $K$ remains finite and vice versa. We conclude that, even though the Kretschmann scalar exhibits some directional behaviour, the singularity is actually not a directional singularity in the strict sense. In particular, there cannot be any reasonable extension of the spacetime through the singularity. This provides an interesting example of a solution for which the Kretschmann scalar does not contain all information about the singular behaviour, but where, in addition, other invariants have to be studied. \subsection{Beyond the Cauchy horizons\label{sec:extensions}} In all of our previous discussion we have considered the situation between the two boundaries $t=0$ and $t=\pi$, where, except in the singular cases, smooth Cauchy horizons are located. However, since it is a general property of smooth Gowdy-symmetric generalized Taub-NUT solutions that they can be extended through the horizons \cite{beyer11}, it might also be interesting to study some properties of \emph{extensions} of our exact solution. For a given spacetime $(M,g)$, another spacetime $(\hat M,\hat g)$ is called an extension of $(M,g)$ if there exists an isometric embedding $\Lambda:M\to\hat M$ and if $\hat M$ is ``larger'' than $M$ in the sense that $\Lambda(M)\subsetneqq \hat M$. We refer to the article by Chru\'sciel and Isenberg \cite{chrusciel93} for detailed definitions and discussions of spacetime extensions. In particular, extensions of the Taub solutions have been investigated in \cite{chrusciel93}. There are two ``standard'' past extensions and two ``standard'' future extensions of the Taub solutions. 
Chru\'sciel and Isenberg have shown that the two standard past extensions are equivalent (i.e.\ related via an isometry), and also the two standard future extensions are equivalent. Combining the two future and the two past extensions, one obtains four spacetimes that include both types of extensions. Interestingly, these four extensions can be divided into two groups. Both groups contain two equivalent extensions. However, each extension in the first group is \emph{not} equivalent to either extension in the second group. Hence there are \emph{inequivalent} extensions of the Taub spacetimes. In the following we will see that our exact solution has similar properties. The starting point for the construction of extensions of arbitrary smooth Gowdy-symmetric generalized Taub-NUT solutions in \cite{beyer11} was the observation that the representation of the metric \eqref{eq:metric} in terms of our coordinates is singular at $t=0$ and $t=\pi$, where $\det(g)=0$ holds, see \eqref{eq:det}. However, it is possible to introduce new coordinates $(t',\theta,\rho_1',\rho_2')$ with respect to which we can extend the solution in a regular way through the Cauchy horizons. An extension is then obtained by extending the domain of $t'$ and keeping the same form of the metric also for $t'$-values that correspond to points beyond the Cauchy horizons. To this end, appropriate smooth extensions of the metric potentials $M$, $u$ and $Q$ also have to be chosen. The required isometry $\Lambda$ is then just given by the identity map $(t',\theta,\rho_1',\rho_2')\in M\mapsto (t',\theta,\rho_1',\rho_2')\in\hat M$. Here we follow the same idea and construct extensions of our solution by first introducing new coordinates. However, instead of the new time coordinate $t'$ from \cite{beyer11}, we can also use $y=\cos t$. 
This is particularly useful since all metric potentials are already given as functions of $y$ (and $x=\cos\theta$, for which reason we will also use $x$ as a new coordinate, even though this would not be necessary for guaranteeing regularity at the Cauchy horizons). In addition, we perform transformations of the coordinates $\rho_1$ and $\rho_2$, which will be given shortly. The potentials $M$, $u$ and $Q$ are then extended from the domain $y\in(-1,1)$ to $y\in\mathds R\xspace$ by choosing their analytic continuations (which is possible in our case, since our solution is not only smooth but even analytic). In other words, we use the same formulae \eqref{eq:solnew1}, \eqref{eq:solnew2} also for $|y|\ge1$. In the following, we separately discuss future extensions, past extensions, and combinations of both. We start by extending the solution through the past Cauchy horizon at $t=0$ ($y=1$). For that purpose, we introduce new coordinates $(x,y,\rho_1',\rho_2')$ via \begin{equation}\label{eq:tran1} x=\cos\theta,\quad y=\cos t,\quad \rho_1=\rho_1'+\kappa\ln(1-y),\quad \rho_2=\rho_2', \end{equation} where $\kappa=\mathrm{constant}$. In terms of these coordinates, the metric becomes \pagebreak \begin{eqnarray}\fl g &=& \frac{\mathrm e^M}{1-x^2}\,\mathrm d x^2 +\frac{R_0\kappa^2(1+y)^2\mathrm e^u-\mathrm e^M}{1-y^2}\,\mathrm d y^2 \\ \fl\nonumber && +R_0\Big[-2\kappa(1+y)\mathrm e^u(\mathrm d\rho_1'+Q\mathrm d\rho_2')\mathrm d y +(1-y^2)\mathrm e^u(\mathrm d\rho_1'+Q\mathrm d\rho_2')^2+(1-x^2)\mathrm e^{-u}\mathrm d\rho_2'^2\Big]. \end{eqnarray} The apparently singular component $g_{yy}$ remains regular at $y= 1$ if we choose \begin{equation}\label{eq:kap} \kappa=\pm\sqrt{\lim\limits_{y\to 1}\frac{\mathrm e^{M-u}}{4R_0}}=\pm\frac{c_1}{2}. \end{equation} Note that this is only possible because $\lim_{y\to 1}\mathrm e^{M-u}$ does not depend on $x$, so that $\kappa$ is indeed a constant.
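The component bookkeeping in this coordinate change is easy to get wrong, so a symbolic cross-check can be useful. The following \texttt{sympy} sketch (our own illustration, not part of the derivation) builds the metric \eqref{eq:metric} in the coordinates $(x,y,\rho_1,\rho_2)$ with $\lambda=R_0(1-y^2)\mathrm e^u$ and $R^2=R_0^2(1-y^2)(1-x^2)$, applies the substitution \eqref{eq:tran1} for generic potentials $M$, $u$, $Q$, and confirms the displayed $g_{yy}$ and $g_{y\rho_1'}$ components as well as the determinant $\det(g)=-R_0^2\mathrm e^{2M}$ of the transformed metric:

```python
import sympy as sp

x, y, kappa, R0 = sp.symbols('x y kappa R_0')
M = sp.Function('M')(x, y)
u = sp.Function('u')(x, y)
Q = sp.Function('Q')(x, y)
lam = R0*(1 - y**2)*sp.exp(u)        # lambda = R_0 (1 - y^2) e^u
R2 = R0**2*(1 - y**2)*(1 - x**2)     # R^2 = R_0^2 sin^2(t) sin^2(theta)

# metric (eq:metric) in coordinates (x, y, rho_1, rho_2), x = cos(theta), y = cos(t)
g = sp.zeros(4, 4)
g[0, 0] = sp.exp(M)/(1 - x**2)       # e^M dtheta^2
g[1, 1] = -sp.exp(M)/(1 - y**2)      # -e^M dt^2
g[2, 2] = lam
g[2, 3] = g[3, 2] = lam*Q
g[3, 3] = lam*Q**2 + R2/lam

# eq:tran1: rho_1 = rho_1' + kappa ln(1 - y), hence d(rho_1) = d(rho_1') - kappa/(1-y) dy
J = sp.eye(4)
J[2, 1] = -kappa/(1 - y)
gnew = J.T*g*J

# the g_yy and g_{y rho_1'} components displayed in the text
gyy = (R0*kappa**2*(1 + y)**2*sp.exp(u) - sp.exp(M))/(1 - y**2)
assert sp.simplify(gnew[1, 1] - gyy) == 0
assert sp.simplify(gnew[1, 2] + R0*kappa*(1 + y)*sp.exp(u)) == 0
# determinant of the transformed metric
assert sp.simplify(gnew.det() + R0**2*sp.exp(2*M)) == 0
```

Since $M$, $u$ and $Q$ are left as generic functions here, the check only verifies the algebra of the coordinate change; the regularity condition \eqref{eq:kap} itself still requires the explicit limit of $\mathrm e^{M-u}$.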
However, this is not a coincidence for our particular solution, but holds in general for all smooth Gowdy-symmetric generalized Taub-NUT solutions as a consequence of the Einstein equations. The above coordinate transformation removes the coordinate singularity at the past Cauchy horizon. Consequently, we will use the transformed version of the metric also for $y\ge1$, i.e.\ in the region beyond the past Cauchy horizon. This provides us with the required extension. Note that the two possible sign choices for $\kappa$ correspond to two different past extensions. Adopting the notation from \cite{chrusciel93}, we denote these as $(M^{\downarrow\pm},g^{\downarrow\pm})$, where `$\pm$' specifies the sign of $\kappa$. Using the explicit solution, it is easy to show that the metric coefficients have no singularities in the extended region (with the exception of the usual axis singularities at $x=\pm 1$, which could be removed by another coordinate transformation), provided $\mathrm e^M\neq 0$ holds. Moreover, the determinant of the metric is now $\det(g)=-R_0^2\mathrm e^{2M}$, i.e.\ the metric is invertible wherever $\mathrm e^M\neq 0$ holds. Therefore, the question of regularity of our extensions reduces to a discussion of zeros of $\mathrm e^M$. We have seen earlier that, in the regular cases with $c_3\neq\pm1$, $\mathrm e^M$ has no zeros inside the Gowdy square. However, there might be zeros in the extension $y>1$. Indeed, we show in \ref{App1} that there is precisely \emph{one} zero in this region. Since we have already seen that the Kretschmann scalar diverges at zeros of $\mathrm e^M$, cf.\ \eqref{eq:Kret}, these zeros do not correspond to mere coordinate singularities, but to physical curvature singularities. Hence we conclude that \emph{there is always one singularity in each of our two past extensions}.
This singularity can be represented as a point in an $x$-$y$ diagram, but due to the additional degrees of freedom $\rho_1'$ and $\rho_2'$, it actually has the topology of a 2-torus. {(An exception is given by singularities on the axes, which appear exclusively in the singular cases $c_3=\pm1$. These have the topology of a circle.)} In the next step, we consider a future extension of our solution. To this end, we perform a slightly different coordinate transformation, \begin{equation}\label{eq:tran2}\fl x=\cos\theta,\quad y=\cos t,\quad \rho_1=\rho_1'+\kappa_1\ln(1+y),\quad \rho_2=\rho_2'+\kappa_2\ln(1+y) \end{equation} with constants $\kappa_1$ and $\kappa_2$. Similarly to the past extension discussed above, this transformation removes the coordinate singularity at the future Cauchy horizon \mbox{($y=-1$)}, provided we choose \begin{equation}\label{eq:kap12}\fl \kappa_2=\pm\sqrt{\lim\limits_{y\to -1}\frac{(1-y^2)\mathrm e^{M+u}}{4R_0(1-x^2)}} = \pm\frac{c_3}{c_1},\quad \kappa_1=-\kappa_2\lim\limits_{y\to -1}Q=-\frac{c_3^2+1}{2c_3}\kappa_2. \end{equation} Again we have the freedom to choose a sign, which gives rise to two different future extensions. We denote these as $(M^{\uparrow\pm},g^{\uparrow\pm})$, where `$\pm$' indicates the sign of $\kappa_2$. An investigation of the transformed metric shows that its regularity is again equivalent to $\mathrm e^M\neq 0$. We show in \ref{App1} that $\mathrm e^M$ has either one or two zeros in the future extension, depending on the value of $c_3$. Hence, \emph{there is at least one curvature singularity in each of our two future extensions}. As mentioned above, in the case of the Taub solution (i.e.\ the special case $c_3=0$ of our solution), the two standard future/past extensions are equivalent. We can easily show that this is also true for our future/past extensions with general $c_3\in\mathds R\xspace$.
The extensions $(M^{\downarrow+},g^{\downarrow+})$ and $(M^{\downarrow-},g^{\downarrow-})$ are related via the isometry $(\rho_1',\rho_2')\mapsto(-\rho_1',-\rho_2')$ and therefore equivalent. Similarly, the extensions $(M^{\uparrow+},g^{\uparrow+})$ and $(M^{\uparrow-},g^{\uparrow-})$ are equivalent, which follows from the same isometry. Finally, we look at simultaneous past and future extensions. These could be constructed by pasting together one of our past extensions with one of the future extensions, which leads to four different spacetimes. Each of these would be described in terms of two coordinate patches, namely one for the past region and one for the future region. However, it is even possible to obtain past and future extensions for which a single coordinate patch is sufficient. To this end, we start again from our original, not yet extended solution and perform the coordinate transformation \begin{equation}\fl\label{eq:tran3} x=\cos\theta,\ y=\cos t,\ \rho_1=\rho_1'+\kappa\ln(1-y)+\kappa_1\ln(1+y),\ \rho_2=\rho_2'+\kappa_2\ln(1+y), \end{equation} which essentially combines the earlier transformations \eqref{eq:tran1} and \eqref{eq:tran2}. With the same choices for the constants $\kappa$, $\kappa_1$ and $\kappa_2$ as before we arrive at an extended spacetime that is regular wherever $\mathrm e^M\neq 0$. We denote these extensions as $(M^{ab},g^{ab})$, where $a=+,-$ determines the sign of $\kappa$ and $b=+,-$ the sign of $\kappa_2$. Note that both extensions $(M^{+b},g^{+b})$, $b=+,-$, when restricted to $y>-1$, are basically the same as our earlier past extension $(M^{\downarrow +},g^{\downarrow +})$, since they only differ by a \emph{regular} coordinate transformation $\rho_1'\mapsto\rho_1'+\kappa_1\ln(1+y)$, $\rho_2'\mapsto\rho_2'+\kappa_2\ln(1+y)$. In the same way, the two extensions $(M^{-b},g^{-b})$, restricted to $y>-1$, both correspond to $(M^{\downarrow -},g^{\downarrow -})$. 
Similar statements apply to the restriction of $(M^{a\pm},g^{a\pm})$ to $y<1$ and our two future extensions. The above-mentioned remarkable result in \cite{chrusciel93} for extensions of the Taub solution was that \begin{enumerate} \item $(M^{++},g^{++})$ is equivalent to $(M^{--},g^{--})$, \item $(M^{+-},g^{+-})$ is equivalent to $(M^{-+},g^{-+})$, \item there are no isometries between the other pairs of extensions. \end{enumerate} The statement (iii) might be particularly surprising, given that the ingredients of the global extensions are equivalent: the two future extensions are equivalent to each other, as are the two past extensions. It is easily possible to generalize (i) and (ii) to our solution. This follows immediately from the isometry $(\rho_1',\rho_2')\mapsto(-\rho_1',-\rho_2')$. The interesting question now is whether (iii) also applies in our situation. The proof of (iii) in the Taub case made essential use of properties of null geodesics and their extendibility through the Cauchy horizons. Since the Taub solution has four Killing vectors, there are enough conservation laws to determine all geodesics up to quadrature, see \cite{MisnerTaub1969}. In our case, however, there are ``only'' two Killing vectors, which makes the calculation of geodesics more complicated. Hence we do not aim for a rigorous proof of (iii) for our solution. However, the behaviour of those special null geodesics that are restricted to the axes $\theta=0,\pi$ is very similar to the geodesics of the Taub solution. This is discussed in \ref{App2}. And it might well turn out to be sufficient to study the extendibility of \emph{axis} geodesics to prove (iii)% \footnote{Following the idea of Chru\'sciel and Isenberg's proof of (iii) in the Taub case, we would need to show that a hypothetical isometry between, say, $M^{++}$ and $M^{+-}$ necessarily maps an axis geodesic to an axis geodesic. This could possibly be shown using the following observation.
The boundaries $x=\pm 1$, $y=\pm 1$ of the Gowdy square can be characterized in terms of the Killing vectors $\xi$, $\eta$ (where $\xi=\partial_{\rho_1}$, $\eta=\partial_{\rho_2}$ in our coordinates) as zeros of $W:=g(\xi,\xi) g(\eta,\eta)-g(\xi,\eta)^2$. Note that $W$ is a scalar and therefore invariant under coordinate transformations. Moreover, a change of the Killing basis also leaves the zeros of $W$ invariant, since $W$ is then only multiplied by a positive factor. Hence the hypothetical isometry would map the coordinate set $x=\pm 1$, $y=\pm 1$ of $M^{++}$ to the same set in $M^{+-}$. However, a rigorous extension of Chru\'sciel and Isenberg's proof of (iii) to our situation is beyond the scope of this paper.}. Based on these observations, we conjecture that (iii) also generalizes to our solution. Finally, we note that, as in the case of the Taub solutions, the extensions contain \emph{closed causal curves}, i.e.\ there are problems in terms of causality. As an example, consider the curve \begin{equation} x(s)=0,\quad y(s)=1+\frac{2c_1}{|c_3|},\quad \rho_1'(s)=s,\quad \rho_2'(s)=0,\quad s\in[0,4\pi] \end{equation} in our \emph{past} extensions. Due to the periodicity of the $\rho_1'$ and $\rho_2'$ coordinates, this curve is closed. Moreover, we have $g_{\rho_1'\rho_1'}=-\frac{4|c_3|R_0}{1+c_3^2}<0$ along the curve, i.e.\ the tangent vectors are indeed timelike. Similarly, \begin{equation}\fl x(s)=0,\quad y(s)=y_0=\mbox{constant}\ll -1,\quad \rho_1'(s)=0,\quad \rho_2'(s)=s,\quad s\in[0,4\pi] \end{equation} is an example of a closed timelike curve in our \emph{future} extensions. (Because of $g_{\rho_2'\rho_2'}|_{x=0}=-\frac{R_0 c_3^2}{16c_1}y^4+\mathcal O(y^3)$, this curve is timelike for sufficiently negative $y_0$.) \subsection{Leading-order behaviour and spikes} \label{sec:spikes} Another way of looking at the exact solution found in this paper is to derive the ``leading-order behaviour'' and hence expansions at $t=0$ and $\pi$.
The formulation of the vacuum equations as a singular initial value problem in \cite{beyer11} (Theorem~3.1 there) and the expansions given in \Theoremref{Thm1} here with asymptotic data $S_{**}$ and $Q_*$ give rise to this in the case of $t=0$; the expansion of the function $\omega$ is given by Proposition~3.2 in \cite{beyer11} in terms of an irrelevant constant $\omega_*$ and the data function $\omega_{**}$ related to $Q_*$ as \[\omega_{**}(\theta)=\mathrm e^{2S_{**}(\theta)}\frac{1-\partial_\theta Q_*(\theta)\sin\theta -2Q_*(\theta)\cos\theta}{4R_0}.\] The variables $\lambda$, $\omega$ and $Q$ here are defined with respect to the choice $\xi_1=\partial_{\rho_1}$ and $\xi_2=\partial_{\rho_2}$; indeed, one of the assumptions which was made in \cite{beyer11} is that $\partial_{\rho_1}$ is parallel to the generator of the past horizon. For the explicit solution in this paper, which is expressed with respect to the same choice $\xi_1$ and $\xi_2$, we shall indeed confirm these expansions at $t=0$ below. Concerning expansions at $t=\pi$, however, we expect a different behaviour of $\lambda$, $Q$ and $\omega$ defined with respect to the same choice of $\xi_1$ and $\xi_2$ since, in general, $\partial_{\rho_1}$ is not parallel to a generator of the horizon at $t=\pi$. Indeed, we find ``spiky features''. Expressing these quantities with respect to a different choice of $\xi_1$ and $\xi_2$, however, removes them, at least in the ``regular'' cases $c_3\not=\pm 1$, and we find expansions analogous to those at $t=0$ (except for some minor differences due to different topological properties of the generators). This is consistent with the established idea \cite{RendallWeaver2001} that spikes which can be removed by a change of the Killing bases are \keyword{false} spikes. In the ``singular cases'' $c_3=\pm 1$, the spiky features, which we identify below at $t=\pi$, cannot be removed by a change of the Killing basis and hence these are \keyword{true} spikes.
\subsubsection{Some background.} A consequence of the condition $[\xi_1,\xi_2]=0$ for a general choice of $\xi_1$ and $\xi_2$ according to \Eqref{eq:transformedbasis} is that both fields generate alternative coordinates $\phi_1$, $\phi_2$ on the symmetry orbits with $\xi_1=\partial_{\phi_1}$ and $\xi_2=\partial_{\phi_2}$. The metric $g$ can then locally be written in a manner very similar to \Eqref{eq:metric}: \begin{equation*} g=\mathrm e^M(-\mathrm d t^2+\mathrm d\theta^2) +\tilde\lambda (\mathrm d{\phi_1}+\tilde{Q} \mathrm d{\phi_2})^2+\frac{\tilde{R}^2}{\tilde\lambda} \mathrm d{\phi_2}^2, \end{equation*} with \[\tilde R=\tilde R_0 \sin t\sin\theta,\] for some $\tilde R_0>0$, and the twist potential $\tilde\omega$ satisfies \begin{equation} \label{eq:relationPQomegatheta} \partial_t\tilde\omega = -\tilde R^{-1}\tilde\lambda^2\,\partial_\theta \tilde Q,\quad \partial_\theta\tilde\omega = -\tilde R^{-1}\tilde\lambda^2\,\partial_t \tilde Q. \end{equation} These are the quantities defined with respect to a general choice of $\xi_1$ and $\xi_2$ according to \Eqref{eq:transformedbasis}, while, as we agree from now on, the corresponding quantities with no tilde refer to the particular choice $\xi_1=\partial_{\rho_1}$ and $\xi_2=\partial_{\rho_2}$. Using the transformation laws in \cite{beyer11}, we derive \[\fl\,\,\tilde R_0=|ad-bc|R_0,\quad \tilde\lambda=(a+b Q)^2 \lambda+b^2 R^2\lambda^{-1},\quad \tilde Q=\frac{(a+b Q) (c+d Q) \lambda+bd R^2 \lambda^{-1}}{(a+b Q)^2 \lambda+b^2 R^2 \lambda^{-1}}.\] This and \Eqref{eq:relationPQomegatheta} then allow us to compute $\tilde\omega$ by line integration.
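Since \Eqref{eq:transformedbasis} is not restated in this section, we note that these transformation laws follow directly from $\tilde\lambda=g(\xi_1,\xi_1)$ and $\tilde Q=g(\xi_1,\xi_2)/g(\xi_1,\xi_1)$ if, as we assume in the following sketch, the transformed basis has the form $\xi_1=a\,\partial_{\rho_1}+b\,\partial_{\rho_2}$, $\xi_2=c\,\partial_{\rho_1}+d\,\partial_{\rho_2}$ with constants $a,b,c,d$. A short \texttt{sympy} check of this reading:

```python
import sympy as sp

a, b, c, d = sp.symbols('a b c d')
lam, Q, R = sp.symbols('lambda Q R', positive=True)

# Gram matrix of (partial_rho1, partial_rho2), read off from eq:metric
G = sp.Matrix([[lam,   lam*Q],
               [lam*Q, lam*Q**2 + R**2/lam]])

xi1 = sp.Matrix([a, b])   # assumed form of eq:transformedbasis
xi2 = sp.Matrix([c, d])

lam_t = (xi1.T*G*xi1)[0]          # tilde lambda = g(xi1, xi1)
Q_t = (xi1.T*G*xi2)[0]/lam_t      # tilde Q = g(xi1, xi2)/g(xi1, xi1)

lam_expected = (a + b*Q)**2*lam + b**2*R**2/lam
Q_expected = ((a + b*Q)*(c + d*Q)*lam + b*d*R**2/lam)/lam_expected

assert sp.simplify(lam_t - lam_expected) == 0
assert sp.simplify(Q_t - Q_expected) == 0
```

The asserted identities reproduce the two displayed transformation laws; the formula for $\tilde R_0$ is simply the Jacobian factor of the basis change.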
We have seen that for any choice of Killing vector fields $\xi_1$ and $\xi_2$ (under the conditions $a\not=\pm b$), Geroch's reduction leads to a smooth quotient manifold $S$ with a smooth projection map $\pi$, which, locally in adapted coordinates, looks like \[\pi: M\rightarrow S,\quad (t,\theta,\phi_1,\phi_2)\mapsto (t,\theta,\phi_2).\] Moreover, Einstein's vacuum equations imply that the pair $(\tilde \lambda,\tilde \omega)$ is a wave map into the half-plane model of hyperbolic space. In the following, we consider the hyperbolic speed $\tilde s$ as defined in \Eqref{eq:hypspeed} with respect to the curves $t=2\arctan\mathrm e^\tau$, and $\theta,\phi_2=\mathrm{constant}$ on $S$. Since $(\tilde\lambda,\tilde\omega)$ is a wave map with respect to the hyperbolic metric $(\mathrm d\tilde\lambda^2+\mathrm d\tilde\omega^2)/\tilde\lambda^2$, it follows that the pair $(\tilde S,\tilde \omega)$, where $\tilde S:=\log\tilde\lambda$, is a wave map into hyperbolic space with the metric $\mathrm d \tilde S^2+\mathrm e^{-2\tilde S}\mathrm d\tilde \omega^2$ (this is just a change of coordinates on hyperbolic space). Hence, the hyperbolic speed $\tilde s$ becomes \begin{equation} \label{eq:ourspeed} \tilde s(t,\theta)=\sin t\, \sqrt{\frac{[\partial_t \tilde \lambda(t,\theta)]^2+[\partial_t \tilde\omega(t,\theta)]^2}{\tilde\lambda^2(t,\theta)}}. \end{equation} Before we analyse the behaviour of all the quantities at $t=0$ and $\pi$, we notice that the explicit formulae in \Sectionref{sec:metricpotentials} allow us to determine \[\lambda(x,y)=R_0 (1-y^2)\mathrm e^{u(x,y)}\] from \Eqref{eq:solu}; recall that $y=\cos t$ and $x=\cos\theta$. The function $Q(x,y)$ is given by \Eqref{eq:solQ}. By line integration of \Eqref{eq:relationPQomegatheta} and choosing the irrelevant constant appropriately, we find an explicit formula for the twist potential, \begin{equation} \omega(x,y) = 16c_1R_0 \frac{(1-y)V}{U^2+V^2} \end{equation} with $U$ and $V$ as in \eqref{eq:defUV}.
\subsubsection{The behaviour at $t=0$.} It is straightforward to determine expansions at $t=0$ and therefore confirm the results above. Consistent with our expectations, we find that the uniform expansions of \Theoremref{Thm1} hold at $t=0$ with \[\mathrm e^{S_{**}(x)}=\frac{R_0}{c_1}, \quad Q_*(x)=\frac 32 c_3,\] i.e., \[\omega_{**}(x)=\frac{{R_0} (1-3 c_3 x )}{4 c_1^2}.\] The hyperbolic speed $s$ in \Eqref{eq:ourspeed} converges uniformly to the value $2$ at $t=0$. \subsubsection{The behaviour at $t=\pi$ for $c_3\not=\pm1$.} Let us stick with the choice $\xi_1=\partial_{\rho_1}$ and $\xi_2=\partial_{\rho_2}$ and determine the limit of the relevant functions at $t=\pi$ (i.e., $y\rightarrow-1$) first. We find that \[\lim_{y\rightarrow-1} \lambda(x,y) =\frac{256 c_1 c_3^2 R_0 (1 - x^2)}{64 [c_1^2 (1 - c_3 x)^2 + c_3^4 (1 - x^2)^2]},\] for every $x\in[-1,1]$. Unless $c_3=\pm 1$, the convergence is uniform in space and the limit function is smooth. This is to be expected since the horizon at $t=\pi$ is smooth and $\partial_{\rho_1}$ (which, as mentioned earlier, can be defined without making reference to coordinates) extends as a smooth vector field to the future horizon. Since $\partial_{\rho_1}$ is not proportional to the generator at $t=\pi$ (except for $c_3=0$), the function $\lambda$ does not vanish (in contrast to the situation at $t=0$). Similarly, we find \[\lim_{y\rightarrow-1}\omega(x,y) =\frac{256 c_1^2 R_0 (1 - c_3 x)}{64 [c_1^2 (1 - c_3 x)^2 + c_3^4 (1 - x^2)^2]},\] and the convergence to a smooth function (unless $c_3=\pm 1$) is uniform as before. The limit of $Q$ for $c_3\not =0$ is \[\lim_{y\rightarrow-1}Q(x,y)=\left\{ \begin{array}{ll} (1 + c_3^2)/(2 c_3) & \mbox{if } x\in (-1,1) \\ 1 & \mbox{if } x=1\\ -1 & \mbox{if } x=-1. \end{array} \right. \] Consequently, $Q$ cannot be extended as a continuous function to $t=\pi$. 
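These limits can be verified directly from the explicit formulae. As an illustration, the following \texttt{sympy} snippet (our own check) confirms the limit of $\omega$ at $y=-1$ against the expression above, as well as the consistency of $\omega_{**}$ with the relation from Proposition~3.2 of \cite{beyer11} quoted earlier; the definitions of $U$ and $V$ are taken from \ref{App1} and are assumed to coincide with \eqref{eq:defUV}.

```python
import sympy as sp

x, y, theta = sp.symbols('x y theta')
c1, c3, R0 = sp.symbols('c_1 c_3 R_0', positive=True)

# U, V as in the appendix (assumed to coincide with eq:defUV)
U = c3**2*(1 - x**2)*(1 - y)**3 + 4*c1**2*(1 + y)
V = 4*c1*(1 - y)*(1 - c3*x*(2 + y))
omega = 16*c1*R0*(1 - y)*V/(U**2 + V**2)

# limit at the future horizon y -> -1 (t = pi)
lim_omega = omega.subs(y, -1)
expected = 256*c1**2*R0*(1 - c3*x)/(64*(c1**2*(1 - c3*x)**2 + c3**4*(1 - x**2)**2))
assert sp.simplify(lim_omega - expected) == 0

# omega_** from Proposition 3.2 of [beyer11] with e^{S_**} = R_0/c_1, Q_* = 3 c_3/2
S_ss = sp.log(R0/c1)
Q_s = sp.Rational(3, 2)*c3   # constant, so the dQ_*/dtheta term drops out
omega_ss = sp.exp(2*S_ss)*(1 - 2*Q_s*sp.cos(theta))/(4*R0)
assert sp.simplify(omega_ss - R0*(1 - 3*c3*sp.cos(theta))/(4*c1**2)) == 0
```

The corresponding check for the limit of $\lambda$ would additionally require the explicit potential $u$ from \Eqref{eq:solu}, which is not reproduced here.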
Only in the special case $c_3=0$ (the case of the Taub solution) do we have $Q(x,y)=x$, so that $\lim_{y\rightarrow-1}Q(x,y)=x$ and therefore $Q$ extends smoothly. Despite the fact that $(\lambda,\omega)$ is a pair of smooth well-defined quantities through $t=\pi$, the hyperbolic speed defined with respect to it does not have a continuous limit: \begin{equation} \label{eq:limithyperbolicspeed} \lim_{y\rightarrow-1}s^2(x,y)=\left\{ \begin{array}{ll} 0 & \mbox{if } x\in (-1,1) \\ 4(1+4c_3^2/c_1^2) & \mbox{if } x=\pm 1. \end{array} \right. \end{equation} This discontinuous behaviour of the hyperbolic speed at $t=\pi$ is interpreted as \keyword{spikes}. However, since the geometry is smooth at $t=\pi$ for $c_3\not=\pm1$, we claim that there exists another parametrization of the solution (i.e., another choice of $\xi_1$ and $\xi_2$ and hence another choice of wave map $(\tilde\lambda,\tilde\omega)$) for which this spiky behaviour disappears. Hence these are false spikes. To this end, we choose \begin{equation} \label{eq:transfxi1} b=-2a c_3/(1+c_3^2) \end{equation} for the definition of $\xi_1$ and arbitrary $c$ and $d$ such that $ad-bc\not=0$ for the definition of $\xi_2$ in \Eqref{eq:transformedbasis}. Notice that this is compatible with the requirement $b\not=\pm a$ unless $c_3=\pm1$. Then we find that $\tilde\lambda$ is a smooth function through $t=\pi$ with the property \[\lim_{y\rightarrow -1} \frac{\tilde\lambda(x,y)}{1+y} =\frac{2 a^2 R_0 [c_1^2 (1 - c_3 x)^2 + c_3^4 (1 - x^2)^2]}{c_1 (1 + c_3^2)^2},\] where the convergence is uniform in $x\in[-1,1]$. Notice that this limit is in agreement with the expansions at $t=0$ before, namely, this limit function corresponds to the data function $\mathrm e^{S_{**}}$ at $t=\pi$. The only new aspect is that in general the condition $S_{**}(0)=S_{**}(\pi)$, which was part of Theorem~\ref{Thm1}, is violated.
This is a consequence of the fact that the future horizon is not generated by $\partial_{\rho_1}$ but in fact by $\xi_1$ with the above choice of $b$. Similarly, we obtain \begin{eqnarray*} \fl\,\,\lim_{y\rightarrow -1} \tilde Q(x,y)= (1 + c_3^2) \Biggl(-2 b c_1^2 \Bigl(c_1^2 (-1 + c_3 x)^2 + c_3^3 (-1 + x^2) (c_3 - 4 x + 3 c_3 x^2)\Bigr)\\ \fl\qquad+ d \Bigl(c_3^7 (-1 + x^2)^4 + c_1^4 (-1 + c_3 x)^2 (-2 x + c_3 (-1 + x^2))\\ \fl\qquad\qquad+ c_1^2 c_3^2 (-1 + x^2) [4 x - 4 c_3^2 x (-2 + x^2) - c_3 (3 + x^2) + c_3^3 (-1 - 5 x^2 + 2 x^4)]\Bigr) \Biggr)\\ \Biggl/\Biggl({2 a [c_1^2 (-1 + c_3 x)^2 + c_3^4 (-1 + x^2)^2]^2}\Biggr), \end{eqnarray*} which is a smooth function and converges uniformly in $x$ unless $c_3=\pm1$. Next, we can use \Eqsref{eq:relationPQomegatheta} to determine the function $\tilde\omega$ by line integration. We refrain from giving the explicit expression here since it is very long. Still, the fact that $\tilde\omega(x,y_0)$ must be a smooth function of $x$ for each $y_0\in (-1,1)$ together with the relation \[\tilde\omega(x,y)=\tilde\omega(x,y_0)-\int_{y_0}^y \frac{\mathrm e^{2\tilde S(y',x)}}{1-(y')^2}\tilde Q_x(y',x)\,\mathrm d y',\] and the above limits of $\mathrm e^{\tilde S}$ and $\tilde Q$, implies that $\lim_{y\rightarrow -1}\tilde\omega (x,y)$ converges uniformly to a smooth function in $x$ unless $c_3=\pm 1$. The limit $y\rightarrow-1$ of the hyperbolic speed $\tilde s$ is constant with value $2$ if $c_3\not=\pm 1$. This shows that the false spike behaviour completely disappears under the transformation \Eqref{eq:transfxi1}. \subsubsection{The behaviour at $t=\pi$ for $c_3=\pm1$.} Let us now proceed with the singular cases $c_3=\pm1$ and their behaviour at $t=\pi$. The curvature singularity is located on one of the axes at $t=\pi$, i.e., at $\theta=0$ for $c_3=1$ and at $\theta=\pi$ for $c_3=-1$. Let us for definiteness now restrict to the case $c_3=1$.
In a first step, let $\tilde\ensuremath{\mathbb S^3}\xspace$ be the set of points on $\ensuremath{\mathbb S^3}\xspace$ without the points corresponding to $\theta=0$. Then we remove this axis from $M$, i.e., we define \[\tilde M:=(0,\pi)\times\tilde\ensuremath{\mathbb S^3}\xspace.\] The restriction of the solution $g_{ab}$ to $\tilde M$ yields a smooth (but not globally hyperbolic) spacetime which satisfies Einstein's vacuum equations. We can show that this spacetime can be extended smoothly through $t=\pi$. In fact, $t=\pi$ corresponds to a smooth null hypersurface whose generator is proportional to $\partial_{\lambda_2}$. The field $\partial_{\lambda_2}$ never vanishes on $\tilde M$ since we have removed precisely those points of $M$ where it does. For the choice of Killing fields $\xi_1$ and $\xi_2$ as in \Eqref{eq:transfxi1} where now $\xi_1=\partial_{\lambda_2}$, the Geroch reduction is well-defined on $\tilde M$ and therefore yields a global smooth wave map structure as before. We find that the hyperbolic speed $\tilde s$ with respect to this choice converges pointwise on $\tilde M$ to the constant function $2$ at $t=\pi$ (i.e., $y=-1$). Geometrically, the singularity of the spacetime $M$ at $t=\pi$ arises because $\partial_{\lambda_2}$, being parallel to the generator of the null hypersurface at $t=\pi$, describes smaller and smaller loops in the limit $\theta\rightarrow 0$, and hence, at $\theta=0$, the null generator does not have a well-defined direction. What can we say about the solution at $t=\pi$ when we now consider the whole spacetime $M$ --- including the previously removed axis? The choice of $\xi_1$ and $\xi_2$ above, which is well-defined on $\tilde M$, yields a singular Geroch reduction at $\theta=0$. Nevertheless, for our explicit solution, we can compute the limit $x\rightarrow 1$ (corresponding to $\theta=0$) of the hyperbolic speed $\tilde s$ at each $y\in (-1,1)$ (corresponding to $t\in(0,\pi)$).
Surprisingly, it turns out that the function which yields this limit of $\tilde s$ at every $y$ is smooth, and its limit as $y\rightarrow -1$ is $4$. Hence, although the hyperbolic speed $\tilde s$ with respect to this choice of $\xi_1$ and $\xi_2$ extends nicely to the future horizon as a constant function with value $2$, it does not extend to a continuous function when the axis is taken into account. This discontinuity therefore appears precisely where the curvature is unbounded (\Sectionref{sec:singularcases}). It follows that in the singular cases $c_3=1$ (and similarly $c_3=-1$), the discontinuous behaviour at $t=\pi$ cannot be ``undone'' by a reparametrization of the Killing orbits and hence the solution has a {true spike} at $t=\pi$ as expected. \subsubsection{Comparison to the variables of St{\aa}hl.} The work in \cite{Stahl02} by St{\aa}hl is the first attempt in the literature to formulate a singular initial value problem for Gowdy solutions with spatial $\ensuremath{\mathbb S^1\!\times \mathbb S^2}\xspace$-topology and $\ensuremath{\mathbb S^3}\xspace$-topology using Fuchsian techniques similar to the results obtained in \cite{KichenassamyRendall,rendall2000} for the case of spatial $\mathbb T^3$-topology. However, there are unexpected limitations as the results do not yield a family of solutions as large as expected from the $\mathbb T^3$-Gowdy case\footnote{St{\aa}hl also does not account for the constraint equations implied by the vacuum equations correctly. This problem is fixed, for a special class of solutions, in \cite{beyer11} which is also based on a different Fuchsian method \cite{AmesA,AmesB}.}. St{\aa}hl conjectures that these limitations are possibly related to the formation of spikes at the axes of symmetry under general conditions. In order to shed light on this, let us study this question for our solution here. To this end, we must first relate the different sets of variables used here to those in \cite{Stahl02}.
St{\aa}hl chooses $\xi_1=\partial_{\lambda_1}$ and $\xi_2=\partial_{\lambda_2}$ (i.e., $a=b=c=1$, $d=-1$ in \Eqref{eq:transformedbasis}) throughout, for which, as described above, the Geroch reduction becomes singular at one of the axes. This, however, is not a problem, since St{\aa}hl focusses on the vicinity of the \emph{other} axis, namely the one at $\theta=0$. With respect to this basis of the algebra of Killing vectors, St{\aa}hl's quantity $X$ corresponds to our quantity $\tilde Q$, his quantity $Y$ corresponds to our $\tilde L:=\log\tilde\lambda-\tilde R$ and his $Z$ is the same as our $\tilde S:=\log\tilde\lambda$. In order to distinguish these quantities for this choice of Killing basis from the ones which we use above, we refer to them as $X$, $Y$ and $Z$ in the following. Now there is an interesting relation between these sets of variables. Namely, it turns out that the pair $(\tilde L,\tilde Q)$ satisfies the wave map equations with the hyperbolic target metric $\mathrm d \tilde L^2+\mathrm e^{2\tilde L} \mathrm d \tilde Q^2$ and the same source manifold as the pair $(\tilde S,\tilde \omega)$; one can check this easily by writing the wave map part of the field equations, namely \Eqsref{eq:Gerochevollambda} and \eref{eq:Gerochevolomega}, in terms of $\tilde L$ and $\tilde Q$ instead of $\tilde \lambda$ and $\tilde \omega$. However, since $\tilde R\sim\sin\theta$, the quantity $\tilde L$ is singular at each time $t$ on at least one of the axes. Hence $(\tilde L,\tilde Q)$ is a \textit{singular} wave map. This is different in the case of Gowdy solutions with $\mathbb T^3$-topology, where $R=R_0 t$, and it is the main reason why the \keyword{Gowdy-to-Ernst transformation} does not work in the vicinity of the symmetry axes in the $\ensuremath{\mathbb S^3}\xspace$- and $\ensuremath{\mathbb S^1\!\times \mathbb S^2}\xspace$-Gowdy cases; for more details see \cite{RendallWeaver2001}. Let us now go back to St{\aa}hl's parametrization of the solutions.
He chooses to define a hyperbolic speed with respect to the singular pair $(\tilde L,\tilde Q)=(Y,X)$. In analogy with \Eqref{eq:ourspeed}, St{\aa}hl's hyperbolic speed is (up to a sign) \begin{equation*} \nu(t,\theta)=\sin t \sqrt{[\partial_t Y(t,\theta)]^2+\mathrm e^{2Y(t,\theta)}[\partial_t X(t,\theta)]^2}. \end{equation*} For our family of explicit solutions here, it turns out that the limit of the hyperbolic speed $\nu$ at $t=\pi$ is uniformly $1$ if $c_3\not=\pm1$. In particular, this quantity can unexpectedly be extended continuously to the axis $\theta=\pi$, which is ``singular'' in this parametrization. With respect to St{\aa}hl's variables, the hyperbolic speed is therefore well behaved, without signs of false spikes, in contrast to our regular wave map parametrization above; cf.\ \Eqref{eq:limithyperbolicspeed}. If $c_3=1$, however, the limit of $\nu$ at $t=\pi$ is discontinuous at $\theta=0$ where the solution becomes singular: $\nu$ converges to the value $1$ everywhere except for $\theta=0$ (in particular, also in the same way as above, at $\theta=\pi$) and to the value $3$ along the axis $\theta=0$. In the case $c_3=-1$, the same discontinuity occurs at $\theta=\pi$. This is a further hint that these discontinuities must be considered true spikes. We notice that the limit values $1$ and $3$ of $\nu$, which we have found to occur for our solutions, are consistent with St{\aa}hl's argument about the behaviour of general solutions at the axes in \cite{Stahl02}. \section{Discussion\label{sec:discussion}} We have derived an exact solution to Einstein's vacuum equations, which is a particular smooth Gowdy-symmetric generalized Taub-NUT solution. This was done by solving an initial value problem for the Ernst equation with ``Sibgatullin's integral method''. Our solution depends on three parameters $R_0>0$, $c_1>0$, $c_3\in\mathds R\xspace$. For $c_3=0$, we arrive at the spatially homogeneous Taub spacetimes as a special case.
Otherwise, we obtain spatially \emph{inhomogeneous} cosmological models. {We have shown that the solution is regular in the maximal globally hyperbolic region $0< t< \pi$. Moreover, the solution can be extended through $t=0$ and $t=\pi$ and has smooth Cauchy horizons at these surfaces.} Only in the ``singular cases'' $c_3=\pm 1$ are there scalar curvature singularities at the points $t=\pi$, $\theta=0$ or $t=\pi$, $\theta=\pi$. In these cases, the Kretschmann scalar $K$ shows a directional behaviour: $K$ diverges to $+\infty$ or $-\infty$, or approaches any prescribed real value, depending on the curve along which the singular point is approached. However, even if $K$ remains bounded, there are other scalars that diverge. Consequently, the singularities are not directional singularities in the sense used in abstract boundary constructions. Furthermore, we have explicitly constructed several extensions of our solution. In particular, we have argued that it is likely that some of these extensions are not isometric, i.e.\ our solution seems to have inequivalent extensions, similar to the Taub spacetimes. Interestingly, for $c_3\neq 0$ all of our extensions contain singularities. We point out that there might be other extensions that are not isometric to the ones discussed, and among these there might be extensions that do not have singularities. However, we consider this unlikely. In addition, this exact solution is an interesting example of an $\ensuremath{\mathbb S^3}\xspace$-Gowdy solution with spikes, both false and true spikes. In future research, this should therefore help to untangle the so far poorly understood relationship between the expected presence of spikes in generic situations and the behaviour at the axes of symmetry. \section*{Acknowledgments} We would like to thank Ben Whale, Gerrard Liddell {and J\"org Frauendiener} for valuable discussions and Gerrard Liddell for commenting on the manuscript.
This work was supported by the Marsden Fund Council from Government funding, administered by the Royal Society of New Zealand. \begin{appendix} \section{Zeros of \texorpdfstring{$\mathrm e^M$}{exp(M)}\label{App1}} The zeros of the potential $\mathrm e^M$ determine the position of curvature singularities of our exact solution. The discussion in Sec.~\ref{sec:regularity} has shown that --- with the exception of the singular cases $c_3=\pm1$ --- the function $\mathrm e^M$ cannot vanish in the Gowdy square $x\in[-1,1]$, $y\in[-1,1]$. However, it is still possible that there are zeros in the extended regions with $y\in\mathds R\xspace$, which will be investigated in this section. According to \eqref{eq:solnew1}, zeros of $\mathrm e^M$ correspond to $U=V=0$, i.e.\ to \begin{eqnarray}\label{eq:app1} U &=& c_3^2(1-x^2)(1-y)^3+4c_1^2(1+y)=0,\\ V &=& 4c_1(1-y)[1-c_3x(2+y)]=0.\label{eq:app2} \end{eqnarray} For $c_3=0$ (the Taub case) we have $U=4c_1^2(1+y)$, $V=4c_1(1-y)$, so there are no simultaneous zeros of $U$ and $V$ (recall that $c_1>0$). This corresponds to the fact that the standard extensions of the Taub solution are free of curvature singularities. Therefore, we can now assume that $c_3>0$ (again taking advantage of the discrete symmetry of the solution). From $V=0$ we conclude that either $y=1$ or $y=\frac{1}{c_3 x}-2$ holds. In the former case, we obtain $U=8c_1^2\neq 0$, whereas the latter case leads to \begin{equation} U=\frac{\omega(x)}{c_3 x^3},\quad \omega(x):=(1-x^2)(3c_3x-1)^3-4c_1^2x^2(c_3x-1). \end{equation} Hence the positions of curvature singularities are determined by those zeros of the fifth-degree polynomial $\omega(x)$ that lie in the $x$-interval $[-1,1]$. For a discussion of these zeros we look at the following cases.
\begin{description} \item[1st case: $0<c_3<1$]\mbox{}\\ From $\omega(0)=-1<0$, $\omega(1)=4c_1^2(1-c_3)>0$ and $\omega(\frac{1}{3c_3})=\frac{8c_1^2}{27c_3^2}>0$ we conclude that $\omega$ has a zero in the interval $x\in\Big(0,\min(1,\frac{1}{3c_3})\Big)$. Since, for $x<\frac{1}{3c_3}$, we have $y=\frac{1}{c_3x}-2>1$, this zero corresponds to a singularity in the \emph{past} extension. Moreover, from $\omega(-1)=4c_1^2(1+c_3)>0$ and $\omega(0)=-1<0$ we conclude that $\omega$ has a zero in $(-1,0)$. For negative $x$ we get $y<-2$, i.e.\ we find a singularity in the \emph{future} extension. Due to $\omega(1)>0$ and $\lim_{x\to\infty}\omega(x)=-\infty$, there is another real zero in $[1,\infty)$. However, since our $x$-coordinate is restricted to $[-1,1]$, this zero has no physical meaning. \item[2nd case: $c_3=1$]\mbox{}\\ In this singular case we see that $\omega(1)=0$, which corresponds to the singularity at $x=1$, $y=-1$ (point $C$ in Fig.~\ref{fig:Gowdy}). As in the first case, we also have a zero for $x\in\Big(0,\min(1,\frac{1}{3c_3})\Big)=(0,\frac13)$, corresponding to a singularity in the \emph{past} extension, and an irrelevant zero in $(1,\infty)$. \item[3rd case: $c_3>1$]\mbox{}\\ As above, we observe that $\omega$ has a zero in $(0,\frac{1}{3c_3})$, corresponding to a singularity in the \emph{past} extension, and a zero in $(-1,0)$, corresponding to a singularity in the \emph{future} extension. In addition, we find a zero for $x\in(\frac{1}{c_3},1)$, because $\omega(\frac{1}{c_3})=8(1-\frac{1}{c_3^2})>0$ and $\omega(1)=-4c_1^2(c_3-1)<0$. This leads to a second singularity in the \emph{future} extension. \end{description} A detailed analysis of the polynomial $\omega$ and its corresponding Sturm's sequence reveals that $\omega$ has three real zeros and two complex zeros for all parameter values $c_1>0$, $c_3>0$. Therefore, there are no further real zeros besides the ones found in the above case-by-case analysis. Hence we arrive at the following result. 
The extended function $\mathrm e^M$ always has one zero in the past extension and, depending on the value of $c_3$, either one or two zeros in the future extension. \section{Null geodesics on the axes\label{App2}} In the following, we consider null geodesics that are restricted to either $\theta=0$ or $\theta=\pi$. Note that in the case of the spatially homogeneous Taub solution, one can find geodesics with $\theta=\mbox{constant}$ for arbitrary values of $\theta$. Here, however, due to the $\theta$-dependence of the metric potentials $M$ and $u$, only the special values $0$ and $\pi$ lead to geodesics. We start by looking at the globally hyperbolic region $0<t<\pi$, before we study whether geodesics from that region can also be extended beyond the Cauchy horizons. Since we are interested in geodesics on the axes, and since the coordinates $(t,\theta,\rho_1,\rho_2)$ have a coordinate singularity there, we first introduce regular coordinates. In view of the intended extension of the geodesics through the Cauchy horizons, we start by introducing the coordinates $(x,y,\rho_1',\rho_2')$ with the coordinate transformation \eqref{eq:tran3}. In the next step we replace $\rho_1'$ and $\rho_2'$ with $\lambda_1'$ and $\lambda_2'$ via \begin{equation} \lambda_1'=\frac{\rho_1'+\rho_2'}{2},\quad \lambda_2'=\frac{\rho_1'-\rho_2'}{2}. \end{equation} Finally, we remove the coordinate singularities at the axes. To this end, we separately study the cases $x=1$ and $x=-1$. In a vicinity of the axis $x=1$, we locally introduce ``Cartesian coordinates'', \begin{equation} p=\sqrt{1-x^2}\cos\lambda_2',\quad q=\sqrt{1-x^2}\sin\lambda_2', \end{equation} which replace $x$ and $\lambda_2'$. The metric in terms of the new coordinates $(y,\lambda_1',p,q)$ is regular at the axis ($p=q=0$) and at the Cauchy horizons ($y=\pm1$).
This can be verified with the explicit form of the solution and with the definitions of the constants $\kappa$, $\kappa_1$ and $\kappa_2$, which are introduced with the first of the above coordinate transformations, see Sec.~\ref{sec:extensions}. At the axis $x=1$, the metric now has the form \begin{equation} x=1:\quad g = g_{yy}\,\mathrm d y^2+2g_{y\lambda_1'}\,\mathrm d y\,\mathrm d\lambda_1' +g_{\lambda_1'\lambda_1'}\,(\mathrm d\lambda_1')^2 +\mathrm e^M(\mathrm d p^2+\mathrm d q^2), \end{equation} where \begin{eqnarray} \fl g_{yy} &=& -\frac{\mathrm e^M}{1-y^2}+R_0(1-y^2)\mathrm e^u\left(\frac{\kappa_1+\kappa_2}{1+y}-\frac{\kappa}{y-1}\right)^2,\\ \fl g_{y\lambda_1'} &=& 2R_0(1-y^2)\mathrm e^u\left(\frac{\kappa_1+\kappa_2}{1+y}-\frac{\kappa}{y-1}\right),\quad g_{\lambda_1'\lambda_1'}=4R_0(1-y^2)\mathrm e^u. \end{eqnarray} We can find the geodesics by making use of the conservation laws that follow from the Killing vectors. The two Killing vectors $\partial_{\rho_1}$ and $\partial_{\rho_2}$ degenerate at $x=1$, where $\partial_{\rho_1}=\partial_{\rho_2}=\frac12\partial_{\lambda'_1}$. Hence we have only one conservation law, namely \begin{equation}\label{eq:cons1} g(\partial_{\lambda_1'},v)=\mbox{constant} \end{equation} for the tangent vector $v^i=\mathrm d x^i/\mathrm d\lambda$ to the geodesic, where $\lambda$ is an affine parameter. Since $v^i$ is only determined up to a factor (we can rescale the null vector), we can set the constant to $2\varepsilon R_0$, $\varepsilon=\pm 1$. Together with $g(v,v)=0$, we obtain the two equations \begin{eqnarray} g_{\lambda_1'\lambda_1'}v^{\lambda_1'}+g_{y\lambda_1'}v^y=2\varepsilon R_0,\\ g_{yy}(v^y)^2+2g_{y\lambda_1'}v^yv^{\lambda_1'}+g_{\lambda_1'\lambda_1'}(v^{\lambda_1'})^2=0, \end{eqnarray} which fix $v^y$ and $v^{\lambda_1'}$ (up to a sign). The remaining components $v^p$ and $v^q$ vanish since axis geodesics are characterized by $p=q=0$.
In this way, we finally obtain \begin{equation}\fl\label{eq:geo} v^y=-1,\quad v^{\lambda_1'}=\frac{\varepsilon\mathrm e^{-u}-2\kappa}{4(1-y)}+\frac{\varepsilon\mathrm e^{-u}+2(\kappa_1+\kappa_2)}{4(1+y)},\quad v^p=0,\quad v^q=0. \end{equation} Here, we have chosen a negative sign for $v^y$ in order to restrict to future-directed vectors. (Note that $y$ is \emph{decreasing} for increasing values of the time coordinate $t$.) We see that $-y$ can be used as an affine parameter. As a consequence, the geodesics are curves of the form \begin{equation} y=-\lambda,\quad \lambda_1'=\lambda_1'(\lambda),\quad p=0,\quad q=0, \end{equation} where $\lambda_1'(\lambda)$ follows from $v^{\lambda_1'}$ in \eqref{eq:geo} by a $y$-integration. We denote the two classes of geodesics with $\varepsilon=\pm 1$ by $\Gamma^{\pm}$. For the calculation of geodesics on the second axis $\theta=\pi$, we can repeat the previous analysis, this time in a vicinity of $x=-1$. The only difference is that ``Cartesian coordinates'' are now introduced via \begin{equation} p=\sqrt{1-x^2}\cos\lambda_1',\quad q=\sqrt{1-x^2}\sin\lambda_1', \end{equation} i.e.\ this time we arrive at coordinates $(y,\lambda_2', p,q)$, where $\lambda_2'$ instead of $\lambda_1'$ is used as a regular coordinate. The geodesics turn out to be given by \begin{equation}\fl v^y=-1,\quad v^{\lambda_2'}=\frac{\varepsilon\mathrm e^{-u}-2\kappa}{4(1-y)}+\frac{\varepsilon\mathrm e^{-u}+2(\kappa_1-\kappa_2)}{4(1+y)},\quad v^p=0,\quad v^q=0. \end{equation} Again we denote the geodesics with $\varepsilon=\pm1$ by $\Gamma^{\pm}$. For geodesics on either axis, we observe that the components of the tangent vector are analytic functions of $y$ in the interval $(-1,1)$, whereas they are potentially singular at $y=\pm 1$.
More precisely, we have \begin{eqnarray} x=1:\quad && v^{\lambda_1'}=\left\{ \begin{array}{ll} \frac{(\varepsilon-\mathrm{sgn}\,\kappa)c_1}{4(1-y)}+\mathcal O[(1-y)^0],& y\to1\\ \frac{(\varepsilon-\mathrm{sgn}\,\kappa_2)(1-c_3)^2}{4c_1(1+y)}+\mathcal O[(1+y)^0] ,&y\to-1 \end{array}\right.,\\ x=-1:\quad && v^{\lambda_2'}=\left\{ \begin{array}{ll} \frac{(\varepsilon-\mathrm{sgn}\,\kappa)c_1}{4(1-y)}+\mathcal O[(1-y)^0],& y\to1\\ \frac{(\varepsilon-\mathrm{sgn}\,\kappa_2)(1+c_3)^2}{4c_1(1+y)}+\mathcal O[(1+y)^0] ,&y\to-1 \end{array}\right. . \end{eqnarray} We observe that there are \emph{no} singularities in $v^{i'}$, if we choose suitable signs for $\kappa$ and $\kappa_2$. Then, the components of $v^{i'}$ will be regular at and beyond $y=\pm 1$. This shows that there are extensions of our solutions in which the geodesics can be extended through the horizons. The ``correct'' sign choices can be summarized as follows: \begin{enumerate} \item[(a)] An axis null geodesic $\Gamma^+$ extends through the past Cauchy horizon in $M^{ab}$ iff $a=+$, it extends through the future horizon in $M^{ab}$ iff $b=+$, and it extends through both horizons iff $a=b=+$. \item[(b)] An axis null geodesic $\Gamma^-$ extends through the past Cauchy horizon in $M^{ab}$ iff $a=-$, it extends through the future horizon in $M^{ab}$ iff $b=-$, and it extends through both horizons iff $a=b=-$. \end{enumerate} This should be compared with Lemma 3.2 in \cite{chrusciel93}, which states a similar result for the Taub solutions. However, whereas the Lemma in \cite{chrusciel93} considers more general null geodesics, our result applies only to axis geodesics. \end{appendix} \section*{References}
\section*{Acknowledgment} I would like to express my deepest gratitude to my advisor Kurt Johansson for his guidance and support. \section{Introduction} \subsection{The Airy process} The central object of study in this paper is the local behavior of the Airy process, $t \rightarrow \mathcal{A}(t)$, $t \in \mathbb{R}$, \cite{PS}. The Airy process is a one-dimensional process with continuous paths, \cite{Jo1}, \cite{PS}. The interest in this process is mainly due to the fact that it is the limit of a number of processes appearing in the random matrix literature. One example is the top curve in Dyson's Brownian motion, see \cite{Dy}, which, when appropriately rescaled, converges to the Airy process, see for instance \cite{FNH} and \cite{Jo2}. Another example is the boundary of the north polar region in the Aztec diamond, see \cite{EKLP1}, \cite{EKLP2} and \cite{Jo3}, a discrete process also converging to the Airy process, \cite{Jo3}. A third example, the Discrete polynuclear growth model (PNG), \cite{Jo2}, \cite{KS}, will be described in some detail in section \ref{discpol}, where we also state a theorem about its local (in a certain sense) fluctuations. A precise definition of $\mathcal{A}(t)$ goes as follows: The extended Airy kernel, \cite{FNH}, \cite{M}, \cite{PS}, is defined by \begin{equation} A_{s,t}(x,y) = \Bigg \{ \begin{array}{rl} \int_0^{\infty} e^{-z (s-t)} \mathrm{Ai}(x+z) \mathrm{Ai}(y+z) \, \mathrm{dz} & \textrm{if } s \geq t \\ -\int_{-\infty}^0 e^{z (t-s)} \mathrm{Ai}(x+z) \mathrm{Ai}(y+z) \, \mathrm{dz} & \textrm{if } s < t , \end{array} \end{equation} where $\mathrm{Ai}$ is the Airy function. $A_{s,s}(x,y)$ is easily seen to be the ordinary Airy kernel, \cite{TW}. Given $\xi_1, \ldots , \xi_m \in \mathbb{R}$ and $t_1 < \ldots < t_m$ in $\mathbb{R}$, we define $f$ on $\{ t_1, \ldots ,t_m \} \times \mathbb{R}$ by \begin{equation*} f(t_i,x) = \chi_{(\xi_i,\infty)}(x).
\end{equation*} It is shown in \cite{Jo2} that \begin{equation*} f^{1/2}(s,x) A_{s,t}(x,y) f^{1/2}(t,y) \end{equation*} is the integral kernel of a trace class operator on $L^2(\{ t_1, \ldots ,t_m \} \times \mathbb{R})$ where we have counting measure on $\{ t_1, \ldots ,t_m \}$ and Lebesgue measure on $\mathbb{R}$. The Airy process, $t \rightarrow \mathcal{A}(t)$, is the stationary stochastic process with finite-dimensional distributions given by \begin{equation*} \mathbb{P} \left[ \mathcal{A}(t_1) \leq \xi_1, \ldots ,\mathcal{A}(t_m) \leq \xi_m \right] = \mathrm{det} \left( I - f^{1/2} A f^{1/2} \right)_{L^2(\{ t_1, \ldots ,t_m \} \times \mathbb{R})}. \end{equation*} The determinant on the right-hand side is a so-called Fredholm determinant. Our main theorem states that if we condition the Airy process to be at some given point at time $t_1$, it will then behave, on a local scale, like a Brownian motion. \begin{theorem}\label{theorem1} Let $\epsilon > 0$ be small, $t_1 \in \mathbb{R}$ and $t_i = t_{i-1} + s_{i} \epsilon$, $2 \leq i \leq m$, where $s_2, \ldots , s_{m} >0$. Also, let $p_1 \in \mathbb{R}$ and define the sets $A_i$, $i=2, \ldots ,m$, by \begin{equation*} A_i = \left\{ x \in \mathbb{R} | p_1 + a_i \sqrt{\epsilon} \leq x \leq p_1 + b_i \sqrt{\epsilon} \right\} \end{equation*} where $a_i,b_i$ are given real numbers. It holds that \begin{multline*} \mathbb{P} \left[ \mathcal{A}(t_2) \in A_2 , \ldots , \mathcal{A}(t_m) \in A_m \right | \mathcal{A}(t_1) = p_1] \\ = \int_{a_2}^{b_2} dx_2 \cdots \int_{a_m}^{b_m} dx_m \frac{1}{\sqrt{4 \pi s_2}} e^{-\frac{x_2^2}{4 s_2}} \prod_{i=3}^m \frac{1}{\sqrt{4 \pi s_i}} e^{-\frac{(x_i - x_{i-1})^2}{4 s_i}} + E \end{multline*} where \begin{equation*} |E| \leq \sqrt{\epsilon} \log{\epsilon^{-1}} \prod_{i=2}^m(b_i-a_i) C_{p_1,s_2, \ldots ,s_m}. \end{equation*} \end{theorem} Figure 1 describes the setup in the theorem.
\begin{figure}[hb] \begin{center} \includegraphics{bildfil.1} \caption{Conditioned on $\mathcal{A}(t_1)=p_1$, Theorem \ref{theorem1} gives the approximate probability for the process to move through the sets $A_i$. Note that $t_{i+1}-t_i \sim \epsilon$ and $|A_i| \sim \sqrt{\epsilon}$.} \end{center} \end{figure} \emph{Remark 1.} A couple of previous results about the Airy process are the following: In \cite{PS} it is shown that \begin{equation*} \mathrm{Var}(\mathcal{A}(t) - \mathcal{A}(0)) = 2t + \mathcal{O}(t^2) \end{equation*} as $t \rightarrow 0$. In \cite{AvM}, see also \cite{W}, the long-distance covariance asymptotics for the Airy process is calculated to be \begin{equation*} \mathbb{E} \left[ \mathcal{A}(t) \mathcal{A}(0) \right] - \mathbb{E} \left[ \mathcal{A}(t)\right] \mathbb{E} \left[ \mathcal{A}(0) \right] = t^{-2} + \mathcal{O}(t^{-4}) \end{equation*} as $t \rightarrow \infty$. This proves that $\mathcal{A}(t)$ is not a Markov process, since the Markov property would imply exponential decay. \emph{Remark 2.} Given Theorem \ref{theorem1} it is natural to ask the corresponding question about processes converging to the Airy process. Theorem \ref{PNG} in section \ref{discpol} below provides such a result for the Discrete polynuclear growth process. \subsection{The extended Airy point process} We now present another construction, \cite{Jo2}, of the Airy process that will help us analyze its local behavior. Let $m \in \mathbb{Z}_+$ be arbitrary and $t_1 < t_2 < \ldots < t_m$ be points in $\mathbb{R}$ which we shall think of as times. Define \begin{equation*} E = \mathbb{R}_{t_1} \cup \mathbb{R}_{t_2} \cup \cdots \cup \mathbb{R}_{t_m}. \end{equation*} We shall refer to $\mathbb{R}_{t_j}$ as time line $t_j$. We define $X$ to be the space of all locally finite countable configurations of points (or particles) in $E$. Locally finite means that, if $x=(x_1,x_2, \ldots) \in X$ then, for any bounded set $C \subset E$, it holds that $\# (C \cap x) < \infty$.
Here and in the following, $\# B$ denotes the number of points of the configuration in the set $B$. One can construct a $\sigma$-algebra on $X$ from the cylinder sets: Let $B \subset E$ be any bounded Borel set and $n \geq 0$. Define \begin{equation*} C_n^B = \left\{ x \in X : \# B = n\right\} \end{equation*} to be a cylinder set and $\Sigma$ to be the minimal $\sigma$-algebra that contains all cylinder sets. One can now define probability measures on the space $(X,\Sigma)$. The so-called extended Airy point process is an example of such a measure and it will be described below. For the sake of convenience, we will often denote the extended Airy kernel by $A(x,y)$ instead of $A_{t_i,t_j}(x,y)$ when it is clear that $x \in \mathbb{R}_{t_i}$ and $y \in \mathbb{R}_{t_j}$. Let $z_1, \ldots , z_k$ be points in $E$. The $k$-point correlation function is defined by \begin{equation} \label{corr_def} R(z_1, \ldots , z_k) = \mathrm{det} \left[ A(z_i,z_j) \right]_{i,j=1}^k . \end{equation} It is possible to show that these correlation functions determine a probability measure on $(X,\Sigma)$, the extended Airy point process, by demanding that the following identity holds, \cite{So}: \begin{equation} \label{corr_eq} \mathbb{E} \left[ \prod_{i=1}^n \frac{\# B_i !}{(\# B_i - k_i)!} \right] = \int_{B_1^{k_1} \times \cdots \times B_n^{k_n}} R(z_1, \ldots ,z_k)\, \mathrm{dz} . \end{equation} Here $B_1, \ldots , B_n$ are disjoint Borel subsets of $E$ and $k_i \in \mathbb{Z}_+$, $1 \leq i \leq n$, are such that $k_1 + \ldots + k_n = k$. It is possible to show that, at each time line $\mathbb{R}_{t_i}$, there is almost surely a largest particle, $\lambda(t_i)$, and \begin{equation} \label{airy=extairy} (\lambda(t_1), \ldots ,\lambda(t_m)) = (\mathcal{A}(t_1), \ldots , \mathcal{A}(t_m)) \end{equation} in distribution, \cite{Jo2}. It is through this representation that we are able to show that the Airy process behaves locally as a Brownian motion.
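As remarked above, the equal-time kernel $A_{s,s}$ reduces to the ordinary Airy kernel. This reduction can be checked numerically; the sketch below is our own (it assumes SciPy is available, and the function names are ours) and compares the defining integral with the standard closed form $(\mathrm{Ai}(x)\mathrm{Ai}'(y)-\mathrm{Ai}'(x)\mathrm{Ai}(y))/(x-y)$:

```python
from scipy.integrate import quad
from scipy.special import airy

# equal-time extended Airy kernel: A_{s,s}(x, y) = int_0^infty Ai(x+z) Ai(y+z) dz
def extended_kernel_equal_time(x, y):
    val, _ = quad(lambda z: airy(x + z)[0] * airy(y + z)[0], 0.0, float("inf"))
    return val

# ordinary Airy kernel in its standard closed form (x != y)
def airy_kernel(x, y):
    ai_x, aip_x, _, _ = airy(x)
    ai_y, aip_y, _, _ = airy(y)
    return (ai_x * aip_y - aip_x * ai_y) / (x - y)

for x, y in [(-1.0, 0.5), (0.2, 1.3), (-2.0, -0.4)]:
    assert abs(extended_kernel_equal_time(x, y) - airy_kernel(x, y)) < 1e-8
```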
\subsection{Discrete polynuclear growth} \label{discpol} The second object of interest in this paper is the so-called Discrete polynuclear growth model (PNG), \cite{Jo2}, \cite{KS}. It is defined by \begin{equation} h(x,t+1) = \max{(h(x-1,t),h(x,t),h(x+1,t))} + \omega(x,t+1) \end{equation} where $x \in \mathbb{Z}$, $t \in \mathbb{N}$, $h(x,0)=0$ $\forall x \in \mathbb{Z}$ and $\omega(x,t+1) = 0$ if $|x| > t$ or if $t-x$ is even, otherwise $\omega(x,t+1)$ are independent geometric random variables with \begin{equation}\label{geometric} \mathbb{P} [ \omega(x,t+1) = m ] = (1 - q) q^m \qquad 0 < q < 1 . \end{equation} It is convenient to extend the process to all $x \in \mathbb{R}$ by setting $h(x,t) = h(\lfloor x \rfloor,t)$. A description of this process using words and pictures goes as follows: At time $t=1$ a block of width one and height $\omega(0,1)$ appears over the interval $[0,1)$. This block then grows sideways one unit in both directions and at time $t=2$ two blocks of width one and heights $\omega(-1,2)$, $\omega(1,2)$ are placed on top of it over the intervals $[-1,0)$ and $[1,2)$ respectively. These blocks now grow one unit in each direction disregarding overlaps. At time $t=3$ three new blocks are placed over $[-2,-1)$, $[0,1)$ and $[2,3)$. This procedure goes on, producing at each time the curve $h(x,t)$ that can be thought of as a growing interface. Figure 2 shows a realization for $t=1,2,3$. \begin{figure}[ht] \label{fig2} \begin{center} \includegraphics{bildfil.2} \caption{A sample of the discrete PNG process for $t=1,2,3$. The shaded blocks represent the growth due to the random variables $\omega (x,t)$.} \end{center} \end{figure} The process $h$ is closely connected to a growth model, $G(M,N)$, studied in \cite{Jo1}. Let $w(i,j)$, $(i,j) \in \mathbb{Z}_+^2$, be independent random variables with distribution given by (\ref{geometric}).
Define \begin{equation*} G(M,N) = \max_{\pi} \sum_{(i,j) \in \pi} w(i,j) \end{equation*} where the maximum is taken over all up/right paths from $(1,1)$ to $(M,N)$. One can think of $G(M,N)$ as a point-to-point last-passage time and \begin{equation*} G_{pl}(N) = \max_{|K|<N} G(N+K,N-K) \end{equation*} as a point-to-line last-passage time. In \cite{Jo2} it is shown that \begin{equation*} G(i,j) = h(i-j,i+j-1). \end{equation*} The definition of $G_{pl}$ therefore inspires the study of $K \rightarrow h(2K,2N-1)$, that is, the height curve at even sites at time $2N-1$. In \cite{Jo2} the rescaled process, $t \rightarrow H_N(t)$, $t \in \mathbb{R}$, is, for appropriate $t$, defined by \begin{equation*} d N^{1/3} H_N(t) = h \left( 2 \frac{1 + \sqrt{q}}{1 - \sqrt{q}} d^{-1} N^{2/3} t,2N-1 \right) - \frac{2 \sqrt{q}}{1- \sqrt{q}} N \end{equation*} and for the rest of $\mathbb{R}$ by the use of linear interpolation. The constant $d$ is given by \begin{equation*} d = \frac{(\sqrt{q})^{1/3}(1 + \sqrt{q})^{1/3}}{1- \sqrt{q}}. \end{equation*} The main result about $H_N(t)$ in \cite{Jo2} is the following theorem: \begin{theorem}[Johansson]\label{H_thm} Let $\mathcal{A}(t)$ be the Airy process defined by its finite-dimensional distributions and $T$ be an arbitrary positive number. There is a continuous version of $\mathcal{A}(t)$ and \begin{equation*} H_N(t) \rightarrow \mathcal{A}(t) - t^2 \end{equation*} as $N \rightarrow \infty$ in the weak$^*$-topology of probability measures on $C(-T,T)$. \end{theorem} In particular, this theorem shows that the fluctuations of $h$ are of order $N^{1/3}$ and that non-trivial correlations in the transversal direction show up when looking at times $t_i$ where $t_{i+1} - t_i \sim N^{2/3}$. Motivated by Theorems \ref{theorem1} and \ref{H_thm} one could guess that $h$, on a time scale of order $N^{\gamma}$, $0< \gamma < 2/3$, behaves like a Brownian motion. The theorem below shows that this is indeed the case.
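The identity $G(i,j)=h(i-j,i+j-1)$ from \cite{Jo2} can be tested directly in simulation. The following sketch is our own code, not taken from the paper; it feeds one sample of the geometric weights $w(i,j)$ into the last-passage recursion for $G$ and, via the correspondence $\omega(i-j,\,i+j-1)=w(i,j)$ (with $\omega$ vanishing elsewhere, which is the parity bookkeeping implied by the identity), into the PNG recursion for $h$:

```python
import random

random.seed(1)
q, N = 0.5, 6  # verify G(i, j) = h(i - j, i + j - 1) for 1 <= i, j <= N
T = 2 * N - 1

def geometric(q):  # P[w = m] = (1 - q) q^m, m = 0, 1, 2, ...
    m = 0
    while random.random() < q:
        m += 1
    return m

w = {(i, j): geometric(q) for i in range(1, N + 1) for j in range(1, N + 1)}

# point-to-point last-passage times over up/right paths from (1, 1)
G = {}
for i in range(1, N + 1):
    for j in range(1, N + 1):
        G[i, j] = max(G.get((i - 1, j), 0), G.get((i, j - 1), 0)) + w[i, j]

# PNG recursion h(x,t) = max(h(x-1,t-1), h(x,t-1), h(x+1,t-1)) + omega(x,t),
# with omega(i - j, i + j - 1) = w(i, j) and omega = 0 elsewhere
h = {}
for t in range(1, T + 1):
    for x in range(-t, t + 1):
        prev = max(h.get((x - 1, t - 1), 0), h.get((x, t - 1), 0),
                   h.get((x + 1, t - 1), 0))
        omega = w.get(((t + x + 1) // 2, (t - x + 1) // 2), 0) \
            if (t + x) % 2 == 1 else 0
        h[x, t] = prev + omega

assert all(G[i, j] == h[i - j, i + j - 1]
           for i in range(1, N + 1) for j in range(1, N + 1))
```

Weights outside the sampled $N\times N$ box are set to zero; this does not affect the comparison, since both $G(i,j)$ and $h(i-j,i+j-1)$ for $i,j\leq N$ only depend on $w$ inside $[1,i]\times[1,j]$.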
Given some $m \in \mathbb{Z}_+$ set \begin{align*} K_1 & = \frac{1 + \sqrt{q}}{1 - \sqrt{q}}d^{-1}N^{2/3}\tau_1 \\ K_{i+1} & = K_i + \frac{1 + \sqrt{q}}{1 - \sqrt{q}}d^{-1}s_{i+1}N^{\gamma} \qquad i=1, \ldots ,m-1 \end{align*} where $0 < \gamma < \frac{2}{3}$ and $\tau_1$, $s_i > 0$ are real numbers such that $K_i \in \mathbb{Z}$. Define \begin{equation*} J_1 = \frac{2 \sqrt{q}}{1 - \sqrt{q}}N + \psi d N^{1/3} \in \mathbb{Z}_+ \end{equation*} where $\psi$ is any real number such that $J_1 \in \mathbb{Z}$. \begin{theorem}\label{PNG} Define the sets $A_i$, $i=2, \ldots ,m$, by \begin{equation*} A_i = \left\{ j \in \mathbb{Z}_+ | j = J_1 + x_i d N^{\gamma / 2}, \, a_i \leq x_i \leq b_i \right\} \end{equation*} where $a_i,b_i$ are given real numbers. There exists $c>0$ such that \begin{align*} \mathbb{P} \big[ h(2K_2, 2N-1) \in A_2 , \ldots , h(2K_m, 2N-1) \in & A_m \\ & | \, h(2K_1, 2N-1) = J_1 \big] \\ = \int_{a_2}^{b_2} dx_2 \cdots \int_{a_m}^{b_m} dx_m \frac{1}{\sqrt{4 \pi s_2}} e^{-\frac{x_2^2}{4 s_2}} \prod_{i=3}^m \frac{1}{\sqrt{4 \pi s_i}} & e^{-\frac{(x_i - x_{i-1})^2}{4 s_i}} + E \end{align*} where \begin{equation*} |E| \leq N^{-c} \prod_{i=2}^m(b_i-a_i) C_{\psi,s_2, \ldots ,s_m} . \end{equation*} \end{theorem} \section{Proof of Theorem \ref{theorem1}} The connection (\ref{airy=extairy}) shows that we can prove the theorem by studying the largest particle in the extended Airy point process at times $t_1, \ldots , t_m$. The appearance of $C$ in formulae below should be interpreted as follows: There exists a positive constant, possibly depending on $p_i$, $s_i$, $i=2, \ldots , m$, which validates the corresponding inequality when inserted in place of $C$. Other error terms will typically also depend on $p_i$, $s_i$. Set $J_1 = [p_1 - \delta_1, p_1] \subset \mathbb{R}_{t_1}$ and $J_i = [p_i - \sqrt{\epsilon} \delta_i,p_i] \subset \mathbb{R}_{t_i}$, $2 \leq i \leq m$, where $\delta_i > 0$ and $p_i = p_{i-1} + y_i \sqrt{\epsilon}$, $y_i \in \mathbb{R}$.
We also set $I_i = ( p_i,\infty)$, $i=1, \ldots , m$. We will show that \begin{multline}\label{target} \lim_{\scriptstyle{\delta_1 , \ldots , \delta_m \rightarrow 0^+}} \frac{1}{\delta_2 \cdots \delta_m} \frac{\mathbb{P} \left[ \# J_1 \geq 1, \ldots, \# J_m \geq 1, \# I_1 = \ldots = \# I_m = 0 \right]} {\mathbb{P} \left[\# J_1 \geq 1, \# I_1 = 0 \right]} \\ = \frac{1}{\sqrt{(4 \pi)^{m-1} s_2 \cdots s_{m}}} e^{-\frac{y_2^2}{4s_2} - \ldots -\frac{y_m^2}{4s_m}} + \mathcal{O}(\sqrt{\epsilon} \log{\epsilon}), \end{multline} implying Theorem \ref{theorem1}. The first step is to show that the probabilities in the numerator and denominator above can be approximated by appropriate expected values. For $k,n \in \mathbb{Z}_+$ we shall use the common notation \begin{equation*} n^{[k]} = n (n-1) \cdots (n-k+1). \end{equation*} Let $J$ be an interval on some time line and $\chi_A$ be the indicator function for the event $A$. Since \begin{eqnarray*} \#J - \chi_{\{ \#J \geq 1\}} & = & \Bigg\{ \begin{array}{ll} k-1 & \quad \, ; \, \#J = k \geq 2 \\ 0 & \quad \, ; \, \#J = 0,1 \end{array} \\ \#J^{[2]} = \#J(\#J - 1) & = & \Bigg\{ \begin{array}{ll} k(k-1) & ; \, \#J = k \geq 2 \\ 0 & ; \, \#J = 0,1 \end{array} \end{eqnarray*} it holds that \begin{equation} \label{ineq1} 0 \leq \#J - \chi_{\{ \#J \geq 1\}} \leq \#J^{[2]}.
\end{equation} This, together with the following facts, will be useful: \begin{multline}\label{prob_approx} \mathbb{P} \left[ \# J_1 \geq 1, \ldots, \# J_m \geq 1, \# I_1 = 0 \right] \\ - \mathbb{P} \left[ \# J_1 \geq 1, \ldots, \# J_m \geq 1, \# I_1 = \ldots = \# I_m = 0 \right] \end{multline} \begin{multline*} = \mathbb{P} \left[ \# J_1 \geq 1, \ldots, \# J_m \geq 1, \#I_1 = 0, (\# I_2 = \ldots = \# I_m = 0)^c \right] \\ = \mathbb{P} \left[ \# J_1 \geq 1, \ldots, \# J_m \geq 1, \#I_1 = 0, \cup_{i=2}^m \{ \# I_i \neq 0 \} \right] \\ \leq \sum_{i=2}^m \mathbb{P} \left[ \# J_1 \geq 1, \ldots, \# J_m \geq 1, \#I_i \neq \#I_1 \right] \end{multline*} We now express the probabilities in terms of expected values. If we set \begin{equation} T(J_i) = \# J_i - \chi_{\{ \# J_i \geq 1 \}} , \end{equation} then \begin{align*} \mathbb{P} \big[ \# J_1 \geq 1, & \ldots, \# J_m \geq 1, \# I_1 = 0 \big] \\ & = \mathbb{E} \left[ ( \# J_1 - T(J_1)) \cdots ( \# J_m - T(J_m)) \cdot \chi_{\{ \#I_1 = 0 \}} \right] \\ & = \mathbb{E} \left[ (\# J_1 \cdots \# J_m + U(J_1, \ldots , J_m)) \cdot \chi_{\{ \#I_1 = 0 \}} \right] \end{align*} where $U$ is defined by the last equality. In view of (\ref{ineq1}) and (\ref{corr_eq}) we get, for example, \begin{multline*} \mathbb{E} \left[ T(J_1) \cdot \# J_2 \cdots \# J_m \right] \leq \mathbb{E} \left[ \# J_1^{[2]} \cdot \# J_2 \cdots \# J_m \right] \\ = \int_{J_1^2 \times J_2 \cdots \times J_m} R (x_1,x_2,\ldots,x_{m+1}) \mathrm{dx} = \mathcal{O} (\delta_1^2 \cdot \delta_2 \cdots \delta_m). \end{multline*} Since $U(J_1, \ldots , J_m)$ is a sum of terms like this one (each containing at least one factor $T(J_i)$) we see that \begin{equation*} \lim_{\scriptstyle{\delta_1 , \ldots , \delta_m \rightarrow 0^+}} \frac{1}{\delta_1 \cdots \delta_m} \mathbb{E} \left[ U(J_1, \ldots , J_m) \cdot \chi_{\{ \#I_1 = 0 \}} \right] = 0.
\end{equation*} Repetition of this argument together with (\ref{prob_approx}) shows that \begin{align*} \lim_{\scriptstyle{\delta_1, \ldots ,\delta_m \rightarrow 0^+}} & \frac{1}{\delta_1 \cdots \delta_m} \mathbb{P} \left[ \# J_1 \geq 1, \ldots, \# J_m \geq 1, \# I_1 = \ldots = \# I_m = 0 \right] \\ & = \lim_{\scriptstyle{\delta_1 , \ldots , \delta_m \rightarrow 0^+}} \frac{1}{\delta_1 \cdots \delta_m} \mathbb{E} \left[ \# J_1 \cdots \# J_m \cdot \chi_{\{\#I_1 = 0\}} \right] \\ & \, \, \, \, \, \, + \mathcal{O} \left(\sum_{i=2}^m \lim_{\scriptstyle{\delta_1, \ldots , \delta_m \rightarrow 0^+}} \frac{1}{\delta_1 \cdots \delta_m} \mathbb{E} \left[ \# J_1 \cdots \# J_m \cdot \chi_{\{\#I_i \neq \#I_1\}} \right] \right) \end{align*} and also that \begin{equation*} \lim_{\scriptstyle{\delta_1 \rightarrow 0^+}} \frac{1}{\delta_1} \mathbb{P} \left[ \# J_1 \geq 1, \# I_1 = 0 \right] = \lim_{\scriptstyle{\delta_1 \rightarrow 0^+}} \frac{1}{\delta_1} \mathbb{E} \left[ \# J_1 \cdot \chi_{\{\#I_1 = 0\}} \right] . \end{equation*} Later it will be shown that \begin{equation}\label{uggly_eq} \lim_{\scriptstyle{\delta_1 , \ldots ,\delta_m \rightarrow 0^+}} \frac{1}{\delta_1 \cdots \delta_m} \mathbb{E} \left[ \# J_1 \cdots \# J_m \cdot \chi_{\{\#I_i \neq \#I_1\}} \right] = \mathcal{O}(\sqrt{\epsilon} \log{\epsilon}) \end{equation} but let us first be constructive. We want to show that \begin{multline} \label{constr_eq} \lim_{\scriptstyle{\delta_1 , \ldots , \delta_m \rightarrow 0^+}} \frac{1}{\delta_1 \cdots \delta_m} \mathbb{E} \left[ \# J_1 \cdots \# J_m \cdot \chi_{\{\#I_1 = 0\}} \right] \\ = \lim_{\scriptstyle{\delta_1 \rightarrow 0^+}} \frac{1}{\delta_1} \mathbb{E} \left[ \# J_1 \cdot \chi_{\{\#I_1 = 0\}} \right] \\ \times \frac{1}{\sqrt{(4 \pi)^{m-1} s_2 \cdots s_{m}}} \, e^{-\frac{y_2^2}{4s_2} - \ldots -\frac{y_m^2}{4s_m}} + \mathcal{O}(\sqrt{\epsilon}) .
\end{multline} To start with, we need to find a representation of the left-hand side of (\ref{constr_eq}) that is suitable for analysis. \begin{align*} \mathbb{E} \big[ \# J_1 \cdots \# J_m \cdot \chi_{\{\#I_1 = 0\}} \big] & = \mathbb{E} \left[ \# J_1 \cdots \# J_m \cdot \lim_{\lambda \rightarrow \infty}e^{- \lambda \#I_1} \right] \\ & = \mathbb{E} \left[ \# J_1 \cdots \# J_m \cdot \lim_{\lambda \rightarrow \infty} \sum_{k=0}^{\infty} \frac{(e^{-\lambda} - 1)^k}{k!} \# I_1^{[k]} \right] \\ &= \mathbb{E} \left[ \# J_1 \cdots \# J_m \cdot \sum_{k=0}^{\infty} \frac{(- 1)^k}{k!} \# I_1^{[k]} \right] \\ & = \sum_{k=0}^{\infty} \frac{(-1)^k}{k!} \mathbb{E} \left[ \# J_1 \cdots \# J_m \cdot \# I_1^{[k]} \right] \end{align*} In the second equality we have used the formula \begin{equation} \label{comb_formula} e^{\lambda n} = \sum_{k=0}^{\infty} \frac{({e}^{\lambda} -1)^k}{k!} n^{[k]}. \end{equation} In the fourth equality we take the sum out of the expectation. By Fubini's theorem we are allowed to do this since \begin{multline*} \mathbb{E} \left[ \# J_1 \cdots \# J_m \cdot \sum_{k=0}^{\infty} \frac{\# I_1^{[k]}}{k!} \right] \leq \mathbb{E} \left[ \# J_1 \cdots \# J_m \cdot \sum_{k=0}^{\infty} \frac{\# I_1^{k}}{k!} \right] \\ = \mathbb{E} \left[ \# J_1 \cdots \# J_m \cdot e^{\# I_1} \right] \leq \mathbb{E} \left[ \# J_1^2 \cdots \# J_m^2 \right]^{1/2} \, \mathbb{E} \left[ e^{2 \# I_1} \right]^{1/2} < \infty. \end{multline*} In fact $\mathbb{E} \left[ z^{\# I_1} \right]$ is an entire function in $z$, \cite{So}.
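The combinatorial identity (\ref{comb_formula}) underlying the second equality, which expands $e^{\lambda n}$ in falling factorials $n^{[k]}$ (equivalent to the binomial theorem applied to $(1+(e^{\lambda}-1))^n$), can be checked numerically; a minimal sketch of our own:

```python
import math

# numerical check of e^{lambda n} = sum_k (e^lambda - 1)^k / k! * n^{[k]},
# where n^{[k]} = n (n - 1) ... (n - k + 1) vanishes for k > n
def falling_factorial(n, k):
    prod = 1
    for i in range(k):
        prod *= n - i
    return prod

for n in range(8):
    for lam in (-0.7, 0.3, 1.5):
        rhs = sum((math.exp(lam) - 1) ** k / math.factorial(k)
                  * falling_factorial(n, k) for k in range(n + 1))
        assert abs(rhs - math.exp(lam * n)) <= 1e-9 * max(1.0, math.exp(lam * n))
```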
Another technical issue we need to deal with is to prove that \begin{align*} \lim_{\scriptstyle{\delta_1, \ldots ,\delta_m \rightarrow 0^+}} & \frac{1}{\delta_1 \cdots \delta_m} \sum_{k=0}^{\infty} \frac{(-1)^k}{k!} \mathbb{E} \left[ \# J_1 \cdots \# J_m \cdot \# I_1^{[k]} \right] \\ & = \sum_{k=0}^{\infty} \frac{(-1)^k}{k!} \lim_{\scriptstyle{\delta_1, \ldots ,\delta_m \rightarrow 0^+}} \frac{1}{\delta_1 \cdots \delta_m} \mathbb{E} \left[ \# J_1 \cdots \# J_m \cdot \# I_1^{[k]} \right] \\ & = \sum_{k=0}^{\infty} \frac{(-1)^k}{k!} \int_{I_1^k} (\sqrt{\epsilon})^{m-1} R (p_1,\ldots ,p_m, x_1,\ldots,x_k) \, \mathrm{dx}. \end{align*} Please recall definition (\ref{corr_def}) and note that the second equality is immediate from (\ref{corr_eq}). Define $G_k(z_1, \ldots , z_m)$, $z_i \in \mathbb{R}_{t_i}$, by \begin{equation} G_k(z_1, \ldots , z_m) = \frac{(-1)^k}{k!} \int_{I_1^k} R (z_1,\ldots ,z_m, x_1,\ldots,x_k) \, \mathrm{dx}. \end{equation} The identity sought for is \begin{multline} \label{sought} \lim_{\scriptstyle{\delta_1, \ldots ,\delta_m \rightarrow 0^+}} \frac{1}{\delta_1 \cdots \delta_m} \sum_{k=0}^{\infty} \int_{J_1 \times \cdots \times J_m} G_k(z_1, \ldots, z_m) \, \mathrm{dz} \\ = \sum_{k=0}^{\infty} (\sqrt{\epsilon})^{m-1} G_k(p_1, \ldots, p_m) . \end{multline} This will hold if for some neighbourhood $\Omega$ of $(p_1, \ldots, p_m)$ there exist constants $C_k > 0$ such that \begin{equation*} |G_k(z_1, \ldots , z_m)| \leq C_k \end{equation*} if $(z_1, \ldots , z_m) \in \Omega$ and \begin{equation*} \sum_{k=0}^{\infty} C_k < \infty. \end{equation*} That this is indeed the case follows from calculations similar to the ones appearing in the proof of Lemma \ref{princ_lemma} which is given at the end of this section. 
The following lemma can be found in \cite{Ok}: \begin{lemma} \label{phi_theorem} Let $\alpha > 0$, then \begin{equation*} \int_{-\infty}^{\infty} e^{\alpha z} \mathrm{Ai}(x+z) \mathrm{Ai}(y+z) \, \mathrm{dz} = \frac{1}{\sqrt{4 \pi \alpha}} e^{-\frac{(x-y)^2}{4 \alpha} - \frac{\alpha}{2}(x+y) + \frac{\alpha^3}{12}}. \end{equation*} \end{lemma} In this section we call this function $\phi_{\alpha}(x,y)$ or simply $\phi(x,y)$ when it is clear what $\alpha$ is. From Lemma \ref{phi_theorem} and the definition of the Airy kernel it follows that, for $s<t$ \begin{multline*} A_{s,t}(x,y) = \int_{0}^{\infty} e^{z(t-s)} \mathrm{Ai}(x+z) \mathrm{Ai}(y+z) \, \mathrm{dz} - \phi_{t-s}(x,y) \\ =: \widetilde{A}_{s,t}(x,y) - \phi_{t-s}(x,y). \end{multline*} For $s \geq t$ it is convenient to set $\widetilde{A}_{s,t}(x,y) = A_{s,t}(x,y)$. \begin{lemma} \label{princ_lemma} Suppose that $1 \leq v \leq m$, $v \in \mathbb{Z}$. Then, for some $C$ depending on $p_1, \ldots , p_m$, \begin{align} \label{D_u(k)} \nonumber & (\sqrt{\epsilon})^{m-1} \, \int_{I_v^k} R(p_1, \ldots , p_m, x_1, \ldots , x_k) \, \mathrm{dx} \\ & \qquad \qquad = (\sqrt{\epsilon})^{m-1} \phi(p_1,p_2) \phi(p_2,p_3) \cdots \phi(p_{m-1},p_m) \\ \nonumber & \qquad \qquad \qquad \qquad \times \int_{I_1^k} R(p_1, x_1, \ldots , x_k) \, \mathrm{dx} + \sqrt{\epsilon} \, \mathcal{O} \left( (Ck)^{\frac{k+m}{2}} \right). \end{align} Furthermore, if $v \geq 2$ then \begin{align} \label{D_{1,v}} \nonumber & (\sqrt{\epsilon})^{m-1} \, \int_{I_1} \mathrm{dx} \int_{I_v} \mathrm{dy} \, R(p_1, \ldots , p_m, x, y) \\ & \qquad = (\sqrt{\epsilon})^{m-1} \, \phi(p_1,p_2) \phi(p_2,p_3) \cdots \phi(p_{m-1},p_m) \\ \nonumber & \qquad \qquad \times \left( \int_{I_1^2} R(p_1,x_1,x_2) \, \mathrm{dx} + \int_{I_1} R(p_1,x) \, \mathrm{dx} \right) + \mathcal{O}(\sqrt{\epsilon} \log{\epsilon}). \end{align} \end{lemma} \noindent From (\ref{D_u(k)}) we now get (\ref{constr_eq}). We turn now to (\ref{uggly_eq}). 
Clearly \begin{multline*} \mathbb{E} \left[ \# J_1 \cdots \# J_m \cdot \chi_{\{\#I_i \neq \#I_1\}} \right] \leq \mathbb{E} \left[ \# J_1 \cdots \# J_m \cdot (\#I_i - \#I_1)^2 \right] \\ = \mathbb{E} \left[ \# J_1 \cdots \# J_m \cdot (\#I_i^{[2]} + \#I_1^{[2]} + \#I_i + \#I_1 -2 \#I_1 \#I_i ) \right]. \end{multline*} We now obtain (\ref{uggly_eq}) since \begin{align}\label{corr_eq_2} \nonumber (\sqrt{\epsilon})^{m-1} & \bigg( \int_{I_i^2} R(p_1, \ldots , p_m,x,y) \, \mathrm{dx} \mathrm{dy} + \int_{I_1^2} R(p_1, \ldots , p_m,x,y) \, \mathrm{dx} \mathrm{dy} \\ & \qquad + \int_{I_i} R(p_1, \ldots , p_m,x) \, \mathrm{dx} + \int_{I_1} R(p_1, \ldots , p_m,x) \, \mathrm{dx} \\ \nonumber & \qquad \qquad \qquad - 2 \int_{I_1 \times I_i} R(p_1, \ldots , p_m,x,y) \, \mathrm{dx} \mathrm{dy} \bigg) = \mathcal{O} (\sqrt{\epsilon} \log{\epsilon}) \end{align} by Lemma \ref{princ_lemma}. To get (\ref{target}) we need one more result, namely that \begin{equation} \label{pos} \lim_{\delta_1 \rightarrow 0^+} \frac{1}{\delta_1} \mathbb{E} [ \# J_1 \chi_{\{ \# I_1 = 0 \}}] > 0. \end{equation} Let $F_2(s)$ be the Tracy-Widom distribution function corresponding to the largest eigenvalue in the GUE, \cite{TW}. Then \begin{align}\label{twprimeq} \nonumber \lim_{\delta_1 \rightarrow 0^+} \frac{1}{\delta_1} & \mathbb{E} [ \# J_1 \chi_{\{ \# I_1 = 0 \}}] \\ & = \lim_{\delta_1 \rightarrow 0^+} \frac{1}{\delta_1} \sum_{k=0}^{\infty} \frac{(-1)^k}{k!} \int_{J_1} dx_0 \int_{I_1^k} d^k x \, \mathrm{det} (A(x_i,x_j))_{0 \leq i,j \leq k} \\ & = \sum_{k=0}^{\infty} \frac{(-1)^k}{k!} \nonumber \int_{I_1^k} \mathrm{det} (A(x_i,x_j))_{0 \leq i,j \leq k} d^k x = F'_2(p_1) \end{align} where in the last line $x_0 = p_1$. The last equality can be obtained by differentiating the corresponding identity for the distribution function $F_2(t)$, \cite{TW}; we omit the details here. The first equality has been shown above and the second is a special case of (\ref{sought}).
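To make the differentiation explicit: by \cite{TW} the Tracy-Widom distribution has the Fredholm expansion
\begin{equation*}
F_2(t) = \sum_{k=0}^{\infty} \frac{(-1)^k}{k!} \int_{(t,\infty)^k} \mathrm{det} (A(x_i,x_j))_{1 \leq i,j \leq k} \, d^k x,
\end{equation*}
where $A = A_{t,t}$ is the (time-independent) Airy kernel. Differentiating term by term in $t$, only boundary terms survive, and by symmetry the $k$ integration variables contribute equally:
\begin{align*}
F_2'(t) & = \sum_{k=1}^{\infty} \frac{(-1)^k}{k!} \, (-k) \int_{(t,\infty)^{k-1}} \mathrm{det} (A(x_i,x_j))_{0 \leq i,j \leq k-1} \, d^{k-1} x \\
& = \sum_{k=0}^{\infty} \frac{(-1)^k}{k!} \int_{(t,\infty)^{k}} \mathrm{det} (A(x_i,x_j))_{0 \leq i,j \leq k} \, d^{k} x,
\end{align*}
with $x_0 = t$, which for $t = p_1$ is precisely the sum in (\ref{twprimeq}).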
Since $F_2'(s) > 0$ for all $s \in \mathbb{R}$, see \cite{TW}, we obtain (\ref{pos}). What is still left is to prove Lemma \ref{princ_lemma}. \noindent \textbf{Proof of Lemma \ref{princ_lemma}:} We start with (\ref{D_u(k)}). For $0 \leq r \leq m-1$ and $k \geq 1$ define $D_r(k)$ by \begin{multline*} D_r(k) = (\sqrt{\epsilon})^r \phi(p_1,p_2) \phi(p_2,p_3)\cdots \phi(p_r,p_{r+1}) \int_{I_v^k} \mathrm{dx} \\ \times \left| \begin{array}{ccccc} A(p_{r+1},p_1) & \sqrt{\epsilon} A(p_{r+1},p_{r+2}) & \ldots & \sqrt{\epsilon} A(p_{r+1},p_{m}) & A(p_{r+1},x_j) \\ \vdots & \vdots & \, & \vdots & \vdots \\ A(p_{m},p_1) & \sqrt{\epsilon} A(p_{m},p_{r+2}) & \ldots & \sqrt{\epsilon} A(p_{m},p_{m}) & A(p_{m},x_j) \\ A(x_i,p_1) & \sqrt{\epsilon} A(x_i,p_{r+2}) & \ldots & \sqrt{\epsilon} A(x_i,p_{m}) & A(x_i,x_j) \end{array} \right| . \end{multline*} In the determinant $1 \leq i,j \leq k$ and for $r=0$ we set the empty product in front of the integral to 1. Please note that $D_0(k)$ is equal to the left hand side in (\ref{D_u(k)}). We let $\widetilde{D}_r(k)$ be almost the same as $D_r(k)$. The only difference is that we put in $\widetilde{A}(p_{r+1},p_{r+2})$ in position (1,2) in the matrix instead of $A(p_{r+1},p_{r+2})$. By using induction we shall now prove that \begin{equation} \label{ind_eq} D_0(k) = D_r(k) + \sqrt{\epsilon} \, \mathcal{O} \left( (Ck)^{\frac{k+m}{2}} \right) \end{equation} for $0 \leq r \leq m-1$. Clearly (\ref{ind_eq}) holds if $r=0$. Suppose now that (\ref{ind_eq}) holds for some $r$ such that $0 \leq r \leq m-2$. By expanding the determinant in $D_r(k)$ along the first row we see that \begin{equation} D_r(k) = D_{r+1}(k) + \widetilde{D}_r(k). \end{equation} What has to be proved is hence that \begin{equation*} \widetilde{D}_r(k) = \sqrt{\epsilon} \, \mathcal{O} \left( (Ck)^{\frac{k+m}{2}} \right). 
\end{equation*} To do this we shall use Hadamard's inequality, but before recalling it we present a lemma that will be used frequently from now on. Its proof is readily obtained from Lemma \ref{phi_theorem} and the standard estimates (see \cite{Ol}) \begin{align*} |\mathrm{Ai}(x)| & \leq C_M e^{-2 |x|^{3/2}/3} \\ |\mathrm{Ai'}(x)| & \leq C_M \sqrt{|x|} e^{-2 |x|^{3/2}/3} \end{align*} that hold for $x \geq - M$. \begin{lemma}\label{K_approx} Suppose that $s<t$ and $M > 0$. For $x,y \geq -M$ and any $\lambda > 0$ it holds that \begin{align*} |A_{t,s}(x,y)| & \leq C_{M,\lambda} e^{-\lambda(x+y)} \\ A_{t,s}(x,y) & = A_{t,t}(x,y) + \mathcal{O}(t-s) \, e^{-\lambda(x+y)} \\ A_{s,t}(x,y) & = A_{t,t}(x,y) - (1 + \mathcal{O}(t-s)) \frac{1}{\sqrt{4 \pi (t-s)}} \, e^{-\frac{(x-y)^2}{4 (t-s)}} \\ & \hspace{7cm} + \mathcal{O}(t-s) \, e^{-\lambda(x+y)}. \end{align*} The errors depend only on $M$ and $\lambda$. Moreover, \begin{equation*} |A_{s,s}(x+\alpha,y) - A_{s,s}(x,y)| \leq \alpha \, C_{M,\lambda} e^{-\lambda (x+y)} \end{equation*} for all $\alpha > 0$. \end{lemma} Let $B = (b_{i,j})_{1\leq i,j \leq n}$, $b_{i,j} \in \mathbb{R}$, be a matrix. Hadamard's inequality states that \begin{equation} |\mathrm{det} B| \leq \left( \prod_{i=1}^n \sum_{j=1}^n b_{ji}^2 \right)^{1/2}. \end{equation} Below we bound the column sums $\sum_{j=1}^n b_{ji}^2$ for the matrix appearing in $\widetilde{D}_r(k)$.
Column 1: \begin{equation*} \sum_{j=r+1}^m A^2(p_{j},p_1) + \sum_{j=1}^k A^2(x_j,p_1) \leq C (k+m) \end{equation*} Column 2: \begin{multline*} \epsilon \, \left( \widetilde{A}^2(p_{r+1},p_{r+2}) + \sum_{j=r+2}^m A^2(p_{j},p_{r+2}) + \sum_{j=1}^k A^2(x_j,p_{r+2}) \right) \\ \leq \epsilon \, \bigg\{ \begin{array}{ll} C (k+m) & \mathrm{if} \, \, v \geq r+2 \\ Cm + C \sum_{j=1}^k (\widetilde{A}(x_j,p_{r+2}) - \phi(x_j,p_{r+2}))^2 & \mathrm{if} \, \, v < r+2 \end{array} \end{multline*} Columns $3, \ldots , m-r$ ($r+3 \leq i \leq m$): \begin{equation*} \epsilon \, \left( \sum_{j=r+1}^m A^2(p_{j},p_{i}) + \sum_{j=1}^k A^2(x_j,p_i) \right) \leq C (k+m) \end{equation*} Last $k$ columns ($1 \leq i \leq k$): \begin{multline*} \sum_{j=r+1}^m A^2(p_{j},x_i) + \sum_{j=1}^k A^2(x_j,x_i) \\ \leq \Bigg\{ \begin{array}{ll} \sum_{j=r+1}^{v-1} \left( \widetilde{A}(p_{j},x_i) - \phi(p_{j},x_i) \right)^2 + C k e^{-2x_i} & \textrm{if } v \geq r+2 \\ C (k+m) e^{-2x_i} & \textrm{if } v < r+2 \end{array} \end{multline*} Next we multiply everything together, take the square root and then integrate. Assume that $v<r+2$. \begin{multline*} \int_{I_v^k} \Bigg[ C(k+m) \, \epsilon \left( C + C \sum_{j=1}^k (\widetilde{A}(x_j,p_{r+2}) - \phi(x_j,p_{r+2}))^2 \right)\\ \times (C(m+k))^{m-r-2} \, (C(k+m))^k e^{-2(x_1 + \ldots + x_k)} \Bigg]^{1/2} \, \mathrm{dx} \\ \leq \sqrt{\epsilon} \, (Ck)^{\frac{k+m}{2}} \int_{I_v^k} e^{-(x_1 + \ldots + x_k)} \left( 1 + \sum_{j=1}^k (1 + \phi(x_j,p_{r+2})) \right) \, \mathrm{dx} \\ \leq \sqrt{\epsilon} \, (Ck)^{\frac{k+m}{2}} . \end{multline*} The case $v \geq r+2$ can be treated similarly.
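The final integral estimate above uses that each $\phi$-factor integrates against the exponential weight to a bounded quantity; for the Gaussian factor in $\phi_{\alpha}$ (cf. Lemma \ref{phi_theorem}), completing the square gives the elementary identity
\begin{equation*}
\int_{\mathbb{R}} e^{-x} \, \frac{1}{\sqrt{4 \pi \alpha}} \, e^{-\frac{(x-p)^2}{4 \alpha}} \, \mathrm{dx} = e^{-p + \alpha},
\end{equation*}
which is bounded uniformly for $p$ in a compact set and $0 < \alpha \leq 1$. Each integration variable thus contributes at most a constant, and these constants are absorbed into $(Ck)^{\frac{k+m}{2}}$.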
To obtain (\ref{D_u(k)}) it remains to show that \begin{multline*} \int_{I_v^k} \mathrm{det} \left[ \begin{array}{cc} A(p_m,p_1) & A(p_m,x_j) \\ A(x_i,p_1) & A(x_i,x_j) \end{array} \right]_{1 \leq i,j \leq k} \, \mathrm{dx} \\ = \int_{I_1^k} \mathrm{det} \left[ \begin{array}{cc} A(p_1,p_1) & A(p_1,x_j) \\ A(x_i,p_1) & A(x_i,x_j) \end{array} \right]_{1 \leq i,j \leq k} \, \mathrm{dx} + \sqrt{\epsilon} \, \mathcal{O} \left( (Ck)^{\frac{k+m}{2}} \right). \end{multline*} This is quite easily achieved using Hadamard's inequality and Lemma \ref{K_approx}. We do not present the details here but instead go on to prove (\ref{D_{1,v}}). The first part of the proof will be similar to the proof of (\ref{D_u(k)}) and the second part is an application of Lemma \ref{approx_delta} below. Let $D_r(2)$ and $\widetilde{D}_r(2)$ be as defined above with the exception that the variables $x_1$ and $x_2$ are now integrated over $I_1$ and $I_v$ respectively. By construction $D_0(2)$ equals the left hand side in (\ref{D_{1,v}}). If we can show that \begin{equation} \label{D_r(2)} \widetilde{D}_r(2) = \mathcal{O}(\sqrt{\epsilon}) \end{equation} then by the same argument as above \begin{equation*} D_0(2) = D_{m-1}(2) + \mathcal{O}(\sqrt{\epsilon}). \end{equation*} To see this we shall only need the trivial fact that \begin{equation*} |\mathrm{det} B| \leq \prod_{i=1}^n \sum_{j=1}^n |b_{ji}| \end{equation*} where as before $B$ is a real $n \times n$ matrix. Define $B$ as the $(m+2-r) \times (m+2-r)$ matrix appearing in $\widetilde{D}_r(2)$. We now estimate the column sums \begin{equation*} B_i := \sum_{j=1}^n |b_{ji}| . 
\end{equation*} Column 1: \begin{equation*} B_1 = |A(p_{r+1},p_1)| + \ldots + |A(p_{m},p_1)| + |A(x_1,p_1)| + |A(x_2,p_1)| \leq Cm \end{equation*} Column 2: \begin{multline*} B_2 = \sqrt{\epsilon} \, \bigg( |\widetilde{A}(p_{r+1},p_{r+2})| + |A(p_{r+2},p_{r+2})| + \ldots + |A(p_{m},p_{r+2})| \\ \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad + |A(x_1,p_{r+2})| + |A(x_2,p_{r+2})| \bigg) \\ \leq \sqrt{\epsilon} \, \left( Cm + |A(x_1,p_{r+2})| + |A(x_2,p_{r+2})| \right) \end{multline*} Middle columns (if any) ($r+3 \leq i \leq m$): \begin{equation*} B_i = \sqrt{\epsilon} \, \left( |A(p_{r+1},p_i)| + \ldots + |A(p_{m},p_i)| + |A(x_1,p_i)| + |A(x_2,p_i)| \right) \leq Cm \end{equation*} Last two columns: \begin{multline*} B_{m-r+1} = |A(p_{r+1},x_1)| + \ldots + |A(p_{m},x_1)| \\ + |A(x_1,x_1)| + |A(x_2,x_1)| \leq Cm e^{-x_1} \end{multline*} \begin{multline*} B_{m-r+2} = |A(p_{r+1},x_2)| + \ldots + |A(p_{m},x_2)| \\ + |A(x_1,x_2)| + |A(x_2,x_2)| \leq C e^{-x_2} + \phi(x_1,x_2) + \sum_{k=r+1}^{v-1} \phi(p_k,x_2) \end{multline*} Consider the estimates above for $B_2$ and $B_{m-r+2}$. The function $A(x_2,p_{r+2})$ will contain a $\phi$-function if and only if $v < r+2$, but in this case the sum \begin{equation*} \sum_{k=r+1}^{v-1} \phi(p_k,x_2) \end{equation*} is empty. This means that we do not get terms like \begin{equation*} \phi(x_2,p_{r+2}) \phi(p_k,x_2) \end{equation*} in the product $B_2 B_{m-r+2}$. Given this observation it is easy to see that \begin{equation*} \int_{I_1 \times I_v} B_2 B_{m-r+1} B_{m-r+2} \, \mathrm{dx} = \mathcal{O}(\sqrt{\epsilon}) \end{equation*} and this proves (\ref{D_r(2)}). 
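The excluded products are indeed dangerous: two $\phi$-factors sharing an integration variable would produce a contribution of order $\epsilon^{-1/2}$. For the Gaussian factors of two such $\phi$-functions one computes, by completing the square,
\begin{equation*}
\int_{\mathbb{R}} \frac{1}{\sqrt{4 \pi \alpha}} \, e^{-\frac{(x-p)^2}{4 \alpha}} \cdot \frac{1}{\sqrt{4 \pi \alpha}} \, e^{-\frac{(x-q)^2}{4 \alpha}} \, \mathrm{dx} = \frac{1}{\sqrt{8 \pi \alpha}} \, e^{-\frac{(p-q)^2}{8 \alpha}},
\end{equation*}
which for $p = q$ and $\alpha$ of order $\epsilon$ is of order $\epsilon^{-1/2}$. Since no such products occur, every term in the product $B_2 B_{m-r+1} B_{m-r+2}$ integrates to $\mathcal{O}(1)$, and the factor $\sqrt{\epsilon}$ in $B_2$ yields the bound.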
The second part of the proof consists of showing that \begin{multline} \label{sec_part} \int_{I_1 \times I_v} \mathrm{det} \left[ \begin{array}{ccc} A(p_m,p_1) & A(p_m,x_1) & A(p_m,x_2) \\ A(x_1,p_1) & A(x_1,x_1) & A(x_1,x_2) \\ A(x_2,p_1) & A(x_2,x_1) & A(x_2,x_2) \end{array} \right] \, \mathrm{dx} \\ = \int_{I_1^2} R(p_1,x_1,x_2) \, \mathrm{dx} + \int_{I_1} R(p_1,x) \, \mathrm{dx} + \mathcal{O}(\sqrt{\epsilon} \log{\epsilon}). \end{multline} The left hand side is equal to \begin{multline*} \int_{I_1 \times I_v} \mathrm{det} \left[ \begin{array}{ccc} A(p_m,p_1) & A(p_m,x_1) & A(p_m,x_2) \\ A(x_1,p_1) & A(x_1,x_1) & \widetilde{A}(x_1,x_2) \\ A(x_2,p_1) & A(x_2,x_1) & A(x_2,x_2) \end{array} \right] \, \mathrm{dx} \\ + \int_{I_1 \times I_v} \phi(x_1,x_2) \, \mathrm{det} \left[ \begin{array}{cc} A(p_m,p_1) & A(p_m,x_1) \\ A(x_2,p_1) & A(x_2,x_1) \end{array} \right] \, \mathrm{dx} . \end{multline*} In view of Lemma \ref{K_approx} and (\ref{appr_delta_2}) in Lemma \ref{approx_delta} below we obtain (\ref{sec_part}). \begin{lemma} \label{approx_delta} Suppose that $f: \mathbb{R} \rightarrow \mathbb{R}$ has a continuous derivative and that $g: \mathbb{R}^2 \rightarrow \mathbb{R}$ has continuous first partial derivatives. Assume that \begin{equation*} |f(x)|, \, |f'(x)| \leq C e^{-x} \end{equation*} \begin{equation*} |g(x,y)|, \, |g'(x,y)| \leq C e^{-x-y}. \end{equation*} Then, for $1 \leq i,j \leq m$, it holds that \begin{equation}\label{appr_delta_1} \int_{I_i} \frac{1}{\sqrt{4 \pi \epsilon}} e^{-\frac{(x-p_j)^2}{4 \epsilon}} f(x) \, \mathrm{dx} = f(p_j) \, \int_{\frac{p_i-p_j}{\sqrt{\epsilon}}}^{\infty} \frac{1}{\sqrt{4 \pi}} e^{-\frac{x^2}{4}} \, \mathrm{dx} + \mathcal{O}(\sqrt{\epsilon}) \end{equation} \begin{equation} \label{appr_delta_2} \int_{I_i} \int_{I_j} \frac{1}{\sqrt{4 \pi \epsilon}} e^{-\frac{(x-y)^2}{4 \epsilon}} g(x,y) \, \mathrm{dx} \mathrm{dy} = \int_{I_i} g(x,x) \, \mathrm{dx} + \mathcal{O}(\sqrt{\epsilon}\log{\epsilon}). 
\end{equation} \end{lemma} \begin{prf} \begin{multline*} \int_{p_i}^{\infty} \frac{1}{\sqrt{4 \pi \epsilon}} e^{-\frac{(x-p_j)^2}{4 \epsilon}} f(x) \, \mathrm{dx} = \left[ z = \frac{x-p_j}{\sqrt{\epsilon}} \right ] \\ = \int_{\frac{p_i-p_j}{\sqrt{\epsilon}}}^{\infty} \frac{1}{\sqrt{4 \pi}} e^{-\frac{z^2}{4}} f(p_j + \sqrt{\epsilon} z) \, \mathrm{dz} \end{multline*} By Taylor's theorem \begin{equation*} f(p_j + \sqrt{\epsilon} z) = f(p_j) + \sqrt{\epsilon} z f'(p_j + \theta_{\epsilon}(z)) \end{equation*} where $\theta_{\epsilon}(z)$ is a number between $0$ and $\sqrt{\epsilon}z$. Since by assumption \begin{equation*} |f'(p_j + \theta_{\epsilon}(z))| \leq C e^{-p_j + \sqrt{\epsilon} |z|} \end{equation*} we obtain (\ref{appr_delta_1}). \begin{multline*} \int_{p_i}^{\infty} \int_{p_j}^{\infty} \frac{1}{\sqrt{4 \pi \epsilon}} e^{-\frac{(x-y)^2}{4 \epsilon}} g(x,y) \, \mathrm{dx} \mathrm{dy} = \left[ z = \frac{y-x}{\sqrt{\epsilon}} \right ] \\ = \int_{p_i}^{\infty} \int_{\frac{p_j-x}{\sqrt{\epsilon}}}^{\infty} \frac{1}{\sqrt{4 \pi}} e^{-\frac{z^2}{4}} g(x,x + \sqrt{\epsilon} z) \, \mathrm{dx} \mathrm{dz} \end{multline*} By Taylor's theorem \begin{equation*} g(x,x + \sqrt{\epsilon} z) = g(x,x) + \sqrt{\epsilon} z g'(x, x + \theta_{\epsilon}(x,z)) \end{equation*} where $\theta_{\epsilon}(x,z)$ lies between $0$ and $\sqrt{\epsilon}z$. The remainder, which carries an overall factor $\sqrt{\epsilon}$, can be discarded since \begin{multline*} \int_{p_i}^{\infty} \int_{\frac{p_j-x}{\sqrt{\epsilon}}}^{\infty} \frac{1}{\sqrt{4 \pi}} e^{-\frac{z^2}{4}} |z g'(x,x + \theta_{\epsilon}(x,z))| \, \mathrm{dx} \mathrm{dz} \\ \leq C \int_{p_i}^{\infty} \, \mathrm{dx} \int_{- \infty}^{\infty} \frac{1}{\sqrt{4 \pi}} |z| e^{-\frac{z^2}{4} - 2x + \sqrt{\epsilon} |z|} \, \mathrm{dz} \leq C , \end{multline*} so its total contribution is $\mathcal{O}(\sqrt{\epsilon})$. We now split the main term into two terms.
\begin{multline*} \int_{p_i}^{\infty} \, \mathrm{dx} \int_{\frac{p_j-x}{\sqrt{\epsilon}}}^{\infty} \, \mathrm{dz} \frac{1}{\sqrt{4 \pi}} e^{-\frac{z^2}{4}} g(x,x) \\ = \int_{p_i}^{p_i - \sqrt{\epsilon} \log{\epsilon}} \, \mathrm{dx} \int_{\frac{p_j-x}{\sqrt{\epsilon}}}^{\infty} \, \mathrm{dz} \frac{1}{\sqrt{4 \pi}} e^{-\frac{z^2}{4}} g(x,x) \\ + \int_{p_i - \sqrt{\epsilon} \log{\epsilon}}^{\infty} \, \mathrm{dx} \int_{\frac{p_j-x}{\sqrt{\epsilon}}}^{\infty} \, \mathrm{dz} \frac{1}{\sqrt{4 \pi}} e^{-\frac{z^2}{4}} g(x,x) =: \int_1 + \int_2 \end{multline*} Note that $\log{\epsilon} < 0$, so the first interval has length $-\sqrt{\epsilon} \log{\epsilon} > 0$. We can estimate the first integral by \begin{equation*} \left| \int_1 \right| \leq C \int_{p_i}^{p_i - \sqrt{\epsilon} \log{\epsilon}} \, \mathrm{dx} \int_{-\infty}^{\infty} \, \mathrm{dz} \, e^{-\frac{z^2}{4} - 2x} \leq -C \sqrt{\epsilon} \log{\epsilon}. \end{equation*} If $x \geq p_j - \sqrt{\epsilon} \log{\epsilon}$ then $\frac{p_j-x}{\sqrt{\epsilon}} \leq \log{\epsilon}$ and hence \begin{multline*} \int_{\frac{p_j - x}{\sqrt{\epsilon}}}^{\infty} \frac{1}{\sqrt{4 \pi}} e^{-\frac{z^2}{4}} \, \mathrm{dz} \\ = \int_{-\infty}^{\infty} \frac{1}{\sqrt{4 \pi}} e^{-\frac{z^2}{4}} \, \mathrm{dz} - \int_{-\infty}^{\frac{p_j - x}{\sqrt{\epsilon}}} \frac{1}{\sqrt{4 \pi}} e^{-\frac{z^2}{4}} \, \mathrm{dz} = 1 + \mathcal{O}\left( e^{- \frac{(\log{\epsilon})^2}{4}} \right). \end{multline*} We finally get \begin{multline*} \int_2 = \int_{p_i - \sqrt{\epsilon} \log{\epsilon}}^{\infty} \left( 1 + \mathcal{O}\left( e^{- \frac{(\log{\epsilon})^2}{4}} \right) \right) g(x,x) \, \mathrm{dx} \\ = \int_{p_i}^{\infty} g(x,x) \, \mathrm{dx} + \mathcal{O}(\sqrt{\epsilon} \log{\epsilon}). \end{multline*} This concludes the proof of the lemma. \end{prf} \section{Theorem \ref{PNG}} \subsection{Multi-layer discrete PNG} Before we give the proof of Theorem \ref{PNG} we must present some preliminary results. How does one get a handle on the process $h$ described in the introduction?
In \cite{Jo2} it is shown that $h$ can be embedded as the top curve in a multi-layer process given by a family of non-intersecting paths $\{ h_i, 0 \leq i < N \}$, $h=h_0$. It turns out, see \cite{Jo2}, that this multi-layer process is an example of a discrete determinantal process. \begin{theorem}[Johansson] Let $u,v \in \mathbb{Z}$ be such that $|u|,|v| < N$ and let $q = \alpha^2$. Set \begin{equation*} G(z,w) = (1 - \alpha)^{2(v-u)} \frac{(1-\alpha/z)^{N+u} (1- \alpha w)^{N-v}} {(1 - \alpha z)^{N-u} (1- \alpha / w)^{N+v}} \end{equation*} and \begin{equation*} \widetilde{K}_N (2u,x;2v,y) = \frac{1}{(2 \pi i)^2} \int_{\gamma_{r_2}} \frac{\mathrm{dz}}{z} \int_{\gamma_{r_1}} \frac{\mathrm{dw}}{w} \frac{z}{z-w} G(z,w) \end{equation*} where $\gamma_r$ is the circle with radius $r$ centered around the origin, $\alpha < r_1 < r_2 < 1/ \alpha$ and $x,y \in \mathbb{Z}$. Furthermore, define \begin{equation*} \phi_{2u,2v}(x,y) = \frac{1}{2 \pi} \int_{-\pi}^{\pi} e^{i(y-x) \theta} G \left( e^{i \theta}, e^{i \theta}\right) d \theta \end{equation*} for $u<v$ and $\phi_{2u,2v}(x,y) = 0$ for $u \geq v$. Set \begin{equation*} K_N (2u,x;2v,y) = \widetilde{K}_N(2u,x;2v,y) - \phi_{2u,2v}(x,y). \end{equation*} Then, \begin{multline*} \mathbb{P} \big[ (2u,x_j^{2u}) \in \left\{ (2t,h_i(2t,2N-1)); 0 \leq i < N , \, |t|<N \right\}, \\ \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad |u| <N,1 \leq j \leq k_u \big] \\ = \mathrm{det} \left( K_N(2u,x_i^{2u};2v,x_j^{2v}) \right)_{|u|,|v|<N,1\leq i \leq k_u,1\leq j \leq k_v} \end{multline*} for any $x_j^{2u} \in \mathbb{Z}$ and any $k_u \in \{ 0,\ldots , N \}$. \end{theorem} The asymptotic information about the kernel $K_N$ needed to prove Theorem \ref{PNG} is contained in two lemmas. The first can be extracted from chapter four in \cite{Jo2} and the proof of the second is provided at the end of this section. Please note that we make a slight redefinition of the function $\phi$ from the last section. 
However, for the purposes of this text it plays the same role. \begin{lemma}\label{Klemmat} Let $\tau, \tau'$ be any real numbers such that \begin{eqnarray*} & & u = \frac{1 + \alpha}{1 - \alpha} d^{-1} N^{2/3} \tau \in \mathbb{Z}_+ \\ & & v = \frac{1 + \alpha}{1 - \alpha} d^{-1} N^{2/3} \tau' \in \mathbb{Z}_+. \end{eqnarray*} Let $x,y \in \mathbb{Z}_+$ and define $x',y'$ by \begin{eqnarray*} & & x = 2 \alpha (1 - \alpha)^{-1} N + (x' - \tau^2) d N^{1/3} \\ & & y = 2 \alpha (1 - \alpha)^{-1} N + (y' - \tau'^2) d N^{1/3} . \end{eqnarray*} For any $L \in \mathbb{R}$ there exist positive constants $c$ and $C$ such that \begin{equation*} | \widetilde{K}_N(2u,x;2v,y) | \leq C N^{-1/3} e^{- c(x'+y')} \end{equation*} if $x',y' \geq L$. If $|x'|,|y'| \leq \log{N}$, then there exists $c > 0$ such that \begin{equation*} d N^{1/3} \widetilde{K}_N(2u,x;2v,y) = e^{\frac{\tau^3 - \tau'^3}{3} + y' \tau' - x' \tau} \widetilde{A}(\tau,x';\tau',y') + \mathcal{O}(N^{-c}). \end{equation*} \end{lemma} \begin{lemma}\label{philemmat} Let $x,y \in \mathbb{Z}_+$ and define $x',y'$ by \begin{eqnarray*} & & x = 2 \alpha (1 - \alpha)^{-1} N + x' d N^{1/3} \\ & & y = 2 \alpha (1 - \alpha)^{-1} N + y' d N^{1/3} . \end{eqnarray*} Take $s > 0$, let $u \sim N^{2/3}$ and define $v$ by \begin{equation*} v = u + \frac{1 + \alpha}{1 - \alpha} d^{-1} s N^{\gamma} \end{equation*} where $0 < \gamma < \frac{2}{3}$. There exists a constant $C > 0$ such that \begin{equation*} \phi_{2u,2v} (x,y) = \frac{1}{d N^{1/3}} \phi(x',y') + \phi_{E}(x',y') \end{equation*} where \begin{equation*} \phi(x',y') = \frac{1}{\sqrt{4 \pi s N^{\gamma - 2/3}}} e^{-\frac{(x'-y')^2}{4 s N^{\gamma - 2/3}}} \end{equation*} and \begin{equation*} |\phi_{E}(x',y')| \leq \Bigg\{ \begin{array}{l} C N^{-\frac{3 \gamma}{2}} \\ \frac{C}{N^{1/3} | x'-y'| N^{\gamma}} \end{array} \end{equation*} for all $x,y$.
\end{lemma} \subsection{Proof of Theorem \ref{PNG}} This proof is really a discrete analog of the proof of Theorem \ref{theorem1}. Unfortunately things are more involved in this case where $N^{\gamma - 2/3}$ plays the role of $\epsilon$. Please recall that $J_1 = \mu N + \psi d N^{1/3}$ where $\mu = 2 \alpha (1 - \alpha)^{-1}$ and $q = \alpha^2$. Set $J_i = J_{i-1} + y_i d N^{\gamma / 2} \in \mathbb{Z}$, $i = 2, \ldots ,m$, and \begin{equation*} \widetilde{I}_i = \left\{ z \in \mathbb{Z} | z > J_i \right\}. \end{equation*} Here the $y_i$'s are arbitrary numbers such that $J_i \in \mathbb{Z}$. For later convenience we also define $\psi_i$, $i = 1, \ldots ,m$, by $J_i = \mu N + \psi_i d N^{1/3}$. We will prove that \begin{multline*} \mathbb{P} \left[ \# J_2 = \ldots = \# J_m =1, \# \widetilde{I}_2 = \ldots = \# \widetilde{I}_m = 0 | \# J_1 =1, \# \widetilde{I}_1 = 0 \right] \\ = \phi_{2K_1,2K_2}(J_1,J_2) \cdots \phi_{2K_{m-1},2K_m}(J_{m-1},J_m) \\ + \mathcal{O} \left( ( N^{- \gamma /2} )^{m-1} N^{-c} \right). \end{multline*} This implies Theorem \ref{PNG}: \begin{multline*} \phi_{2K_1,2K_2}(J_1,J_2) \cdots \phi_{2K_{m-1},2K_m}(J_{m-1},J_m) \\ = \frac{1}{\sqrt{4 \pi s_2}} e^{-\frac{y_2^2}{4s_2}} \cdots \frac{1}{\sqrt{4 \pi s_m}} e^{-\frac{y_m^2}{4s_m}} \frac{1}{(d N^{\gamma / 2})^{m-1}} \left( 1 + \mathcal{O} \left( N^{-c} \right) \right) \end{multline*} by Lemma \ref{philemmat}. The sum of this function over the sets $A_i$ is a Riemann sum that is well approximated by the integral in Theorem \ref{PNG}. Define the finite integer intervals $I_i$, $1 \leq i \leq m$, by \begin{equation*} I_i = \left\{ z \in \mathbb{Z} ; J_i < z < \lfloor \mu N \rfloor + N \right\}. 
\end{equation*} The probability of finding a particle in $\widetilde{I}_i$ but outside of $I_i$ is very small: \begin{align*} \mathbb{P} & \left[ \# ( \widetilde{I}_i \setminus I_i) \geq 1 \right] \leq \sum_{x \in \widetilde{I}_i \setminus I_i} \mathbb{P} [\# x = 1] = \sum_{x \in \widetilde{I}_i \setminus I_i} K(x,x) \\ & = \sum_{k=0}^{\infty} K \Bigg(\lfloor \mu N \rfloor + \left( \frac{1}{d}N^{2/3} + \frac{k}{dN^{1/3}} \right)dN^{1/3}, \\ & \qquad \qquad \qquad \qquad \qquad \qquad \qquad \lfloor \mu N \rfloor + \left( \frac{1}{d}N^{2/3} + \frac{k}{dN^{1/3}} \right)dN^{1/3} \Bigg) \\ & \qquad \qquad \qquad \qquad \leq C \textrm{e} ^{-\frac{1}{d} N^{2/3}} \sum_{k=0}^{\infty} e^{- \frac{k}{d N^{1/3}}} = \mathcal{O} \left( e^{-c N^{2/3}} \right). \end{align*} This means that we can work with $I_i$ instead of $\widetilde{I}_i$. We now proceed much like we did in the proof of Theorem \ref{theorem1}. If we set \begin{equation*} A = \left\{ \# J_1 = 1, \ldots, \# J_m = 1 \right\}, \end{equation*} then \begin{multline*} \mathbb{P} \left[ A , \# I_1 = \ldots = \# I_m = 0 \right] + \mathbb{P} [ A , \# I_1 = 0, (\# I_2 = \ldots = \# I_m = 0)^c] \\ = \mathbb{P} [ A , \# I_1 = 0] \end{multline*} where \begin{multline*} \mathbb{P} [ A , \# I_1 = 0, (\# I_2 = \ldots = \# I_m = 0)^c] \\ = \mathbb{P} \left[ A , \# I_1 = 0, \cup_{i=2}^m \{ \# I_i \neq 0 \} \right] \\ \leq \sum_{i=2}^{m} \mathbb{P} [ A , \# I_1 = 0, \# I_i \neq 0 ] \leq \sum_{i=2}^{m} \mathbb{P} [ A , \# I_i \neq \# I_1 ], \end{multline*} and \begin{align*} \mathbb{P} [ A , \# I_i \neq \# I_1 ] & = \mathbb{E} [\chi_{\{ \# J_1 = 1 \}} \cdots \chi_{\{ \# J_m = 1 \}} \cdot \chi_{\{ \# I_1 \neq \# I_i \}} ] \\ & = \mathbb{E} [ \# J_1 \cdots \# J_m \cdot \chi_{\{ \# I_1 \neq \# I_i \}} ] \\ & \leq \mathbb{E} [ \# J_1 \cdots \# J_m (\# I_1 - \# I_i )^2 ]. \end{align*} The second equality holds since the probability of finding two particles at the same place is zero. We need to prove three things: \begin{eqnarray*} & & 1. 
\quad \mathbb{P} [A, \# I_1 = 0] \\ & & \qquad \quad = \phi_{2K_1,2K_2}(J_1,J_2) \cdots \phi_{2K_{m-1},2K_m}(J_{m-1},J_m) \\ & & \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \times \mathbb{P}[ \# J_1 =1, \# I_1 = 0] \\ & & \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad + \mathcal{O} \left( N^{-1/3-c} (N^{- \gamma / 2})^{m-1}\right) \\ & & 2. \quad \mathbb{E} [ \# J_1 \cdots \# J_m \, (\# I_1 - \# I_i )^2 ] = \mathcal{O} \left( N^{-1/3-c} (N^{- \gamma / 2})^{m-1}\right) \\ & & 3. \quad \mathbb{P}[ \# J_1 =1, \# I_1 = 0] \geq C N^{-1/3} \end{eqnarray*} Before giving the proofs we need some preliminaries. When summing a function $f(x)$ over, say, $I_1$ we can write \begin{equation*} \sum_{x \in I_1}f(x) = \sum_{l=1}^{T_1} f\left(\mu N + \left( \psi_1 + \frac{l}{d N^{1/3}} \right) d N^{1/3} \right) \end{equation*} where $T_1 \sim N$. The next lemma will be frequently used later on. \begin{lemma}\label{phi_sum} There exist constants $C_1,C_2 > 0$ such that \begin{equation*} \sum_{k=1}^{\infty} \phi \left( k / N^{1/3}, x \right) N^{-1/3} \leq C_1 \end{equation*} and \begin{equation*} \sum_{k=1}^{N^2} \phi_E \left( k / N^{1/3}, x \right) \leq C_2 N^{-\gamma / 2} \end{equation*} for any $x \in \mathbb{R}$.
\end{lemma} \begin{proof} \begin{multline*} \sum_{k=1}^{\infty} \phi \left( k / N^{1/3}, x \right) N^{-1/3} = \sum_{k=1}^{\infty} \phi \left( \frac{k - x N^{1/3}}{N^{1/3}}, 0 \right) N^{-1/3} \\ \leq \sum_{k= - \infty}^{\infty} \phi \left( \frac{k - x N^{1/3}}{N^{1/3}}, 0 \right) N^{-1/3} = \left[ f := x N^{1/3} - \lfloor x N^{1/3} \rfloor \right] \\ = \sum_{k= - \infty}^{\infty} \phi \left( \frac{k - f}{N^{1/3}}, 0 \right) N^{-1/3} \\ \leq \sum_{k= - \infty}^{0} \phi \left( \frac{k}{N^{1/3}}, 0 \right) N^{-1/3} + \phi \left( \frac{1 - f}{N^{1/3}}, 0 \right) N^{-1/3} + \sum_{k=2}^{\infty} \phi \left( \frac{k - 1}{N^{1/3}}, 0 \right) N^{-1/3} \\ \leq 2 \sum_{k=1}^{\infty} \phi \left( \frac{k}{N^{1/3}}, 0 \right) N^{-1/3} + 2 \leq C. \end{multline*} \begin{align*} \sum_{k=1}^{N^2} & \phi_E \left( k/N^{1/3},x \right) \leq C N^{-\gamma}\sum_{k=1}^{x N^{1/3} - N^{\gamma}} \frac{1}{x N^{1/3} - k} \\ & \qquad \qquad \qquad \qquad + C \sum_{k = x N^{1/3} - N^{\gamma}}^{x N^{1/3} + N^{\gamma}} N^{-3 \gamma/2} + C N^{-\gamma}\sum_{k = x N^{1/3} + N^{\gamma}}^{N^2} \frac{1}{k - x N^{1/3}} \\ & \qquad \qquad \leq C N^{-\gamma} \log{N} + C N^{-\gamma / 2} + C N^{-\gamma} \log{N} \leq C N^{- \gamma / 2}. \end{align*} \end{proof} We now turn to the proof of 1. As in the proof of Theorem \ref{theorem1} we get \begin{equation} \label{probeq} \mathbb{P} [A, \# I_1 = 0] = \sum_{k=0}^{\infty} \frac{(-1)^k}{k!} \mathbb{E} \left[ \# J_1 \cdots \# J_m \, \# I_1^{[k]} \right] . \end{equation} For $0 \leq r \leq m-1$ set \begin{multline*} D_r(k) \\ = \phi_{2K_1,2K_2}(J_1,J_2) \phi_{2K_2,2K_3}(J_2,J_3) \cdots \phi_{2K_r,2K_{r+1}}(J_r,J_{r+1}) \sum_{x_i \in I_1, 1 \leq i \leq k} \\ \times \left| \begin{array}{ccccc} K(J_{r+1},J_1) & K(J_{r+1},J_{r+2}) & \ldots & K(J_{r+1},J_{m}) & K(J_{r+1},x_j) \\ \vdots & \vdots & \, & \vdots & \vdots \\ K(J_{m},J_1) & K(J_{m},J_{r+2}) & \ldots & K(J_{m},J_{m}) & K(J_{m},x_j) \\ K(x_i,J_1) & K(x_i,J_{r+2}) & \ldots & K(x_i,J_{m}) & K(x_i,x_j) \end{array} \right| .
\end{multline*} The indices $i,j$ run from $1$ to $k$ and if $r=0$ the (empty) product of $\phi$-functions is to be interpreted as $1$. Let $\widetilde{D}_r(k)$ be like $D_r(k)$ but having $\widetilde{K}(J_{r+1},J_{r+2})$ in position (1,2) in the matrix. We want to show that \begin{equation*} |D_0(k) - D_r(k)| \leq N^{-1/3-c} \left( N^{- \gamma/2} \right)^{m-1} (Ck)^{\frac{k+m}{2}} \end{equation*} which, by the induction argument in the proof of Theorem \ref{theorem1}, follows if we can prove that \begin{equation} \label{pr_2_Dtilde} |\widetilde{D}_r(k)| \leq N^{-1/3-c} \left( N^{- \gamma/2} \right)^{m-1} (Ck)^{\frac{k+m}{2}} . \end{equation} To show this we shall use Hadamard's inequality and therefore need to estimate sums of column elements squared (cf. the proof of Theorem \ref{theorem1}). Lemmas \ref{Klemmat}, \ref{philemmat} and \ref{phi_sum} will be frequently used below. Column 1: \begin{equation*} \sum_{i=r+1}^m K^2(J_i,J_1) + \sum_{i=1}^k K^2(x_i,J_1) \leq C N^{-2/3}(m+k) \end{equation*} \indent Column 2: \begin{equation*} \widetilde{K}^2(J_{r+1},J_{r+2}) + \sum_{i=r+2}^m K^2(J_i,J_{r+2}) \leq C N^{-2/3} m \end{equation*} and \begin{multline*} \sum_{i=1}^k K^2(x_i,J_{r+2}) \\ \leq C N^{-2/3} \sum_{i=1}^k \Big[ 1 + \phi \left( l_i/dN^{1/3},\psi_1 - \psi_{r+2} \right) \\ + N^{1/3} \phi_E \left(l_i/dN^{1/3},\psi_1 - \psi_{r+2} \right) \Big]^2.
\end{multline*} Columns $3, \ldots , m-r$ ($r+3 \leq j \leq m$), if they exist: \begin{equation*} \sum_{i=r+1}^m K^2(J_i,J_j) + \sum_{i=1}^k K^2(x_i, J_j) \leq C N^{-\gamma} (k+m) \end{equation*} Last $k$ columns ($1 \leq j \leq k$): \begin{equation*} \sum_{i=r+1}^m K^2(J_i,x_j) + \sum_{i=1}^k K^2(x_i, x_j) \leq C (k+m) N^{-2/3} e^{- c l_j N^{-1/3}} \end{equation*} Using Hadamard's inequality we get after some manipulations that \begin{multline*} |\widetilde{D}_r(k)| \\ \leq \sum_{l_1, \ldots ,l_k=1}^{T_1} N^{-2/3} \left(N^{-\gamma / 2}\right)^{m-2}(Ck)^{\frac{k+m}{2}} \left( N^{-1/3}\right)^k \prod_{i=1}^k e^{- c l_i N^{-1/3}} \\ \times \sum_{i=1}^k \Big[ 1 + \phi \left(l_i/dN^{1/3},\psi_1 - \psi_{r+2} \right) \\ + N^{1/3} \left|\phi_E \left(l_i/dN^{1/3},\psi_1 - \psi_{r+2} \right) \right| \Big] . \end{multline*} It follows from Lemma \ref{phi_sum} that \begin{equation*} \sum_{l_i=1}^{T_1} e^{-c l_i N^{-1/3}} \phi \left(l_i/dN^{1/3},\psi_1 - \psi_{r+2} \right) N^{-1/3} \leq C \end{equation*} and also that \begin{equation*} \sum_{l_i=1}^{T_1} e^{-c l_i N^{-1/3}} \left| \phi_E \left(l_i/dN^{1/3},\psi_1 - \psi_{r+2} \right) \right| \leq C N^{- \gamma / 2} \end{equation*} From this we get (\ref{pr_2_Dtilde}). To get 1 we also need to show that \begin{multline}\label{det_sum} \sum_{x_i \in I_1, 1 \leq i \leq k} \mathrm{det} \left[ \begin{array}{cc} K(J_m,J_1) & K(J_m,x_j) \\ K(x_i,J_1) & K(x_i,x_j) \end{array} \right]_{1 \leq i,j \leq k} \\ = \sum_{x_i \in I_1, 1 \leq i \leq k} \mathrm{det} \left[ \begin{array}{cc} K(J_1,J_1) & K(J_1,x_j) \\ K(x_i,J_1) & K(x_i,x_j) \end{array} \right]_{1 \leq i,j \leq k} \\ + N^{-1/3 - c} \mathcal{O} \left( (Ck)^{\frac{k+m}{2}} \right). \end{multline} Write \begin{equation*} x_i = \mu N + \left( \psi_1 + \frac{l_i}{d N^{1/3}} \right)d N^{1/3} \end{equation*} and consider first the case $1 \leq l_i \leq N^{1/3} \log{N}$. 
From Lemmas \ref{K_approx} and \ref{Klemmat} it is straightforward to deduce that if $z=x_i$ or $z=J_1$ then \begin{equation*} K(J_m,z) = K(J_1,z) + \mathcal{O} (N^{-1/3-c} ). \end{equation*} We now expand the determinant in the sum to the left in (\ref{det_sum}). \begin{multline*} \mathrm{det} \left[ \begin{array}{cc} K(J_m,J_1) & K(J_m,x_j) \\ K(x_i,J_1) & K(x_i,x_j) \end{array} \right]_{1 \leq i,j \leq k} \\ = \mathrm{det} \left[ \begin{array}{cc} K(J_1,J_1) & K(J_1,x_j) \\ K(x_i,J_1) & K(x_i,x_j) \end{array} \right]_{1 \leq i,j \leq k} \\ + \mathcal{O} (N^{-1/3-c} ) \sum_{p=1}^k \mathrm{det} \left[ K(x_i,J_1) \quad K(x_i,x_j) \right]_{1 \leq i,j \leq k, j \neq p} \\ + \mathcal{O} ( N^{-1/3-c} )\, \mathrm{det} \left[ K(x_i,x_j) \right]_{1 \leq i,j \leq k} \end{multline*} We now use Hadamard's inequality to get \begin{multline*} \sum_{l_i=1}^{N^{1/3} \log{N}} |\mathrm{det} \left[ K(x_i,J_1) \quad K(x_i,x_j) \right]_{1 \leq i,j \leq k, j \neq p}| \\ \leq \sum_{l_i=1}^{N^{1/3} \log{N}} (C k N^{-2/3} )^{k/2} e^{-N^{-1/3}(l_1 + \ldots + l_{p-1} + l_{p+1} + \ldots + l_k)} \\ \leq \left( C k \right)^{k/2} \log{N} \end{multline*} and \begin{multline*} \sum_{l_i=1}^{N^{1/3} \log{N}} |\mathrm{det} \left[ K(x_i,x_j) \right]_{1 \leq i,j \leq k}| \\ \leq \sum_{l_i=1}^{N^{1/3} \log{N}} \left(C k N^{-2/3} \right)^{k/2} e^{-N^{-1/3}(l_1 + \ldots + l_k)} \leq \left( C k \right)^{k/2}. \end{multline*} This takes care of the summation over $1 \leq l_i \leq N^{1/3} \log{N}$, $1 \leq i \leq k$. By using Hadamard's inequality once more one readily shows that the contribution coming from the remaining terms in the sums in (\ref{det_sum}) is small enough to make (\ref{det_sum}) hold. We now prove 2. Note that \begin{equation*} (\# I_1 - \#I_i)^2 = \# I_i^{[2]} + \# I_1^{[2]} + \# I_i + \# I_1 - 2 \, \# I_i \, \# I_1.
\end{equation*} By arguing as in the proof of 1 above we obtain \begin{multline*} \mathbb{E} \left[ \# J_1 \cdots \# J_m \# I_u^{[k]} \right] \\ = \phi_{2K_1,2K_2}(J_1,J_2) \cdots \phi_{2K_{m-1},2K_m}(J_{m-1},J_m) \\ \qquad \qquad \qquad \qquad \times \sum_{x_1, \ldots , x_k \in I_u} \mathrm{det} \left[ \begin{array}{cc} K(J_m,J_1) & K(J_m,x_s) \\ K(x_r,J_1) & K(x_r,x_s) \end{array} \right]_{1 \leq r,s \leq k} \\ + \mathcal{O} \left( N^{-1/3-c - \frac{\gamma(m-1)}{2}} \right) \end{multline*} where $u=1,i$ and $k=1,2$. One also gets \begin{multline*} \mathbb{E} [ \# J_1 \cdots \# J_m \# I_1 \# I_i] \\ = \phi_{2K_1,2K_2}(J_1,J_2) \cdots \phi_{2K_{m-1},2K_m}(J_{m-1},J_m) \\ \times \sum_{x \in I_1, y \in I_i} \mathrm{det} \left[ \begin{array}{ccc} K(J_m,J_1) & K(J_m,x) & K(J_m,y) \\ K(x,J_1) & K(x,x) & K(x,y) \\ K(y,J_1) & K(y,x) & K(y,y) \end{array} \right] \\ + \mathcal{O} \left( N^{-1/3-c - \frac{\gamma(m-1)}{2}} \right). \end{multline*} We omit the details. Using Lemma \ref{Klemmat} and Lemma \ref{K_approx} one readily gets \begin{multline*} \sum_{x_1, \ldots , x_k \in I_i} \mathrm{det} \left[ \begin{array}{cc} K(J_m,J_1) & K(J_m,x_s) \\ K(x_r,J_1) & K(x_r,x_s) \end{array} \right]_{1 \leq r,s \leq k} \\ = \mathbb{E} \left[ \# J_1 \# I_1^{[k]} \right] + \mathcal{O} (N^{-1/3-c} ) \end{multline*} for $k=1,2$ and \begin{multline*} \sum_{x \in I_1,y \in I_i} \mathrm{det} \left[ \begin{array}{ccc} K(J_m,J_1) & K(J_m,x) & K(J_m,y) \\ K(x,J_1) & K(x,x) & K(x,y) \\ K(y,J_1) & K(y,x) & K(y,y) \end{array} \right] \\ = \mathbb{E} \left[ \# J_1 \# I_1^{[2]} \right] + \mathcal{O} ( N^{-1/3-c} ). \end{multline*} We now see that 2 follows if \begin{multline} \label{phi_sum2} \sum_{x \in I_1,y \in I_i} \phi_{2K_1,2K_i}(x,y) \mathrm{det} \left[ \begin{array}{cc} K(J_m,J_1) & K(J_m,x) \\ K(y,J_1) & K(y,x) \end{array} \right] \\ = \mathbb{E} [ \# J_1 \# I_1] + \mathcal{O} (N^{-1/3-c} ).
\end{multline} We shall prove this by showing that both sides are well approximated by integrals. On the integral containing the function $\phi$ we can then apply Lemma \ref{approx_delta}. By using Lemma \ref{phi_sum} we get rid of the error term associated with $\phi_E$: \begin{multline*} \sum_{l_1,l_2=1}^N \phi_E \left(\psi_1 + \frac{l_1}{d N^{1/3}}, \psi_i + \frac{l_2}{d N^{1/3}} \right) e^{-\frac{l_1+l_2}{N^{1/3}}} N^{-2/3} \\ \leq C \sum_{l_2=1}^N e^{-\frac{l_2}{N^{1/3}}} N^{-2/3-\gamma / 2} \leq C N^{-1/3 - \gamma / 2} \end{multline*} The following calculation, again using Lemma \ref{phi_sum}, shows that the main contribution to the sums in (\ref{phi_sum2}) comes from summing over $1 \leq l_1,l_2 \leq N^{1/3} \log{N}$. \begin{multline*} \sum_{l_1=N^{1/3}\log{N}}^N \sum_{l_2=1}^N \phi \left( l_2/dN^{1/3}, \psi_1 - \psi_i + l_2/dN^{1/3} \right) e^{-\frac{l_1+l_2}{N^{1/3}}} N^{-2/3} \\ \leq \sum_{l_1=N^{1/3}\log{N}}^N C e^{-l_1/N^{1/3}} N^{-1/3} \leq CN^{-1} \end{multline*} We shall use Euler's summation formula for two variables: \begin{lemma} Let $f(x,y)$ be a function of two variables such that its partial derivatives up to second order are continuous in the rectangle \begin{equation*} \{ (x,y) |a \leq x \leq b, c \leq y \leq d \} \end{equation*} where $a,b,c,d$ are integers. 
Then \begin{align*} \sum_{a \leq m \leq b} \sum_{c \leq n \leq d} f(m,n) = & \int_a^b \int_c^d f(x,y) \, \mathrm{dx} \mathrm{dy} \\ & + \int_a^b \int_c^d f_x(x,y)(x - \lfloor x \rfloor) \, \mathrm{dx} \mathrm{dy} \\ & + \int_a^b \int_c^d f_y(x,y)(y - \lfloor y \rfloor) \, \mathrm{dx} \mathrm{dy} \\ & + \int_a^b \int_c^d f_{xy}(x,y)(x - \lfloor x \rfloor) (y - \lfloor y \rfloor)\, \mathrm{dx} \mathrm{dy} \end{align*} \end{lemma} The case that we are interested in is when \begin{equation*} f(x,y) = \phi \left( \psi_1 + \frac{x}{d N^{1/3}},\psi_i + \frac{y}{d N^{1/3}} \right) g \left(x/d N^{1/3},y/d N^{1/3} \right) N^{-1} \end{equation*} where \begin{equation*} |g_x(x,y)|,|g_y(x,y)|, |g_{xy}(x,y)| \leq C e^{-c(x+y)} . \end{equation*} We need to show that the integrals involving the absolute values of $f_x(x,y)$, $f_y(x,y)$ and $f_{xy}(x,y)$ are negligible. We only present the details for $|f_x(x,y)|$ here; the other terms are treated similarly. \begin{multline*} \int_1^{N^{1/3} \log{N}} \int_1^{N^{1/3} \log{N}} |f_x(x,y)| \, \mathrm{dx} \mathrm{dy} \\ \leq (d N^{1/3})^2 \int_{\psi_1}^{\infty} \int_{\psi_i}^{\infty} \left|f_x \left((x-\psi_1) d N^{1/3},(y-\psi_i) d N^{1/3} \right) \right| \, \mathrm{dx} \mathrm{dy} \\ \leq C N^{-2/3} \int_{\psi_1}^{\infty} \int_{\psi_i}^{\infty} \left( |\phi_x(x,y)| + \phi(x,y) \right) e^{-c(x+y)} \, \mathrm{dx} \mathrm{dy} \end{multline*} By Lemma \ref{approx_delta} \begin{equation*} \int_{\psi_1}^{\infty} \int_{\psi_i}^{\infty} \phi(x,y) e^{-c(x+y)} \, \mathrm{dx} \mathrm{dy} \leq C. \end{equation*} The remaining term demands some analysis.
\begin{align*} \int_{\psi_1}^{\infty} & \int_{\psi_i}^{\infty} |\phi_x(x,y)| e^{-c(x+y)} \, \mathrm{dx} \mathrm{dy} \\ & = \int_{\psi_1}^{\infty} \int_{\psi_i}^{\infty} \frac{|x-y|}{2 N^{\gamma - 2/3}} \frac{1}{\sqrt{4 \pi N^{\gamma - 2/3}}} e^{-\frac{(x-y)^2}{4 N^{\gamma - 2/3}}} e^{-c(x+y)} \, \mathrm{dx} \mathrm{dy} \\ & = \int_{\psi_1}^{\infty} \mathrm{dx} \left( \int_{\psi_i}^x + \int_x^{\infty} \right) \frac{|x-y|}{2 N^{\gamma - 2/3}} \frac{1}{\sqrt{4 \pi N^{\gamma - 2/3}}} e^{-\frac{(x-y)^2}{4 N^{\gamma - 2/3}}} e^{-c(x+y)} \, \mathrm{dy} \end{align*} \begin{multline*} \int_{\psi_1}^{\infty} \mathrm{dx} \int_{\psi_i}^x \frac{x-y}{2 N^{\gamma - 2/3}} \frac{1}{\sqrt{4 \pi N^{\gamma - 2/3}}} e^{-\frac{(x-y)^2}{4 N^{\gamma - 2/3}}} e^{-c(x+y)} \, \mathrm{dy} \\ = \int_{\psi_1}^{\infty} \mathrm{dx} \left( \left[ \phi(x,y) e^{-c(x+y)} \right]_{\psi_i}^{x} + c \int_{\psi_i}^x \phi(x,y) e^{-c(x+y)} \, \mathrm{dy} \right) \\ \leq C N^{1/3 - \gamma / 2} + C \leq C N^{1/3 - \gamma / 2} \end{multline*} We can do the same calculation for the remaining integral. The $|f_x(x,y)|$ integral is hence $\mathcal{O} \left(N^{-1/3 - \gamma / 2}\right)$ and the same goes for the $|f_y(x,y)|$ and $|f_{xy}(x,y)|$ integrals. Set \begin{equation*} A^{\tau_1}(x,y) = A \left(\tau_1,x + \tau_1^2;\tau_1, y + \tau_1^2 \right). 
\end{equation*} Applying the above calculations to the left hand side of (\ref{phi_sum2}) and using Lemmas \ref{princ_lemma}, \ref{approx_delta}, \ref{K_approx} and \ref{Klemmat} we obtain \begin{multline*} \sum_{x \in I_1,y \in I_i} \phi_{2K_1,2K_i}(x,y) \mathrm{det} \left[ \begin{array}{cc} K(J_m,J_1) & K(J_m,x) \\ K(y,J_1) & K(y,x) \end{array} \right] \\ = \sum_{l_1,l_2 = 1}^{N^{1/3} \log{N}} \frac{1}{dN^{1/3}} \phi \left( \psi_1 + l_1/ dN^{1/3},\psi_i + l_2/ dN^{1/3}\right) \left( \frac{1}{dN^{1/3}} \right)^2 \\ \times \left| \begin{array}{cc} A^{\tau_1}(\psi_1, \psi_1) & e^{\tau_1 \frac{l_1}{dN^{1/3}}} A^{\tau_1}(\psi_1,\psi_1 + l_1/dN^{1/3}) \\ e^{-\tau_1 \frac{l_2}{dN^{1/3}}} A^{\tau_1}(\psi_i + l_2/dN^{1/3},\psi_1) & A^{\tau_1}(\psi_i + l_2/dN^{1/3},\psi_1 + l_1/dN^{1/3}) \end{array} \right| \\ \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad + \mathcal{O}(N^{-1/3-c}) \\= \frac{1}{d N^{1/3}} \int_{\psi_1}^{\infty} \mathrm{det} \left[ \begin{array}{cc} A^{\tau_1}(\psi_1,\psi_1) & A^{\tau_1}(\psi_1,x) \\ A^{\tau_1}(x,\psi_1) & A^{\tau_1}(x,x) \end{array} \right] \mathrm{dx} + \mathcal{O}(N^{-1/3-c}). \end{multline*} We get the same expression for the right hand side of (\ref{phi_sum2}) when applying Euler's summation formula. This concludes the proof of 2. Let $F_2(t)$ be the Tracy-Widom distribution function corresponding to the largest eigenvalue of the Gaussian Unitary Ensemble (GUE), \cite{TW}. That 3 is true follows from the fact that $F_2'(t)>0$ for all $t$, see \cite{TW}, together with the next lemma. \begin{lemma} \label{TWconv} Let $J_1$ and $I_1$ be as above. It holds that \begin{equation*} \mathbb{P} [ \# J_1 = 0, \# I_1 = 0] = \frac{1}{d N^{1/3}} F_2'(\psi_1 + \tau_1^2) + \mathcal{O}(N^{-2/3}). \end{equation*} \end{lemma} \noindent \textbf{Proof:} This will, again, be an exercise in using Hadamard's inequality.
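For reference, the version of Hadamard's inequality invoked here (and repeatedly above) bounds the determinant of a real $(k+1)\times(k+1)$ matrix by the product of the Euclidean norms of its columns:

```latex
\begin{equation*}
\left| \mathrm{det} \left( M_{ij} \right)_{0 \leq i,j \leq k} \right|
\leq \prod_{j=0}^{k} \left( \sum_{i=0}^{k} M_{ij}^2 \right)^{1/2}.
\end{equation*}
```

Applied with $M_{ij} = K(x_i,x_j)$, the column sums are controlled by Lemma \ref{Klemmat}, which is the origin of factors of the form $(C(k+1))^{\frac{k+1}{2}}$ in the estimates.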
We have the following representation for $F_2'$ (see the third equality in (\ref{twprimeq})): \begin{equation}\label{TWdensity} F_2'(t) = \sum_{k=0}^{\infty} \frac{(-1)^k}{k!} \int_{(t,\infty)^k} \mathrm{det}(A(x_i,x_j))_{0 \leq i,j \leq k} \, d^kx \end{equation} where $x_0 = t$. In three steps we will now show that \begin{equation*} d N^{1/3}\sum_{k=0}^{\infty} \frac{(-1)^k}{k!} \sum_{x_i \in I_1,1 \leq i \leq k} \mathrm{det}(K(x_i,x_j))_{0 \leq i,j \leq k} \end{equation*} where $x_0 = J_1$, is well approximated by the right hand side in (\ref{TWdensity}). By (\ref{probeq}) this will prove the lemma. In steps one and two we will use Lemma \ref{Klemmat} to insert the kernel $A$ instead of $K$. In the last step we show that we can change from summation to integration. First we show that we can sum over $x_i = \mu N + \left( \psi_1 + l_i/dN^{1/3} \right)d N^{1/3}$ where $1 \leq l_i \leq N^{1/3} \log{N}$, $1 \leq i \leq k$, instead of over $I_1$. By Hadamard's inequality and Lemma \ref{Klemmat} \begin{multline*} \left| \mathrm{det} \left( K(x_i,x_j) \right)_{0 \leq i,j \leq k} \right| \leq \left( \prod_{j=0}^k \sum_{i=0}^k K^2(x_i,x_j) \right)^{1/2} \\ \leq \left( C(k+1) N^{-2/3}\prod_{j=1}^k C(k+1) N^{-2/3} e^{-c l_j / N^{1/3}} \right)^{1/2} \\ \leq N^{-1/3} (C(k+1))^{\frac{k+1}{2}} \prod_{j=1}^k e^{-c l_j / N^{1/3}} N^{-1/3}. \end{multline*} We have that \begin{multline*} \sum_{\substack{l_i = 1 \\ 1 \leq i \leq k}}^{\infty} \prod_{j=1}^k e^{-c l_j / N^{1/3}} N^{-1/3} - \sum_{\substack{l_i = 1 \\ 1 \leq i \leq k}}^{N^{1/3} \log{N}} \prod_{j=1}^k e^{-c l_j / N^{1/3}} N^{-1/3} \\ \leq k \sum_{l_1=N^{1/3} \log{N}}^{\infty} \sum_{\substack{l_i = 1 \\ 2 \leq i \leq k}}^{\infty} \prod_{j=1}^k e^{-c l_j / N^{1/3}} N^{-1/3} \leq k \, C^{k} N^{-1}. \end{multline*} Since \begin{equation*} \sum_{k=1}^{\infty}\frac{1}{k!} N^{-1/3} (C(k+1))^{\frac{k+1}{2}}k N^{-1} \leq C N^{-4/3} \end{equation*} we see that we can indeed restrict the summation. In the second step we replace $K$ by $A$.
As before we shall use the notation $A^{\tau}(x,y) = A(x+\tau^2,y+\tau^2)$. For $1 \leq l_i \leq N^{1/3} \log{N}$ it holds by Lemma \ref{Klemmat} that \begin{multline*} \mathrm{det}(K(x_i,x_j))_{0 \leq i,j \leq k} \\ = \frac{1}{(dN^{1/3})^{k+1}} \mathrm{det} \left(A^{\tau_1}(l_i/d N^{1/3},l_j/d N^{1/3}) + \mathcal{O}(N^{-c})\right)_{0 \leq i,j \leq k} \end{multline*} where we let $l_0 = \psi_1 d N^{1/3}$. If we expand the determinant in the right hand side we get $(k+1)^2$ error terms of type \begin{equation*} \frac{N^{-c}}{(dN^{1/3})^{k+1}} \mathrm{det} \left(A^{\tau_1}(l_i/d N^{1/3},l_j/d N^{1/3}) + \mathcal{O}(N^{-c})\right)_{\substack{0 \leq i,j \leq k \\ i \neq i_0, j \neq j_0}}. \end{equation*} An application of Hadamard's inequality together with Lemma \ref{Klemmat} shows that the total error we get when changing from $K$ to $A^{\tau_1}$ is of order $N^{-1/3-c}$. We omit the details. Finally we want to go from summation to integration. To do this we shall use that \begin{multline} \label{eulmacl1} \sum_{l_i=1}^{N^{1/3} \log{N}} A^{\tau_1}(l_i /dN^{1/3},x)A^{\tau_1}(y,l_i/dN^{1/3}) \\ = dN^{1/3} \int_0^{\infty} A^{\tau_1}(z,x) A^{\tau_1}(y,z) \mathrm{dz} + \mathcal{O} \left(e^{-x-y} \right) \end{multline} and \begin{multline}\label{eulmacl2} \sum_{l_i=1}^{N^{1/3} \log{N}} A^{\tau_1}(l_i/dN^{1/3},l_i/dN^{1/3}) \\ = dN^{1/3} \int_0^{\infty}A^{\tau_1}(z,z)\mathrm{dz} + \mathcal{O}(1). \end{multline} This follows from the Euler-Maclaurin summation formula and Lemma \ref{K_approx}. We will show that \begin{multline}\label{deteq} \sum_{\substack{l_i=1 \\ 1 \leq i \leq k}}^{N^{1/3} \log{N}} \frac{1}{(d N^{1/3})^{k+1}} \mathrm{det} \left( A^{\tau_1}(l_i/d N^{1/3},l_j/d N^{1/3}) \right)_{0 \leq i,j \leq k} \\ = \frac{1}{d N^{1/3}} \int_{(0,\infty)^{k}} \mathrm{det} \left( A^{\tau_1}(y_i,y_j) \right)_{0 \leq i,j \leq k} \, d^ky \\ + \mathcal{O} \left( (Ck)^{\frac{k+5}{2}} N^{-2/3}\right) \end{multline} where $l_0 = d N^{1/3} \psi_1$ and $y_0 = \psi_1$.
This will prove the lemma since \begin{equation*} \sum_{k=1}^{\infty} \frac{1}{k!} (Ck)^{\frac{k+5}{2}} < \infty. \end{equation*} For $r = 0, \ldots ,k$ we set \begin{equation*} D_r = \frac{1}{(d N^{1/3})^{k-r+1}} \mathrm{det} \left( A^{\tau_1}(z_i,z_j) \right)_{0 \leq i,j \leq k} \end{equation*} where \begin{equation*} z_i = \left\{ \begin{array}{ll} \psi_1 & \quad i=0 \\ y_i & \quad 1 \leq i \leq r \\ l_i/dN^{1/3} & \quad r+1 \leq i \leq k \end{array} \right. . \end{equation*} Note that $D_0$ is what we sum over in (\ref{deteq}) and that $D_k$ is what we integrate over. $D_r$ should roughly be what we get after having changed summation over $l_1, \ldots ,l_r$ to integration over $y_1, \ldots ,y_r$. We can expand $D_r$ in such a way that we get $k^2$ terms of type \begin{multline*} \pm \frac{1}{(d N^{1/3})^{k-r+1}} A^{\tau_1}(z_{i_0},l_{r+1}/d N^{1/3}) A^{\tau_1}(l_{r+1}/d N^{1/3},z_{j_0}) \\ \times \mathrm{det} \left( A^{\tau_1}(z_i,z_j)\right)_{\substack{0 \leq i,j \leq k \\ i \neq r+1,i_0 \\ j \neq r+1,j_0}} \end{multline*} and one term \begin{equation*} \frac{1}{(d N^{1/3})^{k-r+1}} A^{\tau_1}(l_{r+1}/d N^{1/3},l_{r+1}/d N^{1/3}) \mathrm{det} \left( A^{\tau_1}(z_i,z_j)\right)_{\substack{0 \leq i,j \leq k \\ i,j \neq r+1}}. \end{equation*} We now apply (\ref{eulmacl1}) and (\ref{eulmacl2}) and therefore need to deal with the corresponding errors.
\begin{multline*} C (N^{-1/3})^{k-r+1} e^{-z_{i_0} - z_{j_0}} \mathrm{det} \left( A^{\tau_1}(z_i,z_j) \right)_{\substack{0 \leq i,j \leq k \\ i \neq i_0, r+1 \\ j \neq j_0, r+1}} \\ \leq C (N^{-1/3})^{k-r+1} e^{-z_{i_0} - z_{j_0}} \left( \prod_{\substack{j=0 \\ j \neq j_0, r+1}}^k C(k-1) e^{-c z_j} \right)^{1/2} \\ \leq (N^{-1/3})^{k-r+1} (C(k-1))^{\frac{k-1}{2}} \prod_{\substack{j=1 \\ j \neq r+1}}^k e^{-c z_j} \end{multline*} Since \begin{equation*} \int_{(0,\infty)^r} d^r x \sum_{\substack{l_i=1 \\ r+2 \leq i \leq k}}^{N^{1/3} \log{N}} \prod_{\substack{j=1 \\ j \neq r+1}}^k e^{-c z_j} \leq C^k (N^{1/3})^{k-(r+1)} \end{equation*} we find that the error from the $k^2$ terms of the first type is estimated by \begin{equation*} k^2 (C(k-1))^{\frac{k-1}{2}} N^{-2/3}. \end{equation*} The error coming from the remaining term can be treated in the same way. Changing from summation over $l_i$ to integration over $y_i$, $1 \leq i \leq k$, hence results in an error estimated by \begin{equation*} k \, k^2 (C(k-1))^{\frac{k-1}{2}} N^{-2/3} \leq (Ck)^{\frac{k+5}{2}} N^{-2/3} \end{equation*} as needed. \noindent \textbf{Proof of Lemma \ref{philemmat}:} By definition \begin{equation*} \phi_{2u,2v} (x,y) = \frac{(1-\alpha)^{2(v-u)}}{2 \pi} \int_{- \pi}^{\pi} e^{i(y-x) \theta + (u-v) \log{(1 + \alpha^2 - 2 \alpha \cos{\theta})}} d\theta. \end{equation*} Define \begin{equation*} g(\theta) = \log{(1 + \alpha^2 - 2 \alpha \cos{\theta})} \end{equation*} in $[-\pi,\pi]$. This function is analytic in a neighbourhood of zero and a Maclaurin expansion gives \begin{equation*} g(\theta) = \log{(1-\alpha)^2} + \frac{\alpha}{(1-\alpha)^2 } \theta^2 + c_4 \theta^4 + \mathcal{O}(\theta^6) \end{equation*} where $c_4 < 0$. It is easy to see that for any $\delta > 0$ there exists $\epsilon > 0$ such that \begin{equation*} g(\theta) \geq \log{(1-\alpha)^2} + \epsilon \end{equation*} if $|\theta| \geq \delta$.
Hence \begin{equation*} \left| \int_{|\theta| > \delta} \frac{(1-\alpha)^{2(v-u)}}{2 \pi} e^{i(y-x)\theta + (u-v)g(\theta)} d\theta \right| \leq \frac{1}{2 \pi} \int_{\delta}^{\pi} e^{(u-v) \epsilon} d \theta \sim e^{-\epsilon N^{\gamma}}. \end{equation*} We expect that the main contribution to $\phi_{2u,2v}$ will be \begin{align*} \frac{1}{2 \pi} \int_{-\delta}^{\delta} & e^{i(y-x)\theta + (u-v) \frac{\alpha}{(1-\alpha)^2} \theta^2} \, d \theta \\ & = \frac{1}{2 \pi} \int_{-\delta}^{\delta} e^{i(y'-x') d N^{1/3} \theta - s d^2 N^{\gamma} \theta^2} \, d \theta = \left[ t = \sqrt{s}d N^{1/3}\theta \right] \\ & = \frac{1}{2 \pi \sqrt{s} d N^{1/3}} \int_{-\delta \sqrt{s} d N^{1/3}}^{\delta \sqrt{s} d N^{1/3}} e^{i\frac{y'-x'}{\sqrt{s}} t - N^{\gamma- \frac{2}{3}} t^2} \, d t \\ & = \frac{1}{2 \pi \sqrt{s} d N^{1/3}} \int_{-\infty}^{\infty} e^{i\frac{y'-x'}{\sqrt{s}} t - N^{\gamma- \frac{2}{3}} t^2} \, d t + \mathcal{O}\left( e^{-N^{\gamma}} \right) \\ & = \frac{1}{d N^{1/3}} \frac{1}{\sqrt{4 \pi s N^{\gamma - 2/3}}} e^{-\frac{(x'-y')^2}{4 s N^{\gamma - 2/3}}} + \mathcal{O}\left( e^{-N^{\gamma}} \right) . \end{align*} Below we will analyze the error. For simplicity we take $s=1$. Define $h(\theta)$ by \begin{equation*} g(\theta) = \log{(1-\alpha)^2} + \frac{\alpha}{(1-\alpha)^2} \left( \theta^2 + h(\theta) \right). \end{equation*} This means that \begin{equation*} h(\theta) = \sum_{k=4}^{\infty} h_k \theta^k \end{equation*} where $h_4 < 0$. Note that $h$ is even since $g$ is and also that, for $\delta$ small enough, $h(\theta) < 0$ if $|\theta| \leq \delta$. The error becomes \begin{equation*} \textrm{Err} = \left| \int_{-\delta}^{\delta} e^{i(y'-x')d N^{1/3} \theta} F(\theta) d \theta \right| \end{equation*} where \begin{equation*} F(\theta) = e^{- d^2 N^{\gamma} \theta^2} - e^{-d^2 N^{\gamma} \theta^2 - d^2 N^{\gamma}h(\theta)} . \end{equation*} Next we integrate by parts. 
\begin{multline*} \textrm{Err} \leq \left| \left[ \frac{1}{i(y'-x')d N^{1/3}} e^{i(y'-x')d N^{1/3} \theta} F(\theta) \right]_{-\delta}^{\delta} \right| \\ \qquad \qquad + \frac{1}{|y'-x'|d N^{1/3}} \left| \int_{-\delta}^{\delta} e^{i(y'-x')d N^{1/3} \theta} F'(\theta) d \theta \right| \\ \leq \frac{3}{|y'-x'|d N^{1/3}} e^{-d^2 N^{\gamma} \delta^2} + \frac{1}{|y'-x'|d N^{1/3}} \int_{-\delta}^{\delta} \left| F'(\theta) \right| d \theta \end{multline*} The last integral will be easy to compute if we can find out where $F'(\theta)$ changes sign. \begin{equation*} F'(\theta) = 2 d^2 N^{\gamma} \theta \, e^{-d^2 N^{\gamma}(\theta^2 + h(\theta))} \left( 1 + \frac{h'(\theta)}{2 \theta} - e^{d^2 N^{\gamma}h(\theta)} \right) \end{equation*} A point in $[-\delta,\delta] \setminus \{0\}$ where $F'$ changes sign will satisfy \begin{equation*} \frac{1}{d^2 N^{\gamma}} = \frac{h(\theta)}{\log{\left[ 1 + \frac{h'(\theta)}{2 \theta} \right]}} = \frac{\theta^2}{2} + \mathcal{O}(\theta^4). \end{equation*} This shows that if $N$ is large then $F'$ has two zeros $\pm \theta_0$ in $[-\delta,\delta] \setminus \{0\}$. Moreover, $\theta_0$ is of order $N^{-\gamma / 2}$. Given this information we check which sign $F'$ has in different intervals and get \begin{align*} \int_{-\delta}^{\delta} |F'(\theta)| d \theta & = 2 \int_{0}^{\delta} |F'(\theta)| d \theta \\ & = -\int_{0}^{\theta_0} F'(\theta) d \theta + \int_{\theta_0}^{\delta} F'(\theta) d \theta \\ & = F(0) - F(\theta_0) + F(\delta) - F(\theta_0) \\ & = \mathcal{O} (N^{-\gamma}). \end{align*} This almost finishes the proof of the second inequality in the lemma. We should not forget the exponentially small error terms that appeared above. They do not have the factor $|x'-y'|^{-1}$ in front of them. However, a couple of partial integrations can be used to take care of this obstacle. The first inequality in the lemma follows from the following calculation.
\begin{multline*} \int_{0}^{\delta} |F(\theta)| d \theta = [ \theta = t N^{- \gamma / 2}] \\ = N^{-\gamma / 2} \int_{0}^{N^{\gamma / 2} \delta} e^{-d^2 t^2 - d^2 N^{\gamma} h(t N^{- \gamma / 2})} \left( 1 - e^{d^2 N^{\gamma} h(t N^{- \gamma / 2})} \right) d t \\ \leq N^{-\gamma / 2} \int_{0}^{N^{\gamma / 2} \delta} e^{- c_1 t^2} \left( 1 - e^{-c_2 N^{-\gamma} t^4} \right) d t \\ \leq N^{-\gamma / 2} \int_{1}^{N^{\gamma / 2} \delta} t e^{- c_1 t^2} \left( 1 - e^{-c_2 N^{-\gamma} t^4} \right) d t + \mathcal{O}(N^{-3\gamma / 2}) \end{multline*} We now use partial integration. \begin{multline*} \int_{1}^{N^{\gamma / 2} \delta} t e^{- c_1 t^2} \left( 1 - e^{-c_2 N^{-\gamma} t^4} \right) d t \\ = \left[ -\frac{1}{2 c_1} e^{- c_1 t^2} \left( 1 - e^{-c_2 N^{-\gamma} t^4} \right) \right]_1^{N^{\gamma / 2} \delta} \\ + \frac{2 c_2 N^{-\gamma}}{c_1} \int_{1}^{N^{\gamma / 2} \delta} e^{- c_1 t^2} t^3 e^{-c_2 N^{-\gamma} t^4} d t \\ = \mathcal{O} \left( N^{-3 \gamma / 2} \right) \end{multline*} This concludes the calculations in this section as well as in this paper.
\section{Introduction} It is generally believed that the narrowness of the ground states of the heavy quarkonia $J/\psi$ and $\Upsilon$ is due to the so-called OZI suppression\cite{OZI}. The OZI rule demands that processes with no quark lines connecting the initial and final hadron states are suppressed. At the beginning it seemed to be a phenomenological principle; however, further studies indicate that the suppression may originate from the loop suppression which can be precisely evaluated in the framework of perturbative QCD. More than 20 years ago, the OZI-suppressed radiative decays of orthoquarkonia were investigated by K\"{o}rner et al. in perturbative QCD\cite{Korner}, where reasonable approximations were adopted. Since then, the technique for calculating loop diagrams has been greatly improved and our knowledge of the wavefunctions of light mesons has been much enriched. Meanwhile more data have been accumulated and the corresponding experimental measurements have become more precise\cite{exp1,exp2}; this experimental progress indeed provides us with the possibility to test our theoretical framework, in which the perturbative and non-perturbative effects are factorized and a convolution integral over them results in the physical transition amplitude. Following their work, we have also re-calculated the rates of $J/\psi(\Upsilon)\rightarrow \gamma+\pi^0(\eta,\;\eta')$, which are respectively isospin-violated, flavor-SU(3)-violated and flavor-SU(3)-favored processes, without any approximations at the one-loop level\cite{RevisitOZI}. In fact, there may exist other mechanisms which also contribute to the concerned processes $J/\psi(\Upsilon)\to PP$ and $VP$, where $P$ and $V$ stand for pseudoscalar and vector mesons respectively\cite{Close,HadronLoop,Chang}; therefore, to fully understand such reactions, a complete calculation of the OZI-suppressed non-leptonic decay processes is obviously necessary and should be possible with our present knowledge.
Compared with the radiative decays, the theoretical evaluation of the rates of the non-leptonic decays is much more complicated. In the radiative decays, a photon is emitted as a free particle which escapes from the reaction and does not participate in the strong interaction. In the non-leptonic decays, the (at least) two daughter hadrons are tangled together by exchanging gluons; therefore one not only needs to carry out the complicated Feynman integrations of four-point and five-point loop functions (i.e. D- and E-functions), but also has to deal with more Feynman diagrams than in the radiative decays. In this work we obtain the transition amplitude by carefully calculating the loop integrations. Following the standard procedure\cite{loop}, one can reduce the 5-point loop functions into 4-point and 3-point loop functions, which are then evaluated in terms of the program "LoopTools"\cite{Dfunction,LoopTools}. Moreover, one needs to carefully handle the color factors, which are much simpler in the radiative decays. In this work, we are going to make a full calculation of the OZI-suppressed processes $J/\psi(\Upsilon)\rightarrow \pi\pi$ and $J/\psi(\Upsilon)\rightarrow\rho\pi$ at the order of leading twist. The reason to only consider $\pi^{\pm,0}$ and $\rho^{\pm,0}$ as the produced pseudoscalar and vector mesons is as follows. The processes are non-leptonic decays in which at least three hadrons are involved; to theoretically evaluate the rates, one not only needs to calculate the complicated loop integrations at the quark-gluon level, but also has to deal with the hadronic matrix elements which are fully governed by non-perturbative QCD effects. However, at present a completely reliable way to calculate the non-perturbative QCD effects is lacking, so that some phenomenological models must be invoked. In the decays $J/\psi(\Upsilon)\rightarrow\pi\pi,\rho\pi$, the produced mesons are light and can be nicely described in terms of the light-cone distribution amplitudes.
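As an illustration of this last point (the formula below is quoted for orientation only and is not used directly in our derivation), at asymptotically large scales the leading-twist distribution amplitude of the pion approaches the well-known form

```latex
\begin{equation*}
\phi^{\mathrm{as}}(u) = 6\,u\,(1-u), \qquad \int_0^1 \phi^{\mathrm{as}}(u)\, du = 1,
\end{equation*}
```

and realistic models are commonly parametrized as corrections to this shape in terms of Gegenbauer polynomials; its symmetry under $u \leftrightarrow 1-u$ reflects the approximately equal masses of the light constituents.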
Since $\pi$ and $\rho$ are composed of only $u,d$ and $\bar u,\bar d$, whose masses are approximately equal, due to the obvious symmetry the distribution functions are more symmetric and reliable, at least at the leading twist order. By contrast, for the distribution functions of $K(K^*)$, $\eta$ and $\eta'$ the mesons are composed of $u(d)$ and $s$ constituent quarks which have a large difference in mass, thus one would expect larger uncertainties in the evaluation of the hadronic matrix elements. Therefore, in this work these final states are not considered. A simple analysis indicates that $J/\psi(\Upsilon)\rightarrow \pi\pi$ is an isospin-violating process. Namely, the pions are treated as identical particles once isospin symmetry is adopted in the analysis. Concretely, by the conservation of angular momentum the two pions are in a p-wave state, whose spatial wave function is antisymmetric; since pions are identical bosons, the total wave function of the two-pion system must be symmetric, so that the isospin part must be antisymmetric, i.e. the isospin of the system should be 1, as $${1\over\sqrt 2}(|1,1\rangle|1,-1\rangle-|1,-1\rangle|1,1\rangle)\equiv {1\over \sqrt 2}(|\pi^+\rangle|\pi^-\rangle-|\pi^-\rangle|\pi^+\rangle).$$ This requires that the process $J/\psi\rightarrow \pi^0\pi^0$ is strictly forbidden. The isospin-violation effects are expressed in the mass difference of the $u$ and $d$ quarks which appears in the loop calculations, and the factor $m_u-m_d$ will be explicitly shown in the expressions derived at the quark level. Even though we only consider the leading-twist contribution of the light-cone wave functions, which is independent of the quark masses, we still keep the mass splitting which results in the isospin violation.
Turning to the processes $J/\psi(\Upsilon)\rightarrow\rho\pi$: in contrast with the $\pi\pi$ case, $\rho$ and $\pi$ are not identical particles, therefore the antisymmetry requirement which forces the two-pion system into the isospin-1 state no longer applies, so that the $\rho\pi$ system can be in the isospin-0 state, which guarantees isospin conservation for the decay process $J/\psi(\Upsilon)\rightarrow \rho^0\pi^0$. This observation seems to demand that the branching ratio of $J/\psi(\Upsilon)\to\rho\pi$ should be much larger than that of the isospin-violating process $J/\psi(\Upsilon)\rightarrow \pi\pi$, and the data indeed support this statement. Moreover, in $J/\psi(\Upsilon)\rightarrow \rho\pi$ the isospin-0 state of $\rho\pi$ is dominant, so that the branching ratios of $J/\psi(\Upsilon)\rightarrow \rho^0\pi^0$ and $J/\psi(\Upsilon)\rightarrow \rho^+\pi^-+\rho^-\pi^+$ roughly retain a relation of 1:2. However, as indicated in Refs.\cite{Brodsky,HQP}, such processes violate the hadronic helicity conservation because gluons and the photon do not carry hadronic helicities. A non-zero theoretical prediction for the rate at the order of leading twist must therefore come from a violation of the hadronic helicity conservation. It is indicated that such a violation is proportional to the light-quark mass, therefore one can expect that the directly calculated OZI-suppressed amplitude should be proportional to $m_q^2(q=u,d)$. Our calculation confirms this mechanism (see the text for details). The data\cite{PDG} tell us the opposite: the helicity-violating process $J/\psi(\Upsilon)\rightarrow \rho\pi$ has a sizable branching ratio and is almost one of the dominant modes in $J/\psi$ and $\Upsilon$ decays. The discrepancy should be explained; it has been suggested to take into account the next-to-leading twist contributions, higher Fock states and other mechanisms such as hadronic loops and glueball intermediate states.
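The 1:2 relation for $\rho\pi$ quoted above can be read off from the isospin decomposition: assuming the $\rho\pi$ system is purely in the $I=0$ state, the standard Clebsch-Gordan combination of two isovectors reads

```latex
\begin{equation*}
|I=0,I_3=0\rangle
= {1\over\sqrt 3}\left(|\rho^+\pi^-\rangle - |\rho^0\pi^0\rangle + |\rho^-\pi^+\rangle\right),
\end{equation*}
```

so each charge mode carries weight 1/3 and $\Gamma(\rho^0\pi^0):\Gamma(\rho^+\pi^-+\rho^-\pi^+)=1:2$.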
In this work, we only concern ourselves with the OZI-suppressed processes $J/\psi(\Upsilon)\rightarrow \pi\pi, \; \rho\pi$, and will explicitly demonstrate the isospin-conserving and isospin-violating effects and the helicity-violation effects in the formulation. As indicated above, to evaluate the hadronic matrix elements one has to deal with convolution integrals over the distribution amplitudes of the concerned hadrons. Because $J/\psi$ and $\Upsilon$ contain two heavy constituents, their bound-state effects can simply be expressed in terms of the wave functions at the origin, which can easily be obtained from the data on their leptonic decays. The distribution functions of the two produced light hadrons might cause uncertainties, even though, as indicated above, for the $\pi$ and $\rho$ mesons these can be reduced to a minimum. It is interesting to note that the OZI-suppressed process $J/\psi(\Upsilon)\rightarrow \rho\pi$ is forbidden by the hadronic helicity conservation at the leading twist if the quark mass is neglected in the loop calculations. However, it is not zero, and the transition amplitudes of such processes must be proportional to $m_q^2$. In this work we only consider the contribution from the leading-twist distribution amplitudes of the mesons and show that as $m_q\to 0$ the amplitudes approach zero; in other words, we confirm the statement that the hadronic helicity conservation forbids the process $J/\psi(\Upsilon)\to \rho\pi$ at the leading twist if the quark mass is neglected. Following the literature, we can trust the calculations to a relatively accurate level. A rough numerical estimate obtained by varying the input parameters and the forms of the distribution functions given in the literature shows that the error can be of the order of a few tens of percent. After this introduction, we give all the formulas where we carry out the four- and five-point Feynman integrals to obtain the hard-scattering amplitude at the quark level.
The isospin-violation factor $m_u-m_d$ explicitly shows up for $J/\psi(\Upsilon)\rightarrow \pi\pi$, and for the helicity-violating $J/\psi(\Upsilon)\rightarrow \rho\pi$ the amplitude is also proportional to the light-quark masses; one then needs to convolute the hard kernel with the initial and final states, and the convolution integration results in the physical transition amplitude in Sec. II. In Sec. III, we carefully analyze the infrared behavior in $J/\psi(\Upsilon) \to \pi\pi,\;\rho\pi$ and convince ourselves that all Feynman diagrams are infrared-safe when the end-point behaviors of the wave functions are considered. In Sec. IV, we make a numerical evaluation of the decay rates of $J/\psi(\Upsilon)\rightarrow \pi^+\pi^-$ and $\rho^\pm\pi^\mp$, and some necessary input parameters are explicitly given. The last section is devoted to a simple discussion of the uncertainties in our calculation and of possible contributions to these processes from other mechanisms, and there we draw our conclusion. Some tedious details are collected in the appendices. \section{Theoretical calculation on the rates of $J/\psi(\Upsilon) \to \pi \pi, \rho \pi$} In this work, without invoking the so-called weak-binding approximation which was adopted in the literature\cite{Korner}, we explicitly keep the masses of the heavy and light quarks in the concerned propagators when deriving the transition amplitudes. The amplitude is written as \begin{eqnarray} \mathcal{A}&=& H\otimes\Phi_{J/\psi(\Upsilon)}\otimes\Phi_{P_1}\otimes\Phi_{P_2}\nonumber \\ H&=&C\otimes \widetilde{H} \end{eqnarray} where the factors $C$, $\widetilde{H}$ and $\Phi_{J/\psi(\Upsilon),P_1,P_2}$ are the color factor, the hard kernel and the distribution amplitudes of the mesons, respectively. The labels $P_1, P_2$ denote the two produced mesons in the final state. Indeed, here the perturbative and non-perturbative parts are factorized, and a convolution integral associates them to result in the physical amplitude.
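Written out explicitly, the convolution in the equation above takes the schematic form (the momentum fractions $u$, $v$, $w$ are introduced here only for illustration; the precise arguments of the hard kernel are given in the appendices)

```latex
\begin{equation*}
\mathcal{A} = \int_0^1 du \int_0^1 dv \int_0^1 dw \;
\Phi_{J/\psi(\Upsilon)}(u)\, \Phi_{P_1}(v)\, \Phi_{P_2}(w)\;
C \otimes \widetilde{H}(u,v,w),
\end{equation*}
```

where $u$, $v$ and $w$ denote the light-cone momentum fractions carried by the quarks inside $J/\psi(\Upsilon)$, $P_1$ and $P_2$, respectively.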
The detailed expressions of the hard kernels are given in Appendix A. Below, in Fig. 1, we present the relevant Feynman diagrams. In these figures, we only explicitly draw the typical diagrams. There exist also their topologically deformed diagrams, which are obtained by exchanging the connections of the gluon lines in the loop to the light-quark-gluon vertices, namely the two gluon lines cross each other. We do not explicitly show them in Fig. 1 just for simplicity, but the contributions from those diagrams are definitely included in our derivation. \begin{figure}[!htb] \begin{center} \begin{tabular}{cc} \includegraphics[width=10cm]{OZIdiag.eps} \end{tabular} \end{center} \caption{The Feynman diagrams of $J/\psi(\Upsilon) \to P_1 P_2$. Our calculation also includes the diagrams which are topologically deformed from those shown above by exchanging the connections of the gluon lines in the loops to the gluon-light-quark vertices, where the two gluon lines cross each other.} \label{fig1} \end{figure} The amplitude of $J/\psi(\Upsilon) \to P_1 P_2$ can be divided into three categories which correspond to Fig. \ref{fig1} (1a1) and (1a2), (1b1) and (1b2), (1c1) and (1c2), respectively. To make the text succinct, we collect the detailed expressions of the amplitudes in Appendix B, where the diagrams with the gluon lines in the loops crossing each other are labelled as (2a1), (2a2), (2b1), (2b2), (2c1) and (2c2), respectively. Generally, the transition amplitude for $J/\psi(\Upsilon)\to \pi^+\pi^-, \rho^+\pi^-, \rho^-\pi^+$ can be written as: \begin{eqnarray} \mathcal{A}^{J/\psi(\Upsilon)\to \pi^+\pi^-}&=&\sum_i \mathcal{A}^{i}(P_1,P_2,m_q),\nonumber \\ \mathcal{A}^{J/\psi(\Upsilon)\to \rho^+\pi^-(\rho^-\pi^+)}&=&\sum_i (\mathcal{A}^{ia}(P_1\to \rho,P_2\to \pi,m_q)+\mathcal{A}^{ib}(P_1\to \pi,P_2\to \rho,m_q)), \end{eqnarray} where summing over $i$ means including all the diagrams listed above together with their topologically deformed counterparts described above.
For the $\pi^+\pi^-$ final state one possible assignment is that $P_1$, $P_2$ correspond to $\pi^+$, $\pi^-$ respectively; the other possibility, with $\pi^+$ and $\pi^-$ interchanged, is also included in the sum. $m_q$ is either $m_u$ or $m_d$, appearing in the respective assignments, and their contributions are summed in the final amplitude. For the $\rho^+\pi^-$ or $\rho^-\pi^+$ final states, interchanging $P_1$ and $P_2$ leads to obvious differences, so we introduce two labels "a" and "b" to distinguish the two assignments. We do not need to calculate the rate of $J/\psi(\Upsilon)\rightarrow \rho^0\pi^0$ because, as discussed above, it is an isospin-conserving process: the Clebsch-Gordan coefficients of the $I=0$ state fix the ratio $\Gamma(J/\psi(\Upsilon)\rightarrow \rho^0\pi^0)/ \Gamma(J/\psi(\Upsilon)\rightarrow \rho\pi)$ to be close to 1/3. Both the data and an analysis based on the topology of the diagrams in Fig. \ref{fig1} confirm this, even though we only consider contributions from the leading-twist distribution amplitudes of the mesons. In the quark picture, hadrons are made of valence quarks whose momentum distributions are described by appropriate distribution functions. The leading-twist distribution amplitude of $J/\psi(\Upsilon)$ is usually defined through the correlator\cite{Ball}: \begin{eqnarray} &&\langle 0|\overline{c}_{\alpha}^i(y)c_{\beta}^{j}(x)|J/\psi(p)\rangle={\delta_{ij}\over 4N_c}\int^1_0due^{i\bar{u}p\cdot y+iup\cdot x}\times\nonumber \\ &&\left\{f_{J/\psi} m_{J/\psi} \rlap /\varepsilon_{J/\psi}\phi_{J/\psi\parallel}(u)+{1\over 2}\sigma^{\mu'\nu'}if_{J/\psi}(\varepsilon_{J/\psi\mu'}p_{\nu'}-\varepsilon_{J/\psi\nu'}p_{\mu'})\phi_{J/\psi\perp}(u)\right\}_{\beta\alpha}, \end{eqnarray} where $\varepsilon_{J/\psi}$ and $f_{J/\psi}$ are the polarization vector and decay constant of $J/\psi$, respectively, and $\bar u\equiv 1-u$.
$\phi_{\parallel}$ and $\phi_{\perp}$ are the leading-twist distribution functions corresponding to longitudinally and transversely polarized mesons, respectively, as defined in the literature\cite{Ball}. For $\Upsilon$ one simply replaces the charm quark $c$ by the bottom quark $b$ (along with the corresponding parameters). The leading-twist distribution amplitude of $\pi$ is usually defined through the correlator\cite{Beneke}: \begin{eqnarray} &&\langle \pi(p')|\overline{q}_{1\alpha}^i(y)q_{2\beta}^{j}(x)|0\rangle=i{\delta_{ij}f_\pi\over 4N_c}\int^1_0due^{iup'\cdot y+i\bar{u}p'\cdot x}\left\{\rlap /p' \gamma_5 \phi(u)\right\}_{\beta\alpha}\label{pi}, \end{eqnarray} where $f_{\pi}$ is the pion decay constant. The leading-twist distribution amplitude of $\rho$ is usually defined through the correlator\cite{Ball}: \begin{eqnarray} &&\langle \rho(p')|\overline{q}_{1\alpha}^i(y)q_{2\beta}^{j}(x)|0\rangle={\delta_{ij}\over 4N_c}\int^1_0due^{iup'\cdot y+i\bar{u}p'\cdot x}\times\nonumber \\ &&\left\{f_{\rho} m_{\rho} \rlap /\varepsilon_{\rho}^\ast\phi_{\rho\parallel}(u)-{1\over 2}\sigma^{\mu'\nu'}if_{\rho}^T(\varepsilon_{\rho\mu'}^\ast p'_{\nu'}-\varepsilon_{\rho\nu'}^\ast p'_{\mu'})\phi_{\rho\perp}(u)\right\}_{\beta\alpha}\label{rho} \end{eqnarray} where $\varepsilon_{\rho}$, $f_{\rho}$ and $f_{\rho}^T$ are the polarization vector and the longitudinal and transverse decay constants of $\rho$, respectively. \section{Analysis on the infrared behaviors in $J/\psi(\Upsilon) \to P_1 P_2$} Any reasonable theoretical prediction for a physical process must be infrared safe: infrared divergences, if present, must cancel exactly or be properly dealt with by the end of the calculation of physically measurable quantities, such as decay widths and cross sections. In this work, we explicitly show that, in the framework of perturbative QCD, the infrared behavior of each individual Feynman diagram shown in Fig.
\ref{fig1} is benign, even though at first glance it seems to be divergent. There are several typical Feynman diagrams shown in Fig. \ref{fig1}. We take the amplitude of Fig. \ref{fig1} (1a2) for $J/\psi \to P_1 P_2$ as an example to analyze the infrared behavior. Its contribution to the transition amplitude reads \begin{eqnarray} &&\mathcal{A}^{1a2}(m_q,u,v)=H^{1a2}(m_q,u,v)\otimes\Phi_{P_1}(u)\otimes\Phi_{P_2}(v),\label{A1a2} \end{eqnarray} where $H^{1a2}(m_q,u,v)$ is given in Appendix A. The relevant denominator factor of the amplitude is \begin{eqnarray} {1\over k^2(k+p_4+p_6)^2[(k+p_4)^2-m_q^2][(k+p_1-p_3-p_5)^2-m_Q^2](p_3+p_5)^2[(p_1-p_3-p_5)^2-m_Q^2]}.\label{deno} \end{eqnarray} Firstly, if an infrared divergence exists in the loop integration, it must come from the kinematic region $k\to 0$ or from the end-points of the distribution functions; therefore one only needs to analyze two cases: (1) $k\to 0$, (2) the end-points. To examine the infrared behavior of the amplitude after integrating out the loop momentum in case (1), we may fix the external momenta of the quarks and antiquarks in the final states by the special choice $p_3=p_4={1\over 2}p_{_{P_1}}, p_5=p_6={1\over 2}p_{_{P_2}}$ to avoid possible end-point divergences. Looking at expression (\ref{deno}), since $m_q\neq 0$, the dangerous part is proportional to $1/k^2$, which is finite after integration over the loop momentum $d^4k$, so in this case no infrared divergence arises from the loop integration. Secondly, in case (2) the momentum of one quark (antiquark) in each of the final mesons is close to its end-point, for example $p_4,\; p_6\to 0$, while the other quark (antiquark) carries almost all the momentum of the meson. The factor ${1\over (p_3+p_5)^2}$ does not contribute a divergence. The dangerous term comes from the factor $k^2(k+p_4+p_6)^2$ in the denominator as $k\to 0$ and $p_4,\; p_6\to 0$, which seems to cause a logarithmic divergence.
However, we convolute the amplitude with the distribution functions of the two mesons, which vanish linearly ($\phi(u),\,\phi(v)\to 0$) at the end-points; since this vanishing is faster than the logarithmic divergence, i.e. $\lim_{u\to 0}u\ln u= 0$, the infrared behavior is safe. Finally, when $p_3,\; p_5\to 0$, the loop integration does not produce any divergence, for the same reason as in the first case. Note that the factor ${1\over (p_3+p_5)^2}\phi(u)\phi(v)$ is finite, but there is a subtlety: in general the limit depends somewhat on the way $u$ and $v$ approach 0. For $J/\psi(\Upsilon)\rightarrow \pi\pi$ there are two possible assignments for each diagram, related by the interchange $\pi^{\pm}\leftrightarrow \pi^{\mp}$; their contributions have opposite signs by SU(2) symmetry and cancel each other (obviously, for the finite terms, the contributions cannot cancel completely because of the SU(2) breaking, i.e. $m_u\neq m_d$). Thus the dependence on the order of limits disappears. By contrast, for $J/\psi(\Upsilon)\to \rho\pi$ there is no such cancellation, so even though no infrared divergence exists, the final numerical results depend somewhat on the order in which the limits $u\to 0$ and $v\to 0$ are taken. The strategy we adopt in this work is to fix the integration order, integrating over $u$ first and then over $v$; this can be regarded as a regularization scheme, similar in spirit to those generally adopted for ultraviolet divergences.
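Both finiteness arguments above are easy to verify numerically: in four dimensions the measure $d^4k\sim k^3\,dk$ tames the $1/k^2$ factor (the radial integrand behaves as $k$ near $k=0$), and the linear end-point vanishing of $\phi(u)=6u(1-u)$ renders the logarithmic end-point factor integrable, with $\int_0^1 6u(1-u)\ln u\,du=-5/6$. A small check (a sketch using the asymptotic distribution amplitude as a concrete example):

```python
import math

# Numerical checks of the two infrared-finiteness arguments in the text.

# (1) k -> 0: in four dimensions the measure contributes d^4k ~ k^3 dk, so the
# 1/k^2 factor leaves a radial integrand ~ k near the origin -- manifestly finite.
def radial(n=100000):
    h = 1.0 / n
    return sum(((i + 0.5) * h) ** 3 / ((i + 0.5) * h) ** 2 for i in range(n)) * h

# (2) end-points: phi(u) = 6u(1-u) vanishes linearly at u = 0, so the
# logarithmic end-point factor is integrable, since u ln u -> 0 as u -> 0.
# Analytically, int_0^1 6u(1-u) ln u du = -5/6.
def endpoint(n=100000):
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        u = (i + 0.5) * h
        total += 6.0 * u * (1.0 - u) * math.log(u)
    return total * h

print(radial())    # -> 0.5
print(endpoint())  # -> -0.8333... = -5/6
```

The midpoint rule never samples the singular point $u=0$, and the integrand itself is bounded there, so the sums converge to the finite analytic values.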
\section{Numerical results} The input parameters used in the numerical computations are \cite{RevisitOZI,PDG,Ball,data1,data2}: $f_{J/\psi} = 551$ MeV, $f_\Upsilon= 710$ MeV, $f_\pi = 131$ MeV, $f_\rho = 198$ MeV, $f_\rho^T = 160$ MeV, $m_{J/\psi} = 3096.87$ MeV, $m_\Upsilon=9460.3$ MeV, $m_{\pi^{\pm}} = 139.57$ MeV, $m_{\rho^{\pm}} = 775.5$ MeV, $\alpha_s(m_c) = 0.32$, $\alpha_s(m_b) = 0.21$, $m_c = 1300$ MeV, $m_b=4500$ MeV, together with the meson distribution amplitudes. For the numerical evaluations, in Eqs.~(\ref{pi}, \ref{rho}) we adopt three different distribution functions for the light mesons from the literature\cite{Beneke,wave1,wave2,wave3}: $\phi_1(x) = 6x(1-x)$, $\phi_2(x) = 30x^2(1-x)^2$, $\phi_3(x) = {15\over 2}(1-2x)^2[1-(1-2x)^2]$, and we let the current masses of the $u$ and $d$ quarks vary within a reasonable range. In Tables I, II, III and IV below, we present our numerical results for the decay rates of $J/\psi\rightarrow \pi^+\pi^-,\; \rho^+\pi^-+\rho^-\pi^+$ and $\Upsilon \rightarrow \pi^+\pi^-,\; \rho^+\pi^-+\rho^-\pi^+$, respectively.
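As a consistency check on these inputs, all three model distribution amplitudes are normalized to unity, $\int_0^1\phi_i(x)\,dx=1$; a short numerical verification:

```python
# Consistency check: the three light-meson distribution amplitudes adopted in
# the numerical evaluation are each normalized to unity, int_0^1 phi_i(x) dx = 1.

def phi1(x):
    return 6.0 * x * (1.0 - x)

def phi2(x):
    return 30.0 * x**2 * (1.0 - x)**2

def phi3(x):
    xi = 1.0 - 2.0 * x                  # xi = 1 - 2x
    return 7.5 * xi**2 * (1.0 - xi**2)  # 15/2 (1-2x)^2 [1-(1-2x)^2]

def norm(phi, n=100000):
    """Midpoint-rule approximation of int_0^1 phi(x) dx."""
    h = 1.0 / n
    return sum(phi((i + 0.5) * h) for i in range(n)) * h

for phi in (phi1, phi2, phi3):
    print(phi.__name__, round(norm(phi), 6))   # each prints 1.0
```

All three shapes are symmetric about $x=1/2$ and vanish at the end-points, differing mainly in how strongly they suppress the end-point regions, which is the source of the spread seen in the tables.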
\begin{table}[h] \caption{Decay widths ($\Gamma$) of $J/\psi\to \pi^+ \pi^-$ based on the three distribution functions, $\phi_1$, $\phi_2$ and $\phi_3$, respectively} \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline $m_u$(MeV) & $m_d$(MeV) & $\Gamma(\phi_1)$(MeV) & $\Gamma(\phi_2)$(MeV) & $\Gamma(\phi_3)$(MeV) & exp(MeV) \\ \hline 2 & 4 & $4.52\times 10^{-5}$ & $2.98\times 10^{-5}$ & $2.71\times 10^{-4}$ & \\ 3 & 4 & $1.88\times 10^{-5}$ & $9.35\times 10^{-6}$ & $5.67\times 10^{-5}$ & \\ 3 & 5 & $3.17\times 10^{-5}$ & $2.36\times 10^{-5}$ & $1.25\times 10^{-4}$ & $(1.37\pm 0.21)\times 10^{-5}$\\ 4 & 5 & $1.03\times 10^{-5}$ & $8.12\times 10^{-6}$ & $4.26\times 10^{-5}$ & \\ 4.5 & 6 & $2.29\times 10^{-5}$ & $1.43\times 10^{-5}$ & $8.85\times 10^{-5}$ & \\ \hline \end{tabular} \end{center} \end{table} \begin{table}[h] \caption{Decay widths ($\Gamma$) of $\Upsilon\to \pi^+ \pi^-$ based on the three distribution functions, $\phi_1$, $\phi_2$ and $\phi_3$, respectively} \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline $m_u$(MeV) & $m_d$(MeV) & $\Gamma(\phi_1)$(MeV) & $\Gamma(\phi_2)$(MeV) & $\Gamma(\phi_3)$(MeV) & exp(MeV) \\ \hline 2 & 4 & $2.79\times 10^{-6}$ & $1.24\times 10^{-6}$ & $1.13\times 10^{-5}$ & \\ 3 & 4 & $8.16\times 10^{-7}$ & $5.28\times 10^{-7}$ & $6.95\times 10^{-6}$ & \\ 3 & 5 & $1.23\times 10^{-6}$ & $9.43\times 10^{-7}$ & $9.6\times 10^{-6}$ & $<2.7\times 10^{-5}$\\ 4 & 5 & $7.39\times 10^{-7}$ & $2.72\times 10^{-7}$ & $5.11\times 10^{-6}$ & \\ 4.5 & 6 & $9.78\times 10^{-7}$ & $7.5\times 10^{-7}$ & $8.93\times 10^{-6}$ & \\ \hline \end{tabular} \end{center} \end{table} \begin{table}[h] \caption{Decay widths ($\Gamma$) of $J/\psi\to \pi^+ \rho^- + \pi^- \rho^+$ based on the three distribution functions, $\phi_1$, $\phi_2$ and $\phi_3$, respectively} \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline $m_u$(MeV) & $m_d$(MeV) & $\Gamma(\phi_1)$(MeV) & $\Gamma(\phi_2)$(MeV) & $\Gamma(\phi_3)$(MeV) & exp(MeV) \\ \hline 2 & 2 & $1.04\times 
10^{-4}$ & $7.21\times 10^{-5}$ & $5.11\times 10^{-4}$ & \\ 3 & 3 & $2.36\times 10^{-4}$ & $1.6\times 10^{-4}$ & $1.17\times 10^{-3}$ & \\ 4 & 4 & $4.12\times 10^{-4}$ & $2.9\times 10^{-4}$ & $2.08\times 10^{-3}$ & $(1.06\pm 0.08)\times 10^{-3}$\\ 5 & 5 & $6.69\times 10^{-4}$ & $4.54\times 10^{-4}$ & $3.38\times 10^{-3}$ & \\ 6 & 6 & $9.75\times 10^{-4}$ & $6.68\times 10^{-4}$ & $4.88\times 10^{-3}$ & \\ \hline \end{tabular} \end{center} \end{table} \begin{table}[h] \caption{Decay widths ($\Gamma$) of $\Upsilon\to \pi^+ \rho^- + \pi^- \rho^+$ based on the three distribution functions, $\phi_1$, $\phi_2$ and $\phi_3$, respectively} \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline $m_u$(MeV) & $m_d$(MeV) & $\Gamma(\phi_1)$(MeV) & $\Gamma(\phi_2)$(MeV) & $\Gamma(\phi_3)$(MeV) & exp(MeV) \\ \hline 2 & 2 & $2.23\times 10^{-6}$ & $1.56\times 10^{-6}$ & $8.54\times 10^{-6}$ & \\ 3 & 3 & $5.04\times 10^{-6}$ & $3.67\times 10^{-6}$ & $1.84\times 10^{-5}$ & \\ 4 & 4 & $8.95\times 10^{-6}$ & $6.25\times 10^{-6}$ & $3.4\times 10^{-5}$ & $<1.08\times 10^{-5}$\\ 5 & 5 & $1.42\times 10^{-5}$ & $9.93\times 10^{-6}$ & $5.44\times 10^{-5}$ & \\ 6 & 6 & $1.84\times 10^{-5}$ & $1.38\times 10^{-5}$ & $7.61\times 10^{-5}$ & \\ \hline \end{tabular} \end{center} \end{table} As discussed above, the OZI-suppressed process $J/\psi(\Upsilon)\to \pi^+ \pi^-$ violates isospin, whereas $J/\psi(\Upsilon)\to \pi^+ \rho^- + \pi^- \rho^+$ violates hadronic helicity conservation. It is noted that the theoretically evaluated widths for the OZI-suppressed process $J/\psi\to \pi^+ \pi^-$ are slightly larger than the experimental data, depending on the parameter choices such as $m_u$, $m_d$ and the type of meson distribution function, whereas those for $J/\psi\to \pi^+ \rho^- + \pi^- \rho^+$ are an order of magnitude smaller than the data. This may imply that other mechanisms also contribute to these decays; we remark on the results in the next section.
\section{Discussion and Conclusion} In this work, we calculate the contributions of the so-called OZI-forbidden processes to the decays $J/\psi(\Upsilon)\rightarrow \pi\pi, \rho\pi$. As discussed in the introduction, $J/\psi(\Upsilon)\rightarrow \pi\pi$ is an isospin-violating reaction, whereas $J/\psi(\Upsilon)\rightarrow \rho\pi$ is isospin-conserving; on the other hand, the former conserves hadronic helicity whereas the latter violates it. Our numerical results for $J/\psi(\Upsilon)\to \pi^+ \pi^-$ are reasonably consistent with the data at the order-of-magnitude level, but the evaluated branching ratio of $J/\psi(\Upsilon)\rightarrow \rho\pi$ is clearly smaller than the data by an order of magnitude. As shown in Tables I through IV, the results vary over a wide range as one adopts different wave functions, all of which have been suggested in the literature, and different light-quark masses. $J/\psi(\Upsilon)\rightarrow \pi\pi$ is an isospin-violating process, and at leading twist the OZI-suppressed process, which was supposed to be the main contribution to the $J/\psi(\Upsilon)\rightarrow \rho\pi$ mode, violates hadronic helicity conservation. As is well known, isospin violation can originate from photon emission (absorption) and/or the quark mass difference, and for helicity-violating processes the decay width is proportional to $m_q^2$; both processes are therefore sensitive to the light-quark masses and strongly suppressed. Our formulas explicitly show that as $m_q\to 0$ the decay widths of both modes approach zero, which confirms the above statements. In this work we only include the contributions from the leading-twist distribution amplitudes, and our results confirm that, owing to the violation of helicity conservation, the theoretically evaluated branching ratio is one order of magnitude smaller than the data.
It is also noted from our qualitative analysis that the rate of the isospin-violating process $J/\psi(\Upsilon)\to \pi\pi$ should be proportional to the squared mass difference $(m_u-m_d)^2$, whereas the rate of the hadronic helicity-violating process $J/\psi(\Upsilon)\to \rho\pi$ is proportional to $(m_u+m_d)^2$; it is therefore natural to expect that $\Gamma(J/\psi(\Upsilon)\to \rho\pi)$, as estimated in this framework, is a few times larger than $\Gamma(J/\psi(\Upsilon)\to \pi\pi)$. Our numerical results in Tables I through IV confirm this statement, and if $m_u=m_d$ the estimated $\Gamma(J/\psi(\Upsilon)\to \pi\pi)$ vanishes whereas $\Gamma(J/\psi(\Upsilon)\to \rho\pi)$ does not. But this still does not explain the largeness of the branching ratio of $J/\psi(\Upsilon)\to \rho\pi$. As indicated in Refs.\cite{Brodsky,HQP}, the large branching ratio might be due to higher-twist contributions; to evaluate the branching ratio correctly, one in principle needs to include the contributions from higher-twist distribution amplitudes. Besides the higher-twist contributions, there may also exist other mechanisms that could produce larger branching ratios for $J/\psi(\Upsilon)\rightarrow \rho\pi$. As suggested by Suzuki\cite{Suzuki} and in our earlier work\cite{HadronLoop}, there can be a contribution from hadronic loops. By fitting the data (in that work, the contributions of the OZI-forbidden processes were not calculated theoretically as done here, but were obtained by fitting the data), we reached two conclusions: if only these two mechanisms contribute, the hadronic-loop contribution has the same order of magnitude as that of the OZI-forbidden processes (including higher-twist contributions), and the two contributions are destructive.
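The quark-mass scalings discussed above can be made explicit with a toy parametrization: writing $\Gamma_{\pi\pi}\propto(m_u-m_d)^2$ and $\Gamma_{\rho\pi}\propto(m_u+m_d)^2$ with a hypothetical common prefactor $c$ (purely illustrative; only zeros and ratios are meaningful here), the $\pi\pi$ rate vanishes in the isospin limit $m_u=m_d$ while the $\rho\pi$ rate does not:

```python
# Toy illustration of the quark-mass scalings discussed in the text:
#   Gamma(pi pi)  ~ c * (m_u - m_d)^2   (isospin violation),
#   Gamma(rho pi) ~ c * (m_u + m_d)^2   (hadronic helicity violation).
# The common prefactor c is arbitrary and hypothetical; only the zeros and
# the ratios of these expressions illustrate the point, not absolute rates.

def gamma_pipi(mu, md, c=1.0):
    return c * (mu - md) ** 2

def gamma_rhopi(mu, md, c=1.0):
    return c * (mu + md) ** 2

# Isospin limit m_u = m_d: the pi pi rate vanishes, the rho pi rate does not.
print(gamma_pipi(3.0, 3.0))    # -> 0.0
print(gamma_rhopi(3.0, 3.0))   # -> 36.0

# For m_u = 3 MeV, m_d = 5 MeV the ratio is ((3+5)/(3-5))^2 = 16.
print(gamma_rhopi(3.0, 5.0) / gamma_pipi(3.0, 5.0))   # -> 16.0
```

Since $|m_u-m_d|\ll m_u+m_d$ for realistic light-quark masses, this ratio is always well above unity, consistent with the hierarchy seen in the tables.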
Of course, this may not be the end of the story: some authors have also suggested a glueball contribution to be added to those from the aforementioned mechanisms\cite{Close}. The picture then becomes more complicated, because we cannot yet reliably estimate the glueball mass and its phenomenological behavior, unless lattice results are borrowed; further theoretical developments are therefore necessary. Uncertainties in our theoretical evaluation come from the input parameters, especially the light-quark masses, and from the shapes of the distribution functions when only the leading-twist distribution amplitudes are taken into account. The shapes of the wave functions alone can cause order-of-magnitude differences, and so far we cannot rule out any of them; more accurate data are needed to discriminate among them. As indicated in the text, we only consider the processes $J/\psi(\Upsilon)\rightarrow \pi\pi,\;\rho\pi$ because, with no strange flavor involved, the wave functions of the produced mesons are simpler and more symmetric. For processes involving $K(K^*),\;\eta,\;\eta'$, the calculations become more complicated and the results would be less reliable; we therefore postpone the study of such processes to later works. So far the experimental data are not very accurate, and for the $\Upsilon$ decays in particular only upper limits have been set. However, we are encouraged by the prospects of the CLEO$_\text{c}$ and BES III collaborations, which will provide a much larger database on $J/\psi$ decays, and by the additional data to be accumulated at the B-meson factories; we will then have concrete numbers for the branching ratios of $\Upsilon\rightarrow \pi\pi,\;\rho\pi$ instead of the upper limits set by the present measurements. Moreover, LHC$_\text{b}$ and the future ILC will greatly enrich our knowledge of hadron structure. It is clear that further work is necessary.
\noindent {\bf Acknowledgments}: This work is supported by the National Natural Science Foundation of China (NNSFC).\\ \noindent{\bf Appendix A: The hard-scattering amplitudes $H^{i,\alpha\alpha'\beta\beta'\rho\rho'}(m_q,u,v)$}\\ The hard-scattering amplitude corresponding to Fig. \ref{fig1} (1a1) is: \begin{eqnarray} &&H^{1a1,\alpha\alpha'\beta\beta'\rho\rho'}(m_q,u,v)=\nonumber \\ &&\int{d^4k\over (2\pi)^4}\overline{Q}_{\alpha'}^i[(-ig_sT^a_{is}\gamma^\nu){i\over \rlap /k+\rlap /p_1-\rlap /p_3-\rlap /p_5-m_Q}(-ig_sT^b_{sr}\gamma^\mu){i\over \rlap /p_1-\rlap /p_3-\rlap /p_5-m_Q}(-ig_sT^c_{rj}\gamma^\lambda)]_{\alpha'\alpha}Q_\alpha^j\nonumber \\ && \overline{q}_{2\beta'}^n[(-ig_sT^a_{nw}\gamma_\nu){i\over -\rlap /k-\rlap /p_4-m_q}(-ig_sT^b_{wl}\gamma_\mu)]_{\beta'\beta}q_{2\beta}^l \overline{q}_{1\rho'}^k[(-ig_sT^c_{km}\gamma_\lambda)]_{\rho'\rho}q_{1\rho}^m{-i\over (p_3+p_5)^2}{-i\over k^2}{-i\over (k+p_4+p_6)^2}\nonumber \\ \end{eqnarray} The hard-scattering amplitude corresponding to Fig. \ref{fig1} (1a2) is: \begin{eqnarray} &&H^{1a2,\alpha\alpha'\beta\beta'\rho\rho'}(m_q,u,v)=\nonumber \\ &&\int{d^4k\over (2\pi)^4}\overline{Q}_{\alpha'}^i[(-ig_sT^a_{is}\gamma^\nu){i\over \rlap /k+\rlap /p_1-\rlap /p_3-\rlap /p_5-m_Q}(-ig_sT^b_{sr}\gamma^\mu){i\over \rlap /p_1-\rlap /p_3-\rlap /p_5-m_Q}(-ig_sT^c_{rj}\gamma^\lambda)]_{\alpha'\alpha}Q_\alpha^j\nonumber \\ &&\overline{q}_{2\beta'}^l[(-ig_sT^b_{lw}\gamma_\mu){i\over \rlap /k+\rlap /p_4-m_q}(-ig_sT^a_{wn}\gamma_\nu)]_{\beta'\beta}q_{2\beta}^n \overline{q}_{1\rho'}^m[(-ig_sT^c_{mk}\gamma_\lambda)]_{\rho'\rho}q_{1\rho}^k{-i\over (p_3+p_5)^2}{-i\over k^2}{-i\over (k+p_4+p_6)^2}\nonumber \\ \end{eqnarray} The hard-scattering amplitude corresponding to Fig.
\ref{fig1} (1b1) is: \begin{eqnarray} &&H^{1b1,\alpha\alpha'\beta\beta'\rho\rho'}(m_q,u,v)=\nonumber \\ &&\int{d^4k\over (2\pi)^4}\overline{Q}_{\alpha'}^i[(-ig_sT^a_{is}\gamma^\nu){i\over \rlap /p_4+\rlap /p_6-\rlap /p_2-m_Q}(-ig_sT^b_{sr}\gamma^\mu){i\over \rlap /k+\rlap /p_4+\rlap /p_6-\rlap /p_2-m_Q}(-ig_sT^c_{rj}\gamma^\lambda)]_{\alpha'\alpha}Q_\alpha^j\nonumber \\ &&\overline{q}_{2\beta'}^n[(-ig_sT^a_{nl}\gamma_\nu)]_{\beta'\beta}q_{2\beta}^l \overline{q}_{1\rho'}^k[(-ig_sT^c_{kw}\gamma_\lambda){i\over \rlap /k-\rlap /p_5-m_q}(-ig_sT^b_{wm}\gamma_\mu)]_{\rho'\rho}q_{1\rho}^m{-i\over (p_4+p_6)^2}{-i\over k^2}{-i\over (k-p_3-p_5)^2}\nonumber \\ \end{eqnarray} The hard-scattering amplitude corresponding to Fig. \ref{fig1} (1b2) is: \begin{eqnarray} &&H^{1b2,\alpha\alpha'\beta\beta'\rho\rho'}(m_q,u,v)=\nonumber \\ &&\int{d^4k\over (2\pi)^4}\overline{Q}_{\alpha'}^i[(-ig_sT^a_{is}\gamma^\nu){i\over \rlap /p_4+\rlap /p_6-\rlap /p_2-m_Q}(-ig_sT^b_{sr}\gamma^\mu){i\over \rlap /k+\rlap /p_4+\rlap /p_6-\rlap /p_2-m_Q}(-ig_sT^c_{rj}\gamma^\lambda)]_{\alpha'\alpha}Q_\alpha^j\nonumber \\ &&\overline{q}_{2\beta'}^l[(-ig_sT^a_{ln}\gamma_\nu)]_{\beta'\beta}q_{2\beta}^n \overline{q}_{1\rho'}^m[(-ig_sT^b_{mw}\gamma_\mu){i\over -\rlap /k+\rlap /p_5-m_q}(-ig_sT^c_{wk}\gamma_\lambda)]_{\rho'\rho}q_{1\rho}^k{-i\over (p_4+p_6)^2}{-i\over k^2}{-i\over (k-p_3-p_5)^2}\nonumber \\ \end{eqnarray} The hard-scattering amplitude corresponding to Fig.
\ref{fig1} (1c1) is: \begin{eqnarray} &&H^{1c1,\alpha\alpha'\beta\beta'\rho\rho'}(m_q,u,v)=\nonumber \\ &&\int{d^4k\over (2\pi)^4}\overline{Q}_{\alpha'}^i[(-ig_sT^a_{is}\gamma^\nu){i\over \rlap /k+\rlap /p_1-\rlap /p_4-\rlap /p_5-m_Q}(-ig_sT^b_{sr}\gamma^\mu){i\over \rlap /k+\rlap /p_1-m_Q}(-ig_sT^c_{rj}\gamma^\lambda)]_{\alpha'\alpha}Q_\alpha^j\nonumber \\ &&\overline{q}_{2\beta'}^n[(-ig_sT^a_{nw}\gamma_\nu){i\over -\rlap /k-\rlap /p_3-m_q}(-ig_sT^c_{wk}\gamma_\lambda)]_{\beta'\beta}q_{2\beta}^k \overline{q}_{1\rho'}^l[(-ig_sT^b_{lm}\gamma_\mu)]_{\rho'\rho}q_{1\rho}^m\nonumber \\ &&{-i\over (p_4+p_5)^2}{-i\over k^2}{-i\over (k+p_1+p_2-p_4-p_5)^2} \end{eqnarray} The hard scattering amplitude corresponding to Fig. \ref{fig1} (1c2) is: \begin{eqnarray} &&H^{1c2,\alpha\alpha'\beta\beta'\rho\rho'}(m_q,u,v)=\nonumber \\ &&\int{d^4k\over (2\pi)^4}\overline{Q}_{\alpha'}^i[(-ig_sT^a_{is}\gamma^\nu){i\over \rlap /k+\rlap /p_1-\rlap /p_4-\rlap /p_5-m_Q}(-ig_sT^b_{sr}\gamma^\mu){i\over \rlap /k+\rlap /p_1-m_Q}(-ig_sT^c_{rj}\gamma^\lambda)]_{\alpha'\alpha}Q_\alpha^j\nonumber \\ &&\overline{q}_{2\beta'}^k[(-ig_sT^c_{kw}\gamma_\lambda){i\over \rlap /k+\rlap /p_3-m_q}(-ig_sT^a_{wn}\gamma_\nu)]_{\beta'\beta}q_{2\beta}^n \overline{q}_{1\rho'}^m[(-ig_sT^b_{ml}\gamma_\mu)]_{\rho'\rho}q_{1\rho}^l\nonumber \\ &&{-i\over (p_4+p_5)^2}{-i\over k^2}{-i\over (k+p_1+p_2-p_4-p_5)^2} \end{eqnarray} The hard-scattering amplitude corresponding to the diagram which is topologically deformed from Fig. 
\ref{fig1} (1a1) by exchanging the connection of the gluon-lines in the loop to the gluon-light-quark vertices, is \begin{eqnarray} &&H^{2a1,\alpha\alpha'\beta\beta'\rho\rho'}(m_q,u,v)=\nonumber \\ &&\int{d^4k\over (2\pi)^4}\overline{Q}_{\alpha'}^i[(-ig_sT^a_{is}\gamma^\nu){i\over \rlap /k+\rlap /p_1-\rlap /p_3-\rlap /p_5-m_Q}(-ig_sT^b_{sr}\gamma^\mu){i\over \rlap /p_1-\rlap /p_3-\rlap /p_5-m_Q}(-ig_sT^c_{rj}\gamma^\lambda)]_{\alpha'\alpha}Q_\alpha^j\nonumber \\ && \overline{q}_{2\beta'}^n[(-ig_sT^b_{nw}\gamma_\mu){i\over \rlap /k+\rlap /p_6-m_q}(-ig_sT^a_{wl}\gamma_\nu)]_{\beta'\beta}q_{2\beta}^l \overline{q}_{1\rho'}^k[(-ig_sT^c_{km}\gamma_\lambda)]_{\rho'\rho}q_{1\rho}^m{-i\over (p_3+p_5)^2}{-i\over k^2}{-i\over (k+p_4+p_6)^2}\nonumber \\ \end{eqnarray} The hard-scattering amplitude corresponding to the topologically deformed diagram from Fig. \ref{fig1} (1a2) is: \begin{eqnarray} &&H^{2a2,\alpha\alpha'\beta\beta'\rho\rho'}(m_q,u,v)=\nonumber \\ &&\int{d^4k\over (2\pi)^4}\overline{Q}_{\alpha'}^i[(-ig_sT^a_{is}\gamma^\nu){i\over \rlap /k+\rlap /p_1-\rlap /p_3-\rlap /p_5-m_Q}(-ig_sT^b_{sr}\gamma^\mu){i\over \rlap /p_1-\rlap /p_3-\rlap /p_5-m_Q}(-ig_sT^c_{rj}\gamma^\lambda)]_{\alpha'\alpha}Q_\alpha^j\nonumber \\ && \overline{q}_{2\beta'}^l[(-ig_sT^a_{lw}\gamma_\nu){i\over -\rlap /k-\rlap /p_6-m_q}(-ig_sT^b_{wn}\gamma_\mu)]_{\beta'\beta}q_{2\beta}^n \overline{q}_{1\rho'}^m[(-ig_sT^c_{mk}\gamma_\lambda)]_{\rho'\rho}q_{1\rho}^k{-i\over (p_3+p_5)^2}{-i\over k^2}{-i\over (k+p_4+p_6)^2}\nonumber \\ \end{eqnarray} The hard-scattering amplitude corresponding to the deformed diagram from Fig. 
\ref{fig1} (1b1) is: \begin{eqnarray} &&H^{2b1,\alpha\alpha'\beta\beta'\rho\rho'}(m_q,u,v)=\nonumber \\ &&\int{d^4k\over (2\pi)^4}\overline{Q}_{\alpha'}^i[(-ig_sT^a_{is}\gamma^\nu){i\over \rlap /p_4+\rlap /p_6-\rlap /p_2-m_Q}(-ig_sT^b_{sr}\gamma^\mu){i\over \rlap /k+\rlap /p_4+\rlap /p_6-\rlap /p_2-m_Q}(-ig_sT^c_{rj}\gamma^\lambda)]_{\alpha'\alpha}Q_\alpha^j\nonumber \\ &&\overline{q}_{2\beta'}^n[(-ig_sT^a_{nl}\gamma_\nu)]_{\beta'\beta}q_{2\beta}^l \overline{q}_{1\rho'}^k[(-ig_sT^b_{kw}\gamma_\mu){i\over -\rlap /k+\rlap /p_3-m_q}(-ig_sT^c_{wm}\gamma_\lambda)]_{\rho'\rho}q_{1\rho}^m{-i\over (p_4+p_6)^2}{-i\over k^2}{-i\over (k-p_3-p_5)^2}\nonumber \\ \end{eqnarray} The hard-scattering amplitude corresponding to the deformed diagram from Fig. \ref{fig1} (1b2) is: \begin{eqnarray} &&H^{2b2,\alpha\alpha'\beta\beta'\rho\rho'}(m_q,u,v)=\nonumber \\ &&\int{d^4k\over (2\pi)^4}\overline{Q}_{\alpha'}^i[(-ig_sT^a_{is}\gamma^\nu){i\over \rlap /p_4+\rlap /p_6-\rlap /p_2-m_Q}(-ig_sT^b_{sr}\gamma^\mu){i\over \rlap /k+\rlap /p_4+\rlap /p_6-\rlap /p_2-m_Q}(-ig_sT^c_{rj}\gamma^\lambda)]_{\alpha'\alpha}Q_\alpha^j\nonumber \\ &&\overline{q}_{2\beta'}^l[(-ig_sT^a_{ln}\gamma_\nu)]_{\beta'\beta}q_{2\beta}^n \overline{q}_{1\rho'}^m[(-ig_sT^c_{mw}\gamma_\lambda){i\over \rlap /k-\rlap /p_3-m_q}(-ig_sT^b_{wk}\gamma_\mu)]_{\rho'\rho}q_{1\rho}^k{-i\over (p_4+p_6)^2}{-i\over k^2}{-i\over (k-p_3-p_5)^2}\nonumber \\ \end{eqnarray} The hard-scattering amplitude corresponding to the deformed diagram from Fig. 
\ref{fig1} (1c1) is: \begin{eqnarray} &&H^{2c1,\alpha\alpha'\beta\beta'\rho\rho'}(m_q,u,v)=\nonumber \\ &&\int{d^4k\over (2\pi)^4}\overline{Q}_{\alpha'}^i[(-ig_sT^a_{is}\gamma^\nu){i\over \rlap /k+\rlap /p_1-\rlap /p_4-\rlap /p_5-m_Q}(-ig_sT^b_{sr}\gamma^\mu){i\over \rlap /k+\rlap /p_1-m_Q}(-ig_sT^c_{rj}\gamma^\lambda)]_{\alpha'\alpha}Q_\alpha^j\nonumber \\ &&\overline{q}_{2\beta'}^n[(-ig_sT^c_{nw}\gamma_\lambda){i\over \rlap /k+\rlap /p_6-m_q}(-ig_sT^a_{wk}\gamma_\nu)]_{\beta'\beta}q_{2\beta}^k \overline{q}_{1\rho'}^l[(-ig_sT^b_{lm}\gamma_\mu)]_{\rho'\rho}q_{1\rho}^m\nonumber \\ &&{-i\over (p_4+p_5)^2}{-i\over k^2}{-i\over (k+p_1+p_2-p_4-p_5)^2} \end{eqnarray} The hard-scattering amplitude corresponding to the deformed diagram from Fig. \ref{fig1} (1c2) is: \begin{eqnarray} &&H^{2c2,\alpha\alpha'\beta\beta'\rho\rho'}(m_q,u,v)=\nonumber \\ &&\int{d^4k\over (2\pi)^4}\overline{Q}_{\alpha'}^i[(-ig_sT^a_{is}\gamma^\nu){i\over \rlap /k+\rlap /p_1-\rlap /p_4-\rlap /p_5-m_Q}(-ig_sT^b_{sr}\gamma^\mu){i\over \rlap /k+\rlap /p_1-m_Q}(-ig_sT^c_{rj}\gamma^\lambda)]_{\alpha'\alpha}Q_\alpha^j\nonumber \\ &&\overline{q}_{2\beta'}^k[(-ig_sT^a_{kw}\gamma_\nu){i\over -\rlap /k-\rlap /p_6-m_q}(-ig_sT^c_{wn}\gamma_\lambda)]_{\beta'\beta}q_{2\beta}^n \overline{q}_{1\rho'}^m[(-ig_sT^b_{ml}\gamma_\mu)]_{\rho'\rho}q_{1\rho}^l\nonumber \\ &&{-i\over (p_4+p_5)^2}{-i\over k^2}{-i\over (k+p_1+p_2-p_4-p_5)^2} \end{eqnarray} \noindent{\bf Appendix B: The amplitudes $\mathcal{A}^i(m_q,u,v)$}\\ \textbf{1.
For $\mathbf{J/\psi\to P P}$} For amplitudes $\mathcal{A}^{1a1}$ and $\mathcal{A}^{1a2}$, we have \begin{eqnarray} \mathcal{A}^{1a1}&=&C^{1a1}\widetilde{H}^{1a1}(m_q,u,v)\Phi_{J/\psi}\Phi_{P1}\Phi_{P2}\nonumber \\ \mathcal{A}^{1a2}&=&C^{1a2}\widetilde{H}^{1a2}(m_q,u,v)\Phi_{J/\psi}\Phi_{P1}\Phi_{P2} \end{eqnarray} with \begin{eqnarray} C^{1a1}&=&\text{Tr}(T^aT^bT^c)\text{Tr}(T^aT^bT^c)\nonumber \\ C^{1a2}&=&\text{Tr}(T^aT^bT^c)\text{Tr}(T^bT^aT^c)\nonumber \\ \widetilde{H}^{1a1}(m_q,u,v)&=&-\widetilde{H}^{1a2}(m_q,u,v)=-{i\pi^2\over (2\pi)^4}g_s^6({1\over 4N_C})^3{1\over (p_3+p_5)^2[(p_1-p_3-p_5)^2-m_Q^2]}\nonumber \\ &&\{D_0(m_q,u,v)[-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}}p_1'\cdot p_{_{P2}}m_Q^2+96m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}}p_1'\cdot p_{_{P1}}m_Q^2\nonumber \\ &&-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_1' p_{_{P1}}\cdot p_{_{P2}}m_Q^2+32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}}p_1'\cdot p_{_{P2}}p_3'^2-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}}p_1'\cdot p_{_{P1}}p_3'^2\nonumber \\ &&-64m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}}p_1'\cdot p_3' p_3'\cdot p_{_{P2}}+64m_{J/\psi}\varepsilon_{J/\psi}\cdot p_3' p_1'\cdot p_{_{P1}} p_3'\cdot p_{_{P2}}\nonumber \\ &&-128m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}} p_1'\cdot p_3' p_3'\cdot p_{_{P1}}-64m_{J/\psi}\varepsilon_{J/\psi}\cdot p_1' p_3'\cdot p_{_{P2}} p_3'\cdot p_{_{P1}}\nonumber \\ &&+32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_1' p_3'^2 p_{_{P1}}\cdot p_{_{P2}}]\nonumber \\ &&+D_\mu(m_q,u,v)[-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}}p^\mu_{_{P2}}m_Q^2+96m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}}p^\mu_{_{P1}}m_Q^2\nonumber \\ &&-32m_{J/\psi}\varepsilon^{\mu}_{J/\psi} p_{_{P1}}\cdot p_{_{P2}}m_Q^2-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}} p_{_{P2}}^\mu p_1'\cdot p_3'-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}} p_{_{P1}}^\mu p_1'\cdot p_3'\nonumber \\ &&+32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}} p_3'^\mu p_1'\cdot 
p_{_{P2}}+32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_3' p_{_{P1}}^\mu p_1'\cdot p_{_{P2}}-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}} p_3'^\mu p_1'\cdot p_{_{P1}}\nonumber \\ &&+32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_3' p_{_{P2}}^\mu p_1'\cdot p_{_{P1}}+32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}} p_{_{P2}}^\mu p_3'^2-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}} p_{_{P1}}^\mu p_3'^2\nonumber \\ &&-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}} p_1'^\mu p_3'\cdot p_{_{P2}}-64m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}} p_3'^\mu p_3'\cdot p_{_{P2}}-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_1' p_{_{P1}}^\mu p_3'\cdot p_{_{P2}}\nonumber \\ &&+64m_{J/\psi}\varepsilon_{J/\psi}\cdot p_3' p_{_{P1}}^\mu p_3'\cdot p_{_{P2}}+32m_{J/\psi}\varepsilon^{\mu}_{J/\psi} p_1'\cdot p_{_{P1}} p_3'\cdot p_{_{P2}}-96m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}} p_1'^\mu p_3'\cdot p_{_{P1}}\nonumber \\ &&-128m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}} p_3'^\mu p_3'\cdot p_{_{P1}}-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_1' p_{_{P2}}^\mu p_3'\cdot p_{_{P1}}-32m_{J/\psi}\varepsilon^{\mu}_{J/\psi} p_1'\cdot p_{_{P2}} p_3'\cdot p_{_{P1}}\nonumber \\ &&-64m_{J/\psi}\varepsilon^{\mu}_{J/\psi} p_3'\cdot p_{_{P2}} p_3'\cdot p_{_{P1}}-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_3' p_1'^\mu p_{_{P2}}\cdot p_{_{P1}}+32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_1' p_3'^\mu p_{_{P2}}\cdot p_{_{P1}}\nonumber \\ &&+32m_{J/\psi}\varepsilon^{\mu}_{J/\psi} p_1'\cdot p_3' p_{_{P2}}\cdot p_{_{P1}}+32m_{J/\psi}\varepsilon^{\mu}_{J/\psi} p_3'^2 p_{_{P2}}\cdot p_{_{P1}}]\nonumber \\ &&+D_{\mu\nu}(m_q,u,v)[-64m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}}p_3'^\mu p_{_{P1}}^\nu+64m_{J/\psi}\varepsilon_{J/\psi}\cdot p_3' p_{_{P2}}^\mu p_{_{P1}}^\nu\nonumber \\ &&-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}} g^{\mu\nu} p_3'\cdot p_{_{P2}}-96m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}} g^{\mu\nu} p_3'\cdot p_{_{P1}}-64m_{J/\psi}\varepsilon^{\mu}_{J/\psi} p_{_{P2}}^\nu p_3'\cdot p_{_{P1}}\nonumber \\ 
&&-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_3' g^{\mu\nu} p_{_{P2}}\cdot p_{_{P1}}+64m_{J/\psi}\varepsilon^{\mu}_{J/\psi} p_3'^\nu p_{_{P2}}\cdot p_{_{P1}}]\}\nonumber \\ D_0(m_q,u,v)&=&{1\over i\pi^2}\int d^4k {1\over k^2[(k+p_4)^2-m_q^2](k+p_4+p_6)^2[(k+p_1-p_3-p_5)^2-m_Q^2]}\nonumber \\ D_\mu(m_q,u,v)&=&{1\over i\pi^2}\int d^4k {k_\mu\over k^2[(k+p_4)^2-m_q^2](k+p_4+p_6)^2[(k+p_1-p_3-p_5)^2-m_Q^2]}\nonumber \\ D_{\mu\nu}(m_q,u,v)&=&{1\over i\pi^2}\int d^4k {k_\mu k_\nu\over k^2[(k+p_4)^2-m_q^2](k+p_4+p_6)^2[(k+p_1-p_3-p_5)^2-m_Q^2]} \end{eqnarray} where $p_1'=p_4, p_3'=p_1-p_3-p_5$. For amplitudes $\mathcal{A}^{1b1}$ and $\mathcal{A}^{1b2}$, we have \begin{eqnarray} \mathcal{A}^{1b1}&=&C^{1b1}\widetilde{H}^{1b1}(m_q,u,v)\Phi_{J/\psi}\Phi_{P1}\Phi_{P2}\nonumber \\ \mathcal{A}^{1b2}&=&C^{1b2}\widetilde{H}^{1b2}(m_q,u,v)\Phi_{J/\psi}\Phi_{P1}\Phi_{P2} \end{eqnarray} with \begin{eqnarray} C^{1b1}&=&\text{Tr}(T^aT^bT^c)\text{Tr}(T^bT^aT^c)\nonumber \\ C^{1b2}&=&\text{Tr}(T^aT^bT^c)\text{Tr}(T^aT^bT^c)\nonumber \\ \widetilde{H}^{1b1}(m_q,u,v)&=&-\widetilde{H}^{1b2}(m_q,u,v)={i\pi^2\over (2\pi)^4}g_s^6({1\over 4N_C})^3{1\over (p_4+p_6)^2[(p_4+p_6-p_2)^2-m_Q^2]}\nonumber \\ &&\{D_0(m_q,u,v)[96m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}}p_1'\cdot p_{_{P2}}m_Q^2-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}}p_1'\cdot p_{_{P1}}m_Q^2\nonumber \\ &&-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_1' p_{_{P2}}\cdot p_{_{P1}}m_Q^2-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}} p_1'\cdot p_{_{P2}} p_3'^2+32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}} p_1'\cdot p_{_{P1}} p_3'^2\nonumber \\ &&-128m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}} p_1'\cdot p_3' p_3'\cdot p_{_{P2}}-64m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}} p_1'\cdot p_3' p_3'\cdot p_{_{P1}}\nonumber \\ &&+64m_{J/\psi}\varepsilon_{J/\psi}\cdot p_3' p_1'\cdot p_{_{P2}} p_3'\cdot p_{_{P1}}-64m_{J/\psi}\varepsilon_{J/\psi}\cdot p_1' p_3'\cdot p_{_{P2}} p_3'\cdot p_{_{P1}}\nonumber \\ 
&&+32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_1' p_3'^2 p_{_{P2}}\cdot p_{_{P1}}]\nonumber \\ &&+D_\mu(m_q,u,v)[96m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}}p_{_{P2}}^\mu m_Q^2-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}}p_{_{P1}}^\mu m_Q^2\nonumber \\ &&-32m_{J/\psi}\varepsilon^{\mu}_{J/\psi}p_{_{P1}}\cdot p_{_{P2}}m_Q^2-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}} p_{_{P2}}^\mu p_1'\cdot p_3'-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}} p_{_{P1}}^\mu p_1'\cdot p_3'\nonumber \\ &&-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}} p_3'^\mu p_1'\cdot p_{_{P2}}+32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_3' p_{_{P1}}^\mu p_1'\cdot p_{_{P2}}+32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}} p_3'^\mu p_1'\cdot p_{_{P1}}\nonumber \\ &&+32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_3' p_{_{P2}}^\mu p_1'\cdot p_{_{P1}}-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}} p_{_{P2}}^\mu p_3'^2+32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}} p_{_{P1}}^\mu p_3'^2\nonumber \\ &&-96m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}} p_1'^\mu p_3'\cdot p_{_{P2}}-128m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}} p_3'^\mu p_3'\cdot p_{_{P2}}-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_1' p_{_{P1}}^\mu p_3'\cdot p_{_{P2}}\nonumber \\ &&-32m_{J/\psi}\varepsilon^{\mu}_{J/\psi} p_1'\cdot p_{_{P1}} p_3'\cdot p_{_{P2}}-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}} p_1'^\mu p_3'\cdot p_{_{P1}}-64m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}} p_3'^\mu p_3'\cdot p_{_{P1}}\nonumber \\ &&-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_1' p_{_{P2}}^\mu p_3'\cdot p_{_{P1}}+64m_{J/\psi}\varepsilon_{J/\psi}\cdot p_3' p_{_{P2}}^\mu p_3'\cdot p_{_{P1}}+32m_{J/\psi}\varepsilon^{\mu}_{J/\psi} p_1'\cdot p_{_{P2}} p_3'\cdot p_{_{P1}}\nonumber \\ &&-64m_{J/\psi}\varepsilon^{\mu}_{J/\psi} p_3'\cdot p_{_{P2}} p_3'\cdot p_{_{P1}}-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_3' p_1'^\mu p_{_{P2}}\cdot p_{_{P1}}+32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_1' p_3'^\mu p_{_{P2}}\cdot p_{_{P1}}\nonumber \\ 
&&+32m_{J/\psi}\varepsilon^{\mu}_{J/\psi} p_1'\cdot p_3' p_{_{P2}}\cdot p_{_{P1}}+32m_{J/\psi}\varepsilon^{\mu}_{J/\psi} p_3'^2 p_{_{P2}}\cdot p_{_{P1}}]\nonumber \\ &&+D_{\mu\nu}(m_q,u,v)[-64m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}}p_3'^\mu p_{_{P2}}^\nu+64m_{J/\psi}\varepsilon_{J/\psi}\cdot p_3' p_{_{P2}}^\mu p_{_{P1}}^\nu\nonumber \\ &&-96m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}} g^{\mu\nu} p_3'\cdot p_{_{P2}}-64m_{J/\psi}\varepsilon^{\mu}_{J/\psi}p_{_{P1}}^\nu p_3'\cdot p_{_{P2}}-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}} g^{\mu\nu} p_3'\cdot p_{_{P1}}\nonumber \\ &&-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_3' g^{\mu\nu} p_{_{P2}}\cdot p_{_{P1}}+64m_{J/\psi}\varepsilon^{\mu}_{J/\psi}p_3'^\nu p_{_{P1}}\cdot p_{_{P2}}]\}\nonumber \\ D_0(m_q,u,v)&=&{1\over i\pi^2}\int d^4k {1\over k^2[(k-p_5)^2-m_q^2](k-p_3-p_5)^2[(k+p_4+p_6-p_2)^2-m_Q^2]}\nonumber \\ D_\mu(m_q,u,v)&=&{1\over i\pi^2}\int d^4k {k_\mu\over k^2[(k-p_5)^2-m_q^2](k-p_3-p_5)^2[(k+p_4+p_6-p_2)^2-m_Q^2]}\nonumber \\ D_{\mu\nu}(m_q,u,v)&=&{1\over i\pi^2}\int d^4k {k_\mu k_\nu\over k^2[(k-p_5)^2-m_q^2](k-p_3-p_5)^2[(k+p_4+p_6-p_2)^2-m_Q^2]} \end{eqnarray} where $p_1'=-p_5, p_3'=p_4+p_6-p_2$. 
For amplitudes $\mathcal{A}^{1c1}$ and $\mathcal{A}^{1c2}$, we have \begin{eqnarray} \mathcal{A}^{1c1}&=&C^{1c1}\widetilde{H}^{1c1}(m_q,u,v)\Phi_{J/\psi}\Phi_{P1}\Phi_{P2}\nonumber \\ \mathcal{A}^{1c2}&=&C^{1c2}\widetilde{H}^{1c2}(m_q,u,v)\Phi_{J/\psi}\Phi_{P1}\Phi_{P2} \end{eqnarray} with \begin{eqnarray} &&C^{1c1}=\text{Tr}(T^aT^bT^c)\text{Tr}(T^bT^aT^c)\nonumber \\ &&C^{1c2}=\text{Tr}(T^aT^bT^c)\text{Tr}(T^aT^bT^c)\nonumber \\ &&\widetilde{H}^{1c1}(m_q,u,v)=-\widetilde{H}^{1c2}(m_q,u,v)=-{i\pi^2\over (2\pi)^4}g_s^6({1\over 4N_C})^3{1\over (p_4+p_5)^2}\nonumber \\ &&\{E_0(m_q,u,v)[-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}} p_4'\cdot p_{_{P2}}m_Q^2-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}} p_4'\cdot p_{_{P1}}m_Q^2\nonumber \\ &&+96m_{J/\psi}\varepsilon_{J/\psi}\cdot p_4' p_{_{P2}}\cdot p_{_{P1}}m_Q^2-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}} p_1'\cdot p_{_{P1}} p_2'\cdot p_4'\nonumber \\ &&-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}} p_2'\cdot p_{_{P2}} p_1'\cdot p_4'-96m_{J/\psi}\varepsilon_{J/\psi}\cdot p_4' p_1'\cdot p_{_{P1}} p_2'\cdot p_{_{P2}}\nonumber \\ &&-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}} p_1'\cdot p_4' p_2'\cdot p_{_{P1}}-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_4' p_1'\cdot p_{_{P2}} p_2'\cdot p_{_{P1}}\nonumber \\ &&+32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}} p_1'\cdot p_2' p_4'\cdot p_{_{P2}}-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_2' p_1'\cdot p_{_{P1}} p_4'\cdot p_{_{P2}}\nonumber \\ &&+32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_1' p_2'\cdot p_{_{P1}} p_4'\cdot p_{_{P2}}+32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}} p_1'\cdot p_2' p_4'\cdot p_{_{P1}}\nonumber \\ &&+32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_2' p_1'\cdot p_{_{P2}} p_4'\cdot p_{_{P1}}-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_1' p_2'\cdot p_{_{P2}} p_4'\cdot p_{_{P1}}\nonumber \\ &&-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_4' p_1'\cdot p_2' p_{_{P2}}\cdot p_{_{P1}}+32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_2' p_1'\cdot p_4' p_{_{P2}}\cdot 
p_{_{P1}}\nonumber \\ &&+32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_1' p_2'\cdot p_4' p_{_{P2}}\cdot p_{_{P1}}-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}} p_1'\cdot p_{_{P2}} p_2'\cdot p_4']\nonumber \\ &&+E_\mu(m_q,u,v)[-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}} p_{_{P2}}^\mu m_Q^2-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}} p_{_{P1}}^\mu m_Q^2\nonumber \\ &&+96m_{J/\psi}\varepsilon^{\mu}_{J/\psi}p_{_{P1}}\cdot p_{_{P2}}m_Q^2+32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}} p_{_{P2}}^\mu p_1'\cdot p_2'+32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}} p_{_{P1}}^\mu p_1'\cdot p_2'\nonumber \\ &&-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}} p_{_{P2}}^\mu p_1'\cdot p_4'-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}} p_{_{P1}}^\mu p_1'\cdot p_4'-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}} p_2'^\mu p_1'\cdot p_{_{P2}}\nonumber \\ &&-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}} p_4'^\mu p_1'\cdot p_{_{P2}}+32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_2' p_{_{P1}}^\mu p_1'\cdot p_{_{P2}}-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_4' p_{_{P1}}^\mu p_1'\cdot p_{_{P2}}\nonumber \\ &&-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}} p_2'^\mu p_1'\cdot p_{_{P1}}-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}} p_4'^\mu p_1'\cdot p_{_{P1}}-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_2' p_{_{P2}}^\mu p_1'\cdot p_{_{P1}}\nonumber \\ &&-96m_{J/\psi}\varepsilon_{J/\psi}\cdot p_4' p_{_{P2}}^\mu p_1'\cdot p_{_{P1}}-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}} p_{_{P2}}^\mu p_2'\cdot p_4'-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}} p_{_{P1}}^\mu p_2'\cdot p_4'\nonumber \\ &&-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}} p_1'^\mu p_2'\cdot p_{_{P2}}-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}} p_4'^\mu p_2'\cdot p_{_{P2}}-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_1' p_{_{P1}}^\mu p_2'\cdot p_{_{P2}}\nonumber \\ &&-96m_{J/\psi}\varepsilon_{J/\psi}\cdot p_4' p_{_{P1}}^\mu p_2'\cdot p_{_{P2}}-96m_{J/\psi}\varepsilon^{\mu}_{J/\psi} p_1'\cdot p_{_{P1}} p_2'\cdot 
p_{_{P2}}-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}} p_1'^\mu p_2'\cdot p_{_{P1}}\nonumber \\ &&-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}} p_4'^\mu p_2'\cdot p_{_{P1}}+32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_1' p_{_{P2}}^\mu p_2'\cdot p_{_{P1}}-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_4' p_{_{P2}}^\mu p_2'\cdot p_{_{P1}}\nonumber \\ &&-32m_{J/\psi}\varepsilon^{\mu}_{J/\psi} p_1'\cdot p_{_{P2}} p_2'\cdot p_{_{P1}}+32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}} p_1'^\mu p_4'\cdot p_{_{P2}}+32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}} p_2'^\mu p_4'\cdot p_{_{P2}}\nonumber \\ &&+32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_1' p_{_{P1}}^\mu p_4'\cdot p_{_{P2}}-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_2' p_{_{P1}}^\mu p_4'\cdot p_{_{P2}}-32m_{J/\psi}\varepsilon^{\mu}_{J/\psi} p_1'\cdot p_{_{P1}} p_4'\cdot p_{_{P2}}\nonumber \\ &&+32m_{J/\psi}\varepsilon^{\mu}_{J/\psi} p_2'\cdot p_{_{P1}} p_4'\cdot p_{_{P2}}+32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}} p_1'^\mu p_4'\cdot p_{_{P1}}+32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}} p_2'^\mu p_4'\cdot p_{_{P1}}\nonumber \\ &&-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_1' p_{_{P2}}^\mu p_4'\cdot p_{_{P1}}+32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_2' p_{_{P2}}^\mu p_4'\cdot p_{_{P1}}+32m_{J/\psi}\varepsilon^{\mu}_{J/\psi} p_1'\cdot p_{_{P2}} p_4'\cdot p_{_{P1}}\nonumber \\ &&-32m_{J/\psi}\varepsilon^{\mu}_{J/\psi} p_2'\cdot p_{_{P2}} p_4'\cdot p_{_{P1}}+32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_2' p_1'^\mu p_{_{P2}}\cdot p_{_{P1}}-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_4' p_1'^\mu p_{_{P2}}\cdot p_{_{P1}}\nonumber \\ &&+32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_1' p_2'^\mu p_{_{P2}}\cdot p_{_{P1}}-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_4' p_2'^\mu p_{_{P2}}\cdot p_{_{P1}}+32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_1' p_4'^\mu p_{_{P2}}\cdot p_{_{P1}}\nonumber \\ &&+32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_2' p_4'^\mu p_{_{P2}}\cdot p_{_{P1}}-32m_{J/\psi}\varepsilon^{\mu}_{J/\psi} p_2'\cdot p_1' p_{_{P2}}\cdot 
p_{_{P1}}+32m_{J/\psi}\varepsilon^{\mu}_{J/\psi} p_4'\cdot p_1' p_{_{P2}}\cdot p_{_{P1}}\nonumber \\ &&+32m_{J/\psi}\varepsilon^{\mu}_{J/\psi} p_4'\cdot p_2' p_{_{P2}}\cdot p_{_{P1}}]\nonumber \\ &&+E_{\mu\nu}(m_q,u,v)[-64m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}} p_4'^\mu p_{_{P2}}^\nu-64m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}} p_4'^\mu p_{_{P1}}^\nu\nonumber \\ &&-128m_{J/\psi}\varepsilon_{J/\psi}\cdot p_4' p_{_{P2}}^\mu p_{_{P1}}^\nu-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}}g^{\mu\nu} p_1'\cdot p_{_{P2}}-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}}g^{\mu\nu} p_1'\cdot p_{_{P1}}\nonumber \\ &&-128m_{J/\psi}\varepsilon^{\mu}_{J/\psi}p_{_{P2}}^\nu p_1'\cdot p_{_{P1}}-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}}g^{\mu\nu} p_2'\cdot p_{_{P2}}-128m_{J/\psi}\varepsilon^{\mu}_{J/\psi}p_{_{P1}}^\nu p_2'\cdot p_{_{P2}}\nonumber \\ &&-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}}g^{\mu\nu} p_2'\cdot p_{_{P1}}+32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}}g^{\mu\nu} p_4'\cdot p_{_{P2}}+32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}}g^{\mu\nu} p_4'\cdot p_{_{P1}}\nonumber \\ &&+32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_1'g^{\mu\nu} p_{_{P2}}\cdot p_{_{P1}}+32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_2'g^{\mu\nu} p_{_{P2}}\cdot p_{_{P1}}-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_4'g^{\mu\nu} p_{_{P2}}\cdot p_{_{P1}}\nonumber \\ &&-64m_{J/\psi}\varepsilon^{\mu}_{J/\psi}p_4'^\nu p_{_{P2}}\cdot p_{_{P1}}]\nonumber \\ &&+E_{\mu\nu\theta}(m_q,u,v)[-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}}g^{\mu\nu}p_{_{P2}}^\theta-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}}g^{\mu\nu}p_{_{P1}}^\theta\nonumber \\ &&+32m_{J/\psi}\varepsilon^{\mu}_{J/\psi}g^{\nu\theta}p_{_{P1}}\cdot p_{_{P2}}-128m_{J/\psi}\varepsilon^{\mu}_{J/\psi}p_{_{P1}}^\nu p_{_{P2}}^\theta]\} \end{eqnarray} \begin{eqnarray} &&E_0(m_q,u,v)=\nonumber \\ &&{1\over i\pi^2}\int d^4k {1\over k^2[(k+p_1)^2-m_Q^2][(k+p_1-p_4-p_5)^2-m_Q^2][(k+p_3)^2-m_q^2](k+p_1+p_2-p_4-p_5)^2}\nonumber \\ 
&&E_\mu(m_q,u,v)=\nonumber \\ &&{1\over i\pi^2}\int d^4k {k_\mu\over k^2[(k+p_1)^2-m_Q^2][(k+p_1-p_4-p_5)^2-m_Q^2][(k+p_3)^2-m_q^2](k+p_1+p_2-p_4-p_5)^2}\nonumber \\ &&E_{\mu\nu}(m_q,u,v)=\nonumber \\ &&{1\over i\pi^2}\int d^4k {k_\mu k_\nu\over k^2[(k+p_1)^2-m_Q^2][(k+p_1-p_4-p_5)^2-m_Q^2][(k+p_3)^2-m_q^2](k+p_1+p_2-p_4-p_5)^2}\nonumber \\ &&E_{\mu\nu\theta}(m_q,u,v)=\nonumber \\ &&{1\over i\pi^2}\int d^4k {k_\mu k_\nu k_\theta\over k^2[(k+p_1)^2-m_Q^2][(k+p_1-p_4-p_5)^2-m_Q^2][(k+p_3)^2-m_q^2](k+p_1+p_2-p_4-p_5)^2}\nonumber \\ \end{eqnarray} where $p_1'=p_1, p_2'=p_1-p_4-p_5, p_4'=p_3$. For amplitudes $\mathcal{A}^{2a1}$ and $\mathcal{A}^{2a2}$, we have \begin{eqnarray} \mathcal{A}^{2a1}&=&C^{2a1}\widetilde{H}^{2a1}(m_q,u,v)\Phi_{J/\psi}\Phi_{P1}\Phi_{P2}\nonumber \\ \mathcal{A}^{2a2}&=&C^{2a2}\widetilde{H}^{2a2}(m_q,u,v)\Phi_{J/\psi}\Phi_{P1}\Phi_{P2} \end{eqnarray} with \begin{eqnarray} C^{2a1}&=&\text{Tr}(T^aT^bT^c)\text{Tr}(T^bT^aT^c)\nonumber \\ C^{2a2}&=&\text{Tr}(T^aT^bT^c)\text{Tr}(T^aT^bT^c)\nonumber \\ \widetilde{H}^{2a1}(m_q,u,v)&=&-\widetilde{H}^{2a2}(m_q,u,v)={i\pi^2\over (2\pi)^4}g_s^6({1\over 4N_C})^3{1\over (p_3+p_5)^2[(p_1-p_3-p_5)^2-m_Q^2]}\nonumber \\ &&\{D_0(m_q,u,v)[96m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}}p_1'\cdot p_{_{P2}}m_Q^2-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_1' p_{_{P1}}\cdot p_{_{P2}}m_Q^2\nonumber \\ &&-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}}p_1'\cdot p_{_{P1}}m_Q^2-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}}p_1'\cdot p_{_{P2}} p_3'^2+32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}}p_1'\cdot p_{_{P1}} p_3'^2\nonumber \\ &&-128m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}}p_1'\cdot p_3' p_3'\cdot p_{_{P2}}-64m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}}p_1'\cdot p_3' p_3'\cdot p_{_{P1}}\nonumber \\ &&+64m_{J/\psi}\varepsilon_{J/\psi}\cdot p_3' p_1'\cdot p_{_{P2}} p_3'\cdot p_{_{P1}}-64m_{J/\psi}\varepsilon_{J/\psi}\cdot p_1' p_3'\cdot p_{_{P2}} p_3'\cdot p_{_{P1}}\nonumber \\ 
&&+32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_1' p_3'^2 p_{_{P2}}\cdot p_{_{P1}}]\nonumber \\ &&+D_\mu(m_q,u,v)[-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}}p^\mu_{_{P1}}m_Q^2+96m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}}p^\mu_{_{P2}}m_Q^2\nonumber \\ &&-32m_{J/\psi}\varepsilon^{\mu}_{J/\psi} p_{_{P1}}\cdot p_{_{P2}}m_Q^2-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}} p_{_{P2}}^\mu p_1'\cdot p_3'-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}} p_{_{P1}}^\mu p_1'\cdot p_3'\nonumber \\ &&-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}} p_3'^\mu p_1'\cdot p_{_{P2}}+32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_3' p_{_{P1}}^\mu p_1'\cdot p_{_{P2}}+32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}} p_3'^\mu p_1'\cdot p_{_{P1}}\nonumber \\ &&+32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_3' p_{_{P2}}^\mu p_1'\cdot p_{_{P1}}-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}} p_{_{P2}}^\mu p_3'^2+32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}} p_{_{P1}}^\mu p_3'^2\nonumber \\ &&-96m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}} p_1'^\mu p_3'\cdot p_{_{P2}}-128m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}} p_3'^\mu p_3'\cdot p_{_{P2}}-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_1' p_{_{P1}}^\mu p_3'\cdot p_{_{P2}}\nonumber \\ &&-32m_{J/\psi}\varepsilon^{\mu}_{J/\psi} p_1'\cdot p_{_{P1}} p_3'\cdot p_{_{P2}}-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}} p_1'^\mu p_3'\cdot p_{_{P1}}-64m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}} p_3'^\mu p_3'\cdot p_{_{P1}}\nonumber \\ &&-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_1' p_{_{P2}}^\mu p_3'\cdot p_{_{P1}}+64m_{J/\psi}\varepsilon_{J/\psi}\cdot p_3' p_{_{P2}}^\mu p_3'\cdot p_{_{P1}}+32m_{J/\psi}\varepsilon^{\mu}_{J/\psi} p_1'\cdot p_{_{P2}} p_3'\cdot p_{_{P1}}\nonumber \\ &&-64m_{J/\psi}\varepsilon^{\mu}_{J/\psi} p_3'\cdot p_{_{P2}} p_3'\cdot p_{_{P1}}-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_3' p_1'^\mu p_{_{P2}}\cdot p_{_{P1}}+32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_1' p_3'^\mu p_{_{P2}}\cdot p_{_{P1}}\nonumber \\ &&+32m_{J/\psi}\varepsilon^{\mu}_{J/\psi}
p_1'\cdot p_3' p_{_{P2}}\cdot p_{_{P1}}+32m_{J/\psi}\varepsilon^{\mu}_{J/\psi} p_3'^2 p_{_{P2}}\cdot p_{_{P1}}]\nonumber \\ &&+D_{\mu\nu}(m_q,u,v)[-64m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}}p_3'^\mu p_{_{P2}}^\nu+64m_{J/\psi}\varepsilon_{J/\psi}\cdot p_3' p_{_{P1}}^\mu p_{_{P2}}^\nu\nonumber \\ &&-96m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}} g^{\mu\nu} p_3'\cdot p_{_{P2}}-64m_{J/\psi}\varepsilon^{\mu}_{J/\psi} p_{_{P1}}^\nu p_3'\cdot p_{_{P2}}-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}}g^{\mu\nu} p_3'\cdot p_{_{P1}}\nonumber \\ &&-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_3' g^{\mu\nu} p_{_{P2}}\cdot p_{_{P1}}+64m_{J/\psi}\varepsilon^{\mu}_{J/\psi} p_3'^\nu p_{_{P2}}\cdot p_{_{P1}}]\}\nonumber \\ D_0(m_q,u,v)&=&{1\over i\pi^2}\int d^4k {1\over k^2[(k+p_6)^2-m_q^2](k+p_4+p_6)^2[(k+p_1-p_3-p_5)^2-m_Q^2]}\nonumber \\ D_\mu(m_q,u,v)&=&{1\over i\pi^2}\int d^4k {k_\mu\over k^2[(k+p_6)^2-m_q^2](k+p_4+p_6)^2[(k+p_1-p_3-p_5)^2-m_Q^2]}\nonumber \\ D_{\mu\nu}(m_q,u,v)&=&{1\over i\pi^2}\int d^4k {k_\mu k_\nu\over k^2[(k+p_6)^2-m_q^2](k+p_4+p_6)^2[(k+p_1-p_3-p_5)^2-m_Q^2]} \end{eqnarray} where $p_1'=p_6, p_3'=p_1-p_3-p_5$. 
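The four-point functions $D_0$, $D_\mu$, $D_{\mu\nu}$ above are one-loop box integrals; a numerical treatment Wick-rotates them and reduces them to Feynman-parameter or radial integrals. The following Python sketch is only a toy illustration of that last step, not the physical kinematics: with all external momenta set to zero and four distinct placeholder squared masses $a_i$ (standing in for the $m_q^2$, $m_Q^2$ propagator masses), the Euclidean scalar box collapses to $\pi^{2}\int_{0}^{\infty}t\,dt/\prod_i(t+a_i)$ after $t=k_E^{2}$, and a quadrature can be checked against the partial-fraction closed form:

```python
import math

def box0_closed_form(a):
    """pi^2 * Integral_0^inf t dt / prod_i(t + a_i) for distinct a_i > 0.

    Partial fractions: t/prod(t+a_i) = sum_i b_i/(t+a_i) with
    b_i = -a_i / prod_{j!=i}(a_j - a_i); since sum_i b_i = 0 the
    log divergences cancel and the integral equals -sum_i b_i ln(a_i)."""
    total = 0.0
    for i, ai in enumerate(a):
        prod = 1.0
        for j, aj in enumerate(a):
            if j != i:
                prod *= aj - ai
        total += (-ai / prod) * math.log(ai)
    return -math.pi ** 2 * total

def box0_quadrature(a, n=200000):
    """Same integral by composite Simpson after mapping t = u/(1-u)."""
    def g(u):
        if u >= 1.0:
            return 0.0  # integrand vanishes like (1-u) at the endpoint
        t = u / (1.0 - u)
        denom = 1.0
        for ai in a:
            denom *= t + ai
        return (t / denom) / (1.0 - u) ** 2
    h = 1.0 / n
    s = g(0.0) + g(1.0)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * g(k * h)
    return math.pi ** 2 * s * h / 3.0

masses2 = [1.0, 2.0, 3.0, 4.0]  # placeholder squared masses, not the physical m_q, m_Q
print(box0_closed_form(masses2))
print(box0_quadrature(masses2))
```

The same partial-fraction bookkeeping (individually divergent pieces whose coefficients sum to zero) is what standard one-loop reduction exploits; for nonzero external momenta one would instead introduce Feynman parameters before the radial integration.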
For amplitudes $\mathcal{A}^{2b1}$ and $\mathcal{A}^{2b2}$, we have \begin{eqnarray} \mathcal{A}^{2b1}&=&C^{2b1}\widetilde{H}^{2b1}(m_q,u,v)\Phi_{J/\psi}\Phi_{P1}\Phi_{P2}\nonumber \\ \mathcal{A}^{2b2}&=&C^{2b2}\widetilde{H}^{2b2}(m_q,u,v)\Phi_{J/\psi}\Phi_{P1}\Phi_{P2} \end{eqnarray} with \begin{eqnarray} C^{2b1}&=&\text{Tr}(T^aT^bT^c)\text{Tr}(T^aT^bT^c)\nonumber \\ C^{2b2}&=&\text{Tr}(T^aT^bT^c)\text{Tr}(T^bT^aT^c)\nonumber \\ \widetilde{H}^{2b1}(m_q,u,v)&=&-\widetilde{H}^{2b2}(m_q,u,v)=-{i\pi^2\over (2\pi)^4}g_s^6({1\over 4N_C})^3{1\over (p_4+p_6)^2[(p_4+p_6-p_2)^2-m_Q^2]}\nonumber \\ &&\{D_0(m_q,u,v)[96m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}}p_1'\cdot p_{_{P1}}m_Q^2-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}}p_1'\cdot p_{_{P2}}m_Q^2\nonumber \\ &&+32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_1' p_{_{P2}}\cdot p_{_{P1}}m_Q^2+32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}} p_1'\cdot p_{_{P2}} p_3'^2-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}} p_1'\cdot p_{_{P1}} p_3'^2\nonumber \\ &&-64m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}} p_1'\cdot p_3' p_3'\cdot p_{_{P2}}+64m_{J/\psi}\varepsilon_{J/\psi}\cdot p_3' p_1'\cdot p_{_{P1}} p_3'\cdot p_{_{P2}}\nonumber \\ &&-128m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}} p_1'\cdot p_3' p_3'\cdot p_{_{P1}}-64m_{J/\psi}\varepsilon_{J/\psi}\cdot p_1' p_3'\cdot p_{_{P2}} p_3'\cdot p_{_{P1}}\nonumber \\ &&+32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_1' p_3'^2 p_{_{P2}}\cdot p_{_{P1}}]\nonumber \\ &&+D_\mu(m_q,u,v)[96m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}}p_{_{P1}}^\mu m_Q^2-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}}p_{_{P2}}^\mu m_Q^2\nonumber \\ &&-32m_{J/\psi}\varepsilon^{\mu}_{J/\psi}p_{_{P1}}\cdot p_{_{P2}}m_Q^2-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}} p_{_{P2}}^\mu p_1'\cdot p_3'-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}} p_{_{P1}}^\mu p_1'\cdot p_3'\nonumber \\ &&+32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}} p_3'^\mu p_1'\cdot p_{_{P2}}+32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_3' p_{_{P1}}^\mu
p_1'\cdot p_{_{P2}}-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}} p_3'^\mu p_1'\cdot p_{_{P1}}\nonumber \\ &&+32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_3' p_{_{P2}}^\mu p_1'\cdot p_{_{P1}}+32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}} p_{_{P2}}^\mu p_3'^2-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}} p_{_{P1}}^\mu p_3'^2\nonumber \\ &&-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}} p_1'^\mu p_3'\cdot p_{_{P2}}-64m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}} p_3'^\mu p_3'\cdot p_{_{P2}}-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_1' p_{_{P1}}^\mu p_3'\cdot p_{_{P2}}\nonumber \\ &&+64m_{J/\psi}\varepsilon_{J/\psi}\cdot p_3' p_{_{P1}}^\mu p_3'\cdot p_{_{P2}}+32m_{J/\psi}\varepsilon^{\mu}_{J/\psi} p_1'\cdot p_{_{P1}} p_3'\cdot p_{_{P2}}-96m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}} p_1'^\mu p_3'\cdot p_{_{P1}}\nonumber \\ &&-128m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}} p_3'^\mu p_3'\cdot p_{_{P1}}-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_1' p_{_{P2}}^\mu p_3'\cdot p_{_{P1}}-32m_{J/\psi}\varepsilon^{\mu}_{J/\psi} p_1'\cdot p_{_{P2}} p_3'\cdot p_{_{P1}}\nonumber \\ &&-64m_{J/\psi}\varepsilon^{\mu}_{J/\psi} p_3'\cdot p_{_{P2}} p_3'\cdot p_{_{P1}}-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_3' p_1'^\mu p_{_{P2}}\cdot p_{_{P1}}+32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_1' p_3'^\mu p_{_{P2}}\cdot p_{_{P1}}\nonumber \\ &&+32m_{J/\psi}\varepsilon^{\mu}_{J/\psi} p_1'\cdot p_3' p_{_{P2}}\cdot p_{_{P1}}+32m_{J/\psi}\varepsilon^{\mu}_{J/\psi} p_3'^2 p_{_{P2}}\cdot p_{_{P1}}]\nonumber \\ &&+D_{\mu\nu}(m_q,u,v)[-64m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}}p_{_{P1}}^\mu p_3'^\nu+64m_{J/\psi}\varepsilon_{J/\psi}\cdot p_3' p_{_{P2}}^\mu p_{_{P1}}^\nu\nonumber \\ &&-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}} g^{\mu\nu} p_3'\cdot p_{_{P2}}-96m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}} g^{\mu\nu} p_3'\cdot p_{_{P1}}-64m_{J/\psi}\varepsilon^{\mu}_{J/\psi} p_{_{P2}}^\nu p_3'\cdot p_{_{P1}}\nonumber \\ &&-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_3' g^{\mu\nu} p_{_{P2}}\cdot 
p_{_{P1}}+64m_{J/\psi}\varepsilon^{\mu}_{J/\psi}p_3'^\nu p_{_{P1}}\cdot p_{_{P2}}]\}\nonumber \\ D_0(m_q,u,v)&=&{1\over i\pi^2}\int d^4k {1\over k^2[(k-p_3)^2-m_q^2](k-p_3-p_5)^2[(k+p_4+p_6-p_2)^2-m_Q^2]}\nonumber \\ D_\mu(m_q,u,v)&=&{1\over i\pi^2}\int d^4k {k_\mu\over k^2[(k-p_3)^2-m_q^2](k-p_3-p_5)^2[(k+p_4+p_6-p_2)^2-m_Q^2]}\nonumber \\ D_{\mu\nu}(m_q,u,v)&=&{1\over i\pi^2}\int d^4k {k_\mu k_\nu\over k^2[(k-p_3)^2-m_q^2](k-p_3-p_5)^2[(k+p_4+p_6-p_2)^2-m_Q^2]} \end{eqnarray} where $p_1'=-p_3, p_3'=p_4+p_6-p_2$. For amplitudes $\mathcal{A}^{2c1}$ and $\mathcal{A}^{2c2}$, we have \begin{eqnarray} \mathcal{A}^{2c1}&=&C^{2c1}\widetilde{H}^{2c1}(m_q,u,v)\Phi_{J/\psi}\Phi_{P1}\Phi_{P2}\nonumber \\ \mathcal{A}^{2c2}&=&C^{2c2}\widetilde{H}^{2c2}(m_q,u,v)\Phi_{J/\psi}\Phi_{P1}\Phi_{P2} \end{eqnarray} with \begin{eqnarray} &&C^{2c1}=\text{Tr}(T^aT^bT^c)\text{Tr}(T^aT^bT^c)\nonumber \\ &&C^{2c2}=\text{Tr}(T^aT^bT^c)\text{Tr}(T^bT^aT^c)\nonumber \\ &&\widetilde{H}^{2c1}(m_q,u,v)=-\widetilde{H}^{2c2}(m_q,u,v)={i\pi^2\over (2\pi)^4}g_s^6({1\over 4N_C})^3{1\over (p_4+p_5)^2}\nonumber \\ &&\{E_0(m_q,u,v)[-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}} p_4'\cdot p_{_{P2}}m_Q^2-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}} p_4'\cdot p_{_{P1}}m_Q^2\nonumber \\ &&+96m_{J/\psi}\varepsilon_{J/\psi}\cdot p_4' p_{_{P2}}\cdot p_{_{P1}}m_Q^2-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}} p_1'\cdot p_{_{P2}} p_2'\cdot p_4'\nonumber \\ &&-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}} p_1'\cdot p_{_{P1}} p_2'\cdot p_4'-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}} p_1'\cdot p_4' p_2'\cdot p_{_{P2}}\nonumber \\ &&-96m_{J/\psi}\varepsilon_{J/\psi}\cdot p_4' p_1'\cdot p_{_{P1}} p_2'\cdot p_{_{P2}}-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}} p_1'\cdot p_4' p_2'\cdot p_{_{P1}}\nonumber \\ &&-96m_{J/\psi}\varepsilon_{J/\psi}\cdot p_4' p_1'\cdot p_{_{P2}} p_2'\cdot p_{_{P1}}+32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}} p_1'\cdot p_2' p_4'\cdot p_{_{P2}}\nonumber \\ 
&&+32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_2' p_1'\cdot p_{_{P1}} p_4'\cdot p_{_{P2}}-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_1' p_2'\cdot p_{_{P1}} p_4'\cdot p_{_{P2}}\nonumber \\ &&+32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}} p_1'\cdot p_2' p_4'\cdot p_{_{P1}}-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_2' p_1'\cdot p_{_{P2}} p_4'\cdot p_{_{P1}}\nonumber \\ &&+32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_1' p_2'\cdot p_{_{P2}} p_4'\cdot p_{_{P1}}-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_4' p_1'\cdot p_2' p_{_{P2}}\cdot p_{_{P1}}\nonumber \\ &&+32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_2' p_1'\cdot p_4' p_{_{P2}}\cdot p_{_{P1}}+32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_1' p_2'\cdot p_4' p_{_{P2}}\cdot p_{_{P1}}]\nonumber \\ &&+E_\mu(m_q,u,v)[-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}} p_{_{P2}}^\mu m_Q^2-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}} p_{_{P1}}^\mu m_Q^2\nonumber \\ &&+96m_{J/\psi}\varepsilon^{\mu}_{J/\psi}p_{_{P1}}\cdot p_{_{P2}}m_Q^2+32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}} p_{_{P2}}^\mu p_1'\cdot p_2'+32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}} p_{_{P1}}^\mu p_1'\cdot p_2'\nonumber \\ &&-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}} p_{_{P2}}^\mu p_1'\cdot p_4'-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}} p_{_{P1}}^\mu p_1'\cdot p_4'-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}} p_2'^\mu p_1'\cdot p_{_{P2}}\nonumber \\ &&-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}} p_4'^\mu p_1'\cdot p_{_{P2}}-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_2' p_{_{P1}}^\mu p_1'\cdot p_{_{P2}}-96m_{J/\psi}\varepsilon_{J/\psi}\cdot p_4' p_{_{P1}}^\mu p_1'\cdot p_{_{P2}}\nonumber \\ &&-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}} p_2'^\mu p_1'\cdot p_{_{P1}}-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}} p_4'^\mu p_1'\cdot p_{_{P1}}+32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_2' p_{_{P2}}^\mu p_1'\cdot p_{_{P1}}\nonumber \\ &&-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_4' p_{_{P2}}^\mu p_1'\cdot p_{_{P1}}-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}} 
p_{_{P2}}^\mu p_2'\cdot p_4'-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}} p_{_{P1}}^\mu p_2'\cdot p_4'\nonumber \\ &&-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}} p_1'^\mu p_2'\cdot p_{_{P2}}-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}} p_4'^\mu p_2'\cdot p_{_{P2}}+32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_1' p_{_{P1}}^\mu p_2'\cdot p_{_{P2}}\nonumber \\ &&-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_4' p_{_{P1}}^\mu p_2'\cdot p_{_{P2}}-32m_{J/\psi}\varepsilon^{\mu}_{J/\psi} p_1'\cdot p_{_{P1}} p_2'\cdot p_{_{P2}}-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}} p_1'^\mu p_2'\cdot p_{_{P1}}\nonumber \\ &&-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}} p_4'^\mu p_2'\cdot p_{_{P1}}-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_1' p_{_{P2}}^\mu p_2'\cdot p_{_{P1}}-96m_{J/\psi}\varepsilon_{J/\psi}\cdot p_4' p_{_{P2}}^\mu p_2'\cdot p_{_{P1}}\nonumber \\ &&-96m_{J/\psi}\varepsilon^{\mu}_{J/\psi} p_1'\cdot p_{_{P2}} p_2'\cdot p_{_{P1}}+32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}} p_1'^\mu p_4'\cdot p_{_{P2}}+32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}} p_2'^\mu p_4'\cdot p_{_{P2}}\nonumber \\ &&-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_1' p_{_{P1}}^\mu p_4'\cdot p_{_{P2}}+32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_2' p_{_{P1}}^\mu p_4'\cdot p_{_{P2}}+32m_{J/\psi}\varepsilon^{\mu}_{J/\psi} p_1'\cdot p_{_{P1}} p_4'\cdot p_{_{P2}}\nonumber \\ &&-32m_{J/\psi}\varepsilon^{\mu}_{J/\psi} p_2'\cdot p_{_{P1}} p_4'\cdot p_{_{P2}}+32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}} p_1'^\mu p_4'\cdot p_{_{P1}}+32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}} p_2'^\mu p_4'\cdot p_{_{P1}}\nonumber \\ &&+32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_1' p_{_{P2}}^\mu p_4'\cdot p_{_{P1}}-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_2' p_{_{P2}}^\mu p_4'\cdot p_{_{P1}}-32m_{J/\psi}\varepsilon^{\mu}_{J/\psi} p_1'\cdot p_{_{P2}} p_4'\cdot p_{_{P1}}\nonumber \\ &&+32m_{J/\psi}\varepsilon^{\mu}_{J/\psi} p_2'\cdot p_{_{P2}} p_4'\cdot p_{_{P1}}+32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_2' p_1'^\mu p_{_{P2}}\cdot 
p_{_{P1}}-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_4' p_1'^\mu p_{_{P2}}\cdot p_{_{P1}}\nonumber \\ &&+32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_1' p_2'^\mu p_{_{P2}}\cdot p_{_{P1}}-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_4' p_2'^\mu p_{_{P2}}\cdot p_{_{P1}}+32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_1' p_4'^\mu p_{_{P2}}\cdot p_{_{P1}}\nonumber \\ &&+32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_2' p_4'^\mu p_{_{P2}}\cdot p_{_{P1}}-32m_{J/\psi}\varepsilon^{\mu}_{J/\psi} p_2'\cdot p_1' p_{_{P2}}\cdot p_{_{P1}}+32m_{J/\psi}\varepsilon^{\mu}_{J/\psi} p_4'\cdot p_1' p_{_{P2}}\cdot p_{_{P1}}\nonumber \\ &&+32m_{J/\psi}\varepsilon^{\mu}_{J/\psi} p_4'\cdot p_2' p_{_{P2}}\cdot p_{_{P1}}]\nonumber \\ &&+E_{\mu\nu}(m_q,u,v)[-64m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}} p_4'^\mu p_{_{P2}}^\nu-64m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}} p_4'^\mu p_{_{P1}}^\nu\nonumber \\ &&-128m_{J/\psi}\varepsilon_{J/\psi}\cdot p_4' p_{_{P2}}^\mu p_{_{P1}}^\nu-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}}g^{\mu\nu} p_1'\cdot p_{_{P2}}-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}}g^{\mu\nu} p_1'\cdot p_{_{P1}}\nonumber \\ &&-128m_{J/\psi}\varepsilon^{\mu}_{J/\psi}p_{_{P1}}^\nu p_1'\cdot p_{_{P2}}-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}}g^{\mu\nu} p_2'\cdot p_{_{P2}}-128m_{J/\psi}\varepsilon^{\mu}_{J/\psi}p_{_{P2}}^\nu p_2'\cdot p_{_{P1}}\nonumber \\ &&-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}}g^{\mu\nu} p_2'\cdot p_{_{P1}}+32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}}g^{\mu\nu} p_4'\cdot p_{_{P2}}+32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}}g^{\mu\nu} p_4'\cdot p_{_{P1}}\nonumber \\ &&+32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_1'g^{\mu\nu} p_{_{P2}}\cdot p_{_{P1}}+32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_2'g^{\mu\nu} p_{_{P2}}\cdot p_{_{P1}}-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_4'g^{\mu\nu} p_{_{P2}}\cdot p_{_{P1}}\nonumber \\ &&-64m_{J/\psi}\varepsilon^{\mu}_{J/\psi}p_4'^\nu p_{_{P2}}\cdot p_{_{P1}}]\nonumber \\ 
&&+E_{\mu\nu\theta}(m_q,u,v)[-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P1}}g^{\mu\nu}p_{_{P2}}^\theta-32m_{J/\psi}\varepsilon_{J/\psi}\cdot p_{_{P2}}g^{\mu\nu}p_{_{P1}}^\theta\nonumber \\ &&+32m_{J/\psi}\varepsilon^{\mu}_{J/\psi}g^{\nu\theta}p_{_{P1}}\cdot p_{_{P2}}-128m_{J/\psi}\varepsilon^{\mu}_{J/\psi}p_{_{P1}}^\nu p_{_{P2}}^\theta]\} \end{eqnarray} \begin{eqnarray} &&E_0(m_q,u,v)=\nonumber \\ &&{1\over i\pi^2}\int d^4k {1\over k^2[(k+p_1)^2-m_Q^2][(k+p_1-p_4-p_5)^2-m_Q^2][(k+p_6)^2-m_q^2](k+p_1+p_2-p_4-p_5)^2}\nonumber \\ &&E_\mu(m_q,u,v)=\nonumber \\ &&{1\over i\pi^2}\int d^4k {k_\mu\over k^2[(k+p_1)^2-m_Q^2][(k+p_1-p_4-p_5)^2-m_Q^2][(k+p_6)^2-m_q^2](k+p_1+p_2-p_4-p_5)^2}\nonumber \\ &&E_{\mu\nu}(m_q,u,v)=\nonumber \\ &&{1\over i\pi^2}\int d^4k {k_\mu k_\nu\over k^2[(k+p_1)^2-m_Q^2][(k+p_1-p_4-p_5)^2-m_Q^2][(k+p_6)^2-m_q^2](k+p_1+p_2-p_4-p_5)^2}\nonumber \\ &&E_{\mu\nu\theta}(m_q,u,v)=\nonumber \\ &&{1\over i\pi^2}\int d^4k {k_\mu k_\nu k_\theta\over k^2[(k+p_1)^2-m_Q^2][(k+p_1-p_4-p_5)^2-m_Q^2][(k+p_6)^2-m_q^2](k+p_1+p_2-p_4-p_5)^2}\nonumber \\ \end{eqnarray} where $p_1'=p_1, p_2'=p_1-p_4-p_5, p_4'=p_6$. \\ \\ \\ \textbf{2. 
The case of $\mathbf{J/\psi\to V P}$} For amplitude $\mathcal{A}^{1a1a}$, we have \begin{eqnarray} \mathcal{A}^{1a1a}&=&C^{1a1a}\widetilde{H}^{1a1a}(m_q,u,v)\Phi_{J/\psi}\Phi_{V}\Phi_{P} \end{eqnarray} with \begin{eqnarray} C^{1a1a}&=&\text{Tr}(T^aT^bT^c)\text{Tr}(T^aT^bT^c)\nonumber \\ \widetilde{H}^{1a1a}(m_q,u,v)&=&-{i\pi^2\over (2\pi)^4}g_s^6({1\over 4N_C})^3{1\over (p_3+p_5)^2[(p_1-p_3-p_5)^2-m_Q^2]}\nonumber \\ &&m_Qm_q\{D_0(m_q,u,v)\varepsilon_{\mu\nu\alpha\beta} [-64\varepsilon_{J/\psi}^{\mu}\varepsilon_{_V}^{\ast\nu}p_{_{P}}^\alpha p_{_{V}}^\beta p_3'\cdot p_{J/\psi}-64\varepsilon_{J/\psi}^{\mu}\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_{V}}^\beta p_3'\cdot p_{_{P}}\nonumber \\ &&+64\varepsilon_{J/\psi}^{\mu}\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_{P}}^\beta p_3'\cdot p_{_{V}}+64\varepsilon_{J/\psi}^{\nu}\varepsilon_{_V}^{\ast\alpha}p_3'^\mu p_{_{P}}^\beta p_{J/\psi}\cdot p_{_{V}}-64\varepsilon_{J/\psi}^{\mu}\varepsilon_{_V}^{\ast\beta}p_{J/\psi}^\alpha p_3'^\nu p_{_{V}}\cdot p_{_{P}}]\nonumber \\ &&+[D_\theta(m_q,u,v)\varepsilon_{\mu\nu\alpha\beta} (32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\nu}p_{_P}^\alpha p_{_V}^\beta p_{J/\psi}^\theta+32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_V}^\beta p_{_P}^\theta+32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_P}^\beta p_{_V}^\theta)\nonumber \\ &&+D^\nu(m_q,u,v)\varepsilon_{\mu\nu\alpha\beta}(32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha} p_{_V}^\beta p_{J/\psi}\cdot p_{_P}-32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha} p_{_P}^\beta p_{J/\psi}\cdot p_{_V}\nonumber \\ &&-32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\beta} p_{J/\psi}^\alpha p_{_V}\cdot p_{_P})]\} \end{eqnarray} where $p_1'=p_4, p_3'=p_1-p_3-p_5$. 
For amplitude $\mathcal{A}^{1a1b}$, we have \begin{eqnarray} \mathcal{A}^{1a1b}&=&C^{1a1b}\widetilde{H}^{1a1b}(m_q,u,v)\Phi_{J/\psi}\Phi_{V}\Phi_{P} \end{eqnarray} with \begin{eqnarray} C^{1a1b}&=&\text{Tr}(T^aT^bT^c)\text{Tr}(T^aT^bT^c)\nonumber \\ \widetilde{H}^{1a1b}(m_q,u,v)&=&-{i\pi^2\over (2\pi)^4}g_s^6({1\over 4N_C})^3{1\over (p_3+p_5)^2[(p_1-p_3-p_5)^2-m_Q^2]}\nonumber \\ &&m_Qm_q\{D_0(m_q,u,v)\varepsilon_{\mu\nu\alpha\beta} [-64\varepsilon_{J/\psi}^{\mu}\varepsilon_{_V}^{\ast\nu}p_{_{P}}^\alpha p_{_{V}}^\beta p_3'\cdot p_{J/\psi}-64\varepsilon_{J/\psi}^{\mu}\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_{V}}^\beta p_3'\cdot p_{_{P}}\nonumber \\ &&+64\varepsilon_{J/\psi}^{\mu}\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_{P}}^\beta p_3'\cdot p_{_{V}}-64\varepsilon_{J/\psi}^{\nu}\varepsilon_{_V}^{\ast\alpha}p_3'^\mu p_{_{V}}^\beta p_{J/\psi}\cdot p_{_{P}}+64\varepsilon_{J/\psi}^{\mu}\varepsilon_{_V}^{\ast\beta}p_{J/\psi}^\alpha p_3'^\nu p_{_{V}}\cdot p_{_{P}}]\nonumber \\ &&+[D_\theta(m_q,u,v)\varepsilon_{\mu\nu\alpha\beta} (96\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\nu}p_{_P}^\alpha p_{_V}^\beta p_{J/\psi}^\theta-32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_V}^\beta p_{_P}^\theta-32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_P}^\beta p_{_V}^\theta)\nonumber \\ &&+D^\nu(m_q,u,v)\varepsilon_{\mu\nu\alpha\beta}(32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha} p_{_V}^\beta p_{J/\psi}\cdot p_{_P}-32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha} p_{_P}^\beta p_{J/\psi}\cdot p_{_V}\nonumber \\ &&+32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\beta} p_{J/\psi}^\alpha p_{_V}\cdot p_{_P})]\} \end{eqnarray} where $p_1'=p_4, p_3'=p_1-p_3-p_5$. 
For amplitude $\mathcal{A}^{1a2a}$, we have \begin{eqnarray} \mathcal{A}^{1a2a}&=&C^{1a2a}\widetilde{H}^{1a2a}(m_q,u,v)\Phi_{J/\psi}\Phi_{V}\Phi_{P} \end{eqnarray} with \begin{eqnarray} C^{1a2a}&=&\text{Tr}(T^aT^bT^c)\text{Tr}(T^bT^aT^c)\nonumber \\ \widetilde{H}^{1a2a}(m_q,u,v)&=&{i\pi^2\over (2\pi)^4}g_s^6({1\over 4N_C})^3{1\over (p_3+p_5)^2[(p_1-p_3-p_5)^2-m_Q^2]}\nonumber \\ &&m_Qm_q\{D_0(m_q,u,v)\varepsilon_{\mu\nu\alpha\beta} [64\varepsilon_{J/\psi}^{\mu}\varepsilon_{_V}^{\ast\nu}p_{_{P}}^\alpha p_{_{V}}^\beta p_3'\cdot p_{J/\psi}+64\varepsilon_{J/\psi}^{\mu}\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_{V}}^\beta p_3'\cdot p_{_{P}}\nonumber \\ &&+64\varepsilon_{J/\psi}^{\mu}\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_{P}}^\beta p_3'\cdot p_{_{V}}+64\varepsilon_{J/\psi}^{\nu}\varepsilon_{_V}^{\ast\alpha}p_3'^\mu p_{_{P}}^\beta p_{J/\psi}\cdot p_{_{V}}-64\varepsilon_{J/\psi}^{\mu}\varepsilon_{_V}^{\ast\beta}p_{J/\psi}^\alpha p_3'^\nu p_{_{V}}\cdot p_{_{P}}]\nonumber \\ &&+[D_\theta(m_q,u,v)\varepsilon_{\mu\nu\alpha\beta} (-32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\nu}p_{_P}^\alpha p_{_V}^\beta p_{J/\psi}^\theta-32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_V}^\beta p_{_P}^\theta+32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_P}^\beta p_{_V}^\theta)\nonumber \\ &&+D^\nu(m_q,u,v)\varepsilon_{\mu\nu\alpha\beta}(-32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha} p_{_V}^\beta p_{J/\psi}\cdot p_{_P}-32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha} p_{_P}^\beta p_{J/\psi}\cdot p_{_V}\nonumber \\ &&-32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\beta} p_{J/\psi}^\alpha p_{_V}\cdot p_{_P})]\} \end{eqnarray} where $p_1'=p_4, p_3'=p_1-p_3-p_5$. 
For amplitude $\mathcal{A}^{1a2b}$, we have \begin{eqnarray} \mathcal{A}^{1a2b}&=&C^{1a2b}\widetilde{H}^{1a2b}(m_q,u,v)\Phi_{J/\psi}\Phi_{V}\Phi_{P} \end{eqnarray} with \begin{eqnarray} C^{1a2b}&=&\text{Tr}(T^aT^bT^c)\text{Tr}(T^bT^aT^c)\nonumber \\ \widetilde{H}^{1a2b}(m_q,u,v)&=&{i\pi^2\over (2\pi)^4}g_s^6({1\over 4N_C})^3{1\over (p_3+p_5)^2[(p_1-p_3-p_5)^2-m_Q^2]}\nonumber \\ &&m_Qm_q\{D_0(m_q,u,v)\varepsilon_{\mu\nu\alpha\beta} [64\varepsilon_{J/\psi}^{\mu}\varepsilon_{_V}^{\ast\nu}p_{_{P}}^\alpha p_{_{V}}^\beta p_3'\cdot p_{J/\psi}-64\varepsilon_{J/\psi}^{\mu}\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_{V}}^\beta p_3'\cdot p_{_{P}}\nonumber \\ &&-64\varepsilon_{J/\psi}^{\mu}\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_{P}}^\beta p_3'\cdot p_{_{V}}-64\varepsilon_{J/\psi}^{\nu}\varepsilon_{_V}^{\ast\alpha}p_3'^\mu p_{_{V}}^\beta p_{J/\psi}\cdot p_{_{P}}+64\varepsilon_{J/\psi}^{\mu}\varepsilon_{_V}^{\ast\beta}p_{J/\psi}^\alpha p_3'^\nu p_{_{V}}\cdot p_{_{P}}]\nonumber \\ &&+[D_\theta(m_q,u,v)\varepsilon_{\mu\nu\alpha\beta} (-64\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\nu}p_{_P}^\alpha p_{_V}^\beta p_{J/\psi}^\theta-32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_V}^\beta p_{_P}^\theta+32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_P}^\beta p_{_V}^\theta)\nonumber \\ &&+D^\nu(m_q,u,v)\varepsilon_{\mu\nu\alpha\beta}(32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha} p_{_V}^\beta p_{J/\psi}\cdot p_{_P}+32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha} p_{_P}^\beta p_{J/\psi}\cdot p_{_V}\nonumber \\ &&+32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\beta} p_{J/\psi}^\alpha p_{_V}\cdot p_{_P})]\} \end{eqnarray} where $p_1'=p_4, p_3'=p_1-p_3-p_5$. 
For amplitude $\mathcal{A}^{1b1a}$, we have \begin{eqnarray} \mathcal{A}^{1b1a}&=&C^{1b1a}\widetilde{H}^{1b1a}(m_q,u,v)\Phi_{J/\psi}\Phi_{V}\Phi_{P} \end{eqnarray} with \begin{eqnarray} C^{1b1a}&=&\text{Tr}(T^aT^bT^c)\text{Tr}(T^bT^aT^c)\nonumber \\ \widetilde{H}^{1b1a}(m_q,u,v)&=&{i\pi^2\over (2\pi)^4}g_s^6({1\over 4N_C})^3{1\over (p_4+p_6)^2[(p_4+p_6-p_2)^2-m_Q^2]}\nonumber \\ &&m_Qm_q\{D_0(m_q,u,v)\varepsilon_{\mu\nu\alpha\beta} [64\varepsilon_{J/\psi}^{\mu}\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_{V}}^\beta p_3'\cdot p_{_P}+64\varepsilon_{J/\psi}^{\mu}\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_{P}}^\beta p_3'\cdot p_{_{V}}\nonumber \\ &&+64\varepsilon_{J/\psi}^{\nu}\varepsilon_{_V}^{\ast\alpha}p_3'^\mu p_{_{P}}^\beta p_{J/\psi}\cdot p_{_{V}}-64\varepsilon_{J/\psi}^{\mu}\varepsilon_{_V}^{\ast\beta}p_3'^\nu p_{J/\psi}^\alpha p_{_V}\cdot p_{_{P}}]\nonumber \\ &&+[D_\theta(m_q,u,v)\varepsilon_{\mu\nu\alpha\beta} (32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\nu}p_{_P}^\alpha p_{_V}^\beta p_{J/\psi}^\theta-32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_V}^\beta p_{_P}^\theta+32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_P}^\beta p_{_V}^\theta)\nonumber \\ &&+D^\nu(m_q,u,v)\varepsilon_{\mu\nu\alpha\beta}(-32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha} p_{_V}^\beta p_{J/\psi}\cdot p_{_P}-32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha} p_{_P}^\beta p_{J/\psi}\cdot p_{_V}\nonumber \\ &&-32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\beta} p_{J/\psi}^\alpha p_{_V}\cdot p_{_P})]\} \end{eqnarray} where $p_1'=-p_5, p_3'=p_4+p_6-p_2$. 
For amplitude $\mathcal{A}^{1b1b}$, we have \begin{eqnarray} \mathcal{A}^{1b1b}&=&C^{1b1b}\widetilde{H}^{1b1b}(m_q,u,v)\Phi_{J/\psi}\Phi_{V}\Phi_{P} \end{eqnarray} with \begin{eqnarray} C^{1b1b}&=&\text{Tr}(T^aT^bT^c)\text{Tr}(T^bT^aT^c)\nonumber \\ \widetilde{H}^{1b1b}(m_q,u,v)&=&{i\pi^2\over (2\pi)^4}g_s^6({1\over 4N_C})^3{1\over (p_4+p_6)^2[(p_4+p_6-p_2)^2-m_Q^2]}\nonumber \\ &&m_Qm_q\{D_0(m_q,u,v)\varepsilon_{\mu\nu\alpha\beta} [-64\varepsilon_{J/\psi}^{\mu}\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_{V}}^\beta p_3'\cdot p_{_P}-64\varepsilon_{J/\psi}^{\mu}\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_{P}}^\beta p_3'\cdot p_{_{V}}\nonumber \\ &&-64\varepsilon_{J/\psi}^{\nu}\varepsilon_{_V}^{\ast\alpha}p_3'^\mu p_{_{V}}^\beta p_{J/\psi}\cdot p_{_{P}}+64\varepsilon_{J/\psi}^{\mu}\varepsilon_{_V}^{\ast\beta}p_3'^\nu p_{J/\psi}^\alpha p_{_V}\cdot p_{_{P}}]\nonumber \\ &&+[D_\theta(m_q,u,v)\varepsilon_{\mu\nu\alpha\beta} (-32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\nu}p_{_P}^\alpha p_{_V}^\beta p_{J/\psi}^\theta-32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_V}^\beta p_{_P}^\theta+32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_P}^\beta p_{_V}^\theta)\nonumber \\ &&+D^\nu(m_q,u,v)\varepsilon_{\mu\nu\alpha\beta}(32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha} p_{_V}^\beta p_{J/\psi}\cdot p_{_P}+32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha} p_{_P}^\beta p_{J/\psi}\cdot p_{_V}\nonumber \\ &&+32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\beta} p_{J/\psi}^\alpha p_{_V}\cdot p_{_P})]\} \end{eqnarray} where $p_1'=-p_5, p_3'=p_4+p_6-p_2$. 
For amplitude $\mathcal{A}^{1b2a}$, we have \begin{eqnarray} \mathcal{A}^{1b2a}&=&C^{1b2a}\widetilde{H}^{1b2a}(m_q,u,v)\Phi_{J/\psi}\Phi_{V}\Phi_{P} \end{eqnarray} with \begin{eqnarray} C^{1b2a}&=&\text{Tr}(T^aT^bT^c)\text{Tr}(T^aT^bT^c)\nonumber \\ \widetilde{H}^{1b2a}(m_q,u,v)&=&-{i\pi^2\over (2\pi)^4}g_s^6({1\over 4N_C})^3{1\over (p_4+p_6)^2[(p_4+p_6-p_2)^2-m_Q^2]}\nonumber \\ &&m_Qm_q\{D_0(m_q,u,v)\varepsilon_{\mu\nu\alpha\beta} [-64\varepsilon_{J/\psi}^{\mu}\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_{V}}^\beta p_3'\cdot p_{_P}+64\varepsilon_{J/\psi}^{\mu}\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_{P}}^\beta p_3'\cdot p_{_{V}}\nonumber \\ &&+64\varepsilon_{J/\psi}^{\nu}\varepsilon_{_V}^{\ast\alpha}p_3'^\mu p_{_{P}}^\beta p_{J/\psi}\cdot p_{_{V}}-64\varepsilon_{J/\psi}^{\mu}\varepsilon_{_V}^{\ast\beta}p_3'^\nu p_{J/\psi}^\alpha p_{_V}\cdot p_{_{P}}]\nonumber \\ &&+[D_\theta(m_q,u,v)\varepsilon_{\mu\nu\alpha\beta} (-32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\nu}p_{_P}^\alpha p_{_V}^\beta p_{J/\psi}^\theta+32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_V}^\beta p_{_P}^\theta+32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_P}^\beta p_{_V}^\theta)\nonumber \\ &&+D^\nu(m_q,u,v)\varepsilon_{\mu\nu\alpha\beta}(32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha} p_{_V}^\beta p_{J/\psi}\cdot p_{_P}-32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha} p_{_P}^\beta p_{J/\psi}\cdot p_{_V}\nonumber \\ &&-32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\beta} p_{J/\psi}^\alpha p_{_V}\cdot p_{_P})]\} \end{eqnarray} where $p_1'=-p_5, p_3'=p_4+p_6-p_2$. 
For amplitude $\mathcal{A}^{1b2b}$, we have \begin{eqnarray} \mathcal{A}^{1b2b}&=&C^{1b2b}\widetilde{H}^{1b2b}(m_q,u,v)\Phi_{J/\psi}\Phi_{V}\Phi_{P} \end{eqnarray} with \begin{eqnarray} C^{1b2b}&=&\text{Tr}(T^aT^bT^c)\text{Tr}(T^aT^bT^c)\nonumber \\ \widetilde{H}^{1b2b}(m_q,u,v)&=&-{i\pi^2\over (2\pi)^4}g_s^6({1\over 4N_C})^3{1\over (p_4+p_6)^2[(p_4+p_6-p_2)^2-m_Q^2]}\nonumber \\ &&m_Qm_q\{D_0(m_q,u,v)\varepsilon_{\mu\nu\alpha\beta} [-64\varepsilon_{J/\psi}^{\mu}\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_{V}}^\beta p_3'\cdot p_{_P}+64\varepsilon_{J/\psi}^{\mu}\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_{P}}^\beta p_3'\cdot p_{_{V}}\nonumber \\ &&-64\varepsilon_{J/\psi}^{\nu}\varepsilon_{_V}^{\ast\alpha}p_3'^\mu p_{_{V}}^\beta p_{J/\psi}\cdot p_{_{P}}+64\varepsilon_{J/\psi}^{\mu}\varepsilon_{_V}^{\ast\beta}p_3'^\nu p_{J/\psi}^\alpha p_{_V}\cdot p_{_{P}}]\nonumber \\ &&+[D_\theta(m_q,u,v)\varepsilon_{\mu\nu\alpha\beta} (32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\nu}p_{_P}^\alpha p_{_V}^\beta p_{J/\psi}^\theta-32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_V}^\beta p_{_P}^\theta-32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_P}^\beta p_{_V}^\theta)\nonumber \\ &&+D^\nu(m_q,u,v)\varepsilon_{\mu\nu\alpha\beta}(32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha} p_{_V}^\beta p_{J/\psi}\cdot p_{_P}-32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha} p_{_P}^\beta p_{J/\psi}\cdot p_{_V}\nonumber \\ &&+32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\beta} p_{J/\psi}^\alpha p_{_V}\cdot p_{_P})]\} \end{eqnarray} where $p_1'=-p_5, p_3'=p_4+p_6-p_2$. 
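All of the Lorentz structures above are contractions of the form $\varepsilon_{\mu\nu\alpha\beta}\,\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\nu}p_{_P}^\alpha p_{_V}^\beta$ and can be evaluated numerically with a rank-4 Levi-Civita tensor. A small sketch (toy four-vectors and the convention $\varepsilon_{0123}=+1$ are illustrative assumptions, not taken from the text):

```python
import numpy as np
from itertools import permutations

# Totally antisymmetric Levi-Civita tensor in 4 dimensions, eps[0,1,2,3] = +1.
eps = np.zeros((4, 4, 4, 4))
for perm in permutations(range(4)):
    sign, p = 1, list(perm)
    for i in range(4):              # parity via inversion count
        for j in range(i + 1, 4):
            if p[i] > p[j]:
                sign = -sign
    eps[perm] = sign

def contract(e1, e2, q1, q2):
    """eps_{mu nu alpha beta} e1^mu e2^nu q1^alpha q2^beta."""
    return np.einsum('mnab,m,n,a,b->', eps, e1, e2, q1, q2)

# Hypothetical polarization and momentum four-vectors (illustration only).
e_psi = np.array([0., 1., 0., 0.])
e_V   = np.array([0., 0., 1., 0.])
p_P   = np.array([1., 0., 0., 0.5])
p_V   = np.array([1., 0., 0., -0.5])
val = contract(e_psi, e_V, p_P, p_V)  # -> -1.0 for these vectors
```

The full antisymmetry of $\varepsilon_{\mu\nu\alpha\beta}$ is what makes the contraction flip sign under exchange of any two arguments, which is the origin of the relative signs between the otherwise identical terms in the amplitudes above.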
For amplitude $\mathcal{A}^{1c1a}$, we have \begin{eqnarray} \mathcal{A}^{1c1a}&=&C^{1c1a}\widetilde{H}^{1c1a}(m_q,u,v)\Phi_{J/\psi}\Phi_{V}\Phi_{P} \end{eqnarray} with \begin{eqnarray} &&C^{1c1a}=\text{Tr}(T^aT^bT^c)\text{Tr}(T^bT^aT^c)\nonumber \\ &&\widetilde{H}^{1c1a}(m_q,u,v)=-{i\pi^2\over (2\pi)^4}g_s^6({1\over 4N_C})^3{1\over (p_4+p_5)^2}\nonumber \\ &&m_Qm_q\{E_0(m_q,u,v)\varepsilon_{\mu\nu\alpha\beta}[-32\varepsilon_{J/\psi}^\mu \varepsilon_{_V}^{\ast\nu}p_{_P}^\alpha p_{_V}^\beta p_2'\cdot p_{J/\psi}-32\varepsilon_{J/\psi}^\mu \varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_V}^\beta p_2'\cdot p_{_P}-32\varepsilon_{J/\psi}^\mu \varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_P}^\beta p_2'\cdot p_{_V}\nonumber \\ &&-32\varepsilon_{J/\psi}^\mu \varepsilon_{_V}^{\ast\nu}p_{_P}^\alpha p_{_V}^\beta p_1'\cdot p_{J/\psi}-32\varepsilon_{J/\psi}^\mu \varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_V}^\beta p_1'\cdot p_{_P}-32\varepsilon_{J/\psi}^\mu \varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_P}^\beta p_1'\cdot p_{_V}\nonumber \\ &&+32\varepsilon_{J/\psi}^\nu \varepsilon_{_V}^{\ast\alpha}p_2'^\mu p_{_V}^\beta p_{J/\psi}\cdot p_{_P} +32\varepsilon_{J/\psi}^\nu \varepsilon_{_V}^{\ast\alpha}p_1'^\mu p_{_V}^\beta p_{J/\psi}\cdot p_{_P}-32\varepsilon_{J/\psi}^\nu \varepsilon_{_V}^{\ast\alpha}p_2'^\mu p_{_P}^\beta p_{J/\psi}\cdot p_{_V}\nonumber \\ &&-32\varepsilon_{J/\psi}^\nu \varepsilon_{_V}^{\ast\alpha}p_1'^\mu p_{_P}^\beta p_{J/\psi}\cdot p_{_V}-32\varepsilon_{J/\psi}^\nu \varepsilon_{_V}^{\ast\beta}p_2'^\mu p_{J/\psi}^\alpha p_{_P}\cdot p_{_V}-32\varepsilon_{J/\psi}^\nu \varepsilon_{_V}^{\ast\beta}p_1'^\mu p_{J/\psi}^\alpha p_{_P}\cdot p_{_V}]\nonumber \\ &&+[E_{1\theta}(m_q,u,v)\varepsilon_{\mu\nu\alpha\beta} (-64\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\nu}p_{_P}^\alpha p_{_V}^\beta p_{J/\psi}^\theta-64\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_V}^\beta 
p_{_P}^\theta-64\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_P}^\beta p_{_V}^\theta)\nonumber \\ &&+E^\nu_1(m_q,u,v)\varepsilon_{\mu\nu\alpha\beta}(-64\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha} p_{_V}^\beta p_{J/\psi}\cdot p_{_P}+64\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha} p_{_P}^\beta p_{J/\psi}\cdot p_{_V}+64\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\beta} p_{J/\psi}^\alpha p_{_V}\cdot p_{_P})]\}\nonumber \\ \end{eqnarray} where $p_1'=p_1, p_2'=p_1-p_4-p_5, p_4'=p_3$. For amplitude $\mathcal{A}^{1c1b}$, we have \begin{eqnarray} \mathcal{A}^{1c1b}&=&C^{1c1b}\widetilde{H}^{1c1b}(m_q,u,v)\Phi_{J/\psi}\Phi_{V}\Phi_{P} \end{eqnarray} with \begin{eqnarray} &&C^{1c1b}=\text{Tr}(T^aT^bT^c)\text{Tr}(T^bT^aT^c)\nonumber \\ &&\widetilde{H}^{1c1b}(m_q,u,v)=-{i\pi^2\over (2\pi)^4}g_s^6({1\over 4N_C})^3{1\over (p_4+p_5)^2}\nonumber \\ &&m_Qm_q\{E_0(m_q,u,v)\varepsilon_{\mu\nu\alpha\beta}[-32\varepsilon_{J/\psi}^\mu \varepsilon_{_V}^{\ast\nu}p_{_P}^\alpha p_{_V}^\beta p_2'\cdot p_{J/\psi}+32\varepsilon_{J/\psi}^\mu \varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_V}^\beta p_2'\cdot p_{_P}+32\varepsilon_{J/\psi}^\mu \varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_P}^\beta p_2'\cdot p_{_V}\nonumber \\ &&-32\varepsilon_{J/\psi}^\mu \varepsilon_{_V}^{\ast\nu}p_{_P}^\alpha p_{_V}^\beta p_1'\cdot p_{J/\psi}+32\varepsilon_{J/\psi}^\mu \varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_V}^\beta p_1'\cdot p_{_P}+32\varepsilon_{J/\psi}^\mu \varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_P}^\beta p_1'\cdot p_{_V}\nonumber \\ &&+32\varepsilon_{J/\psi}^\nu \varepsilon_{_V}^{\ast\alpha}p_2'^\mu p_{_V}^\beta p_{J/\psi}\cdot p_{_P} +32\varepsilon_{J/\psi}^\nu \varepsilon_{_V}^{\ast\alpha}p_1'^\mu p_{_V}^\beta p_{J/\psi}\cdot p_{_P}-32\varepsilon_{J/\psi}^\nu \varepsilon_{_V}^{\ast\alpha}p_2'^\mu p_{_P}^\beta p_{J/\psi}\cdot p_{_V}\nonumber \\ &&-32\varepsilon_{J/\psi}^\nu \varepsilon_{_V}^{\ast\alpha}p_1'^\mu p_{_P}^\beta p_{J/\psi}\cdot 
p_{_V}+32\varepsilon_{J/\psi}^\nu \varepsilon_{_V}^{\ast\beta}p_2'^\mu p_{J/\psi}^\alpha p_{_P}\cdot p_{_V}+32\varepsilon_{J/\psi}^\nu \varepsilon_{_V}^{\ast\beta}p_1'^\mu p_{J/\psi}^\alpha p_{_P}\cdot p_{_V}]\nonumber \\ &&+[E_{1\theta}(m_q,u,v)\varepsilon_{\mu\nu\alpha\beta} (-64\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\nu}p_{_P}^\alpha p_{_V}^\beta p_{J/\psi}^\theta+64\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_V}^\beta p_{_P}^\theta+64\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_P}^\beta p_{_V}^\theta)\nonumber \\ &&+E^\nu_1(m_q,u,v)\varepsilon_{\mu\nu\alpha\beta}(-64\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha} p_{_V}^\beta p_{J/\psi}\cdot p_{_P}+64\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha} p_{_P}^\beta p_{J/\psi}\cdot p_{_V}-64\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\beta} p_{J/\psi}^\alpha p_{_V}\cdot p_{_P})]\}\nonumber \\ \end{eqnarray} where $p_1'=p_1, p_2'=p_1-p_4-p_5, p_4'=p_3$. For amplitude $\mathcal{A}^{1c2a}$, we have \begin{eqnarray} \mathcal{A}^{1c2a}&=&C^{1c2a}\widetilde{H}^{1c2a}(m_q,u,v)\Phi_{J/\psi}\Phi_{V}\Phi_{P} \end{eqnarray} with \begin{eqnarray} &&C^{1c2a}=\text{Tr}(T^aT^bT^c)\text{Tr}(T^aT^bT^c)\nonumber \\ &&\widetilde{H}^{1c2a}(m_q,u,v)={i\pi^2\over (2\pi)^4}g_s^6({1\over 4N_C})^3{1\over (p_4+p_5)^2}\nonumber \\ &&m_Qm_q\{E_0(m_q,u,v)\varepsilon_{\mu\nu\alpha\beta}[32\varepsilon_{J/\psi}^\mu \varepsilon_{_V}^{\ast\nu}p_{_P}^\alpha p_{_V}^\beta p_2'\cdot p_{J/\psi}+32\varepsilon_{J/\psi}^\mu \varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_V}^\beta p_2'\cdot p_{_P}-32\varepsilon_{J/\psi}^\mu \varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_P}^\beta p_2'\cdot p_{_V}\nonumber \\ &&+32\varepsilon_{J/\psi}^\mu \varepsilon_{_V}^{\ast\nu}p_{_P}^\alpha p_{_V}^\beta p_1'\cdot p_{J/\psi}+32\varepsilon_{J/\psi}^\mu \varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_V}^\beta p_1'\cdot p_{_P}-32\varepsilon_{J/\psi}^\mu \varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu 
p_{_P}^\beta p_1'\cdot p_{_V}\nonumber \\ &&-32\varepsilon_{J/\psi}^\nu \varepsilon_{_V}^{\ast\alpha}p_2'^\mu p_{_V}^\beta p_{J/\psi}\cdot p_{_P}-32\varepsilon_{J/\psi}^\nu \varepsilon_{_V}^{\ast\alpha}p_1'^\mu p_{_V}^\beta p_{J/\psi}\cdot p_{_P}-32\varepsilon_{J/\psi}^\nu \varepsilon_{_V}^{\ast\alpha}p_2'^\mu p_{_P}^\beta p_{J/\psi}\cdot p_{_V}\nonumber \\ &&-32\varepsilon_{J/\psi}^\nu \varepsilon_{_V}^{\ast\alpha}p_1'^\mu p_{_P}^\beta p_{J/\psi}\cdot p_{_V}-32\varepsilon_{J/\psi}^\nu \varepsilon_{_V}^{\ast\beta}p_2'^\mu p_{J/\psi}^\alpha p_{_P}\cdot p_{_V}-32\varepsilon_{J/\psi}^\nu \varepsilon_{_V}^{\ast\beta}p_1'^\mu p_{J/\psi}^\alpha p_{_P}\cdot p_{_V}]\nonumber \\ &&+[E_{1\theta}(m_q,u,v)\varepsilon_{\mu\nu\alpha\beta} (64\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\nu}p_{_P}^\alpha p_{_V}^\beta p_{J/\psi}^\theta+64\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_V}^\beta p_{_P}^\theta-64\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_P}^\beta p_{_V}^\theta)\nonumber \\ &&+E^\nu_1(m_q,u,v)\varepsilon_{\mu\nu\alpha\beta}(64\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha} p_{_V}^\beta p_{J/\psi}\cdot p_{_P}+64\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha} p_{_P}^\beta p_{J/\psi}\cdot p_{_V}+64\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\beta} p_{J/\psi}^\alpha p_{_V}\cdot p_{_P})]\}\nonumber \\ \end{eqnarray} where $p_1'=p_1, p_2'=p_1-p_4-p_5, p_4'=p_3$. 
For amplitude $\mathcal{A}^{1c2b}$, we have \begin{eqnarray} \mathcal{A}^{1c2b}&=&C^{1c2b}\widetilde{H}^{1c2b}(m_q,u,v)\Phi_{J/\psi}\Phi_{V}\Phi_{P} \end{eqnarray} with \begin{eqnarray} &&C^{1c2b}=\text{Tr}(T^aT^bT^c)\text{Tr}(T^aT^bT^c)\nonumber \\ &&\widetilde{H}^{1c2b}(m_q,u,v)={i\pi^2\over (2\pi)^4}g_s^6({1\over 4N_C})^3{1\over (p_4+p_5)^2}\nonumber \\ &&m_Qm_q\{E_0(m_q,u,v)\varepsilon_{\mu\nu\alpha\beta}[32\varepsilon_{J/\psi}^\mu \varepsilon_{_V}^{\ast\nu}p_{_P}^\alpha p_{_V}^\beta p_2'\cdot p_{J/\psi}+32\varepsilon_{J/\psi}^\mu \varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_V}^\beta p_2'\cdot p_{_P}-32\varepsilon_{J/\psi}^\mu \varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_P}^\beta p_2'\cdot p_{_V}\nonumber \\ &&+32\varepsilon_{J/\psi}^\mu \varepsilon_{_V}^{\ast\nu}p_{_P}^\alpha p_{_V}^\beta p_1'\cdot p_{J/\psi}+32\varepsilon_{J/\psi}^\mu \varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_V}^\beta p_1'\cdot p_{_P}-32\varepsilon_{J/\psi}^\mu \varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_P}^\beta p_1'\cdot p_{_V}\nonumber \\ &&+32\varepsilon_{J/\psi}^\nu \varepsilon_{_V}^{\ast\alpha}p_2'^\mu p_{_V}^\beta p_{J/\psi}\cdot p_{_P}-32\varepsilon_{J/\psi}^\nu \varepsilon_{_V}^{\ast\alpha}p_1'^\mu p_{_V}^\beta p_{J/\psi}\cdot p_{_P}+32\varepsilon_{J/\psi}^\nu \varepsilon_{_V}^{\ast\alpha}p_2'^\mu p_{_P}^\beta p_{J/\psi}\cdot p_{_V}\nonumber \\ &&+32\varepsilon_{J/\psi}^\nu \varepsilon_{_V}^{\ast\alpha}p_1'^\mu p_{_P}^\beta p_{J/\psi}\cdot p_{_V}+32\varepsilon_{J/\psi}^\nu \varepsilon_{_V}^{\ast\beta}p_2'^\mu p_{J/\psi}^\alpha p_{_P}\cdot p_{_V}+32\varepsilon_{J/\psi}^\nu \varepsilon_{_V}^{\ast\beta}p_1'^\mu p_{J/\psi}^\alpha p_{_P}\cdot p_{_V}]\nonumber \\ &&+[E_{1\theta}(m_q,u,v)\varepsilon_{\mu\nu\alpha\beta} (64\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\nu}p_{_P}^\alpha p_{_V}^\beta p_{J/\psi}^\theta+64\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_V}^\beta 
p_{_P}^\theta-64\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_P}^\beta p_{_V}^\theta)\nonumber \\ &&+E^\nu_1(m_q,u,v)\varepsilon_{\mu\nu\alpha\beta}(-64\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha} p_{_V}^\beta p_{J/\psi}\cdot p_{_P}-64\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha} p_{_P}^\beta p_{J/\psi}\cdot p_{_V}-64\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\beta} p_{J/\psi}^\alpha p_{_V}\cdot p_{_P})]\}\nonumber \\ \end{eqnarray} where $p_1'=p_1, p_2'=p_1-p_4-p_5, p_4'=p_3$. For amplitude $\mathcal{A}^{2a1a}$, we have \begin{eqnarray} \mathcal{A}^{2a1a}&=&C^{2a1a}\widetilde{H}^{2a1a}(m_q,u,v)\Phi_{J/\psi}\Phi_{V}\Phi_{P} \end{eqnarray} with \begin{eqnarray} C^{2a1a}&=&\text{Tr}(T^aT^bT^c)\text{Tr}(T^bT^aT^c)\nonumber \\ \widetilde{H}^{2a1a}(m_q,u,v)&=&{i\pi^2\over (2\pi)^4}g_s^6({1\over 4N_C})^3{1\over (p_3+p_5)^2[(p_1-p_3-p_5)^2-m_Q^2]}\nonumber \\ &&m_Qm_q\{D_0(m_q,u,v)\varepsilon_{\mu\nu\alpha\beta} [-64\varepsilon_{J/\psi}^{\mu}\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_{V}}^\beta p_3'\cdot p_{_P}+64\varepsilon_{J/\psi}^{\mu}\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_{P}}^\beta p_3'\cdot p_{_{V}}\nonumber \\ &&+64\varepsilon_{J/\psi}^{\nu}\varepsilon_{_V}^{\ast\alpha}p_3'^\mu p_{_{P}}^\beta p_{J/\psi}\cdot p_{_{V}}-64\varepsilon_{J/\psi}^{\mu}\varepsilon_{_V}^{\ast\beta}p_3'^\nu p_{J/\psi}^\alpha p_{_P}\cdot p_{_{V}}]\nonumber \\ &&+[D_\theta(m_q,u,v)\varepsilon_{\mu\nu\alpha\beta} (-32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\nu}p_{_P}^\alpha p_{_V}^\beta p_{J/\psi}^\theta+32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_V}^\beta p_{_P}^\theta+32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_P}^\beta p_{_V}^\theta)\nonumber \\ &&+D^\nu(m_q,u,v)\varepsilon_{\mu\nu\alpha\beta}(32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha} p_{_V}^\beta p_{J/\psi}\cdot p_{_P}-32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha} p_{_P}^\beta p_{J/\psi}\cdot 
p_{_V}\nonumber \\ &&-32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\beta} p_{J/\psi}^\alpha p_{_V}\cdot p_{_P})]\} \end{eqnarray} where $p_1'=p_6, p_3'=p_1-p_3-p_5$. For amplitude $\mathcal{A}^{2a1b}$, we have \begin{eqnarray} \mathcal{A}^{2a1b}&=&C^{2a1b}\widetilde{H}^{2a1b}(m_q,u,v)\Phi_{J/\psi}\Phi_{V}\Phi_{P} \end{eqnarray} with \begin{eqnarray} C^{2a1b}&=&\text{Tr}(T^aT^bT^c)\text{Tr}(T^bT^aT^c)\nonumber \\ \widetilde{H}^{2a1b}(m_q,u,v)&=&{i\pi^2\over (2\pi)^4}g_s^6({1\over 4N_C})^3{1\over (p_3+p_5)^2[(p_1-p_3-p_5)^2-m_Q^2]}\nonumber \\ &&m_Qm_q\{D_0(m_q,u,v)\varepsilon_{\mu\nu\alpha\beta} [-64\varepsilon_{J/\psi}^{\mu}\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_{V}}^\beta p_3'\cdot p_{_P}+64\varepsilon_{J/\psi}^{\mu}\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_{P}}^\beta p_3'\cdot p_{_{V}}\nonumber \\ &&-64\varepsilon_{J/\psi}^{\nu}\varepsilon_{_V}^{\ast\alpha}p_3'^\mu p_{_{P}}^\beta p_{J/\psi}\cdot p_{_{V}}+64\varepsilon_{J/\psi}^{\mu}\varepsilon_{_V}^{\ast\beta}p_3'^\nu p_{J/\psi}^\alpha p_{_P}\cdot p_{_{V}}]\nonumber \\ &&+[D_\theta(m_q,u,v)\varepsilon_{\mu\nu\alpha\beta} (32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\nu}p_{_P}^\alpha p_{_V}^\beta p_{J/\psi}^\theta-32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_V}^\beta p_{_P}^\theta-32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_P}^\beta p_{_V}^\theta)\nonumber \\ &&+D^\nu(m_q,u,v)\varepsilon_{\mu\nu\alpha\beta}(32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha} p_{_V}^\beta p_{J/\psi}\cdot p_{_P}-32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha} p_{_P}^\beta p_{J/\psi}\cdot p_{_V}\nonumber \\ &&+32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\beta} p_{J/\psi}^\alpha p_{_V}\cdot p_{_P})]\} \end{eqnarray} where $p_1'=p_6, p_3'=p_1-p_3-p_5$. 
For amplitude $\mathcal{A}^{2a2a}$, we have \begin{eqnarray} \mathcal{A}^{2a2a}&=&C^{2a2a}\widetilde{H}^{2a2a}(m_q,u,v)\Phi_{J/\psi}\Phi_{V}\Phi_{P} \end{eqnarray} with \begin{eqnarray} C^{2a2a}&=&\text{Tr}(T^aT^bT^c)\text{Tr}(T^aT^bT^c)\nonumber \\ \widetilde{H}^{2a2a}(m_q,u,v)&=&-{i\pi^2\over (2\pi)^4}g_s^6({1\over 4N_C})^3{1\over (p_3+p_5)^2[(p_1-p_3-p_5)^2-m_Q^2]}\nonumber \\ &&m_Qm_q\{D_0(m_q,u,v)\varepsilon_{\mu\nu\alpha\beta} [64\varepsilon_{J/\psi}^{\mu}\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_{V}}^\beta p_3'\cdot p_{_P}+64\varepsilon_{J/\psi}^{\mu}\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_{P}}^\beta p_3'\cdot p_{_{V}}\nonumber \\ &&+64\varepsilon_{J/\psi}^{\nu}\varepsilon_{_V}^{\ast\alpha}p_3'^\mu p_{_{P}}^\beta p_{J/\psi}\cdot p_{_{V}}-64\varepsilon_{J/\psi}^{\mu}\varepsilon_{_V}^{\ast\beta}p_3'^\nu p_{J/\psi}^\alpha p_{_P}\cdot p_{_{V}}]\nonumber \\ &&+[D_\theta(m_q,u,v)\varepsilon_{\mu\nu\alpha\beta} (32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\nu}p_{_P}^\alpha p_{_V}^\beta p_{J/\psi}^\theta-32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_V}^\beta p_{_P}^\theta+32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_P}^\beta p_{_V}^\theta)\nonumber \\ &&+D^\nu(m_q,u,v)\varepsilon_{\mu\nu\alpha\beta}(-32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha} p_{_V}^\beta p_{J/\psi}\cdot p_{_P}-32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha} p_{_P}^\beta p_{J/\psi}\cdot p_{_V}\nonumber \\ &&-32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\beta} p_{J/\psi}^\alpha p_{_V}\cdot p_{_P})]\} \end{eqnarray} where $p_1'=p_6, p_3'=p_1-p_3-p_5$. 
For amplitude $\mathcal{A}^{2a2b}$, we have \begin{eqnarray} \mathcal{A}^{2a2b}&=&C^{2a2b}\widetilde{H}^{2a2b}(m_q,u,v)\Phi_{J/\psi}\Phi_{V}\Phi_{P} \end{eqnarray} with \begin{eqnarray} C^{2a2b}&=&\text{Tr}(T^aT^bT^c)\text{Tr}(T^aT^bT^c)\nonumber \\ \widetilde{H}^{2a2b}(m_q,u,v)&=&-{i\pi^2\over (2\pi)^4}g_s^6({1\over 4N_C})^3{1\over (p_3+p_5)^2[(p_1-p_3-p_5)^2-m_Q^2]}\nonumber \\ &&m_Qm_q\{D_0(m_q,u,v)\varepsilon_{\mu\nu\alpha\beta} [-64\varepsilon_{J/\psi}^{\mu}\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_{V}}^\beta p_3'\cdot p_{_P}-64\varepsilon_{J/\psi}^{\mu}\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_{P}}^\beta p_3'\cdot p_{_{V}}\nonumber \\ &&-64\varepsilon_{J/\psi}^{\nu}\varepsilon_{_V}^{\ast\alpha}p_3'^\mu p_{_{P}}^\beta p_{J/\psi}\cdot p_{_{V}}+64\varepsilon_{J/\psi}^{\mu}\varepsilon_{_V}^{\ast\beta}p_3'^\nu p_{J/\psi}^\alpha p_{_P}\cdot p_{_{V}}]\nonumber \\ &&+[D_\theta(m_q,u,v)\varepsilon_{\mu\nu\alpha\beta} (-32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\nu}p_{_P}^\alpha p_{_V}^\beta p_{J/\psi}^\theta-32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_V}^\beta p_{_P}^\theta+32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_P}^\beta p_{_V}^\theta)\nonumber \\ &&+D^\nu(m_q,u,v)\varepsilon_{\mu\nu\alpha\beta}(32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha} p_{_V}^\beta p_{J/\psi}\cdot p_{_P}+32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha} p_{_P}^\beta p_{J/\psi}\cdot p_{_V}\nonumber \\ &&+32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\beta} p_{J/\psi}^\alpha p_{_V}\cdot p_{_P})]\} \end{eqnarray} where $p_1'=p_6, p_3'=p_1-p_3-p_5$. 
For amplitude $\mathcal{A}^{2b1a}$, we have \begin{eqnarray} \mathcal{A}^{2b1a}&=&C^{2b1a}\widetilde{H}^{2b1a}(m_q,u,v)\Phi_{J/\psi}\Phi_{V}\Phi_{P} \end{eqnarray} with \begin{eqnarray} C^{2b1a}&=&\text{Tr}(T^aT^bT^c)\text{Tr}(T^aT^bT^c)\nonumber \\ \widetilde{H}^{2b1a}(m_q,u,v)&=&-{i\pi^2\over (2\pi)^4}g_s^6({1\over 4N_C})^3{1\over (p_4+p_6)^2[(p_4+p_6-p_2)^2-m_Q^2]}\nonumber \\ &&m_Qm_q\{D_0(m_q,u,v)\varepsilon_{\mu\nu\alpha\beta} [64\varepsilon_{J/\psi}^{\mu}\varepsilon_{_V}^{\ast\nu}p_{_{P}}^\alpha p_{_{V}}^\beta p_3'\cdot p_{J/\psi}+64\varepsilon_{J/\psi}^{\mu}\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_{V}}^\beta p_3'\cdot p_{_{P}}\nonumber \\ &&+64\varepsilon_{J/\psi}^{\mu}\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_{P}}^\beta p_3'\cdot p_{_{V}}+64\varepsilon_{J/\psi}^{\nu}\varepsilon_{_V}^{\ast\alpha}p_3'^\mu p_{_{P}}^\beta p_{J/\psi}\cdot p_{_{V}}-64\varepsilon_{J/\psi}^{\mu}\varepsilon_{_V}^{\ast\beta}p_{J/\psi}^\alpha p_3'^\nu p_{_{V}}\cdot p_{_{P}}]\nonumber \\ &&+[D_\theta(m_q,u,v)\varepsilon_{\mu\nu\alpha\beta} (-32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\nu}p_{_P}^\alpha p_{_V}^\beta p_{J/\psi}^\theta-32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_V}^\beta p_{_P}^\theta+32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_P}^\beta p_{_V}^\theta)\nonumber \\ &&+D^\nu(m_q,u,v)\varepsilon_{\mu\nu\alpha\beta}(-32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha} p_{_V}^\beta p_{J/\psi}\cdot p_{_P}-32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha} p_{_P}^\beta p_{J/\psi}\cdot p_{_V}\nonumber \\ &&-32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\beta} p_{J/\psi}^\alpha p_{_V}\cdot p_{_P})]\} \end{eqnarray} where $p_1'=-p_3, p_3'=p_4+p_6-p_2$. 
For amplitude $\mathcal{A}^{2b1b}$, we have \begin{eqnarray} \mathcal{A}^{2b1b}&=&C^{2b1b}\widetilde{H}^{2b1b}(m_q,u,v)\Phi_{J/\psi}\Phi_{V}\Phi_{P} \end{eqnarray} with \begin{eqnarray} C^{2b1b}&=&\text{Tr}(T^aT^bT^c)\text{Tr}(T^aT^bT^c)\nonumber \\ \widetilde{H}^{2b1b}(m_q,u,v)&=&-{i\pi^2\over (2\pi)^4}g_s^6({1\over 4N_C})^3{1\over (p_4+p_6)^2[(p_4+p_6-p_2)^2-m_Q^2]}\nonumber \\ &&m_Qm_q\{D_0(m_q,u,v)\varepsilon_{\mu\nu\alpha\beta} [64\varepsilon_{J/\psi}^{\mu}\varepsilon_{_V}^{\ast\nu}p_{_{P}}^\alpha p_{_{V}}^\beta p_3'\cdot p_{J/\psi}-64\varepsilon_{J/\psi}^{\mu}\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_{V}}^\beta p_3'\cdot p_{_{P}}\nonumber \\ &&-64\varepsilon_{J/\psi}^{\mu}\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_{P}}^\beta p_3'\cdot p_{_{V}}-64\varepsilon_{J/\psi}^{\nu}\varepsilon_{_V}^{\ast\alpha}p_3'^\mu p_{_{P}}^\beta p_{J/\psi}\cdot p_{_{V}}+64\varepsilon_{J/\psi}^{\mu}\varepsilon_{_V}^{\ast\beta}p_{J/\psi}^\alpha p_3'^\nu p_{_{V}}\cdot p_{_{P}}]\nonumber \\ &&+[D_\theta(m_q,u,v)\varepsilon_{\mu\nu\alpha\beta} (-96\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\nu}p_{_P}^\alpha p_{_V}^\beta p_{J/\psi}^\theta-32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_V}^\beta p_{_P}^\theta+32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_P}^\beta p_{_V}^\theta)\nonumber \\ &&+D^\nu(m_q,u,v)\varepsilon_{\mu\nu\alpha\beta}(32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha} p_{_V}^\beta p_{J/\psi}\cdot p_{_P}+32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha} p_{_P}^\beta p_{J/\psi}\cdot p_{_V}\nonumber \\ &&+32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\beta} p_{J/\psi}^\alpha p_{_V}\cdot p_{_P})]\} \end{eqnarray} where $p_1'=-p_3, p_3'=p_4+p_6-p_2$. 
For amplitude $\mathcal{A}^{2b2a}$, we have \begin{eqnarray} \mathcal{A}^{2b2a}&=&C^{2b2a}\widetilde{H}^{2b2a}(m_q,u,v)\Phi_{J/\psi}\Phi_{V}\Phi_{P} \end{eqnarray} with \begin{eqnarray} C^{2b2a}&=&\text{Tr}(T^aT^bT^c)\text{Tr}(T^bT^aT^c)\nonumber \\ \widetilde{H}^{2b2a}(m_q,u,v)&=&{i\pi^2\over (2\pi)^4}g_s^6({1\over 4N_C})^3{1\over (p_4+p_6)^2[(p_4+p_6-p_2)^2-m_Q^2]}\nonumber \\ &&m_Qm_q\{D_0(m_q,u,v)\varepsilon_{\mu\nu\alpha\beta} [-64\varepsilon_{J/\psi}^{\mu}\varepsilon_{_V}^{\ast\nu}p_{_{P}}^\alpha p_{_{V}}^\beta p_3'\cdot p_{J/\psi}-64\varepsilon_{J/\psi}^{\mu}\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_{V}}^\beta p_3'\cdot p_{_{P}}\nonumber \\ &&+64\varepsilon_{J/\psi}^{\mu}\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_{P}}^\beta p_3'\cdot p_{_{V}}+64\varepsilon_{J/\psi}^{\nu}\varepsilon_{_V}^{\ast\alpha}p_3'^\mu p_{_{P}}^\beta p_{J/\psi}\cdot p_{_{V}}-64\varepsilon_{J/\psi}^{\mu}\varepsilon_{_V}^{\ast\beta}p_{J/\psi}^\alpha p_3'^\nu p_{_{V}}\cdot p_{_{P}}]\nonumber \\ &&+[D_\theta(m_q,u,v)\varepsilon_{\mu\nu\alpha\beta} (32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\nu}p_{_P}^\alpha p_{_V}^\beta p_{J/\psi}^\theta+32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_V}^\beta p_{_P}^\theta+32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_P}^\beta p_{_V}^\theta)\nonumber \\ &&+D^\nu(m_q,u,v)\varepsilon_{\mu\nu\alpha\beta}(32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha} p_{_V}^\beta p_{J/\psi}\cdot p_{_P}-32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha} p_{_P}^\beta p_{J/\psi}\cdot p_{_V}\nonumber \\ &&-32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\beta} p_{J/\psi}^\alpha p_{_V}\cdot p_{_P})]\} \end{eqnarray} where $p_1'=-p_3, p_3'=p_4+p_6-p_2$. 
For amplitude $\mathcal{A}^{2b2b}$, we have \begin{eqnarray} \mathcal{A}^{2b2b}&=&C^{2b2b}\widetilde{H}^{2b2b}(m_q,u,v)\Phi_{J/\psi}\Phi_{V}\Phi_{P} \end{eqnarray} with \begin{eqnarray} C^{2b2b}&=&\text{Tr}(T^aT^bT^c)\text{Tr}(T^bT^aT^c)\nonumber \\ \widetilde{H}^{2b2b}(m_q,u,v)&=&{i\pi^2\over (2\pi)^4}g_s^6({1\over 4N_C})^3{1\over (p_4+p_6)^2[(p_4+p_6-p_2)^2-m_Q^2]}\nonumber \\ &&m_Qm_q\{D_0(m_q,u,v)\varepsilon_{\mu\nu\alpha\beta} [-64\varepsilon_{J/\psi}^{\mu}\varepsilon_{_V}^{\ast\nu}p_{_{P}}^\alpha p_{_{V}}^\beta p_3'\cdot p_{J/\psi}-64\varepsilon_{J/\psi}^{\mu}\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_{V}}^\beta p_3'\cdot p_{_{P}}\nonumber \\ &&+64\varepsilon_{J/\psi}^{\mu}\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_{P}}^\beta p_3'\cdot p_{_{V}}-64\varepsilon_{J/\psi}^{\nu}\varepsilon_{_V}^{\ast\alpha}p_3'^\mu p_{_{P}}^\beta p_{J/\psi}\cdot p_{_{V}}+64\varepsilon_{J/\psi}^{\mu}\varepsilon_{_V}^{\ast\beta}p_{J/\psi}^\alpha p_3'^\nu p_{_{V}}\cdot p_{_{P}}]\nonumber \\ &&+[D_\theta(m_q,u,v)\varepsilon_{\mu\nu\alpha\beta} (96\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\nu}p_{_P}^\alpha p_{_V}^\beta p_{J/\psi}^\theta-32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_V}^\beta p_{_P}^\theta-32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_P}^\beta p_{_V}^\theta)\nonumber \\ &&+D^\nu(m_q,u,v)\varepsilon_{\mu\nu\alpha\beta}(32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha} p_{_V}^\beta p_{J/\psi}\cdot p_{_P}-32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha} p_{_P}^\beta p_{J/\psi}\cdot p_{_V}\nonumber \\ &&+32\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\beta} p_{J/\psi}^\alpha p_{_V}\cdot p_{_P})]\} \end{eqnarray} where $p_1'=-p_3, p_3'=p_4+p_6-p_2$. 
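The prefactors $1/\{(p_3+p_5)^2[(p_1-p_3-p_5)^2-m_Q^2]\}$ and $1/(p_4+p_6)^2[(p_4+p_6-p_2)^2-m_Q^2]$ are products of gluon and heavy-quark propagator denominators built from Minkowski squares. A sketch of their evaluation with metric signature $(+,-,-,-)$ (the momenta and $m_Q$ below are assumed toy values, not the paper's kinematics):

```python
import numpy as np

g = np.diag([1., -1., -1., -1.])  # Minkowski metric, signature (+,-,-,-)

def mdot(p, q):
    """Minkowski product p . q."""
    return p @ g @ q

def msq(p):
    """Minkowski square p^2."""
    return mdot(p, p)

# Hypothetical momenta (illustration only).
p1  = np.array([3.1, 0.0, 0.0, 0.0])   # initial state at rest
p3  = np.array([1.0, 0.0, 0.3, 0.4])
p5  = np.array([0.8, 0.1, -0.2, 0.0])
m_Q = 1.5                               # assumed heavy-quark mass

# Denominator structure of the hard kernel for the 1a/2a topologies:
denom = msq(p3 + p5) * (msq(p1 - p3 - p5) - m_Q**2)
```

The gluon factor $(p_3+p_5)^2$ is the virtuality of the exchanged gluon, while $(p_1-p_3-p_5)^2-m_Q^2$ measures how far the internal charm-quark line is off shell.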
For amplitude $\mathcal{A}^{2c1a}$, we have \begin{eqnarray} \mathcal{A}^{2c1a}&=&C^{2c1a}\widetilde{H}^{2c1a}(m_q,u,v)\Phi_{J/\psi}\Phi_{V}\Phi_{P} \end{eqnarray} with \begin{eqnarray} &&C^{2c1a}=\text{Tr}(T^aT^bT^c)\text{Tr}(T^aT^bT^c)\nonumber \\ &&\widetilde{H}^{2c1a}(m_q,u,v)={i\pi^2\over (2\pi)^4}g_s^6({1\over 4N_C})^3{1\over (p_4+p_5)^2}\nonumber \\ &&m_Qm_q\{E_0(m_q,u,v)\varepsilon_{\mu\nu\alpha\beta}[-32\varepsilon_{J/\psi}^\mu \varepsilon_{_V}^{\ast\nu}p_{_P}^\alpha p_{_V}^\beta p_2'\cdot p_{J/\psi}-32\varepsilon_{J/\psi}^\mu \varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_V}^\beta p_2'\cdot p_{_P}-32\varepsilon_{J/\psi}^\mu \varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_P}^\beta p_2'\cdot p_{_V}\nonumber \\ &&-32\varepsilon_{J/\psi}^\mu \varepsilon_{_V}^{\ast\nu}p_{_P}^\alpha p_{_V}^\beta p_1'\cdot p_{J/\psi}-32\varepsilon_{J/\psi}^\mu \varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_V}^\beta p_1'\cdot p_{_P}-32\varepsilon_{J/\psi}^\mu \varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_P}^\beta p_1'\cdot p_{_V}\nonumber \\ &&+32\varepsilon_{J/\psi}^\nu \varepsilon_{_V}^{\ast\alpha}p_2'^\mu p_{_V}^\beta p_{J/\psi}\cdot p_{_P} +32\varepsilon_{J/\psi}^\nu \varepsilon_{_V}^{\ast\alpha}p_1'^\mu p_{_V}^\beta p_{J/\psi}\cdot p_{_P}-32\varepsilon_{J/\psi}^\nu \varepsilon_{_V}^{\ast\alpha}p_2'^\mu p_{_P}^\beta p_{J/\psi}\cdot p_{_V}\nonumber \\ &&-32\varepsilon_{J/\psi}^\nu \varepsilon_{_V}^{\ast\alpha}p_1'^\mu p_{_P}^\beta p_{J/\psi}\cdot p_{_V}-32\varepsilon_{J/\psi}^\nu \varepsilon_{_V}^{\ast\beta}p_2'^\mu p_{J/\psi}^\alpha p_{_P}\cdot p_{_V}-32\varepsilon_{J/\psi}^\nu \varepsilon_{_V}^{\ast\beta}p_1'^\mu p_{J/\psi}^\alpha p_{_P}\cdot p_{_V}]\nonumber \\ &&+[E_{1\theta}(m_q,u,v)\varepsilon_{\mu\nu\alpha\beta} (-64\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\nu}p_{_P}^\alpha p_{_V}^\beta p_{J/\psi}^\theta-64\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_V}^\beta 
p_{_P}^\theta-64\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_P}^\beta p_{_V}^\theta)\nonumber \\ &&+E^\nu_1(m_q,u,v)\varepsilon_{\mu\nu\alpha\beta}(-64\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha} p_{_V}^\beta p_{J/\psi}\cdot p_{_P}+64\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha} p_{_P}^\beta p_{J/\psi}\cdot p_{_V}+64\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\beta} p_{J/\psi}^\alpha p_{_V}\cdot p_{_P})]\}\nonumber \\ \end{eqnarray} where $p_1'=p_1, p_2'=p_1-p_4-p_5, p_4'=p_6$. For amplitude $\mathcal{A}^{2c1b}$, we have \begin{eqnarray} \mathcal{A}^{2c1b}&=&C^{2c1b}\widetilde{H}^{2c1b}(m_q,u,v)\Phi_{J/\psi}\Phi_{V}\Phi_{P} \end{eqnarray} with \begin{eqnarray} &&C^{2c1b}=\text{Tr}(T^aT^bT^c)\text{Tr}(T^aT^bT^c)\nonumber \\ &&\widetilde{H}^{2c1b}(m_q,u,v)={i\pi^2\over (2\pi)^4}g_s^6({1\over 4N_C})^3{1\over (p_4+p_5)^2}\nonumber \\ &&m_Qm_q\{E_0(m_q,u,v)\varepsilon_{\mu\nu\alpha\beta}[-32\varepsilon_{J/\psi}^\mu \varepsilon_{_V}^{\ast\nu}p_{_P}^\alpha p_{_V}^\beta p_2'\cdot p_{J/\psi}+32\varepsilon_{J/\psi}^\mu \varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_V}^\beta p_2'\cdot p_{_P}+32\varepsilon_{J/\psi}^\mu \varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_P}^\beta p_2'\cdot p_{_V}\nonumber \\ &&-32\varepsilon_{J/\psi}^\mu \varepsilon_{_V}^{\ast\nu}p_{_P}^\alpha p_{_V}^\beta p_1'\cdot p_{J/\psi}+32\varepsilon_{J/\psi}^\mu \varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_V}^\beta p_1'\cdot p_{_P}+32\varepsilon_{J/\psi}^\mu \varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_P}^\beta p_1'\cdot p_{_V}\nonumber \\ &&+32\varepsilon_{J/\psi}^\nu \varepsilon_{_V}^{\ast\alpha}p_2'^\mu p_{_V}^\beta p_{J/\psi}\cdot p_{_P} +32\varepsilon_{J/\psi}^\nu \varepsilon_{_V}^{\ast\alpha}p_1'^\mu p_{_V}^\beta p_{J/\psi}\cdot p_{_P}-32\varepsilon_{J/\psi}^\nu \varepsilon_{_V}^{\ast\alpha}p_2'^\mu p_{_P}^\beta p_{J/\psi}\cdot p_{_V}\nonumber \\ &&-32\varepsilon_{J/\psi}^\nu \varepsilon_{_V}^{\ast\alpha}p_1'^\mu p_{_P}^\beta p_{J/\psi}\cdot 
p_{_V}+32\varepsilon_{J/\psi}^\nu \varepsilon_{_V}^{\ast\beta}p_2'^\mu p_{J/\psi}^\alpha p_{_P}\cdot p_{_V}+32\varepsilon_{J/\psi}^\nu \varepsilon_{_V}^{\ast\beta}p_1'^\mu p_{J/\psi}^\alpha p_{_P}\cdot p_{_V}]\nonumber \\ &&+[E_{1\theta}(m_q,u,v)\varepsilon_{\mu\nu\alpha\beta} (-64\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\nu}p_{_P}^\alpha p_{_V}^\beta p_{J/\psi}^\theta+64\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_V}^\beta p_{_P}^\theta+64\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_P}^\beta p_{_V}^\theta)\nonumber \\ &&+E^\nu_1(m_q,u,v)\varepsilon_{\mu\nu\alpha\beta}(-64\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha} p_{_V}^\beta p_{J/\psi}\cdot p_{_P}+64\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha} p_{_P}^\beta p_{J/\psi}\cdot p_{_V}-64\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\beta} p_{J/\psi}^\alpha p_{_V}\cdot p_{_P})]\}\nonumber \\ \end{eqnarray} where $p_1'=p_1, p_2'=p_1-p_4-p_5, p_4'=p_6$. For amplitude $\mathcal{A}^{2c2a}$, we have \begin{eqnarray} \mathcal{A}^{2c2a}&=&C^{2c2a}\widetilde{H}^{2c2a}(m_q,u,v)\Phi_{J/\psi}\Phi_{V}\Phi_{P} \end{eqnarray} with \begin{eqnarray} &&C^{2c2a}=\text{Tr}(T^aT^bT^c)\text{Tr}(T^bT^aT^c)\nonumber \\ &&\widetilde{H}^{2c2a}(m_q,u,v)=-{i\pi^2\over (2\pi)^4}g_s^6({1\over 4N_C})^3{1\over (p_4+p_5)^2}\nonumber \\ &&m_Qm_q\{E_0(m_q,u,v)\varepsilon_{\mu\nu\alpha\beta}[32\varepsilon_{J/\psi}^\mu \varepsilon_{_V}^{\ast\nu}p_{_P}^\alpha p_{_V}^\beta p_2'\cdot p_{J/\psi}+32\varepsilon_{J/\psi}^\mu \varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_V}^\beta p_2'\cdot p_{_P}-32\varepsilon_{J/\psi}^\mu \varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_P}^\beta p_2'\cdot p_{_V}\nonumber \\ &&+32\varepsilon_{J/\psi}^\mu \varepsilon_{_V}^{\ast\nu}p_{_P}^\alpha p_{_V}^\beta p_1'\cdot p_{J/\psi}+32\varepsilon_{J/\psi}^\mu \varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_V}^\beta p_1'\cdot p_{_P}-32\varepsilon_{J/\psi}^\mu \varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu 
p_{_P}^\beta p_1'\cdot p_{_V}\nonumber \\ &&-32\varepsilon_{J/\psi}^\nu \varepsilon_{_V}^{\ast\alpha}p_2'^\mu p_{_V}^\beta p_{J/\psi}\cdot p_{_P}-32\varepsilon_{J/\psi}^\nu \varepsilon_{_V}^{\ast\alpha}p_1'^\mu p_{_V}^\beta p_{J/\psi}\cdot p_{_P}-32\varepsilon_{J/\psi}^\nu \varepsilon_{_V}^{\ast\alpha}p_2'^\mu p_{_P}^\beta p_{J/\psi}\cdot p_{_V}\nonumber \\ &&-32\varepsilon_{J/\psi}^\nu \varepsilon_{_V}^{\ast\alpha}p_1'^\mu p_{_P}^\beta p_{J/\psi}\cdot p_{_V}-32\varepsilon_{J/\psi}^\nu \varepsilon_{_V}^{\ast\beta}p_2'^\mu p_{J/\psi}^\alpha p_{_P}\cdot p_{_V}-32\varepsilon_{J/\psi}^\nu \varepsilon_{_V}^{\ast\beta}p_1'^\mu p_{J/\psi}^\alpha p_{_P}\cdot p_{_V}]\nonumber \\ &&+[E_{1\theta}(m_q,u,v)\varepsilon_{\mu\nu\alpha\beta} (64\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\nu}p_{_P}^\alpha p_{_V}^\beta p_{J/\psi}^\theta+64\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_V}^\beta p_{_P}^\theta-64\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_P}^\beta p_{_V}^\theta)\nonumber \\ &&+E^\nu_1(m_q,u,v)\varepsilon_{\mu\nu\alpha\beta}(64\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha} p_{_V}^\beta p_{J/\psi}\cdot p_{_P}+64\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha} p_{_P}^\beta p_{J/\psi}\cdot p_{_V}+64\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\beta} p_{J/\psi}^\alpha p_{_V}\cdot p_{_P})]\}\nonumber \\ \end{eqnarray} where $p_1'=p_1, p_2'=p_1-p_4-p_5, p_4'=p_6$. 
For amplitude $\mathcal{A}^{2c2b}$, we have \begin{eqnarray} \mathcal{A}^{2c2b}&=&C^{2c2b}\widetilde{H}^{2c2b}(m_q,u,v)\Phi_{J/\psi}\Phi_{V}\Phi_{P} \end{eqnarray} with \begin{eqnarray} &&C^{2c2b}=\text{Tr}(T^aT^bT^c)\text{Tr}(T^bT^aT^c)\nonumber \\ &&\widetilde{H}^{2c2b}(m_q,u,v)=-{i\pi^2\over (2\pi)^4}g_s^6({1\over 4N_C})^3{1\over (p_4+p_5)^2}\nonumber \\ &&m_Qm_q\{E_0(m_q,u,v)\varepsilon_{\mu\nu\alpha\beta}[32\varepsilon_{J/\psi}^\mu \varepsilon_{_V}^{\ast\nu}p_{_P}^\alpha p_{_V}^\beta p_2'\cdot p_{J/\psi}+32\varepsilon_{J/\psi}^\mu \varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_V}^\beta p_2'\cdot p_{_P}-32\varepsilon_{J/\psi}^\mu \varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_P}^\beta p_2'\cdot p_{_V}\nonumber \\ &&+32\varepsilon_{J/\psi}^\mu \varepsilon_{_V}^{\ast\nu}p_{_P}^\alpha p_{_V}^\beta p_1'\cdot p_{J/\psi}+32\varepsilon_{J/\psi}^\mu \varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_V}^\beta p_1'\cdot p_{_P}-32\varepsilon_{J/\psi}^\mu \varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_P}^\beta p_1'\cdot p_{_V}\nonumber \\ &&+32\varepsilon_{J/\psi}^\nu \varepsilon_{_V}^{\ast\alpha}p_2'^\mu p_{_V}^\beta p_{J/\psi}\cdot p_{_P}+32\varepsilon_{J/\psi}^\nu \varepsilon_{_V}^{\ast\alpha}p_1'^\mu p_{_V}^\beta p_{J/\psi}\cdot p_{_P}+32\varepsilon_{J/\psi}^\nu \varepsilon_{_V}^{\ast\alpha}p_2'^\mu p_{_P}^\beta p_{J/\psi}\cdot p_{_V}\nonumber \\ &&+32\varepsilon_{J/\psi}^\nu \varepsilon_{_V}^{\ast\alpha}p_1'^\mu p_{_P}^\beta p_{J/\psi}\cdot p_{_V}+32\varepsilon_{J/\psi}^\nu \varepsilon_{_V}^{\ast\beta}p_2'^\mu p_{J/\psi}^\alpha p_{_P}\cdot p_{_V}+32\varepsilon_{J/\psi}^\nu \varepsilon_{_V}^{\ast\beta}p_1'^\mu p_{J/\psi}^\alpha p_{_P}\cdot p_{_V}]\nonumber \\ &&+[E_{1\theta}(m_q,u,v)\varepsilon_{\mu\nu\alpha\beta} (64\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\nu}p_{_P}^\alpha p_{_V}^\beta p_{J/\psi}^\theta+64\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_V}^\beta 
p_{_P}^\theta-64\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha}p_{J/\psi}^\nu p_{_P}^\beta p_{_V}^\theta)\nonumber \\ &&+E^\nu_1(m_q,u,v)\varepsilon_{\mu\nu\alpha\beta}(-64\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha} p_{_V}^\beta p_{J/\psi}\cdot p_{_P}-64\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\alpha} p_{_P}^\beta p_{J/\psi}\cdot p_{_V}-64\varepsilon_{J/\psi}^\mu\varepsilon_{_V}^{\ast\beta} p_{J/\psi}^\alpha p_{_V}\cdot p_{_P})]\}\nonumber \\ \end{eqnarray} where $p_1'=p_1, p_2'=p_1-p_4-p_5, p_4'=p_6$.
\section{Introduction} The nature of the causative agent that makes some quasars radio loud (RLQs) has challenged astrophysicists for more than 50 years. It became clear early on that the optical/ultraviolet (UV) spectra of RLQs and radio quiet quasars (RQs) are very similar \citep{ste91}\footnote{There are notable small subclasses of objects that are distinct. Some RLQs have relativistic jets that propagate close to the line of sight (blazars) and the Doppler enhanced power law continuum can be significant. There are also rare objects with very broad ultraviolet absorption lines; these are almost exclusively RQs. However, in both classes these effects obfuscate a background thermal component that is very similar to other quasars.}. Attempts to look for subtle differences involved statistical studies of optical and UV emission line strengths and widths \citep{bor92,bor02,cor94,cor96,bro94}. These emission regions are far from the central engine, $\sim 10^{3}$--$10^{4}$ times larger than the central black hole radius, so it is not clear what they tell us as a second order indicator of conditions in the jet launching region \citep{gue13}. Are they related to the fueling mechanism for radio loudness, the ionization continuum or jet propagation? Consequently, this research path has provided very little understanding of the jet launching mechanism. Seemingly more relevant to the physics of jet launching, the extreme ultraviolet (EUV) continuum, $\lambda < 1100$ \AA\,, is created orders of magnitude closer to the central engine and RLQs display a significant EUV continuum deficit relative to RQs \citep{tel02}, \textbf{T02} hereafter. \par In the following, evidence is presented that connects the EUV deficit to magnetically arrested accretion (MAA) in the innermost accretion flow of RLQs. 
The motivation for exploring this interpretation is that it not only explains the second order effect of an EUV deficit, but it also provides a mechanism for the first order difference between RLQs and RQs; namely arresting the flow with large scale magnetic flux is a natural way to launch the jets responsible for the radio emission. This argument is laid out as follows. Section 2 considers the EUV emission in the context of the standard interpretation of a quasar as emission from an optically thick thermal gas that accretes onto a black hole. Section 3 reviews the notion of MAA. Based on the assumption that the EUV deficit is a consequence of thermal gas being displaced by islands of large scale magnetic flux, the distribution of said islands is determined from both numerical and theoretical models of accretion flows. \section{The Thermal Interpretation of Quasar Spectra} It was convincingly demonstrated in \citet{lyn71,sha73,nov73} that the intense blue/UV light associated with the quasar phenomenon was likely the optically thick thermal emission from viscous dissipation of accreting gas onto a supermassive black hole. The connection between these accretion models and observation was made in \citet{mal83,szu96} where quasar spectra in the optical to far UV were approximated by accretion disk spectra. \begin{figure} \includegraphics[width=170 mm, angle= 0]{f1.eps} \caption{The quasar composite continuum accretion disk SED. The black lines represent the common UV/optical/IR continuum. The blue and red plots are the EUV composite spectra for RQs and RLQs from \textbf{T02}, respectively.} \end{figure} Our understanding of accretion disks around black holes is far from complete. Thus, this effort strives to reach conclusions that do not depend on a particular accretion model. 
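The accretion disk spectra invoked above can be illustrated with a simple multicolor blackbody sum. The sketch below assumes a Newtonian $T \propto r^{-3/4}$ temperature profile and fiducial parameters ($T_{\rm in}=10^{5}$ K at $r_{\rm in}=10^{14}$ cm, out to $100\,r_{\rm in}$) that are purely illustrative, not fits to any quasar composite:

```python
import math

H_PLANCK, K_BOLTZ, C_LIGHT = 6.626e-27, 1.381e-16, 2.998e10  # cgs constants

def planck_nu(nu, temp):
    """Blackbody intensity B_nu(T) in cgs units."""
    x = H_PLANCK * nu / (K_BOLTZ * temp)
    if x > 700.0:          # avoid overflow deep in the Wien tail
        return 0.0
    return (2.0 * H_PLANCK * nu**3 / C_LIGHT**2) / math.expm1(x)

def disk_luminosity_nu(nu, t_in, r_in, r_out, n_rings=400):
    """Multicolor-blackbody disk: sum pi*B_nu over log-spaced annuli
    with T ~ r^(-3/4), counting both disk faces."""
    lum, lg_in, lg_out = 0.0, math.log(r_in), math.log(r_out)
    for i in range(n_rings):
        r = math.exp(lg_in + (i + 0.5) * (lg_out - lg_in) / n_rings)
        dr = r * (lg_out - lg_in) / n_rings
        temp = t_in * (r / r_in) ** -0.75
        lum += 2.0 * (2.0 * math.pi * r * dr) * math.pi * planck_nu(nu, temp)
    return lum

# Assumed fiducial disk: T_in = 1e5 K at r_in = 1e14 cm, extending to 100 r_in.
freqs = [10 ** (13 + 3.5 * i / 60) for i in range(61)]
sed = [nu * disk_luminosity_nu(nu, 1e5, 1e14, 1e16) for nu in freqs]
peak = freqs[sed.index(max(sed))]
print(f"nu L_nu peaks near {peak:.2e} Hz (far-UV/EUV for these parameters)")
```

For these assumed parameters the summed spectrum rises through the optical, peaks in the far UV, and falls exponentially beyond it, which is the qualitative shape of the composite SED discussed below.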
Determination of the EUV spectrum beyond the far UV turnover had to await space-based observations of intermediate redshift quasars since ground-based observations of high redshift objects, in which the EUV is redshifted into the optical band, are unsuitable due to contamination by the Ly$\alpha$ forest \citep{zhe97}. \par The EUV deficit in RLQs was originally found in \citet{zhe97} and confirmed with a much larger sample of spectra in \textbf{T02}. Figure 1 shows the composite RLQ (red) and RQ (blue) EUV spectra, 1100 \AA\, to 300 \AA\,, from Hubble Space Telescope (HST) observations of 332 spectra of 184 objects with redshift, $z>0.33$ \textbf{T02}\footnote{The newer HST spectra in \citet{ste14} have a span $<1/2$ of the G160L spectra (that was commonly used in \textbf{T02} to bridge the far UV and EUV) and are not wide enough to use in Figure 2. The small number (8) of narrow span RLQ spectra entirely within the EUV, above the noisy spectral break at $\approx 550 \AA$ of Figure 1 ($550\,\AA < \lambda < 1100\,\AA $), renders statistical comparison to RQs insignificant. The FUSE composite in \citet{sco04} has only two RLQs with coverage below 700 \AA\,.}. The quasar accretion disk composite in Figure 1 is based on the \citet{lao97} composite, but it is updated with the optical and IR composite quasar estimates in \citet{dav11}. They note the important point that the IR (dust) local maximum should be neglected in estimates of the accretion flow SED because it is likely to be the result of accretion disk emission that is reprocessed on larger scales. The UV composite is updated based on the data and discussion in \textbf{T02}. The small difference in the UV continuum between the RLQ and RQ composites in \citet{lao97} does not exist (see Figure 10 of \textbf{T02}) in the larger HST sample. The common continuum at frequencies below the far UV for the accretion disk composite of RQs and RLQs is represented by a black piecewise power-law fit. 
The EUV is the data of interest, so it is plotted explicitly as opposed to the piecewise power-law estimates elsewhere. The EUV data is normalized to 0 at 1100 \AA\, as in \textbf{T02}. The SEDs above $6\times 10^{15}$ Hz show a spectral break, but they are very noisy and are not considered reliable in this region. At frequencies above $10^{16}$ Hz, where the \textbf{T02} data ends, the SED is very uncertain and is indicated by faint dotted lines. These extend to the soft X-ray values (relative to the peak of the SED) from \citet{lao97}. Since the X-ray luminosity is not considered optically thick thermal emission from the accretion flow and its value relative to the EUV is quite uncertain, it will not be plotted or considered in detail in the following. \par It should be noted that both numerical and analytic models of optically thick accretion flows contain the following elements: the effective temperature increases as the radius decreases and the luminosity from each annular ring reaches a maximum near the black hole and decreases inward of this, i.e., the luminosity of the accreting gas fades before being swallowed by the black hole \citep{zhu12}. Thus, we expect that the maximum of the SED is not representative of the maximum temperature of the accretion flow, but there are higher temperature contributions to the SED beyond the far UV turnover from optically thick thermal gas. Hence, the rapidly falling SED in the EUV band is the electromagnetic signal of the innermost optically thick region. Consider this in the context of the broadband composite of Figure 1. The continuum of the thermal component (frequencies below $6\times 10^{15}$ Hz) of RLQs and RQs is indistinguishable except for emission from the innermost accreting optically thick gas. 
Thus, the difference in the EUV emission between RLQs and RQs likely arises from suppressed emission in the innermost region of an accretion flow in RLQs that is otherwise similar to that found in RQs. \begin{figure} \includegraphics[width=170 mm, angle= 0]{f2.eps} \caption{A scatter plot of the estimated jet power and the EUV spectral index in frequency space (a value of 1 is a flat SED).} \end{figure} \section{The EUV Deficit and Magnetically Arrested Accretion} The viscous dissipation that heats the plasma in accretion flows (and therefore the source of modified black body radiation) is produced as a consequence of the magneto-rotational instability (MRI) in 3-D numerical simulations \citep{pen10,dev03}. The discussion of the last section raises the question: what physical process in the innermost accretion flow can suppress the MRI in RLQs? The answer might be MAA, \citet{igu03, igu08}, or its variants MCAFs (magnetically choked accretion flows), \citet{mck12} and MADs (magnetically arrested disks), \citet{tch11,tch12}. The accretion flow in these simulations is perforated by large scale magnetic flux tubes, magnetic islands. The islands of magnetic flux that arrest the accretion flow also suppress the MRI induced dissipation in these regions. The magnetic flux tubes torque the plasma and enhance the overall mass accretion rate. The angular momentum is converted to electromagnetic form and removed vertically from the accretion flow as a jet \citep{igu08}. Without loss of generality, consider an annular ring in the \citet{sha73} accretion disk, $r_{1}<r<r_{2}$. 
Angular momentum is removed by viscous stress in the fluid element at a rate \begin{equation} T_{\phi\; ;\nu}^{\,\nu} =0 \,\Longrightarrow \, \dot{L} =\dot{m}(\Omega(r_{2}) r_{2}^{2} - \Omega(r_{1}) r_{1}^{2}) = \int{r^{-2}(r^{2}T_{\phi}^{r\;\,\mathrm{visc}})_{,r} dV}\;, \end{equation} where $T_{r \phi}^{\mathrm{visc}}$ is the viscous stress, $\dot{m}$ is the accretion rate and $\Omega (r)$ is the angular velocity. Now consider the existence of magnetic islands that fill a fraction, $f_{V}$, of the volume of the ring, $V$, and penetrate a fraction, $f$, of the top and bottom surface areas of the annular volume, $SA$. The volume of magnetic islands is $V_{MI}$ and its complement in $V$ is $V_{MI}^{C}$, $f_{V} = \int{dV_{MI}}/V$. The surface area elements of the top and bottom faces are $dS\!A_{MI}$ and $dS\!A_{MI}^{C}$, respectively. The angular momentum equation becomes \begin{equation}\dot{L} =\dot{m}(\Omega(r_{2}) r_{2}^{2} - \Omega(r_{1}) r_{1}^{2}) = \int{r^{-2}(r^{2}T_{\phi}^{r\;\,\mathrm{visc}})_{,r} dV_{MI}^{C}} + \int{(-rB^{\phi}B^{z}/(8\pi))_{,z} dV_{MI}} , \end{equation} where $B^{\phi}$ and $B^{z}$ are the azimuthal and vertical magnetic field components, respectively. For $\dot{m}$ fixed in Equations (1) and (2), even though the volume of plasma experiencing viscous dissipation is reduced, accretion proceeds at an equal rate in the magnetically arrested state. \par Simulated MAD accretion flows are subsonic and therefore do not produce significant gas heating from shocks \citep{mck12,pun14}. Thus, the reconnection of the locally tangled field driven by the MRI is the primary source of heat creation at the boundary of the magnetic islands and in the accreting gas. However, the interiors of the magnetic islands are not regions of local MRI driven heating. The total volume available for MRI induced viscous heating is reduced by the magnetic islands. 
Therefore, the MRI suppression in magnetically arrested flows indicates states of lower radiative efficiency relative to standard accretion states without magnetic islands. The magnetic islands radiate Poynting flux along the magnetic field lines. Again consider the annular ring above. The total energy flux, $Q = Q^{\mathrm{visc}} + Q^{\mathrm{jet}}$, has two components, where $Q^{\mathrm{visc}}$ is the flux of radiation as in a standard accretion disk and $Q^{\mathrm{jet}}$ is primarily poloidal Poynting flux along the magnetic field direction, $S^{P}$ \citep{igu08}. Similarly, the total luminosity $P$ of the ring also has two components: \begin{equation}P = \int{Q\,dA} = (1/2)\int{r (d\Omega/dr)T_{r \phi}^{\mathrm{visc}}dz\,dS\!A_{MI}^{C}} + \int{S^{P} dS\!A_{MI}} \;. \end{equation} The first term on the RHS of Equation (3) is the usual term from standard accretion theory that gives rise to the radiation (such as EUV). The second term is the vertical jet emission. For any approximately axisymmetric MHD Poynting flux dominated jet, regardless of the source, the total integrated electromagnetic poloidal energy flux is \begin{equation} \int S^{P} \mathrm{d}A_{_{\perp}} =(\Omega_{F}/c)\int{(-rB^{\phi}B^{z}/(8\pi))_{,z} dV_{MI}} \approx k\frac{\Omega_{F}^{2}\Phi^{2}}{2\pi^{2} c}\;, \end{equation} where $\Phi$ is the total magnetic flux enclosed within the jet, $\mathrm{d}A_{_{\perp}}$ is the cross-sectional area element and $k$ is a geometrical factor that equals 1 for a uniform highly collimated jet \citep{pun08}. Thus, not only do the magnetic islands of large scale poloidal flux in the inner accretion flow suppress radiation from this region, but they also provide a source of Poynting flux (power for the jet) as they orbit around the black hole with an angular velocity, $\Omega_{\mathrm{F}}$. 
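The jet power relation above, $Q^{\mathrm{jet}} \approx k\,\Omega_F^2\Phi^2/(2\pi^2 c)$, is dimensionally an erg s$^{-1}$ in Gaussian units and can be evaluated numerically. The sketch below uses assumed fiducial values (a $10^9\,M_\odot$ hole, spin $a=0.9$, field lines rotating at roughly half the horizon angular frequency, and $\sim 10^4$ G threading a few gravitational radii); none of these numbers come from the paper:

```python
import math

# Physical constants (cgs)
C_LIGHT = 2.998e10        # cm s^-1
G_NEWTON = 6.674e-8       # cm^3 g^-1 s^-2
M_SUN = 1.989e33          # g

def jet_power(omega_f, phi, k=1.0):
    """Poynting jet power Q ~ k * Omega_F^2 * Phi^2 / (2 pi^2 c) in erg/s.

    omega_f : field-line angular velocity [rad/s]
    phi     : magnetic flux enclosed by the jet [G cm^2]
    k       : geometric factor (1 for a uniform, highly collimated jet)
    """
    return k * omega_f**2 * phi**2 / (2.0 * math.pi**2 * C_LIGHT)

# Assumed fiducial numbers (not from the paper):
m_bh = 1e9 * M_SUN
a = 0.9
r_g = G_NEWTON * m_bh / C_LIGHT**2                # gravitational radius [cm]
r_h = r_g * (1.0 + math.sqrt(1.0 - a**2))         # horizon radius [cm]
omega_h = a * C_LIGHT / (2.0 * r_h)               # horizon angular frequency
omega_f = 0.5 * omega_h                           # ~half of horizon rate
phi = 1e4 * math.pi * (5.0 * r_g)**2              # B ~ 1e4 G over pi*(5 r_g)^2

print(f"Q_jet ~ {jet_power(omega_f, phi):.2e} erg/s")
```

For these assumed inputs the estimate lands in the $10^{47}$ erg s$^{-1}$ range, comparable to the most powerful lobe-derived $\overline{Q}$ values discussed below.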
The local physics that produces the turbulent viscosity, $\eta_{t}$, in $V_{MI}^{C}$ is unchanged from standard accretion and therefore so is $T_{r \phi}^{\mathrm{visc}}= \eta_{t}r(d\Omega/dr)$. Thus, from Equations (1) - (4), in the magnetically arrested case, the radiative luminosity is $\approx 1-f$ of what it would be for standard accretion with the same mass accretion rate. \par If MAA is the source of the EUV deficit, one would expect a correlation (perhaps weak) between the EUV spectral index, $\alpha_{\mathrm{EUV}}$, and jet power within the RLQ population. In Figure 2, $\alpha_{\mathrm{EUV}}$ ($F_{\nu}\sim\nu^{-\alpha_{\mathrm{EUV}}}$) derived from individual spectra in the HST archives (downloaded through MAST) is plotted against estimates of the long term time average of the jet power, $\overline{Q}$. In order to get a meaningful estimate of $\alpha_{\mathrm{EUV}}$, a range of at least 700 \AA\, to 1100 \AA\, in the quasar rest frame was needed to extract the continuum from the numerous broad emission lines documented in \textbf{T02}. Therefore, a redshift of $z>0.63$ is required. Troughs from Lyman limit systems (LLS) were removed by assuming a single cloud with a $\nu^{-3}$ opacity. This was considered acceptable if the power law above the LLS could be continued smoothly through the corrected region. If there were many strong absorption systems or an LLS that compromised a broad emission line, this simple procedure was deemed inadequate for continuum extraction with the available data and the spectrum was eliminated from the sample. A small correction for the Lyman valley was also made per the methods of \citet{zhe97}. Additionally, if there was evidence of a blazar synchrotron component contribution to the continuum (high optical polarization or variability, superluminal motion or gamma ray activity), the underlying accretion disk continuum was considered too uncertain for the sample. 
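The convention $F_{\nu}\sim\nu^{-\alpha_{\mathrm{EUV}}}$ used here reduces, for a pure power law, to a two-point estimator between continuum points; the sketch below implements that, with hypothetical flux values (the real indices were fit to full extracted continua):

```python
import math

def euv_spectral_index(wl1_ang, f1, wl2_ang, f2):
    """Spectral index alpha with F_nu ~ nu^(-alpha), from two continuum
    points given as (rest wavelength [Angstrom], flux density F_nu)."""
    nu1, nu2 = 1.0 / wl1_ang, 1.0 / wl2_ang   # nu is proportional to 1/lambda
    return -math.log(f2 / f1) / math.log(nu2 / nu1)

# Hypothetical continuum points: flux density dropping from 1100 A to 700 A.
alpha = euv_spectral_index(1100.0, 1.00, 700.0, 0.45)
print(f"alpha_EUV = {alpha:.2f}")
```

A flux ratio of 0.45 over the 1100--700 \AA\, span corresponds to $\alpha_{\mathrm{EUV}}\approx 1.8$, i.e., a steeply falling EUV continuum; equal fluxes would give $\alpha_{\mathrm{EUV}}=0$, a flat SED as in the Figure 2 caption.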
\par The most reliable methods of estimating $\overline{Q}$ are based on the optically thin emission from relaxed radio lobes \citep{wil99}. Thus, all sources in the sample needed proof of extended emission on scales larger than the host galaxy so that the lobes can relax ($> 20$ kpc). The proof was derived from archival high resolution interferometry images made between 0.408 GHz and 5 GHz. The HST and radio selection criteria resulted in a total of 18 sources for the sample. The optically thin emission was estimated based on 151 MHz - 178 MHz flux densities (if available) and the lobe fluxes from the radio images. The largest spread in the estimates of $\overline{Q}$, based on optically thin extended emission, is bounded on the high side by the \citet{wil99} estimate for their parameter, \textbf{f} = 20, and on the low side by \citet{pun05}, which assumes that the lobes are inertially dominated. These two extremes are used to generate the error bars on $\overline{Q}$ in Figure 2 \citep{pun14}. The correlation in the scatter in Figure 2 is statistically significant at the 0.987 level by a Spearman rank correlation test, while the correlations of $\alpha_{\mathrm{EUV}}$ with $z$ and spectral luminosity, $\lambda L_{\lambda}(1100 \AA)$, are significant at the 0.842 and 0.720 levels, respectively, for the same sample. The large scatter in Figure 2 is expected on many grounds: the ejections producing $\overline{Q}$ are not contemporaneous with the HST spectrum, and there are variations in the quasar host EUV absorption, $\dot{m}$, black hole masses ($M$), and spins ($a/M$). \par Since a magnetically arrested innermost accretion flow naturally explains the suppressed EUV and jet production, it is of interest to estimate the size of the region of suppressed radiation. 
From the composites in Figure 1, the EUV deficit in RLQs is $\approx 0.045L_{\mathrm{bol}}$ of the optically thick accretion flow\footnote{The emission line contribution to $L_{\mathrm{bol}}$ is chosen to be 25\% of the optical/UV luminosity \citep{zhe97}. The X-ray contribution to $L_{\mathrm{bol}}$ is ignored as discussed in Section 2. Ignoring the X-ray luminosity of the accretion flow proper will affect the estimates by $<$ 10\% \citep{lao97,dav11}.}. Consider excising a fraction, $f$, of the innermost accretion flow in various models of accretion disks in Figure 3. The simulations of accretion disks in \citet{pen10} include luminosity from the plunge region inside of the innermost stable orbit (ISCO) and are parameterized by $a/M$ and the disk thickness defined by their parameter, $h/r$. The \citet{nov73}, NT, models do not include a plunge region. Note that the putative magnetically arrested region must be concentrated at the smallest radii, since the EUV is suppressed in the RLQ composite of Figure 1, but not the UV and optical. The plausible range, $0.3 < f < 0.9$, near the black hole is based on the simulations presented in \citet{igu08,pun09}. The putative magnetic islands would have to be concentrated between the event horizon and an outer boundary of $<2.8 M$ if $a/M=0.98$ and $<5.5M$ if $a/M=0.7$ to account for the 4.5\% luminosity suppression in RLQs. \begin{figure} \includegraphics[width=170 mm, angle= 0]{f3.eps} \caption{The size of the magnetically arrested region required for the EUV deficit in RLQs versus the filling factor for magnetic flux, $f$, for various models from \citet{pen10}.} \end{figure} \par It should be noted that, unlike the simulations in \citet{igu08,pun09}, the simulations in \citet{mck12,tch11,tch12} that are heavily seeded with large scale magnetic flux are devoid of magnetic islands this close to the event horizon. 
This is evidenced by the claim in \citet{mck12} that no significant Poynting flux emerges from this region (see Equation 4, above) as well as the linked online videos of the simulations. The videos show the innermost significant magnetic island concentrations are around $\sim 20M$ and they are extremely transient. This either means that the interpretation of the EUV deficit presented here is wrong or the simulations do not represent the magnetic flux evolution accurately. Using fusion and solar physics as a guide, the latter seems quite likely since these simulations are based on simple single fluid ideal MHD \citep{yam07,mal09,bau13,thr12}. Even more important, the two most relevant dynamic elements for large scale, poloidal, magnetic flux evolution near the black hole, reconnection and diffusion of mass onto the field lines, occur as a consequence of numerical diffusion in the simulations and not of an actual physical process. \section{Discussion} In this paper, the EUV deficit in RLQ SEDs was argued to arise from a deficit of optically thick thermal gas in the innermost accretion flow. It was posited that islands of large scale magnetic flux near the black hole, like those that occur in some numerical simulations, would explain this missing volume of optically thick thermal gas and also explain the radio jet launching mechanism. As a further consequence, it was argued in \citet{pun99} that the presence of magnetic flux in the inner accretion disk will diminish the power of radiation driven winds that appear to be common in RQs. It is not claimed that this is the only explanation of the EUV deficit. However, none of the alternatives naturally produce a radio jet. Other explanations based on numerical and theoretical models include lower $a/M$ in RLQs (larger ISCO), a stronger quenching wind in RLQs per the model of \citet{lao14} or higher $M$ and lower $\dot{m}$ in RLQs. 
Using a sample of $>6000$ QSOs \citet{mcl04} found that the mean $M$ of RLQs is 1.45 times that of RQs with large scatter. In disk models, larger $M$ shifts the peak of the SED to lower frequency, therefore causing a decrease in the EUV. However, it was shown in \citet{dav11,lao14}, based on PG quasars, that black hole mass variations produce much smaller changes in the far UV turnover region of the spectrum than expected from accretion disk models. Thus the issue needs to be addressed empirically. In Figure 4, the \textbf{T02} composites are overlaid. The location of the SED peak and the curvature of the continuum long-ward of the peak are indistinguishable, contrary to the notion that a disk temperature shift results in the EUV deficit. Furthermore, if the mass difference is the source of the EUV deficit, the correlation in Figure 2 would be a coincidence. \begin{figure} \includegraphics[width=170 mm, angle= 0]{f4.eps} \caption{The blue and red plots are the composite spectra for RQs and RLQs from \textbf{T02}, respectively. The spectral peak and the curvature of the continuum long-ward of the peak are virtually identical. The composites are normalized to 0 at 1100\AA. That is indicated by the black lines.} \end{figure} \par The basic idea presented here does not depend on any particular accretion model. The observational data indicates that the only significant difference in the optically thick thermal continuum between the RQ and RLQ composite spectra is a deficit of the highest temperature gas (the EUV) in RLQs. This must be created by the innermost optically thick gas independent of the model. A plausible explanation is the displacement of this gas by the large scale magnetic flux of the radio jet at its launch site. \begin{acknowledgements} This work benefitted greatly from the input of a very knowledgeable referee who directed the effort towards many important topics that were initially overlooked. 
I am also indebted to Michael Brotherton who computed the RLQ and RQQ black hole masses from SDSS DR7, for $0.9 < z <1.1$. Although not directly used due to space constraints, this provided valuable insight. I am also thankful to Robert Antonucci for helping me correct for the Lyman limit systems in the HST spectra. I am extremely grateful to Matt Malkan who reviewed the logic of the arguments presented and implied by the paper with me and also had great insight into the proper interpretation of the HST data. \end{acknowledgements}
\section{Introduction} \label{sec_introduction} Cell-free massive multiple-input multiple-output (MIMO) is considered a promising technology for powering beyond-5G networks. The key idea of a cell-free massive MIMO system is to distributively deploy a large number of access points (APs) coherently serving all users in the system. As illustrated in Fig.~\ref{fig_cell_free_mMIMO_system}, the APs in a cell-free system can be randomly located all over the coverage area and are connected to one or several central processing units (CPUs). Due to this distributed deployment, any user is highly likely to be close to at least one AP. A cell-free system can effectively resolve the poor coverage issue in cell-edge areas of conventional cellular systems~\cite{Ngo2017cell-free,Interdonato,Emil2020Scalable}. In addition, a cell-free system enables different levels of cooperation among the APs with certain levels of joint signal processing at the CPU, ranging from fully centralized processing (\emph{Level 4}), to partially distributed processing (\emph{Levels 3} and \emph{2}), and to fully distributed processing (\emph{Level 1}) \cite{Emil2020Making}. Joint signal processing at the system's CPU allows a cell-free system to better address the inter-cell interference, which becomes more severe in cellular systems with small-cell deployments. Therefore, cell-free massive MIMO systems can offer significant enhancements in user coverage and energy efficiency compared to traditional cellular systems~\cite{Ngo2017cell-free,Nayebi2017Precoding,Emil2020Making}. 
\begin{figure}[t] \centering \includegraphics[width=.8\linewidth]{fig/cell-free-system.eps} \caption{Diagram of a cell-free massive MIMO system with multiple distributed APs connected to a CPU.} \label{fig_cell_free_mMIMO_system} \end{figure} The majority of existing research on uplink cell-free massive MIMO has focused on spectral and energy efficiency analysis with linear signal processing methods, such as maximum-ratio combining (MRC) \cite{Ngo2017cell-free}, zero-forcing (ZF) \cite{Ngo2017cell-free}, and linear minimum mean-squared error (LMMSE) \cite{Emil2020Making}. While such approaches have relatively low complexity, linear methods do not perform well in systems with a low level of favorable propagation (e.g., when the number of AP antennas is small or is not much larger than the number of UEs, or when the channels are highly correlated). Nonlinear signal processing is thus a promising alternative approach that can offer higher spectral efficiency \cite{Emil2020Making} or lower bit error rate (BER) \cite{Song-TWC-2021}. The recent work in \cite{Song-TWC-2021} proposed a nonlinear optimization-based algorithm for joint channel estimation and data detection in cell-free massive MIMO. However, the approach in \cite{Song-TWC-2021} can only provide point estimates of the data symbols of interest. Different from these works, the focus of this paper is on devising efficient algorithms to obtain Bayesian estimates of the data symbols. Unfortunately, computing the exact posterior distributions of the data symbols is intractable, even in a conventional single-cell MIMO system. We, therefore, develop variational Bayes (VB) inference methods for approximating intractable posterior distributions of data symbols, which are then used to detect the symbols. We investigate the VB methods for joint data detection with fully centralized processing at the CPU, as well as for distributed data detection at the APs. 
For fully centralized processing, we assume that full knowledge of the channel state information (CSI) is available at the CPU. Likewise, for distributed processing at each AP, we assume that CSI knowledge for the channel from the users to that AP is locally available. Simulation results show significant performance advantages of the developed VB methods over the LMMSE processing techniques in \cite{Emil2020Making}. \textit{Notation:} Upper-case and lower-case boldface letters denote matrices and column vectors, respectively. The transpose and conjugate transpose are denoted by $[\cdot]^T$ and $[\cdot]^H$, respectively. $\mc{CN}(\bs{\mu},\bs{\Sigma})$ represents a complex Gaussian random vector with mean $\bs{\mu}$ and covariance matrix $\mb{\Sigma}$; $\mc{CN}(\mb{x};\bs{\mu},\mb{\Sigma}) = \big(1/\big(\pi^K|\mb{\Sigma}|\big)\big)\mr{exp}\big(-(\mb{x}-\bs{\mu})^H\mb{\Sigma}^{-1}(\mb{x}-\bs{\mu})\big)$ denotes the probability distribution function (PDF) of a length-$K$ random vector $\mb{x}\sim \mc{CN}(\bs{\mu},\mb{\Sigma})$. $\mathbb{E}_{p(x)}[x]$ and $\mr{Var}_{p(x)}[x]$ are the mean and the variance of $x$ with respect to its distribution $p(x)$; $ \langle x\rangle$ and $\sigma_{x}^2$ denote the mean and variance of $x$ with respect to a variational distribution $q(x)$. \section{System Model} \label{sec_system_model_and_problem_formulation} We consider an uplink cell-free massive MIMO system with $L$ distributed APs, each equipped with $N$ antennas, serving $K$ randomly located single-antenna users. It is assumed that $N\leq K\leq NL$. Denote $\mb{h}_{i\ell} \in \mbb{C}^{N}$ as the uplink channel from the $i$-th user to the $\ell$-th AP and $\mb{H}_{\ell} = [\mb{h}_{1\ell},\ldots,\mb{h}_{K\ell}]$. We assume a block Rayleigh fading scenario in which the channel $\mb{h}_{i\ell}$ remains constant for $T$ time slots and is normally distributed as $\mc{CN}(\mb{0},\beta_{i\ell} \mb{R}_{i\ell})$. 
Here, $\beta_{i\ell}$ is the large-scale fading coefficient and $\mb{R}_{i\ell}$ is the normalized spatial correlation matrix whose diagonal elements are equal to one. Due to the random user deployment, the large-scale fading coefficient $\beta_{i\ell}$ differs from one user to another, resulting in a non-i.i.d. channel matrix $\mb{H}_{\ell}$. We assume that the channel vectors $\{\mb{h}_{i\ell}\}$ are independent of each other for each user-AP pair. Let $\mb{x}_t = [x_{1,t}, \ldots, x_{K,t}]^T$ be the transmitted symbol vector at time slot $t$, in which the transmitted symbol $x_{i,t}$ from the $i$-th user is drawn from a complex-valued discrete constellation $\mc{S}$ such that $\mathbb{E}[x_{i,t}] = 0$ and $\mathbb{E}[|x_{i,t}|^2] = \rho_i$. The prior distribution of $x_{i,t}$ is thus given by \begin{eqnarray}\label{prior-x} p(x_{i,t}) = \sum_{a\in \mc{S}} p_a\delta(x_{i,t}-a), \end{eqnarray} where $p_a$ corresponds to the known prior probability of the constellation point $a\in \mc{S}$. The received signal vector $\mb{y}_{\ell,t} \in \mbb{C}^{N}$ at the $\ell$-th AP can be modeled as \begin{equation} \mb{y}_{\ell,t} = \sum_{i=1}^{K} \mb{h}_{i\ell} x_{i,t} + \mb{n}_{\ell,t} = \mb{H}_{\ell}\mb{x}_t + \mb{n}_{\ell,t}, \end{equation} where $\mb{n}_{\ell,t}$ is the noise vector whose elements are independent and identically distributed (i.i.d.) as $\mc{CN}(0,N_0)$. The goal of this paper is to obtain an estimate $\hat{\mb{x}}_t$ of $\mb{x}_t$ from the observed signal vectors $\mb{y}_{\ell,t}$ across the $L$ distributed APs with minimum mean squared detection error $\mathbb{E}\big[\|\mb{x}_t-\hat{\mb{x}}_t\|^2\big]$. \section{Four Levels of Cell-Free Massive MIMO Signal Processing Using LMMSE Filtering} To frame the discussion on the developed VB methods, we revisit the four levels of signal processing in cell-free systems using LMMSE filtering as studied in \cite{Emil2020Making}. 
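As an illustration of the signal model above (not the paper's simulation setup; all dimensions and fading coefficients below are assumptions chosen for readability), the per-AP observations and their stacked form can be sketched numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
L, N, K, N0 = 4, 2, 3, 0.1   # illustrative dimensions, not the paper's setup

# QPSK constellation with E[|x|^2] = 1 for every user
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
x = rng.choice(qpsk, size=K)

H, Y = [], []
for l in range(L):
    # i.i.d. Rayleigh small-scale fading scaled by a large-scale coefficient
    Hl = (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))) / np.sqrt(2)
    Hl = Hl * np.sqrt(rng.uniform(0.1, 1.0, size=K))  # column i scaled by sqrt(beta_{il})
    n = np.sqrt(N0 / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
    H.append(Hl)
    Y.append(Hl @ x + n)                              # y_l = H_l x + n_l

# Stacking the L per-AP observations gives the fully centralized (Level-4) model
y_stack, H_stack = np.concatenate(Y), np.vstack(H)
print(y_stack.shape, H_stack.shape)                   # (8,) (8, 3)
```

The stacked pair `(y_stack, H_stack)` is exactly the single large-scale MIMO system that centralized processing operates on.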
Since the processing is based on a per time slot basis, without loss of generality, we drop the time index $t$. \subsection{Level 4: Fully Centralized Processing} At this level, the APs do not process their received signals. Instead, the received signals are forwarded to the CPU for fully centralized processing, including the data detection task. The signals forwarded from the $L$ APs can be stacked into \begin{equation}\label{eq_large_MIMO} \mb{y} = \mb{Hx} + \mb{n}, \end{equation} where $\mb{y} = [\mb{y}_1^T, \ldots, \mb{y}_L^T]^T$, $\mb{H} = [\mb{H}_1^T, \ldots, \mb{H}_{L}^T]^T$, and $\mb{n} = [\mb{n}_1^T, \ldots, \mb{n}_L^T]^T$. The processing for cell-free massive MIMO in this level is similar to the processing at a conventional co-located MIMO receiver. The CPU detects $\mb{x} = [x_1,\ldots,x_K]^T$ using the received signal vector $\mb{y}$ and the channel matrix $\mb{H}$. Among the linear detectors, the LMMSE detector maximizes the signal-to-interference-and-noise ratio (SINR) and also achieves the best detection performance \cite{Emil2020Making}. With the full knowledge of $\mb{H}$, the LMMSE estimate $\hat{\mb{x}}$ is formed as \begin{eqnarray} \hat{\mb{x}} = \big(\mb{H}^H\mb{H}+N_0\mb{I}_{K}\big)^{-1}\mb{H}^H\mb{y}, \end{eqnarray} which is then element-wise projected onto $\mc{S}$. We note that the LMMSE filter in the presented form requires the inverse of a $K\times K$-dimensional matrix. \subsection{Level 3: Local Processing \& Large-Scale Fading Decoding} At this level, each AP pre-processes its received signal by computing a local estimate of $\mb{x}$, which is forwarded to the CPU for final decoding \cite{Emil2020Making}. Assuming full knowledge of channel matrix $\mb{H}_{\ell}$ at the $\ell$-th AP, the local LMMSE estimate $\check{\mb{x}}_{\ell} = [\check{x}_{1\ell},\ldots,\check{x}_{K\ell}]^T$ of $\mb{x}$ can be found as \begin{eqnarray} \check{\mb{x}}_{\ell} = \mb{H}_{\ell}^H \big(\mb{H}_{\ell}\mb{H}^H_{\ell}+N_0\mb{I}_{N}\big)^{-1}\mb{y}_{\ell}. 
\end{eqnarray} We note that the LMMSE filter in this presented form requires the inverse of an $N\times N$-dimensional matrix. The CPU can then linearly combine the local estimates $\{\check{x}_{i\ell}\,:\,\ell=1,\ldots,L\}$ to obtain the estimate \begin{eqnarray} \label{linear-combination} \hat{x}_{i} = \sum_{\ell=1}^L a_{i\ell}\check{x}_{i\ell}, \end{eqnarray} which is eventually used to decode $x_i$. Here, the weighting coefficient vector $\mb{a}_i = [a_{i1},\ldots,a_{iL}]^T$ relies only on channel statistics and can be optimized by the CPU. This combining method is also known as the large-scale fading decoding (LSFD) strategy in the context of cellular massive MIMO. We note that no instantaneous CSI of any channel is required at the CPU. \subsection{Level 2: Local Processing \& Simple Centralized Decoding} At this level, the CPU forms an estimate of $x_{i}$ by simply taking the average of the local estimates \cite{Emil2020Making}. This yields an estimate $\hat{x}_i$ as \begin{eqnarray} \hat{x}_i = \frac{1}{L}\sum_{\ell=1}^{L}\check{x}_{i\ell}. \end{eqnarray} We note that no statistical parameters of CSI are needed at the CPU at this level of centralized signal processing. \subsection{Level 1: Small-Cell Network} At this level, each user signal is decoded by only one AP, namely the one that gives the highest spectral efficiency to the user, i.e., the highest SINR \cite{Emil2020Making}. LMMSE filtering can be applied to obtain the local estimate of the user signal. Since only one estimate per user is forwarded to the CPU, no centralized decoding is required. \section{Variational Bayes for Cell-Free Detection} In this paper, we focus on developing VB-based methods for data detection in cell-free massive MIMO systems that require certain levels of centralized processing, i.e., Levels 4, 3, and 2. For Level 4 processing, we assume that the symbol vectors are estimated independently at each time slot. 
However, for Levels 3 and 2 processing, we assume that the symbol vectors are first estimated locally over the whole fading block. As explained later in the section, this method of processing helps reduce the amount of signaling to the CPU, where the local estimates are aggregated to obtain the final estimate. \subsection{Background on VB} We first present the background on VB for approximate inference that will be exploited for solving the data detection problem in cell-free systems. VB inference is a powerful framework from machine learning that approximates intractable posterior distributions of latent variables with a known family of simpler distributions through optimization. The goal of VB inference is to find an approximation for a computationally intractable posterior distribution $p(\mb{x}|\mb{y})$ given a probabilistic model that specifies the joint distribution $p(\mb{x},\mb{y})$, where $\mb{y}$ represents the set of all observed variables and $\mb{x}$ is a set of $m$ latent variables and parameters. The VB inference method aims at finding a density function $q(\mb{x})$ with its own setting of variational parameters within a family $\mc{Q}$ of density functions that makes $q(\mb{x})$ close to the posterior distribution of interest $p(\mb{x}|\mb{y})$. VB inference amounts to solving the following optimization problem: \begin{align} q(\mb{x}) &= \arg\min_{q(\mb{x}) \in \mc{Q}}\; \mr{KL}\big(q(\mb{x}) \|p(\mb{x}|\mb{y}) \big) \nonumber \\ &= \arg\min_{q(\mb{x}) \in \mc{Q}}\;\mathbb{E}_{q(\mb{x})} \big[\ln q(\mb{x})\big] - \mathbb{E}_{q(\mb{x})}\big[\ln p(\mb{x}|\mb{y})\big] \; , \end{align} where $\mr{KL}\big(q(\mb{x})\|p(\mb{x}|\mb{y})\big)$ is the Kullback-Leibler (KL) divergence from $q(\mb{x})$ to $p(\mb{x}|\mb{y})$. 
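As a quick numerical illustration of the KL objective just defined (the distributions below are arbitrary illustrative numbers, not from the paper):

```python
import numpy as np

# KL(q || p) for two discrete distributions over the same support
q = np.array([0.5, 0.3, 0.2])
p = np.array([0.4, 0.4, 0.2])
kl = np.sum(q * (np.log(q) - np.log(p)))

# KL is non-negative and vanishes iff the two distributions coincide
print(round(kl, 4), kl >= 0)   # 0.0253 True
```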
Minimizing the KL divergence is equivalent to maximizing the evidence lower bound ($\mr{ELBO}$)~\cite{bishop2006pattern}, which is defined as \begin{align} \mr{ELBO}(q) = \mathbb{E}_{q(\mb{x})} \big[\ln p(\mb{x},\mb{y})\big] - \mathbb{E}_{q(\mb{x})} \big[\ln q(\mb{x}) \big] \; . \end{align} The maximum of $\mr{ELBO}(q)$ occurs when $q(\mb{x}) = p(\mb{x}|\mb{y})$. Since working with the true posterior distribution is often intractable, it is more convenient to consider a restricted family of distributions $q(\mb{x})$. Among VB inference methods, the \textit{mean-field approximation} enables efficient optimization of the variational distribution over a partition of the latent variables, while keeping the variational distributions over other partitions fixed~\cite{bishop2006pattern}. The mean-field variational family is constructed such that \begin{eqnarray} q(\mb{x}) = \prod_{i=1}^m q_i(x_i), \end{eqnarray} where the latent variables are mutually independent and each is governed by a distinct factor in the variational density. Among all mean-field distributions $q(\mb{x})$, the general expression for the optimal solution of the variational density $q_i(x_i)$ that maximizes the ELBO can be obtained as~\cite{bishop2006pattern} \begin{align} q_i(x_i) \propto \mr{exp}\left\{\big\langle{\ln p (\mb{y}|\mb{x}) + \ln p(\mb{x})\big\rangle}\right\} \; , \end{align} where $\lr{\cdot}$ denotes the expectation with respect to all latent variables except $x_i$ using the currently fixed variational density $q_{-i}(\mb{x}_{-i}) = \prod_{j\neq i} q_{j}(x_{j})$. By iterating the update of $q_i(x_i)$ sequentially over all $i$, the $\mr{ELBO}(q)$ objective function can be monotonically improved. This is the basis behind the \textit{coordinate ascent variational inference} algorithm, which guarantees convergence to at least a local optimum of $\mr{ELBO}(q)$~\cite{bishop2006pattern,wainwright2008graphical}. 
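A minimal sketch of coordinate ascent on a classic toy problem (the bivariate Gaussian example of Bishop, Ch.~10; the target and initialization are illustrative assumptions): a mean-field Gaussian factorization admits closed-form coordinate updates for the factor means, and iterating them recovers the exact posterior mean.

```python
import numpy as np

# Target: p(x) = N(mu, Lambda^{-1}) in two dimensions; mean-field q(x) = q1(x1) q2(x2)
mu = np.array([1.0, -1.0])
Lam = np.array([[2.0, 0.8],
                [0.8, 2.0]])               # precision matrix

m1, m2 = 0.0, 0.0                          # initial variational means
for _ in range(50):
    # Closed-form CAVI updates for the two factor means
    m1 = mu[0] - (Lam[0, 1] / Lam[0, 0]) * (m2 - mu[1])
    m2 = mu[1] - (Lam[1, 0] / Lam[1, 1]) * (m1 - mu[0])

# CAVI recovers the exact posterior mean (though it underestimates the variances)
print(np.allclose([m1, m2], mu))           # True
```

Each sweep is a contraction toward the fixed point $(m_1, m_2) = \boldsymbol{\mu}$, illustrating the monotone-improvement property stated above.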
To this end, we examine how the mean-field VB framework can be exploited for data detection at different levels of cooperation in a cell-free system. \subsection{Level 4: Fully Centralized Processing} At this level, the signals forwarded from the APs can be stacked into a single large-scale MIMO system as shown in \eqref{eq_large_MIMO}. In a recent work \cite{Duy-MF-VB-2022}, we developed several VB-based methods for MIMO data detection. Among them, the \textbf{\textit{LMMSE-VB algorithm}} showed superior performance in MIMO systems with non-i.i.d. channels. Naturally, the algorithm can be adopted for data detection in cell-free systems with fully centralized processing. In the following, we present key operations in the algorithm. For details of the algorithm, we refer the readers to \cite{Duy-MF-VB-2022}. The LMMSE-VB algorithm treats the background noise covariance matrix as an unknown random variable, instead of treating the noise variance $N_0$ as known. The postulated noise covariance matrix $\mb{C}^{\mathrm{post}}$ is estimated by the algorithm itself. For ease of computation, we use $\mb{W} = (\mb{C}^{\mathrm{post}})^{-1}$ to denote the precision matrix and assume a conjugate prior complex Wishart distribution $\mc{CW}(\mb{W}_0,n)$ for $\mb{W}$, where $\mb{W}_0\succeq \mb{0}$ is the scale matrix and $n\geq NL$ indicates the degrees of freedom. The PDF of $\mb{W}\sim \mc{CW}(\mb{W}_0,n)$ satisfies \begin{eqnarray} p(\mb{W}) \propto |\mb{W}|^{n-NL}\mr{exp}\big(-\operatorname{tr}\{\mb{W}_0^{-1}\mb{W}\}\big). \end{eqnarray} The joint distribution $p(\mb{y},\mb{x},\mb{W};\mb{H})$ can be factored as \vspace{-0.1cm} \begin{eqnarray}\label{factor-W} p(\mb{y},\mb{x},\mb{W};\mb{H}) = p(\mb{y}|\mb{x},\mb{W};\mb{H})p(\mb{x})p(\mb{W}), \end{eqnarray} where $p(\mb{y}|\mb{x},\mb{W};\mb{H}) = \mc{CN}(\mb{y};\mb{Hx},\mb{W}^{-1})$. 
Given the observation $\mb{y}$, we aim at obtaining the mean-field variational distribution $q(\mb{x},\mb{W})$ such that \begin{eqnarray} p(\mb{x},\mb{W}|\mb{y};\mb{H}) \approx q(\mb{x},\mb{W}) = \prod_{i=1}^K q_i(x_i)q(\mb{W}). \end{eqnarray} The optimization of $q(\mb{x},\mb{W})$ is executed by iteratively updating $\{x_i\}$ and $\mb{W}$ as follows. \textit{a) Updating $x_i$.} The variational distribution $q_i(x_i)$ is obtained by expanding the conditional in \eqref{factor-W} and taking the expectation with respect to all latent variables except $x_i$ using the variational distribution $\prod_{j\neq i}^K q_j(x_j)q(\mb{W})$: \begin{eqnarray} \label{q-x-LMMSE-VB} q_i(x_i)&\propto& p(x_i)\,\mc{CN}\big(z_i;x_i,{1}/{\big(\mb{h}_i^H\lr{\mb{W}}\mb{h}_i\big)} \big), \end{eqnarray} where $z_i$ is a linear estimate of $x_i$ that is defined as \begin{eqnarray} \label{z-i-LMMSE-VB} z_i &=& \lr{x_i} + \frac{\mb{h}^H_i\lr{\mb{W}}}{\mb{h}_i^H\lr{\mb{W}}\mb{h}_i}\big(\mb{y} - \mb{H}\lr{\mb{x}}\big). \end{eqnarray} It is observed in \eqref{q-x-LMMSE-VB} that $\mc{CN}\big(z_i;x_i,\hat{\sigma}_i^2\big)$ with $\hat{\sigma}_i^2 =1/\big(\mb{h}_i^H\lr{\mb{W}}\mb{h}_i\big)$ can be interpreted as the likelihood function $p\big(z_i|x_i;\hat{\sigma}_i^2\big)$. In other words, the mean-field VB approximation decouples the linear MIMO system into $K$ parallel AWGN channels $z_i = x_i + \mc{CN}\big(0,\hat{\sigma}_i^2\big)$. The variational distribution $q_i(x_i)$ is realized by normalizing $p(x_i)\,\mc{CN}\big(z_i;x_i,\hat{\sigma}_i^2\big)$. The variational mean $\lr{x_i} = \mathbb{E}[x_i|z_i]$ and variance $\sigma_{x_i}^2$ are then computed accordingly. \textit{b) Updating $\mb{W}$.} The variational distribution $q(\mb{W})$ is obtained by taking the expectation of the conditional in \eqref{factor-W} with respect to $q(\mb{x})$: \begin{eqnarray} \label{q-W} q(\mb{W}) &\propto& \mr{exp}\big\{\big\langle\ln p(\mb{y}|\mb{x},\mb{W};\mb{H}) + \ln p(\mb{W}) \big\rangle\big\}. 
\end{eqnarray} The variational distribution $q(\mb{W})$ is also complex Wishart with $n+1$ degrees of freedom \cite{Duy-MF-VB-2022}. The variational mean $\lr{\mb{W}}$ can be computed accordingly. In \cite{Duy-MF-VB-2022}, we also proposed to use the estimator \begin{eqnarray} \lr{\mb{W}} = \bigg(\frac{\|\mb{y}-\mb{Hx}\|^2}{NL}\mb{I}_{NL} + \mb{H}\bs{\Sigma}_{\mb{x}}\mb{H}^H\bigg)^{-1}, \end{eqnarray} where $\bs{\Sigma}_{\mb{x}} = \mr{diag}(\sigma_{x_1}^2,\ldots,\sigma_{x_K}^2)$. By iteratively optimizing $\big\{q_i(x_i)\big\}$ and $q(\mb{W})$ via the updates of $\{\lr{x_i}\}$ and $\lr{\mb{W}}$, we obtain the CAVI algorithm for estimating $\mb{x}$ and the precision matrix $\mb{W}$. We refer to this scheme as the LMMSE-VB algorithm since $z_i$ resembles an LMMSE estimate of $x_i$ due to the cancellation of the inter-user interference and the whitening with the postulated noise covariance matrix $\mb{C}^{\mathrm{post}}$. \subsection{Level 3: Local Processing \& Nonlinear Decoding} At this level, our proposed VB-based method involves two operations: 1) Executing the LMMSE-VB algorithm independently at each AP to compute local estimates of $\mb{x}_t$ and 2) Aggregating the local estimates at the CPU for joint nonlinear decoding of $\mb{x}_t$. However, we make a minor modification to the LMMSE-VB algorithm that allows it to operate over the whole block of $T$ time slots. \subsubsection{AP Processing} The signal processing at an AP, say the $\ell$-th AP, is to generate a coarse estimate of $\mb{x}_t$ from the observation $\mb{y}_{\ell,t}$. We treat the background noise covariance matrix at the $\ell$-th AP as an unknown random variable. The postulated noise matrix $\mb{C}_{\ell}^{\mr{post}}$ has to be estimated as well. We denote the precision matrix $\mb{W}_{\ell}=(\mb{C}_{\ell}^{\mr{post}})^{-1}$, $\mb{Y}_{\ell}=[\mb{y}_{\ell,1},\ldots,\mb{y}_{\ell,T}]$, and $\mb{X} = [\mb{x}_{1},\ldots,\mb{x}_{T}]$. 
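The LMMSE-VB iterations apply to any linear Gaussian model $\mb{y}=\mb{Hx}+\mb{n}$, whether the stacked Level-4 system or, as below, a per-AP system. A minimal sketch (illustrative dimensions, a uniform QPSK prior, and the plug-in estimator for $\lr{\mb{W}}$ are all assumptions of the sketch, not the paper's exact implementation):

```python
import numpy as np

rng = np.random.default_rng(1)
M, K, N0 = 16, 4, 0.01        # M plays the role of the stacked dimension NL (illustrative)
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
x_true = rng.choice(qpsk, size=K)
H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
y = H @ x_true + np.sqrt(N0 / 2) * (rng.standard_normal(M) + 1j * rng.standard_normal(M))

def qpsk_posterior(z, s2):
    """Mean/variance of a uniform-QPSK symbol observed in AWGN of variance s2."""
    logw = -np.abs(z - qpsk) ** 2 / s2
    w = np.exp(logw - logw.max())       # log-sum-exp guard against underflow
    w = w / w.sum()
    mean = np.sum(w * qpsk)
    return mean, np.sum(w * np.abs(qpsk - mean) ** 2)

x_hat, var_x = np.zeros(K, dtype=complex), np.ones(K)
for _ in range(20):
    # plug-in estimate of the postulated noise precision <W>
    r = y - H @ x_hat
    W = np.linalg.inv(np.vdot(r, r).real / M * np.eye(M)
                      + H @ np.diag(var_x) @ H.conj().T)
    for i in range(K):
        g = H[:, i].conj() @ W          # row vector h_i^H <W>
        s2 = 1.0 / (g @ H[:, i]).real   # per-symbol AWGN variance
        z = x_hat[i] + s2 * (g @ (y - H @ x_hat))   # linear estimate z_i of x_i
        x_hat[i], var_x[i] = qpsk_posterior(z, s2)

detected = qpsk[np.argmin(np.abs(x_hat[:, None] - qpsk), axis=1)]
print(np.array_equal(detected, x_true))
```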
The joint distribution $p(\mb{Y}_{\ell}, \mb{X}, \mb{W}_{\ell}; \mb{H}_{\ell})$ can be factorized as \begin{equation}\label{factor-W-l} p(\mb{Y}_{\ell}, \mb{X}, \mb{W}_{\ell}; \mb{H}_{\ell}) = p(\mb{Y}_{\ell} |\mb{X}, \mb{W}_{\ell}; \mb{H}_{\ell})p(\mb{X})p(\mb{W}_{\ell}), \end{equation} where $p(\mb{Y}_{\ell} |\mb{X}, \mb{W}_{\ell}; \mb{H}_{\ell}) = \prod_{t=1}^Tp(\mb{y}_{\ell,t}|\mb{x}_t,\mb{W}_{\ell};\mb{H}_{\ell})$ with $p(\mb{y}_{\ell,t}|\mb{x}_t,\mb{W}_{\ell};\mb{H}_{\ell})= \mc{CN}\big(\mb{y}_{\ell,t};\mb{H}_{\ell}\mb{x}_t,\mb{W}_{\ell}^{-1}\big)$. Given the observation $\mb{Y}_\ell$, we aim at obtaining the mean-field variational distribution $q_\ell(\mb{X},\mb{W}_\ell)$ such that \begin{align} p(\mb{X},\mb{W}_{\ell}|\mb{Y}_\ell;\mb{H}_\ell) &\approx q_\ell(\mb{X},\mb{W}_\ell) \nonumber\\ &= \prod_{i=1}^K\prod_{t=1}^T q_{i\ell,t}(x_{i,t})q(\mb{W}_\ell). \end{align} The optimization of $q_\ell(\mb{X},\mb{W}_\ell)$ is executed by iteratively updating $\{x_{i,t}\}$ and $\mb{W}_\ell$ as follows. 
\textit{a) Update $x_{i,t}$:} The variational distribution $q_{i\ell,t}(x_{i,t})$ is obtained by expanding the conditional in \eqref{factor-W-l} and taking the expectation with respect to all latent variables except $x_{i,t}$ using the variational distribution $\prod_{(j,r)\neq(i,t)} q_{j\ell,r}(x_{j,r})q(\mb{W}_\ell)$: \begin{align}\label{q-x-local} &q_{i\ell,t}(x_{i,t}) \nonumber \\ &\propto \exp \left \{\langle \ln p(\mb{y}_{\ell,t}|\mb{x}_t,\mb{W}_{\ell};\mb{H}_{\ell}) + \ln p(\mb{x}_t) \rangle \right \} \notag \\ &\propto p(x_{i,t})\exp \left \{\left \langle -(\mb{y}_{\ell,t}-\mb{H}_{\ell}\mb{x}_t)^H\mb{W}_{\ell}(\mb{y}_{\ell,t}-\mb{H}_{\ell}\mb{x}_t) \right \rangle \right\} \notag\\ &\propto p(x_{i,t})\exp \left \{ -\mb{h}_{i\ell}^H\langle \mb{W}_{\ell} \rangle\mb{h}_{i\ell} |x_{i,t} - z_{i\ell,t}|^2\right \} \notag\\ &\propto p(x_{i,t})\,\mc{CN}\big(z_{i\ell,t};x_{i,t},1/(\mb{h}_{i\ell}^H\langle \mb{W}_{\ell} \rangle\mb{h}_{i\ell})\big), \end{align} where \begin{eqnarray} z_{i\ell,t} &=& \frac{\mb{h}_{i\ell}^H\langle \mb{W}_{\ell} \rangle}{\mb{h}_{i\ell}^H\langle \mb{W}_{\ell} \rangle\mb{h}_{i\ell}} \big(\mb{y}_{\ell,t} - \sum_{j\neq i}^K \mb{h}_{j\ell}\langle x_{j\ell,t}\rangle \big) \nonumber \\ &=& \langle x_{i,t}\rangle + \frac{\mb{h}_{i\ell}^H\langle \mb{W}_{\ell} \rangle(\mb{y}_{\ell,t} - \mb{H}_{\ell}\langle\mb{x}_{t}\rangle)}{\mb{h}_{i\ell}^H\langle \mb{W}_{\ell} \rangle\mb{h}_{i\ell}}. \end{eqnarray} It is observed in \eqref{q-x-local} that $\mc{CN}\big(z_{i\ell,t};x_{i,t},\check{\sigma}_{i\ell}^2\big)$ with $\check{\sigma}_{i\ell}^2={1}/{\big(\mb{h}_{i\ell}^H\lr{\mb{W}_\ell}\mb{h}_{i\ell}\big)}$ can be interpreted as the likelihood function $p\big(z_{i\ell,t}|x_{i,t};\check{\sigma}_{i\ell}^2\big)$. In this case, the mean-field VB approximation decouples the uplink MIMO channel to the $\ell$-th AP into $K$ parallel AWGN channels $z_{i\ell,t} = x_{i,t} + \mc{CN}\big(0,\check{\sigma}_{i\ell}^2\big)$. 
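The decoupling claim can be checked numerically in an idealized setting (an illustrative sketch, not the algorithm itself): with the true noise precision standing in for $\lr{\mb{W}_\ell}$ and the interference perfectly cancelled, the error $z - x$ reduces to $\mb{h}^H\mb{W}\mb{n}/(\mb{h}^H\mb{W}\mb{h})$ and has variance exactly $1/(\mb{h}^H\mb{W}\mb{h})$.

```python
import numpy as np

rng = np.random.default_rng(2)
N, N0, trials = 4, 0.5, 200_000
h = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
W = np.eye(N) / N0                       # true noise precision, standing in for <W_l>

g = h.conj() @ W
s2_pred = 1.0 / np.real(g @ h)           # claimed variance 1 / (h^H W h)

# Monte Carlo over the effective noise term h^H W n / (h^H W h)
n = np.sqrt(N0 / 2) * (rng.standard_normal((trials, N))
                       + 1j * rng.standard_normal((trials, N)))
err = (n @ g) / (g @ h)
print(np.isclose(np.var(err), s2_pred, rtol=0.05))   # True
```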
It is also observed that $z_{i\ell,t}$ is the local LMMSE estimate of $x_{i,t}$, while the variance $\check{\sigma}_{i\ell}^2$ indicates the reliability of this estimate. The variational distribution $q_{i\ell,t}(x_{i,t})$ is realized by normalizing $p(x_{i,t})\mc{CN}\big(z_{i\ell,t};x_{i,t},\check{\sigma}_{i\ell}^2\big)$. The variational mean $\lr{x_{i,t}} = \mathbb{E}[x_{i,t}|z_{i\ell,t}]$ and variance $\sigma_{x_{i,t}}^2$ can be computed accordingly. Hereafter, we use $\check{x}_{i\ell,t}$ instead of $\lr{x_{i,t}}$ or $\mathbb{E}[x_{i,t}|z_{i\ell,t}]$ to indicate the nonlinear MMSE estimate of $x_{i,t}$ at the $\ell$-th AP. \textit{b) Update $\mb{W}_{\ell}$:} The variational distribution $q(\mb{W}_\ell)$ is obtained by taking the expectation of the conditional in \eqref{factor-W-l} with respect to $\prod_{i=1}^K\prod_{t=1}^T q_{i\ell,t}(x_{i,t})$: \begin{align} \label{q-W-2} q(\mb{W}_\ell) &\propto \mr{exp}\big\{\big\langle\ln p(\mb{Y}_{\ell}|\mb{X},\mb{W}_\ell;\mb{H}_\ell) + \ln p(\mb{W}_\ell) \big\rangle\big\}. \end{align} \begin{figure*} \begin{eqnarray}\label{W-l} \lr{\mb{W}_{\ell}} = (n+T) \Bigg(\mb{W}_{0,\ell}^{-1} + (\mb{Y}_{\ell}-\mb{H}_\ell\mb{X})(\mb{Y}_{\ell}-\mb{H}_\ell\mb{X})^H + \sum_{t=1}^T\mb{H}_\ell\bs{\Sigma}_{\mb{x},t}\mb{H}_\ell^H\Bigg)^{-1}. \end{eqnarray} \hrulefill \end{figure*} Assuming a conjugate prior complex Wishart distribution $\mc{CW}(\mb{W}_{0,\ell},n)$ for $\mb{W}_\ell$, the variational distribution $q(\mb{W}_\ell)$ is also complex Wishart with $n+T$ degrees of freedom. The variational mean $\lr{\mb{W}_\ell}$ is given in \eqref{W-l}, where $\bs{\Sigma}_{\mb{x},t} = \diag(\sigma_{x_{1,t}}^2, \ldots,\sigma_{x_{K,t}}^2)$. The LMMSE-VB algorithm is executed at the $\ell$-th AP by iteratively optimizing $\{q_{i\ell,t}(x_{i,t})\}$ and $q(\mb{W}_\ell)$ via the updates of $\{\lr{x_{i,t}}\}$ and $\lr{\mb{W}_\ell}$. The $\ell$-th AP then sends the LMMSE estimate $z_{i\ell,t}$ and the variance $\check{\sigma}_{i\ell}^2$ to the CPU for centralized decoding. 
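Since each AP reports a pair $(z_{i\ell,t},\check{\sigma}_{i\ell}^2)$, centralized decoding amounts to fusing $L$ Gaussian observations of the same symbol, with each AP's contribution weighted by its reported reliability. A toy sketch of such reliability-weighted fusion (the number of APs, variances, and uniform prior are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
L = 6
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
x_true = qpsk[2]

# Per-AP local estimates z_l = x + CN(0, s2_l), with per-AP reliabilities s2_l
s2 = rng.uniform(0.2, 2.0, size=L)
z = x_true + np.sqrt(s2 / 2) * (rng.standard_normal(L) + 1j * rng.standard_normal(L))

# Decision over the constellation: each AP's squared error is weighted
# by the inverse of its reported variance (uniform prior assumed)
metric = np.array([np.sum(np.abs(z - a) ** 2 / s2) for a in qpsk])
x_map = qpsk[np.argmin(metric)]
print(x_map)
```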
By pre-processing the whole block of $T$ time slots, $\check{\sigma}_{i\ell}^2$ is sent only once for each channel realization. In contrast, if the LMMSE-VB algorithm is executed on a per time slot basis, the variance of the LMMSE estimate $z_{i\ell,t}$ has to be computed and sent for each time slot. \subsubsection{CPU Processing} After collecting the local estimates $z_{i\ell,t}$ and the variance $\check{\sigma}_{i\ell}^2$ from the $L$ APs, the CPU can proceed to decode each of the $K$ symbols independently. Since $z_{i\ell,t} = x_{i,t} + \mc{CN}\big(0,\check{\sigma}_{i\ell}^2\big)$, an approximate posterior distribution $p(x_{i,t}|\{z_{i\ell,t}\};\{\check{\sigma}_{i\ell}^2\})$ can be easily derived. The MAP estimate $\hat{x}_{i,t}$ of $x_{i,t}$ is obtained as \begin{equation}\label{nonlinear-combination} \hat{x}_{i,t} = \arg\max_{x_{i,t} \in \mc{S}} \left(\ln p(x_{i,t})-\sum_{\ell=1}^L \frac{|z_{i\ell,t} - x_{i,t}|^2}{\check{\sigma}_{i\ell}^2}\right). \end{equation} We note that the above nonlinear combination of local estimates and reliability information is significantly different from the linear combination of local estimates in \eqref{linear-combination}. \subsection{Level 2: Local Processing \& Simple Linear Combining} At this level, only local estimates are fed back to the CPU. The LMMSE-VB algorithm described for Level 3 processing can be used to generate the coarse local estimates. However, the local nonlinear MMSE estimate $\check{x}_{i\ell,t}$ is sent instead of the LMMSE estimate $z_{i\ell,t}$ and the variance $\check{\sigma}_{i\ell}^2$. We note that $\check{x}_{i\ell,t}$ can be computed using $z_{i\ell,t}$ and $\check{\sigma}_{i\ell}^2$, but not the reverse. A simple estimate of $x_{i,t}$ can be obtained by taking the average of all the estimates $\check{x}_{i\ell,t}$ as \begin{equation} \hat{x}_{i,t} = \frac{1}{L}\sum_{\ell=1}^L \check{x}_{i\ell,t}. 
\end{equation} The final detected symbol of $x_{i,t}$ is the constellation point that is closest to $\hat{x}_{i,t}$. \section{Numerical Results} \label{sec_numerical_results} This section presents the numerical results comparing the developed VB-based methods for data detection in cell-free systems with the LMMSE filtering methods in \cite{Emil2020Making}. We use a simulation setting and a channel model in urban environments similar to the work in~\cite{Emil2020Making}. In particular, a network area of $1 \times 1$ km$^2$ is considered where the APs are deployed on a square grid and users are randomly distributed. The large-scale fading coefficient of the channel between user-$i$ and AP-$\ell$ (in dB) is given as \begin{equation} \beta_{i\ell} = -30.5 - 36.7\log_{10}(d_{i\ell}) + F_{i\ell}, \end{equation} where $d_{i\ell}$ (in m) is the distance between user-$i$ and AP-$\ell$ and $F_{i\ell}\sim\mathcal{N}(0,16)$ is the shadow fading. The correlation between the shadowing terms from an AP to different users is modeled as \begin{equation} \mbb{E}[F_{i\ell}F_{i'\ell'}] = \begin{cases} 16\times2^{-\delta_{ii'}/9}, & \ell = \ell' \\ 0, & \ell \neq \ell' \end{cases} \end{equation} where $\delta_{ii'}$ (in m) is the distance between user-$i$ and user-$i'$. The receive antennas at each AP are arranged in a uniform linear array with half-wavelength spacing. For spatial correlation, we use the Gaussian local scattering model with a $15^{\circ}$ angular standard deviation~\cite{bjornson2017massive}. We set the noise as $\mathcal{CN}(0,1)$ and vary the transmit power of users. In this work, we compare different data detection methods assuming perfect CSI and QPSK signaling. We assume that each AP is equipped with $4$ antennas, i.e., $N=4$. Fig.~\ref{fig_1} presents the symbol error rate (SER) performance of the two types of methods in a relatively small setting of cell-free systems with $K=16$ and $L=16$. 
As the user transmit power is increased, the VB-based methods attain much lower SER than the MMSE filtering methods. Up to $2$-dB gain is observed at Level 4 and $4$-dB gain is observed at Levels~3 and~2. Fig. \ref{fig_2} presents the SER performance of a cell-free system with $K=40$ and $L=64$. The figure clearly indicates the superior performance of the proposed VB-based methods over the MMSE filtering methods. It is also observed from both figures that the more centralized the signal processing carried out at the CPU, the better the SER performance that can be achieved, especially in systems with a large number of users, e.g., $K=40$. \begin{figure} \centering \includegraphics[width=1\linewidth]{fig/figure_1.eps} \caption{SER performance of the VB-based methods (in \emph{solid} lines) and LMMSE methods (in \emph{dashed} lines) \emph{versus} the user transmit power, with $K=16$, $L=16$, and $N=4$.} \label{fig_1} \end{figure} \begin{figure} \centering \includegraphics[width=1\linewidth]{fig/figure_2.eps} \caption{SER performance of the VB-based methods (in \emph{solid} lines) and LMMSE methods (in \emph{dashed} lines) \emph{versus} the user transmit power, with $K=40$, $L=64$, and $N=4$.} \label{fig_2} \end{figure} \section{Conclusion} In this paper, we have proposed VB-based methods for data detection in cell-free systems at three different levels of AP cooperation. The proposed methods can achieve much lower SER than the linear MMSE signal processing methods. We note that the presented study only considers the case of perfect CSI available at the CPU (for Level 4) and at the APs (for Levels 3 and 2). As an extension of this paper, we are developing novel VB-based methods for data detection with imperfect CSI and joint channel estimation and data detection in cell-free systems. \label{sec_conclusion} \ifCLASSOPTIONcaptionsoff \newpage \fi \section*{Acknowledgment} This work was supported by the U.S. National Science Foundation under Grants ECCS-2146436 and CCF-2225576. 
\bibliographystyle{IEEEtran}
\section{Introduction} In \cite{Ri} Richardson noticed that the {\it Filbert matrices} \begin{equation}\label{eq:filbert} \mathcal F_n=\left(1/F_{i+j+1}\right),\quad 0\le i,j\le n,\quad n=0,1,\ldots, \end{equation} where $F_n,n\ge 0$ is the sequence of Fibonacci numbers, have the property that all elements of the inverse matrices are integers. The corresponding property for the {\it Hilbert matrices} $(1/(i+j+1))$ has been known for a long time, see Choi \cite{Ch}. Richardson gave an explicit formula for the elements of the inverse matrices and proved it using computer algebra. The formula shows a remarkable analogy with Choi's corresponding formula for the elements of the inverse Hilbert matrices in the sense that one shall replace some binomial coefficients $\binom{n}{k}$ by the analogous {\it Fibonomial coefficients} \begin{equation}\label{eq:fibonomial} \binom{n}{k}_{\mathbb F}=\prod_{i=1}^k\frac{F_{n-i+1}}{F_i},\quad 0\le k\le n, \end{equation} with the usual convention that empty products are defined as 1. These coefficients are defined and studied in \cite{Kn} and are integers. The sequence of Fibonacci numbers is $F_0=0, F_1=1,\ldots,$ with the recursion formula $F_{n+1}=F_n+F_{n-1},\;n\ge 1$. The Hilbert matrices are the Hankel matrices $(s_{i+j})$ corresponding to the moment sequence $$ s_n=1/(n+1)=\int_0^1 x^n\,dx, $$ and that the reciprocal matrices have integer entries can easily be explained by the fact that the corresponding orthogonal polynomials, namely the Legendre polynomials, have integer coefficients. See section 4 for details. The purpose of the present paper is to show that $(1/F_{n+2})_{n\ge 0}$ is the moment sequence of a certain discrete probability. Although this is a simple consequence of Binet's formula for $F_n$, it does not seem to have been noticed in the literature, cf. \cite{Ko}. 
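The integrality of the Fibonomial coefficients (\ref{eq:fibonomial}) can be checked directly by exact rational arithmetic (an illustrative computation, not part of the paper):

```python
from fractions import Fraction
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # F_0 = 0, F_1 = 1, F_{n+1} = F_n + F_{n-1}
    return n if n < 2 else fib(n - 1) + fib(n - 2)

def fibonomial(n, k):
    """Fibonomial coefficient: prod_{i=1}^k F_{n-i+1} / F_i (empty product = 1)."""
    r = Fraction(1)
    for i in range(1, k + 1):
        r *= Fraction(fib(n - i + 1), fib(i))
    return r

# All Fibonomial coefficients in this range are integers
assert all(fibonomial(n, k).denominator == 1 for n in range(15) for k in range(n + 1))
print([int(fibonomial(6, k)) for k in range(7)])   # [1, 8, 40, 60, 40, 8, 1]
```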
We find the corresponding probability measure to be \begin{equation}\label{eq:fibmea} \mu=(1-q^2)\sum_{k=0}^\infty q^{2k}\delta_{q^k/\phi}, \end{equation} where we use the notation \begin{equation}\label{eq:golden} \phi=\frac{1+\sqrt{5}}{2},\quad q=\frac{1-\sqrt{5}}{1+\sqrt{5}}=\frac{1}{\phi}-1, \end{equation} and $\delta_a$ denotes the probability measure with mass 1 at the point $a$. The number $\phi$ is called the golden ratio. The corresponding orthogonal polynomials are little $q$-Jacobi polynomials \begin{equation}\label{eq:lqJacobi} p_n(x;a,b;q)={}_2\phi_1\left(\begin{matrix}q^{-n},abq^{n+1}\\aq\end{matrix};q,xq\right), \end{equation} see \cite{G:R}, specialized to the parameters $a=q,b=1$, with $q$ taking the value from (\ref{eq:golden}). To be precise we define \begin{equation}\label{eq:fibpol} p_n(x):=F_{n+1}p_n(x\phi;q,1;q), \end{equation} and these polynomials have integer coefficients, since they can be written \begin{equation}\label{eq:fibpol1} p_n(x)=\sum_{k=0}^n (-1)^{kn-\binom{k}{2}}\tbinom{n}{k}_{\mathbb F} \tbinom{n+k+1}{n}_{\mathbb F}x^k. \end{equation} The orthonormal polynomials with respect to $\mu$ and having positive leading coefficients are given as \begin{equation}\label{eq:fibonp} P_n(x)=(-1)^{\binom{n+1}{2}}\sqrt{F_{2n+2}}p_n(x), \end{equation} so the kernel polynomial $$ K_n(x,y)=\sum_{k=0}^n P_k(x)P_k(y), $$ is a polynomial in $x,y$ with integer coefficients. If we denote by $a_{i,j}^{(n)}$ the coefficient of $x^{i}y^{j}$ in the kernel polynomial, then it is a general fact that the matrix \begin{equation}\label{eq:matrixAn} A_n=(a^{(n)}_{i,j}),\quad 0\le i,j\le n \end{equation} is the inverse of the Hankel matrix of the problem $(s_{i+j})_0^n$, see Theorem \ref{cbthm:A} below. This explains that the elements of the inverse of the matrix $(1/F_{i+j+2})_0^n$ are integers, and we derive a formula for the entries from the orthogonal polynomials. 
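The identity $s_n(\mu)=1/F_{n+2}$ is also easy to check numerically. The following Python sketch is an illustration added for the reader, not part of the mathematical development; the truncation of the geometric series after 200 terms is an arbitrary choice.

```python
from math import sqrt

# Golden ratio and the number q from eq:golden
phi = (1 + sqrt(5)) / 2
q = (1 - sqrt(5)) / (1 + sqrt(5))   # = 1/phi - 1, with -1 < q < 0

def fib(n):
    # Fibonacci numbers F_0 = 0, F_1 = 1, F_2 = 1, ...
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def moment(n, terms=200):
    # n-th moment of mu = (1 - q^2) * sum_k q^{2k} * delta_{q^k/phi},
    # with the infinite series truncated after `terms` terms
    return (1 - q**2) * sum(q**(2*k) * (q**k / phi)**n for k in range(terms))

for n in range(10):
    assert abs(moment(n) - 1 / fib(n + 2)) < 1e-12
```

The terms decay like $|q|^{2k}$ with $q^2\approx 0.146$, so the truncation error is far below the tolerance used.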
The Filbert matrices (\ref{eq:filbert}) are not positive definite but non-singular, and they are the Hankel matrices of the moments of a (real-valued) signed measure with total mass 1. The orthogonal polynomials for this signed measure are the little $q$-Jacobi polynomials \begin{equation}\label{eq:lqJacobi1} p_n(x\phi;1,1;q)=\sum_{k=0}^n(-1)^{kn-\binom{k}{2}} \tbinom{n}{k}_{\mathbb F}\tbinom{n+k}{n}_{\mathbb F}x^k, \end{equation} and a simple modification of the positive definite case leads to Richardson's formula for the entries of the inverse of the Filbert matrices. The two results can be unified in the statement that for each $\a\in\mathbb N=\{1,2,\ldots\}$ the sequence $(F_{\a}/F_{\a+n})_{n\ge 0}$ is a moment sequence of a real-valued measure $\mu_{\a}$ with total mass 1. It is a positive measure when $\a$ is even, but a signed measure when $\a$ is odd. The orthogonal polynomials are little $q$-Jacobi polynomials $p_n(x\phi;q^{\a-1},1;q)$. This is proved in section 3. In section 2 we recall some basic facts about orthogonal polynomials both in the positive definite and in the quasi-definite case, and Theorem \ref{cbthm:A} about the inverse of the Hankel matrices is proved. In section 4 we briefly discuss the matrices $(1/(\a+i+j))_0^n$, where $\a>0$. They are related to Jacobi polynomials transferred to the interval $]0,1[$ and belonging to the parameters $(0,\a-1)$. This leads to a generalization of Choi's result, which corresponds to $\a=1$. After the circulation of a preliminary version of this paper, Ismail has extended the results of section 3 to a one-parameter generalization of the Fibonacci numbers, cf. \cite{Is1}. \section{Orthogonal Polynomials} We start by recalling some simple facts from the theory of orthogonal polynomials, cf. \cite{Ak} or \cite{Is} and in particular \cite{Chi} for the quasi-definite case. \medskip {\it The positive definite case}. 
We consider the set $\mathcal M^*$ of probability measures on $\mathbb R$ with moments of any order and with infinite support. The moment sequence of $\mu\in\mathcal M^*$ is \begin{equation}\label{eq:mom} s_n=s_n(\mu)=\int x^n\,d\mu(x),\quad n=0,1,\ldots, \end{equation} and the corresponding Hankel matrices are given by \begin{equation}\label{eq:Hankel} H_n=\begin{pmatrix} s_0 & s_1 & \cdots & s_n\\ s_1 & s_2 & \cdots & s_{n+1}\\ \vdots & \vdots & & \vdots\\ s_n & s_{n+1} & \cdots & s_{2n} \end{pmatrix},\quad n=0,1,\ldots. \end{equation} The orthonormal polynomials $(P_n)$ for $\mu$ are uniquely determined by the equations \begin{equation}\label{eq:orthpol} \int P_n(x)P_m(x)\,d\mu(x)=\delta_{n,m},\quad n,m\ge 0, \end{equation} and the requirement that $P_n$ is a polynomial of degree $n$ with positive leading coefficient. This coefficient is equal to \begin{equation}\label{eq:leading} \sqrt{D_{n-1}/D_n}, \end{equation} where $D_n=\det H_n$. The reproducing kernel for the polynomials of degree $\le n$ is defined as \begin{equation}\label{eq:kernel} K_n(x,y)=\sum_{k=0}^n P_k(x)P_k(y), \end{equation} and is called the kernel polynomial. It is clear that we can write \begin{equation}\label{eq:A} K_n(x,y)=\sum_{i=0}^n\sum_{j=0}^n a^{(n)}_{i,j}x^{i}y^{j}, \end{equation} where the numbers $a^{(n)}_{i,j}$ are uniquely determined and satisfy $a^{(n)}_{i,j}=a^{(n)}_{j,i}$. If we collect these numbers in an $(n+1)\times(n+1)$-matrix $A_n=(a^{(n)}_{i,j})$, then it is the inverse of the Hankel matrix $H_n$: \begin{cbthm}\label{cbthm:A} $$ A_nH_n=H_nA_n=E_n, $$ where $E_n$ is the unit matrix of order $n+1$. \end{cbthm} {\it Proof}. For $0\le k\le n$ we have \begin{equation}\label{eq:AH} \int x^kK_n(x,y)\,d\mu(x)=\sum_{m=0}^k P_m(y)\int x^kP_m(x)\,d\mu(x), \end{equation} which is a polynomial in $y$ of degree $k$. 
On the other hand we have $$ \int x^kK_n(x,y)\,d\mu(x)= \sum_{j=0}^n(\sum_{i=0}^n s_{k+i}a^{(n)}_{i,j})y^j, $$ and therefore $$ \sum_{i=0}^n s_{k+i}a^{(n)}_{i,j}=0 $$ when $k<j\le n$, and when $j=k$ the sum equals the coefficient of $y^k$ in (\ref{eq:AH}), i.e. $$ \sqrt{D_{k-1}/D_k}\int x^kP_k(x)\,d\mu(x)=\int P_k^2(x)\,d\mu(x)=1. $$ In fact, since $K_n$ reproduces every polynomial of degree $\le n$, the left-hand side of (\ref{eq:AH}) equals $y^k$ for $0\le k\le n$, so $\sum_{i=0}^n s_{k+i}a^{(n)}_{i,j}=\delta_{k,j}$, i.e. $H_nA_n=E_n$; since $H_n$ and $A_n$ are symmetric, also $A_nH_n=(H_nA_n)^T=E_n$. \quad$\square$ \medskip {\it The quasi-definite case}. If $\mu$ is a real-valued signed measure on $\mathbb R$ with total mass 1 and moments of any order, one can still define the moments (\ref{eq:mom}) and the corresponding Hankel matrices (\ref{eq:Hankel}). To define orthogonal polynomials one has to assume that (\ref{eq:Hankel}) is a non-singular matrix for any $n$, i.e. that the determinants satisfy $D_n=\det H_n\ne 0$. On the other hand, if orthogonal polynomials exist with respect to a signed measure, then the Hankel determinants are non-zero. See \cite[Theorem 3.1]{Chi} for details. In this case the orthonormal polynomial $P_n$ is uniquely determined by the requirement that the leading coefficient $\sqrt{D_{n-1}/D_n}$ is either positive or purely imaginary with positive imaginary part. The corresponding kernel polynomial $K_n$ has real coefficients, and Theorem \ref{cbthm:A} remains valid. \section{Fibonacci numbers} The Fibonacci numbers can be given by the formula \begin{equation}\label{eq:Binet} F_n=\frac{1}{\sqrt{5}}(\phi^n-{\hat\phi}^n),\quad n\ge 0 \end{equation} usually called Binet's formula, but it is actually older, see \cite{Kn},\cite{Ko}. Here $$ \phi=\frac{1+\sqrt{5}}{2},\quad \hat\phi=\frac{1-\sqrt{5}}{2}=1-\phi. 
$$ Using the number $q=\hat\phi/\phi$, satisfying $-1<q<0$ and already defined in (\ref{eq:golden}), leads to \begin{equation}\label{eq:help} F_n=\frac{1}{\sqrt{5}}\phi^n(1-q^n),\quad q\phi^2=-1, \end{equation} and for $\a\in\mathbb N$ and $n\ge 0$ $$ \frac{F_{\a}}{F_{\a+n}}=\frac{\sqrt{5}F_{\a}}{{\phi}^{\a+n}}\frac{1}{1-q^{\a+n}} =(1-q^{\a})\sum_{k=0}^\infty (q^k/\phi)^n q^{\a k}, $$ which is the $n$'th moment of the real-valued measure \begin{equation}\label{eq:mup} \mu_{\a}=(1-q^{\a})\sum_{k=0}^\infty q^{\a k}\delta_{q^k/\phi} \end{equation} with total mass 1. When $\a$ is even then $\mu_{\a}$ is a probability measure, but when $\a$ is odd the masses $q^{\a k}$ change sign with the parity of $k$. Note that $\mu_2$ is the measure considered in (\ref{eq:fibmea}). For the Fibonomial coefficients defined in (\ref{eq:fibonomial}) one has $$ \tbinom{n}{k}_{\mathbb F}=1,\; 0\le k\le n\le 2, $$ and they satisfy a recursion formula \begin{equation}\label{eq:fibonomialrec} \tbinom{n}{k}_{\mathbb F}=F_{k-1}\tbinom{n-1}{k}_{\mathbb F} +F_{n-k+1}\tbinom{n-1}{k-1}_{\mathbb F},\;n>k\ge 1, \end{equation} see \cite{Kn}, which shows that the Fibonomial coefficients are integers. From (\ref{eq:fibonomial}) it is also clear that $$ \tbinom{n}{k}_{\mathbb F}=\tbinom{n}{n-k}_{\mathbb F},\quad 0\le k\le n. $$ In \cite[Section 7.3]{G:R} one finds a discussion of the little $q$-Jacobi polynomials defined in (\ref{eq:lqJacobi}), and it is proved that \begin{equation}\label{eq:ortJacobi} \sum_{k=0}^\infty p_n(q^k;a,b;q)p_m(q^k;a,b;q)\frac{(bq;q)_k}{(q;q)_k} (aq)^k=\frac{\delta_{n,m}}{h_n(a,b;q)}, \end{equation} where \begin{equation}\label{eq:norm} h_n(a,b;q)=\frac{(abq;q)_n(1-abq^{2n+1})(aq;q)_n(aq;q)_\infty} {(q;q)_n(1-abq)(bq;q)_n(abq^2;q)_\infty}(aq)^{-n}. 
\end{equation} In \cite{G:R} it is assumed that $0<q,aq<1$, but the derivation shows that it holds for $|q|<1,|a|\le 1, |b|\le 1$, in particular in the case of interest here: $-1<q<0$, $a=q^{\a-1}$, $b=1$, for which we get \begin{equation}\label{eq:ortJacobispec} \sum_{k=0}^\infty p_n(q^k;q^{\a-1},1;q)p_m(q^k;q^{\a-1},1;q) q^{\a k}=\delta_{n,m}\frac{q^{\a n}(q;q)_n^2}{(q^{\a};q)_n^2(1-q^{\a+2n})}. \end{equation} This shows that the polynomials $$ p_n(x\phi;q^{\a-1},1;q) $$ are orthogonal with respect to $\mu_{\a}$ and that $$ \int p_n(x\phi;q^{\a-1},1;q)p_m(x\phi;q^{\a-1},1;q)\,d\mu_{\a}(x)= \delta_{n,m}\frac{(1-q^{\a})q^{\a n}(q;q)_n^2}{(q^{\a};q)_n^2(1-q^{\a+2n})}. $$ To simplify this, apply (\ref{eq:help}) to get $$ \frac{(1-q^{\a})q^{\a n}(q;q)_n^2}{(q^{\a};q)_n^2(1-q^{\a+2n})}= (-1)^{\a n}\tfrac{F_{\a}}{F_{\a+2n}}\left(\prod_{j=0}^{n-1}\tfrac{F_{1+j}} {F_{\a+j}}\right)^2=(-1)^{\a n}\tfrac{F_{\a}}{F_{\a+2n}}\tbinom{\a+n-1}{n}_{\mathbb F}^{-2}. $$ \begin{cbthm}\label{cbthm:integercoef} Let $\a\in\mathbb N$. The polynomials $p_n^{(\a)}(x)$ defined by \begin{equation}\label{eq:poldef} p_n^{(\a)}(x)=\tbinom{\a+n-1}{n}_{\mathbb F}p_n(x\phi;q^{\a-1},1;q) \end{equation} can be written \begin{equation}\label{eq:pol} p_n^{(\a)}(x)=\sum_{k=0}^n (-1)^{kn-\binom{k}{2}}\tbinom{n}{k}_{\mathbb F} \tbinom{\a+n+k-1}{n}_{\mathbb F}x^k, \end{equation} and they satisfy \begin{equation}\label{eq:pol1} \int p_n^{(\a)}(x)p_m^{(\a)}(x)\,d\mu_{\a}(x)=\delta_{n,m}(-1)^{\a n}\tfrac{F_{\a}}{F_{\a+2n}}, \end{equation} so the corresponding orthonormal polynomials are \begin{equation}\label{eq:polorth} P_n^{(\a)}(x)=\sqrt{(-1)^{\a n}F_{\a+2n}/F_{\a}}p_n^{(\a)}(x). \end{equation} \end{cbthm} {\it Proof}. 
By definition, see (\ref{eq:lqJacobi}) \begin{eqnarray*} p_n^{(\a)}(x)&=&\tbinom{\a+n-1}{n}_{\mathbb F}\sum_{k=0}^n\frac{(q^{-n},q^{\a+n};q)_k} {(q,q^{\a};q)_k}(q\phi x)^k\\ &=&\tbinom{\a+n-1}{n}_{\mathbb F}\sum_{k=0}^n{\scriptsize \left[\begin{matrix}n\\k\end{matrix}\right]_q} \frac{(q^{\a+n};q)_k}{(q^{\a};q)_k}(-1)^kq^{\binom{k}{2}-nk}(q\phi x)^k, \end{eqnarray*} where $${\scriptsize \left[\begin{matrix}n\\k\end{matrix}\right]_q}= \frac{(q;q)_n}{(q;q)_k(q;q)_{n-k}} $$ is the $q$-binomial coefficient. Using (\ref{eq:help}) leads to $${\scriptsize \left[\begin{matrix}n\\k\end{matrix}\right]_q}=\binom{n}{k}_{\mathbb F} \phi^{k(k-n)}, $$ hence $$ p_n^{(\a)}(x)=\tbinom{\a+n-1}{n}_{\mathbb F}\sum_{k=0}^n(-1)^k\tbinom{n}{k}_{\mathbb F} (\phi^2q)^{\binom{k+1}{2}-nk}\prod_{j=0}^{k-1}\tfrac{F_{\a+n+j}}{F_{\a+j}}x^k, $$ which by (\ref{eq:help}) can be reduced to (\ref{eq:pol}). \quad$\square$ \begin{cbrem} {\rm The polynomials $p_n^{(\a)}(x)$ for $\a=1$ and $\a=2$ are the polynomials in (\ref{eq:lqJacobi1}) and in (\ref{eq:fibpol1}) respectively.} \end{cbrem} \begin{cbcor}\label{eq:det} For $\a\in\mathbb N$ $$ \det(1/F_{\a+i+j})_0^n=\left((-1)^{\a\binom{n+1}{2}}F_{\a}\prod_{k=1}^n F_{\a+2k}\tbinom{\a+2k-1}{k}_{\mathbb F}^2\right)^{-1}, $$ which is the reciprocal of an integer. \end{cbcor} {\it Proof}. From the general theory it is known that the leading coefficient of the orthonormal polynomial $P_n^{(\a)}$ is $\sqrt{D_{n-1}/D_n}$, where $$ D_n=\det(F_{\a}/F_{\a+i+j})_0^n. $$ From (\ref{eq:pol}) and (\ref{eq:polorth}) we then get $$ D_{n-1}/D_n=(-1)^{\a n}\tfrac{F_{\a+2n}}{F_{\a}} \tbinom{\a+2n-1}{n}_{\mathbb F}^2, $$ hence $$ \frac{1}{D_n}=\prod_{k=1}^n \frac{D_{k-1}}{D_{k}}=(-1)^{\a\binom{n+1}{2}}\tfrac{1}{F_{\a}^n} \prod_{k=1}^n F_{\a+2k} \tbinom{\a+2k-1}{k}_{\mathbb F}^2 $$ and the formula follows. 
\quad$\square$ \begin{cbthm}\label{cbthm:integer1} The $i,j$'th entry of the inverse of the matrix $(1/F_{\a+i+j})_0^n$ is given as \begin{equation}\label{eq:integer1} (-1)^{n(\a+i+j)-\binom{i}{2}-\binom{j}{2}} F_{\a+i+j} \tbinom{\a+n+i}{n-j}_{\mathbb F}\tbinom{\a+n+j}{n-i}_{\mathbb F} \tbinom{\a+i+j-1}{i}_{\mathbb F}\tbinom{\a+i+j-1}{j}_{\mathbb F}. \end{equation} \end{cbthm} {\it Proof.} From Theorem \ref{cbthm:A} we get $$ \left(\left(F_{\a}/F_{\a+i+j}\right)_0^n\right)^{-1}= \left(a_{i,j}^{(n)}(\a)\right)_0^n, $$ where $a_{i,j}^{(n)}(\a)$ is the coefficient of $x^{i}y^{j}$ in the kernel polynomial $K_n(x,y)$ for the orthonormal polynomials $P^{(\a)}_n$. Inserting the expressions (\ref{eq:pol}) and (\ref{eq:polorth}) in the kernel polynomial and changing the order of summation gives $$ F_{\a} a^{(n)}_{i,j}(\a)=\sum_{k=\max(i,j)}^n C^{(\a)}(k;i,j), $$ where, for $k\ge i,j$, we have defined \begin{equation}\label{eq:sum} C^{(\a)}(k;i,j):=(-1)^{k(\a+i+j)-\binom{i}{2}-\binom{j}{2}} F_{\a+2k}\tbinom{k}{i}_{\mathbb F}\tbinom{k}{j}_{\mathbb F} \tbinom{\a+k+i-1}{k}_{\mathbb F}\tbinom{\a+k+j-1}{k}_{\mathbb F}. \end{equation} To prove that this expression can be summed to give (\ref{eq:integer1}), we use induction on $n$. By symmetry we can always assume $i\ge j$. The starting step $n=k=i\ge j$ is easy and is left to the reader. For the induction step let $R^{(\a)}(n;i,j)$ denote the expression (\ref{eq:integer1}). It has to be established that $$ R^{(\a)}(n+1;i,j)-R^{(\a)}(n;i,j)=C^{(\a)}(n+1;i,j). 
$$ The left-hand side of this expression can be written $$ (-1)^{(n+1)(\a+i+j)-\binom{i}{2}-\binom{j}{2}}F_{\a+i+j} \tbinom{\a+i+j-1}{i}_{\mathbb F}\tbinom{\a+i+j-1}{j}_{\mathbb F} T, $$ where $$ T=\tbinom{\a+n+1+i}{n+1-j}_{\mathbb F}\tbinom{\a+n+1+j}{n+1-i}_{\mathbb F}- (-1)^{\a+i+j}\tbinom{\a+n+i}{n-j}_{\mathbb F}\tbinom{\a+n+j}{n-i}_{\mathbb F} $$ $$=\frac{(F_{\a+n+i}\cdots F_{\a+i+j+1})(F_{\a+n+j}\cdots F_{\a+i+j+1})} {(F_1\cdots F_{n+1-j})(F_1\cdots F_{n+1-i})}\,\cdot $$ $$ \left[F_{\a+n+i+1}F_{\a+n+j+1}-(-1)^{\a+i+j} F_{n+1-i}F_{n+1-j}\right]. $$ By Lemma \ref{cbthm:fiblemma} below (with $n$ replaced by $n+1$), the expression in brackets equals $F_{\a+2n+2}F_{\a+i+j}$, and now it is easy to complete the proof. \quad$\square$ \begin{cblem}\label{cbthm:fiblemma} For $n\ge i,j\ge 0$ and $\a\ge 0$ the following formula holds \begin{equation}\label{eq:fiblemma} F_{\a+2n}F_{\a+i+j}=F_{\a+n+i}F_{\a+n+j}-(-1)^{\a+i+j}F_{n-i}F_{n-j}. \end{equation} \end{cblem} {\it Proof.} Using Binet's formula, the right-hand side of (\ref{eq:fiblemma}) multiplied with 5 equals $$ (\phi^{\a+n+i}-{\hat\phi}^{\a+n+i})(\phi^{\a+n+j}-{\hat\phi}^{\a+n+j}) -(-1)^{\a+i+j}(\phi^{n-i}-{\hat\phi}^{n-i})(\phi^{n-j}-{\hat\phi}^{n-j}). $$ Using $\phi\hat{\phi}=-1$ one gets after some simplification $$ (\phi^{\a+2n}-{\hat\phi}^{\a+2n})(\phi^{\a+i+j}-{\hat\phi}^{\a+i+j}), $$ which establishes the formula. 
\quad$\square$ \begin{cbrem} {\rm For $\a=1$ the expression (\ref{eq:integer1}) reduces to $$ (-1)^{n(i+j+1)-\binom{i}{2}-\binom{j}{2}} F_{i+j+1} \tbinom{n+i+1}{n-j}_{\mathbb F}\tbinom{n+j+1}{n-i}_{\mathbb F} \tbinom{i+j}{i}_{\mathbb F}^2, $$ which is the expression found by Richardson \cite{Ri}, except that he expressed the sign in a different but equivalent manner.} \end{cbrem} \section{The Hilbert matrices} For $\a>0$ the matrices \begin{equation}\label{eq:Hilbert} \mathcal H_n^{(\a)}=\left(\a/(\a+i+j)\right)_0^n,\quad n=0,1,\ldots, \end{equation} are the Hankel matrices for the moment sequence $$ s_n^{(\a)}=\a\int_0^1 x^nx^{\a-1}\,dx=\frac{\a}{\a+n},\quad n=0,1,\ldots $$ of the measure $\sigma_{\a}=\a x^{\a-1}1_{]0,1[}(x)\,dx$. The corresponding orthogonal polynomials are easily seen to be \begin{equation}\label{eq:Legendre} r_n^{(\a)}(x)=\frac{1}{n!}x^{-\alpha+1}D^n\;[x^{\a-1+n}(1-x)^n]= (-1)^n\sum_{k=0}^n\tbinom{n}{k}\tbinom{\a-1+n}{k}(x-1)^k x^{n-k}, \end{equation} since they are Jacobi polynomials transferred to $]0,1[$, cf. \cite{A:A:R}. Using the binomial formula for $(x-1)^k$ we find $$ r_n^{(\a)}(x)=(-1)^n\sum_{j=0}^n (-1)^j x^{n-j}c_j, $$ where \begin{eqnarray*} c_j&=&\sum_{k=j}^n \tbinom{k}{j}\tbinom{n}{k}\tbinom{\a-1+n}{k} =\sum_{l=0}^{n-j}\tbinom{j+l}{j}\tbinom{n}{j+l}\tbinom{\a-1+n}{j+l}\\ &=&\tbinom{n}{j}\tbinom{\a-1+n}{j} {}_2F_1{\scriptsize\left(\begin{matrix} -n+j,-n-\a+j+1\\ j+1\end{matrix};1\right)}=\tbinom{n}{j}\tbinom{2n+\a-j-1}{n}, \end{eqnarray*} where the ${}_2F_1$ is summed by the Chu-Vandermonde formula, cf. \cite[p. 67]{A:A:R}. This gives \begin{equation}\label{eq:Legendre1} r_n^{(\a)}(x)=\sum_{j=0}^n(-1)^j\tbinom{n}{j}\tbinom{\a+n+j-1}{n}x^j. 
\end{equation} The orthonormal polynomials with positive leading coefficients are given as $$ R_n^{(\a)}(x)=(-1)^n\sqrt{\frac{\a+2n}{\a}}r_n^{(\a)}(x), $$ so the corresponding kernel polynomials have coefficients $a_{i,j}^{(n)}(\a)$ which by Theorem \ref{cbthm:A} satisfy \begin{equation}\label{eq:nyformel} \a a^{(n)}_{i,j}(\a)=(-1)^{i+j}\sum_{k=\max{(i,j)}}^n (\a+2k)\tbinom{k}{i}\tbinom{k}{j}\tbinom{\a+k+i-1}{k} \tbinom{\a+k+j-1}{k}. \end{equation} \begin{cbthm}\label{cbthm:inversehilb} The $i,j$'th element of the inverse matrix of $\left(1/(\a+i+j)\right)_0^n$ is given as \begin{equation}\label{eq:choi} (-1)^{i+j}(\a+i+j)\tbinom{\a+n+i}{n-j}\tbinom{\a+n+j}{n-i} \tbinom{\a+i+j-1}{i}\tbinom{\a+i+j-1}{j}. \end{equation} In particular they are integers for $\a\in\mathbb N$. Furthermore, \begin{equation}\label{eq:hilbalphadet} \det\left(1/(\a+i+j)\right)_0^n=\left(\a\prod_{k=1}^n (\a+2k) \tbinom{\a+2k-1}{k}^2\right)^{-1}. \end{equation} \end{cbthm} {\it Proof}. Let $R(n;i,j)$ denote the number given in (\ref{eq:choi}), and define $$ C(k;i,j)=(-1)^{i+j}(\a+2k)\tbinom{k}{i}\tbinom{k}{j}\tbinom{\a+k+i-1}{k} \tbinom{\a+k+j-1}{k}, \quad k\ge i,j. $$ We shall prove that $$ R(n;i,j)=\sum_{k=\max(i,j)}^n C(k;i,j) $$ by induction on $n$, and we may assume $i\ge j$. This is easy for $n=k=i$ and we shall establish \begin{equation}\label{eq:induc} R(n+1;i,j)-R(n;i,j)=C(n+1;i,j). \end{equation} The left-hand side of this expression can be written $$ (-1)^{i+j}(\a+i+j)\tbinom{\a+i+j-1}{i}\tbinom{\a+i+j-1}{j}T, $$ where $$ T=\tbinom{\a+n+1+i}{n+1-j}\tbinom{\a+n+1+j}{n+1-i}-\tbinom{\a+n+i}{n-j} \tbinom{\a+n+j}{n-i} $$ $$ =\tfrac{((\a+n+i)\cdots(\a+i+j+1))((\a+n+j)\cdots(\a+i+j+1))} {(n+1-j)!(n+1-i)!}\,\cdot $$ $$ [(\a+n+1+i)(\a+n+1+j)-(n+1-j)(n+1-i)]. $$ The quantity in brackets equals $(\a+2n+2)(\a+i+j)$, and now it is easy to complete the proof of (\ref{eq:induc}). 
The leading coefficient of $R_n^{(\a)}(x)$ is $$ \sqrt{\frac{D_{n-1}}{D_n}}=\sqrt{\frac{\a+2n}{\a}}\binom{\a+2n-1}{n}, $$ where $$ D_n=\det\left(\a/(\a+i+j)\right)_0^n=\a^{n+1}\det\left(1/(\a+i+j)\right)_0^n. $$ Therefore $$ \frac{1}{D_n}=\prod_{k=1}^n \frac{D_{k-1}}{D_k}=\frac{1}{\a^n} \prod_{k=1}^n (\a+2k)\tbinom{\a+2k-1}{k}^2, $$ which proves (\ref{eq:hilbalphadet}). \quad$\square$ \medskip Replacing $x$ by $1-x$, we see that $r_n^{(\a)}(1-x)$ are orthogonal polynomials with respect to the probability measure $\a(1-x)^{\a-1}1_{]0,1[}(x)\,dx$. The corresponding moment sequence is \begin{equation}\label{eq:binomalpha} s_n=\frac{1}{\binom{\a+n}{n}}, \end{equation} and the corresponding orthonormal polynomials are $\sqrt{(\a+2n)/\a}\;r_n^{(\a)}(1-x)$. Therefore \begin{equation}\label{eq:Knfinal} K_n(x,y)=\sum_{k=0}^n \frac{\a+2k}{\a}r_k^{(\a)}(1-x)r_k^{(\a)}(1-y), \end{equation} showing that the coefficient of $x^{i}y^{j}$ in $\a K_n(x,y)$ is an integer when $\a\in\mathbb N$. This yields \begin{cbthm}\label{cbthm:binomalpha} Let $\a\in\mathbb N$. The inverse of the matrix \begin{equation}\label{eq:hankelbinom} \left(\frac{1}{\a\binom{\a+i+j}{\a}}\right)_0^n \end{equation} has integer entries. \end{cbthm} It is not difficult to prove that $$ r_n^{(\a)}(1-x)=\sum_{k=0}^n (-1)^{n-k}\tbinom{n}{k}\tbinom{\a+n+k-1}{k}x^k, $$ and it follows that the entries of the inverse of (\ref{eq:hankelbinom}) are given as $$ (-1)^{i+j}\sum_{k=\max(i,j)}^n (\a+2k)\tbinom{k}{i}\tbinom{k}{j} \tbinom{\a+k+i-1}{i}\tbinom{\a+k+j-1}{j}. $$ This formula holds of course for any $\a>0$. The results of this section for $\a=1,2$ have been treated in the survey paper \cite{Be}, written in Danish. For $\a=1$ the formula for the elements of the inverse of $\mathcal H_n^{(\a)}$ was given in \cite{Ch}, while the formula for its determinant goes back to Hilbert in \cite{Hi}. In this case the polynomials $r_n^{(1)}(x)$ are the Legendre polynomials for the interval $[0,1]$, cf. \cite[Section 7.7]{A:A:R}. 
These polynomials have successfully been used in the proof of the irrationality of $\zeta(3)$. For $\a=2$ we have $(\a+2k)/\a=1+k$, so the coefficient of $x^{i}y^{j}$ in (\ref{eq:Knfinal}) is an integer. In this case Theorem \ref{cbthm:binomalpha} can be sharpened: The inverse of the matrix $\left(1/\tbinom{2+i+j}{2}\right)_0^n$ has integer entries. This result is also given in \cite{Ri}. 
It has to be established that $$ R^{(\a)}(n+1;i,j)-R^{(\a)}(n;i,j)=C^{(\a)}(n+1;i,j). $$ The left-hand side of this expression can be written $$ (-1)^{(n+1)(\a+i+j)-\binom{i}{2}-\binom{j}{2}}F_{\a+i+j} \tbinom{\a+i+j-1}{i}_{\mathbb F}\tbinom{\a+i+j-1}{j}_{\mathbb F} T, $$ where $$ T=\tbinom{\a+n+1+i}{n+1-j}_{\mathbb F}\tbinom{\a+n+1+j}{n+1-i}_{\mathbb F}- (-1)^{\a+i+j}\tbinom{\a+n+i}{n-j}_{\mathbb F}\tbinom{\a+n+j}{n-i}_{\mathbb F} $$ $$=\frac{(F_{\a+n+i}\cdots F_{\a+i+j+1})(F_{\a+n+j}\cdots F_{\a+i+j+1})} {(F_1\cdots F_{n+1-j})(F_1\cdots F_{n+1-i})}\,\cdot $$ $$ \left[F_{\a+n+i+1}F_{\a+n+j+1}-(-1)^{\a+i+j} F_{n+1-i}F_{n+1-j}\right]. $$ By Lemma \ref{cbthm:fiblemma} below (with $n$ replaced by $n+1$), the expression in brackets equals $F_{\a+2n+2}F_{\a+i+j}$, and now it is easy to complete the proof. \quad$\square$ \begin{cblem}\label{cbthm:fiblemma} For $n\ge i,j\ge 0$ and $\a\ge 0$ the following formula holds \begin{equation}\label{eq:fiblemma} F_{\a+2n}F_{\a+i+j}=F_{\a+n+i}F_{\a+n+j}-(-1)^{\a+i+j}F_{n-i}F_{n-j}. \end{equation} \end{cblem} {\it Proof.} Using Binet's formula, the right-hand side of (\ref{eq:fiblemma}) multiplied with 5 equals $$ (\phi^{\a+n+i}-{\hat\phi}^{\a+n+i})(\phi^{\a+n+j}-{\hat\phi}^{\a+n+j}) -(-1)^{\a+i+j}(\phi^{n-i}-{\hat\phi}^{n-i})(\phi^{n-j}-{\hat\phi}^{n-j}). $$ Using $\phi\hat{\phi}=-1$ one gets after some simplification $$ (\phi^{\a+2n}-{\hat\phi}^{\a+2n})(\phi^{\a+i+j}-{\hat\phi}^{\a+i+j}), $$ which establishes the formula. 
\quad$\square$ \begin{cbrem} {\rm For $\a=1$ the expression (\ref{eq:integer1}) reduces to $$ (-1)^{n(i+j+1)-\binom{i}{2}-\binom{j}{2}} F_{i+j+1} \tbinom{n+i+1}{n-j}_{\mathbb F}\tbinom{n+j+1}{n-i}_{\mathbb F} \tbinom{i+j}{i}_{\mathbb F}^2, $$ which is the expression found by Richardson \cite{Ri}, except that he expressed the sign in a different but equivalent manner.} \end{cbrem} \section{The Hilbert matrices} For $\a>0$ the matrices \begin{equation}\label{eq:Hilbert} \mathcal H_n^{(\a)}=\left(\a/(\a+i+j)\right)_0^n,\quad n=0,1,\ldots, \end{equation} are the Hankel matrices for the moment sequence $$ s_n^{(\a)}=\a\int_0^1 x^nx^{\a-1}\,dx=\frac{\a}{\a+n},\quad n=0,1,\ldots $$ of the measure $\sigma_{\a}=\a x^{\a-1}1_{]0,1[}(x)\,dx$. The corresponding orthogonal polynomials are easily seen to be \begin{equation}\label{eq:Legendre} r_n^{(\a)}(x)=\frac{1}{n!}x^{-\alpha+1}D^n\;[x^{\a-1+n}(1-x)^n]= (-1)^n\sum_{k=0}^n\tbinom{n}{k}\tbinom{\a-1+n}{k}(x-1)^k x^{n-k}, \end{equation} since they are Jacobi polynomials transferred to $]0,1[$, cf. \cite{A:A:R}. Using the binomial formula for $(x-1)^k$ we find $$ r_n^{(\a)}(x)=(-1)^n\sum_{j=0}^n (-1)^j x^{n-j}c_j, $$ where \begin{eqnarray*} c_j&=&\sum_{k=j}^n \tbinom{k}{j}\tbinom{n}{k}\tbinom{\a-1+n}{k} =\sum_{l=0}^{n-j}\tbinom{j+l}{j}\tbinom{n}{j+l}\tbinom{\a-1+n}{j+l}\\ &=&\tbinom{n}{j}\tbinom{\a-1+n}{j} {}_2F_1{\scriptsize\left(\begin{matrix} -n+j,-n-\a+j+1\\ j+1\end{matrix};1\right)}=\tbinom{n}{j}\tbinom{2n+\a-j-1}{n}, \end{eqnarray*} where the ${}_2F_1$ is summed by the Chu-Vandermonde formula, cf. \cite[p. 67]{A:A:R}. This gives \begin{equation}\label{eq:Legendre1} r_n^{(\a)}(x)=\sum_{j=0}^n(-1)^j\tbinom{n}{j}\tbinom{\a+n+j-1}{n}x^j.
\end{equation} The orthonormal polynomials with positive leading coefficients are given as $$ R_n^{(\a)}(x)=(-1)^n\sqrt{\frac{\a+2n}{\a}}r_n^{(\a)}(x), $$ so the corresponding kernel polynomials have coefficients $a_{i,j}^{(n)}(\a)$ which by Theorem \ref{cbthm:A} satisfy \begin{equation}\label{eq:nyformel} \a a^{(n)}_{i,j}(\a)=(-1)^{i+j}\sum_{k=\max{(i,j)}}^n (\a+2k)\tbinom{k}{i}\tbinom{k}{j}\tbinom{\a+k+i-1}{k} \tbinom{\a+k+j-1}{k}. \end{equation} \begin{cbthm}\label{cbthm:inversehilb} The $i,j$'th element of the inverse matrix of $\left(1/(\a+i+j)\right)_0^n$ is given as \begin{equation}\label{eq:choi} (-1)^{i+j}(\a+i+j)\tbinom{\a+n+i}{n-j}\tbinom{\a+n+j}{n-i} \tbinom{\a+i+j-1}{i}\tbinom{\a+i+j-1}{j}. \end{equation} In particular they are integers for $\a\in\mathbb N$. Furthermore, \begin{equation}\label{eq:hilbalphadet} \det\left(1/(\a+i+j)\right)_0^n=\left(\a\prod_{k=1}^n (\a+2k) \tbinom{\a+2k-1}{k}^2\right)^{-1}. \end{equation} \end{cbthm} {\it Proof}. Let $R(n;i,j)$ denote the number given in (\ref{eq:choi}), and define $$ C(k;i,j)=(-1)^{i+j}(\a+2k)\tbinom{k}{i}\tbinom{k}{j}\tbinom{\a+k+i-1}{k} \tbinom{\a+k+j-1}{k}, \quad k\ge i,j. $$ We shall prove that $$ R(n;i,j)=\sum_{k=\max(i,j)}^n C(k;i,j) $$ by induction in $n$ and can assume $i\ge j$. This is easy for $n=k=i$ and we shall establish \begin{equation}\label{eq:induc} R(n+1;i,j)-R(n;i,j)=C(n+1;i,j). \end{equation} The left-hand side of this expression can be written $$ (-1)^{i+j}(\a+i+j)\tbinom{\a+i+j-1}{i}\tbinom{\a+i+j-1}{j}T, $$ where $$ T=\tbinom{\a+n+1+i}{n+1-j}\tbinom{\a+n+1+j}{n+1-i}-\tbinom{\a+n+i}{n-j} \tbinom{\a+n+j}{n-i} $$ $$ =\tfrac{((\a+n+i)\cdots(\a+i+j+1))((\a+n+j)\cdots(\a+i+j+1))} {(n+1-j)!(n+1-i)!}\,\cdot $$ $$ [(\a+n+1+i)(\a+n+1+j)-(n+1-j)(n+1-i)]. $$ The quantity in brackets equals $(\a+2n+2)(\a+i+j)$, and now it is easy to complete the proof of (\ref{eq:induc}). 
The leading coefficient of $R_n^{(\a)}(x)$ is $$ \sqrt{\frac{D_{n-1}}{D_n}}=\sqrt{\frac{\a+2n}{\a}}\binom{\a+2n-1}{n}, $$ where $$ D_n=\det\left(\a/(\a+i+j)\right)_0^n=\a^{n+1}\det\left(1/(\a+i+j)\right)_0^n. $$ Therefore $$ \frac{1}{D_n}=\prod_{k=1}^n \frac{D_{k-1}}{D_k}=\frac{1}{\a^n} \prod_{k=1}^n (\a+2k)\tbinom{\a+2k-1}{k}^2, $$ which proves (\ref{eq:hilbalphadet}). \quad$\square$ \medskip Replacing $x$ by $1-x$, we see that $r_n^{(\a)}(1-x)$ are orthogonal polynomials with respect to the probability measure $\a(1-x)^{\a-1}1_{]0,1[}(x)\,dx$. The corresponding moment sequence is \begin{equation}\label{eq:binomalpha} s_n=\frac{1}{\binom{\a+n}{n}}, \end{equation} and the corresponding orthonormal polynomials are $\sqrt{(\a+2n)/\a}\;r_n^{(\a)}(1-x)$. Therefore \begin{equation}\label{eq:Knfinal} K_n(x,y)=\sum_{k=0}^n \frac{\a+2k}{\a}r_k^{(\a)}(1-x)r_k^{(\a)}(1-y), \end{equation} showing that the coefficient to $x^{i}y^{j}$ in $\a K_n(x,y)$ is an integer when $\a\in\mathbb N$. This yields \begin{cbthm}\label{cbthm:binomalpha} Let $\a\in\mathbb N$. The inverse of the matrix \begin{equation}\label{eq:hankelbinom} \left(\frac{1}{\a\binom{\a+i+j}{\a}}\right)_0^n \end{equation} has integer entries. \end{cbthm} It is not difficult to prove that $$ r_n^{(\a)}(1-x)=\sum_{k=0}^n (-1)^{n-k}\tbinom{n}{k}\tbinom{\a+n+k-1}{k}x^k, $$ and it follows that the entries of the inverse of (\ref{eq:hankelbinom}) are given as $$ (-1)^{i+j}\sum_{k=\max(i,j)}^n (\a+2k)\tbinom{k}{i}\tbinom{k}{j} \tbinom{\a+k+i-1}{i}\tbinom{\a+k+j-1}{j}. $$ This formula holds of course for any $\a>0$. The results of this section for $\a=1,2$ have been treated in the survey paper \cite{Be}, written in Danish. For $\a=1$ the formula for the elements of the inverse of $\mathcal H_n^{(\a)}$ was given in \cite{Ch}, but goes at least back to Collar \cite{Co}, while the formula for its determinant goes back to Hilbert in \cite{Hi}. 
In this case the polynomials $r_n^{(1)}(x)$ are the Legendre polynomials for the interval $[0,1]$, cf. \cite[Section 7.7]{A:A:R}. These polynomials have successfully been used in the proof of the irrationality of $\zeta(3)$. For $\a=2$ we have $(\a+2k)/\a=1+k$, so the coefficient to $x^{i}y^{j}$ in (\ref{eq:Knfinal}) is an integer. In this case Theorem \ref{cbthm:binomalpha} can be sharpened: The inverse of the matrix $\left(1/\tbinom{2+i+j}{2}\right)_0^n$ has integer entries. This result is also given in \cite{Ri}. \medskip {\bf Added June 2007} A result equivalent to Theorem \ref{cbthm:A} is given by Collar in \cite{Co}. Denoting by $$ M_n=(p_{ij}),\quad 0\le i,j\le n $$ the matrix of coefficients of the orthonormal polynomials, i.e. $$ P_i(x)=\sum_{j=0}^n p_{ij}x^j,\quad i=0,1,\ldots,n, $$ where $p_{ij}=0$ for $i<j$, then the orthonormality can be expressed as the matrix equation $M_nH_nM_n^t=I_n$, hence \begin{equation}\label{eq:collar} H_n^{-1}=M_n^t M_n. \end{equation} Collar uses (\ref{eq:collar}) to obtain formula (\ref{eq:choi}) and states: \lq\lq Equation (\ref{eq:collar}), which provides an elegant method for the computation of the reciprocal of a moment matrix, is due to Dr A.\ C.\ Aitken. The author is grateful to Dr. Aitken for permission to describe the method and for many helpful suggestions.\rq\rq The paper by Collar is not mentioned in Choi's paper \cite{Ch} and was not included in the list of references in the first version of this paper. In \cite{A:B} the authors have defined a $q$-analogue of the Hilbert matrix for any complex $q$ different from the roots of unity and have proved a $q$-analogue of (\ref{eq:choi}). When $q=(1-\sqrt{5})/(1+\sqrt{5})$ one can recover the results about the Filbert matrices, and for $q=-e^{-2\theta}$, $\theta>0$, the results of Ismail \cite{Is1} about Hankel matrices of generalized Fibonacci numbers.
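Both closed-form inverses stated above are easy to check by exact rational arithmetic. The following sketch (the helper names and the tested parameter ranges are ours, not part of the paper) multiplies the Hankel matrix $(1/F_{\a+i+j})_0^n$, respectively $(1/(\a+i+j))_0^n$, by the claimed inverse from Theorem \ref{cbthm:integer1}, respectively (\ref{eq:choi}), and verifies that the product is the identity matrix:

```python
from fractions import Fraction
from math import comb

def fib(n):
    """Fibonacci numbers with F_0 = 0, F_1 = 1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def fibonomial(n, k):
    """Fibonomial coefficient; an integer by the recursion (eq:fibonomialrec)."""
    if k < 0 or k > n:
        return 0
    num = den = 1
    for j in range(1, k + 1):
        num *= fib(n - k + j)
        den *= fib(j)
    return num // den

def filbert_inv(n, a, i, j):
    """Claimed (i,j) entry of the inverse of (1/F_{a+i+j})_0^n, eq. (integer1)."""
    s = (-1) ** (n * (a + i + j) - i * (i - 1) // 2 - j * (j - 1) // 2)
    return (s * fib(a + i + j)
            * fibonomial(a + n + i, n - j) * fibonomial(a + n + j, n - i)
            * fibonomial(a + i + j - 1, i) * fibonomial(a + i + j - 1, j))

def hilbert_inv(n, a, i, j):
    """Claimed (i,j) entry of the inverse of (1/(a+i+j))_0^n, eq. (choi)."""
    return ((-1) ** (i + j) * (a + i + j)
            * comb(a + n + i, n - j) * comb(a + n + j, n - i)
            * comb(a + i + j - 1, i) * comb(a + i + j - 1, j))

def is_inverse(entry, inv, n, a):
    """Multiply the Hankel matrix by the claimed inverse; compare with identity."""
    H = [[entry(a + i + j) for j in range(n + 1)] for i in range(n + 1)]
    return all(sum(H[i][k] * inv(n, a, k, j) for k in range(n + 1))
               == (1 if i == j else 0)
               for i in range(n + 1) for j in range(n + 1))

for n in range(5):
    for a in range(1, 4):
        assert is_inverse(lambda m: Fraction(1, fib(m)), filbert_inv, n, a)
        assert is_inverse(lambda m: Fraction(1, m), hilbert_inv, n, a)
```

Since the entries produced by `filbert_inv` and `hilbert_inv` are manifestly integers, the check also illustrates the integrality statements of the two theorems for the tested range $0\le n\le 4$, $1\le\a\le 3$.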
\section{Introduction} \label{intro} \setcounter{equation}{0} Seiberg and Witten demonstrated~\cite{SW} that the dual Meissner effect takes place in ${\cal N}=2\;$ Yang--Mills theories. Shortly after~\cite{SW}, Hanany, Strassler and Zaffaroni discussed \cite{HSZ} the formation and structure of the chromoelectric flux tubes in the Seiberg--Witten solution. Their analysis showed that the details of the Seiberg--Witten confinement are quite different from those we expect in QCD-like theories. The confining strings in the Seiberg--Witten solution are, in fact, Abelian strings of the Abrikosov--Nielsen--Olesen type \cite{ANO}. The ``hadronic'' spectrum in the Seiberg--Witten model is much richer than that in QCD (for a review see e.g. \cite{MattS}). The discovery of non-Abelian strings \cite{HT1,ABEKY} and non-Abelian confined monopoles \cite{SYmon,HT2} was a significant step towards QCD. They were originally found in ${\cal N}=2\;$ models which are quite distant relatives of QCD. To get closer to QCD one needs to have less supersymmetry. Another conspicuous feature of ${\cal N}=2\;$ Yang--Mills theories which drastically distinguishes them from QCD-like theories is the presence of scalar and spinor fields in the adjoint representation. To advance along these lines it is highly desirable to break ${\cal N}=2\;$ down to ${\cal N}=1\;$ and get rid of the adjoint superfield by making it very heavy, without destroying non-Abelian strings and confined monopoles. A partial success in this direction was reported in Ref.~\cite{SYnone}. Adding a mass term to the adjoint superfield of the type $\delta{\cal W} =\mu {\cal A}^2$ breaks ${\cal N}=2\;$. As long as the mass parameter $\mu$ is kept finite, the non-Abelian string in this ${\cal N}=1\;$ model is well-defined and supports confined monopoles. However, at $\mu\to \infty$, as the adjoint superfield becomes heavy and we approach the limit of ${\cal N}=1\;$ SQCD, an infrared problem develops.
This is due to the fact that in ${\cal N}=1\;$ SQCD defined in a standard way the vacuum manifold is not an isolated point; rather, there exists a flat direction (a Higgs branch). On the Higgs branch there are no finite-size BPS strings \cite{ruba}. Thus one arrives at a dilemma: either one has to abandon the attempt to decouple\,\footnote{Below we use the word decouple in two opposite meanings: first, if a field becomes very heavy and disappears from the physical spectrum, so that it can be integrated out; second, if all coupling constants of a certain field vanish so that it becomes sterile. With regards to the adjoint fields decoupling means making them very heavy. With regards to the meson superfield $M$ decoupling means sterility. Each time it is perfectly clear from the context what is meant. We hope this will cause no confusion.} the adjoint superfield, or, if this decoupling is performed, confining non-Abelian strings cease to exist \cite{SYnone}. In this paper we report that a relatively insignificant modification of the benchmark ${\cal N}=2\;$ model solves the problem. All we have to do is to add a neutral meson superfield $M$ coupled to the quark superfields through a superpotential term. Acting together with the mass term of the adjoint superfield, it breaks ${\cal N}=2\;$ down to ${\cal N}=1\;$. The limit $\mu\to\infty$ in which the adjoint superfield completely decouples, becomes well-defined. No flat directions emerge. The limiting theory is ${\cal N}=1\;$ SQCD supplemented by the meson superfield. We show that it supports non-Abelian strings. The junctions of these strings present confined monopoles, or, better to say, what becomes of the monopoles in the theory where there are no adjoint scalar fields. There is a continuous path following which one can trace the evolution in its entirety: from the 't Hooft--Polyakov monopoles which do not exist without the adjoint scalars to the confined monopoles in the adjoint-free environment. 
As far as we know, this is the first demonstration (in a fully controllable weak-coupling regime) of the Meissner effect in ${\cal N}=1\;$ theories without adjoint superfields. If a dual of ${\cal N} =1$ SQCD with the additional meson superfield could be found, in this dual theory our demonstration would be equivalent to the proof of the non-Abelian dual Meissner effect. The organization of the paper is as follows. In Sect.~\ref{bulk} we review the benchmark ${\cal N}=2\;$ super-Yang--Mills theory with the gauge group U($N$) and $N_f=N$ quark flavors. We introduce the Fayet--Iliopoulos (FI) term \cite{FI} in the U(1) subgroup, crucial for the string construction, and a meson superfield $M$, coupled to the quark superfields through a cubic superpotential. We add the mass terms to the adjoint superfields. The latter two terms in the superpotential break ${\cal N}=2\;$. In Sect.~\ref{mtheory} we discuss the spectrum of elementary excitations, in particular, in the limit $\mu\to\infty$. We show that the limiting theory is essentially ${\cal N}=1\;$ SQCD. The only distinction is the meson superfield which survives in the limit $\mu\to\infty$. The vacuum of this theory, which will be referred to as the $M$ model, is isolated (i.e. there are no flat directions). As usual, we construct non-Abelian strings and determine the world-sheet theory. This is the content of Sect.~\ref{strings}. One of the crucial points of our analysis is the determination of the fermion zero modes. To count these modes we engineer an appropriate index theorem (Sect.~\ref{ferm}). This theorem applies to the two-dimensional Dirac operator which we encounter in the string analysis. In Sect.~\ref{indext} we derive the index $\nu = 4N$. We observe four supertranslational zero modes and $4(N-1)$ superorientational modes.
In Sect.~\ref{evol} we discuss how the monopoles evolve when we vary the adjustable parameters of the $M$ model: from the 't Hooft--Polyakov limit to the limit of confined monopoles in a highly quantum regime in ${\cal N}=1\;$ SQCD. In Sect.~\ref{bpersp} the same issue is discussed from the brane perspective. Section \ref{conc} summarizes our findings. Finally, in the Appendix we present explicit expressions for the fermion zero modes in the case of two flavors. \section{From \boldmath{${\cal N}=2\;$} SQCD to \boldmath{${\cal N}=1\;$}} \label{bulk} \setcounter{equation}{0} To begin with, let us briefly review ${\cal N}=2\;$ supersymmetric QCD. The gauge symmetry of our benchmark model is SU($N$)$\times$U(1). It has $N_f=N$ matter hypermultiplets. The field content of this model is as follows. The ${\cal N}=2\;$ vector multiplet consists of the U(1) gauge field $A_{\mu}$ and the SU($N$) gauge field $A^a_{\mu}$ (here $a=1,..., N^2-1$), their Weyl fermion superpartners ($\lambda^{1}_{\alpha}$, $\lambda^{2}_{\alpha}$) and ($\lambda^{1a}_{\alpha}$, $\lambda^{2a}_{\alpha}$), and complex scalar fields $a$ and $a^a$, the latter in the adjoint of SU($N$). The spinorial index of $\lambda$'s runs over $\alpha=1,2$. In this sector the global SU(2)$_R$ symmetry inherent to the ${\cal N}=2\;$ model at hand manifests itself through rotations $\lambda^1 \leftrightarrow \lambda^2$. \vspace{2mm} The quark multiplets of the SU($N$)$\times$U(1) theory consist of the complex scalar fields $q^{kA}$ and $\tilde{q}_{Ak}$ (squarks) and the Weyl fermions $\psi^{kA}$ and $\tilde{\psi}_{Ak}$, all in the fundamental representation of the SU($N$) gauge group. Here $k=1,..., N\,\,$ is the color index while $A$ is the flavor index, $A=1,...,N$. Note that the scalars $q^{kA}$ and ${\bar{\tilde q}}^{\, kA}$ form a doublet under the action of the global SU(2)$_R$ group. In addition, we introduce the Fayet--Iliopoulos $D$-term for the U(1) gauge field which triggers the squark condensation.
The undeformed ${\cal N}=2\;$ theory we start from has a superpotential, \begin{equation} {\cal W}_{{\cal N}=2} =\sqrt 2 \,{\rm Tr}\, \left\{\frac12 \tilde Q {\cal A} Q + \tilde Q {\cal A}^a\,T^a Q\right\}+ {\rm Tr}\, m \, \tilde Q\, Q \label{superpot} \end{equation} where ${\cal A}^a$ and ${\cal A}$ are chiral superfields, the ${\cal N}=2$ superpartners of the gauge bosons of SU($N$) and U(1), respectively, while $T^a$ are generators of SU($N$) normalized by the condition $${\rm Tr}\,T^a T^b =\frac{1}{2}\,\delta^{ab}\,.$$ Moreover, $m$ is the quark mass matrix, a numerical $N\times N$ matrix $m^B_A$ (to be elevated to a superfield matrix later on). We write the quark superfields $Q^{kA}$ as $N\times N$ matrices in color and flavor indices. The trace in (\ref{superpot}) runs over the appropriate indices. \vspace{2mm} Now we deform this theory in two ways, each of which breaks ${\cal N}=2\;$ supersym\-metry down to ${\cal N}=1\;$. First, we add superpotential mass terms for the adjoint chiral superfields from the U(1) and SU($N$) sectors, respectively, \begin{equation} \delta{\cal W} =\sqrt{\frac{N}{2}}\frac{\mu_1}{2}\, {\cal A}^2 + \frac{\mu_2}{2} \left({\cal A}^a\right)^2\, , \label{superpotbr} \end{equation} where $\mu_1$ and $\mu_2$ are mass parameters. Clearly, the mass term (\ref{superpotbr}) splits the gauge ${\cal N}=2\;$ supermultiplets, breaking ${\cal N}=2\;$ supersymmetry down to ${\cal N}=1\;$. As will be discussed later in detail, in the large-$\mu$ limit the adjoint multiplets decouple and then we recover ${\cal N}=1\;$ SQCD with $N_f=N$ flavors. This theory has a Higgs branch (see, for example, \cite{IS}). The presence of massless quark states in the bulk, associated with this Higgs branch, obscures the physics of non-Abelian strings in this theory \cite{SYnone}. In particular, the strings become infinitely thick. \vspace{2mm} Can one avoid this shortcoming? The answer is yes. To this end we introduce another ${\cal N}=2\;$ breaking deformation.
Namely, we uplift the quark mass matrix $m_A^B$ to the superfield status, $$ m_A^B \to M_A^B\,, $$ where $M$ represents $N^2$ chiral superfields of the mesonic type (they are color-singlets). With this uplifting we have to add a kinetic term for $M_A^B$, \begin{equation} \delta S_{M\rm kin} = \int d^4x \, d^2\theta \, d^2\bar{\theta}\; \;\frac{2}{h}\; {\rm Tr}\,\bar{M}M \,, \label{mkin} \end{equation} where $h$ is a new coupling constant (it supplements the set of the gauge couplings). At $h=0$ the matrix field $M$ becomes sterile: it is frozen and in essence returns to the status of a constant numerical matrix as in Ref.~\cite{SYnone}. The theory acquires flat directions (a moduli space). With nonvanishing $h$ these flat directions are lifted and $M$ is determined by the minimum of the scalar potential, see below. The elevation of the quark mass matrix to a superfield is a crucial step which allows us to lift the Higgs branch which would develop in this theory in the large $\mu$ limit if $M$ were a constant matrix. The bosonic part of our SU($N$)$\times$U(1) theory has the form \begin{eqnarray} S&=&\int d^4x \left[\frac1{4g^2_2} \left(F^{a}_{\mu\nu}\right)^2 + \frac1{4g^2_1}\left(F_{\mu\nu}\right)^2 + \frac1{g^2_2}\left|D_{\mu}a^a\right|^2 +\frac1{g^2_1} \left|\partial_{\mu}a\right|^2 \right. \nonumber\\[4mm] &+& {\rm Tr}\,\left|\nabla_{\mu} q\right|^2 + {\rm Tr}\,\left|\nabla_{\mu} \bar{\tilde{q}}\right|^2 +\frac1h \left|\partial_{\mu} M^0\right|^2 \nonumber\\[4mm] &+& \left. \frac1h \left|\partial_{\mu} M^a\right|^2 +V(q,\tilde{q},a^a,a,M^0,M^a)\right]\,. \label{mamodel} \end{eqnarray} Here $D_{\mu}$ is the covariant derivative in the adjoint representation of SU($N$), while \begin{equation} \nabla_\mu=\partial_\mu -\frac{i}{2}\; A_{\mu} -i A^{a}_{\mu}\,T^a\,. \label{defnabla} \end{equation} Moreover, the matrix $M^A_B$ can always be decomposed as \begin{equation} M^A_B=\frac12\, \delta_B^A\;M^0 +(T^a)^A_B\;M^a\,.
\label{adjointM} \end{equation} We use this decomposition in Eq.~(\ref{mamodel}). The coupling constants $g_1$ and $g_2$ correspond to the U(1) and SU($N$) sectors, respectively. With our conventions the U(1) charges of the fundamental matter fields are $\pm 1/2$. \vspace{2mm} The potential $V(q^A,\tilde{q}_A,a^a,a,M^0,M^a)$ in the Lagrangian (\ref{mamodel}) is a sum of various $D$ and $F$ terms, \begin{eqnarray} & & V(q^A,\tilde{q}_A,a^a,a,M^0,M^a) = \frac{g^2_2}{2} \left( \frac{1}{g^2_2}\, f^{abc} \,\bar a^b a^c + {\rm Tr}\,\bar{q}\,T^a q - {\rm Tr}\,\tilde{q}\, T^a\,\bar{\tilde{q}}\right)^2 \nonumber\\[3mm] &+& \frac{g^2_1}{8} \left({\rm Tr}\,\bar{q} q - {\rm Tr}\,\tilde{q} \bar{\tilde{q}}- N\xi\right)^2+ \frac{g^2_2}{2}\left| 2{\rm Tr}\,\tilde{q}T^a q +\sqrt{2}\mu_2 a^a\right|^2 \nonumber\\[3mm] &+& \frac{g^2_1}{2}\left| {\rm Tr}\,\tilde{q} q +\sqrt{N}\mu_1 a \right|^2 +\frac12 \,{\rm Tr}\, \left\{ \left|(a +\,2\,T^a\, a^a)q + \frac1{\sqrt{2}}q(M^0 +2T^a M^a) \right. \right|^2 \nonumber\\[3mm] &+& \left. \left|(a +\,2\,T^a\, a^a)\bar{\tilde{q}} +\frac1{\sqrt{2}}\bar{\tilde{q}}(M^0 +2T^a M^a) \right|^2 \right\} +\frac{h}{4}\left|{\rm Tr}\,\tilde{q}q\right|^2 +h\left|{\rm Tr}\,qT^a\tilde{q}\right|^2 \,, \nonumber\\[3mm] &&\mbox{} \label{pot} \end{eqnarray} where $f^{abc}$ stand for the SU($N$) structure constants. The first and second terms here represent $D$ terms, the next two terms are $F_{\cal A}$ terms, while the term in the curly brackets represents the squark $F$ terms. The last two terms are the $F$ terms of the $M$ field. In Eq.~(\ref{pot}) we also introduced the FI $D$-term for the U(1) field, with the FI parameter $\xi$. Note that the FI term does not break ${\cal N}=2\;$ supersymmetry \cite{HSZ,VY}. The three parameters which do break ${\cal N}=2\;$ down to ${\cal N}=1\;$ are $\mu_1$, $\mu_2$ and $h$. The FI term triggers the spontaneous breaking of the gauge symmetry.
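Since ${\rm Tr}\,T^a=0$ and ${\rm Tr}\,T^aT^b=\delta^{ab}/2$, the decomposition (\ref{adjointM}) is inverted by $M^0=(2/N)\,{\rm Tr}\,M$ and $M^a=2\,{\rm Tr}\,(T^a M)$. A quick numerical illustration for $N=2$, with $T^a=\sigma^a/2$ (the code is our illustrative check, not part of the model):

```python
import numpy as np

# SU(2) generators T^a = sigma^a / 2, normalized as Tr T^a T^b = delta^{ab}/2
T = [0.5 * np.array(m, dtype=complex) for m in
     ([[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]])]

N = 2
rng = np.random.default_rng(1)
M = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))

M0 = (2 / N) * np.trace(M)                # singlet component M^0
Ma = [2 * np.trace(Ta @ M) for Ta in T]   # adjoint components M^a

# Reassemble via (adjointM): M = (1/2) delta M^0 + T^a M^a
M_rec = 0.5 * M0 * np.eye(N) + sum(m * Ta for m, Ta in zip(Ma, T))
assert np.allclose(M_rec, M)
```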
The vacuum expectation values (VEV's) of the squark fields can be chosen as \begin{eqnarray} \langle q^{kA}\rangle &=&\sqrt{ \xi}\, \left( \begin{array}{ccc} 1 & 0 & ...\\ ... & ... & ... \\ ... & 0 & 1 \\ \end{array} \right),\,\,\,\,\,\,\langle \bar{\tilde{q}}^{kA}\rangle =0, \nonumber\\[3mm] k&=&1,...,N,\qquad A=1,...,N\,, \label{qvev} \end{eqnarray} {\em up to gauge rotations}. The VEV's of the adjoint fields vanish, \begin{equation} \langle a^a\rangle =0,\,\,\,\,\langle a\rangle =0\,, \label{avev} \end{equation} and so do those of the $M$ fields, \begin{equation} \langle M^a\rangle =0,\,\,\,\,\langle M^0\rangle =0\,. \label{Mvev} \end{equation} \vspace{2mm} The color-flavor locked form of the quark VEV's in Eq.~(\ref{qvev}) and the absence of VEV's of the adjoint scalar $a^a$ and the meson scalar $M^a$ in Eqs.~(\ref{avev}), (\ref{Mvev}) results in the fact that, while the theory is fully Higgsed, a diagonal SU($N$)$_{C+F}$ symmetry survives as a global symmetry. Namely, the global rotation \begin{equation} q\to UqU^{-1},\qquad a^aT^a\to Ua^aT^aU^{-1},\qquad M\to U^{-1}MU, \label{c+f} \end{equation} where $U$ is a matrix from SU($N$) is not broken by the VEV's (\ref{qvev}), (\ref{avev}) and (\ref{Mvev}). This is a particular case of the Bardak\c{c}\i--Halpern mechanism \cite{BarH}. The presence of this symmetry leads to the emergence of orientational zero modes \cite{ABEKY} of the $Z_N$ strings in the model (\ref{mamodel}). Note that the vacuum expectation values (\ref{qvev}), (\ref{avev}) and (\ref{Mvev}) do not depend on the supersymmetry breaking parameters $\mu_1$ and $\mu_2$. This is because our choice of parameters in (\ref{mamodel}) ensures vanishing of the adjoint VEV's, see (\ref{avev}). In particular, we have the same pattern of symmetry breaking all the way up to very large values $\mu_1$ and $\mu_2$, where the adjoint fields decouple. 
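It is straightforward to check that the configuration (\ref{qvev})--(\ref{Mvev}) annihilates each $D$ and $F$ term in (\ref{pot}) separately. A numerical sketch for $N=2$ (with $T^a=\sigma^a/2$; the coupling values below are arbitrary illustrative choices, not parameters fixed by the paper) evaluates the potential at this point and at nearby non-vacuum points:

```python
import numpy as np

# SU(2) generators T^a = sigma^a/2 and structure constants f^{abc} = eps^{abc}
T = [0.5 * np.array(m, dtype=complex) for m in
     ([[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]])]
f = np.zeros((3, 3, 3))
for a_, b_, c_ in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    f[a_, b_, c_] = 1.0
    f[a_, c_, b_] = -1.0

N, xi = 2, 1.0
g1, g2, h, mu1, mu2 = 0.6, 0.8, 0.3, 2.0, 2.0   # illustrative values

def V(q, qt, a, av, M0, Mv):
    """Numerical transcription of the potential (pot) for N = 2."""
    qtb = qt.conj().T                                  # \bar{\tilde q}
    Phi = a * np.eye(N) + sum(2 * x * t for x, t in zip(av, T))
    Mm = M0 * np.eye(N) + sum(2 * x * t for x, t in zip(Mv, T))
    v = 0.0
    for A in range(3):
        d = (np.einsum('bc,b,c', f[A], np.conj(av), av) / g2**2
             + np.trace(q.conj().T @ T[A] @ q)
             - np.trace(qt @ T[A] @ qtb))
        v += g2**2 / 2 * abs(d) ** 2                   # SU(N) D-terms
        v += g2**2 / 2 * abs(2 * np.trace(qt @ T[A] @ q)
                             + np.sqrt(2) * mu2 * av[A]) ** 2   # F-terms of A^a
        v += h * abs(np.trace(qt @ T[A] @ q)) ** 2     # F-terms of M^a
    v += g1**2 / 8 * abs(np.trace(q.conj().T @ q)
                         - np.trace(qt @ qtb) - N * xi) ** 2    # U(1) D-term
    v += g1**2 / 2 * abs(np.trace(qt @ q) + np.sqrt(N) * mu1 * a) ** 2  # F of A
    v += 0.5 * (np.linalg.norm(Phi @ q + q @ Mm / np.sqrt(2)) ** 2
                + np.linalg.norm(Phi @ qtb + qtb @ Mm / np.sqrt(2)) ** 2)
    v += h / 4 * abs(np.trace(qt @ q)) ** 2            # F-term of M^0
    return v

zero3 = np.zeros(3, dtype=complex)
q_vac = np.sqrt(xi) * np.eye(N, dtype=complex)
q_t = np.zeros((N, N), dtype=complex)
assert abs(V(q_vac, q_t, 0.0, zero3, 0.0, zero3)) < 1e-12   # the vacuum: V = 0
assert V(1.1 * q_vac, q_t, 0.0, zero3, 0.0, zero3) > 0      # deformations cost energy
```

The check also shows that switching on $M^0$ or $M^a$ away from (\ref{Mvev}) raises the energy, in accord with the statement that the vacuum is isolated.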
With $N$ matter hypermultiplets, the SU($N$) part of the gauge group is asymptotically free, implying generation of a dynamical scale $\Lambda$. In the ultraviolet (UV) we start with a small $g_2^2$, and let the theory evolve in the infrared. If the descent to $\Lambda$ were uninterrupted, the gauge coupling $g_2^2$ would explode at this scale. Moreover, strong coupling effects in the SU($N$) subsector at the scale $\Lambda$ would break the SU($N$) subgroup through the Seiberg--Witten mechanism \cite{SW}. Since we want to stay at weak coupling, we assume that $\sqrt{\xi}\gg \Lambda$, so that the running of the SU($N$) coupling is frozen by the squark condensation at a small value, namely, \begin{equation} \frac{8\pi^2}{N\, g_2^2}=\ln{\frac{\sqrt{\xi}}{\Lambda}} +\cdots \gg 1\,. \label{g2} \end{equation} \vspace{2mm} Now let us discuss the elementary excitation spectrum in the theory (\ref{mamodel}). Since both the U(1) and SU($N$) gauge groups are broken by the squark condensation, all gauge bosons become massive. From (\ref{mamodel}) we get for the U(1) gauge boson mass (we refer to it as the photon) \begin{equation} m_{\rm ph}=g_1\sqrt{\frac{N}{2}\,\xi}\,, \label{phmass} \end{equation} while $(N^2-1)$ gauge bosons of the SU($N$) group acquire a common mass \begin{equation} m_{W}=g_2\sqrt{\xi}\,. \label{wmass} \end{equation} This is typical of the Bardak\c{c}\i--Halpern mechanism. To get the masses of the scalar bosons we expand the potential (\ref{pot}) near the vacuum (\ref{qvev}), (\ref{avev}), (\ref{Mvev}) and diagonalize the corresponding mass matrix. The $N^2$ components of the $2\,N^2$-component\,\footnote{We mean here {\em real} components.} scalar $q^{kA}$ are eaten by the Higgs mechanism for the U(1) and SU($N$) gauge groups. Another $N^2$ components are split as follows: one component acquires the mass (\ref{phmass}). It becomes a scalar component of a massive ${\cal N}=1\;$ vector U(1) gauge multiplet.
The remaining $N^2-1$ components acquire masses (\ref{wmass}) and become scalar superpartners of the SU($N$) gauge boson in the ${\cal N}=1\;$ massive gauge supermultiplet. \vspace{2mm} Moreover, $6\,N^2$ real scalar components of the fields $\tilde{q}_{Ak}$, $a^a$, $a$, $M^a$ and $M^0$ produce the following states: six states have masses determined by the roots of the cubic equation \begin{equation} \lambda_i^3-\lambda_i^2(2+\omega^2_i +2\gamma_i) + \lambda_i(1 +2\gamma_i+\gamma^2_i +2\gamma_i\omega_i) -\gamma_i^2\omega_i^2=0\,, \rule{0mm}{7mm} \label{queq} \end{equation} for $i=1\rule{0mm}{7mm}$. Namely, these states form degenerate pairs with the masses \begin{equation} m_{{\rm U}(1)}=g_1\sqrt{\frac{N}{2}\,\xi\lambda_1}\,. \label{u1m} \end{equation} Each root of Eq.~(\ref{queq}) for $i=1$ determines masses of two degenerate states. Above we introduced ${\cal N}=2\;$ supersymmetry breaking parameters $\omega_i$ and $\gamma_i$ associated with the U(1) and SU($N$) gauge groups, respectively, \begin{equation} \omega_1=\frac{g_1\mu_1}{\sqrt{\xi}}\, ,\qquad \omega_2=\frac{g_2\mu_2}{\sqrt{\xi}}\,, \label{omega} \end{equation} and \begin{equation} \gamma_1=\frac{h}{2g^2_1}\, ,\qquad \gamma_2=\frac{h}{2g^2_2}\,. \label{gamma} \end{equation} \mbox{} \vspace{2mm} \mbox{} Now we are left with $6\,(N^2-1)$ states. They acquire masses \begin{equation} m_{{\rm SU}(N)}=g_2\sqrt{\xi\lambda_2}\,, \label{suNm} \end{equation} where each root of Eq.~(\ref{queq}) for $i=2$ determines masses of $2\,(N^2-1)$ degenerate states. When the supersymmetry breaking parameters $\omega_{i}$ and $\gamma_i$ vanish, two mass eigenvalues (\ref{u1m}) coincide with the U(1) gauge boson mass (\ref{phmass}). The corresponding states form the bosonic part of the ${\cal N}=2\;$ long massive U(1) vector supermultiplet \cite{VY}. The one remaining eigenvalue in (\ref{u1m}) becomes zero. It corresponds to the massless field $M^0$ which decouples (becomes sterile) in this limit.
With nonvanishing values of $\omega_1$ and $\gamma_1$ this supermultiplet splits into the massive ${\cal N}=1\;$ vector multiplet, with mass (\ref{phmass}), plus three chiral multiplets with masses given by Eq.~(\ref{u1m}). The same happens with the states with masses (\ref{suNm}). If $\omega$'s and $\gamma$'s vanish they combine into the bosonic parts of $(N^2-1)\;$ ${\cal N}=2\;$ massive vector supermultiplets, with mass (\ref{wmass}), plus the massless field $M^a$. If $\omega$'s and $\gamma$'s do not vanish, these multiplets split into $(N^2-1)\;$ ${\cal N}=1\;$ vector multiplets (for the SU($N$) group), with mass (\ref{wmass}), and $3\,(N^2-1)$ chiral multiplets, with masses (\ref{suNm}). \section{\boldmath{${\cal N}=1\;$} SQCD with the mesonic \boldmath{$M$} field} \label{mtheory} \setcounter{equation}{0} Now let us take a closer look at the spectrum obtained above, assuming the limit of very large ${\cal N}=2\;$ supersymmetry breaking parameters $\omega_i$, $$ \omega_i\gg 1\,. $$ In this limit the largest masses $m_{{\rm U}(1)}$ and $m_{{\rm SU}(N)}$ become \begin{eqnarray} m_{{\rm U}(1)}^{\rm (largest)} &=& m_{{\rm U}(1)}\,\omega_1=\sqrt{\frac{N}{2}}\,g_1^2\mu_1\,,\nonumber\\[2mm] m_{{\rm SU}(N)}^{\rm (largest)} &=& m_{{\rm SU}(N)}\,\omega_2=g_2^2\mu_2\, . \label{amass} \end{eqnarray} Clearly, in the limit $\mu_i\to \infty$ these are the masses of the heavy adjoint scalars $a$ and $a^a$. At $\omega_i\gg 1$ these fields leave the physical spectrum; they can be integrated out. The low-energy bulk theory in this limit contains massive ${\cal N}=1\;$ gauge multiplets and chiral multiplets with the two lower masses $m_{{\rm U}(1)}$ and the two lower masses $m_{{\rm SU}(N)}$.
Equation (\ref{queq}) gives for these masses \begin{eqnarray} m_{{\rm U}(1)}^{(1)} &=&\sqrt{\frac{hN\xi}{4}}\left\{1+\frac1{2\omega_1} \sqrt{\gamma_1(\gamma_1+1)} +\cdots\right\}, \nonumber\\[3mm] m_{{\rm U}(1)}^{(2)} &=& \sqrt{\frac{hN\xi}{4}}\left\{1-\frac1{2\omega_1} \sqrt{\gamma_1(\gamma_1+1)} +\cdots\right\}, \label{U1mass} \end{eqnarray} for the U(1) sector and \begin{eqnarray} m_{{\rm SU}(N)}^{(1)} &=& \sqrt{\frac{h\xi}{2}}\left\{1+\frac1{2\omega_2} \sqrt{\gamma_2(\gamma_2+1)} +\cdots\right\}, \nonumber\\[3mm] m_{{\rm SU}(N)}^{(2)} &=& \sqrt{\frac{h\xi}{2}}\left\{1-\frac1{2\omega_2} \sqrt{\gamma_2(\gamma_2+1)} +\cdots\right\}, \label{SUNmass} \end{eqnarray} for the SU($N$) sector. \vspace{2mm} It is worth emphasizing again that there are no massless states in the bulk theory. As we have already mentioned, at $h=0$ the theory (\ref{mamodel}) develops a Higgs branch in the large-$\mu$ limit (see, for example, \cite{SYnone}). If $h\ne 0$, $M$ becomes a fully dynamical field, and the Higgs branch is lifted, as follows from Eqs.~(\ref{U1mass}) and (\ref{SUNmass}). At large $\mu$ one can readily integrate out the adjoint fields ${\cal A}^a$ and ${\cal A}$. Instead of the superpotential terms (\ref{superpot}) and (\ref{superpotbr}) we get \begin{equation} {\cal W} = -\, \frac{\left({\rm Tr}\, \tilde Q\,Q\right)^2}{4\mu_1} -\, \frac{\left({\rm Tr}\, \tilde Q\,T^a\,Q\right)^2}{\mu_2} + {\rm Tr}\, M\,\tilde Q\, Q \,. \label{instem} \end{equation} At $\mu_{1,2}\to \infty$ the first two terms disappear, we are left with ${\cal W} = {\rm Tr}\, M\, \tilde Q\, Q$, and our model (\ref{mamodel}) reduces to ${\cal N}=1\;$ SQCD with the extra mesonic $M$ field. The bosonic part of the action takes the form \begin{eqnarray} S&=&\int d^4x \left[\frac1{4g^2_2} \left(F^{a}_{\mu\nu}\right)^2 + \frac1{4g^2_1}\left(F_{\mu\nu}\right)^2+ {\rm Tr}\,\left|\nabla_{\mu} q\right|^2 + {\rm Tr}\,\left|\nabla_{\mu} \bar{\tilde{q}}\right|^2 \right. 
\nonumber\\[4mm] &+& \frac1h \left|\partial_{\mu} M^0\right|^2+ \frac1h \left|\partial_{\mu} M^a\right|^2 +\frac{g^2_2}{2} \left( {\rm Tr}\,\bar{q}\,T^a q - {\rm Tr}\,\tilde{q} T^a\,\bar{\tilde{q}}\right)^2 \nonumber\\[3mm] &+& \frac{g^2_1}{8} \left({\rm Tr}\,\bar{q} q - {\rm Tr}\,\tilde{q} \bar{\tilde{q}}- N\xi\right)^2+ {\rm Tr}|qM|^2 +{\rm Tr}|\bar{\tilde{q}}M|^2 \nonumber\\[3mm] &+& \left. \frac{h}{4}\left|{\rm Tr}\,\tilde{q}q\right|^2 +h\left|{\rm Tr}\,qT^a\tilde{q}\right|^2 \right] \,. \label{mmodel} \end{eqnarray} The vacuum of this theory is given by Eqs.~(\ref{qvev}) and (\ref{Mvev}). The mass spectrum of elementary excitations over this vacuum consists of the ${\cal N}=1\;$ gauge multiplets for the U(1) and SU($N$) sectors with masses given by Eqs.~(\ref{phmass}) and (\ref{wmass}), and the chiral multiplets of the U(1) and SU($N$) sectors with masses given by the leading terms in Eqs.~(\ref{U1mass}) and (\ref{SUNmass}). The scale of the theory (\ref{mmodel}) is determined by the scale of the theory (\ref{mamodel}) in the ${\cal N}=2\;$ limit through the relation \begin{equation} \Lambda_{{\cal N}=1}^{2N}=\mu_2^N\Lambda^N\,. \label{Lambda} \end{equation} In order to keep the theory (\ref{mmodel}) at weak coupling we assume that \begin{equation} g_2\sqrt{\xi}\gg\Lambda_{{\cal N}=1}\, . \label{weakcoupl} \end{equation} Our ${\cal N}=1\;$ SQCD with the $M$ field, the $M$ model, belongs to the class of theories introduced by Seiberg \cite{Sdual} to give a dual description of conventional ${\cal N}=1\;$ SQCD with the SU($N_c$) gauge group and $N_f$ flavors of fundamental matter, where $$N_c=N_f - N$$ (for reviews see Refs.~\cite{IS,MS}). There are significant distinctions, however.
Let us point out the main differences between the $M$ model (\ref{mmodel}) and the models introduced by Seiberg \cite{Sdual}: (i) Our theory has the U($N$) gauge group rather than SU($N$); (ii) Our theory has the FI $D$-term instead of the superpotential linear in $M$ present in Seiberg's models; (iii) We consider the case $N_f=N$, which would correspond to Seiberg's $N_c=0$. Our theory (\ref{mmodel}) is asymptotically free, while Seiberg's dual theories give the most reliable description of the original ${\cal N}=1\;$ SQCD in the range $N_f<3/2\,N_c$, which corresponds to $N_f>3N$. In this range the theory (\ref{mmodel}) is not asymptotically free. In addition, it is worth noting that at $N_f>N$ the vacuum (\ref{qvev}), (\ref{Mvev}) becomes metastable: supersymmetry is broken \cite{ISS}. The $N_c=N_f - N$ supersymmetry-preserving vacua have vanishing VEV's of the quark fields and a nonvanishing VEV of the $M$ field.\footnote{This is correct for the version of the theory with the $\xi$-parameter introduced via a superpotential.} The latter vacua are associated with the gluino condensation in the pure SU($N$) theory, $\langle\lambda\lambda\rangle \neq 0$, arising upon decoupling of the $N_f$ flavors \cite{IS}. In the case $N_f=N$ considered here the vacuum (\ref{qvev}), (\ref{Mvev}) preserves supersymmetry. Thus, despite a conceptual similarity between Seiberg's models and ours, the dynamical details are radically different. To conclude this section let us mention that if a theory dual to the one in (\ref{mmodel}) were known, our results would imply non-Abelian confinement of quarks in the former theory. We will qualitatively discuss this issue in Sect.~\ref{conc}. \section{Non-Abelian strings} \label{strings} \setcounter{equation}{0} Non-Abelian strings were shown to emerge at weak coupling in ${\cal N}=2\;$ supersymmetric gauge theories \cite{HT1,ABEKY,SYmon,HT2}.
The main feature of the non-Abelian strings is the presence of orientational zero modes associated with rotations of their color flux in the non-Abelian gauge group. This feature makes such strings genuinely non-Abelian. Since the solution for the non-Abelian string suggested and discussed in \cite{ABEKY,SYmon} for ${\cal N}=2\;$ SQCD does not depend on the adjoint fields, it can be generalized to ${\cal N}=1\;$ SQCD upon introducing the mass term (\ref{superpotbr}) for the adjoint fields and then taking the limit $\mu_{1,2}\to \infty$. This was done in Ref.~\cite{SYnone}. However, as we have already explained above, ${\cal N}=1\;$ SQCD has a Higgs branch, which obscures the physics of the non-Abelian strings. The string becomes infinitely thick in the limit $\mu_i\to \infty$ due to the presence of massless fields in the bulk. In particular, in \cite{SYnone} it turned out to be impossible to follow the fate of the confined monopoles (present in ${\cal N}=2\;$ SQCD) all the way down to ${\cal N}=1\;$ SQCD, which one recovers in the limit $\mu_{1,2} = \infty$. Below we will show that this obstacle does not arise in the model (\ref{mamodel}). The reason is that ${\cal N}=1\;$ SQCD with the mesonic field $M$ has no massless states in the bulk in the limit $\mu_i\to \infty$, as was demonstrated in Sect.~\ref{mtheory}. \vspace{2mm} Let us generalize the string solutions found in \cite{ABEKY,SYmon} to the model (\ref{mamodel}). In addition to the conventional Abrikosov--Nielsen--Olesen (ANO) strings \cite{ANO} this model supports $Z_N$ strings. These arise due to a nontrivial homotopy group, \begin{equation} \pi_1 \Big({\rm SU}(N)\times {\rm U}(1)/ Z_N \Big) \neq 0\,. \end{equation} It is easy to see that this nontrivial topology amounts to the winding of just one element of the diagonal matrix (\ref{qvev}) at infinity. Such strings can be called elementary; their tension is $1/N$ of that of the ANO string. The ANO string can be viewed as a bound state of $N$ elementary strings.
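To see the $1/N$ winding explicitly (a sketch of the standard argument), one can decompose the asymptotic phase of the last diagonal element of (\ref{qvev}) into a U(1) factor and an SU($N$) path,
\begin{equation}
{\rm diag}\left(1,\ldots ,1,e^{i\alpha}\right)=e^{i\alpha/N}\,
{\rm diag}\left(e^{-i\alpha/N},\ldots ,e^{-i\alpha/N},e^{i\alpha(N-1)/N}\right).
\end{equation}
At $\alpha=2\pi$ the U(1) factor winds only by $2\pi/N$, while the SU($N$) factor ends at the center element $e^{-2\pi i/N}\cdot I$; hence the path is closed in $\left({\rm SU}(N)\times {\rm U}(1)\right)/Z_N$, although it is closed in neither factor separately. The $1/N$ fraction of the U(1) winding translates into the $1/N$ tension of the elementary string.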
More concretely, the $Z_N$ string solution (a progenitor of the non-Abelian string) can be written \cite{ABEKY} as follows: \begin{eqnarray} q &=& \left( \begin{array}{cccc} \phi_2(r) & 0& ... & 0\\[2mm] ...&...&...&...\\[2mm] 0& ... & \phi_2(r)& 0\\[2mm] 0 & 0& ... & e^{i\alpha}\phi_{1}(r) \end{array} \right) ,\qquad \tilde{q}=0, \nonumber\\[5mm] A^{{\rm SU}(N)}_i &=& \frac1N\left( \begin{array}{cccc} 1 & ... & 0 & 0\\[2mm] ...&...&...&...\\[2mm] 0& ... & 1 & 0\\[2mm] 0 & 0& ... & -(N-1) \end{array} \right)\, \left( \partial_i \alpha \right) \left[ -1+f_{NA}(r)\right] , \nonumber\\[5mm] A^{{\rm U}(1)}_i &=& \frac{I}{2}\,A_i=\frac{I}{N}\, \left( \partial_i \alpha \right)\left[1-f(r)\right] ,\qquad a=a^a=M^0=M^a=0\,, \label{znstr} \end{eqnarray} where $i=1,2$ labels coordinates in the plane orthogonal to the string axis, $r$ and $\alpha$ are the polar coordinates in this plane and $I$ is the unit $N\times N$ matrix. The profile functions $\phi_1(r)$ and $\phi_2(r)$ determine the profiles of the scalar fields, while $f_{NA}(r)$ and $f(r)$ determine the SU($N$) and U(1) fields of the string solution, respectively. These functions satisfy the following rather obvious boundary conditions: \begin{eqnarray} && \phi_{1}(0)=0, \nonumber\\[2mm] && f_{NA}(0)=1,\;\;\;f(0)=1\,, \label{bc0} \end{eqnarray} at $r=0$, and \begin{eqnarray} && \phi_{1}(\infty)=\sqrt{\xi},\;\;\;\phi_2(\infty)=\sqrt{\xi}\,, \nonumber\\[2mm] && f_{NA}(\infty)=0,\;\;\;\; \; f(\infty) = 0 \label{bcinfty} \end{eqnarray} at $r=\infty$. \vspace{2mm} As long as our {\em ansatz} (\ref{znstr}) does not involve the fields $\tilde{q}$, $a$ and $M$ the classical string solution does not depend on ${\cal N}=2\;$ SUSY breaking parameters. The classical solution is the same as that found \cite{ABEKY} in the ${\cal N}=2\;$ SQCD limit. 
In particular, the profile functions satisfy the following first-order equations: \begin{eqnarray} && r\frac{d}{{d}r}\,\phi_1 (r)- \frac1N\left( f(r) + (N-1) f_{NA}(r) \right)\phi_1 (r) = 0\, , \nonumber\\[4mm] && r\frac{d}{{ d}r}\,\phi_2 (r)- \frac1N\left(f(r) - f_{NA}(r)\right)\phi_2 (r) = 0\, , \nonumber\\[4mm] && -\frac1r\,\frac{ d}{{ d}r} f(r)+\frac{g^2_1N}{4}\, \left[\left(\phi_1(r)\right)^2 +(N-1)\left(\phi_2(r)\right)^2-N\xi\right] = 0\, , \nonumber\\[4mm] && -\frac1r\,\frac{d}{{ d}r} f_{NA}(r)+\frac{g^2_2}{2}\, \left[\left(\phi_1(r)\right)^2 -\left(\phi_2(r)\right)^2\right] = 0 \, . \label{foe} \end{eqnarray} Numerical solutions of the Bogomolny equations (\ref{foe}) for $N=2$ ($Z_2$ strings) were found in Ref.~\cite{ABEKY}. The string (\ref{str}) is 1/2-BPS saturated. It automatically preserves two supercharges out of four present in the bulk theory. The tension of this elementary string is \begin{equation} T_1=2\pi\,\xi\, , \label{ten} \end{equation} to be compared with the ANO string tension, \begin{equation} T_{\rm ANO}=2N\pi\,\xi \label{tenANO} \end{equation} in our normalization. The elementary strings are {\em bona fide} non-Abelian. This means that, besides trivial translational moduli, they acquire moduli corresponding to spontaneous breaking of a non-Abelian symmetry. Indeed, while the ``flat" vacuum (\ref{qvev}), (\ref{avev}) and (\ref{Mvev}) is SU$(N)_{C+F}$ symmetric, the solution (\ref{znstr}) breaks this symmetry down to U(1)$\times$SU$(N-1)$. To obtain the non-Abelian string solution from the $Z_N$ string (\ref{znstr}) we apply the diagonal color-flavor rotation (\ref{c+f}) which preserves the vacuum. To this end it is convenient to pass to the singular gauge where the scalar fields have no winding at infinity, while the string flux comes from the vicinity of the origin. 
In this gauge we have (for details see the review paper \cite{SYrev}) \begin{eqnarray} q &=& \frac1N[(N-1)\phi_2 +\phi_1] +(\phi_1-\phi_2)\left( n\,\cdot n^*-\frac1N\right) , \nonumber\\[3mm] A^{{\rm SU}(N)}_i &=& \left( n\,\cdot n^*-\frac{1}{N}\right) \varepsilon_{ij}\, \frac{x_i}{r^2} \, f_{NA}(r) \,, \nonumber\\[3mm] A^{{\rm U}(1)}_i &=& \frac1N \varepsilon_{ij}\, \frac{x_i}{r^2} \, f(r) \, , \nonumber\\[3mm] \tilde{q} & = & 0,\qquad a=a^a=M^0=M^a=0\,, \label{str} \end{eqnarray} where we parametrize the matrices $U$ of SU($N$)$_{C+F}$ rotations as follows: \begin{equation} \frac1N\left\{ U\left( \begin{array}{cccc} 1 & ... & 0 & 0\\[2mm] ...&...&...&...\\[2mm] 0& ... & 1 & 0\\[2mm] 0 & 0& ... & -(N-1) \end{array} \right)U^{-1} \right\}^l_p=-n^l n_p^* +\frac1N \delta^l_p\,\, . \label{n} \end{equation} Here $n^l$ is a complex vector in the fundamental representation of SU($N$), and \begin{equation} n^*_l n^l =1\,, \label{unitvec} \end{equation} ($l,p=1, ..., N$ are color indices). In Eq.~(\ref{str}) for brevity we suppress all SU$(N)$ indices. At $n=\{0,..., 1\}$ we get the field configuration quoted in Eq.~(\ref{znstr}). The vector $n^l$ parametrizes orientational zero modes of the string associated with flux rotations in SU($N$). The presence of these modes makes the string genuinely non-Abelian. To derive an effective world-sheet theory for the orientational collective coordinates $n^l$ of the non-Abelian string we follow Refs.~\cite{ABEKY,SYmon,GSY05}, see also the review \cite{SYrev}. From the string solution (\ref{znstr}) it is quite clear that not each element of the matrix $U$ will give rise to a modulus. The SU($N-1) \times$U(1) subgroup remains unbroken by the string solution under consideration; therefore the moduli space is \begin{equation} \frac{{\rm SU}(N)}{{\rm SU}(N-1)\times {\rm U}(1)}\sim CP(N-1)\,. 
\label{modulispace} \end{equation} Assume that the orientational collective coordinates $n^l$ are slowly varying functions of the string world-sheet coordinates $x_k$, $k=0,3$. Then the moduli $n^l$ become fields of a (1+1)-dimensional sigma model on the world sheet. Since the vector $n^l$ parametrizes the string zero modes, there is no potential term in this sigma model. To obtain the kinetic term we substitute our solution, which depends on the moduli $ n^l$, into the action (\ref{mamodel}), assuming that the fields acquire a dependence on the coordinates $x_k$ via $n^l(x_k)$. Then we arrive at the $CP(N-1)$ sigma model (for details see the review paper \cite{SYrev}), \begin{equation} S^{(1+1)}_{CP(N-1)}= 2 \beta\, \int d t\, dz \, \left\{(\partial_{k}\, n^* \partial_{k}\, n) + (n^*\partial_{k}\, n)^2\right\}\,, \label{cp} \end{equation} where the coupling constant $\beta$ is given by a normalizing integral defined in terms of the string profile functions. Using the first-order equations for the string profile functions (\ref{foe}) one can see that this integral reduces to a total derivative and is given by the flux of the string, determined by $f_{NA}(0)=1$, namely \begin{equation} \beta= \frac{2\pi}{g_2^2}\,. \label{betag} \end{equation} The two-dimensional coupling constant is determined by the four-dimensional non-Abelian coupling. The relation between the four-dimensional and two-dimensional coupling constants (\ref{betag}) is obtained at the classical level. In quantum theory both couplings run. So we have to specify a scale at which the relation (\ref{betag}) takes place. The two-dimensional $CP(N-1)$ model is an effective low-energy theory suitable for the description of the internal string dynamics at energies much lower than the inverse thickness of the string, which, in turn, is given by $g_2\sqrt{\xi}$. Thus, $g_2\sqrt{\xi}$ plays the role of a physical ultraviolet cutoff in (\ref{cp}). This is the scale at which Eq.~(\ref{betag}) holds.
Below this scale, the coupling $\beta$ runs according to its two-dimensional renormalization-group flow. The sigma model (\ref{cp}) is asymptotically free \cite{Po3}; at large distances (low energies) it gets into the strong coupling regime. The running coupling constant as a function of the energy scale $E$ at one loop is given by \begin{equation} 4\pi \beta = N\ln {\left(\frac{E}{\Lambda_{CP(N-1)}}\right)} +\cdots, \label{sigmacoup} \end{equation} where $\Lambda_{CP(N-1)}$ is the dynamical scale of the $CP(N-1)$ model. As was mentioned above, the UV cut-off of the sigma model at hand is determined by $g_2\sqrt{\xi}$. Hence, \begin{equation} \Lambda^N_{CP(N-1)} = g_2^N\, \xi^{N/2} \,\, e^{-\frac{8\pi^2}{g^2_2}} . \label{lambdasig} \end{equation} Note that in the bulk theory, due to the VEV's of the squark fields, the coupling constant is frozen at $g_2\sqrt{\xi}$. There are no logarithms in the bulk theory below this scale. Below $g_2\sqrt{\xi}$ the logarithms of the world-sheet theory take over. In the limit of large $\mu_2$ we are interested in here, $$ \mu_2\gg g_2\sqrt{\xi}\,, $$ the coupling constant $g_2$ of the bulk theory is determined by the scale $\Lambda_{{\cal N}=1}$ of the ${\cal N}=1\;$ SQCD (\ref{mmodel}) with the $M$ field included, as shown in Eq.~(\ref{Lambda}). In this limit Eq.~(\ref{lambdasig}) gives \begin{equation} \Lambda_{CP(N-1)}=\frac{\Lambda_{{\cal N}=1}^2}{g_2\sqrt{\xi}}\,, \label{cpscale} \end{equation} where we take into account that the first coefficient of the $\beta$ function in ${\cal N}=1\;$ SQCD is $2N$. To conclude this section let us note a somewhat related development: {\em non}-BPS non-Abelian strings were recently considered in metastable vacua of a dual description of ${\cal N}=1\;$ SQCD at $N_f>N$ in Ref.~\cite{Jmeta}. 
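For completeness, the relation (\ref{cpscale}) can be verified directly. Using the frozen one-loop ${\cal N}=1\;$ coupling at the scale $g_2\sqrt{\xi}$, i.e. $e^{-8\pi^2/g^2_2}=\left(\Lambda_{{\cal N}=1}/g_2\sqrt{\xi}\right)^{2N}$ (with the first coefficient of the $\beta$ function equal to $2N$), Eq.~(\ref{lambdasig}) gives
\begin{equation}
\Lambda^N_{CP(N-1)} = g_2^N\, \xi^{N/2}\,
\frac{\Lambda_{{\cal N}=1}^{2N}}{\left(g_2\sqrt{\xi}\right)^{2N}}
=\left(\frac{\Lambda_{{\cal N}=1}^{2}}{g_2\sqrt{\xi}}\right)^{N},
\end{equation}
in agreement with Eq.~(\ref{cpscale}).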
\section{Fermionic sector of the world-sheet theory} \label{ferm} \setcounter{equation}{0} In this section we discuss the fermionic sector of the low-energy effective theory on the world sheet of the non-Abelian string in ${\cal N}=1\;$ SQCD with the $M$ field, as well as the supersymmetry of the world-sheet theory. First we note that our string is 1/2-BPS saturated. Therefore in the ${\cal N}=2\;$ limit (when the ${\cal N}=2\;$ breaking parameters $\mu_i$ and $h$ vanish) four supercharges out of the eight present in the bulk theory are automatically preserved on the string world sheet. They become supercharges in the $CP(N-1)$ model (\ref{cp}). \vspace{2mm} For simplicity, in this section we will discuss the case $N=2$, limiting ourselves to the $CP(1)$ model. Generalization to arbitrary $N$ is straightforward. The action of the (2,2) supersymmetric $CP(1)$ model is \begin{eqnarray} S^{(1+1)}_{CP(1)} &=& \beta \int d t d z \left\{\frac12 (\partial_k S^a)^2+ \frac12 \, \chi^a_1 \, i(\partial_0-i\partial_3)\, \chi^a_1 \right. \nonumber\\[3mm] &+& \left. \frac12 \, \chi^a_2 \, i(\partial_0+i\partial_3)\, \chi^a_2 -\frac12 (\chi^a_1\chi^a_2)^2 \right\}, \label{ntwoo3} \end{eqnarray} where we used the fact that $CP(1)$ is equivalent to the $O(3)$ sigma model defined in terms of a unit real vector $S^a$, \begin{equation} S^a=n^{*}_l\tau^a n^l, \qquad (S^a)^2=1\, . \label{sn} \end{equation} This model has two real bosonic degrees of freedom. Two real fermion fields $\chi_1^a$ and $\chi_2^a$ are subject to the constraints \begin{equation} \chi_1^aS^a=0, \qquad \chi_2^aS^a=0\,. \label{fermconstr} \end{equation} Altogether we have four real fermion fields in the model (\ref{ntwoo3}). Now we break the ${\cal N}=2\;$ supersymmetry of the bulk model by switching on the parameters $\mu_i$ and $h$. The 1/2-``BPS-ness" of the string solution requires only two supercharges. However, as we will show below, the number of the fermion zero modes on the string does not change.
This number is fixed by the index theorem. Thus, the number of (classically) massless fermion fields in the world-sheet $CP(N-1)$ model does not change. It was shown in \cite{SYnone} that the (2,2) supersymmetric sigma model with the $CP(N-1)$ target space does not admit (0,2) supersymmetric deformations. Therefore, it was concluded in \cite{SYnone} that the world-sheet theory has ``accidental'' SUSY enhancement. A similar phenomenon was found earlier in \cite{RSV} for domain walls. On the other hand, in the recent publication \cite{EdT} it was suggested that superorientational zero modes can mix with supertranslational modes. It was shown that the sigma model with the $C\times CP(N-1)$ target space does admit (0,2) supersymmetric deformations. It is not clear at the moment whether this mixing really occurs in the effective theory on the string. If it occurs, then the emerging (0,2) supersymmetric $C\times CP(N-1)$ model has a $\mu$-deformed four-fermion interaction \begin{eqnarray} S^{(1+1)}_{CP(1)} &=& \beta \int d t d z \left\{\frac12 (\partial_k S^a)^2+ \frac12 \, \chi^a_1 \, i(\partial_0-i\partial_3)\, \chi^a_1 \right. \nonumber\\[3mm] &+& \left. \frac12 \, \chi^a_2 \, i(\partial_0+i\partial_3)\, \chi^a_2 -\frac12\,\frac1{1+c|\mu_2|^2/(g^2_2\xi)} \,(\chi^a_1\chi^a_2)^2 \right\}, \label{02o3} \end{eqnarray} where $c$ is an unknown coefficient. Also the first constraint in (\ref{fermconstr}) is replaced by $\chi_1^a S^a=c/2\,(\mu_2\zeta_1 + \bar{\mu}_2\bar{\zeta}_1)$, where $\zeta_1$ is the right-moving two-dimensional fermion field associated with the supertranslational zero modes. If the conjecture of \cite{EdT} is correct, the four-fermion term disappears in the large-$\mu$ limit. To find out which scenario is correct one has to calculate the coefficient in front of the four-fermion term in (\ref{02o3}). We leave this for future work. In any case, the world-sheet supersymmetric model has $N$ vacua, which are interpreted as $N$ elementary strings of the bulk theory.
This number is protected by Witten's index and survives ${\cal N}=2\;$ breaking deformations. We will use this result in the next section. The kinks which interpolate between these vacua are confined monopoles. Below we will show that the occurrence of four ($4(N-1)$ in the general case) superorientational fermion zero modes on the non-Abelian strings follows from an index theorem. In Appendix we present explicit solutions for these modes for the case $N=2$. \subsection{ Index theorem} \label{indext} The fermionic part of the action of the model (\ref{mmodel}) is \begin{eqnarray} S_{\rm ferm} &=& \int d^4 x\left\{ \frac{i}{g_2^2}\bar{\lambda}^a \bar{D}\hspace{-0.65em}/\lambda^{a}+ \frac{i}{g_1^2}\bar{\lambda} \bar{\partial}\hspace{-0.55em}/\lambda + {\rm Tr}\left[\bar{\psi} i\bar\nabla\hspace{-0.75em}/ \psi\right] + {\rm Tr}\left[\tilde{\psi} i\nabla\hspace{-0.75em}/ \bar{\tilde{\psi}} \right] \right. \nonumber\\[3mm] &+& \frac{2i}{h}{\rm Tr}\left[\bar{\zeta} \bar{\partial}\hspace{-0.55em}/\zeta\right] +\frac{i}{\sqrt{2}}\,{\rm Tr}\left[ \bar{q}(\lambda\psi)- (\tilde{\psi}\lambda)\bar{\tilde{q}} +(\bar{\psi}\bar{\lambda})q- \tilde{q}(\bar{\lambda}\bar{\tilde{\psi}})\right] \nonumber\\[3mm] &+& \frac{i}{\sqrt{2}}\,{\rm Tr}\left[ \bar{q}\, 2T^a\, (\lambda^{a}\psi)- (\tilde{\psi}\lambda^a)\, 2T^a\, \bar{\tilde{q}} +(\bar{\psi}\bar{\lambda}^a)\, 2T^a\, q - \tilde{q}\,2T^a\, (\bar{\lambda}^{a}\bar{\tilde{\psi}})\right] \nonumber\\[3mm] &+& i\,{\rm Tr}\left[ \tilde{q}(\psi\zeta)+ (\tilde{\psi}q\zeta) +(\bar{\psi}\bar{\tilde{q}}\bar{\zeta})+ \bar{q}(\bar{\tilde{\psi}}\bar{\zeta})\right] \nonumber\\[3mm] &+& \left. i\,{\rm Tr}\left(\tilde{\psi}\psi M+ \bar{\psi} \bar{\tilde{\psi}}\bar{M}\right) \right\}\,, \label{fermact} \end{eqnarray} where the matrix color-flavor notation is used for matter fermions $(\psi^{\alpha})^{kA}$ and $(\tilde{\psi}^{\alpha})_{Ak}$ and the traces are performed over the color--flavor indices. 
Contraction of the spinor indices is assumed inside all parentheses, for instance, $(\lambda\psi)\equiv \lambda_{\alpha}\psi^{\alpha}\,$. Moreover, $\zeta$ denotes the fermion component of the matrix $M$ superfield, \begin{equation} \zeta^A_B=\frac12 \delta^A_B\, \zeta^0 + (T^a)^A_B\,\zeta^a\,. \rule{0mm}{7mm} \label{matrixzeta} \end{equation} \mbox{} \vspace{0.1mm} \mbox{} \noindent In order to find the number of the fermion zero modes in the background of the non-Abelian string solution (\ref{str}) we have to carry out the following program. Since our string solution depends only on the two coordinates $x_i$ ($i=1,2$), we can reduce our theory to two dimensions. Given the theory defined on the $(x_1,x_2)$ plane, we have to identify an axial current and derive the anomalous divergence for this current. In two dimensions the axial current anomaly takes the form \begin{equation} \partial_ij_{i5}\sim F^{*}, \label{appranom} \end{equation} where $F^{*}=(1/2)\varepsilon_{ij}F_{ij}$ is the dual U(1) field strength in two dimensions. Then the integral of the left-hand side over the $(x_1,x_2)$ plane gives the index $\nu$ of the 2D Dirac operator, which coincides with the number of 2D left-handed minus 2D right-handed zero modes of this operator in the given background field. The integral of the right-hand side is proportional to the string flux. This will fix the number of the chiral fermion zero modes\,\footnote{Chirality is understood as the two-dimensional chirality.} of the string with the given flux. Note that the reduction of the theory to two dimensions is an important step in this program. The anomaly relation in four dimensions involves the instanton charge $F^{*}F$ rather than the string flux and is therefore useless for our purposes. The reduction of ${\cal N}=1\;$ gauge theories to two dimensions is discussed in detail in \cite{W93}, and here we will be brief.
Following \cite{W93} we use the rules \begin{eqnarray} && \psi^{\alpha}\to(\psi^{-},\psi^{+}), \qquad \tilde{\psi}^{\alpha}\to(\tilde{\psi}^{-},\tilde{\psi}^{+}), \nonumber\\[3mm] && \lambda^{\alpha}\to(\lambda^{-},\lambda^{+}),\,\qquad \zeta^{\alpha}\to(\zeta^{-},\zeta^{+}). \label{2dreduc} \end{eqnarray} With these rules the Yukawa interactions in (\ref{fermact}) take the form \begin{eqnarray} {\cal L}_{\rm Yukawa} &=& i\sqrt{2}\,{\rm Tr}\left[ -\bar{q}(\hat{\lambda}_{-}\psi_{+} -\hat{\lambda}_{+}\psi_{-})+ (\tilde{\psi}_{-}\hat{\lambda}_{+} -\tilde{\psi}_{+}\hat{\lambda}_{-})\bar{\tilde{q}} + {\rm c.c.}\right] \nonumber\\[3mm] &- & i\,{\rm Tr}\left[ \tilde{q}(\psi_{-}\zeta_{+}-\psi_{+}\zeta_{-})+ (\tilde{\psi}_{-}q \zeta_{+}- \tilde{\psi}_{+}q \zeta_{-}) + {\rm c.c.}\right], \label{yukawa} \end{eqnarray} where the color matrix $\hat{\lambda} = (1/2)\,\lambda +T^a\lambda^a$. \begin{table} \begin{center} \begin{tabular}{|c|c | c| c|c| c | c | c|c | c | c |} \hline $\rule{0mm}{6mm}$ Field & $\psi_{+}$ & $\psi_{-}$ & $\tilde{\psi}_{+}$ & $\tilde{\psi}_{-}$ & $\lambda_{+}$ & $\lambda_{-}$ & $\zeta_{+}$ & $\zeta_{-}$ & $q$ & $\tilde{q}$ \\[3mm] \hline $\rule{0mm}{5mm}$ U(1)$_R$ charge & $-1$ & 1 & $-1$ & 1 & $-1$ & 1 & $-1$ & 1 & 0 & 0 \\[2mm] \hline $\rule{0mm}{5mm}$ U(1)$_{\tilde{R}}$ charge & $-1$ & 1 & 1 & $-1$ & $-1$ & 1 & 1 & $-1$ & 0 & 0 \\[2mm] \hline \end{tabular} \end{center} \caption{{\footnotesize The U(1)$_R$ and U(1)$_{\tilde{R}}$ charges of fields of the two-dimensional reduction of the theory.}} \label{table1} \end{table} It is easy to see that ${\cal L}_{\rm Yukawa}$ is classically invariant under the chiral U(1)$_{R}$ transformations with the U(1)$_{R}$ charges presented in Table~\ref{table1}. The axial current associated with this U(1)$_{R}$ is not anomalous \cite{W93}. This is easy to understand. In two dimensions the chiral anomaly comes from the diagram shown in Fig.~\ref{figanom}. 
The U(1)$_{R}$ chiral charges of the fields $\psi$ and $\tilde{\psi}$ are the same, while their electric charges are opposite. This leads to a cancellation of their contributions to this diagram. It turns out that for the particular string solution we are interested in, the classical two-dimensional action has more symmetries than for a generic background. To see this, note that the field $\tilde{q}$ vanishes on the string solution (\ref{str}). Then the Yukawa interactions (\ref{yukawa}) reduce to \begin{equation} i\sqrt{2}\,{\rm Tr}\left[ -\bar{q}(\hat{\lambda}_{-}\psi_{+} -\hat{\lambda}_{+}\psi_{-}) \right] -i\,{\rm Tr}\left[ \tilde{\psi}_{-}q \zeta_{+}- \tilde{\psi}_{+}q \zeta_{-} \right]+ {\rm c.c.} \label{redyukawa} \end{equation} The fermion $\psi$ interacts only with the $\lambda$'s, while the fermion $\tilde{\psi}$ interacts only with $\zeta$. Note also that the interaction in the last line in (\ref{fermact}) is absent because $M=0$ on the string solution. This property allows us to introduce another chiral symmetry in the theory, the one which is relevant for the string solution. We will refer to this extra chiral symmetry as U(1)$_{\tilde{R}}$. \begin{figure}[h] \epsfxsize=8cm \centerline{\epsfbox{anom}} \caption{\footnotesize Diagram for the chiral anomaly in two dimensions. The solid lines denote the fermions $\psi$, $\tilde{\psi}$, the dashed line denotes the photon, while the cross denotes the insertion of the axial current.} \label{figanom} \end{figure} The U(1)$_{\tilde{R}}$ charges of our set of fields are also shown in Table~\ref{table1}. Note that $\psi$ and $\tilde{\psi}$ have opposite charges under this symmetry.
The corresponding current then has the form \begin{equation} \tilde{j}_{i5}= \left( \begin{array}{c} \rule{0mm}{6mm} \bar{\psi_{-}}\psi_{-}-\bar{\psi_{+}}\psi_{+} -\bar{\tilde{\psi}}_{-}\tilde{\psi}_{-} +\bar{\tilde{\psi}}_{+}\tilde{\psi}_{+} +\cdots\\[3mm] -i\bar{\psi_{-}}\psi_{-}-i\bar{\psi}_{+}\psi_{+} +i\bar{\tilde{\psi}}_{-}\tilde{\psi}_{-} +i\bar{\tilde{\psi}}_{+}\tilde{\psi}_{+} +\cdots \rule{0mm}{6mm}\\ \end{array} \right), \label{current} \end{equation} where the ellipses stand for terms associated with the $\lambda$ and $\zeta$ fields, which do not contribute to the anomaly relation. Clearly, in quantum theory this symmetry is anomalous. Now the contributions of the fermions $\psi$ and $\tilde{\psi}$ to the diagram in Fig.~\ref{figanom} add up rather than cancel, doubling the result. It is not difficult to find the coefficient in the anomaly formula \begin{equation} \partial_i\tilde{j}_{i5} = \frac{N^2}{\pi} F^{*} \,, \label{anom} \end{equation} whose normalization can be fixed, e.g., following \cite{ShVa}. The factor $N^2$ appears due to the presence of $2N^2$ fermions $\psi^{kA}$ and $\tilde{\psi}_{Ak}$. Now, taking into account that the flux of the $Z_N$ string under consideration is \begin{equation} \int d^2 x \,F^{*}=\frac{4\pi}{N}\, , \label{flux} \end{equation} (see the expression for the U(1) gauge field for the solution (\ref{znstr}) or (\ref{str})), we conclude that the total number of the fermion zero modes in the string background is \begin{equation} \nu\,= \,4N \,. \label{number} \end{equation} This number can be decomposed as \begin{equation} \nu\,= \,4N= \, 4(N-1)+4\,, \label{splitnumber} \end{equation} where 4 is the number of the supertranslational modes, while $4(N-1)$ is the number of the superorientational modes. The four supertranslational modes are associated with four fermion fields in the two-dimensional effective theory on the string world sheet, which are superpartners of the bosonic translational moduli $x_0$ and $y_0$.
Furthermore, $4(N-1)$ corresponds to $4(N-1)$ fermion fields in the ${\cal N}=2\;$ $CP(N-1)$ model on the string world sheet (\ref{cp}). The $CP(N-1)$ model describes the dynamics of the orientational moduli of the string. For $N=2$ the latter number ($4(N-1)=4$) counts the four fermion fields $\chi_1^a$, $\chi_2^a$ in the model (\ref{ntwoo3}) or (\ref{02o3}). We explicitly determine the four superorientational fermion zero modes for the case $N=2$ in the Appendix. Note that the fermion zero modes of the string in ${\cal N}=1\;$ SQCD with the $M$ field are perfectly normalizable provided we keep the coupling constant $h$ nonvanishing. In contrast, in conventional ${\cal N}=1\;$ SQCD without the $M$ field the second pair of the fermion zero modes (proportional to $\chi_1^a$) becomes non-normalizable \cite{SYnone}. This is related to the presence of the Higgs branch and massless bulk states in conventional ${\cal N}=1\;$ SQCD. As was already mentioned more than once, in the $M$ model, Eq.~(\ref{mmodel}), we have no massless states in the bulk. Note that in both the translational and orientational sectors the number of the fermion zero modes is twice as large as the number dictated by 1/2-``BPS-ness.'' \section{Evolution of the monopoles} \label{evol} \setcounter{equation}{0} Since the supersymmetric $CP(N-1)$ model is an effective low-energy theory describing the world-sheet physics of the non-Abelian string, all consequences of this model ensue, in particular, $N$ degenerate vacua and kinks which interpolate between them --- the same kinks that we discovered in ${\cal N}=2\;$ SQCD \cite{SYmon} and interpreted as (confined) non-Abelian monopoles, the descendants of the 't Hooft--Polyakov monopole \cite{thopo}. Let us briefly review the reason for this interpretation \cite{Tong,SYmon,HT2} and discuss what happens with these monopoles as we deform our theory and eventually end up with the $M$ model. It is convenient to split this deformation into several distinct stages.
We will describe what happens to the monopoles as one passes from one stage to another. The qualitative evolution of the monopoles under consideration as a function of the relevant parameters is presented in Fig.~\ref{twoabcd}. \begin{figure}[h] \epsfxsize=10cm \centerline{\epsfbox{twoabcde}} \caption{\footnotesize Various regimes for the monopoles and flux tubes in the simplest case of two flavors.} \label{twoabcd} \end{figure} \begin{itemize} \item We start from ${\cal N}=2\;$ SQCD, turning off the ${\cal N}=2\;$ breaking parameters $h$ and $\mu$'s as well as the FI parameter in the theory (\ref{mamodel}), i.e. we start from the Coulomb branch of the theory, \begin{equation} \mu_1=\mu_2=0,\qquad h=0, \qquad \xi=0, \qquad M\neq 0. \label{stage1} \end{equation} As was explained in Sect.~\ref{bulk}, the field $M$ is frozen in this limit and can take arbitrary values (the notorious flat direction). The matrix $M^A_B$ plays the role of a fixed mass parameter matrix for the quark fields. First we consider the diagonal matrix $M$, with distinct diagonal entries, \begin{equation} M^A_B ={\rm diag}\,\{M_1,...,M_N\}\,. \label{diffmasses} \end{equation} Shifting the field $a$ one can always make $\sum_{A}M_A=0$ in the limit $\mu_1=0$; therefore $M^0=0$. If all $M_A$'s are different the gauge group SU($N$) is broken down to U(1)$^{(N-1)}$ by a VEV of the SU($N$) adjoint scalar \begin{equation} \langle a^k_l\rangle = -\frac{1}{\sqrt{2}} \,\delta^k_l M_l \,. \label{adjvev} \end{equation} Thus, there are 't Hooft--Polyakov monopoles embedded in the broken gauge group SU($N$). Classically, on the Coulomb branch the masses of the $(N-1)$ elementary monopoles are proportional to $$|(M_A-M_{A+1}) \,| /g_2^2\,. $$ This is shown in the upper left corner of Fig.~\ref{twoabcd} for the case $$N=2\,,\,\,\,\, \Delta m\equiv M_1-M_2\,.$$ In the limit $(M_A-M_{A+1})\to 0$ the monopoles tend to become massless, formally, in the classical approximation. Simultaneously, their size becomes infinite \cite{We}.
The mass and size are stabilized by confinement effects which are highly quantum. The confinement of monopoles occurs in the Higgs phase, at $\xi\neq 0$. \item Now we introduce the FI parameter $\xi$ which triggers the squark condensation. The theory is in the Higgs phase. We still keep the ${\cal N}=2\;$ breaking parameters $h$ and $\mu$'s vanishing, \begin{equation} \mu_1=\mu_2=0,\qquad h=0, \qquad \xi\neq 0, \qquad M\neq 0. \label{stage2} \end{equation} If we allow $\xi$ to be nonvanishing but small, \begin{equation} |M_A \,| \gg\sqrt{\xi}\,, \end{equation} then the effect which comes into play first is the spontaneous breaking of the gauge SU($N$) by the condensation of the adjoint scalars. The next gauge symmetry breaking, due to $\xi\neq 0$, which leads to complete Higgsing of the model and the string formation (confinement of monopoles), is much weaker. Thus, we deal here with the formation of ``almost'' 't~Hooft--Polyakov monopoles, with a typical size $\sim \left|(M_A-M_{A+1}) \,\right| ^{-1}\,.$ Only at much larger distances, $\sim \xi ^{-1/2}$, does the chromoelectric charge condensation enter the game, forcing the magnetic flux, rather than spreading evenly \`a la Coulomb, to form flux tubes (the upper right corner of Fig.~\ref{twoabcd}). \mbox{} \,\,\,\, Let us verify that the confined monopole is a junction of two strings. At $M_A\neq 0$ the global SU($N$)$_{C+F}$ group is broken by the condensation of the adjoint scalars (\ref{adjvev}), and non-Abelian strings become Abelian $Z_N$ strings. Their orientational moduli space is lifted \cite{SYmon,HT2}. Consider the junction of two $Z_N$ strings (\ref{str}), namely the $A$-th string with \begin{equation} n^l=\delta^l_A \label{Astring} \end{equation} and the ``neighboring'' $(A+1)$-th string with \begin{equation} n^l=\delta^l_{A+1}\,, \label{A1string} \end{equation} (cf. the solution (\ref{znstr}) which is written for $A+1=N$). The flux of the junction is given by the difference of the fluxes of these two strings.
Using (\ref{str}) we find that the flux of the junction is \begin{equation} 4\pi\,\times \, {\rm diag} \, \frac12\, \left\{ ...\, 0, \,1 ,\, -1,\, 0 ,\, ... \right\} \, \label{monflux} \end{equation} with the nonvanishing entries located at the positions $A$ and $(A +1)$. These are exactly the fluxes of the $N-1$ distinct 't Hooft--Polyakov monopoles occurring in the SU($N$) gauge theory provided that SU($N$) is spontaneously broken down to U(1)$^{N-1}$. We see that in the quasiclassical limit of large $|M_A|$ the Abelian monopoles and the junctions of the Abelian $Z_N$ strings are in one-to-one correspondence. \mbox{} \,\,\,\, At large $M_A$ the monopoles, albeit confined, are only weakly confined. Now, if we further reduce $|M_A| $, \begin{equation} \Lambda_{CP(N-1)} \ll \left| M_A\right| \ll \sqrt{\xi}\, , \label{ququr} \end{equation} the size of the monopole $\sim \left|(M_A-M_{A+1}) \,\right| ^{-1}\,$ becomes larger than the transverse size of the attached strings. The monopole gets squeezed in earnest by the strings --- it becomes a {\em bona fide} confined monopole (the lower left corner of Fig.~\ref{twoabcd}). At nonzero $M_A$ the effective $CP(N-1)$ model on the string world sheet becomes massive, with the potential determined by the so-called twisted mass terms \cite{Tong,SYmon,HT2}. The two $Z_N$ strings correspond to two ``neighboring'' vacua of the $CP(N-1)$ model. The monopole (aka the string junction of two $Z_N$ strings) is interpreted as a kink interpolating between these two vacua. \mbox{} \,\,\,\, In \cite{SYmon} the first-order equations for the 1/4 BPS string junction of two $Z_2$ strings were explicitly solved in the case $N=2$, and the solution was shown to correspond to the kink solution of the two-dimensional $CP(1)$ model. Moreover, it was shown that the mass of the monopole matches the mass of the $CP(1)$-model kink both in the quasiclassical ($\Delta m\gg \Lambda_{CP(1)}$) and quantum ($\Delta m \ll \Lambda_{CP(1)}$) limits.
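In the simplest case $N=2$ this correspondence is easy to see explicitly: the junction flux (\ref{monflux}) reduces to \begin{equation} 4\pi\times \frac12\,{\rm diag}\left\{1,\,-1\right\}\,=\, 4\pi\,\frac{\tau^3}{2}\,, \end{equation} which is precisely the magnetic flux of the single 't Hooft--Polyakov monopole of SU(2) broken down to U(1).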
\item Now let us switch off the mass differences $M_A$, still keeping the ${\cal N}=2\;$ breaking parameters vanishing, \begin{equation} \mu_1=\mu_2=0,\qquad h=0, \qquad \xi\neq 0, \qquad M = 0 \,. \label{stage3} \end{equation} The values of the twisted masses in the $CP(N-1)$ model equal $M_A$, while the size of the twisted-mass sigma-model kink/confined monopole is of the order of $\left|(M_A-M_{A+1}) \,\right| ^{-1}\,$. \mbox{} \,\,\,\,\,\,\,\, As we further diminish $M_A$, approaching $\Lambda_{CP(N-1)}$ and then getting below $\Lambda_{CP(N-1)}$, the size of the monopole grows, and, classically, it would explode. This is where quantum effects in the world-sheet theory take over. It is natural to refer to this domain of parameters as the ``regime of highly quantum dynamics.'' While the thickness of the string (in the transverse direction) is $\sim \xi ^{-1/2}$, the $z$-direction size of the kink representing the confined monopole in the highly quantum regime is much larger, $\sim \Lambda_{CP(N-1)}^{-1}$, see the lower right corner in Fig.~\ref{twoabcd}. \mbox{} \,\,\,\,\,\,\,\, In this regime the confined monopoles become truly non-Abelian. They no longer carry average magnetic flux since \begin{equation} \langle n^l\rangle =0, \label{nvev} \end{equation} in the strong coupling limit of the $CP(N-1)$ model \cite{W79}. The kink/monopole belongs to the fundamental representation of the SU($N$)$_{C+F}$ group \cite{W79,HoVa}. Let us stress that in the limit $M_A=0$ the global group SU($N$)$_{C+F}$ is restored in the bulk and both strings and confined monopoles become non-Abelian. One might argue that this restoration could happen only at the classical level. One could suspect that in quantum theory a ``dynamical Abelization'' (i.e. a cascade breaking of the gauge symmetry U($N$)$\to$U(1)$^{N} \to {\rm discrete\; subgroup}$) might occur.
This could have happened if the adjoint VEVs that classically vanish at $M=0$ (see (\ref{avev})) had developed dynamically in quantum theory. At $M_A\ne 0$ the global SU($N$)$_{C+F}$ group is explicitly broken down to U(1)$^{N-1}$ by the quark masses. At $M_A=0$ this group is classically restored. If it were still broken dynamically, this would mean spontaneous symmetry breaking. Let us show that this does not happen in the theory at hand. First of all, if a global symmetry is not spontaneously broken at the tree level then it cannot be broken by quantum effects at weak coupling in ``isolated'' vacua. Second, if the global group SU($N$)$_{C+F}$ were broken spontaneously at $M_A=0$ this would entail the presence of massless Goldstone bosons. However, we know that there are no massless states in the spectrum of the bulk theory, see Sects.~\ref{bulk} and \ref{mtheory}. Finally, the breaking of SU($N$)$_{C+F}$ in the $M_A=0$ limit would mean that the twisted masses of the world-sheet $CP(N-1)$ model would not be given by $M_A$; instead they would be shifted, $m^{(tw)}_A=M_A +c_A \Lambda_{CP(N-1)}$, where $c_A$ are some coefficients. In \cite{SYmon,HT2} it was shown that the BPS spectrum of the $CP(N-1)$ model on the string should coincide with the BPS spectrum of the four-dimensional bulk theory on the Coulomb branch, because the central charges which determine the masses of the BPS states cannot depend on the non-holomorphic parameter $\xi$. The BPS spectrum of the $CP(N-1)$ model is determined by $m^{(tw)}_A$ while the BPS spectrum of the bulk theory on the Coulomb branch is determined by $M_A$. In \cite{Dorey} it was shown that the BPS spectra of both theories coincide at $m^{(tw)}_A=M_A$. Thus, we conclude that $c_A=0$ and the twisted masses go to zero in the $M_A=0$ limit. Again we conclude that the global SU($N$)$_{C+F}$ group is not broken in the bulk, and both strings and confined monopoles become non-Abelian at $M_A=0$.
\item Thus, at zero $M_A$ we still have confined ``monopoles'' (interpreted as kinks) stabilized by quantum effects in the world-sheet $CP(N-1)$ model. Now we can finally switch on the ${\cal N}=2\;$ breaking parameters $\mu_i$ and $h$, \begin{equation} \mu_i\neq 0,\qquad h\neq 0, \qquad \xi\neq 0, \qquad M = 0\, . \label{stage4} \end{equation} Note that the last equality here is automatically satisfied in the vacuum, see Eq.~(\ref{Mvev}). \mbox{} \,\,\,\,\,\,\,\, As we discussed in Sects.~\ref{strings} and \ref{ferm}, the effective world-sheet description of the non-Abelian string is still given by the supersymmetric $CP(N-1)$ model. This model obviously still has $N$ vacua, which should be interpreted as $N$ elementary non-Abelian strings in the quantum regime, and BPS kinks can interpolate between these vacua. These kinks should still be interpreted as non-Abelian confined monopoles/string junctions. \mbox{} \,\,\,\,\,\,\,\, Note that although the adjoint fields are still present in the theory (\ref{mamodel}), their VEVs vanish (see (\ref{avev})) and the monopoles cannot be seen in the semiclassical approximation. They are seen as the $CP(N-1)$-model kinks. Their mass and inverse size are determined by $\Lambda_{CP(N-1)}$, which in the limit of large $\mu_i$ is given by Eq.~(\ref{cpscale}). \item Now, at the last stage, we take the limit of large masses of the adjoint fields in order to eliminate them from the physical spectrum altogether, \begin{equation} \mu_i\to \infty,\qquad h\neq 0, \qquad \xi\neq 0, \qquad M = 0\, . \label{stage5} \end{equation} The theory flows to ${\cal N}=1\;$ SQCD extended by the $M$ field. \mbox{} \,\,\,\,\,\,\,\, In this limit we get a remarkable result: although the adjoint fields are eliminated from our theory and the monopoles cannot be seen in any semiclassical description, our analysis shows that confined non-Abelian monopoles still exist in the theory (\ref{mmodel}).
They are seen as $CP(N-1)$-model kinks in the effective world-sheet theory on the non-Abelian string. \end{itemize} \section{A brane perspective} \label{bpersp} Let us elucidate how some important features of the consideration above are seen in the brane picture. To this end we will rely on the Type IIA string approach to our $M$ model. Consider the brane picture for ${\cal N}=2\;$ and ${\cal N}=1\;$ SQCD (see Ref.~\cite{giveon} for a review). We will limit ourselves to the special case of equal numbers of colors and flavors relevant to the present work. The ${\cal N}=2\;$ theory involves $N$ D4 branes extended in the directions of the (0, 1, 2, 3, 6) coordinates, two NS5 branes with coordinates along (0, 1, 2, 3, 4, 5), localized at $x_6=0$ and $x_6=1/g^2$, and $N_f = N$ D6 branes with the world volume along (0, 1, 2, 3, 7, 8, 9). The D4 branes are stretched between the NS5 branes along $x_6$, while the $x_6$ coordinates of the D6 branes are arbitrary. The NS5 branes can be split in the $x_7$ direction, which corresponds to the introduction of the Fayet--Iliopoulos term in the U(1) factor of U($N$), namely, $$\delta x_7=\xi\,.$$ The Higgs branch in this theory occurs when the D6 branes touch the D4 branes. After this, the D4 branes can split into pieces which can be moved in the (7, 8, 9) directions. The coordinates of these pieces in the (7, 8, 9) directions, along with the Wilson line of $A_6$, yield coordinates on the Higgs branch of the moduli space. Fluctuations of the ends of the D4 branes in the $(4,5)$ plane provide the coordinates on the Coulomb branch of the moduli space. To break ${\cal N}=2\;$ SUSY down to ${\cal N}=1\;$ we rotate one of the NS5 branes. The angle of rotation corresponds to the mass of the adjoint scalar in the superpotential (\ref{superpotbr}). The fact that this superpotential does not vanish removes the Coulomb branch of the moduli space. The positions of the D4 branes in the (4,5) plane are now fixed.
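For the reader's convenience, the brane configuration just described can be summarized by marking (with $\times$) the directions along which each type of brane extends: \begin{center} \begin{tabular}{c|cccccccccc} & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\ \hline NS5 & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ & & & & \\ D4 & $\times$ & $\times$ & $\times$ & $\times$ & & & $\times$ & & & \\ D6 & $\times$ & $\times$ & $\times$ & $\times$ & & & & $\times$ & $\times$ & $\times$ \\ \end{tabular} \end{center}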
Now, let us switch on the meson field $M$. It turns out that it emerges as a particular limiting brane configuration in the setup described above, without any additional branes. Consider the situation when the $x_6$ coordinates of all the D6 branes are the same. First, in this limit the open strings connecting the pairs of D6 branes yield a massless field which is in the adjoint representation with respect to the {\em flavor} group U($N$). In the field-theory language this is nothing but our $M$ field. Taking into account the standard three-string vertex we immediately derive the superpotential ${\cal W}_M= {\rm Tr}\, M\,\tilde{Q}\, Q$. On the other hand, since all the D6 branes have the same $x_6$ coordinate, it is impossible to split the pieces of the D4 branes --- such a splitting would require different values of $x_6$ for the pair of D6 branes. Thus, the Higgs branch disappears. We see that in the brane language the introduction of the $M$ field is in one-to-one correspondence with the disappearance of the Higgs branch. Consider now the evolution of the monopoles discussed in Sect.~\ref{evol} within the framework of the brane picture. In the ${\cal N}=2\;$ theory in the regime (\ref{stage1}) the monopole is represented by a D2 brane stretched between the two NS5 branes in the $x_6$ direction and two D4 branes located at $x_{4A}=M_{A}$ and $x_{4(A+1)}=M_{(A+1)}$, which yields the correct monopole mass $$ \frac{\left|(M_A -M_{A+1})\right|}{g_2^2}\,. $$ Switching on the Fayet--Iliopoulos parameter $\xi$ in the regime (\ref{stage2}) corresponds to a displacement of one of the NS5 branes in the $x_7$ direction. Since the D4 branes split in two pieces at the common $x_6$ coordinate where the D6 branes are located, and each piece is attached to the NS5 brane, a squark condensate develops. It is proportional to $\sqrt{\xi}$. This regime supports quasiclassical non-Abelian strings, which have a transparent geometrical interpretation \cite{HT1}.
The non-Abelian strings are identified with the D2 brane parallel to the D6 branes, stretched between the two NS5 branes along the $x_7$ coordinate. Geometrically, the string tension equals $\delta x_7$, in full agreement with the field-theory result. The D2 brane representing the monopole in the Higgs phase is located as follows. It extends along two coordinates, $x_6$ and $x_4$. Along the $x_6$ coordinate the D2 brane is stretched between the common position of the D6 branes and the NS5 brane. In the $x_4$ direction it is stretched between two D4 branes. From this picture one immediately recognizes the monopole to be a junction of two non-Abelian strings, since it is stretched between two different non-Abelian strings in the $x_4$ direction. If one switches off the Fayet--Iliopoulos term, the monopole in the Higgs phase smoothly transforms geometrically into the 't Hooft--Polyakov monopole. This picture implies that in the semiclassical regime of large $M_A$ the monopole mass is the same as the mass of the 't Hooft--Polyakov monopole. With $M_A$ decreasing, we eventually find ourselves in the purely quantum regime described by lifting the type IIA string theory to M-theory and, hence, lifting the D2 brane to an M2 brane. In M-theory the monopole in the Higgs phase can be easily described by the M2 brane wrapping the appropriate circle on the Riemann surface, using its identification with the kink in the $CP(N-1)$ model \cite{Dorey}. Finally, in the regime (\ref{stage4}) we rotate one of the NS5 branes, which results in vanishing vacuum expectation values of the adjoint scalars. However, the M2 brane representing the non-Abelian string is still clearly identified. The monopoles are the M2 branes wrapped around the Riemann surface responsible for this regime upon rotation of the branes.
Let us emphasize that the monopoles in all regimes are represented by the M2 branes, and their evolution from the Coulomb branch to the Higgs one corresponds just to different placements of one and the same brane in a certain brane background. Note that the brane picture suggests the possibility of a more general situation, when only $k$ of the D6 branes have the same $x_6$ coordinates. Then, the massless meson field $M$ belongs to the U$(k)$ subgroup of the flavor group. In particular, one can consider the case $N_f>N$, introduce a meson field of some rank and perform the standard Seiberg duality transformation by exchanging the two NS5 branes. \section{Discussion and conclusions} \label{conc} Let us summarize our findings. Deformation of ${\cal N}=2\;$ SQCD leads us to the $M$ model, ${\cal N}=1\;$ SQCD supplemented by the $M$ field, see (\ref{mmodel}). We observe confined non-Abelian monopoles in this model, which has no monopoles whatsoever in the semiclassical limit. Why are we sure that the objects we observe are ``non-Abelian monopoles''? We know this because we can start from the ${\cal N}=2\;$ theory on the Coulomb branch, where the standard 't Hooft--Polyakov monopoles are abundant, and trace their evolution stage by stage, as one varies the adjustable parameters to eventually arrive at ${\cal N}=1\;$ SQCD. This is the main result of the present paper. As was mentioned above, the confined monopoles are in the highly quantum regime, so they do not carry average magnetic flux (see Eq.~(\ref{nvev})). They are genuinely non-Abelian. Moreover, they acquire global flavor quantum numbers. In fact, they belong to the fundamental representation of the global SU($N$)$_{C+F}$ group (see Refs.~\cite{W79,HoVa} where this phenomenon is discussed in the context of the $CP(N-1)$-model kinks).
In particular, the monopole-antimonopole ``meson'' formed by the string configuration shown in Fig.~\ref{figmmeson} belongs to the adjoint representation of the global ``flavor'' group SU($N$)$_{C+F}$, in accordance with our expectations. Similarly, there are ``baryons'' built of $N$ monopoles connected to each other by strings to form a closed necklace configuration. \begin{figure}[h] \epsfxsize=8cm \centerline{\epsfbox{mmeson}} \caption{\footnotesize Monopole and antimonopole bound by strings in a meson. Open and closed circles denote monopole and antimonopole, respectively. } \label{figmmeson} \end{figure} We believe that the emergence of these non-Abelian monopoles can shed light on the mysterious objects introduced by Seiberg: ``dual magnetic'' quarks which play an important role in the description of ${\cal N}=1\;$ SQCD at strong coupling \cite{Sdual,IS}. It is curious to note that monopole-like configurations apparently occur in lattice QCD. In particular, in the recent publications \cite{Ch} the occurrence of the monopole-like configurations is traced back to the color-octet operator $\tilde{q}\,T^a q$. We would like to stress that the non-Abelian monopoles observed here are totally different. In the limit $\mu\to \infty$ all traces of ``Abelization'' (i.e. cascade breaking of the gauge symmetry U($N$)$\to$U(1)$^{N} \to {\rm discrete\; subgroup}$) typical of the ${\cal N}=2\;$ limit are erased! In fact, it is clear from (\ref{qvev}) that $\langle\tilde{q}T^a q\rangle=0$ in the $M$-model vacuum and cannot be used to construct monopoles. Our monopoles are not seen classically. The confined non-Abelian monopoles emerge as $CP(N-1)$-model kinks living on the string, deep in the quantum regime. Now, let us allow our imagination to run to the hypothetical dual of the $M$ model. In this dual it is not chromomagnetic but rather chromoelectric flux tubes that will form (upon ``monopole'' condensation) in a highly quantum regime.
The number of degenerate chromoelectric flux tubes must grow with $N$. Quarks are confined; inside mesons a quark and its anti-partner must be attached to a pair of strings, in contradistinction with QCD, where the confining bond between quark and anti-quark is built from a single string. It is thus clear that even if a dual to the $M$ model is found, it is not yet quite QCD. However, it is pretty close. \section*{Acknowledgments} We are grateful to Arkady Vainshtein for useful discussions, and to Maxim Chernodub and especially to David Tong for helpful communications. This work was supported in part by DOE grant DE-FG02-94ER408. A.G. was funded in part by FTPI, University of Minnesota, grant CRDF-RUP2-261-MO-04 and RFBR grant No. 06-02-17382. The work of A.Y. was supported by FTPI, University of Minnesota, by INTAS Grant No. 05-1000008-7865, by RFBR Grant No. 06-02-16364a and by Russian State Grant for Scientific School RSGSS-11242003.2. \section*{Appendix: Superorientational zero modes} \renewcommand{\theequation}{A.\arabic{equation}} \setcounter{equation}{0} In this Appendix we find explicit expressions for the four superorientational fermion zero modes of the non-Abelian string in the theory (\ref{mmodel}) with $N=2$. Half-criticality of the string in question ensures that two supercharges are preserved in the world-sheet theory. Following the general method of \cite{SYmon,SYnone} we generate two superorientational fermion zero modes by applying SUSY transformations to our string solution (\ref{str}).
Essentially repeating the calculation made in \cite{SYnone} we get \begin{eqnarray} \bar{\psi}_{Ak\dot{2}} & = & \left(\frac{\tau^a}{2}\right)_{Ak}\,\, \frac1{2\phi_2}(\phi_1^2-\phi_2^2) \left[ \chi_2^a +i\varepsilon^{abc}\, S^b\, \chi^c_2\, \right]\, , \nonumber\\[3mm] \bar{\psi}_{Ak\dot{1}} & = & 0\, , \nonumber\\[4mm] \lambda^{a1} & = & \frac{i}{\sqrt{2}}\frac{x_1-ix_2}{r^2} \, f_{NA}\, \frac{\phi_1}{\phi_2} \left[ \chi^a_2 +i\varepsilon^{abc}\, S^b\, \chi^c_2 \right]\,, \nonumber\\[4mm] \lambda^{a2} & = & 0 \, . \label{nonemodes} \end{eqnarray} We see that supersymmetry generates only two of the four superorientational fermion zero modes predicted by the index theorem. They are parametrized by the two-dimensional fermion field $\chi_2^a$. This was expected, of course. The modes proportional to $\chi^a_1$ do not appear. The nonzero fermion fields here have the U(1)$_{\tilde{R}}$ charge $+1$ while the fields which are zero have charge $-1$. Clearly, we need to find two more zero modes of charge $+1$. We do this by explicitly solving the Dirac equations. From the fermion part of the action of the model (\ref{fermact}) we get the relevant Dirac equations \begin{eqnarray} && \frac{i}{g_2^2} \bar{D}\hspace{-0.65em}/\lambda^{a} +\frac{i}{\sqrt{2}}\,{\rm Tr}\, \bar{\psi}\tau^a q=0\, , \nonumber\\[3mm] && \frac{i}{h} \bar{\partial}\hspace{-0.55em}/\zeta^{a} +\frac{i}{2}\,{\rm Tr}\, \bar{q}\bar{\tilde{\psi}}\tau^a =0\, , \nonumber\\[3mm] && i\nabla\hspace{-0.75em}/ \bar{\psi}-\frac{i}{\sqrt{2}} \lambda^{a}(\tau^a\bar{q})=0\, , \nonumber\\[3mm] && i\nabla\hspace{-0.75em}/ \bar{\tilde{\psi}}+ \frac{i}{2}\zeta^{a}(q\tau^a)=0\, . \label{dirac} \end{eqnarray} After some algebra one can check that equations (\ref{nonemodes}) do satisfy the first and the third of the Dirac equations (\ref{dirac}) provided the first-order equations for the string profile functions (\ref{foe}) are satisfied.
Now let us find two additional fermion zero modes by solving the second and the fourth of the Dirac equations (\ref{dirac}). The fields with the U(1)$_{\tilde{R}}$ chiral charge $-1$ vanish, \begin{equation} \bar{\tilde{\psi}}^{kA}_{\dot{2}}=0, \qquad \zeta^{a1}=0\, . \label{negchirality} \end{equation} In order to find the fields with the U(1)$_{\tilde{R}}$ chiral charge $+1$ we use the following {\em ansatz}\,\footnote{One can show that the profile functions in front of all other possible structures have singular behavior either at $r=0$ or at $r=\infty$.} (cf. Ref.~\cite{SYnone}), \begin{eqnarray} \zeta^{a2} &=& \zeta(r)\,\left[\chi_1^a+ i\varepsilon^{abc}S^b\chi_1^c\right]\, , \nonumber\\[4mm] \bar{\tilde{\psi}}^{kA}_{\dot{1}} &=& \frac{x_1-ix_2}{r}\,\psi(r)\, \left(\frac{\tau^a}{2}\right)^{kA}\, \left[\chi_1^a+ i\varepsilon^{abc}S^b\chi_1^c\right]. \label{fprofile} \end{eqnarray} Here we introduce two profile functions $\zeta(r)$ and $\psi(r)$ parameterizing the fermion fields $\zeta^{2}$ and $\bar{\tilde{\psi}}_{\dot{1}}$. Substituting (\ref{fprofile}) into the Dirac equations (\ref{dirac}) we get the following equations for the fermion profile functions: \begin{eqnarray} &&\frac{d}{dr}\psi +\frac1r\psi -\frac1{2r}(f+f_{NA})\psi +i\,\phi_1\,\zeta=0, \nonumber\\[3mm] &-&\frac{d}{dr}\zeta +i\frac{h}{2}\,\phi_1\,\psi =0\,. \label{fermeqs} \end{eqnarray} Below we present the solution to these equations in the limit \begin{equation} h\ll g_1^2\sim g_2^2 \,. \label{hg} \end{equation} The latter assumption is not a matter of principle; rather, it is just a technical point. It allows us to find an approximate analytic solution to Eqs.~(\ref{fermeqs}).
If the condition (\ref{hg}) is met, the mass of the fermions $\bar{\tilde{\psi}}$ and $\zeta$, \begin{equation} m_0=\sqrt{\frac{h}{2}\,\xi} \,, \label{zetamass} \end{equation} (see Eqs.~(\ref{U1mass}) and (\ref{SUNmass})) becomes much smaller than the masses of the gauge bosons (see Eqs.~(\ref{phmass}) and (\ref{wmass}); note that the fermions $\bar{\tilde{\psi}}$ and $\zeta$ are superpartners of $\tilde{q}$ and $M$ and have the same mass). Thus, the fields $\bar{\tilde{\psi}}$ and $\zeta$ develop long-range tails with the exponential fall-off determined by the small mass (\ref{zetamass}). This allows us to solve Eqs.~(\ref{fermeqs}) analytically, treating separately the domains of large and small $r$. In the large-$r$ domain, at $r \gg 1/m_{W}$, we can drop the terms in (\ref{fermeqs}) containing $f$ and $f_{NA}$ and use the second equation to express $\psi$ in terms of $\zeta$. We then get \begin{equation} \psi= -\frac{2i}{h\sqrt{\xi}}\frac{d}{dr}\zeta\, . \label{psizeta} \end{equation} Substituting this into the first equation in (\ref{fermeqs}) we obtain \begin{equation} \frac{d^2}{dr^2}\zeta+\frac1r\frac{d}{dr}\zeta-m_0^2\zeta= 0\,. \label{zetaeq} \end{equation} This is the well-known equation for a free field with mass $m_0$ in radial coordinates. Its solution is well known too, \begin{equation} \zeta=-\frac{ih}{2}\sqrt{\xi}\, K_0(m_0 r) \,, \label{zeta} \end{equation} where $K_0 (x)$ is the imaginary-argument Bessel function, and we fix a certain convenient normalization (in fact, the normalization constant of the profile functions is included in $\chi^a_1$). At infinity $K_0 (x)$ falls off exponentially, \begin{equation} K_0(x)\sim \frac{e^{-x}}{\sqrt{x}}\,, \end{equation} while at $x\to 0$ it has a logarithmic behavior, \begin{equation} K_0(x)\sim \ln{\frac1x}\, .
\label{log} \end{equation} Taking into account Eq.~(\ref{psizeta}) we get the solutions for the fermion profile functions at $r\gg 1/m_W$, \begin{equation} \zeta=-\frac{ih}{2}\sqrt{\xi}\, K_0(m_0 r)\,,\qquad \psi=- \frac{d}{dr}K_0(m_0 r)\, . \rule{0mm}{7mm} \label{psi} \end{equation} \mbox{} \vspace{1mm} \mbox{} \noindent In particular, at $r\ll 1/m_0$ we have \begin{equation} \zeta\sim -\frac{ih}{2}\sqrt{\xi} \, \ln\, {\frac1{m_0 r}}\,, \qquad \psi\sim\frac1r \,. \label{psizero} \end{equation} In the intermediate domain $r\le 1/m_{W}$ we neglect the small mass terms in (\ref{fermeqs}). We then arrive at \begin{eqnarray} &&\frac{d}{dr}\zeta =0\,, \nonumber\\[3mm] &&\frac{d}{dr}\psi +\frac1r\psi-\frac1{2r}(f+f_{NA})\psi=0\,. \label{smallreqs} \end{eqnarray} The first equation here shows that $\zeta=$const, while the second one is identical to the equation for the string profile function $\phi_1$, see Eq.~(\ref{foe}). This gives the fermion profile functions at intermediate $r$, \begin{equation} \zeta= -\frac{ih}{2}\sqrt{\xi} \, \ln\, {\frac{m_W}{m_0 }}\,, \qquad \psi=\frac{1}{r\sqrt{\xi}}\,\phi_1\, , \label{sdpsi} \end{equation} where we fixed the normalization constants by matching these solutions with those in the large-$r$ region, see (\ref{psizero}). Equations~(\ref{psi}) and (\ref{sdpsi}) present our final result for the fermion profile functions. They determine two extra fermion superorientational zero modes proportional to $\chi_1^a$ via Eq.~(\ref{fprofile}). Now, if we substitute the fermion zero modes (\ref{nonemodes}) and (\ref{fprofile}) in the action (\ref{fermact}) we get the effective ${\cal N}=2\;$ $CP(1)$ model (\ref{ntwoo3}) on the string world sheet,\footnote{In doing so one has to redefine the normalization of the fields $\chi^a_1$.} cf. Ref.~\cite{SYnone}. \vspace{1cm} \small
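As an independent numerical cross-check of the large-$r$ analysis, one can verify that $K_0(m_0 r)$ solves Eq.~(\ref{zetaeq}) exactly and exhibits the quoted asymptotic behavior. The following sketch (Python with NumPy/SciPy; the numerical value of $m_0$ is an arbitrary illustrative choice, not fixed by the model) uses the exact identities $K_0'(x)=-K_1(x)$ and $K_0''(x)=K_0(x)+K_1(x)/x$:

```python
import numpy as np
from scipy.special import k0, k1

m0 = 0.3                       # illustrative mass scale, m_0 = sqrt(h*xi/2)
r = np.linspace(0.5, 30.0, 400)
x = m0 * r

# zeta(r) ~ K_0(m0 r); exact derivatives from K_0' = -K_1, K_0'' = K_0 + K_1/x
zeta = k0(x)
dzeta = -m0 * k1(x)
d2zeta = m0**2 * (k0(x) + k1(x) / x)

# residual of the radial equation  zeta'' + zeta'/r - m0^2 zeta = 0
residual = d2zeta + dzeta / r - m0**2 * zeta
print(np.max(np.abs(residual)))        # zero up to floating-point rounding

# large-argument behavior: K_0(x) ~ sqrt(pi/(2x)) e^{-x}
print(k0(10.0) / (np.sqrt(np.pi / 20.0) * np.exp(-10.0)))

# small-argument behavior: K_0(x) ~ ln(1/x) up to the constant ln 2 - gamma
print(k0(1e-4) / np.log(1.0 / 1e-4))
```

The residual vanishes to machine precision, and both asymptotic ratios are close to unity, consistent with the behavior used in matching the large-$r$ and intermediate-$r$ solutions.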
\section{Introduction and notation}\label{intro} Many situations in which processes restart probabilistically at renewal instants and non-negative rewards are associated with each renewal epoch are well described by a multivariate renewal-reward process. For example, a multivariate reward function can be viewed as the accumulated cost to different types of properties or infrastructures caused by a single catastrophic event, which is of interest in actuarial science and reliability analysis. The asymptotic distribution and covariance function of a multivariate reward function were studied by \cite{PNT15}, who extended the result of \cite{BS75} to the multivariate case. In an insurance context, much research has been done on the moments of aggregate discounted claims under renewal claim arrival processes; see, for example, \cite{LA11}, \cite{LG01a}, \cite{LG01b}, and \cite{LGW10} for the renewal process, and \cite{WC13} for a dependent renewal process. In this paper, we assume that time lags are added to the original arrival times of the renewal process. These delayed renewal epochs allow us to study quantities related to infinite server queues with correlated batch arrivals and multivariate \emph{Incurred But Not Reported} (IBNR) claims, where there is a delay in the reporting or payment of claims. Furthermore, rewards are accumulated as discounted values, which is useful for analyzing the discounted multivariate IBNR claim amounts and the workload of the queue (the time required to empty the queue). For the univariate case, the IBNR claim count with batch arrivals was considered by \cite{GLW13}, and the total discounted IBNR claim amount was studied by \cite{LWX16}. For the multivariate case, \cite{W15} provided expressions for the joint moments of multivariate IBNR claims which are computable recursively. As mentioned previously, a direct application to some problems in infinite server queues is also available.
For example, suppose that the bulk size random variable is multivariate (i.e. correlated) and the service time distribution depends on the type of input; then a multivariate reward function incorporating time delays up to time $t$ (with zero discounting factor) is essentially the number of customers in the system at time $t$. In infinite server queues with multiple batch Markovian arrival streams, a time-dependent matrix joint generating function of the number of customers in the system was derived by \cite{MT02}. We note that it is usually difficult to derive a distribution for this compound renewal input, since for a general arrival process there is no concrete representation for the inversion of the complicated moment generating function of this quantity. In this sense, it is appealing to study the long-term behavior of the process in terms of its moments and distribution. From \cite{W15}, explicit expressions for the joint moments of multivariate aggregate discounted claims involving time delays in a renewal process are computable recursively. However, an analytic expression for the lower moment, which appears in the integral term, is required for the calculation of each higher moment. Therefore, our objective here is to develop simpler approximation methods, such as asymptotics and bounds, for the joint moments of a multivariate discounted reward function incorporating time delays. To the best of our knowledge, these kinds of approximation approaches have never been developed in the analysis of a multivariate renewal-reward process with discounted inputs and time delays (interpreted, in an actuarial context, as a multivariate discounted IBNR claim process). Also, in a queueing context, a relationship between the multivariate discounted IBNR claim process and quantities studied in infinite server queues with correlated batch arrivals and a discounting factor is exploited here for the first time.
In particular, some asymptotic results regarding queueing theoretic applications, such as the workload in the $G/ \cdot /\infty$ queue, are obtained. For the number of IBNR claims, a direct relation to the number of customers in infinite server queues with batch arrivals is well known, as discussed in the literature, e.g. \cite{Karlsson}, \cite{LWX16}, \cite{WD01}, \cite{WDC02}, \cite{WD09}. The transient behavior of the distribution of the number of customers in various multichannel bulk queues was studied in \cite{CT83}; see also \cite{BR69}. Moreover, when interarrival times are light tailed, we are able to quantify the approximation precision by providing a multi-term asymptotic expansion for the first order moment of our process. We note that this approach was previously used in \cite[Lemma 1]{BS75}, where a two-term asymptotic expansion for a general renewal reward process without delays was provided; see also \cite{PNT15} for an expansion of the covariance. In the following, we describe the model, referring to renewals and rewards as batches and inputs, respectively. We shall suppose that the batch arrival process $\{N_t\}_{t\geq 0}$ is a renewal process with a sequence of independent and identically distributed (iid) positive continuous random variables (rvs) $\{T_i\}^\infty_{i=1}$ representing the arrival time of the $i$th batch, with $T_0 \equiv 0$. Let $\tau_i=T_i-T_{i-1}$ be the interarrival time of the $i$th batch, with common probability density function (pdf) $f$, distribution $F$, and Laplace transform $\L^\tau(s)=\mathbbm{E}[e^{-s\tau_1}]$ for $s\ge 0$. Each batch arrival contains several ($k$) types of inputs, which may occur simultaneously from the same renewal event (e.g. \cite{PNT15}, \cite{W15}). Let us denote the $j$-type of input from the $i$th batch by $X_{i,j}$, where $\{X_{i,j}\}^\infty_{i=1}$ is a sequence of iid rvs. A vector of multivariate variables is denoted by $X=(X_1,X_2,\ldots,X_k)$.
Here, multivariate input values are assumed to depend on the occurrence time and/or on the time adjusted by adding a random delay. The time delay for the $j$-type of input from the $i$th batch is denoted by $L_{i,j}$, where $\{L_{i,j}\}^\infty_{i=1}$ is a sequence of iid rvs with common pdf $w_j$ and distribution $W_j$. For the sake of simplicity, let us assume a constant force of interest $\delta$ for discounting input values to time 0. We now define the following discounted compound delayed process \begin{equation}\label{Zdt} Z(t)=(Z_1(t),\ldots,Z_k(t))=Z^\delta(t),\qquad t\ge 0, \end{equation} where \begin{equation}\label{Zjt} Z_j(t):=\sum_{i=1}^{N_t} e^{-\delta(T_i+L_{i,j})} X_{i,j}\mathbbm{1}_{\{ T_i+L_{i,j}>t\}}= \sum_{i=1}^{\infty} e^{-\delta(T_i+L_{i,j})} X_{i,j}\mathbbm{1}_{\{T_i\leq t< T_i+L_{i,j}\}},\quad j\in\{1,\ldots,k\}. \end{equation} In most cases in this paper, we suppose that the discount factor $\delta$ is real and non negative, because this has a direct actuarial or queueing interpretation. However, it has to be pointed out that, mathematically speaking, Definitions (\ref{Zdt}) and (\ref{Zjt}) can in some cases be extended to some {\it complex} $\delta$, as is the case in Section \ref{sec:workload} where $\delta\in\mathbbm{C}$ is needed for technical purposes. Throughout the paper, we assume that the vector $X$ admits joint moments of all orders. Let us denote by {\bf (A)} the following assumption: $$ {\bf (A)}\quad \mbox{Density } f(\cdot) \mbox{ is bounded.} $$ An important consequence of {\bf (A)} is the following result, the proof of which is given at the beginning of Section~\ref{sec:Proofs}. \begin{lemm}\label{lemma_density_upper_bound} If {\bf (A)} holds, then the associated renewal function $m:t\ge 0\mapsto \mathbbm{E}[N_t]$ admits a density $u(t)$, and the latter satisfies \begin{equation}\label{Expression_density_renewal} u(t)=\frac{d}{dt}m(t)=\sum_{j=0}^\infty f^{\star (j)}(t).
\end{equation} Besides, this density is upper bounded: there exists $C>0$ such that $u(t)\le C$ for all $t\ge 0$. \end{lemm} Not all results in this paper require Assumption {\bf (A)} to hold; we refer to it only when it is needed. \noindent {\bf Notation. } The $n$th joint moment of $Z(t)$ is denoted by \begin{equation}\label{Mnt} M_n(t)=\mathbbm{E} \bigg[\prod^k_{j=1}Z_j^{n_j}(t)\bigg],\qquad t\geq 0,~~n=(n_1,\ldots,n_k)\in \mathbbm{N}^k. \end{equation} For notational convenience, we let, for all $n=(n_1,\ldots,n_k)\in \mathbbm{N}^k$ and $t\ge 0$, \begin{eqnarray} \eta_n&:=&\sum_{i=1}^k n_i,\nonumber\\ \tilde{M}_n(t)&:=& e^{\eta_n \delta t}M_n(t)\label{M_tilde},\\ \tilde{b}_n(t)&:=& e^{\eta_n \delta t}b_n(t)\label{b_tilde}. \end{eqnarray} We define the natural partial order on the set $\mathbbm{N}^k$ as follows: two vectors $\ell$ and $n$ in $\mathbbm{N}^k$ satisfy $\ell<n$ if $\ell_i\le n_i$ for all $i=1,\ldots,k$ and $\ell_i<n_i$ for at least one $i$; in that case $\eta_n> \eta_\l$. Let us introduce, for all $n\in \mathbbm{N}^k$, $$C_\ell=C_{\ell,n}:=\{ j=1,\ldots,k|\ \ell_j<n_j\}\subset \{1,\ldots,k \}.$$ We will denote by $n(i)\in\mathbbm{N}^k$ the vector whose $j$th entry is $\delta_{i,j}$, where $\delta_{i,j}$ is the Kronecker delta. It is convenient to introduce the function $t\mapsto\varphi_\l(t)=\varphi_{\l,n}(t)$ for $\l < n$: \begin{equation} \varphi_\l(t)= \mathbbm{E}\bigg[ e^{(\eta_n-\eta_\ell) \delta (t-\tau_1)}\tilde{M}_\ell(t-\tau_1)\prod_{j\in C_\ell} \overline{\omega}_{(n_j-\ell_j)\delta,j}(t-\tau_1) \,\mathbbm{1}_{\{\tau_1<t\}}\bigg], \label{phi_l} \end{equation} where \begin{equation}\label{bomega} \overline{\omega}_{\delta,i}(t)=\int^\infty_t e^{-\delta y}dW_i(y).
\end{equation} Then, using (\ref{phi_l}), $\tilde{b}_n(t)$ in (\ref{b_tilde}) admits the following expression, obtained with the help of Equation (34) in \cite{W15}: \begin{equation} \tilde{b}_n(t)=\sum_{\ell < n} {{n_1}\choose{\ell_1}}\cdots{{n_k}\choose{\ell_k}} \mathbbm{E}\bigg[ \prod_{j=1}^k X_j^{n_j-\ell_j}\bigg]\varphi_{\ell,n}(t), \label{exp_tilde_b} \end{equation} and $\tilde{M}_n(t)$ defined in (\ref{M_tilde}) satisfies the following renewal equation (a direct consequence of Theorem 3 in \cite{W15}): \begin{equation} \tilde{M}_n(t)= \tilde{b}_n(t) +\tilde{M}_n\star F(t),\qquad t\geq 0,\quad n\in\mathbbm{N}^k\!\setminus \!\{n(i),\ i=1,\ldots,k\}. \label{renewal_M_tilde} \end{equation} Lastly, throughout the paper, ${\cal E}(\mu_i)$ denotes an exponential distribution with mean $1/\mu_i$. \noindent{\bf Structure of paper.} For ease of presentation, in Section \ref{sec:main_results} we provide our main results without proof. These include (i) the asymptotic behavior of the joint moments of $Z(t)$ given in (\ref{Mnt}), (ii) bounds for (\ref{M_tilde}), (iii) a higher order asymptotic expansion of (\ref{M_tilde}) when $n=n(i)$ and time delays are exponential, (iv) the convergence in distribution of $e^{\delta t}Z(t)$, and (v) the joint moments of the limiting distribution of $e^{\delta t}Z(t)$ in the case of exponential time delays. In Section \ref{sec:App_queues}, we focus on queueing theoretic applications involving a particular $G/G/\infty$ queue with correlated batch arrivals, and determine the asymptotic expected workload as well as the covariance of the workload and queue size in the $G/M/\infty$ queue. Section \ref{sec:special} presents limiting moments and covariances when $k=1$ or $2$ in (\ref{Mnt}), and we take this opportunity to give a (re)interpretation of Little's Law. Finally, the proofs of all main results and applications are presented in Section \ref{sec:Proofs}.
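Before moving to the main results, we note that the process (\ref{Zjt}) is straightforward to simulate, which is convenient for sanity-checking the formulas that follow. The sketch below is purely illustrative and not part of the model analysis: Poisson batch arrivals, multinomial batch composition, exponential delays and the specific parameter values are all assumptions made only for this example.

```python
import numpy as np

def simulate_Z(t, delta, lam, M, p, mu, rng):
    """One realization of Z(t) in (Zjt): Poisson(lam) batch arrivals on [0, t],
    multinomial(M, p) composition X_i and an Exp(mu[j]) delay for type j."""
    k = len(p)
    arrivals, s = [], rng.exponential(1.0 / lam)
    while s <= t:                          # renewal (here Poisson) epochs T_i
        arrivals.append(s)
        s += rng.exponential(1.0 / lam)
    Z = np.zeros(k)
    for Ti in arrivals:
        X = rng.multinomial(M, p)          # correlated inputs from one batch
        L = rng.exponential(1.0 / np.asarray(mu))   # one delay per input type
        in_system = Ti + L > t             # indicator {T_i <= t < T_i + L_{i,j}}
        Z += np.exp(-delta * (Ti + L)) * X * in_system
    return Z

rng = np.random.default_rng(0)
lam, M, p, mu, t = 2.0, 1, [0.3, 0.7], [1.0, 1.0], 20.0
est = np.mean([simulate_Z(t, 0.0, lam, M, p, mu, rng) for _ in range(4000)], axis=0)
print(est)  # Monte Carlo estimate of (E[Z_1(t)], E[Z_2(t)]) for delta = 0
```

With $\delta=0$ and $M=1$, the two classes form independently thinned $M/M/\infty$ queues, so for large $t$ the printed estimates should be close to $(\lambda p_1/\mu_1,\lambda p_2/\mu_2)=(0.6,1.4)$.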
\section{Main results}\label{sec:main_results} \subsection{Asymptotics and bounds} In this section, we study asymptotic behaviors of, and bounds for, the joint moments of the process $Z(t)$ in (\ref{Zdt}), as defined in (\ref{Mnt}). \begin{prop}\label{prop_asymptotics} One has the following asymptotic result for the moments of the discounted compound delayed process, for all $n\in\mathbbm{N}^k$: \begin{equation*} M_n(t)\sim \chi_n e^{-\eta_n \delta t},\qquad t\to\infty, \end{equation*} where \begin{equation} \chi_n:=\frac{\displaystyle\int_0^\infty \tilde{b}_n(t) dt}{\displaystyle\mathbbm{E}[\tau_1]}, \label{asymp_tilde_M} \end{equation} and $\tilde{b}_n(t)$ is given by (\ref{exp_tilde_b}). \end{prop} \begin{proof} See Section \ref{proof_prop_asymptotics}. \hfill $\Box$ \end{proof} It turns out that the coefficients $\chi_n$, $n\in\mathbbm{N}^k$, are in general not directly computable: the function $t\mapsto \tilde{b}_n(t)$ appearing in the integral (\ref{asymp_tilde_M}) does not have a simple expression and is defined recursively in terms of the functions $ t\mapsto \tilde{M}_\ell(t)$, $\ell<n$. We thus provide in the following easily computable bounds for the $\chi_n$'s, together with an upper bound for $\tilde{M}_n(t)$ that is uniform in $t$. \begin{prop}\label{tildeMnRn} Let us suppose that {\bf (A)} holds.
One has the following bounds for all $n\in\mathbbm{N}^k$: \begin{eqnarray} \chi_n & \le & \frac{1}{\mathbbm{E}(\tau_1)}\,R_n,\label{bound_Cn}\\ \tilde{M}_n(t)& \le & R_n,\quad \forall t\ge 0, \label{bound_M_tilde} \end{eqnarray} where $(R_n)_{n\in\mathbbm{N}^k}$ is defined recursively by \begin{equation} \begin{array}{rcl} R_{n(i)}&=& C \mathbbm{E}[X_i] \delta^{-1} \left\{1-\mathbbm{E}\left[ e^{-\delta L_i}\right]\right\}, \quad i=1,\ldots,k,\\ R_n&=& \displaystyle C \sum_{\ell < n} {{n_1}\choose{\ell_1}}\cdots{{n_k}\choose{\ell_k}} \mathbbm{E}\bigg[ \prod_{j=1}^k X_j^{n_j-\ell_j}\bigg] \min_{i\in C_\ell} \mathbbm{E}[L_i]\cdot R_\ell,\quad n\in\mathbbm{N}^k\!\setminus \!\{n(i),\ i=1,\ldots,k\}. \end{array} \label{rec_Rn} \end{equation} Here, the constant $C$ is the upper bound for the renewal density $u(t)$ in Lemma \ref{lemma_density_upper_bound}. \end{prop} \begin{proof} See Section \ref{proof_tildeMnRn}. \hfill $\Box$ \end{proof} We remark that the coefficients $R_n$, $n\in \mathbbm{N}^k$, in (\ref{rec_Rn}) are easily obtained: $R_n$ is a linear function of the $R_\l$, $\l <n$, and only involves the joint moments of $X=(X_1,...,X_k)$, the Laplace transforms of $L_1,\ldots,L_k$, and their expectations. Proposition \ref{tildeMnRn} thus provides a uniform upper bound for $ \tilde{M}_n(t)$; however, it says little about what happens when $t$ is small. The following bounds are established under the condition that the interarrival time $\tau_1$ has either an increasing or a decreasing failure rate.
\begin{prop}[Transient bounds]\label{twosidedbounds} If $\tau_1$ has an increasing failure rate (IFR), then one has the lower bound, for all $n\in\mathbbm{N}^k$: $$\tilde{M}_n(t)\ge h_n(t),\quad \forall t\ge0 .$$ Conversely, if $\tau_1$ has a decreasing failure rate (DFR), then one has the upper bound, for all $n\in\mathbbm{N}^k$: $$\tilde{M}_n(t)\le h_n(t),\quad \forall t\ge0 ,$$ where $t\mapsto h_n(t)$ for $n\in\mathbbm{N}^k$ is defined recursively by \begin{equation} \begin{array}{rcl} h_{n(i)}(t)&=& \mathbbm{E}[X_i] \int^t_0 \overline{\omega}_{\delta,i}\star H_\delta(y)dy, \quad i=1,\ldots,k,\\ h_n(t)&=& \displaystyle \sum_{\ell < n} {{n_1}\choose{\ell_1}}\cdots{{n_k}\choose{\ell_k}} \mathbbm{E}\bigg[ \prod_{j=1}^k X_j^{n_j-\ell_j}\bigg]f(0+)\\ & & \times \Big[ \int^t_0 \int^y_0 e^{- \eta_n \delta z}h_\ell(y\!-\!z) e^{-\eta_\ell \delta(y\!-\!z)} \prod_{j\in C_\ell} \overline{\omega}_{(n_j\!-\!\ell_j)\delta,j}(y\!-\!z)dF(z) dy\Big],\quad n\in\mathbbm{N}^k\!\setminus \!\{n(i),\ i\!=\!1,\!\ldots\!,k\}, \end{array} \label{rec_hnt} \end{equation} where $ f(0+)=\lim_{t\to 0+}f(t)$ and $H_\delta(t)=\int^t_0 e^{-\delta y}dF(y)$ is the discounted interarrival distribution. \end{prop} \begin{proof} See Section \ref{proof_twosidedbounds}. \hfill $\Box$ \end{proof} \subsection{High order expansions}\label{sec:high} In this section, we consider the case $n=n(i)$ for $i\in\{1,\ldots,k\}$, in order to study how fast $\tilde{M}_n(t)$ converges, as $t\rightarrow \infty$, to the limit $\chi_n$ given in Proposition \ref{prop_asymptotics}.
As $\tilde{M}_n(t)$ satisfies the renewal equation (\ref{renewal_M_tilde}), its solution may be expressed as \begin{equation}\label{solMnt} \tilde{M}_n(t)=\int^t_0 \tilde{b}_n(t-s)dm(s), \end{equation} and from Proposition \ref{prop_asymptotics}, recall that \begin{equation}\label{chind} \tilde{M}_n(t) \longrightarrow \chi_n =\frac{\int_0^\infty \tilde{b}_n(t) dt}{\mathbbm{E}[\tau_1]}, \end{equation} where here $\chi_n=\chi_{n(i)}=\{\mathbbm{E}[X_i]\mathbbm{E}[L_i]\tilde{w}_{1,i}(\delta)\}/\mathbbm{E}[\tau_1]$ and $\tilde{w}_{1,i}(\delta)=\int_0^\infty e^{-\delta x}\overline{W}_i(x)dx/\mathbbm{E}[L_i]$, as given later in Corollary \ref{Coro_single_type_of_input}, Expression (\ref{single_type_input_chi1}). From \cite{DR16}, we use a higher order expansion of the function $v(x)$, which is related to the renewal function through \begin{equation}\label{vx} v(x):=m(x)-\frac{x}{\mathbbm{E}[\tau_1]}-\frac{\mathbbm{E}[\tau_1^2]}{2\mathbbm{E}[\tau_1]^2}, \end{equation} where $F$ here is non-lattice (as it admits a density) and light tailed, i.e. there exists $R>0$ such that \begin{equation}\label{light_tailed_cond} \int_0^\infty e^{Rx}dF(x)={\mathbb E}[e^{R\tau_1}]<+\infty . \end{equation} The function $v$ admits the following expansion: \begin{equation}\label{vxexpan} v(x)=\sum^N_{j=1}\gamma_j e^{-z_j x} + o (e^{-z_N x}), \end{equation} where the $z_j$'s are solutions of $\mathbbm{E}(e^{z_j \tau_1})=1$ lying in the range $0 \leq \mathrm{Re}(z_j)\leq R$ for some $R>0$, ordered so that $\mathrm{Re}(z_j)\leq \mathrm{Re}(z_{j+1})$. For (\ref{vxexpan}) to hold, we additionally require all roots $z_1,\ldots,z_N$ to be of multiplicity $1$, i.e.
such that $\left.\frac{\partial}{\partial z} \mathbbm{E}(e^{z \tau_1})\right|_{z=z_j}\neq 0$ (this condition is not necessary, but it enables us to avoid some technicalities later), in which case one has $$ \gamma_j=-\frac{1}{z_j \left.\frac{\partial}{\partial z}\mathbbm{E}(e^{z \tau_1})\right|_{z=z_j}}, \quad j=1,\ldots,N, $$ see \cite[Theorem 3]{DR16}. Although they are complex, the $z_j$'s actually come in conjugate pairs: if $z_j$ satisfies $\mathbbm{E}(e^{z_j \tau_1})=1$ then so does $\overline{z}_j$, so that the right-hand side of (\ref{vxexpan}) is in fact real. Furthermore, in the following result we need to write the $o(e^{-z_N x})$ term in (\ref{vxexpan}) in the form \begin{equation}\label{function_eta} o(e^{-z_N x})=\eta(x) e^{-z_N x} \end{equation} for some function $\eta(x)$ such that $ \lim_{x\to \infty}\eta(x)=0$. \begin{theorem}\label{theorem_expansion} Let us assume that the time delays $L_i$ are ${\cal E}(\mu_i)$ distributed, and define \begin{equation}\label{Ai} A_i=-\mathbbm{E}[X_i]\cdot\frac{\mu_i}{\mu_i+\delta} \bigg[ \frac{\mathbbm{E}[\tau_1^2]}{ 2\mathbbm{E}[\tau_1]^2}+\sum^N_{k=1}\gamma_k \frac{\mu_i}{z_k-\mu_i}+\mu_i\int^{\infty}_0 \eta(s)e^{(\mu_i-z_N)s} ds \bigg]\L^{\tau} (-\mu_i) \end{equation} and \begin{equation}\label{Bki} B_{k,i}=\mathbbm{E}[X_i]\cdot\frac{\mu_i}{\mu_i+\delta} \Big[\gamma_k \frac{z_k}{z_k-\mu_i} \Big]\L^{\tau} (-z_k). \end{equation} Then $\tilde{M}_n(t)$ in (\ref{M_tilde}) satisfies the following high order expansion: \begin{equation}\label{high_order_expansion} \tilde{M}_n(t)=\chi_n +A_i^\ast e^{-\mu_i t}+\sum^N_{k=1} B_{k,i}e^{-z_k t}+ o(e^{-z_N t}),\qquad n=n(i), \end{equation} where $A_i^\ast = A_i-\frac{\mathbbm{E}[X_i]}{\mathbbm{E}[\tau_1]}\cdot\frac{1}{\mu_i+\delta} \L^{\tau}(-\mu_i)$, with $A_{i}$ in (\ref{Ai}) and $B_{k,i}$ in (\ref{Bki}). \end{theorem} \begin{proof} See Section \ref{proof_theorem_expansion}.
\hfill $\Box$ \end{proof} Note that in expansion (\ref{high_order_expansion}) the $B_{k,i}$'s are explicit. On the other hand, $A_i$ in (\ref{Ai}) features an integral involving the function $x\mapsto \eta(x)$, which is not explicit in general. This means that (\ref{high_order_expansion}) is explicit only if we truncate the expansion at the $i_0$th term, where $i_0=\max\{j=1,...,N|\ \mathrm{Re}(z_j)<\mu_i\}$. We could write the expansion in this truncated way, but we prefer to keep a form that is as general as possible. Besides, we point out on a similar note that an expansion akin to (\ref{high_order_expansion}) was provided in \cite[Lemma 1]{BS75} for a general renewal reward process in the particular context where there is no time delay, under the weaker assumption that interarrival times and rewards admit a finite first moment. \begin{remark}[Dependence of (\ref{high_order_expansion}) on $\delta$]\label{rem_dep_delta} {\normalfont Upon inspecting (\ref{Ai}) and (\ref{Bki}), one notices that $$ |A_i^*|, \ |B_{k,i}| \quad \le \frac{M}{\mu_i+\delta},\quad k=1,\ldots,N, $$ for all $\delta\ge 0$, where $M>0$ is a constant independent of $\delta$. On further analysis, one also checks that when $\delta$ is {\it complex} and satisfies $|\delta|<\mu_i$, then \begin{equation}\label{dep_delta1} |A_i^*|, \ |B_{k,i}| \quad \le \frac{M}{\mu_i-|\delta|},\quad k=1,\ldots,N. \end{equation} In particular, this inequality also holds when $\delta$ is {\it negative} and larger than $-\mu_i$. Hence, from (\ref{dep_delta1}), it follows that $\tilde{M}_n(t)$ and $\chi_n$ are well defined for such a complex $\delta$. This will in particular be the case in Section \ref{sec:workload}.
Concerning the term $o(e^{-z_N t})$ in (\ref{high_order_expansion}), one carefully checks from the proof of Theorem \ref{theorem_expansion} that \begin{equation}\label{dep_delta2} |o(e^{-z_N t})|\le \frac{1}{\mu_i -|\delta|} \,\zeta(t) e^{- \mathrm{Re}(z_N)t}, \end{equation} when $\delta\in\mathbbm{C}$, $|\delta|<\mu_i$, for some function $\zeta(.)$ independent of $\delta$ and satisfying $\lim_{t\to\infty}\zeta(t)=0$.} \end{remark} \begin{remark}\label{rem_release_expo}\normalfont The exponential distribution assumption for $L_i$ may seem a bit restrictive. In fact, the result in Theorem \ref{theorem_expansion} can be extended similarly to the case of a combination of exponentials. For example, assume that $w_i(x)=\sum^2_{j=1} p_{ij}\mu_{ij}e^{-\mu_{ij}x}$ where $\sum^2_{j=1}p_{ij}=1$. Then a key step in the proof of Theorem \ref{theorem_expansion} in Section \ref{proof_theorem_expansion} is to establish a similar structure for the discounted survival function of $W_i$. For instance, (\ref{abc}) becomes \begin{equation*} \int^\infty_{z-\tau_1} e^{-\delta s} dW_i(s)=\sum^2_{j=1}\frac{p_{ij}\mu_{ij}}{\mu_{ij}+\delta} e^{-(\mu_{ij}+\delta)(z-\tau_1)}, \end{equation*} which is again a combination of exponentials. It is thus not hard to see that the remaining steps are similar; the details are omitted here for brevity. \end{remark} \subsection{Convergence in distribution of renormalized process} The proof of Proposition \ref{prop_asymptotics} shows that $\tilde{M}_n(t)$ converges towards $\chi_n$. Since the $\tilde{M}_n(t)$ are the joint moments of the $\mathbbm{R}^k$ valued process $\{e^{\delta t}Z(t)\}_{t\ge 0}$, this convergence result suggests in turn that the process itself converges in distribution. Since convergence of moments does not always imply convergence in distribution, we give in this section sufficient conditions under which the latter holds.
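Before stating the result, here is a quick empirical illustration of this convergence (a Monte Carlo sketch only; Poisson arrivals, unit inputs $X_1\equiv 1$, ${\cal E}(\mu)$ delays and the specific parameter values are assumptions made for the example). The empirical first moment of $e^{\delta t}Z_1(t)$ at two horizons is compared with the limiting value $\mathbbm{E}[X_1]/\big(\mathbbm{E}[\tau_1](\mu+\delta)\big)$ obtained later in (\ref{C_1_expo}).

```python
import numpy as np

def rescaled_Z(t, delta, lam, mu, rng):
    """One realization of e^{delta t} Z_1(t): each customer still in the system
    at time t contributes exp(-delta * residual delay)."""
    arrivals, s = [], rng.exponential(1.0 / lam)
    while s <= t:
        arrivals.append(s)
        s += rng.exponential(1.0 / lam)
    T = np.asarray(arrivals)
    L = rng.exponential(1.0 / mu, size=T.size)
    resid = T + L - t                      # residual delays T_i + L_i - t
    return np.exp(-delta * resid[resid > 0]).sum()

rng = np.random.default_rng(1)
lam, mu, delta = 2.0, 1.0, 0.5
m = [np.mean([rescaled_Z(t, delta, lam, mu, rng) for _ in range(4000)])
     for t in (8.0, 16.0)]
limit = lam / (mu + delta)   # E[X_1]/(E[tau_1](mu + delta)) = 4/3 here
print(m, limit)
```

Both estimates should already be close to the limit, reflecting the exponentially fast convergence quantified in Section \ref{sec:high}.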
\begin{theorem}\label{theo_conv_distrib} Let us suppose that {\bf (A)} holds and that each rv $X_j$, $j=1,\ldots,k$, is a.s. bounded by some constant $M$. Then one has the following convergence in distribution for $e^{\delta t}Z(t)$: \begin{equation*}\label{conv_distribution} e^{\delta t}Z(t)\stackrel{\cal D}{\longrightarrow} {\cal Z}_\infty,\quad t\to \infty , \end{equation*} where ${\cal Z}_\infty=({\cal Z}_{\infty, 1},\ldots,{\cal Z}_{\infty,k})={\cal Z}_\infty(\delta)$ is a light tailed vector valued rv with the joint moments \[ \mathbbm{E}\bigg[ \prod_{i=1}^k {\cal Z}_{\infty, i}^{n_i}\bigg]=\chi_n=\chi_n(\delta) \] given by (\ref{asymp_tilde_M}) for $n\in \mathbbm{N}^k$. \end{theorem} \begin{proof} See Section \ref{proof_theo_conv_distrib}. \hfill $\Box$ \end{proof} \subsection{Exponentially distributed delays}\label{sec:expo_delays} Let us note that Theorem \ref{theo_conv_distrib} actually holds for general light tailed interarrival times $\tau_i$ satisfying {\bf (A)} and for general time delays $L_j$. In practice, it is not easy to compute the limiting moments $\chi_n$, $n\in\mathbbm{N}^k$, given by (\ref{asymp_tilde_M}) explicitly, although they are obtainable recursively in principle. Hence, we shall now restrict ourselves to the case where the $L_j$'s are exponentially distributed. To simplify the analysis, we suppose that the $L_j$'s, $j=1,...,k$, are all ${\cal E}(\mu)$ distributed for some $\mu>0$. In the same spirit as in Remark \ref{rem_release_expo}, similar results may be obtained for more general cases, such as a mixture or a combination of exponentials. To begin, some notation is introduced. Let $\L^M_n(s)$ and $\L^b_n(s)$, for $s\ge 0$ and $n\in\mathbbm{N}^k$, be the Laplace transforms of $\tilde{M}_n(\cdot)$ and $\tilde{b}_n(\cdot)$, respectively: $$ \L^M_n(s):= \int_0^\infty e^{-sy}\tilde{M}_n(y)dy,\quad \L^b_n(s):= \int_0^\infty e^{-sy}\tilde{b}_n(y)dy. $$ Note that these Laplace transforms exist (i.e.
the integrals converge) for $s>0$ and $s\ge 0$, respectively, since $\tilde{M}_n(y)$ converges to the finite limit $\chi_n$ as $y\to \infty$, and $\tilde{b}_n(\cdot)$ is integrable (as proved in Proposition \ref{prop_asymptotics}). The following lemma gives a recursive expression for $\L^b_n(s)$. We denote by $|A|$ the cardinality of any finite set $A$. \begin{lemm}\label{lemma_recursive} When the time delays $L_j$ are ${\cal E}(\mu)$ distributed, the Laplace transform of $\tilde{b}_n(\cdot)$ in (\ref{exp_tilde_b}) is given by \begin{equation}\label{recursion_LT_b1} \L^b_{n(i)}(s)=\mathbbm{E}[X_i]\frac{\mu}{(\mu+\delta)(\mu+s)} \, \L^\tau (s) ,\qquad i=1,\ldots,k, \end{equation} and \begin{equation}\label{recursion_LT_b} \L^b_n(s)= B_{\mathbf{0}, n} \frac{\L^\tau (s)}{s+ |C_\mathbf{0}|\mu} +\sum_{\mathbf{0}<\l < n} B_{\l, n} \frac{\L^\tau (s)}{1-\L^\tau(s+|C_\l|\mu)}\, \L^b_\l(s+|C_\l|\mu),\quad n\in\mathbbm{N}^k\!\setminus \!\{n(i),\ i=1,\ldots,k\}, \end{equation} where \begin{equation}\label{Bln} B_{\l, n}:={{n_1}\choose{\ell_1}}\cdots{{n_k}\choose{\ell_k}} \mathbbm{E}\bigg[ \prod_{j=1}^k X_j^{n_j-\ell_j}\bigg] \prod_{j\in C_\l} \frac{\mu}{\mu+(n_j-\l_j)\delta}, \end{equation} and we recall that $C_\l=C_{\l,n}=\{ j=1,\ldots,k|\ \ell_j<n_j\}\subset \{1,\ldots,k \}$. \end{lemm} \begin{proof} See Section \ref{proof_lemma_recursive}. \hfill $\Box$ \end{proof} \begin{theorem}\label{expser} Let us denote $D_n(j):=\L^b_n(j\mu)$ for $j\in\mathbbm{N}$ and $n\in \mathbbm{N}^k$.
When the time delays $L_j$ are ${\cal E}(\mu)$ distributed, the joint moments $\chi_n=\chi_n^\delta$, $n\in\mathbbm{N}^k$, of $\mathcal{Z}_\infty=\mathcal{Z}_\infty^\delta$ (the limiting distribution of $e^{\delta t}Z(t)$) are given by \begin{equation} \chi_{n(i)}=\dfrac{\mathbbm{E}[X_i]}{\mathbbm{E}[\tau_1]}\bigg(\dfrac{1}{\mu+\delta}\bigg), \qquad i=1,\ldots,k, \label{C_1_expo} \end{equation} and \begin{equation} \chi_n= \dfrac{1}{\mathbbm{E}[\tau_1]}\left(B_{\mathbf{0}, n} \frac{1}{|C_\mathbf{0}|\mu} + \sum_{\mathbf{0}<\l < n} B_{\l, n} \frac{1}{1-\L^\tau(|C_\l|\mu)}\,D_\l(|C_\l|)\right),\quad n\in\mathbbm{N}^k\!\setminus \!\{n(i),\ i=1,\ldots,k\}, \label{C_n_expo} \end{equation} where the $D_n(j)$'s, for $j\in\mathbbm{N}$ and $n\in\mathbbm{N}^k$, are obtained recursively as: \begin{eqnarray} D_{n(i)}(j)&=& \mathbbm{E}[X_i]\frac{\mu}{(\mu+\delta)([j+1]\mu)} \,\L^\tau (j\mu),\quad i=1,\ldots,k, \label{D_n_i,expression}\\ D_n(j)&=& B_{\mathbf{0}, n} \frac{\L^\tau (j\mu)}{[j+|C_\mathbf{0}|]\mu} +\sum_{\mathbf{0}<\l < n} B_{\l, n} \frac{\L^\tau (j\mu)}{1-\L^\tau([j+|C_\l|]\mu)}\,D_\l([j+|C_\l|]),\quad n\in\mathbbm{N}^k\!\setminus \!\{n(i),\ i=1,\ldots,k\},\nonumber\\ \label{D_n,expression} \end{eqnarray} with $B_{\l,n}$ in (\ref{Bln}). \end{theorem} \begin{proof} From (\ref{asymp_tilde_M}), using (\ref{recursion_LT_b1}) and (\ref{recursion_LT_b}) with $s=0$, we find (\ref{C_1_expo}) and (\ref{C_n_expo}), respectively. In addition, (\ref{D_n_i,expression}) and (\ref{D_n,expression}) are obtained by setting $s=j\mu$ in (\ref{recursion_LT_b1}) and (\ref{recursion_LT_b}), respectively. \hfill $\Box$ \end{proof} We remark that a close look at (\ref{C_n_expo}) and (\ref{D_n,expression}) reveals that computing the {\it infinite} sequences $(D_\l(j))_{j\in\mathbbm{N}}$ for all $\l<n$ is not needed to obtain $\chi_n$. Since $|C_\l|$ is bounded by $k$, it is not hard to see that one only needs to compute (recursively) $D_\l(j)$ for $\l<n$ and $j\le k\eta_n$ (i.e.
only for a finite number of $j$'s). Moreover, the values of $D_n(j)$ may be stored in memory while computing the successive $\chi_n$ as $\eta_n$ increases, so they need not be recomputed each time. Hence the algorithm (\ref{C_n_expo}) is relatively inexpensive. \section{Applications to infinite server queues}\label{sec:App_queues} We now consider the following application related to queueing theory. To begin, we restate the model assumptions described in Section \ref{intro} in the terminology of queueing theory. Let us consider a single queue containing batches of $k$ types of customers in the infinite-server model. Batches arrive according to a renewal process $\{N_t\}_{t\geq 0}$ with corresponding arrival times $ \{T_i\}^\infty_{i=1}$. At each arrival instant $T_i$, a batch of (correlated) numbers of customers $(X_{i,1},\ldots,X_{i,k})$ arrives in the system, with all customers of type $j\in\{1,\ldots,k\}$ within the batch having the same service time $L_{i,j}$. The random sequence $(X_{i,1},\ldots,X_{i,k})$, $i\in\mathbbm{N}$, is iid and distributed as $(X_{1},\ldots,X_{k})$. In order to comply with the previous section, we suppose furthermore that the $X_j$'s are upper bounded, i.e. there exists some $M\in\mathbbm{N}$ such that all $X_j$'s have support included in $\{0,\ldots,M\}$. The service times $(L_{i,j})_{(i,j)\in \mathbbm{N}^2}$ are assumed to be independent, although $L_{i,1},\ldots,L_{i,k}$ may have different distributions, i.e. service times differ according to the customer class. \subsection{$G/G/\infty$ queue with correlated batch arrivals and customer classes}\label{GMinf} We are first interested in the process $Z(t)=Z^\delta(t)=\{(Z_1(t),\ldots,Z_k(t))\}_{t\ge 0}$ defined in (\ref{Zdt}). Note in particular that when $\delta=0$, $Z_j(t)$ is the number of customers of class $j\in\{1,\ldots,k\}$ in the system at time $t$.
When $\delta>0$, $Z(t)$ has no direct queueing interpretation; $Z_j(t)$ can be seen as the number of customers of class $j$ penalized with respect to their departure times through a discount with rate $\delta$. Another interpretation of the rescaled process $ e^{\delta t}Z(t)$ is given in the upcoming Remark \ref{Rem_interpretation_queueing}. Theorem \ref{theo_conv_distrib} then reads as follows in this context: \begin{theorem}\label{conv_distrib_queue} Let us suppose that {\bf (A)} holds. The following convergence holds for the discounted queue size: $$ e^{\delta t}Z(t)\stackrel{\cal D}{\longrightarrow} \mathcal{Z}_\infty,\quad t\to \infty , $$ where $\mathcal{Z}_\infty=\mathcal{Z}_\infty(\delta)=(\mathcal{Z}_{\infty, 1},\ldots,\mathcal{Z}_{\infty,k})$ is a light tailed vector valued rv with joint moments $\mathbbm{E}\left[ \prod_{i=1}^k \mathcal{Z}_{\infty, i}^{n_i}\right]=\chi_n=\chi_n (\delta) $ given by (\ref{asymp_tilde_M}) for $n\in \mathbbm{N}^k$. In particular, when $\delta=0$, we obtain that the joint numbers of customers within the different classes $(Z_1(t),\!\ldots\!,Z_k(t))$ converge in distribution as $t\to\infty$ to a stationary regime $\mathcal{Z}_\infty$ with joint moments given by $(\chi_n)_{n\in\mathbbm{N}^k}$. \hfill $\Box$ \end{theorem} \begin{example} As an illustration, let us look at the particular case where $(X_1,\ldots,X_k)$ follows a multinomial distribution with parameters $M\in\mathbbm{N}$ and probability vector $(p_1,\ldots,p_k)$, where $p_j\ge 0$ and $\sum_{j=1}^k p_j=1$. This models a situation where at each instant $T_i$ exactly $M$ customers arrive, each of which belongs to class $j$ with probability $p_j$; $X_j$ is then the number of customers of class $j$ in the batch. See Figure \ref{example_queue}.
\begin{figure}[!hbtp]% \centering \includegraphics[scale=0.7]{queue_batch}% \caption{\label{example_queue} The $G/G/\infty$ queue with multinomially distributed class batches $(X_1,\ldots,X_k)$.} \end{figure} When $M=1$, customers arrive according to the renewal process $\{N_t\}_{t\ge 0}$, and each arriving customer belongs to class $j$ with probability $p_j$. \end{example} \begin{remark}[Another queueing interpretation in the case $\delta>0$]\label{Rem_interpretation_queueing} \normalfont As pointed out at the beginning of this section, no direct interpretation of the vector valued process $\{ Z(t)\}_{t\ge 0}=\{ Z^\delta (t)\}_{t\ge 0}$ is available in a queueing context. One way to obtain a queueing interpretation is to use Fubini's theorem and notice that, for all $t\ge 0$ and $j=1,...,k$, \begin{equation}\label{Interpret_queue} \mathbbm{E}[e^{\delta t}Z_j(t)]=\mathbbm{E}\left[ \sum_{i=1}^{\infty} e^{-\delta(T_i+L_{i,j}-t)} X_{i,j}\mathbbm{1}_{\{T_i\leq t< T_i+L_{i,j}\}}\right]=\mathbbm{E}\left[ \sum_{i=1}^{\infty} \mathbbm{1}_{\{ T_i+L_{i,j}-t\le E_\delta \}} X_{i,j}\mathbbm{1}_{\{T_i\leq t< T_i+L_{i,j}\}}\right], \end{equation} where $E_\delta$ is an ${\cal E}(\delta)$ distributed rv independent of everything else. Since $ T_i+L_{i,j}-t$ is the residual service time of the $i$th batch of customers of size $X_{i,j}$, (\ref{Interpret_queue}) can then be interpreted as {\it the expected number of customers at time $t$ whose residual service time does not exceed the horizon $E_\delta$}, where $\delta>0$ is arbitrary. Thus, a direct consequence of Theorem \ref{conv_distrib_queue} is that this expected number converges towards $\chi_{n(j)}(\delta)=\mathbbm{E}[\mathcal{Z}_{\infty, j}(\delta)]$; see the upcoming Remark \ref{Little_revisit} for another interesting insight into this convergence.
\end{remark} Similarly to what was observed at the beginning of Section \ref{sec:expo_delays} concerning Theorem \ref{theo_conv_distrib}, Theorem \ref{conv_distrib_queue} holds for any light tailed interarrival times (satisfying {\bf (A)}) and service times. However, computing the $\chi_n$'s, $n\in \mathbbm{N}^k$, is theoretically feasible but practically complicated, as explained just before Proposition \ref{tildeMnRn}. On the other hand, the case where the $L_{i,j}$ are {\it exponentially distributed}, i.e. the $G/M/\infty$ queue with multiple customer classes, is much more tractable, and one may use the procedure given in Theorem \ref{expser} to compute the $\chi_n$'s much more easily. \subsection{Asymptotics for the workload of the $G/M/\infty$ queue}\label{sec:workload} We now turn to the asymptotic behaviour of the workload $D(t)$ of the queue when $k=1$, defined as the time needed to empty the queue at time $t$ if there are no arrivals afterwards. As we deal with one queue only, we drop the second subscript in $L_{i,1}$ for the $i$th service time (i.e. write $L_i$ for $i\in\mathbbm{N}$), and denote by $L$ the generic service time. The workload has the expression $$ D(t):= \sum_{i=1}^\infty (T_i+L_i-t)\mathbbm{1}_{\{ T_i\le t <T_i+L_i\}}, $$ and is obtained from $\tilde{Z}(t,\delta):=e^{\delta t} Z_1(t)$ as \begin{equation} D(t)=\left.-\frac{\partial}{\partial \delta}\tilde{Z}(t,\delta)\right|_{\delta=0}, \label{workload_size} \end{equation} where $Z_1(t)$ is the first entry of the process $Z(t)$ (i.e. (\ref{Zjt}) with $k=1$). We assume in this subsection that all $X_{i,1}$, $i\in\mathbbm{N}$, are equal to one. In that case, $Z_1(t)$ in (\ref{Zjt}) is, when $\delta=0$, the size of this infinite server queue at time $t$.
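The workload just defined is also easy to estimate by simulation. The following sketch (assuming Poisson arrivals, so a particular $G/M/\infty$ queue, with parameter values chosen purely for illustration) estimates $\mathbbm{E}[D(t)]$ for a large $t$; it can be compared with the limit $\mathbbm{E}[L^2]/(2\mathbbm{E}[\tau_1])$ given in Theorem \ref{workload} below.

```python
import numpy as np

def workload(t, lam, mu, rng):
    """D(t): sum of residual service times of customers still in system at t."""
    arrivals, s = [], rng.exponential(1.0 / lam)
    while s <= t:
        arrivals.append(s)
        s += rng.exponential(1.0 / lam)
    T = np.asarray(arrivals)
    L = rng.exponential(1.0 / mu, size=T.size)   # Exp(mu) services: G/M/infinity
    resid = T + L - t
    return resid[resid > 0].sum()

rng = np.random.default_rng(2)
lam, mu, t = 2.0, 1.0, 20.0
est = np.mean([workload(t, lam, mu, rng) for _ in range(4000)])
limit = (2.0 / mu**2) * lam / 2.0     # E[L^2]/(2 E[tau_1]) = 2 for these values
print(est, limit)
```

For these parameter values the estimate should be close to $2$, in line with (\ref{limiting_expected_workload}).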
\begin{figure}[!hbtp]% \centering \includegraphics[scale=0.8]{workload}% \caption{\label{fig:workload} Sample path of the workload for the $G/G/\infty$ queue.} \end{figure} A sample path of $D(t)$ is depicted in Figure \ref{fig:workload}. Let us note that $D(t)$ is also the sum of the residual times of all services in progress at time $t$. We are interested in the limiting expectation of the workload and in the covariance of the queue size and workload. We thus need to study the first two moments of $\tilde{Z}(t,\delta)$, i.e. the quantities $\tilde{M}_{n(1)}(t,\delta)={\mathbb E}[\tilde{Z}(t,\delta)]$ and $\tilde{M}_{2n(1)}(t,\delta)={\mathbb E}[\tilde{Z}(t,\delta)^2]$; sticking with the notation introduced in Section \ref{intro}, we write $n(1)=1$ and $2n(1)=2$ in the case $k=1$ for notational convenience. The main assumptions in this subsection are that the service time $L$ is ${\cal E}(\mu)$ distributed, i.e. \begin{equation}\label{assumption_light_tailed} {\mathbb E}[e^{x L}]=\frac{\mu}{\mu-x},\quad \forall x\in(-\infty,\mu ), \end{equation} so that the queue is the $G/M/\infty$ queue, and that the interarrival times are light tailed, i.e. Condition (\ref{light_tailed_cond}) holds for some $R>0$. A few lemmas are first required. For $r>0$, we define the disc $D_r\subset\mathbbm{C}$ centered at $0$ with radius $r$ by $$ D_r:=\{ z\in \mathbbm{C}|\ |z|\le r\}. $$ \begin{lemm}\label{Lemma_analytic} Let $a<\mu$. For all $t>0$, $\tilde{M}_{1}(t,\delta)$ and $\tilde{M}_{2}(t,\delta)$ are defined on $D_a$ and $D_{a/2}$, respectively. Furthermore, $\delta\mapsto \tilde{M}_{1}(t,\delta)$ and $\delta\mapsto \tilde{M}_{2}(t,\delta)$ are analytic on those sets, hence a fortiori at $\delta=0$.
\end{lemm} Note that one implication of the above lemma is that the quantities $\tilde{M}_{1}(t,\delta)$ and $\tilde{M}_{2}(t,\delta)$ (and, hence, $\tilde{Z}(t,\delta)$) are defined for some complex values of $\delta$, and in particular for negative real values (not only for $\delta \ge 0$). This is especially handy to express the workload as (\ref{workload_size}) and to define analyticity of $\tilde{M}_{1}(t,\delta)$ and $\tilde{M}_{2}(t,\delta)$ at $\delta=0$, which is needed to differentiate with respect to $\delta$ at $0$.\\ \begin{proof} See Section \ref{proof_workload}. \hfill $\Box$ \end{proof} \begin{lemm}\label{lemma_uniform_convergence} Let us suppose that {\bf (A)} holds and let $a<\mu$. Then $\tilde{M}_{1}(t,\delta)$ and $\tilde{M}_{2}(t,\delta)$ converge uniformly to $\chi_1(\delta)$ and $\chi_2(\delta)$ on $D_a$ and $D_{a/2}$ respectively as $t\to+\infty$. \end{lemm} \begin{proof} See Section \ref{proof_workload}. \hfill $\Box$ \end{proof} We are now ready to provide results on the long-term behaviour of the expected workload and of the covariance of the workload and the queue size. \begin{theorem}\label{workload} Let us suppose that {\bf (A)} holds. The limiting expected workload for the $G/M/\infty$ queue is given by \begin{equation}\label{limiting_expected_workload} \lim_{t\to\infty}\mathbbm{E}[D(t)]=\frac{1}{\mu^2 \mathbbm{E}[\tau_1]}=\frac{\mathbbm{E}[L^2]}{2\mathbbm{E}[\tau_1]}. \end{equation} The limiting covariance of the workload and the queue size is given by \begin{equation}\label{limiting_cov_workload} \lim_{t\to\infty}\mathbb{C}ov[D(t), Z_1(t,0)]=\frac{1}{\mu^2 \mathbbm{E}[\tau_1]}\left[1+ \frac{\L^\tau(\mu)}{1-\L^\tau(\mu)}-\frac{1}{\mu \mathbbm{E}[\tau_1]}\right]. \end{equation} \end{theorem} \begin{proof} See Section \ref{proof_workload}.
\hfill $\Box$ \end{proof} \section{Special cases}\label{sec:special} In this section, we use the results of the previous sections to obtain simple closed forms of the asymptotic results in some special cases. The first two corollaries concern the case $k=1$ in (\ref{Zjt}). The last corollary treats the case $k=2$, which is useful for finding the covariance of two types of inputs. \begin{cor}[Single type of input, exponential time delays]\label{nmoment1} The $r$-th moment of the discounted compound delayed process $Z_1(t)$ in (\ref{Zjt}) for $k=1$ with exponential time lags asymptotically satisfies \[ \mathbbm{E}[Z_1^r(t)] \sim \chi_{r}\, e^{- r\delta t},\qquad t\to\infty,\qquad r\in \mathbbm{N}, \] where \begin{equation}\label{CHIk11} \chi_{1}=\frac{\mathbbm{E}[X_1]}{\mathbbm{E}[\tau_1]} \bigg(\frac{1}{\mu+\delta}\bigg), \end{equation} and \begin{equation}\label{CHIk1n} \chi_{r}= \dfrac{1}{\mathbbm{E}[\tau_1]}\left(\mathbbm{E}[X_1^r] \frac{1}{\mu+ r\delta} + \sum_{\l=1}^{r-1} {{r}\choose{\l}} \mathbbm{E}\big[ X_1^{r-\ell}\big] \frac{\mu}{\mu+ (r-\l)\delta} \frac{D_\l(1)}{1-\L^\tau(\mu)}\right),\qquad r=2,3,\ldots, \end{equation} with $D_\l(1)$ recursively available from (\ref{D_n_i,expression}) and (\ref{D_n,expression}), respectively given by \[ D_1(j)=\mathbbm{E}[X_1]\frac{\mu}{(\mu+\delta)([j+1]\mu)} \,\L^\tau (j\mu) \] and \[ D_n(j)=\mathbbm{E}[X_1^n] \frac{\mu}{\mu+ n\delta}\frac{\L^\tau(j\mu)}{[j+1]\mu} +\sum_{\l=1}^{n-1} {{n}\choose{\l}} \mathbbm{E}\big[ X_1^{n-\ell}\big] \frac{\mu}{\mu+ (n-\l)\delta}\frac{\L^\tau(j\mu)}{1-\L^\tau([j+1]\mu)} D_\l([j+1]),\qquad n=2,3,\ldots. \] \end{cor} \begin{proof} With $n(1)=1$ and $n(i)=0$ for $i\neq 1$, together with $\eta_n=r$, $\eta_\l=\l$ and $|C_\l|=1$, the result follows from Proposition \ref{prop_asymptotics} and Theorem \ref{expser}. \hfill $\Box$ \end{proof} We remark that the form given in Theorem 3 of \cite{W15} was not suitable for deriving the asymptotic behavior of $Z_1(t)$.
A comment therein reveals only that this quantity is asymptotically close to zero. Hence Corollary \ref{nmoment1} is useful for calculating higher moments of $Z_1(t)$ of any order for large $t$ when time delays are exponentially distributed. For a general time lag distribution, a direct consequence of Proposition \ref{prop_asymptotics} when $k=1$ with (\ref{int_b_tildeA}) yields the result for the first moment in the following corollary. \begin{cor}[Single type of input, arbitrary time delays]\label{Coro_single_type_of_input} The mean of the discounted compound delayed process $Z_1(t)$ in (\ref{Zjt}) for $k=1$ with \textbf{arbitrary} time lag distribution asymptotically satisfies \[ \mathbbm{E}[Z_1(t)] \sim \chi_{1} \,e^{-\delta t},\qquad t\to\infty, \] where \begin{equation}\label{single_type_input_chi1} \chi_{1}=\frac{\mathbbm{E}[X_1]\mathbbm{E}[L]\tilde{w}_{1,1}(\delta)}{\mathbbm{E}[\tau_1]}, \end{equation} and $\tilde{w}_{1,1}(\delta)=\int_0^\infty e^{-\delta x}\overline{W}_1(x)dx/\mathbbm{E}[L_1]$. This is a generalization of Corollary 3 in \cite{W15}, in which it is assumed that $X_i= 1$ and $\delta=0$. \end{cor} \begin{remark}[Little's law revisited]\label{Little_revisit} {\normalfont Remark \ref{Rem_interpretation_queueing} as well as Expression (\ref{single_type_input_chi1}) give an interesting interpretation in a queueing context. Let us suppose here (without loss of generality) that $X_1=1$ (i.e. customers do not arrive in batches). Recalling that we defined $\tilde{Z}(t,\delta):=e^{\delta t} Z_1(t)$, (\ref{single_type_input_chi1}) then reads \begin{equation}\label{Litte_bis} \lim_{t\to\infty} \mathbbm{E}[\tilde{Z}(t,\delta)]=\chi_1=\frac{\mathbbm{E}[L]\tilde{w}_{1,1}(\delta)}{\mathbbm{E}[\tau_1]}.
\end{equation} When $\delta=0$, $\tilde{Z}(t,\delta)=\tilde{Z}(t,0)$ is the number of customers at time $t$ in the infinite server queue; in that case $\tilde{w}_{1,1}(\delta)=1$ and (\ref{Litte_bis}) is just a rephrasing of Little's law, which says that the limiting expected number of customers in the queue is equal to the arrival rate multiplied by the mean service time. When $\delta>0$, the interpretation comes from (\ref{Interpret_queue}): noticing that $\mathbbm{E}[L_1]\tilde{w}_{1,1}(\delta)=\Pr(L>E_\delta)/\delta$, (\ref{Litte_bis}) reads \begin{equation}\label{Little_generalized} \lim_{t\to\infty} \mathbbm{E}[\tilde{Z}(t,\delta)]=\frac{1}{\mathbbm{E}[\tau_1]}\frac{\Pr(L>E_\delta)}{\delta}=\frac{1}{\mathbbm{E}[\tau_1]}\Pr(L>E_\delta) \mathbbm{E}[E_\delta] \end{equation} which says that the limiting expected number of customers whose residual service time is no more than the horizon $E_\delta\sim {\cal E}(\delta)$ is equal to the arrival rate multiplied by the expected horizon time, multiplied by the proportion of customers whose service time exceeds this horizon $E_\delta$. So, (\ref{Little_generalized}) can be seen as a generalization of Little's law in the $G/G/\infty$ context.} \end{remark} Next, to compute the covariance of two different types of discounted compound delayed processes, the first joint moment of $X_i$ and $X_j$ for $i\neq j$ is needed. For notational convenience, let us denote an arbitrary pair of claims by $X_1$ and $X_2$. Suppose that $k=2$ and $n_1=n_2=1$ (i.e. $\l=(\l_1,\l_2) \in \{(0,0),(0,1),(1,0)\}$).
From (\ref{exp_tilde_b}) and (\ref{phi_l}), we have \begin{align} \tilde{b}_n(t) &=\sum_{(\l_1, \l_2)<(n_1,n_2)} {{n_1}\choose{\ell_1}}{{n_2}\choose{\ell_2}} \mathbbm{E}\bigg[ \prod_{j=1}^2 X_j^{n_j-\ell_j}\bigg]\varphi_\ell(t)\nonumber\\ &= \mathbbm{E} [X_1X_2] \varphi_{(0,0)}(t) + \mathbbm{E}[X_1] \varphi_{(0,1)}(t) + \mathbbm{E}[X_2] \varphi_{(1,0)}(t), \label{exp_tilde_b2} \end{align} where $\varphi_{(0,0)}(t)= \mathbbm{E}\big[ e^{ 2\delta (t-\tau_1)} \overline{\omega}_{\delta,1}(t-\tau_1)\overline{\omega}_{\delta,2}(t-\tau_1) .\ensuremath{\mathbbm{1}}_{[\tau_1<t]}\big]$ (because $\tilde{M}_{(0,0)}(t-\tau_1)=1$), $\varphi_{(0,1)}(t)= \mathbbm{E}\big[ e^{ \delta (t-\tau_1)}\tilde{M}_{(0,1)}(t-\tau_1) \overline{\omega}_{\delta,1}(t-\tau_1) .\ensuremath{\mathbbm{1}}_{[\tau_1<t]}\big]$, and $\varphi_{(1,0)}(t)= \mathbbm{E}\big[ e^{ \delta (t-\tau_1)}\tilde{M}_{(1,0)}(t-\tau_1) \overline{\omega}_{\delta,2}(t-\tau_1) .\ensuremath{\mathbbm{1}}_{[\tau_1<t]}\big]$. As shown previously, (\ref{exp_tilde_b2}) simplifies when $L_i$ for $i=1,2$ is exponentially distributed. In this case, the joint expectation and the covariance of $Z_1(t)$ and $Z_2(t)$ are presented in the following. \begin{cor}[Two types of inputs, exponential time delays] The joint mean of the two types of discounted compound delayed processes (\ref{Zjt}), where the time delay of the type-$i$ input $L_i$ for $i=1,2$ is ${\cal E}(\mu)$ distributed, is asymptotically given by \[ \mathbbm{E}[Z_1(t)Z_2(t)] \sim \chi_n e^{-2\delta t},\qquad t\to\infty, \] where \begin{equation}\label{CHI11} \chi_n=\frac{1}{\mathbbm{E}[\tau_1]}\frac{\mu}{(\mu+\delta)^2} \bigg[\frac{\mathbbm{E}[X_1X_2]}{2}+\frac{\mathbbm{E}[X_1]\mathbbm{E}[X_2]} {2} \frac{\L^\tau(\mu)}{1-\L^\tau(\mu)} \bigg].
\end{equation} Consequently, the covariance is given by \[ \mathbb{C}ov[Z_1(t),Z_2(t)] \sim \xi_n e^{-2\delta t},\qquad t\to\infty, \] where $\xi_n=\chi_n-\frac{\mathbbm{E}[X_1]\mathbbm{E}[X_2]}{\mathbbm{E}[\tau_1]^2(\mu+\delta)^2}$ with $\chi_n$ given in (\ref{CHI11}). \end{cor} \begin{proof} From Theorem \ref{expser} when $n=(n_1,n_2)=(1,1)$ (i.e. $|C_\l|=1$ when $\l=(\l_1,\l_2) \in \{(1,0),(0,1)\}$), we have \begin{equation}\label{chi11} \chi_n=\frac{1}{\mathbbm{E}[\tau_1]}\bigg[B_{(0,0),(1,1)}\frac{1}{2\mu}+B_{(1,0),(1,1)}\frac{D_{(1,0)}(1)}{1-\L^\tau(\mu)}+B_{(0,1),(1,1)}\frac{D_{(0,1)}(1)}{1-\L^\tau(\mu)}\bigg]. \end{equation} But from (\ref{Bln}), the $B$'s are given by \[ B_{(0,0),(1,1)}=\mathbbm{E}[X_1 X_2] \bigg(\frac{\mu}{\mu+\delta}\bigg)^2, ~~B_{(1,0),(1,1)}=\mathbbm{E}[X_2] \frac{\mu}{\mu+\delta},~~B_{(0,1),(1,1)}=\mathbbm{E}[X_1] \frac{\mu}{\mu+\delta}. \] Also, $D_{n(i)}(1)$ for $i=1,2$ is available from (\ref{D_n_i,expression}) as $ D_{n(i)}(1)=\mathbbm{E}[X_i]\frac{\mu}{(\mu+\delta)(2\mu)} \,\L^\tau (\mu). $ Combining the results above, (\ref{chi11}) is expressed as (\ref{CHI11}). \hfill $\Box$ \end{proof} \section{Proofs}\label{sec:Proofs} {{\bf Proof of Lemma \ref{lemma_density_upper_bound}}. When $\tau_1$ admits a density $f(\cdot)$, the density $t\mapsto u(t)$ of the renewal function $t\mapsto m(t)$ satisfies a renewal equation of the form \begin{equation} u(x)=f(x)+\int_0^x u(y) f(x-y)dy,\quad x\ge 0, \label{renewal_function_density} \end{equation} (see e.g. Equation (3.6) of \cite{Feller}). Since {\bf (A)} holds, by \cite[Lemma p.359]{Feller}, (\ref{renewal_function_density}) admits a unique solution, bounded on finite intervals, given by (\ref{Expression_density_renewal}). Also, the derivative $m'(t)=u(t)$ satisfies $\lim_{t\to\infty}m'(t)=1/\mathbbm{E}[\tau_1]$, see \cite[Theorem 2 p.367]{Feller}, and is thus bounded above by some constant $C$.
\hfill $\Box$} \subsection{Proof of Proposition \ref{prop_asymptotics}}\label{proof_prop_asymptotics} Since $\tilde{M}_n(t)$ satisfies the renewal equation in (\ref{renewal_M_tilde}), the asymptotic result in (\ref{asymp_tilde_M}) is a direct consequence of Blackwell's renewal theorem, provided that we prove that $\int_0^\infty \tilde{b}_n(y)dy$, or equivalently $\int_0^\infty \varphi_{\l,n}(y)dy$, is finite for all $n\in\mathbbm{N}^k$ and $\l<n$. We shall demonstrate this by induction on $n\in\mathbbm{N}^k$. First, consider the case $n=n(i)$ for some $i\in\{1,\ldots,k\}$. From Example 3 in \cite{W15} one has \begin{equation}\label{bnt} b_n(t)=\mathbbm{E}[X_i].\int^t_0 e^{-\delta y}\overline{\omega}_{\delta,i}(t-y)dF(y)=\mathbbm{E}[X_i].\, \overline{\omega}_{\delta, i}\star H_\delta(t), \end{equation} where $\overline{\omega}_{\delta,i}(t)$ is given in (\ref{bomega}) and $H_\delta(t)=\int^t_0 e^{-\delta y}dF(y)$. Since $\int^\infty_0 e^{\delta z} \overline{\omega}_{\delta,i}(z)dz=\int^\infty_0 e^{\delta z} \int^\infty_z e^{-\delta y}dW_i(y)dz=\delta^{-1}\{1-\mathbbm{E}[e^{-\delta L_i}]\}$, the following integration yields \begin{align} \int_0^\infty \tilde{b}_n(y)dy =\int^\infty_0 e^{\delta y} b_n(y)dy &=\mathbbm{E}[X_i].\,\int^\infty_0 e^{\delta y} \int^y_0 e^{-\delta x} \overline{\omega}_{\delta,i}(y-x)dF(x)dy \nonumber\\ &=\mathbbm{E}[X_i].\, \int^\infty_0 e^{-\delta x} \int^\infty_x e^{\delta y} \overline{\omega}_{\delta,i}(y-x) dy dF(x)\nonumber\\ &=\mathbbm{E}[X_i].\, \delta^{-1} \left\{1-\mathbbm{E}\left[ e^{-\delta L_i}\right]\right\} <\infty, \label{int_b_tilde} \end{align} or equivalently \begin{equation}\label{int_b_tildeA} \int_0^\infty \tilde{b}_n(y)dy = \mathbbm{E}[X_i]\mathbbm{E}[L_i]\int^\infty_0 e^{-\delta x} \frac{\overline{W}_i(x)}{\mathbbm{E}[L_i]}dx= \mathbbm{E}[X_i]\mathbbm{E}[L_i]\tilde{w}_{1,i}(\delta), \end{equation} where $w_{1,i}(x)$ is the equilibrium pdf of $L_i$, defined as $w_{1,i}(x)=\overline{W}_i(x)/\mathbbm{E}[L_i]$, and its Laplace
transform is $\tilde{w}_{1,i}(s)=\int^\infty_0 e^{-sx}w_{1,i}(x)dx$. Moreover, recall Equation (36) in \cite{W15}: \[ M_n(t) = \mathbbm{E}[X_i].\, \int_0^t e^{-\delta y}\overline{\omega}_{\delta, i}(t-y)dm(y)= \mathbbm{E}[X_i].\, e^{-\delta t}\int_0^t e^{\delta (t-y)}\overline{\omega}_{\delta, i}(t-y)dm(y). \] By Blackwell's theorem, it satisfies \[ M_n(t) \sim \frac{\mathbbm{E}[X_i]}{\mathbbm{E}[\tau_1]} \bigg[ \int_0^\infty e^{\delta y}\overline{\omega}_{\delta, i}(y)dy\bigg] e^{-\delta t},\qquad t\to\infty. \] In other words, one identifies \begin{equation*}\label{prop_asymptotics_expression_C_n_i} \chi_n=\chi_{n(i)}=\frac{\mathbbm{E}[X_i]}{\mathbbm{E}[\tau_1]}\bigg[ \int_0^\infty e^{\delta y}\overline{\omega}_{\delta, i}(y)dy\bigg]. \end{equation*} We now assume for all $\l<n$ that $\tilde{M}_\ell(t)\rightarrow \chi_\ell<+\infty$ as $t\to\infty$, with $\chi_\ell$ defined as in (\ref{asymp_tilde_M}). Hence $t\mapsto \tilde{M}_\ell(t)$ is bounded for all $\ell<n$ by the constant $K_\ell=\sup_{t\geq 0} \tilde{M}_\ell(t)$. A simple algebraic computation then yields the following upper bound for (\ref{phi_l}): \begin{eqnarray*} \varphi_\ell(t)&\le & K_\ell \mathbbm{E}\bigg[ e^{(\eta_n-\eta_\ell) \delta (t-\tau_1)} \prod_{j\in C_\ell} \overline{\omega}_{(n_j-\ell_j)\delta,j}(t-\tau_1). \ensuremath{\mathbbm{1}}_{[\tau_1<t]}\bigg]\\ &=& K_\ell \mathbbm{E}\bigg[ e^{(\eta_n-\eta_\ell) \delta (t-\tau_1)} \prod_{j\in C_\ell} \bigg[ \int_{t-\tau_1}^\infty e^{-(n_j-\ell_j)\delta y} dW_j(y)\bigg].\ensuremath{\mathbbm{1}}_{[\tau_1<t]}\bigg]\\ &\le & K_\ell \mathbbm{E}\bigg[ e^{(\eta_n-\eta_\ell) \delta (t-\tau_1)} \prod_{j\in C_\ell} \bigg[ e^{-(n_j-\ell_j)\delta (t-\tau_1)} \overline{W}_j(t-\tau_1) \bigg].\ensuremath{\mathbbm{1}}_{[\tau_1<t]}\bigg]\\ &=& K_\ell \mathbbm{E}\bigg[ \prod_{j\in C_\ell} \overline{W}_j(t-\tau_1) .\ensuremath{\mathbbm{1}}_{[\tau_1<t]}\bigg].
\end{eqnarray*} Integrating $\varphi_{\ell}(t)$ from $0$ to $\infty$ then yields \[ \int_0^\infty \varphi_\ell(t) dt \le K_\ell \mathbbm{E}\bigg[ \int_0^\infty \prod_{j\in C_\ell} \bigg[\overline{W}_j(t-\tau_1)\bigg].\ensuremath{\mathbbm{1}}_{[\tau_1<t]}\, dt\bigg]= K_\ell \int_0^\infty \prod_{j\in C_\ell} \overline{W}_j(t) \, dt, \] and, since survival functions are bounded by $1$, the product of the $|C_\ell|$ factors (where $|C_\ell|$ denotes the cardinality of the set $C_\ell$) is bounded by any single one of them, so that \begin{eqnarray} \int_0^\infty \varphi_\ell(t) dt &\le & K_\ell\, \min_{j\in C_\ell} \int_0^\infty \overline{W}_j(t) \, dt\nonumber\\ &= & K_\ell\, \min_{j\in C_\ell} \mathbbm{E}[L_j] <\infty. \label{bound_phi_l} \end{eqnarray} Hence from (\ref{exp_tilde_b}) we deduce that $\int_0^\infty \tilde{b}_n(y)dy$ is also finite, and the induction is complete. \subsection{Proof of Proposition \ref{tildeMnRn}}\label{proof_tildeMnRn} Since $m(t)$ admits $u(t)$ as a density, one has from (\ref{renewal_M_tilde}) that $\tilde{M}_n(t)=\int_0^t \tilde{b}_n(y)u(t-y)dy$, and in turn, from Lemma \ref{lemma_density_upper_bound}, we arrive at the upper bound \begin{equation} \tilde{M}_n(t)\le C\int_0^\infty \tilde{b}_n(y)dy. \label{upper_bound_M_tilde} \end{equation} Combining (\ref{exp_tilde_b}) and (\ref{bound_phi_l}) yields the upper bound \[ \int_0^\infty \tilde{b}_n(y)dy\ \le \ \sum_{\ell < n} {{n_1}\choose{\ell_1}}\cdots {{n_k}\choose{\ell_k}}\mathbbm{E}\bigg[ \prod_{j=1}^k X_j^{n_j-\ell_j}\bigg] K_\ell\, \min_{j\in C_\ell} \mathbbm{E}[L_j], \] where we recall that $K_\ell= \sup_{t\ge 0} \tilde{M}_\ell(t)$ (see the proof of Proposition \ref{prop_asymptotics}).
Thus the above inequality together with (\ref{asymp_tilde_M}) and (\ref{upper_bound_M_tilde}) yields (\ref{bound_Cn}) and (\ref{bound_M_tilde}) respectively, with $(R_n)_{n\in \mathbbm{N}^k}$ defined in (\ref{rec_Rn}), provided we initialize the value of $R_n$ for $n=n(i)$, $i\in\{1,\ldots,k\}$. This is done by again using the upper bound (\ref{upper_bound_M_tilde}) and remembering that $\int_0^\infty \tilde{b}_n(y)dy$ is obtained by (\ref{int_b_tilde}) when $n=n(i)$. \subsection{Proof of Proposition \ref{twosidedbounds}} \label{proof_twosidedbounds} Since $t\mapsto M_n(t)$ satisfies the renewal equation in Theorem 3 of \cite{W15}, one can write \begin{equation}\label{LMnt} M_n(t)=\int^t_0 b_n(t-y)dm_{\eta_n \delta}(y),\qquad t\geq 0, \end{equation} where $m_{\delta}(y)$ is the discounted renewal function defined as $\sum_{j\geq 0} H_{\delta}^{\star j}(y)$ with $H_\delta(t) =\int^t_0 e^{-\delta y} dF(y)$. From (\ref{renewal_function_density}), applying Theorem 3.1 of \cite{WCL01}, one has lower and upper bounds for the renewal density $u(t)$ as $\alpha_L(t)\leq u(t) \leq \alpha_U(t)$ for $t\geq 0$, where $ \alpha_L(t) =\inf_{y\in [0,t]} \alpha(y)$ and $\alpha_U(t) =\sup_{y\in [0,t]} \alpha(y)$ with $\alpha(y):=\frac{f(y)}{\overline{F}(y)}$. Hence, if $\tau_1$ is IFR, $y\mapsto \alpha(y)$ is nondecreasing, so that $\alpha_L(t)=\lim_{s\to 0+}\alpha(s)=\alpha(0+)=f(0+)$ (assuming that $F(0)=0$), which implies that \begin{equation}\label{Ldrd} \frac{d}{dt} m_{\eta_n \delta}(t) \geq \alpha_L(t) e^{-\eta_n \delta t}=f(0)e^{-\eta_n \delta t}. \end{equation} Substituting (\ref{Ldrd}) into (\ref{LMnt}) results in \begin{align}\label{LBMnt} M_n(t) &= \int^t_0 b_n(t-y) \frac{d}{dy} m_{\eta_n \delta}(y) \, dy \geq \int^t_0 b_n(t-y) dy\bigg[\inf_{y\in [0,t]} \frac{d}{dy} m_{\eta_n \delta}(y) \bigg]\nonumber\\ &\geq \int^t_0 b_n(t-y) dy [f(0) e^{-\eta_n \delta t}] =f(0)\int^t_0 b_n(y)dy. e^{-\eta_n \delta t}.
\end{align} We then finish proving this proposition by induction on $n\in\mathbbm{N}^k$. For $n=n(i)$, one has a closed-form expression for $b_n(y)$ as $\mathbbm{E}[X_i]. \overline{\omega}_{\delta,i}\star H_\delta(y)$ from Example 3 of \cite{W15}, hence the explicit expression for $h_n(t)$ is available in this case. Next, if the lower bound $M_\l(t) \geq h_\l(t) e^{-\eta_\l \delta t}$ is satisfied for all $\l < n$, then, using the expression for $t\mapsto b_n(t)$ given in Equation (34) of \cite{W15}, one obtains the lower bound \[ b_n(y) \geq \displaystyle \sum_{\ell < n} {{n_1}\choose{\ell_1}}\cdots{{n_k}\choose{\ell_k}} \mathbbm{E}\bigg[ \prod_{j=1}^k X_j^{n_j-\ell_j}\bigg]\int^y_0 e^{-\eta_n \delta z} h_\l(y-z)e^{-\eta_\ell \delta(y-z)} \prod_{j\in C_\ell} \overline{\omega}_{(n_j-\ell_j)\delta,j}(y-z)dF(z), \] and thus putting this lower bound into (\ref{LBMnt}) yields (\ref{rec_hnt}) when $n\in\mathbbm{N}^k\!\setminus \!\{n(i),\ i=1,\ldots,k\}$. On the other hand, if $\tau_1$ is DFR then $\alpha_U(x)=f(0)$, and the proof is similar. \subsection{Proof of Theorem \ref{theorem_expansion}}\label{proof_theorem_expansion} Substituting (\ref{vx}) into (\ref{solMnt}) for $dm(s)$ yields \[ \tilde{M}_n(t)=\frac{1}{\mathbbm{E}[\tau_1]}\int^t_0 \tilde{b}_n(t-s)ds+\int^t_0 \tilde{b}_n(t-s)dv(s). \] A change of variable $s:=t-s$ in the first integral and a subtraction of $\chi_n$ in (\ref{chind}) on both sides result in \begin{equation}\label{tMntminuscn} \tilde{M}_n(t)-\chi_n =-\frac{1}{\mathbbm{E}[\tau_1]}\int^\infty_t \tilde{b}_n(s)ds+\int^t_0 \tilde{b}_n(t-s)dv(s). \end{equation} Let \begin{equation}\label{I12t} I_1(t)=-\frac{1}{\mathbbm{E}[\tau_1]}\int^\infty_t \tilde{b}_n(s)ds,\qquad I_2(t)=\int^t_0 \tilde{b}_n(t-s)dv(s), \end{equation} so that (\ref{tMntminuscn}) is the sum of $I_1(t)$ and $I_2(t)$. In the sequel, we shall separately study the asymptotic behaviors of $I_1(t)$ and $I_2(t)$ as $t\rightarrow \infty$.
First it is convenient to introduce the following quantity and its asymptotic behaviour, as it will often be used in the later analysis: \begin{align}\label{useful} \mathbbm{E}[\mathbbm{1}_{\{ \tau_1 \geq t\}}e^{-\mu_i(t-\tau_1)}] & = e^{-\mu_i t}\int^\infty_t e^{\mu_i s}dF(s)\nonumber \\ &= e^{-\mu_i t} \int^\infty_t e^{(\mu_i-R)s} e^{Rs} dF(s) \leq e^{-\mu_i t} \int^\infty_t e^{(\mu_i-R)t} e^{Rs} dF(s)\nonumber\\ &\leq e^{-Rt} \int^\infty_t e^{Rs} dF(s)=o (e^{-R t}), \end{align} where the second-to-last inequality is due to the assumption that $\mu_i <R$ for all $i$, and the last result is due to $\mathbbm{E}(e^{R\tau_1})=\L^\tau(-R)<\infty$ by (\ref{light_tailed_cond}). We begin by analyzing $I_1(t)$ in (\ref{I12t}) as $t\rightarrow \infty$. From (\ref{b_tilde}) and (\ref{bnt}) with (\ref{bomega}) we may write \begin{equation}\label{I1t} \int^\infty_t \tilde{b}_n(z)dz= \mathbbm{E}[X_i]. \mathbbm{E}\Big[ \int^\infty_t e^{\delta(z-\tau_1)} \mathbbm{1}_{\{ \tau_1 <z\}} \int^\infty_{z-\tau_1} e^{-\delta s} dW_i(s)dz\Big]. \end{equation} When $L_i$ is ${\cal E}(\mu_i)$ distributed with $\mu_i>0$, the second integral in the above equation simplifies to \begin{equation}\label{abc} \int^\infty_{z-\tau_1} e^{-\delta s} dW_i(s)=\frac{\mu_i}{\mu_i+\delta} e^{-(\mu_i+\delta)(z-\tau_1)}. \end{equation} As $\mathbbm{1}_{\{ \tau_1 \geq t\}}+\mathbbm{1}_{\{ \tau_1 < t\}}=1$, inserting these two indicator functions in (\ref{I1t}) together with (\ref{abc}) results in \begin{equation}\label{I1ta} \int^\infty_t \tilde{b}_n(z)dz =\mathbbm{E}[X_i].\frac{\mu_i}{\mu_i+\delta}\,\mathbbm{E}\Big[ \left(\mathbbm{1}_{\{ \tau_1 < t\}}+\mathbbm{1}_{\{ \tau_1 \geq t\}}\right) \int^\infty_t \mathbbm{1}_{\{ \tau_1 <z\}} e^{-\mu_i (z-\tau_1)}dz\Big].
\end{equation} In the case $\tau_1 < t$, since $z>t$ implies $\tau_1 < z$, the above expectation reduces to \begin{align*} \mathbbm{E}\Big[ \mathbbm{1}_{\{ \tau_1 < t\}} \int^\infty_t \mathbbm{1}_{\{ \tau_1 <z\}} e^{-\mu_i(z-\tau_1)}dz\Big]&=\frac{1}{\mu_i}\mathbbm{E}[ \mathbbm{1}_{\{ \tau_1 < t\}} e^{-\mu_i(t-\tau_1)}]\nonumber\\ &=\frac{1}{\mu_i}\mathbbm{E}[ (1-\mathbbm{1}_{\{ \tau_1 \geq t\}}) e^{-\mu_i(t-\tau_1)}]\nonumber\\ &= \frac{1}{\mu_i} \left\{e^{-\mu_i t} \L^{\tau} (-\mu_i) - \mathbbm{E}[ \mathbbm{1}_{\{ \tau_1 \geq t\}} e^{-\mu_i(t-\tau_1)}] \right\}\nonumber\\ &=\frac{1}{\mu_i} e^{-\mu_i t} \L^{\tau} (-\mu_i)+o (e^{-R t}), \end{align*} where the last line is obtained by applying (\ref{useful}). On the other hand, when $\tau_1 \geq t$, \begin{align*} \mathbbm{E}\Big[ \mathbbm{1}_{\{ \tau_1 \geq t\}} \int^\infty_t \mathbbm{1}_{\{ \tau_1 <z\}} e^{-\mu_i(z-\tau_1)}dz\Big] &=\mathbbm{E}\Big[ \mathbbm{1}_{\{ \tau_1 \geq t\}} \int^\infty_{\tau_1} e^{-\mu_i (z-\tau_1)}dz\Big] \nonumber\\ &=\frac{1}{\mu_i} \Pr(\tau_1\geq t), \end{align*} and note that, using Chernoff's inequality, $\Pr(\tau_1 \geq t)\leq \mathbbm{E}(e^{R\tau_1}) e^{-Rt} = o(e^{-z_N t})$ because ${\mathbb E}(e^{R\tau_1})<\infty$ (by (\ref{light_tailed_cond})) and $\mathrm{Re}(z_N)<R$. Hence, combining the above results and using the fact that an $o(e^{-Rt})$ is a fortiori an $o(e^{-z_Nt})$, it follows that \begin{equation}\label{I1tb} I_1(t)= -\frac{1}{\mathbbm{E}[\tau_1]}\int^\infty_t \tilde{b}_n(s)ds= -\frac{\mathbbm{E}[X_i]}{\mathbbm{E}[\tau_1]}.\frac{1}{\mu_i+\delta} \L^{\tau}(-\mu_i)e^{-\mu_i t}+ o(e^{-z_N t}). \end{equation} We now turn to $I_2(t)$ in (\ref{I12t}). As $\tilde{b}_n(0)=0$, applying integration by parts for Stieltjes integrals on the right side of $I_2(t)$ yields \begin{equation}\label{I2t1} I_2(t)= \int^t_0 \tilde{b}_n(t-s)dv(s)=\tilde{b}_n(t) v(0^-)+\int^t_0 v(s) \tilde{b}_n'(t-s)ds.
\end{equation} But $v(0^-)=-\mathbbm{E}[\tau_1^2]/(2\mathbbm{E}[\tau_1]^2)$, and using reasoning similar to that in (\ref{useful}) we get \begin{equation}\label{btnto} \tilde{b}_n(t)=\mathbbm{E}[X_i].\frac{\mu_i}{\mu_i+\delta} \mathbbm{E}\big[ \mathbbm{1}_{\{ \tau_1 <t\}} e^{-\mu_i (t-\tau_1)}\big]=\mathbbm{E}[X_i].\frac{\mu_i}{\mu_i+\delta}\L^{\tau} (-\mu_i)e^{-\mu_i t} +o (e^{-R t}), \end{equation} i.e. \begin{equation}\label{bnv0} \tilde{b}_n(t)v(0^-)=-\frac{\mathbbm{E}[X_i]\mathbbm{E}[\tau_1^2]}{ 2\mathbbm{E}[\tau_1]^2}\frac{\mu_i}{\mu_i+\delta}\L^{\tau} (-\mu_i)e^{-\mu_i t} +o (e^{-z_N t}),\qquad t\rightarrow \infty. \end{equation} Also we have $\tilde{b}_n(t)=\mathbbm{E}[X_i].\frac{\mu_i}{\mu_i+\delta}e^{-\mu_i t}\int^t_0 e^{\mu_i s}dF(s)$, and then $ \tilde{b}_n'(t)=-\mu_i \tilde{b}_n(t)+\mathbbm{E}[X_i] \frac{\mu_i}{\mu_i+\delta}f(t). $ Thus \begin{align}\label{one} \int^t_0 e^{-z_k s} \tilde{b}_n'(t-s)ds &= e^{-z_k t}\int^t_0 e^{z_k s} \tilde{b}_n'(s)ds\nonumber\\ &= e^{-z_k t} \int^t_0 e^{z_k s} \Big[-\mu_i\tilde{b}_n(s)+\mathbbm{E}[X_i] \frac{\mu_i}{\mu_i+\delta}f(s) \Big]ds , \quad k=1,...,N. \end{align} For the first term of the above equation, it follows from (\ref{btnto}) that \begin{align}\label{two} e^{-z_k t} \int^t_0 e^{z_k s} \tilde{b}_n(s)ds &=\mathbbm{E}[X_i].\frac{\mu_i}{\mu_i+\delta} \Big(\frac{1}{z_k-\mu_i}\Big)\mathbbm{E}\Big[ \mathbbm{1}_{\{ \tau_1 < t\}} \{e^{-\mu_i(t-\tau_1)}- e^{-z_k(t-\tau_1)}\}\Big]\nonumber\\ &= \mathbbm{E}[X_i].\frac{\mu_i}{\mu_i+\delta} \Big(\frac{1}{z_k-\mu_i}\Big) \left\{e^{-\mu_i t} \L^{\tau} (-\mu_i)-e^{-z_k t} \L^{\tau}(-z_k)\right\}+o (e^{-R t}), \end{align} for $k=1,...,N$.
Next, for the second term, one has \begin{align}\label{three} e^{-z_k t}\int^t_0 e^{z_k s} f(s)ds &=e^{-z_k t} \L^\tau(-z_k)-e^{-z_k t}\int^\infty_t e^{z_k s} f(s)ds\nonumber\\ &= e^{-z_k t} \L^\tau(-z_k)+o(e^{-z_N t}) \end{align} since \begin{align*} \Big|e^{-z_k t} \int^\infty_t e^{z_k s} f(s)ds\Big| &= \Big|e^{-z_k t} \int^\infty_t e^{(z_k-R) s} e^{Rs}f(s)ds\Big| \leq e^{-\mathrm{Re}(z_k) t} \int^\infty_t e^{(\mathrm{Re}(z_k)-R) s} e^{Rs}f(s)ds\\ &\leq e^{-\mathrm{Re}(z_k) t}e^{(\mathrm{Re}(z_k)-R) t}\int^\infty_t e^{Rs}f(s)ds \le e^{-Rt} \int^\infty_0 e^{Rs}f(s)ds=o(e^{-z_N t}). \end{align*} Then using (\ref{vxexpan}) and (\ref{one}) with (\ref{two}) and (\ref{three}), and since an $o(e^{-Rt})$ is a fortiori an $o(e^{-z_Nt})$, the second term of (\ref{I2t1}) (except for the term involving $o (e^{-z_N s})$ in $v(s)$ in (\ref{vxexpan})) is now given by \begin{align}\label{intvbp} &\int^t_0 [v(s)-o (e^{-z_N s})] \tilde{b}_n'(t-s)ds \nonumber\\ &~~= \mathbbm{E}[X_i].\frac{\mu_i}{\mu_i+\delta}\left[ \sum^N_{k=1}\gamma_k \Big(\frac{\mu_i}{z_k-\mu_i}\Big) \left\{e^{-z_k t} \L^{\tau}(-z_k)-e^{-\mu_i t} \L^{\tau} (-\mu_i)\right\}+\gamma_k e^{-z_k t} \L^\tau(-z_k)\right]+o(e^{-z_N t})\nonumber\\ &~~=\mathbbm{E}[X_i].\frac{\mu_i}{\mu_i+\delta}\left[ \sum^N_{k=1}\gamma_k \left(\frac{z_k}{z_k-\mu_i}e^{-z_k t} \L^{\tau}(-z_k)-\frac{\mu_i}{z_k-\mu_i}e^{-\mu_i t} \L^{\tau} (-\mu_i)\right)\right]+o(e^{-z_N t}). \end{align} Recall that the function $\eta(\cdot)$ is defined by (\ref{function_eta}). Then, putting the expression for $\tilde{b}_n'(t)$ into the integral, it follows that \begin{align}\label{intobp} \int^t_0 o(e^{-z_N s})\tilde{b}_n'(t-s)ds &=\int^t_0 \eta(s)e^{-z_N s}\tilde{b}_n'(t-s)ds \nonumber\\ &=\int^t_0 \eta(s)e^{-z_N s} \Big[ -\mu_i \tilde{b}_n(t-s)+\mathbbm{E}[X_i] \frac{\mu_i}{\mu_i+\delta}f(t-s)\Big] ds.
\end{align} We start by considering $\int^t_0 \eta(s)e^{-z_N s} f(t-s)ds$, which can be written as \[ \int^t_0 \eta(t-s)e^{-z_N (t-s)} f(s)ds=e^{-z_N t} \int^\infty_0 \eta(t-s)\mathbbm{1}_{\{ 0<s< t\}}e^{ z_N s} f(s)ds. \] The fact that $\int^\infty_0 |e^{z_N s} f(s)|ds =\int^\infty_0 e^{\mathrm{Re}(z_N)s} f(s)ds$ is convergent implies, by dominated convergence, \[ \int^\infty_0 \eta(t-s)\mathbbm{1}_{\{ 0<s< t\}}e^{ z_N s} f(s)ds \longrightarrow 0, \qquad t\rightarrow \infty. \] Consequently, \begin{equation}\label{intetaf} \int^t_0 \eta(s)e^{-z_N s} f(t-s)ds=o(e^{-z_N t}),\qquad t\rightarrow \infty. \end{equation} Now we turn our attention to the first term of (\ref{intobp}), involving $\int^t_0 \eta(s)e^{-z_N s}\tilde{b}_n(t-s)ds$. Writing from (\ref{bnt}) (see also (\ref{btnto})) $$ \tilde{b}_n(t)=\mathbbm{E}[X_i].\frac{\mu_i}{\mu_i+\delta} \mathbbm{E}\big[ \mathbbm{1}_{\{ \tau_1 <t\}} e^{-\mu_i (t-\tau_1)}\big]= \mathbbm{E}[X_i].\frac{\mu_i}{\mu_i+\delta} {\cal L}^\tau(-\mu_i) e^{-\mu_i t}- \mathbbm{E}[X_i].\frac{\mu_i}{\mu_i+\delta} \mathbbm{E}\big[ \mathbbm{1}_{\{ \tau_1 \ge t\}} e^{-\mu_i (t-\tau_1)}\big], $$ we then split $\int^t_0 \eta(s)e^{-z_N s}\tilde{b}_n(t-s)ds$ into two parts, namely $\mathbbm{E}[X_i].\frac{\mu_i}{\mu_i+\delta} {\cal L}^\tau(-\mu_i)\int_0^t \eta(s) e^{-z_N s} e^{-\mu_i(t-s)}ds$ and $\mathbbm{E}[X_i].\frac{\mu_i}{\mu_i+\delta} \int_0^t \eta(s) e^{-z_N s} \mathbbm{E}\big[ \mathbbm{1}_{\{ \tau_1 \ge t-s\}} e^{-\mu_i ((t-s)-\tau_1)}\big] ds$.
The first term is expressed as \begin{align}\label{eta_expansion1} &\mathbbm{E}[X_i].\frac{\mu_i}{\mu_i+\delta} {\cal L}^\tau(-\mu_i)\int_0^t \eta(s) e^{-z_N s} e^{-\mu_i(t-s)}ds \nonumber\\ &~~=\mathbbm{E}[X_i].\frac{\mu_i}{\mu_i+\delta} {\cal L}^\tau(-\mu_i)\left[\int_0^\infty \eta(s) e^{-z_N s} e^{\mu_i s}ds\right] e^{-\mu_i t}- \mathbbm{E}[X_i].\frac{\mu_i}{\mu_i+\delta} {\cal L}^\tau(-\mu_i)\left[\int_t^\infty \eta(s) e^{-z_N s} e^{\mu_i s}ds\right] e^{-\mu_i t}\nonumber\\ &~~= \mathbbm{E}[X_i].\frac{\mu_i}{\mu_i+\delta} {\cal L}^\tau(-\mu_i)\left[\int_0^\infty \eta(s) e^{-z_N s} e^{\mu_i s}ds\right] e^{-\mu_i t} + o(e^{-z_N t}), \end{align} where the latter $o(e^{-z_N t})$ term is again justified as in (\ref{useful}). Now (\ref{useful}) implies that the second term satisfies, by dominated convergence, \begin{align}\label{eta_expansion2} &\mathbbm{E}[X_i].\frac{\mu_i}{\mu_i+\delta} \int_0^t \eta(s) e^{-z_N s} \mathbbm{E}\big[ \mathbbm{1}_{\{ \tau_1 \ge t-s\}} e^{-\mu_i ((t-s)-\tau_1)}\big] ds\nonumber\\ &~~=\mathbbm{E}[X_i].\frac{\mu_i}{\mu_i+\delta} e^{-z_N t}\int_0^t \eta(t-s) e^{z_N s} \mathbbm{E}\big[ \mathbbm{1}_{\{ \tau_1 \ge s\}} e^{-\mu_i (s-\tau_1)}\big] ds=o(e^{-z_N t}). \end{align} Gathering (\ref{eta_expansion1}) and (\ref{eta_expansion2}) thus yields \begin{equation}\label{intetab} \int^t_0 \eta(s)e^{-z_N s}\tilde{b}_n(t-s)ds = \mathbbm{E}[X_i].\frac{\mu_i}{\mu_i+\delta}\L^{\tau} (-\mu_i) \Big[\int^\infty_0 \eta(s)e^{(\mu_i-z_N)s} ds \Big]e^{-\mu_i t}+o(e^{-z_N t}).
\end{equation} Then from (\ref{intvbp}) and (\ref{intobp}) with (\ref{intetaf}) and (\ref{intetab}) we get \begin{align*} \int^t_0 v(s) \tilde{b}_n'(t-s)ds&= \mathbbm{E}[X_i].\frac{\mu_i}{\mu_i+\delta}\left[ \sum^N_{k=1}\gamma_k \left(\frac{z_k}{z_k-\mu_i}e^{-z_k t} \L^{\tau}(-z_k)-\frac{\mu_i}{z_k-\mu_i}e^{-\mu_i t} \L^{\tau} (-\mu_i)\right)\right]\\ &~~-\mathbbm{E}[X_i].\frac{\mu_i^{2}}{\mu_i+\delta}\L^{\tau} (-\mu_i) \Big[\int^{ \infty}_0 \eta(s)e^{(\mu_i-z_N)s} ds \Big]e^{-\mu_i t}+o(e^{-z_N t}),\qquad t\rightarrow \infty. \end{align*} Hence the above result together with (\ref{bnv0}) allows us to obtain an expression for (\ref{I2t1}) as \begin{equation}\label{I2t} I_2(t)=A_i e^{-\mu_i t}+\sum^N_{k=1} B_{k,i}e^{-z_k t} +o(e^{-z_N t}), \end{equation} where $A_i$ and $B_{k,i}$ for $k=1,...,N$ are defined by (\ref{Ai}) and (\ref{Bki}). As a result, combining (\ref{I1tb}) and (\ref{I2t}) leads to the theorem. \subsection{Proof of Theorem \ref{theo_conv_distrib}}\label{proof_theo_conv_distrib} Let $P(x_1,\ldots,x_k)=\sum_{\eta_n\le K} a_n x_1^{n_1}\cdots x_k^{n_k}$ be a nonnegative polynomial of degree $K$ in the variables $x_1,\ldots,x_k$. Then one has $\sum_{\eta_n\le K} a_n \mathbbm{E}\left[\prod_{i=1}^k (e^{\delta t}Z_i(t))^{n_i}\right]=\mathbbm{E}\left[P(e^{\delta t}Z_1(t),\ldots,e^{\delta t}Z_k(t))\right]\ge 0$ for all $t$, which, from Proposition \ref{prop_asymptotics}, yields $\sum_{\eta_n\le K} a_n\chi_n \ge 0$ by letting $t\to\infty$. By the Riesz-Haviland theorem (see \cite{Haviland}), we deduce that the sequence $(\chi_n)_{n\in \mathbbm{N}^k}$ is a sequence of moments associated to some random variable $\mathcal{Z}_\infty=(\mathcal{Z}_{\infty, 1},\ldots,\mathcal{Z}_{\infty,k})$. Next we shall show that the moment generating function (mgf) of $e^{\delta t}Z(t)$ exists and converges to the mgf of $\mathcal{Z}_\infty$ as $t \to \infty$.
To this end, we first introduce the mgfs of $e^{\delta t}Z(t)$ and of $\mathcal{Z}_\infty$, denoted by $\tilde{\varphi}_t(q)$ and $\varphi_{\mathcal{Z}_\infty} (q)$ respectively, in the following: \begin{equation}\label{mgfedtZt} \tilde{\varphi}_t(q):= \mathbbm{E}\left[ e^{<q,e^{\delta t}Z(t)>}\right]=\sum_{n\in \mathbbm{N}^k} \prod_{i=1}^k \frac{q_i^{n_i}}{n_i!} \,\tilde{M}_n(t),\qquad t\ge 0, \end{equation} and \begin{equation}\label{mgfLinf} \varphi_{\mathcal{Z}_\infty}(q)= \mathbbm{E}\left[ e^{<q,\mathcal{Z}_\infty>}\right]=\sum_{n\in \mathbbm{N}^k} \prod_{i=1}^k \frac{q_i^{n_i}}{n_i!} \,\chi_n, \end{equation} for $q=(q_1,\ldots,q_k)$ in a neighborhood of $(0,\ldots,0)$. To apply the dominated convergence theorem to (\ref{mgfedtZt}), we need to show that $\tilde{M}_n(t)$ is bounded as follows: \begin{equation}\label{upper_bound_conv_distrib} \tilde{M}_n(t)\le U_n:=(Mm_L e^k)^{\eta_n} \prod_{i=1}^k n_i!,\quad \forall n\in\mathbbm{N}^k,\quad \forall t\ge 0, \end{equation} where $m_L:=\max\left(1, C.\max_{i=1,\ldots,k}\mathbbm{E} [L_i]\right)$, so that, since \[ \sum_{n\in \mathbbm{N}^k} \prod_{i=1}^k \frac{|q_i|^{n_i}}{n_i!}\, U_n= \prod_{i=1}^k \left( \sum_{n_i=0}^\infty |q_iMm_L e^k|^{n_i}\right) \] converges for \[ q=(q_1,\ldots,q_k)\in J:=\left(-\frac{1}{Mm_L e^k}, \frac{1}{Mm_L e^k}\right)^k, \] the dominated convergence theorem yields $\tilde{\varphi}_t(q)\longrightarrow \varphi_{\mathcal{Z}_\infty}(q)$ when $t\to\infty$ for $q\in J$. We now prove (\ref{upper_bound_conv_distrib}) by induction. Recall that in Proposition \ref{tildeMnRn} we have already proved $\tilde{M}_n(t)\leq R_n$, where $R_n$ is defined in (\ref{rec_Rn}). Thus we shall essentially show that $R_n\le U_n$ for all $n\in \mathbbm{N}^k$, so that (\ref{upper_bound_conv_distrib}) holds. We start with $n=n(i)$ for $i\in \{1,\ldots,k\}$.
In this case, upper bound (\ref{bound_M_tilde}) with (\ref{rec_Rn}) yields $$ R_{n(i)}= C \mathbbm{E}[X_i] \delta^{-1} \left\{1-\mathbbm{E}\left[ e^{-\delta L_i}\right]\right\}\le CM \mathbbm{E}[L_i]\le M m_L\le M m_L e^k=U_{n(i)}, $$ where the first inequality is due to $ \delta^{-1} \left\{1-\mathbbm{E}\left[ e^{-\delta L_i}\right]\right\}=\int^\infty_0 e^{-\delta x}\overline{W}_i(x)dx\leq \int^\infty_0 \overline{W}_i(x)dx$. Let us now suppose that $n$ is such that $R_{\ell}\le U_{\ell}$ for all ${\ell}<n$. Using (\ref{rec_Rn}) as well as the induction assumption we get \begin{equation}\label{upper_bound_conv_distrib2} R_n\le C \sum_{\ell < n} {{n_1}\choose{\ell_1}}\cdots{{n_k}\choose{\ell_k}} \mathbbm{E}\bigg[ \prod_{j=1}^k X_j^{n_j-\ell_j}\bigg] \min_{i\in C_\ell} \mathbbm{E}[L_i].\, U_{\ell}\le m_L \sum_{\ell < n} {{n_1}\choose{\ell_1}}\cdots{{n_k}\choose{\ell_k}} M^{\eta_n - \eta_{\ell}}.\, U_{\l}. \end{equation} But since $\l<n$ implies $\eta_n - \eta_{\ell}\ge 1$, and since $m_L$ and $e$ are larger than $1$, the following inequality holds: $$ m_L M^{\eta_n - \eta_{\ell}}\le m_L^{\eta_n - \eta_{\ell}} M^{\eta_n - \eta_{\ell}} (e^k)^{\eta_n - \eta_{\ell}-1}=(m_LM e^k)^{\eta_n - \eta_{\ell}} e^{-k}. $$ Substituting the above inequality and $U_\ell=(Mm_L e^k)^{\eta_\ell} \prod_{i=1}^k \ell_i!$ into (\ref{upper_bound_conv_distrib2}), the right-hand side of (\ref{upper_bound_conv_distrib2}) is now bounded by \begin{eqnarray} R_n & \le & \sum_{\ell < n} {{n_1}\choose{\ell_1}}\cdots{{n_k}\choose{\ell_k}} (m_LM e^k)^{\eta_n - \eta_{\ell}} e^{-k} (Mm_L e^k)^{\eta_\l} \prod_{i=1}^k \l_i!\nonumber\\ &= & (Mm_L e^k)^{\eta_n}\bigg[ \sum_{\ell < n} \prod_{i=1}^k \frac{n_i!}{(n_i-\l_i)!}\bigg] e^{-k}=(Mm_L e^k)^{\eta_n}\bigg[\prod_{i=1}^k n_i!\bigg]\bigg[ \sum_{\ell < n} \prod_{i=1}^k \frac{1}{(n_i-\l_i)!}\bigg] e^{-k}\nonumber\\ &=& U_n \bigg[ \sum_{\ell < n} \prod_{i=1}^k \frac{1}{(n_i-\l_i)!}\bigg] e^{-k}.
\label{upper_bound_conv_distrib3} \end{eqnarray} We then conclude by noticing that \begin{eqnarray*} \sum_{\ell < n} \prod_{i=1}^k \frac{1}{(n_i-\l_i)!}&\le &\sum_{\ell_i\le n_i,\, i\in\{1,\ldots,k\}} \prod_{i=1}^k \frac{1}{(n_i-\l_i)!} \\ &= &\prod_{i=1}^k \bigg[ \sum_{\l_i=0}^{n_i}\frac{1}{(n_i-\l_i)!}\bigg]= \prod_{i=1}^k \bigg[ \sum_{j=0}^{n_i}\frac{1}{j!}\bigg]\le \prod_{i=1}^k \bigg[ \sum_{j=0}^{\infty}\frac{1}{j!}\bigg]=e^k, \end{eqnarray*} which, plugged into (\ref{upper_bound_conv_distrib3}), yields $R_n\le U_n$. Therefore, by the dominated convergence theorem, $\tilde{\varphi}_t(q)$ in (\ref{mgfedtZt}) converges to $\varphi_{\mathcal{Z}_\infty}(q)$ in (\ref{mgfLinf}) as $t\rightarrow \infty$. Now it remains to show that the convergence in mgf implies the convergence in distribution. Since $\tilde{M}_n(t)$ and $\chi_n$ are bounded as shown in Proposition \ref{tildeMnRn}, the mgfs of $e^{\delta t}Z(t)$ in (\ref{mgfedtZt}) and $\mathcal{Z}_\infty$ in (\ref{mgfLinf}) exist. Also, we have shown that $\tilde{\varphi}_t(q)\longrightarrow \varphi_{\mathcal{Z}_\infty}(q)$ when $t\to\infty$ for $q\in J$, a neighborhood of $(0,\ldots,0)$. Hence, $e^{\delta t}Z(t)$ converges to $\mathcal{Z}_\infty$ in distribution. \subsection{Proof of Lemma \ref{lemma_recursive}}\label{proof_lemma_recursive} When $n=n(i)$ for some $i\in\{1,\ldots,k\}$, we may obtain an expression of $\L^b_{n}(s)$ by using the expression of $b_n(t)$ in Example 3 in \cite{W15}, and applying a similar idea to the one used in (\ref{int_b_tilde}). We now turn to proving (\ref{recursion_LT_b}). Since the $L_j$'s are all ${\cal E}(\mu)$ distributed, $\varphi_\l(t)=\varphi_{\l,n}(t)$ given by (\ref{phi_l}) simplifies to $$ \varphi_\ell(t)= \mathbbm{E}\bigg[ \tilde{M}_\ell(t-\tau_1)\bigg\{\prod_{j\in C_\ell} \frac{\mu}{\mu+(n_j-\l_j)\delta} \bigg\} e^{-|C_\l| \mu(t-\tau_1)} .\ensuremath{\mathbbm{1}}_{[\tau_1<t]}\bigg].
$$ Then using Fubini's theorem to interchange the expectation with the integration, as well as the change of variable $t\mapsto t-\tau_1$, it follows that \begin{eqnarray} \int_0^\infty e^{-st}\varphi_\l(t)dt &=& \bigg[\prod_{j\in C_\ell} \frac{\mu}{\mu+(n_j-\l_j)\delta} \bigg]\mathbbm{E}\left[ \int_{\tau_1}^\infty e^{-st} \tilde{M}_\ell(t-\tau_1)e^{-|C_\l| \mu(t-\tau_1)} dt \right]\nonumber\\ &=&\bigg[\prod_{j\in C_\ell} \frac{\mu}{\mu+(n_j-\l_j)\delta} \bigg]\mathbbm{E}\left[ e^{-s\tau_1} \int_{0}^\infty e^{-st} \tilde{M}_\ell(t)e^{-|C_\l| \mu t} dt\right]\nonumber\\ &=& \bigg[\prod_{j\in C_\ell} \frac{\mu}{\mu+(n_j-\l_j)\delta} \bigg]\L^\tau (s) \L^M_\l(s+|C_\l| \mu).\label{compute_LT_recursive} \end{eqnarray} If $\ell=\mathbf{0}$, where $\mathbf{0}$ is the zero vector in $\mathbbm{N}^k$, then $\tilde{M}_\ell(t)=1$, hence $ \L^M_\mathbf{0}(s+|C_\mathbf{0}| \mu)=\frac{1}{s+|C_\mathbf{0}| \mu}$, and we get $$ \int_0^\infty e^{-st}\varphi_\mathbf{0}(t)dt =\bigg[\prod_{j\in C_\mathbf{0}} \frac{\mu}{\mu+n_j\delta} \bigg] \frac{\L^\tau (s)}{s+ |C_\mathbf{0}|\mu}=\bigg[\prod_{j=1}^k \frac{\mu}{\mu+n_j\delta} \bigg] \frac{\L^\tau (s)}{s+ |C_\mathbf{0}|\mu}. $$ In the case $\ell>\mathbf{0}$, let us now observe that taking Laplace transforms in the renewal equation (\ref{renewal_M_tilde}) satisfied by $\tilde{M}_n(.)$ yields the following classical relation between $\L^M_n(s)$ and $\L^b_n(s)$: $$ \L^M_n(s)=\dfrac{\L^b_n(s)}{1- \L^\tau (s)},\quad \forall s> 0,\quad n\in\mathbbm{N}^k\!\setminus \!\{n(i),\ i=1,\ldots,k\}, $$ so that (\ref{compute_LT_recursive}) leads to $$ \int_0^\infty e^{-st}\varphi_\l(t)dt=\bigg[\prod_{j\in C_\ell} \frac{\mu}{\mu+(n_j-\l_j)\delta} \bigg]\frac{\L^\tau (s)}{1-\L^\tau (s+|C_\l| \mu)}\, \L^b_\l (s+|C_\l| \mu). $$ With the above result, the Laplace transform of (\ref{exp_tilde_b}) becomes (\ref{recursion_LT_b}). \subsection{Proof of Theorem \ref{workload}}\label{proof_workload} {\bf Proof of Lemma \ref{Lemma_analytic}}.
We shall start by proving the properties for $\tilde{M}_{1}(t,\delta)$, as those for $\tilde{M}_{2}(t,\delta)$ are a bit more technical but follow in a similar way. Let us write \begin{equation}\label{analytic_1} \tilde{M}_{1}(t,\delta)=\sum_{i=1}^\infty \psi_i(t,\delta),\quad \psi_i(t,\delta):= {\mathbb E}[e^{-\delta(T_i+L_i-t)}\mathbbm{1}_{\{ T_i \le t< T_i+L_i\}}],\qquad i\in\mathbbm{N} . \end{equation} We first prove that $\psi_i(t,\delta)$ is well defined and analytic on the set $D_a$. Indeed, the inequality \begin{equation}\label{analytic_2} \left| \delta^j \frac{(-1)^j}{j!}[T_i+L_i-t]^j \mathbbm{1}_{\{ T_i \le t< T_i+L_i\}}\right|\le a^j \frac{1}{j!} L_i^j,\quad j\in \mathbbm{N},\quad \delta \in D_a, \end{equation} coupled with the fact that $\sum_{j=0}^\infty {\mathbb E}\left( a^j \frac{1}{j!} L_i^j\right)= {\mathbb E}[e^{a L}]=\frac{\mu}{\mu-a}<+\infty$ by (\ref{assumption_light_tailed}), yields that \[\sum_{j=0}^\infty \delta^j {\mathbb E} \left[ \frac{(-1)^j}{j!}[T_i+L_i-t]^j \mathbbm{1}_{\{ T_i \le t< T_i+L_i\}}\right] \] is a convergent series for $\delta\in D_a$, and that $\delta\mapsto \psi_i(t,\delta)$ is analytic on that set for all $t\ge 0$ and admits the above power series expansion in $\delta$. Now one checks easily, by independence of $L_i$ and $T_i$, \begin{equation}\label{analytic_3} \psi_i(t,\delta)\le {\mathbb E}[e^{a L_i}\mathbbm{1}_{\{T_i\le t\}}]= {\mathbb E}[e^{a L}]\Pr[T_i\le t],\quad \forall \delta\in D_a, \end{equation} with $\sum_{i=1}^\infty {\mathbb E}[e^{a L}]\Pr[T_i\le t] = {\mathbb E}[e^{a L}] m(t)<+\infty$. This yields that for all $t\ge 0$, the series $\sum_{i=1}^\infty\psi_i(t,\delta)$ converges normally on $\delta \in D_a$. Thus, for all $t\ge 0$, $\delta\mapsto \tilde{M}_{1}(t,\delta)$ is analytic as the uniform limit of a sequence of analytic functions on the compact set $D_a$. We then move on to $\tilde{M}_{2}(t,\delta)$.
Similarly to (\ref{analytic_1}), one has \begin{equation*}\label{analytic_4} \tilde{M}_{2}(t,\delta)=\sum_{r,j=1}^\infty \pi_{r,j}(t,\delta),\quad \pi_{r,j}(t,\delta):={\mathbb E}[e^{-\delta(T_r+L_r-t)}\mathbbm{1}_{\{ T_r \le t< T_r+L_r\}}e^{-\delta(T_j+L_j-t)}\mathbbm{1}_{\{ T_j \le t< T_j+L_j\}}]. \end{equation*} The analog of (\ref{analytic_2}) is \begin{multline*}\label{analytic_5} \left| \delta^p \frac{(-1)^p}{p!}[(T_r+L_r-t)+(T_j+L_j-t)]^p\, \mathbbm{1}_{\{ T_r \le t< T_r+L_r\}}\mathbbm{1}_{\{ T_j \le t< T_j+L_j\}}\right|\le (a/2)^p \frac{1}{p!} [L_r+L_j]^p,\\ r\in \mathbbm{N},\ j\in \mathbbm{N},\quad \delta\in D_{a/2}, \end{multline*} with ${\mathbb E}\left[\sum_{p=0}^\infty (a/2)^p \frac{1}{p!} [L_r+L_j]^p\right] = {\mathbb E}\left( e^{a(L_r+L_j)/2}\right) \le {\mathbb E}\left( e^{aL}\right)$ (by Jensen's inequality), a finite quantity, so that $\delta \mapsto \pi_{r,j}(t,\delta)$ is analytic on $D_{a/2}$. The analog of (\ref{analytic_3}) is \begin{equation}\label{analytic_6} \pi_{r,j}(t,\delta)\le {\mathbb E}\left[e^{a (L_r+L_j)/2}\mathbbm{1}_{\{T_r\le t\}}\mathbbm{1}_{\{T_j\le t\}}\right],\quad r\in \mathbbm{N},\ j\in \mathbbm{N},\ \delta\in D_{a/2}, \end{equation} with, again thanks to Jensen's inequality as well as independence of $(L_r,L_j)$ from $(T_r,T_j)$, $$\sum_{r,j=1}^\infty {\mathbb E}[e^{a (L_r+L_j)/2}\mathbbm{1}_{\{T_r\le t\}}\mathbbm{1}_{\{T_j\le t\}}]\le {\mathbb E}\left( e^{aL}\right)\sum_{r,j=1}^\infty {\mathbb E}\left[ \mathbbm{1}_{\{T_r\le t\}}\mathbbm{1}_{\{T_j\le t\}}\right]={\mathbb E}\left( e^{aL}\right) {\mathbb E}\left( N(t)^2\right)<+\infty.$$ Hence, from (\ref{analytic_6}), $ \sum_{r,j=1}^\infty \pi_{r,j}(t,\delta)= \tilde{M}_{2}(t,\delta)$ converges normally on $\delta\in D_{a/2}$, and is analytic on this set by the same argument as for $\delta\mapsto\tilde{M}_{1}(t,\delta)$. Note that we used that $N(t)$ admits a second moment, a fact that holds because ${\mathbb E}[\tau_1^2]<+\infty$, see e.g.
\cite[Chapter V.6]{APQ}.\hfill $\Box$ Prior to proving Lemma \ref{lemma_uniform_convergence}, we prove a few upper bounds concerning $\tilde{M}_{1}(t,\delta)$. First, we note that differentiating $\tilde{b}_{1}(t)=\frac{\mu}{\mu+\delta}e^{-\mu t}\int^t_0 e^{\mu s}dF(s)$ yields $\tilde{b}_{1}'(t)=-\mu \tilde{b}_{1}(t)+ \frac{\mu}{\mu+\delta}f(t)$. Besides, since {\bf (A)} holds, a density $u(t)=m'(t)$ of the renewal function exists and is bounded above by $C>0$ thanks to Lemma \ref{lemma_density_upper_bound}. Both facts entail, upon differentiating (\ref{solMnt}), the following \[ \left|\tilde{M}_{1}'(t)\right|= \left|\int^t_0 \tilde{b}_{1}'(t-s)m'(s)ds + \tilde{b}_{1}(0)m'(t)\right|=\left|\int^t_0 \tilde{b}_{1}'(t-s)m'(s)ds \right|, \] since $\tilde{b}_{1}(0)=0$. Then one finds \begin{eqnarray} \left|\tilde{M}_{1}'(t)\right| &\le & \mu \int^t_0 \left|\tilde{b}_{1}(t-s)m'(s)\right| ds + \left|\frac{\mu}{\mu+\delta}\right| \int^t_0 \left|f(t-s)m'(s)\right| ds \nonumber\\ &\le & \mu C \int^\infty_0 \left|\tilde{b}_{1}(s) \right|ds + \left|\frac{\mu}{\mu+\delta}\right| C \int^\infty_0 f(s)ds \nonumber\\ &\le & C \left[ \left|\frac{\mu}{\mu+\delta}\right| +\left|\frac{\mu}{\mu+\delta}\right|\right]\le \frac{2C\mu}{\mu-|\delta|}, \label{bound_tildeM_n(1)} \end{eqnarray} where the last line is due to the fact that $f(\cdot)$ is a density, and $\int^\infty_0 |\tilde{b}_{1}(s)|ds\le \frac{1}{\mu}\big|\frac{\mu}{\mu+\delta}\big|$ from (\ref{int_b_tilde}). \noindent{\bf Proof of Lemma \ref{lemma_uniform_convergence}.} We again start with $\tilde{M}_{1}(t,\delta)$. The key is to use the expansion of $\tilde{M}_{1}(t)=\tilde{M}_{1}(t,\delta)$ in Theorem \ref{theorem_expansion}, and particularly the dependence of this expansion on $\delta$ as discussed in Remark \ref{rem_dep_delta}.
Indeed, an immediate consequence of (\ref{dep_delta1}) and (\ref{dep_delta2}) from Remark \ref{rem_dep_delta} is that \begin{eqnarray*} \left| \tilde{M}_{1}(t,\delta)- \chi_{1}(\delta)\right| &\le& \frac{M^\ast}{\mu-|\delta|}\left[ e^{-\mu t}+ \sum_{k=1}^N e^{- \mathrm{Re}(z_k) t } + \zeta(t) e^{-\mathrm{Re}(z_N)t}\right]\\ &\le& \frac{M^\ast}{\mu-a}\left[ e^{-\mu t}+ \sum_{k=1}^N e^{- \mathrm{Re}(z_k) t } + \zeta(t) e^{-\mathrm{Re}(z_N)t}\right],\quad \forall \delta\in D_a, \end{eqnarray*} for some constant $M^\ast$ independent from $\delta$ and $t$, which implies uniform convergence of $\tilde{M}_{1}(t,\delta)$ as $t\to\infty$ towards $\chi_{1}(\delta)$ on $\delta \in D_{a}$. We then move on to $\tilde{M}_{2}(t,\delta)$. Relation (\ref{exp_tilde_b}) when $k=1$, $X_{j}=1$, $L\sim {\cal E}(\mu)$, along with (\ref{phi_l}) and (\ref{bomega}) yields the following expression \begin{align} \tilde{b}_{2}(t)=\tilde{b}_{2}(t,\delta)&= \varphi_0(t) + 2\varphi_1(t),\label{expr_bomega_workload}\\ \varphi_0(t)= \varphi_0(t,\delta)&= \frac{\mu}{\mu+2\delta}\mathbbm{E}[e^{-\mu (t-\tau_1)}\mathbbm{1}_{\{\tau_1<t\}}]=\frac{\mu}{\mu+2\delta}\int_0^t e^{-\mu (t-s)}f(s) ds,\label{expr_phi0_workload}\\ \varphi_1(t)=\varphi_1(t,\delta) &= \frac{\mu}{\mu+\delta}\mathbbm{E}[\tilde{M}_{1}(t-\tau_1,\delta) e^{-\mu (t-\tau_1)}\mathbbm{1}_{\{\tau_1<t\}}]=\frac{\mu}{\mu+\delta}\int_0^t \tilde{M}_{1}(t-s,\delta)e^{-\mu (t-s)}f(s) ds.\label{expr_phi1_workload} \end{align} Differentiating (\ref{expr_phi0_workload}) and (\ref{expr_phi1_workload}) with respect to $t$ results in \begin{eqnarray} \varphi_0'(t)&=& \frac{\mu}{\mu+2\delta}\left[-\mu \int_0^t e^{-\mu (t-s)}f(s) ds + f(t)\right],\label{expr_phi0'_workload}\\ \varphi_1'(t) &=& \frac{\mu}{\mu+\delta} \left[ \int_0^t \left(\tilde{M}_{1}'(t-s,\delta)-\mu\tilde{M}_{1}(t-s,\delta)\right)e^{-\mu (t-s)}f(s) ds + f(t)\right].\label{expr_phi1'_workload} \end{eqnarray} We are also going to need the following upper bounds for $ \varphi_0(t,\delta)$, 
$\varphi_1(t,\delta)$, obtained from (\ref{expr_phi0_workload}), (\ref{expr_phi1_workload}), and the fact that $t\mapsto \tilde{M}_{1}(t,\delta) $ is uniformly bounded in $\delta \in D_{a/2}$ by some constant $\tilde{C}$ (independent from $\delta$ and $t$, a consequence of the fact that it converges uniformly on that set): \begin{eqnarray} |\varphi_0(t,\delta)| &\le & \left|\frac{\mu}{\mu+2\delta}\right| e^{-\mu t} \int_0^\infty e^{\mu s}f(s) ds \le \frac{\mu}{\mu-a} C_0 e^{-\mu t}, \quad \delta \in D_{a/2}, \label{bound_phi0_workload}\\ |\varphi_1(t,\delta)| &\le & \left|\frac{\mu}{\mu+\delta}\right| \tilde{C} e^{-\mu t} \int_0^\infty e^{\mu s}f(s) ds \le \frac{\mu}{\mu-a/2} C_1 e^{-\mu t}, \quad \delta \in D_{a/2}, \label{bound_phi1_workload} \end{eqnarray} for some constants $C_0$ and $C_1$ independent from $\delta \in D_{a/2}$ and $t$. We also wish to obtain similar bounds for $ \varphi_0'(t,\delta)$ and $\varphi_1'(t,\delta)$. The following upper bound for $ \varphi_0'(t,\delta)$ is easily obtained thanks to (\ref{expr_phi0'_workload}): \begin{equation} \label{bound_phi0'_workload} |\varphi_0'(t,\delta)| \le \left|\frac{\mu}{\mu+2\delta}\right| \left[ \mu e^{-\mu t}\int_0^\infty e^{\mu s}f(s) ds + f(t)\right]\le \frac{\mu}{\mu-a } [C_0^\ast e^{-\mu t} + f(t) ],\quad \delta \in D_{a/2}, \end{equation} for some constant $C_0^\ast$. As to $\varphi_1'(t,\delta)$, the fact that $t\mapsto \tilde{M}_{1}(t,\delta) $ and $t\mapsto \tilde{M}_{1}'(t,\delta) $ are uniformly bounded in $\delta \in D_{a/2}$ respectively by $\tilde{C}$ and $2C \frac{\mu}{\mu-a/2}$ (thanks to (\ref{bound_tildeM_n(1)})), easily yields from (\ref{expr_phi1'_workload}) \begin{equation*} \label{bound_phi1'_workload} |\varphi_1'(t,\delta)| \le \frac{\mu}{\mu-a/2} [C_1^\ast e^{-\mu t} + f(t) ],\quad \delta \in D_{a/2}, \end{equation*} for some constant $C_1^\ast>0$. 
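The exponential decay in (\ref{bound_phi0_workload}) can also be checked numerically in a concrete case. The following sketch assumes, purely for illustration, that $\tau_1\sim{\cal E}(\lambda)$ with $\lambda>\mu$ (so that $C_0=\int_0^\infty e^{\mu s}f(s)ds=\lambda/(\lambda-\mu)$ is finite), and it samples real points $\delta$ of $D_{a/2}$; the parameter values and names are ours, not the paper's.

```python
import math

# Hypothetical parameters (not fixed by the paper): service rate mu,
# inter-arrival density f(s) = lam * e^{-lam s} with lam > mu, and a < mu.
mu, lam, a = 2.0, 5.0, 1.0

def phi0(t, delta, n=20000):
    """phi_0(t, delta) = mu/(mu + 2 delta) * int_0^t e^{-mu(t-s)} f(s) ds,
    evaluated with the trapezoidal rule."""
    h = t / n
    g = [math.exp(-mu * (t - i * h)) * lam * math.exp(-lam * i * h)
         for i in range(n + 1)]
    return mu / (mu + 2.0 * delta) * h * (sum(g) - 0.5 * (g[0] + g[-1]))

C0 = lam / (lam - mu)  # closed form of int_0^infty e^{mu s} f(s) ds

def bound(t):
    # right-hand side of the bound: mu/(mu - a) * C0 * e^{-mu t}
    return mu / (mu - a) * C0 * math.exp(-mu * t)

# check |phi_0(t, delta)| <= bound(t) on a grid of times t and
# real points delta in [-a/2, a/2]
for t in [0.5, 1.0, 2.0, 4.0]:
    for delta in [-a / 2.0, 0.0, a / 2.0]:
        assert abs(phi0(t, delta)) <= bound(t) + 1e-9
```

With $\tau_1$ exponential the integral even has a closed form, $\lambda(e^{-\mu t}-e^{-\lambda t})/(\lambda-\mu)$, against which the quadrature can be cross-checked.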
Getting back to our original concern of showing that $\tilde{M}_{2}(t,\delta)$ converges uniformly, we first note that, in view of (\ref{solMnt}) and (\ref{expr_bomega_workload}), it is necessary and sufficient to prove that $$ \delta\mapsto\int^t_0 \varphi_l(t-s,\delta)dm(s), \quad l=0,1, $$ converges uniformly on $\delta \in D_{a/2}$ as $t\to\infty$ towards $\frac{1}{\mathbbm{E}[\tau_1]}\int^\infty_0 \varphi_l(s,\delta)ds$ for $l=0,1$. Details will be given only for $l=0$, as a similar proof applies to $l=1$. The starting point is the following decomposition, already used in Relation (\ref{tMntminuscn}) in Section \ref{proof_theorem_expansion}: \begin{eqnarray}\label{decomposition_unif_convergence} \int^t_0 \varphi_0(t-s,\delta)dm(s)- \frac{1}{\mathbbm{E}[\tau_1]}\int^\infty_0 \varphi_0(s,\delta)ds &=&-\frac{1}{\mathbbm{E}[\tau_1]}\int^\infty_t \varphi_0(s,\delta)ds+\int^t_0\varphi_0(t-s,\delta)dv(s)\nonumber\\ &:=& I_1(t,\delta)+I_2(t,\delta). \end{eqnarray} Thus, in view of (\ref{decomposition_unif_convergence}), it suffices to prove that $ I_1(t,\delta)$ and $ I_2(t,\delta)$ uniformly converge towards $0$ as $t\to\infty$ on $\delta \in D_{a/2}$. Uniform convergence of $ I_1(t,\delta)$ is obtained thanks to (\ref{bound_phi0_workload}), which entails $$ \sup_{\delta \in D_{a/2}}|I_1(t,\delta)|\le \frac{1}{\mathbbm{E}[\tau_1]}\frac{1}{\mu-a} C_0 e^{-\mu t}\longrightarrow 0,\quad t\to\infty . $$ As to $I_2(t,\delta)$, performing an integration by parts as in (\ref{I2t1}) yields $$ I_2(t,\delta)=\varphi_0(t,\delta) v(0^-)+\int^t_0 v(s) \varphi_0'(t-s,\delta)ds. $$ The first term on the right-hand side uniformly converges to $0$ on $\delta \in D_{a/2}$ thanks to (\ref{bound_phi0_workload}).
As to the second term, we use the inequality (\ref{bound_phi0'_workload}) to get \begin{equation}\label{ineg_I_2} \left|\int^t_0 v(s) \varphi_0'(t-s,\delta) ds\right| \le \int^t_0 |v(s)| |\varphi_0'(t-s,\delta)| ds\le \frac{\mu}{\mu-a } \int^t_0 |v(s)| [C_0^\ast e^{-\mu (t-s)} + f(t-s)] ds,\quad \delta \in D_{a/2}. \end{equation} Note that $ \int^t_0 |v(s)| e^{-\mu (t-s)} ds$ tends to zero by the dominated convergence theorem, as $\int^\infty_0 |v(s)| ds$ is finite (a direct consequence of expansion (\ref{vxexpan})). Also, the light-tailed assumption in (\ref{assumption_light_tailed}) for $\tau_1$ entails that for all $j=1,...,N$ one has $\int_0^t e^{-z_j s}f(t-s) ds= e^{-z_j t} \int_0^t e^{z_j s}f(s)ds\longrightarrow 0$ as $t\to\infty$. Similarly, $\int_0^t \eta(s) e^{-z_j s}f(t-s) ds \longrightarrow 0$, where $\eta(x)$ is defined by (\ref{function_eta}). Hence $\int_0^t |v(s)|f(t-s) ds $ tends to zero as $t\to\infty$. Then, from (\ref{ineg_I_2}), $I_2(t,\delta)$ uniformly converges to $0$ on $\delta \in D_{a/2}$. Thus, all in all, $\tilde{M}_{2}(t,\delta)$ converges uniformly on $\delta \in D_{a/2}$ towards $\chi_{2}(\delta)$.\hfill $\Box$ \noindent{\bf Proof of Theorem \ref{workload}.} Since $0\le -\left.\frac{\partial}{\partial \delta}\tilde{Z}(t,\delta)\right|_{\delta=0}=D(t)\le \sum_{i=1}^{N(t)}L_i$ is integrable, it is possible to exchange differentiation with respect to $\delta$ and expectation, and one has for all $t>0$ \begin{equation}\label{proof_interchange_expectation1} -\left.\frac{\partial}{\partial \delta}\tilde{M}_{1}(t,\delta)\right|_{\delta=0}=-\left.\frac{\partial}{\partial \delta}\mathbbm{E}[\tilde{Z}(t,\delta)]\right|_{\delta=0}= -\mathbbm{E}\left[ \left.\frac{\partial}{\partial \delta}\tilde{Z}(t,\delta)\right|_{\delta=0}\right]= \mathbbm{E}[D(t)]. \end{equation} The main point in the proof is to be able to pass to the limit in (\ref{proof_interchange_expectation1}) as $t\to\infty$.
To do this, we use the fact, proved in Lemma \ref{Lemma_analytic}, that $\delta\mapsto \tilde{M}_{1}(t,\delta)$ is analytic on the set $D_{a}$ where $a<\mu$ is arbitrary. Since by Lemma \ref{lemma_uniform_convergence}, $\tilde{M}_{1}(t,\delta)$ uniformly converges towards $\chi_{1}(\delta)$ on this set, a standard result in complex analysis states that the limiting function $\delta\mapsto\chi_{1}(\delta)$ is analytic on the same set, hence in particular at $\delta=0$ (which is known from its expression (\ref{CHIk11})), but, more importantly, that one can interchange differentiation and passage to the limit, i.e. $$ \lim_{t\to\infty}\left.\frac{\partial}{\partial \delta}\tilde{M}_{1}(t,\delta)\right|_{\delta=0}=\left.\frac{\partial}{\partial \delta}\left[\lim_{t\to\infty}\tilde{M}_{1}(t,\delta)\right]\right|_{\delta=0}=\left.\frac{\partial}{\partial \delta}\chi_{1}(\delta)\right|_{\delta=0}. $$ The expression of $ \chi_{1}(\delta)$ in the case $k=1$ is given by (\ref{CHIk11}) in Corollary \ref{nmoment1} with $X_j=1$, yielding (\ref{limiting_expected_workload}). Let us move on to the covariance of $D(t)$ and the queue size $Z_1(t,0)$. One has $- \left.\frac{\partial}{\partial \delta}[Z_1(t,\delta)]^2\right|_{\delta=0}=2D(t)Z_1(t,0)$, and since the latter is integrable due to $D(t)Z_1(t,0)\le \left(\sum_{i=1}^{N(t)}L_i\right) N(t)$, as in (\ref{proof_interchange_expectation1}), interchanging expectation and differentiation results in \[ -\left.\frac{\partial}{\partial \delta}\tilde{M}_{2}(t,\delta)\right|_{\delta=0}=2\mathbbm{E}[D(t)Z_1(t,0)]. \] The same analyticity argument for $\delta\mapsto \tilde{M}_{2}(t,\delta) $ on $\delta\in D_{a/2}$ in Lemma \ref{Lemma_analytic}, coupled with the uniform convergence result as $t\to\infty$ in Lemma \ref{lemma_uniform_convergence}, yields that $\lim_{t\to\infty}\left.\frac{\partial}{\partial \delta}\tilde{M}_{2}(t,\delta)\right|_{\delta=0}= \left.\frac{\partial}{\partial \delta} \chi_{2}(\delta)\right|_{\delta=0} $.
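The closed-form differentiation of $\chi_{2}(\delta)$ carried out below can be double-checked numerically. The following sketch uses a central finite difference at $\delta=0$, writing $r$ for the constant $\L^\tau(\mu)/(1-\L^\tau(\mu))$; the numerical values of $\mu$, $\mathbbm{E}[\tau_1]$ and $r$ are hypothetical, chosen only for illustration.

```python
# Hypothetical numerical values (for illustration only): mu, E[tau_1],
# and r standing for the constant L^tau(mu)/(1 - L^tau(mu)).
mu, e_tau, r = 2.0, 1.5, 0.7

def chi2(d):
    # chi_2(delta) = (1/E[tau_1]) * ( 1/(mu + 2 delta) + mu r/(mu + delta)^2 )
    return (1.0 / e_tau) * (1.0 / (mu + 2.0 * d) + mu * r / (mu + d) ** 2)

h = 1e-6
numeric = (chi2(h) - chi2(-h)) / (2.0 * h)          # central difference at 0
closed_form = -2.0 / (mu ** 2 * e_tau) * (1.0 + r)  # claimed derivative at 0
assert abs(numeric - closed_form) < 1e-7
```

The same one-liner check applies to $\chi_{1}(\delta)$ and its derivative at $0$.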
Now the fact that $\lim_{t\to\infty} \tilde{M}_{1}(t,0)=\chi_{1}(0)$ and $ \lim_{t\to\infty}\left.\frac{\partial}{\partial \delta}\tilde{M}_{1}(t,\delta)\right|_{\delta=0}= \left.\frac{\partial}{\partial \delta} \chi_{1}(\delta)\right|_{\delta=0} $ implies \begin{eqnarray}\label{end_expression_cov} \lim_{t\to\infty}\mathbb{C}ov[D(t), Z_1(t,0)] &=&\lim_{t\to\infty}\mathbbm{E}[D(t)Z_1(t,0)] - \mathbbm{E}[D(t)]\mathbbm{E}[Z_1(t,0)] \nonumber\\ &=&-\left.\frac{1}{2}\frac{\partial}{\partial \delta} \chi_{2}(\delta)\right|_{\delta=0}+\chi_{1}(0).\left. \frac{\partial}{\partial \delta} \chi_{1}(\delta)\right|_{\delta=0}. \end{eqnarray} Expression (\ref{CHIk1n}) with $X_j=1$ yields $ \chi_{2}(\delta)=\frac{1}{\mathbbm{E}[\tau_1]}\left( \frac{1}{\mu+2\delta}+\frac{\mu}{(\mu+\delta)^2}\frac{\L^\tau(\mu)}{1-\L^\tau(\mu)}\right), $ and in turn, \[ \left.\frac{\partial}{\partial \delta} \chi_{2}(\delta)\right|_{\delta=0}=-\left.\frac{1}{\mathbbm{E}[\tau_1]}\left( \frac{2}{(\mu+2\delta)^2}+\frac{2\mu}{(\mu+\delta)^3}\frac{\L^\tau(\mu)}{1-\L^\tau(\mu)}\right)\right|_{\delta=0}=-\frac{2}{\mu^2\mathbbm{E}[\tau_1]}\left(1+\frac{\L^\tau(\mu)}{1-\L^\tau(\mu)}\right). \] Hence, substitution of the above expression together with $\chi_{1}(\delta)$ obtained previously into (\ref{end_expression_cov}) yields (\ref{limiting_cov_workload}) for the limiting covariance.\hfill $\Box$ \section*{Acknowledgements} This work was supported by Joint Research Scheme France/Hong Kong Procore Hubert Curien grant No 35296 and F-HKU710/15T. \bibliographystyle{alpha}
\section{Introduction} A {\it partitioned balanced tournament design of side $n$}, \textrm{PBTD($n$)}, defined on a $2n$-set $V$, is an arrangement of the ${2n \choose 2}$ distinct unordered pairs of the elements of $V$ into an $n\times (2n-1)$ array such that \begin{enumerate} \item every element of $V$ is contained in precisely one cell of each column, \item every element of $V$ is contained in at most two cells of any row, \item each row contains all $2n$ elements of $V$ in the first $n$ columns, and \item each row contains all $2n$ elements of $V$ in the last $n$ columns, \end{enumerate} \noindent see \cite{hb}. E. R. Lamken proved the following theorem. \begin{thm}[\cite{pbtd}]\label{th:Lamken} There exists a PBTD{\normalfont ($n$)} for $n$ a positive integer, $n \ge 5$, except possibly for $n \in \{9,11,15\}$. \end{thm} Let $V$ be a $2n$-set. A {\it Howell design} $H(s,2n)$ is an $s \times s$ array, $H$, that satisfies the following three conditions: \begin{enumerate} \item every cell of $H$ is empty or contains an unordered pair of elements from $V$, \item each element of $V$ occurs in exactly one cell of each row and each column of $H$, and \item each unordered pair of elements from $V$ occurs in at most one cell of $H$, \end{enumerate} \noindent see \cite{howell}. For $T$ a PBTD($n$), let $T^L,T^C$ and $T^R$ be the first $(n-1)$ columns, the $n$-th column and the last $(n-1)$ columns of $T$, respectively. Then ($T^L\ T^C$) and ($T^R\ T^C$) are Howell designs $H(n,2n)$. These two designs are called {\it almost disjoint}. Conversely, if there is a pair of almost disjoint Howell designs $H(n,2n)$, then there is a partitioned balanced tournament design of side $n$. By computer calculation, we found pairs of almost disjoint Howell designs $H(n,2n)$ for $n \in \{9,11,15\}$; they are given in Figures~1, 2 and 3. Hence the following theorem holds. \begin{thm}\label{th:ArayaTokihisa} Partitioned balanced tournament designs of side $n$ exist for $n \in \{9,11,15\}$.
\end{thm} It is not difficult to show that there is no PBTD($n$) for $n \le 4$. Therefore we have the following corollary from Theorems \ref{th:Lamken} and \ref{th:ArayaTokihisa}. \begin{cor} There exists a PBTD{\normalfont ($n$)} if and only if $n$ is a positive integer, $n \ge 5$. \end{cor} \begin{figure} $\begin{array}{|c|c|c|c|c|c|c|c|c|} \hline 2, 16 & 3, 17 & 4, 6 & 5, 7 & 8, 10 & 9, 11 & 12, 14 & 13, 15 & 0, 1\\\hline 0, 4 & 1, 5 & 7, 9 & 6, 8 & 11, 13 & 10, 12 & 15, 17 & 14, 16 & 2, 3 \\ \hline 1, 3 & 0, 2 & 10, 13 & 11, 12 & 14, 17 & 15, 16 & 6, 9 & 7, 8 & 4, 5\\ \hline 10, 14 & 11, 15 & 0, 8 & 1, 9 & 2, 4 & 3, 5 & 13, 16 & 12, 17 & 6, 7\\ \hline 5, 6 & 4, 7 & 2, 17 & 3, 16 & 12, 15 & 13, 14 & 0, 10 & 1, 11 & 8, 9\\ \hline 8, 12 & 9, 13 & 1, 15 & 0, 14 & 5, 16 & 4, 17 & 3, 7 & 2, 6 & 10, 11\\\hline 9, 15 & 8, 14 & 11, 16 & 10, 17 & 3, 6 & 2, 7 & 1, 4 & 0, 5 & 12, 13\\ \hline 11, 17 & 10, 16 & 5, 12 & 4, 13 & 1, 7 & 0, 6 & 2, 8 & 3, 9 & 14, 15\\ \hline 7, 13 & 6, 12 & 3, 14 & 2, 15 & 0, 9 & 1, 8 & 5, 11 & 4, 10 & 16, 17 \\\hline \end{array}$ \smallskip $\begin{array}{|c|c|c|c|c|c|c|c|c|} \hline 2, 5 & 3, 4 & 6, 15 & 7, 14 & 8, 11 & 9, 10 & 12, 16 & 13, 17 & 0,1\\\hline 0, 16 & 1, 17 & 4, 8 & 5, 9 & 6, 13 & 7, 12 & 10, 15 & 11, 14 & 2,3\\ \hline 6, 10 & 7, 11 & 1, 16 & 0, 17 & 9, 12 & 8, 13 & 2, 14 & 3, 15 & 4,5\\ \hline 3, 13 & 2, 12 & 9, 17 & 8, 16 & 4, 14 & 5, 15 & 0, 11 & 1, 10 & 6,7\\ \hline 4, 11 & 5, 10 & 2, 13 & 3, 12 & 0, 15 & 1, 14 & 7, 17 & 6, 16 & 8,9\\ \hline 1, 12 & 0, 13 & 5, 14 & 4, 15 & 7, 16 & 6, 17 & 3, 8 & 2, 9 & 10,11\\\hline 9, 14 & 8, 15 & 3, 11 & 2, 10 & 5, 17 & 4, 16 & 1, 6 & 0, 7 & 12,13\\ \hline 8, 17 & 9, 16 & 7, 10 & 6, 11 & 1, 2 & 0, 3 & 5, 13 & 4, 12 & 14,15\\ \hline 7, 15 & 6, 14 & 0, 12 & 1, 13 & 3, 10 & 2, 11 & 4, 9 & 5, 8 & 16,17\\\hline \end{array}$ \caption{a pair of almost disjoint Howell designs $H(9,18)$} \end{figure} \begin{figure} $\begin{array}{|c|c|c|c|c|c|c|c|c|c|c|} \hline 2, 4 & 3, 5 & 18, 21 & 19, 20 & 15, 17 &
14, 16 & 11, 13 & 10, 12 & 7, 8 & 6, 9 & 0, 1 \\\hline 0, 9 & 1, 8 & 4, 6 & 5, 7 & 13, 20 & 12, 21 & 17, 19 & 16, 18 & 11, 15 & 10, 14 & 2, 3 \\\hline 11, 17 & 10, 16 & 1, 2 & 0, 3 & 6, 8 & 7, 9 & 12, 15 & 13, 14 & 19, 21 & 18, 20 & 4, 5 \\\hline 13, 21 & 12, 20 & 11, 19 & 10, 18 & 3, 4 & 2, 5 & 0, 8 & 1, 9 & 14, 17 & 15, 16 & 6, 7 \\\hline 16, 19 & 17, 18 & 13, 15 & 12, 14 & 11, 21 & 10, 20 & 5, 6 & 4, 7 & 0, 2 & 1, 3 & 8, 9 \\\hline 5, 12 & 4, 13 & 7, 14 & 6, 15 & 9, 16 & 8, 17 & 1, 18 & 0, 19 & 3, 20 & 2, 21 & 10, 11 \\\hline 8, 10 & 9, 11 & 17, 20 & 16, 21 & 14, 18 & 15, 19 & 2, 7 & 3, 6 & 1, 5 & 0, 4 & 12, 13 \\\hline 3, 7 & 2, 6 & 0, 10 & 1, 11 & 12, 19 & 13, 18 & 16, 20 & 17, 21 & 4, 9 & 5, 8 & 14, 15 \\\hline 1, 6 & 0, 7 & 5, 9 & 4, 8 & 2, 10 & 3, 11 & 14, 21 & 15, 20 & 12, 18 & 13, 19 & 16, 17 \\\hline 14, 20 & 15, 21 & 3, 8 & 2, 9 & 1, 7 & 0, 6 & 4, 10 & 5, 11 & 13, 16 & 12, 17 & 18, 19 \\\hline 15, 18 & 14, 19 & 12, 16 & 13, 17 & 0, 5 & 1, 4 & 3, 9 & 2, 8 & 6, 10 & 7, 11 & 20, 21 \\\hline \end{array}$ \smallskip $\begin{array}{|c|c|c|c|c|c|c|c|c|c|c|} \hline 2, 12 & 11, 18 & 8, 14 & 4, 15 & 3, 13 & 5, 21 & 9, 17 & 6, 20 & 10, 19 & 7, 16 & 0, 1 \\\hline 5, 15 & 4, 14 & 11, 20 & 0, 16 & 6, 17 & 9, 18 & 7, 13 & 1, 19 & 8, 12 & 10, 21 & 2, 3 \\\hline 8, 19 & 7, 17 & 6, 16 & 11, 12 & 2, 18 & 10, 13 & 1, 20 & 9, 15 & 3, 21 & 0, 14 & 4, 5 \\\hline 4, 20 & 0, 21 & 9, 19 & 8, 18 & 11, 14 & 2, 16 & 10, 15 & 3, 12 & 1, 17 & 5, 13 & 6, 7 \\\hline 11, 16 & 6, 12 & 2, 13 & 1, 21 & 0, 20 & 7, 15 & 4, 18 & 10, 17 & 5, 14 & 3, 19 & 8, 9 \\\hline 7, 18 & 9, 20 & 1, 12 & 3, 14 & 5, 16 & 6, 19 & 8, 21 & 0, 13 & 2, 15 & 4, 17 & 10, 11 \\\hline 9, 14 & 3, 15 & 4, 21 & 2, 17 & 1, 10 & 0, 11 & 5, 19 & 8, 16 & 7, 20 & 6, 18 & 12, 13 \\\hline 3, 10 & 1, 16 & 5, 17 & 6, 13 & 4, 19 & 8, 20 & 2, 11 & 7, 21 & 0, 18 & 9, 12 & 14, 15 \\\hline 6, 21 & 5, 10 & 3, 18 & 7, 19 & 8, 15 & 1, 14 & 0, 12 & 4, 11 & 9, 13 & 2, 20 & 16, 17 \\\hline 0, 17 & 8, 13 & 7, 10 & 5, 20 & 9, 21 & 4, 12 & 
3, 16 & 2, 14 & 6, 11 & 1, 15 & 18, 19 \\\hline 1, 13 & 2, 19 & 0, 15 & 9, 10 & 7, 12 & 3, 17 & 6, 14 & 5, 18 & 4, 16 & 8, 11 & 20, 21 \\\hline \end{array}$ \caption{a pair of almost disjoint Howell designs $H(11,22)$} \end{figure} \begin{landscape} \begin{figure} $\begin{array}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline 2, 4 & 3, 5 & 6, 9 & 7, 8 & 10, 14 & 11, 15 & 12, 16 & 13, 17 & 18, 20 & 19, 21 & 22, 26 & 23, 27 & 24, 29 & 25, 28 & 0, 1 \\\hline 9, 10 & 8, 11 & 12, 14 & 13, 15 & 0, 18 & 1, 19 & 24, 28 & 25, 29 & 17, 26 & 16, 27 & 21, 23 & 20, 22 & 4, 6 & 5, 7 & 2, 3 \\\hline 1, 15 & 0, 14 & 2, 20 & 3, 21 & 16, 26 & 17, 27 & 23, 25 & 22, 24 & 6, 8 & 7, 9 & 18, 29 & 19, 28 & 11, 12 & 10, 13 & 4, 5 \\\hline 5, 23 & 4, 22 & 18, 28 & 19, 29 & 25, 27 & 24, 26 & 17, 20 & 16, 21 & 0, 13 & 1, 12 & 9, 11 & 8, 10 & 3, 15 & 2, 14 & 6, 7 \\\hline 17, 21 & 16, 20 & 27, 29 & 26, 28 & 19, 22 & 18, 23 & 11, 13 & 10, 12 & 5, 15 & 4, 14 & 0, 3 & 1, 2 & 7, 25 & 6, 24 & 8, 9 \\\hline 16, 28 & 17, 29 & 21, 24 & 20, 25 & 1, 13 & 0, 12 & 2, 5 & 3, 4 & 9, 27 & 8, 26 & 6, 14 & 7, 15 & 19, 23 & 18, 22 & 10, 11 \\\hline 22, 27 & 23, 26 & 1, 3 & 0, 2 & 4, 7 & 5, 6 & 8, 14 & 9, 15 & 21, 25 & 20, 24 & 10, 28 & 11, 29 & 16, 18 & 17, 19 & 12, 13 \\\hline 0, 19 & 1, 18 & 13, 16 & 12, 17 & 11, 28 & 10, 29 & 9, 26 & 8, 27 & 4, 23 & 5, 22 & 7, 24 & 6, 25 & 2, 21 & 3, 20 & 14, 15 \\\hline 13, 20 & 12, 21 & 25, 26 & 24, 27 & 3, 6 & 2, 7 & 10, 18 & 11, 19 & 22, 28 & 23, 29 & 1, 5 & 0, 4 & 9, 14 & 8, 15 & 16, 17 \\\hline 26, 29 & 27, 28 & 5, 8 & 4, 9 & 12, 20 & 13, 21 & 3, 7 & 2, 6 & 11, 14 & 10, 15 & 17, 25 & 16, 24 & 1, 22 & 0, 23 & 18, 19 \\\hline 6, 11 & 7, 10 & 0, 22 & 1, 23 & 5, 9 & 4, 8 & 19, 27 & 18, 26 & 3, 24 & 2, 25 & 12, 15 & 13, 14 & 17, 28 & 16, 29 & 20, 21 \\\hline 3, 25 & 2, 24 & 7, 11 & 6, 10 & 21, 29 & 20, 28 & 0, 15 & 1, 14 & 16, 19 & 17, 18 & 4, 27 & 5, 26 & 8, 13 & 9, 12 & 22, 23 \\\hline 8, 12 & 9, 13 & 17, 23 & 16, 22 & 2, 15 & 3, 14 & 6, 29 & 7, 28 & 1, 10 & 0, 11 & 19, 20 & 18, 
21 & 5, 27 & 4, 26 & 24, 25 \\\hline 18, 24 & 19, 25 & 4, 15 & 5, 14 & 8, 17 & 9, 16 & 21, 22 & 20, 23 & 7, 29 & 6, 28 & 2, 13 & 3, 12 & 0, 10 & 1, 11 & 26, 27 \\\hline 7, 14 & 6, 15 & 10, 19 & 11, 18 & 23, 24 & 22, 25 & 1, 4 & 0, 5 & 2, 12 & 3, 13 & 8, 16 & 9, 17 & 20, 26 & 21, 27 & 28, 29 \\\hline \end{array}$ \smallskip $\begin{array}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline 2, 11 & 3, 10 & 9, 24 & 15, 27 & 4, 18 & 5, 19 & 16, 23 & 12, 29 & 14, 26 & 7, 20 & 13, 25 & 8, 28 & 17, 22 & 6, 21 & 0, 1 \\\hline 18, 25 & 4, 13 & 5, 12 & 11, 26 & 15, 29 & 6, 20 & 7, 21 & 8, 23 & 0, 17 & 14, 28 & 9, 22 & 1, 27 & 10, 16 & 19, 24 & 2, 3 \\\hline 9, 23 & 20, 27 & 1, 6 & 0, 7 & 13, 28 & 15, 17 & 8, 22 & 21, 26 & 10, 25 & 2, 19 & 14, 16 & 11, 24 & 3, 29 & 12, 18 & 4, 5 \\\hline 10, 24 & 11, 25 & 22, 29 & 3, 8 & 2, 9 & 1, 16 & 15, 19 & 0, 20 & 23, 28 & 12, 27 & 4, 21 & 14, 18 & 13, 26 & 5, 17 & 6, 7 \\\hline 15, 21 & 12, 26 & 13, 27 & 17, 24 & 5, 10 & 4, 11 & 3, 18 & 7, 19 & 2, 22 & 16, 25 & 0, 29 & 6, 23 & 14, 20 & 1, 28 & 8, 9 \\\hline 5, 20 & 15, 23 & 0, 28 & 1, 29 & 19, 26 & 7, 12 & 6, 13 & 3, 16 & 9, 21 & 4, 24 & 18, 27 & 2, 17 & 8, 25 & 14, 22 & 10, 11 \\\hline 1, 8 & 7, 22 & 15, 25 & 2, 16 & 3, 17 & 21, 28 & 0, 9 & 14, 24 & 5, 18 & 11, 23 & 6, 26 & 20, 29 & 4, 19 & 10, 27 & 12, 13 \\\hline 0, 26 & 2, 28 & 4, 16 & 6, 18 & 8, 20 & 10, 22 & 12, 24 & 1, 25 & 3, 27 & 5, 29 & 7, 17 & 9, 19 & 11, 21 & 13, 23 & 14, 15 \\\hline 4, 29 & 1, 21 & 7, 18 & 13, 22 & 14, 25 & 2, 23 & 10, 20 & 11, 27 & 6, 19 & 3, 9 & 12, 28 & 15, 26 & 5, 24 & 0, 8 & 16, 17 \\\hline 12, 22 & 6, 17 & 3, 23 & 9, 20 & 1, 24 & 14, 27 & 4, 25 & 2, 10 & 13, 29 & 8, 21 & 5, 11 & 0, 16 & 15, 28 & 7, 26 & 18, 19 \\\hline 6, 27 & 0, 24 & 8, 19 & 5, 25 & 11, 22 & 3, 26 & 14, 29 & 9, 28 & 4, 12 & 1, 17 & 10, 23 & 7, 13 & 2, 18 & 15, 16 & 20, 21 \\\hline 14, 17 & 8, 29 & 2, 26 & 10, 21 & 7, 27 & 13, 24 & 5, 28 & 15, 18 & 11, 16 & 0, 6 & 3, 19 & 12, 25 & 1, 9 & 4, 20 & 22, 23 \\\hline 7, 16 & 14, 19 & 10, 17 & 4, 28 & 
12, 23 & 9, 29 & 1, 26 & 6, 22 & 15, 20 & 13, 18 & 2, 8 & 5, 21 & 0, 27 & 3, 11 & 24, 25 \\\hline 3, 28 & 9, 18 & 14, 21 & 12, 19 & 6, 16 & 0, 25 & 11, 17 & 5, 13 & 8, 24 & 15, 22 & 1, 20 & 4, 10 & 7, 23 & 2, 29 & 26, 27 \\\hline 13, 19 & 5, 16 & 11, 20 & 14, 23 & 0, 21 & 8, 18 & 2, 27 & 4, 17 & 1, 7 & 10, 26 & 15, 24 & 3, 22 & 6, 12 & 9, 25 & 28, 29 \\\hline \end{array}$ \caption{a pair of almost disjoint Howell designs $H(15,30)$} \end{figure} \end{landscape} \section{Observations} Let $V=\{0,1,\dots,2n-1\}$ be a $2n$-set and $T=(T^L\ T^C\ T^R)$ a PBTD($n$). Suppose $A$ is the array obtained from $T$ by permuting the elements of $V$, the rows, the first $n-1$ columns, or the last $n-1$ columns, or by taking $A=(T^R\ T^C\ T^L)$. Then $A$ is also a PBTD($n$). Two PBTD($n$) are {\it isomorphic} if one can be obtained from the other by these operations. By permuting elements of $V$, we may assume $T^C$ is the transpose of the array $(\{0,1\}\ \{2,3\}\ \dots\ \{2n-2,2n-1\})$. From Dinitz and Dinitz~\cite{pbtd10}, there exist two PBTD($5$)'s up to isomorphism. For these two PBTD($5$)'s, we find that there exists the permutation $$\sigma=(0,1)(2,3)(4,5)(6,7)(8,9)$$ such that $$T^L= \begin{array}{|c|c|c|c|} \hline t_{11} & \sigma(t_{11}) & t_{13} & \sigma(t_{13})\\\hline t_{21} & \sigma(t_{21}) & t_{23} & \sigma(t_{23})\\\hline t_{31} & \sigma(t_{31}) & t_{33} & \sigma(t_{33})\\\hline t_{41} & \sigma(t_{41}) & t_{43} & \sigma(t_{43})\\\hline t_{51} & \sigma(t_{51}) & t_{53} & \sigma(t_{53})\\\hline \end{array} \text{ and } T^R= \begin{array}{|c|c|c|c|} \hline t_{16} & \sigma(t_{16}) & t_{18} & \sigma(t_{18})\\\hline t_{26} & \sigma(t_{26}) & t_{28} & \sigma(t_{28})\\\hline t_{36} & \sigma(t_{36}) & t_{38} & \sigma(t_{38})\\\hline t_{46} & \sigma(t_{46}) & t_{48} & \sigma(t_{48})\\\hline t_{56} & \sigma(t_{56}) & t_{58} & \sigma(t_{58})\\\hline \end{array} \ .$$ Thus we observe that these two PBTD($5$)'s are determined by some $4$ columns and the permutation $\sigma$.
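Since $\sigma$ interchanges $2k$ and $2k+1$, each even-indexed column of $T^L$ and $T^R$ is the image of the preceding column under $\sigma$, so a full row can be rebuilt from its odd-indexed cells alone. A minimal sketch of this reconstruction in Python (the cell values below are hypothetical and not taken from an actual PBTD($5$); our searches themselves used GAP):

```python
def sigma(x):
    # the involution (0,1)(2,3)(4,5)...: flip the lowest bit
    return x ^ 1

def sigma_cell(cell):
    # apply sigma to both entries of a cell (an unordered pair)
    return frozenset(sigma(x) for x in cell)

def expand_row(half_cells):
    # rebuild a full row of T^L (or T^R) from its odd-indexed cells
    row = []
    for cell in half_cells:
        row += [frozenset(cell), sigma_cell(cell)]
    return row

# hypothetical cells {0,4} and {3,7} expand to {0,4}, {1,5}, {3,7}, {2,6}
print(expand_row([{0, 4}, {3, 7}]))
```

The sketch only illustrates how $\sigma$ halves the number of free cells a search has to fill.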
Seah and Stinson~\cite{pbtd7} obtained two almost disjoint Howell designs $H(7,14)$ by computer calculation for the given $T^L$, which was constructed by E. R. Lamken. Then for these two PBTD($7$)'s, we find that there exists the permutation $$\sigma=(0,1)(2,3)(4,5)(6,7)(8,9)(10,11)(12,13)$$ such that $$T^L= \begin{array}{|c|c|c|c|c|c|} \hline t_{11} & \sigma(t_{11}) & t_{13} & \sigma(t_{13}) & t_{15} & \sigma(t_{15})\\\hline t_{21} & \sigma(t_{21}) & t_{23} & \sigma(t_{23}) & t_{25} & \sigma(t_{25})\\\hline t_{31} & \sigma(t_{31}) & t_{33} & \sigma(t_{33}) & t_{35} & \sigma(t_{35})\\\hline t_{41} & \sigma(t_{41}) & t_{43} & \sigma(t_{43}) & t_{45} & \sigma(t_{45})\\\hline t_{51} & \sigma(t_{51}) & t_{53} & \sigma(t_{53}) & t_{55} & \sigma(t_{55})\\\hline t_{61} & \sigma(t_{61}) & t_{63} & \sigma(t_{63}) & t_{65} & \sigma(t_{65})\\\hline t_{71} & \sigma(t_{71}) & t_{73} & \sigma(t_{73}) & t_{75} & \sigma(t_{75})\\\hline \end{array}\ .$$ \noindent We also find that there exists the permutation $$\tau = (0,2,4)(1,3,5)(8,10,12)(9,11,13)$$ such that $$T^L= \begin{array}{|c|c|c|c|c|c|} \hline t_{11} & t_{12} & t_{13} & t_{14} & t_{15} & t_{16}\\\hline \tau(t_{15}) & \tau(t_{16}) & \tau(t_{11}) & \tau(t_{12}) & \tau(t_{13}) & \tau(t_{14})\\\hline \tau^2(t_{13}) & \tau^2(t_{14}) &\tau^2(t_{15}) & \tau^2(t_{16}) & \tau^2(t_{11}) & \tau^2(t_{12}) \\\hline t_{41} & t_{42} & \tau(t_{41}) & \tau(t_{42}) & \tau^2(t_{41}) & \tau^2(t_{42})\\\hline t_{51} & t_{52} & t_{53} & t_{54} & t_{55} & t_{56}\\\hline \tau(t_{55}) & \tau(t_{56}) & \tau(t_{51}) & \tau(t_{52}) & \tau(t_{53}) & \tau(t_{54})\\\hline \tau^2(t_{53}) & \tau^2(t_{54}) &\tau^2(t_{55}) & \tau^2(t_{56}) & \tau^2(t_{51}) & \tau^2(t_{52})\\\hline \end{array}$$ and $$T^R= \begin{array}{|c|c|c|c|c|c|} \hline t_{1,8} & t_{1,9} & t_{1,10} & t_{1,11} & t_{1,12} & t_{1,13}\\\hline \tau(t_{1,10}) & \tau(t_{1,8}) & \tau(t_{1,9}) & \tau(t_{1,13}) & \tau(t_{1,11}) & \tau(t_{1,12})\\\hline \tau^2(t_{1,9}) & \tau^2(t_{1,10}) &
\tau^2(t_{1,8}) & \tau^2(t_{1,12}) & \tau^2(t_{1,13}) & \tau^2(t_{1,11})\\\hline t_{4,8} & \tau(t_{4,8}) & \tau^2(t_{4,8}) & t_{4,11} & \tau(t_{4,11}) & \tau^2(t_{4,11})\\\hline t_{5,8} & t_{5,9} & t_{5,10} & t_{5,11} & t_{5,12} & t_{5,13}\\\hline \tau(t_{5,10}) & \tau(t_{5,8}) & \tau(t_{5,9}) & \tau(t_{5,13}) & \tau(t_{5,11}) & \tau(t_{5,12})\\\hline \tau^2(t_{5,9}) & \tau^2(t_{5,10}) & \tau^2(t_{5,8}) & \tau^2(t_{5,12}) & \tau^2(t_{5,13}) & \tau^2(t_{5,11})\\\hline \end{array}\ .$$ Thus we observe that $T^L$ is determined by some $7$ cells and the permutations $\sigma$ and $\tau$, while $T^R$ is determined by some $14$ cells and the permutation $\tau$. Based on these observations, we wrote GAP programs~\cite{GAP4} to construct partitioned balanced tournament designs, and found the designs shown in Figures 1, 2 and 3. \section*{Acknowledgements} This work was supported by JSPS KAKENHI Grant Number 21K03350. In this research, we used the supercomputer of ACCMS, Kyoto University.
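As a sketch of the property such search programs verify: in the fully filled arrays $H(n,2n)$ shown in the figures, every row and every column must cover all $2n$ symbols, and no unordered pair may occupy two cells. A Python version of the check follows (our searches used GAP; the small $H(3,6)$ used to exercise it is built from two orthogonal Latin squares of order 3 and is not one of the designs above):

```python
from itertools import chain

def is_howell(array, num_symbols):
    """Check that a fully filled s x s array of unordered pairs is a Howell
    design: each row and column covers every symbol exactly once, and no
    pair of symbols appears in more than one cell."""
    symbols = set(range(num_symbols))
    cells = [frozenset(c) for row in array for c in row]
    if len(set(cells)) != len(cells):          # a pair is repeated
        return False
    for line in chain(array, zip(*array)):     # rows, then columns
        if set().union(*line) != symbols:
            return False
    return True

# A small H(3,6) from two orthogonal Latin squares of order 3:
# cell (i, j) holds the pair {2*((i+j) % 3), 2*((i+2*j) % 3) + 1}.
H36 = [[{2 * ((i + j) % 3), 2 * ((i + 2 * j) % 3) + 1} for j in range(3)]
       for i in range(3)]
assert is_howell(H36, 6)
```

Orthogonality of the two Latin squares guarantees that no even-odd pair repeats, which is exactly the pair condition checked above.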
\section{Introduction} Coronal hard X-ray (HXR) sources provide one of the most exciting diagnostics of coronal plasma. However, the majority of flare-associated HXRs that we can observe occur at the footpoints of flare loops, when accelerated electrons collide with dense chromospheric plasma. Because these footpoint sources are orders of magnitude brighter than coronal HXR emission, most X-ray instruments are incapable of resolving both coronal and chromospheric sources simultaneously. Coronal X-ray emission can have both thermal and non-thermal components. These can be difficult to distinguish from each other in the low corona, where accelerated electrons interact with dense arcades of cooling flare loops. For less powerful or more compact flares, looptop sources, or indeed any X-ray source located even higher up in the corona (hard X-ray sources have been observed up to 0.3\(R_\odot\)), are only observable when the flare footpoints are occulted behind the solar disk. Only in certain spectacular cases, such as the Masuda flare \citep{masudaLooptopHardXray1994}, can the thermal and non-thermal components be well separated spatially by dynamic-range-limited instruments without the additional advantage of footpoint occultation. Observations of both occulted flares and Masuda-type events have had profound impacts on our understanding of particle acceleration in flares. Mechanisms of particle acceleration, trapping, or turbulence that are responsible for producing bremsstrahlung in what is traditionally thought of as tenuous coronal plasma are not yet well understood. Multi-wavelength observations of coronal emission, as well as of the complete flare, are the keys to characterizing particle acceleration from one region to the next. Unfortunately, comprehensive observations of occulted flares, especially those with X-ray sources very high in the corona, are rare.
Two of the best-observed examples are the X-class flare of October 27, 2002 \citep{kruckerSolarFlareHard2007} and the smaller\footnote{The unocculted SXR flux registered as GOES C5 class. Chertok's method gave a class of X1.5.} flare of November 3, 2010 \citep{glesenerObservationHeatingFlareaccelerated2013a}. The powerful October 27 flare showed an extended coronal source that moved rapidly (750 km/s) in the same direction as the accompanying coronal mass ejection (CME). An estimated 10\% of the electrons in the source were determined to be nonthermal, possibly particles trapped on field lines related to the CME. CMEs, another facet of the same global magnetic eruption, often accompany flares \citep{harrisonNatureSolarFlares1995,zhangTemporalRelationshipCoronal2001} and can contribute locally dense plasma capable of producing enhanced bremsstrahlung. As a flux rope rises from the solar surface, a current sheet forms below it, in which microscopic instabilities enable fast magnetic reconnection. Following reconnection, the envelope of field lines overlying the flux rope is pushed up, forming the CME's bright frontal loop with a piston-driven shock ahead of it. Part of the flux rope escapes as the CME core, while the remainder falls back to the solar surface (Gopalswamy et al., 2003; Jing et al., 2004). High-energy particles and hot plasma can be released in bi-directional outflows observable in radio and X-rays \citep{liuDoubleCoronalHard2008}. Their motion can often be traced by co-spatial soft X-ray (SXR) or EUV plasmoids that form as the upward reconnection flow hits the CME core. \begin{figure} \includegraphics[width=0.49\textwidth]{fig1new.png} \caption{The flare of May 1, 2013, was observed from many perspectives.
The diagram shows a view down onto the ecliptic; solid vectors point towards the satellites, and dashed lines mark the half of the Sun visible from each perspective.} \label{fig:geom} \end{figure} \begin{figure*} \includegraphics[width=\textwidth]{map_peak.png} \caption{From top-left, clockwise: The flare peak observed directly by STEREO-B 195 \AA. RHESSI imaging of the flare peak at 02:33UT showed an extended non-thermal source leading a compact thermal source. No corresponding sources were visible in AIA 193 \AA. However, plasma observed at 17 GHz by NoRH resembled the cooler-temperature AIA 304 \AA{} images. The view from STEREO-A was occulted by 3$^{\circ}${} less than AIA, so it saw slightly more of the flare. An animated version of this figure, which follows the flare evolution, is available. From an hour before the flare erupts, the full-disk view offered by STEREO-B shows the active region changing. During and after the flare peak, short-lived X-ray emission and longer-lived radio emission are seen. The filament eruption is seen especially well from the STEREO-A viewpoint, with EUV plasmoids stretched along the filament axis seven minutes after the peak. } \label{fig:movie} \end{figure*} Hard X-rays have also been observed moving with CME cores \citep{hudsonHardXRadiationFast2001}. Flare-accelerated electrons in the CME core lose energy collisionally, which is observed through HXR emission; this heating can even cause the thermal CME energy to exceed the CME kinetic energy \citep{leeThreeDimensionalStructureEnergy2009,landiPhysicalConditionsCoronal2010}. The November 3 flare studied by \citet{glesenerObservationHeatingFlareaccelerated2013a} is a prime example: simultaneous EUV and X-ray observations showed that flare-accelerated electrons had enough energy to heat the CME core. It is clear that the dynamics of CMEs and the energy release of flares are intrinsically coupled to each other.
The event of May 1, 2013 (SOL2013-05-01T02:32) is an excellent opportunity to investigate how and where a solar eruption accelerates particles. Observed by an array of instruments ranging in wavelength from radio to hard X-ray, it enables the study of the coronal electron population at many temperatures. In this paper, we focus on both the potential causes of the observed X-ray emission and its properties. \section{Event Description} \subsection{Observation Geometry} The active region that produced the May 1, 2013 flare was fully visible to STEREO-B's SECCHI \citep{howardSunEarthConnection2008}, towards the north-east of its field of view. The centroid of the peak flare image gave a precise location of (424",241"), which allowed us to infer the observation geometry. The flare position was located 30$^{\circ}${} behind the east solar limb as seen from Earth, which meant that the observed emission originated from heights at least 111 Mm radially above the active region. Mars Odyssey saw almost the opposite view from that of AIA, with the flare seen close to the limb but on disk. From STEREO-A, the flare was occulted by 27$^{\circ}${} behind the east limb, so only plasma 88 Mm or more above the flaring region could be observed. Figure \ref{fig:geom} shows the approximate location of the flare with respect to the various observatories, and their fields-of-view at the time. \subsection{Event Overview}\label{sec:description} Figure \ref{fig:movie} shows views from various instruments near the peak of the flare (02:32:28UT in RHESSI 13-30 keV), and the accompanying animation shows the four-hour period surrounding the event for context. Lightcurves for selected instruments, shown in Figure \ref{fig:lightcurves}, cover that same time period (left panels) and, in detail, the interval of HXR emission (right panels). EUV views from the occulted perspectives - AIA and STEREO-A - showed varying amounts of increase in the hour leading up to the flare.
The direct view of STEREO-B showed loops brightening and changing over the active region as it evolved. This behaviour is consistent with the appearance of loops over the solar limb as seen in the movie, and was accompanied by an increase in soft X-ray flux as seen by GOES SXI. After the flare peak, dimming was seen in many EUV wavelengths, including material seen in absorption by STEREO-B. Shortly after the flare peak, STEREO-A presented a beautiful view of an erupting filament in both its channels, with small, bright plasmoids running up the axis of the filament. This was less spectacularly seen by AIA's EUV channels, perhaps due to the higher occultation. Post-flare loops were slow to emerge, with the first candidates being seen at 03:00UT by STEREO-A. The high occultation angle, combined with an unfavorable loop orientation, likely allowed only the tops of the highest cooling loops to be seen from the occulted perspective. In the X-ray regime, flux initially decreased as a previous flare on the solar disk decayed. However, after 02:20UT, RHESSI detected a second fainter source above the limb that would peak a little over ten minutes later. The RHESSI 10-30 keV peak occurred at 02:32:28UT and the 4-8 keV peak was slightly delayed at 02:34:15UT (see right panel, Figure \ref{fig:lightcurves}). The High Energy Neutron Detector (HEND), part of the Gamma Ray Spectrometer aboard Mars Odyssey, saw the main peak at 02:32:09UT in channels from 65-200 keV. It observed several bursty peaks starting five minutes before the main peak\footnote{This flare was not listed in the HEND flare catalog \citep{livshitsCatalogHardXray2017a} because the corresponding GOES flux increase was not enough to indicate a flare due to the occultation.}. GOES XRS did not register this emission as a flare. At the time of the RHESSI HXR peak, the flux in the 1-8\AA{} channel corresponded to a B1 enhancement on top of a C1 background.
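For reference, the GOES classification maps letters to decades of 1-8\AA{} flux (A = 10$^{-8}$ W m$^{-2}$, B = 10$^{-7}$, C = 10$^{-6}$, M = 10$^{-5}$, X = 10$^{-4}$), so a B1 enhancement on a C1 background is roughly a 10\% flux increase. A small conversion sketch in Python:

```python
# Decade flux levels of the GOES 1-8 A soft X-ray classification, in W/m^2
GOES_DECADE = {"A": 1e-8, "B": 1e-7, "C": 1e-6, "M": 1e-5, "X": 1e-4}

def goes_class_to_flux(cls):
    """'C1' -> 1e-6, 'M7' -> 7e-5, 'X1.5' -> 1.5e-4 (all W/m^2)."""
    return GOES_DECADE[cls[0].upper()] * float(cls[1:])

background = goes_class_to_flux("C1")   # decaying on-disk flare
enhancement = goes_class_to_flux("B1")  # the occulted event's contribution
print(enhancement / background)         # roughly 0.1, i.e. a ~10% increase
```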
Because the majority of the event was occulted as seen from Earth, we used the STEREO-B EUV images to better estimate the flare magnitude. Following the method of \cite{chertokSimpleWayEstimate2015}, the 286" length of the saturated portion of the peak STEREO-B image at 02:30:57UT gave a flare class of M7. The method of \cite{nittaSoftXrayFluxes2013}, which compares the full-disk EUV flux before and during the flare, gave a GOES class of M3. Both these empirically derived formulae have large uncertainties, with Chertok claiming accuracy to better than a factor of 2 and Nitta to within 0.5-1.5 times the actual SXR flux value. Because the STEREO peak image occurred almost three minutes before the peak SXR flux, it is safe to assume that the flare was at least a moderate M-class but not as large as X-class. RHESSI imaging, which will be discussed in more detail in Sections \ref{sec:th-img} and \ref{sec:nt-img}, showed both thermal and non-thermal sources. The thermal source was visible for 5.5 minutes, outlasting the non-thermal source, which could only be imaged over three minutes. The non-thermal source appeared both higher in the corona and earlier than any emission in the NoRH 17 and 34 GHz images. The radio emission corresponded well with the AIA 304\AA{} images, which showed a large expanding bubble that lasted long after the X-ray sources faded. Both X-ray sources were clearly located behind the CME front. The CME itself, as seen by AIA, had a multi-thermal structure whose 3D nature was unclear due to projection effects. The hotter FeXVIII image showed most material concentrated in a single loop towards the north, while cooler plasma was distributed much further south and higher above the limb.
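The minimum visible heights quoted in the Observation Geometry subsection follow from simple limb geometry: for a source occulted by an angle $\theta$ behind the limb, only material above $h = R_\odot(1/\cos\theta - 1)$ can be seen. A sketch in Python (the few-percent difference from the 111 Mm and 88 Mm quoted above depends on the adopted solar radius and the exact flare position):

```python
import math

R_SUN_MM = 695.7  # solar radius in megameters

def min_visible_height(occultation_deg):
    """Lowest height above the photosphere visible over the limb
    for a source occulted by the given angle."""
    theta = math.radians(occultation_deg)
    return R_SUN_MM * (1.0 / math.cos(theta) - 1.0)

print(round(min_visible_height(30)))  # Earth/AIA view: ~108 Mm
print(round(min_visible_height(27)))  # STEREO-A view: ~85 Mm
```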
\begin{figure*}[ht] \includegraphics[width=.64\textwidth]{lightcurves_renorm.png} \includegraphics[width=.36\textwidth]{newfig3b.png} \caption{Left panels: Lightcurves from instruments sensitive to hot plasma (top) and cooler plasma (bottom), for the same time period shown in the animated version of Figure \ref{fig:movie}. Flux was determined from an area 400" $\times$ 400" either centered on the flare (direct perspective of STEREO-B) or above the limb but in the same vertical extent (all occulted perspectives). Each lightcurve was background-subtracted using pre-flare values and normalized relative to the flare peak. The symbols indicate the imaging time cadence, which influences the relative timing of the peaks, especially in the bottom panel. The shading indicates the time period illustrated in further detail on the right. Right panels: GOES, RHESSI, and HEND lightcurves. The GOES long channel, shown in linear scale (middle), shows an increase of flux corresponding to the RHESSI emission, overlaid on a decaying slope. The direct view of HEND observed bursts before the main flare onset.} \label{fig:lightcurves} \end{figure*} \section{Spectral Analysis}\label{sec:spectra} \begin{figure} \includegraphics[width=.5\textwidth]{figure_time_ionization.png} \caption{Properties derived from the RHESSI SXR source (red curve, top plot). The emission measure stayed surprisingly constant, although the temperature followed the expected behaviour. The last three panels, which depended on imaging, are shown when the flare fluxes clearly exceed the flux of the on-disk source. The compact thermal source almost doubled in size as it rose above the flaring region and the solar limb. The density, calculated using a filling factor of unity, follows the temperature profile.
Long iron ionization times might explain why AIA did not observe the thermal source.} \label{fig:rhessi_props} \end{figure} \begin{figure} \includegraphics[width=.5\textwidth]{hend_rhessi_paper.png} \caption{RHESSI and HEND photon spectra. The RHESSI photon spectrum was fit with a single thermal component plus a broken power-law. } \label{fig:rhessi} \end{figure} \subsection{RHESSI thermal spectra}\label{sec:thermalspec} Spectral analysis of the RHESSI thermal emission was complicated by the simultaneous occurrence of a decaying microflare located on disk. This made it difficult to fit the total flare spectrum integrated over the entire Sun; cross-talk at low energies resulted in a poor fit of a single thermal component. In order to cleanly isolate the two widely separated sources, we applied standard RHESSI imaging spectroscopy \citep{kruckerRelativeTimingSpectra2002}. Using the coarsest subcollimators to distinguish coronal from on-disk emission enabled us to more accurately determine the properties of the thermal component. Fitting a single temperature model gave the temperature and emission measure (EM) evolution shown in Figure \ref{fig:rhessi_props}, with moderately high temperatures peaking around 11 MK. The EM time evolution was found to be very different from that of a standard flare. Instead of a fast decay following the flare peak, the emission measure, shown here in linear scale, had unusually small changes. This highlighted the fact that we were not observing the expected flare loops, but emission from well above the actual flare site. This was also influenced by the motion of the thermal source and the associated change in the degree of occultation. In any case, it is not clear how to explain the nearly constant EM. We estimated the volume of X-ray emitting thermal plasma by forward fitting a circular source to the RHESSI visibilities of all detectors \citep[e.g.][]{dennisHARDXRAYFLARE2009}.
This revealed an extended source with FWHM sizes above $\sim$60" for all times, and a clear trend of increasing size after 02:33UT. Assuming spherical symmetry and a filling factor of unity, the derived density of the hot ($\sim$11 MK) plasma was around 10$^{9}$ cm$^{-3}$, a plausible density for this CME core in the high corona. The filling factor is unknown; RHESSI observations, with a limited number of measured visibilities, cannot distinguish between an extended source and a composite of many subsources. Introducing a filling factor could drastically increase the density; however, as we will later discuss, a filling factor of unity agrees well with the data for this particular event. \subsection{RHESSI \& HEND Hard X-ray Spectra}\label{sec:hxr-spec} This flare was one of very few where spectroscopic observations of both the chromospheric footpoints and the coronal source were available. Measurements of the footpoint emission came from the High Energy Neutron Detector (HEND), part of Mars Odyssey's Gamma Ray Spectrometer (GRS) instrument suite \citep{boyntonMarsOdysseyGammaRay2004}, which observed energetic particles from $\approx$32-2000 keV. At the time of observation, the spacecraft structure did not shade the detector, simplifying interpretation of the detected flare X-ray photons. The peak in the observed count spectrum was strongest in the 86-344 keV range. Even though this was an M-class flare, there were more than enough counts both to clearly distinguish solar emission from the background up to 400 keV and to render the errors negligible. Figure \ref{fig:lightcurves} shows that the RHESSI and HEND emission peaked almost simultaneously for the high-energy channels. During calibration, the HEND data was adjusted for the light travel time between Mars and Earth orbits.
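Returning to the thermal source (Section \ref{sec:thermalspec}): its density estimate follows from $n \approx \sqrt{EM/V}$ for a spherical source and a filling factor of unity. A sketch in Python, where the emission measure is a hypothetical round number of the right order rather than a measured value from this event:

```python
import math

ARCSEC_CM = 7.25e7   # 1 arcsecond at 1 AU, in cm
fwhm_arcsec = 60.0   # forward-fit FWHM of the thermal source
em = 4e46            # hypothetical emission measure, cm^-3 (illustrative)

radius = 0.5 * fwhm_arcsec * ARCSEC_CM    # treat the FWHM as the diameter
volume = 4.0 / 3.0 * math.pi * radius**3  # spherical source
density = math.sqrt(em / volume)          # filling factor of unity
print(f"{density:.1e} cm^-3")             # of order 1e9 cm^-3
```

Any filling factor below unity would raise the inferred density by the inverse square root of that factor.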
The HEND $>$ 90 keV peak, beginning at 02:32:09UT with a time binning of $\approx$20 seconds, might well have overlapped the RHESSI 10-30 keV peak in the 4-second rotation from 02:32:28-02:32:32UT. HEND showed multiple smaller peaks indicating high-energy bursts in the chromosphere starting at 02:28UT, before RHESSI registered the coronal source. An OSPEX fit of the photon spectrum integrated over the main non-thermal peak (02:32:00-02:33:13UT) to a power law with a low-energy cutoff gave a slope of $\gamma_{cor}$=3.3$\pm$0.38, where the uncertainties were derived from the standard deviations between the fit results of the individual RHESSI detectors. The flare-integrated spectra showed that non-thermal emission was the main component above 13 keV and extended down to at least 10 keV, below which thermal emission dominated. A single power-law fit to the HEND spectrum, excluding the two highest-energy bins, which had significant uncertainty, gave a spectral index of $\gamma_{chrom}$=2.47$\pm$0.26. The difference between the spectral indices of the coronal source (RHESSI) and footpoints (HEND) is within the range observed by \citet{battagliaRelationsConcurrentHard2006a} for flares with distinct above-the-looptop coronal and footpoint X-ray sources. Extrapolating the RHESSI fit to 100 keV, where the observed signal is difficult to distinguish from the background, we see that the photon flux observed by HEND was $\approx 10^3$ times higher than that observed by RHESSI. \section{Coronal Source Evolution} \begin{figure*} \includegraphics[width=\linewidth]{sxr_evolution_only.png} \caption{Time evolution of the soft 4-8 keV X-ray source. Contour levels are 50, 70 and 90\%, as is common with RHESSI imaging. SXR images, made every 45 seconds starting at 02:32UT, show that the compact source expanded as it moved upward above the flare region.
The background images are from AIA 304 \AA, corresponding to the timestamps.} \label{fig:hxr_evolution} \end{figure*} \begin{figure*} \includegraphics[width=.5\textwidth]{new_figure5.png} \includegraphics[width=.5\textwidth]{fe18_rhsi_023249.png} \caption{The RHESSI HXR source is best summarized by this image. Low counts meant that a multi-step process was needed to ensure an accurate source size and position. During the flare peak, the extended HXR source led the compact SXR source, both of which were high above the flare loop arcade but behind the CME front. Note that the CME front, likely due to projection effects and the 3-D structure of the event, appears to be located more to the north in the hotter FeXVIII image than in the cooler 193 \AA{} image. } \label{fig:rhessi_smooth} \end{figure*} \subsection{RHESSI imaging of the thermal emission}\label{sec:th-img} We used RHESSI forward fitting to determine the size and position of the 4-8 keV X-ray source. Not only did this allow us to separate the on-disk and coronal sources, but also to determine the optimal parameters for imaging the thermal source with CLEAN. First, the source size and position were calculated by forward fitting. Next, images for each subcollimator separately were reconstructed using CLEAN. If the source position agreed with the position calculated by the forward fit, that subcollimator was selected for use in making the final CLEAN images shown in Figures \ref{fig:hxr_evolution} and \ref{fig:rhessi_smooth}. The CLEAN images show the expansion and movement of the compact thermal source. The imaging, when overlaid on AIA 304 \AA{}, clearly showed that the thermal emission came from the core of the CME. This particular AIA filter sees cool plasma, which exhibited fine structure within the CME core. In this event, cool and hot plasma must have co-existed in the core, at least in projection. A fit to the 4-8 keV centroid positions over time gave an average velocity of 150 km/s.
This was much slower than the linear CME speed, measured by LASCO as 389 km/s. The source rose to a maximum height of 73 Mm above the limb, or 184 Mm (243") above the active region. This is one of the best RHESSI image sequences to date of a purely coronal source from within the core of a CME rising and expanding above the limb. \subsection{RHESSI imaging of the non-thermal emission}\label{sec:nt-img} From the RHESSI spectral analysis (Section \ref{sec:hxr-spec}), we concluded that emissions above 13 keV were purely non-thermal. Above 30 keV, the background began to dominate the count spectrum. We therefore selected the energy range from 13-30 keV for imaging the non-thermal source. Relative to the thermal source, count statistics were low, with only about 1000 counts per detector if integrated over the entire non-thermal peak, so we reconstructed a single image averaged over the duration of the non-thermal burst (02:32:03-02:33:21UT). For reference, we made an image of the thermal source in the further restricted 6-8 keV energy range integrated over the same time interval. Using a forward fit, we found that the thermal source came from an extended (61$\pm$10" FWHM) area. The non-thermal emission came from an even larger (110$\pm$30" FWHM) area. The centroid positions of the thermal and non-thermal sources differed by 41$\pm$13" in the x- and 39$\pm$10" in the y-direction, corresponding to a radial separation of 56.6$\pm$16". This clearly established that the non-thermal source was above the thermal CME core. For the summary image shown in Figure \ref{fig:rhessi_smooth}, we then used the CLEAN algorithm to make images without restricting the source shape, again allowing the source size derived from forward fitting to guide the subcollimator selection. We chose subcollimators 6 through 9 for the thermal source and subcollimators 7 through 9 for the non-thermal source. Note that the limb in EUV is slightly higher ($\sim$10") than the X-ray limb (e.g.
\citet{battagliaSolarXRayLimb2017a}), so the X-ray source appeared at slightly lower altitude. The non-thermal source was clearly outside the CME core and just ahead of it, but still well behind the CME front, even considering its multi-thermal structure. \subsection{Nature of the non-thermal coronal emission}\label{sec:nature-nt} We estimated the instantaneous number of electrons required to produce the observed hard X-ray emission using equation 2.4 from \citet{linNonrelativisticSolarElectrons1974}. Such an estimate depends on parameters of the observed hard X-ray spectrum and the ambient density within the non-thermal source, that is, the density of the thermal core distribution. While the absolute value and the slope of the power-law spectrum were well observed, the cutoff energy of the non-thermal spectrum and the ambient density are not well constrained by the observations. Hence, the instantaneous number of non-thermal electrons and therefore the instantaneous number density can only be estimated for a range of parameters. Figure \ref{fig:ambient_density} (top) gives the instantaneous non-thermal electron density as a function of ambient density for three different cutoff energies. The solid line represents the extreme case where the non-thermal density is equal to the ambient density. In this case, the derivation is no longer valid since collisions between non-thermal electrons should be considered as well. Generally, the non-thermal component is thought of as a tail on the thermal core distribution that only represents a small fraction of the total particles in the distribution. The ambient density within the non-thermal source is not well constrained, but due to its higher altitude it should be below or possibly as high as the density of the hot core ($\sim$10$^9$ cm$^{-3}$, see Figure \ref{fig:rhessi_props}).
The ambient density within the non-thermal source can be estimated if we assume that non-thermal electrons are trapped within the source and the observed decay of the non-thermal emission is due to collisional losses only. In this case, the collisional stopping time \citep[eq.~2]{kruckerHardXRayEmissions2008} should be roughly equal to the observed exponential decay time of 141 seconds (13-30 keV). The bottom panel of Figure \ref{fig:ambient_density} shows the energy loss time as a function of density for different electron energies. This indicates that densities of $\sim$10$^8$ cm$^{-3}$ give plausible energy loss times. \begin{figure} \includegraphics[width=.5\textwidth]{new_figure.png} \caption{Non-thermal electron density (top) and collisional loss timescales (bottom) calculated for bremsstrahlung emission at different energies.} \label{fig:ambient_density} \end{figure} An alternative theory is that the observed flux decay results from a rapidly decreasing ambient density as the bubble trapping the non-thermal electrons expands. In such a model, the thin-target emission, which is proportional to the ambient density, decreases in time, and therefore the hard X-ray flux decreases as well. The collisional losses also decrease with time, meaning that the non-thermal electrons could potentially survive for a long time within the escaping bubble. For a constant, isotropic expansion velocity, the density decreases with the third power of the elapsed time. Since the exponential fit mentioned previously is only observed over a short time interval, the decay can also be approximated with a power law, at least for this event with limited counts. For isotropic expansion velocities of the order of 200 km/s, the observed decay can be roughly reproduced, assuming the injection stops at the peak time of the non-thermal emission and using the observed source size at that time of 110".
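Both decay scenarios can be put into numbers. For the collisional case, the sketch below uses the standard nonrelativistic energy-loss rate $dE/dt = -4\pi e^4 n \ln\Lambda/(m_e v)$ as a simplified stand-in for eq.~2 of \citet{kruckerHardXRayEmissions2008}, with an assumed Coulomb logarithm of 20; for the expansion case, it takes the observed 110" source size as the initial bubble radius:

```python
import math

E_CHARGE = 4.803e-10   # electron charge, statcoulomb
M_E = 9.109e-28        # electron mass, g
KEV = 1.602e-9         # 1 keV in erg
COULOMB_LOG = 20.0     # assumed value

def collisional_loss_time(e_kev, n_cm3):
    """Energy-loss time of a fast electron on ambient electrons,
    tau = E / |dE/dt| with dE/dt = -4 pi e^4 n lnL / (m_e v)."""
    e_erg = e_kev * KEV
    v = math.sqrt(2.0 * e_erg / M_E)   # nonrelativistic speed
    de_dt = 4.0 * math.pi * E_CHARGE**4 * n_cm3 * COULOMB_LOG / (M_E * v)
    return e_erg / de_dt

def expansion_efold_time(r0_cm, v_cms):
    """Time for n(t) = n0 * (r0 / (r0 + v t))**3 to drop by a factor e."""
    return r0_cm * (math.e ** (1.0 / 3.0) - 1.0) / v_cms

tau_coll = collisional_loss_time(20.0, 1e8)        # ~180 s at n = 1e8 cm^-3
tau_exp = expansion_efold_time(110 * 7.25e7, 2e7)  # 110", 200 km/s: ~160 s
```

Both timescales land near the observed 141 s decay, consistent with the conclusion that this event's count statistics cannot distinguish the two scenarios.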
As the CME is expanding with roughly double that speed (see Table \ref{tbl:events}), it appears possible that the escaping bubble could move out at 200 km/s. For events with better statistics than the May 1 event, a fit to the decay could clearly distinguish between the two scenarios (collisional decay vs. expansion). For the October 27 flare \citep{kruckerSolarFlareHard2007}, which has 30 times higher count rates (see Table \ref{tbl:numbers}), an exponential decay is clearly preferred (Figures 1 and 3 of \citet{kruckerSolarFlareHard2007}). Nevertheless, as the bubble of non-thermal electrons is most likely expanding on the time scale of the collisional losses, the effect of decreasing ambient density should be considered in addition to collisional losses. In summary, we get a consistent picture assuming that the non-thermal emission is produced by a trapped population of energetic electrons within a plasma with an ambient density of $\sim$10$^8$ cm$^{-3}$. For such densities, the fraction of the non-thermal population reaches values of the order of a percent (c.f. Figure \ref{fig:ambient_density}, top). Considering that the non-thermal electrons at 20 keV have roughly 100 times more energy than an average electron in the ambient corona at 2 MK, the energy content of the non-thermal population could be similar to the ambient energy and therefore might play a significant role in the total evolution of the event. \subsection{Coronal source at other wavelengths} \begin{figure*} \includegraphics[width=\textwidth]{aia_prediction_cbar.jpg} \caption{Left: Brightness of expected sources in AIA 131 and 94\AA{}, given the size, location, and electron density of the thermal X-ray source. Middle: AIA observations.
Right: Pre-flare background-subtracted observed emission.} \label{fig:aia} \end{figure*} Although the pre-flare phase lightcurves showed increasing flux in AIA's hot channels, AIA images during the flare peak did not show a source in the corona corresponding to the thermal X-ray source. This is unlike the similar November 3 flare studied in \citet{glesenerObservationHeatingFlareaccelerated2013a}, which clearly showed the 11 MK CME core (the same temperature as derived for the CME core in this event). This temperature is near the hot peak of the AIA 131\AA{} response function, which is due to Fe XX and Fe XXIII emission. At this temperature, ionization time scales for a low-density plasma are not instantaneous; ionization takes a few seconds or even minutes (e.g. \citet{bradshawCollisionalRadiativeProcesses2014}). Using the curves given in \citet{smithIonizationEquilibriumTimescales2010} for elements in constant electron temperature plasmas, we estimated the iron ionization time scales to be several minutes for this particular flare (see bottom panel of Figure \ref{fig:rhessi_props}). Therefore, the 131\AA{} signal associated with the RHESSI thermal emission is expected to have been delayed or even suppressed. We also searched for signs of the Fe line complex around 6.7 keV in the RHESSI data. Although the count spectra suggest that a faint Fe line feature might be present, spectral fitting is inconclusive and does not give a quantitative result, as it is unclear how to subtract the on-disk emission (see Section 3.1). This makes fitting the Fe line even more difficult than it already is for data taken late in the RHESSI mission lifetime, when significant radiation damage to the detectors had accumulated (these observations were taken 14 months after the last RHESSI anneal in February 2012). Because the source was expanding while the density was decreasing, it is not straightforward to predict the actual flux that would have been observed in AIA 131\AA.
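The density scaling behind these long equilibration times can be sketched as follows. The rate coefficient used here is a hypothetical round number chosen only to illustrate the $\tau \propto 1/n$ behaviour, not a value from the cited atomic data:

```python
def ionization_time(n_e_cm3, S_cm3_s):
    """Characteristic ionization time tau ~ 1 / (n_e * S), where S is an
    effective ionization rate coefficient at the given temperature.
    S below is a stand-in value for illustration only."""
    return 1.0 / (n_e_cm3 * S_cm3_s)


S = 4e-12  # hypothetical effective rate coefficient, cm^3 s^-1
for n in (1e8, 1e9, 1e10):
    print(n, ionization_time(n, S))
```

At active-region densities ($\gtrsim$10$^{10}$ cm$^{-3}$) such a coefficient gives equilibration within seconds, while at the 10$^8$-10$^9$ cm$^{-3}$ densities relevant here it gives minutes, consistent with the delayed or suppressed 131\AA{} signal.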
We nevertheless made a simple calculation to determine the hypothetical EUV intensity, assuming instantaneous ionization. Given the size and temperature of the soft X-ray source, we should see 550K data numbers (DN) per second over the area of the source in 131\AA{}. In 94\AA{} this number is much smaller, at only 41K DN/s. With the area calculated from the X-ray imaging, both channels should show a source whose intensity is easily observable above the background, once the plasma has become sufficiently ionized. Figure \ref{fig:aia} shows what we could expect versus what was observed. The minimum ionization time of four minutes was on the order of the decay time of the RHESSI thermal emission. Other factors, such as the presence of flows or collisions with non-thermal particles, might affect the time it takes for ionization equilibrium to be reached. However, the absence of any sign of the plasma in AIA means that, whatever the rate, the ionization simply could not catch up, regardless of such non-equilibrium effects. We also calculated the expected 17 GHz gyrosynchrotron flux from the size, spectral index, and density of the non-thermal HXR source, using formula 2.16 from \citet{whiteRelationshipSolarRadio2011}. Only 1-10 sfu would be produced; from the temperature and emission measure calculated for the thermal X-ray source, the potential 17 GHz emission would be even weaker: using equation 15 from \citet{morgachevContributionThermalBremsstrahlung2014}, the contribution from free-free emission would be only 0.035 sfu. Imaging at 17 GHz using data from the Nobeyama Radioheliograph (NoRH) showed much stronger emission. Ten-second cadence images, made at both 17 and 34 GHz (17 GHz is shown in the bottom-right panel of Figure \ref{fig:geom}), showed a bubble of cold material in thermal emission emerging over the limb and rapidly expanding.
This cold material showed fluxes of up to 1000 sfu, bright enough to eclipse any emission from the same electron population producing the HXR source. When the observed radio contours were overlaid on those from AIA's 50,000 K 304\AA{} channel, the correlation immediately suggested that the majority of the emission seen in the radio was in fact thermal radiation. Coronal sources observed at GHz wavelengths have been interpreted as bubbles filled with non-thermal electrons \citep[c.f.][]{whiteRelationshipSolarRadio2011}; however, that interpretation does not agree with the predominantly thermal emission observed in the radio for this event. \section{Discussion} The earliest report of a purely coronal hard X-ray burst with a very hard spectrum reaching up to hundreds of keV was the famous event of March 30, 1969 \citep{frostEvidenceHardXRays1971}. Although not directly mentioned in that paper, the flare was occulted \citep{badilloSolarMicrowaveBurst1969}. Using stereoscopic spectral observations of the February 16, 1984 flare, \citet{kaneStereoscopicObservationsSolar1992} were the first to determine that the HXR sources must be extended, greater than 100" in size. The first observations with imaging were provided by Yohkoh HXT \citep{hudsonHardXRadiationFast2001}, which confirmed that the coronal source was extended and, furthermore, moving away from the Sun. The best imaging so far was done by \citet{kruckerSolarFlareHard2007} using RHESSI, which clearly showed large sources that expanded while moving away from the Sun. Eventually the sources inflated to sizes so large that even the smallest Fourier component observable by RHESSI no longer had a significant signal. \citet{glesenerObservationHeatingFlareaccelerated2013a} observed a related, though barely occulted, event which demonstrated that HXR-producing electrons could heat the CME core to temperatures around 10 MK. Table \ref{tbl:events} summarizes the key observables for previously published events.
While it is not clear that all studies indeed describe the same type of event, they share several common characteristics: \begin{itemize} \item The hard X-ray profiles are rather simple, with a broad peak followed by an exponential decay. \item The non-thermal part of the spectrum is generally hard and tends to harden further during the decay. \item Sources are spatially extended and move away from the Sun while expanding. \item Sources are observed behind the front of the escaping CME. \end{itemize} \begin{table*} \caption{Published properties of selected occulted flares} \label{tbl:events} \begin{tabular}{p{1cm}p{3cm}cp{1cm}cccccccc} \hline Date & Publication & Occultation & GOES & CME & 30 keV & $\tau$ (s) & $\gamma_{cor}$& $\gamma_{chro}$ & 30 keV& Size& Speed \\ &&&obs.&speed&photon&&&&flux flare&arcsec&(km/s)\\ &&&est.&(km/s)&flux&&&&&&\\ \hline Mar 30, 1969 & \citet{frostEvidenceHardXRays1971}, \citet{badilloSolarMicrowaveBurst1969} & 15$^{\circ}$ &&& $\sim$3 & $\sim$300 & $\sim$2 &&& \\ Dec 14, 1971 & \citet{hudsonPurelyCoronalHard1978}& 25$^{\circ}$ &&& 0.05& $\sim$600& 2.1 &&& \\ Jul 22, 1972 & \citet{hudsonSecondstageAccelerationLimbocculted1982} & 20$^{\circ}$ &&& 0.1 & $\sim$400 & 2.5-1.8 &&& \\ Feb 16, 1984 & \citet{kaneStereoscopicObservationsSolar1992} & 36$^{\circ}$ &B3 & & 0.4& $\sim$60 & 3.8-2.6 &3.3&150& $>$ 140$^{\circ}$ & \\ Jun 30, 1991 & \citet{vilmerHardXrayGammaray1991} & 2$\pm$12$^{\circ}$ & M5& &10 & $\sim$25 & 2.8-2 &&& \\ Apr 18, 2001 & \citet{hudsonHardXRadiationFast2001} & 27$^{\circ}$ &C2 \hspace{1cm} $>$ X1& 2400& &$\sim$30 & 4.3-3.4 &&&20"- 70"& $\sim$1000\\ Oct 27, 2003 & \citet{kruckerSolarFlareHard2007} \citet{vybornovObservationPowerfulSolar2012} & 40$^{\circ}$ &$\sim$B1 \hspace{.5mm} $>$ X1& 2300 & 0.1 & $\sim$135 &3.6-3.1 &2.3&80&200"&$\sim$750 \\ Nov 3, 2012 & \citet{glesenerObservationHeatingFlareaccelerated2013a} & 6$^{\circ}$ &C5 \hspace{1cm} X1& 240 & 0.3 & $\sim$30 &4.5&&&50"-100"& \\ May 1, 2013 & Lastufka et al.
(2019) & 30$^{\circ}$ &B \hspace{1cm} M3-7& 400 &0.03& $\sim$140 & 3.3 &2.5&20&100"&$\sim$ 200 \\ Sep 1, 2014 & \citet{carleyEstimationCoronalMass2017} \citet{grechnevRadioHardXRay2018} & 36$^{\circ}$ &$\sim$B1 X2&2000& 0.2 & $\sim$900 &2.06\footnote{This spectral index was derived from flux integrated over the entire flare \citep{ackermannFermiLATObservationsHighenergy2017b}, whereas $\gamma_{chro}$ and the 30 keV flux quoted are for the first X-ray peak seen by HEND}& 3.3 &220&$>$200"&\\ \hline \end{tabular} \end{table*} These facts support the current best explanation: these HXR sources are produced by bremsstrahlung emission from flare-accelerated electrons, which escape upward from the coronal acceleration region. Flare-accelerated electrons cannot escape freely from the Sun because they are injected within the complex magnetic field structure of the CME core. Due to the low ambient density in the corona, these electrons lose energy only slowly; however, they can move around within the CME core during their lifetime, resulting in a large source size. The observed exponential decay is likely the result of the combined loss mechanisms after the injection stops. The largest contributor is probably Coulomb collisions, but electrons leaving the source volume can also reduce the flux. Other loss mechanisms might be at play as well. It is also possible that acceleration does not stop completely after the event peak, such that continuing injection at a low rate prolongs the decay. Using estimates of the ambient density, the expected collisional loss times roughly agree with the observed time scales of the decay, indicating that collisions alone could be enough to explain the decaying time profile. The collisional losses are expected to heat the ambient plasma, creating a hot thermal source within the CME core \citep{glesenerObservationHeatingFlareaccelerated2013a}.
The observations of the May 1 flare discussed in this work fit well within this picture of large source sizes and exponential decay. What is different from the event in \citet{glesenerObservationHeatingFlareaccelerated2013a} is that the source of the non-thermal emission was spatially displaced from the hot CME core. Therefore, further evidence is required to conclude that the May 1 CME core was heated by non-thermal electrons. If such heating did occur, it must have been from a HXR source co-spatial with the core that was no longer observable by the time the core became visible above the solar limb. This is certainly possible; comparing with the event studied by \citet{glesenerObservationHeatingFlareaccelerated2013a}, the CME core would have been barely visible above the limb at the end time of the HXR burst (c.f. Figure 3, panel 6, of \citet{glesenerObservationHeatingFlareaccelerated2013a}) if it had had the same occultation height seen here for the May 1 event. However, this implies that the RHESSI non-thermal emission shown in Figure \ref{fig:rhessi_smooth} must have come from a different population of energetic electrons than that of the observed HXR source. These could have been injected into a magnetic structure with a lower density, allowing the electrons to survive longer, or a second injection could have occurred at a later time. Nevertheless, we have no observational evidence for such a scenario, so the prudent explanation is that the hot core in the May 1 flare was not heated by non-thermal particles, but rather by a different mechanism. For four events in Table \ref{tbl:events}, including May 1, spectral observations of the main flare emission allow us to estimate the relative intensity of the coronal source with respect to the rest of the flare. Spectral information from the full-flare perspective was only available above 100 keV. High coronal events are best seen at lower energies, around 30 keV, making an extrapolation necessary.
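Such an extrapolation amounts to scaling each power-law photon spectrum by $(E/30)^{-\gamma}$. A sketch using the May 1 entries from Table 2 (30 keV fluxes of 20 and 0.03, spectral indices 2.5 and 3.3); the exact numbers for this single event need not fall inside the multi-event ranges quoted in the text:

```python
def photon_flux(E_keV, F30, gamma):
    # Power-law photon spectrum normalized at 30 keV:
    # F(E) = F30 * (E / 30)^(-gamma)
    return F30 * (E_keV / 30.0) ** (-gamma)


# May 1, 2013 values from Table 2: full-flare (chromospheric) component
# vs. high coronal source, photon flux at 30 keV and spectral index.
F_flare, g_flare = 20.0, 2.5
F_corona, g_corona = 0.03, 3.3

ratio_30 = photon_flux(30.0, F_flare, g_flare) / photon_flux(30.0, F_corona, g_corona)
ratio_100 = photon_flux(100.0, F_flare, g_flare) / photon_flux(100.0, F_corona, g_corona)
print(ratio_30, ratio_100)
```

Because the full-flare spectrum is harder ($\gamma$ smaller), the flux ratio grows with energy, which is why the coronal sources are best searched for around 30 keV.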
Comparing the extrapolated fluxes at 30 keV, the total flare emission is 400 to 800 times stronger than the coronal emission. The flare spectra also tend to be harder in the chromosphere/low corona than in the high coronal source; hence, the difference in flux is expected to increase at higher energies, with values between 1000 and 1500 at 100 keV. While these flux ratios are relevant for detection, the physical significance comes from the number of accelerated electrons within each source. To derive the number of electrons, we have to adopt a certain model. The total flare energy is usually derived by assuming that electrons lose all their energy in the chromosphere, with collisions being the main loss mechanism (the cold thick-target model, e.g. \citet{brownDeductionEnergySpectra1971}). This is a rather robust assumption for chromospheric sources, but it depends on the low-energy cutoff, which is poorly constrained. We summarize the energy input to the chromosphere by electrons using 20 keV as a reference in Table \ref{tbl:numbers}. While the coronal high-energy source is also likely produced by bremsstrahlung (for a discussion of the possible contribution of inverse Compton radiation see \citet{chenROLEINVERSECOMPTON2012}), the low ambient density within the source makes the classic thick-target assumption inappropriate. Therefore, the number of electrons needed to produce the hard X-ray spectrum at any time is estimated from the instantaneous number of electrons assuming a thin-target model. However, the instantaneous number $N_e$ depends not only on the observed photon spectrum (as does the thick-target approximation), but also on the ambient density, to which it is inversely proportional. Using an ambient density of 10$^8$ cm$^{-3}$ for all events, Table \ref{tbl:numbers} gives the instantaneous number of electrons at peak time, again with 20 keV as the reference energy. $N_e$ at peak time is the maximum number of accelerated electrons that radiate at any time.
It can therefore be used as a lower limit on the total number of electrons in the high-altitude source. The actual number could be larger, as a lower-magnitude injection of electrons could continue after the peak time; particle acceleration occurring well after the flare peak is a well-established behavior. Additionally, a larger number of electrons might have been injected before the non-thermal source became visible above the limb as seen from Earth. Considering all these uncertainties, and that the ambient density is not well constrained, the estimate of the number of electrons is uncertain by at least a factor of a few. Nevertheless, our current best estimate is that, compared to the number of $>$20 keV electrons in the main flare peak, the number of electrons in the high coronal source is below the percent range. \begin{table} \label{tbl:numbers} \centering \caption{Estimates of the number of electrons above 20 keV assuming thick-target emission from the footpoints and thin-target emission for the extended coronal source.} \begin{tabular}{lcc} \hline Date& Thick target $N_e$ & Thin target $N_e$\\ & footpoints & corona\\ &$>$20 keV &$>$20 keV at peak\\ \hline Feb 16, 1984 & 2.5$\times$10$^{39}$ &2.5$\times$10$^{36}$ \\ Oct 27, 2003 & 3.5$\times$10$^{38}$ &6.7$\times$10$^{35}$ \\ May 1, 2013 & 9.4$\times$10$^{37}$ & 2.0$\times$10$^{35}$ \\ Sep 1, 2014 & 2.7$\times$10$^{39}$ &7.7$\times$10$^{35}$ \\\hline \end{tabular} \end{table} \section{Conclusion} The moderately sized flare and CME of May 1, 2013 were uniquely well situated for high-energy observations associated with CMEs. Due to the event's position 30$^{\circ}${} behind the solar limb, RHESSI saw coronal emission with no contamination from the footpoints or flare loop arcade. From the opposite side of the Sun, HEND viewed the full, un-occulted flare. Radio and EUV instruments were available to provide context and constrain the interpretation of the event.
Analysis of the thermal X-ray emission found that plasma with temperatures up to 11 MK was ejected. The hot source was located behind the CME front and rose more slowly than it, indicating that it was likely the result of hot plasma trapped in the complex magnetic fields of the CME core. RHESSI imaging of this large thermal X-ray source expanding as it rose above the flare site may be the best such observation to date. The long iron ionization time scales for such a high-altitude, low-density source made the hot core of the CME undetectable in EUV, unlike in \citet{glesenerObservationHeatingFlareaccelerated2013a}. Imaging also showed a clear short-lived non-thermal source, which was very extended, with a FWHM of 110", located 185" in projection above the flaring region. This was both higher up than the thermal CME core and still behind the CME front. It must have originated from a more tenuous part of the CME core, where non-thermal electrons survived long enough to become visible from Earth. Because of their location above the CME core, the non-thermal particles in this source could not have been responsible for heating the core itself. It is possible that a different population of non-thermal electrons heated the core to its 11 MK temperature, but these were not observed due to the occultation. Another possibility is that a different heating mechanism entirely was responsible. Either way, we were unable to say for certain how the CME core was heated. Assuming a thick-target model for the HEND spectrum and a thin-target model for that of RHESSI, we deduced the ratio of coronal to chromospheric non-thermal electrons to be about one in five hundred. Although some bursts were observed in the flaring region before the main peak, both RHESSI and HEND observed the main electron acceleration at almost the same time. HEND, whose count spectrum was dominated by the chromospheric footpoints, saw a thousand times more photon flux than RHESSI.
Both this flux ratio and the spectral index difference agree with earlier studies by \citet{kaneStereoscopicObservationsSolar1992,vybornovObservationPowerfulSolar2012}; however, the flares examined in those works were extremely large. With X-ray imaging spectrometers, it is understandable that only the largest such events are identified, since larger flares produce more high-energy counts, which improves the possibility and quality of imaging. The presence of similar properties in the M-class flare of May 1 invites us to consider whether such a flux ratio and spectral index difference are common to all flares regardless of magnitude. Furthermore, the presence of coronal X-ray sources very high above the site of an average-sized flare supports the idea that these should be present in most flares, and could be revealed given sufficient observations from various angles, as will be provided by Solar Orbiter. \section*{Acknowledgements} The LASCO CME catalog is generated and maintained at the CDAW Data Center by NASA and The Catholic University of America in cooperation with the Naval Research Laboratory. SOHO is a project of international cooperation between ESA and NASA. The work was supported by the Swiss National Science Foundation (200021-163377) and through NASA contract NAS 5-98033 for RHESSI. The authors would like to thank the anonymous reviewer, whose comments improved the paper. \bibliographystyle{plainnat}
\section{INTRODUCTION} \vspace*{-2mm} \label{sec:intro} Using quarter sampling \cite{Schoberl2011}, the spatial resolution of an imaging sensor can be increased. This is achieved by physically covering three quarters of each pixel of a regular low-resolution sensor. Effectively, this leads to a non-regular sampling of the image with respect to a higher resolution grid with twice the resolution in both spatial dimensions, as can be seen in Figure\,\ref{fig:flow_graph} (left). Due to the non-regularity, visually disturbing aliasing artifacts that conventionally occur for regular sampling can be reduced \cite{Dippe1985, Hennenfent2007, Maeda2009}. For the reconstruction, frequency selective reconstruction (FSR) has been shown to be a successful reconstruction scheme for various inpainting and extrapolation tasks \cite{Herraiz2008, Stehle2006} and gave the best results for non-regular sampling and quarter sampling in~\cite{Schoberl2011,Seiler2015,Grosche2018}. Quarter sampling, as well as any non-regular sub-sampling, can be seen as a special case of compressed sensing \cite{Candes2006, Donoho2006}, as has been shown in \cite{Seiler2015, Grosche2020_localJSDE}. In the compressed sensing framework, FSR can be interpreted as a special case of more general reconstruction algorithms from the class of matching pursuit algorithms \cite{Grosche2020_localJSDE,Mallat1993, Tropp2007}. Besides still images, the acquisition of video data is of great importance. In combination with quarter sampling, video acquisition has been investigated for fixed quarter sampling masks \cite{Jonscher2015, Jonscher2016a} as well as for dynamic quarter sampling masks \cite{Jonscher2018}. For the latter, the sampling mask changes from frame to frame and a sophisticated read-out strategy is applied such that each pixel in a $2{\times}2$ block is read exactly once within four frames. Every fourth frame, the mask repeats.
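The read-out strategy just described can be sketched as follows. The random per-block ordering below is an illustrative assumption; the cited work uses a specifically designed read-out pattern:

```python
import numpy as np


def dynamic_quarter_masks(h, w, seed=0):
    """Binary sampling masks on the high-resolution grid (h, w even).

    For every 2x2 block, the four high-resolution pixels are read out in
    a random order over four frames, so each pixel is read exactly once
    within four frames and the mask repeats every fourth frame.
    """
    rng = np.random.default_rng(seed)
    masks = np.zeros((4, h, w), dtype=bool)
    for by in range(0, h, 2):
        for bx in range(0, w, 2):
            order = rng.permutation(4)  # quadrant read out in frame t
            for t, q in enumerate(order):
                masks[t, by + q // 2, bx + q % 2] = True
    return masks


masks = dynamic_quarter_masks(8, 8)
# One quarter of the pixels is measured per frame ...
print(masks.sum(axis=(1, 2)))
# ... and after four frames every pixel has been read exactly once.
print(masks.sum(axis=0).max(), masks.any(axis=0).all())
```

Multiplying a high-resolution frame element-wise with `masks[t % 4]` then yields the measured data of frame $t$.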
Compared to a purely spatial or purely temporal sub-sampling, a more uniform placement of the pixels in time and space is achieved and a higher reconstruction quality is found \cite{Jonscher2018}. In this paper, we consider both fixed and dynamic masks. \begin{figure}[t] \centering {\footnotesize \import{images/flowgraph/}{flow_graph.pdf_tex} } \vspace*{-4mm} \footnotesize \caption{Flow diagram of quarter sampling for video data using a dynamic mask. Our novel contributions to the consistency checks of recursive FSR are highlighted in red. Abbreviations: ME: Motion estimation, CC: Consistency check, PR: Projection, RME{}: Reverse motion estimation, RMC{}: Reverse motion check, \mbox{FRMC}{}: Fast reverse motion check, NNC{}: Nearest neighbor check.} \label{fig:flow_graph} \vspace*{-4mm} \end{figure} Among those works, causal reconstruction algorithms such as the one in \cite{Jonscher2016a} are of special interest since they only use past measurements for the reconstruction of the current frame. Such a causal scenario is of special importance since future measurements are not available in real-time applications at the time of the reconstruction. The processing chain of the measurement and reconstruction is illustrated in Figure\,\ref{fig:flow_graph} for a current frame at time $t$ and two preceding frames. In this paper, we focus on such causal reconstruction scenarios as in \cite{Jonscher2016a}. Our novel contributions to these scenarios are twofold: As a first contribution, we propose a novel combination of consistency checks that finds outliers among the motion vectors much faster and more reliably than in \cite{Jonscher2016a}. This is marked with red color in Figure\,\ref{fig:flow_graph}. As a second contribution, we propose the so-called \mbox{D-FSR}{}, a new implementation of recursive FSR that handles the projected pixels differently than in \cite{Jonscher2016a} and can be used with dynamic sampling masks.
It differs from \cite{Jonscher2016a}, where only fixed masks are considered, and from \cite{Jonscher2018}, where information from future frames is used. Our analysis is performed on a variety of test sequences to show the wide applicability of the modifications. Besides analyzing the reconstruction quality of the proposed modifications, we provide visual comparisons and compare the computation times. This paper is organized as follows: In Section\,\ref{sec:state_of_art}, we present the state of the art. In Section\,\ref{sec:proposed_enhancements}, we describe our novel contributions. In Section\,\ref{sec:simulation_and_results}, the simulation results are presented and discussed. Section\,\ref{sec:conclusion} summarizes the paper. \vspace*{-1mm} \section{STATE OF THE ART} \vspace*{-1mm} \label{sec:state_of_art} \vspace*{-1mm} \subsection{Single Frame Reconstruction} \vspace*{-1mm} In order to reconstruct the missing pixels from the sampled image data, frequency selective reconstruction (FSR) has been shown to provide a high reconstruction quality, outperforming other reconstruction techniques \cite{Seiler2015}. The sub-sampled image can be understood as $f^\mathrm{sub}_{mn} = f_{mn} \cdot b_{mn}$, where $f$ and $b$ are the reference high-resolution image and the binary mask, respectively. From the sub-sampled image and the mask, FSR reconstructs an image $\hat{f}_{mn}$. To this end, it subdivides the image into neighboring blocks that are reconstructed using the measurements from their neighboring blocks as well. The model of each block is built in the Fourier domain, where the image is assumed to be approximately sparse \cite{Lam2000, Elad2010}. \vspace*{-1mm} \subsection{Recursive FSR} \vspace*{-1mm} For video data, it was proposed to additionally use information from previous frames \cite{Jonscher2016a}. For any missing pixel in the current frame, a motion vector pointing to a measured pixel in one of the preceding frames could provide useful information.
In \cite{Jonscher2016a}, Jonscher et al. propose such an approach called recursive FSR (\mbox{R-FSR}{}). In \mbox{R-FSR}{}, the motion estimation is performed by a pixel-wise template matching using the already reconstructed past frames and the measured data from the current frame. Such a motion vector field is illustrated in Figure\,\ref{fig:consistency_checking}\,(a). During the template matching \cite{brunelli2009template}, some motion vectors may be untrustworthy. This can result from cases where the motion is larger than the search range, from occlusions, or from local optima. Since non-regularly sampled data is used, these issues are aggravated further as less information is available. In order to sort out unfavorable motion vectors, Jonscher et al. propose a consistency check for which the motion vectors in the reverse direction are calculated using an additional reverse motion estimation (RME{}). Only when both motion vectors coincide is the motion vector accepted. While this strategy is reasonable and seems successful, it is also computationally demanding since the number of cost functions that need to be evaluated is doubled. With the accepted motion vectors at hand, values for some of the missing pixels in the current frame can be found by following their motion vectors into the past. If a measurement is available at the corresponding position in the past frame, it is projected to the current frame and used as an additional measurement during the reconstruction. In case projections from more than one past frame are available, these are averaged. \section{Novel Contributions} \vspace*{-1mm} Our novel contributions to the recursive reconstruction of non-regularly sampled video data are twofold and are described in the two following sub-sections.
\vspace*{-1mm} \label{sec:proposed_enhancements} \begin{figure}[t] \centering {\footnotesize \import{images/explain_crosschecks/}{explain_crosschecks.pdf_tex} } \vspace*{-5mm} \footnotesize \caption{Illustration of (a) the motion vector field from frame $f^{(t)}$ to frame $f^{(t-1)}$ and (b-d) the used consistency checks. In contrast to the values used in the text, RMC{} is shown for a search range of $\{-2,-1,0,1,2\}$ and \mbox{FRMC}{} is shown for a partial search range of $\{-2,0,2\}$.} \label{fig:consistency_checking} \vspace*{-4mm} \end{figure} \vspace*{-1mm} \subsection{Proposed Consistency Checks} \vspace*{-1mm} As a first contribution, we propose two novel consistency checks. Their aim is to reduce the computational complexity and to increase the reconstruction quality. \vspace*{-2mm} \subsubsection{Fast Reverse Motion Check (\mbox{FRMC}{})} \vspace*{-1mm} The first proposed consistency check is related to RME{}. Instead of calculating the full reverse motion vector field, we test the more relevant motion vectors around the already found motion vector. In a first step, we therefore propose testing the same number of motion vectors as in RME{}, but placing them symmetrically around the motion vector pointing back to the original pixel. This is illustrated in Figure\,\ref{fig:consistency_checking}\,(b) and is further denoted as reverse motion check (RMC{}). This algorithm can be assumed to be roughly as fast as RME{} since the same number of cost functions needs to be evaluated. In a second step, we propose testing only a small subset of these motion vectors, leading to the fast RMC{} (\mbox{FRMC}{}). In case of an untrustworthy motion vector, the probability is high that many of the motion vectors in the reverse direction have a smaller cost than the currently chosen motion vector. In our setup, we test only motions in the set $\{-7,-3,-1,0,1,3,7\}$ for both spatial dimensions instead of testing all motion vectors $\{-9,-8,\dots, 9\}$ as in RMC{} and RME{}.
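Assuming some block-matching cost (e.g. an SAD over a template; the cost function below is a stand-in), the acceptance rule of \mbox{FRMC}{} can be sketched as:

```python
def frmc_accept(cost_reverse):
    """Fast reverse motion check (sketch).

    `cost_reverse(dy, dx)` is the template-matching cost of the reverse
    motion (dy, dx), measured relative to the vector pointing back to
    the original pixel, and is evaluated only on the coarse candidate
    set {-7, -3, -1, 0, 1, 3, 7} in both dimensions (49 evaluations).
    The forward motion vector is accepted only if the zero displacement
    (i.e. pointing back to the original position) has the lowest cost.
    """
    offsets = (-7, -3, -1, 0, 1, 3, 7)
    costs = {(dy, dx): cost_reverse(dy, dx) for dy in offsets for dx in offsets}
    return min(costs, key=costs.get) == (0, 0)


# Stand-in cost with its minimum at the origin: vector accepted.
print(frmc_accept(lambda dy, dx: dy * dy + dx * dx))
# Stand-in cost with its minimum displaced from the origin: vector rejected.
print(frmc_accept(lambda dy, dx: (dy - 3) ** 2 + dx ** 2))
```
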
The number of motions to be tested is thus significantly reduced from $361$ to $49$. An example is illustrated in Figure\,\ref{fig:consistency_checking}\,(c). If the green arrow has the lowest cost, the motion is accepted. If any of the red arrows has the lowest cost, the motion is rejected. \vspace*{-1mm} \subsubsection{Nearest Neighbor Check (NNC{})} \vspace*{-1mm} The second proposed consistency check does not require any additional template matching at all and is therefore potentially faster. It is supposed to be used in combination with \mbox{FRMC}{} in order to speed up the calculations. This can be achieved since many motion vectors may already be sorted out with this simpler consistency check. It relies on testing the consistency of the determined vector field in a local neighborhood. Denoting the found motion vector field as $(\alpha_{mn}, \beta_{mn})$, we perform a $3{\times}3$ median filtering of the two individual components, resulting in the filtered motion vector field \begin{align} (\tilde\alpha_{mn}, \tilde\beta_{mn}) = (\mathrm{median}_{3{\times}3}(\alpha_{mn}), \mathrm{median}_{3{\times}3}(\beta_{mn})). \end{align} Next, for each position $(m,n)$, the filtered motion vectors at the four nearest neighboring positions $(m-1,n), (m+1,n), (m,n-1)$, and $(m,n+1)$ are compared. The motion is accepted only if the sum of the absolute differences of the motion vectors is at most one for each neighboring pair. Otherwise, the motion is rejected because it is considered untrustworthy. Such accepted/rejected motions are highlighted with green/red color in Figure\,\ref{fig:consistency_checking}\,(d). This consistency check is abbreviated as nearest neighbor check (NNC{}) later on. \vspace*{-1mm} \subsection{Proposed Recursive FSR for Dynamic Masks (\mbox{D-FSR}{})} \vspace*{-1mm} As the second contribution of this paper, we propose a new implementation of recursive FSR built upon the work from \cite{Jonscher2016a}.
In contrast to \mbox{R-FSR}{} from \cite{Jonscher2016a}, our implementation handles the projected pixels differently and is capable of additionally handling dynamic masks. The novel algorithm is abbreviated as \mbox{D-FSR}{}. During the model generation, we use the projected pixels in the same manner as \mbox{R-FSR}{}. As a last step, however, the model found during the reconstruction is overwritten with the available measurements, as is commonly done in FSR \cite{Seiler2015}. In this step, \mbox{R-FSR}{} makes no distinction between measured and projected pixels, whereas \mbox{D-FSR}{} considers the projected pixels to be less reliable and therefore does not use them to overwrite the model. \vspace*{-1mm} \section{SIMULATIONS AND RESULTS} \label{sec:simulation_and_results} \vspace*{-1mm} In this section, we evaluate the performance of the proposed consistency checks and \mbox{D-FSR}{}. We compare them to \mbox{R-FSR}{} + RME{} from \cite{Jonscher2016a}, investigate the impact of using a dynamic mask instead of a fixed mask, and show the runtimes. For all reconstructions with FSR, we chose the same parameters as in \cite{Jonscher2016a}, except that the concealed weighting of the FSR is set to zero, which allows us to process all blocks fully in parallel during the reconstruction. For all recursive reconstructions, we use three previous frames for the motion estimation and projection. For \mbox{R-FSR}{} + RME{} from \cite{Jonscher2016a} we use raw simulation data kindly provided by the authors. This data is available for the fixed mask and one of the test sequences. For all other cases we use our own implementations as described in Section\,\ref{sec:proposed_enhancements}.
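To make the two proposed consistency checks concrete, the following minimal NumPy sketch illustrates them (the SAD cost, the patch size, the wrap-around border handling, and all function names are our own illustrative assumptions, not the exact implementation used for the measurements):

```python
import numpy as np

FRMC_OFFSETS = [-7, -3, -1, 0, 1, 3, 7]  # 7x7 = 49 tests instead of 19x19 = 361

def sad(frame_a, frame_b, pos_a, pos_b, size=4):
    # Hypothetical cost: sum of absolute differences of two square patches.
    (ya, xa), (yb, xb) = pos_a, pos_b
    pa = frame_a[ya:ya + size, xa:xa + size].astype(np.int64)
    pb = frame_b[yb:yb + size, xb:xb + size].astype(np.int64)
    return np.abs(pa - pb).sum()

def frmc_accept(cur, prev, pos, mv):
    # FRMC: from the matched position in the previous frame, test a sparse set
    # of reverse candidates; accept only if the reverse motion back to the
    # original pixel (the "green arrow") has the lowest cost.
    ty, tx = pos[0] + mv[0], pos[1] + mv[1]
    back = sad(prev, cur, (ty, tx), pos)
    return all(back <= sad(prev, cur, (ty, tx), (pos[0] + ry, pos[1] + rx))
               for ry in FRMC_OFFSETS for rx in FRMC_OFFSETS)

def median3x3(a):
    # 3x3 median filter per component (borders handled by wrap-around for brevity).
    shifts = [np.roll(a, (dy, dx), axis=(0, 1))
              for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    return np.median(np.stack(shifts), axis=0)

def nnc_accept(alpha, beta):
    # NNC: median-filter both components, then accept a position only if the
    # absolute differences to each of the four nearest neighbors sum to <= 1.
    fa, fb = median3x3(alpha), median3x3(beta)
    ok = np.ones(alpha.shape, dtype=bool)
    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        na = np.roll(fa, (dy, dx), axis=(0, 1))
        nb = np.roll(fb, (dy, dx), axis=(0, 1))
        ok &= (np.abs(fa - na) + np.abs(fb - nb)) <= 1
    return ok
```

In the combined check, NNC{} is evaluated first so that the costlier \mbox{FRMC}{} test only runs on motion vectors that survive it.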
\begin{figure}[t] \centering {\footnotesize \import{images/test_datasets/}{datasets.pdf_tex} } \vspace*{-5mm} \footnotesize \caption{Frame 20 of each video sequence used for the simulations.} \label{fig:testdata} \vspace*{-4mm} \end{figure} For the test sets, we use several monochrome video sequences: The first 100 frames of the \textit{Spincalendar} sequence with a resolution of $1280{\times}720$ pixels are used since these were also used in \cite{Jonscher2016a}. Further video data is taken from the \textit{JVET} test sequences \cite{JVET-N1010}. For the \textit{JVET -- ClassC} sequences, the resolution is $832{\times}480$ pixels and we use the first 50 frames. Moreover, we chose three sequences from \textit{JVET -- A}. For those, we spatially down-scaled the frames by a factor of three, resulting in $1280{\times}720$ pixels, to achieve a similar resolution as for the other sequences. Once more, we use the first 100 frames. Figure\,\ref{fig:testdata} depicts a single frame of each used sequence. To evaluate the quality of the reconstructed videos, we calculate the frame-wise PSNR and average it over all frames of the respective video. For the PSNR calculation, a border of 40 pixels is omitted since boundary effects are not of interest in our evaluation. The PSNR values are then further averaged across the video sequences to obtain a meaningful average value. The same evaluations were done for the mean structural similarity (SSIM)~\cite{Wang2004}.
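The frame-wise quality measure described above can be sketched as follows (a minimal sketch; the function name and the peak value of $255$ for 8-bit content are our assumptions):

```python
import numpy as np

def border_psnr(ref, rec, border=40, peak=255.0):
    # Frame-wise PSNR in dB; a border is omitted so that boundary effects
    # do not influence the evaluation.
    ref = ref[border:-border, border:-border].astype(np.float64)
    rec = rec[border:-border, border:-border].astype(np.float64)
    mse = np.mean((ref - rec) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```

The per-frame values are then averaged over each sequence, and the sequence averages are averaged again to obtain the reported numbers.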
\textit{(Best to be viewed enlarged on a monitor.)}} \label{fig:compare_visual} \vspace*{-4mm} \end{figure*} \begin{table}[t] \footnotesize \caption{Results for the fixed mask. The average reconstruction quality in terms of PSNR in dB is provided using different test sets. For the averages, SSIM is provided, too.} \vspace*{-2mm} \scriptsize \label{tab:results_fixed} \centering \setlength{\tabcolsep}{3pt} \begin{tabularx}{0.99\linewidth}{ll||c|c|c|c|c|c} & \textit{\textbf{(fixed mask)}} & & \mbox{R-FSR}{} & & & & \mbox{D-FSR}{} \\ & & FSR & + RME{} & \mbox{D-FSR}{} & \mbox{D-FSR}{} & \mbox{D-FSR}{} & + \mbox{FRMC}{} \\ & & \cite{Seiler2015} & \cite{Jonscher2016a} & + RME{} & + RMC{} & + \mbox{FRMC}{} & + NNC{} \\ \hline & \rule{0pt}{1\normalbaselineskip}Spincalendar & 30.38 & 31.66 & 32.67 & 32.72 & \textbf{33.00} & 32.68 \\ \hline \rule{0pt}{1\normalbaselineskip} \multirow{4}{*}{\rotatebox{90}{ \textit{Class C}}} & BasketballDrill & 31.27 & - & 31.10 & \textbf{31.44} & 31.38 & \textbf{31.44} \\ & BQMall & 27.49 & - & 27.73 & 27.81 & \textbf{27.82} & 27.81 \\ & PartyScene & 23.22 & - & 23.35 & 23.37 & 23.37 & \textbf{23.39} \\ & RaceHorses & 28.69 & - & 28.32 & 28.96 & 28.85 & \textbf{29.15} \\ \hline \rule{0pt}{1\normalbaselineskip} \multirow{3}{*}{\rotatebox{90}{ \textit{A}}} & Tango2 & 39.80 & - & 37.78 & 40.58 & 40.40 & \textbf{40.62} \\ & ParkRunning3 & 30.16 & - & 31.21 & 31.27 & 31.19 & \textbf{31.43} \\ & FoodMarket4 & 47.23 & - & 41.83 & \textbf{46.53} & 45.72 & 46.35 \\ \hline & \rule{0pt}{1\normalbaselineskip}\textit{\textbf{Average (PSNR)}} & 32.28 & - & 31.75 & 32.83 & 32.72 & \textbf{32.86}\\\hline\hline & \rule{0pt}{1\normalbaselineskip}\textit{\textbf{Average (SSIM)}} & 0.9334 & - & 0.9368 & 0.9399 & 0.9394 & \textbf{0.9406} \end{tabularx} \vspace*{-2mm} \end{table} \begin{table}[t] \vspace*{2mm} \footnotesize \caption{Results for the dynamic mask. The average reconstruction quality in terms of PSNR in dB is provided using different test sets. 
For the averages, SSIM is provided, too.} \vspace*{-2mm} \scriptsize \label{tab:results_vary} \centering \setlength{\tabcolsep}{3pt} \begin{tabularx}{0.9\linewidth}{ll||c|c|c|c|c} & \textit{\textbf{(dynamic mask)}} & & & & & \mbox{D-FSR}{} \\ & & & \mbox{D-FSR}{} & \mbox{D-FSR}{} & \mbox{D-FSR}{} & + \mbox{FRMC}{} \\ & & FSR \cite{Seiler2015} & + RME{} & + RMC{} & + \mbox{FRMC}{} & + NNC{} \\ \hline & \rule{0pt}{1\normalbaselineskip}Spincalendar & 30.43 & 33.20 & 33.26 & \textbf{33.56} & 33.21 \\ \hline \rule{0pt}{1\normalbaselineskip} \multirow{4}{*}{\rotatebox{90}{ \textit{Class C}}} & BasketballDrill & 31.30 & 33.62 & 34.28 & \textbf{34.37} & 34.31 \\ & BQMall & 27.53 & 30.01 & 30.14 & \textbf{30.39} & 30.27 \\ & PartyScene & 23.23 & 24.64 & 24.70 & \textbf{25.05} & 24.92 \\ & RaceHorses & 28.71 & 28.34 & 29.02 & 28.92 & \textbf{29.22} \\ \hline \rule{0pt}{1\normalbaselineskip} \multirow{3}{*}{\rotatebox{90}{ \textit{A}}} & Tango2 & 39.84 & 37.59 & 40.69 & 40.51 & \textbf{40.73} \\ & ParkRunning3 & 30.19 & 31.36 & 31.42 & 31.34 & \textbf{31.58} \\ & FoodMarket4 & 47.28 & 41.56 & \textbf{46.58} & 45.78 & 46.38 \\ \hline & \rule{0pt}{1\normalbaselineskip}\textit{\textbf{Average (PSNR)}} & 32.31 & 32.54 & 33.76 & 33.74 & \textbf{33.83}\\\hline\hline & \rule{0pt}{1\normalbaselineskip}\textit{\textbf{Average (SSIM)}} & 0.9338 & 0.9477 & 0.9510 & 0.9520 & \textbf{0.9524} \end{tabularx} \vspace*{-2mm} \end{table} Tables\,\ref{tab:results_fixed} and \ref{tab:results_vary} show the results of the reconstruction quality in terms of PSNR using a fixed and a dynamic mask, respectively. Besides the results using the single frame FSR \cite{Seiler2015}, the various consistency checks are shown in combination with \mbox{D-FSR}{}. Additionally, the SSIM was evaluated in the same manner. Its results are in accordance with the PSNR values and the averages are provided in the last row of Tables\,\ref{tab:results_fixed} and \ref{tab:results_vary} for completeness. 
Comparing the average results from both tables, we find that using a dynamic mask outperforms using a fixed mask by roughly \SI[retain-explicit-plus]{+1}{dB}, which is consistent with the findings in \cite{Jonscher2018}. In Table\,\ref{tab:results_fixed}, we can observe that \mbox{D-FSR}{} + RME{} outperforms the original version from \cite{Jonscher2016a} by \SI[retain-explicit-plus]{+1.01}{dB} for the \textit{Spincalendar} sequence. Beyond this, we investigated the influence of the proposed consistency checks on the reconstruction quality in terms of PSNR. Using RMC{}, the reconstruction quality averaged over all used sequences is increased by \SI[retain-explicit-plus]{+1.08}{dB} for the fixed mask and \SI[retain-explicit-plus]{+1.22}{dB} for the dynamic mask. Using the fast variant of RMC{}, namely \mbox{FRMC}{}, results in a slight decrease of the average PSNR. Interestingly, this trend is not uniform across the different sequences. For example, the \textit{FoodMarket4} scene shows a notable loss whereas the reconstruction for other sequences improves. The average loss is plausible, as not all motion vectors are tested in the opposite direction. Lastly, the NNC{} is added to the \mbox{FRMC}{}. This combination shows the highest average reconstruction quality in both Tables\,\ref{tab:results_fixed} and \ref{tab:results_vary}. For the dynamic mask, a gain of \SI[retain-explicit-plus]{+1.52}{dB} is observed compared to the single frame FSR \cite{Seiler2015} and a gain of \SI[retain-explicit-plus]{+1.29}{dB} is observed compared to \mbox{D-FSR}{} + RME{}. This means that \mbox{FRMC}{} + NNC{} overcomes the loss arising from switching to \mbox{FRMC}{} and even improves the quality on average. For a more in-depth view of the simulated data, Figure\,\ref{fig:compare_Jonscher_vs_frames} shows the frame-wise PSNR gain relative to the single frame FSR. The \textit{Spincalendar} sequence is used.
It can be seen that roughly 10 frames are needed for the recursive algorithm to converge to a good quality and that the gain is then mostly constant for all remaining frames. In order to judge the visual quality, Figure\,\ref{fig:compare_visual} shows two sections of the reconstructed frames using the dynamic mask. For the example from the \textit{Tango2} sequence, it can clearly be seen that the rather low average PSNR of \mbox{D-FSR}{} + RME{} arises from strong artifacts indicated by the red arrow. These result from faulty motion vectors that can occur in nearly constant regions in combination with large motion vectors and should be sorted out. For the other consistency checks, these motion vectors are sorted out as desired. For the \textit{Spincalendar} sequence, the differences among the proposed algorithms are more subtle. The cases where RMC{} performs slightly worse than \mbox{FRMC}{} + NNC{} are highlighted with red arrows. \begin{figure}[t] \centering {\footnotesize \import{images/results_compare_Jonscher_vs_frames/}{compare_Jonscher_frames.pdf_tex} } \vspace*{-4mm} \footnotesize \caption{Frame-wise gain of the reconstruction quality in terms of PSNR relative to the single frame FSR \cite{Seiler2015} using the \textit{Spincalendar} sequence.} \label{fig:compare_Jonscher_vs_frames} \vspace*{-2mm} \end{figure} In addition to the reconstruction quality, we evaluate the computation times of the different algorithms in case of the dynamic masks. We provide the runtimes for the motion estimation, the consistency checks between the current frame and its last three frames, and the reconstruction. A fast GPU variant of the motion estimation was developed as well; for the timings reported here, however, we restrict the executions of the motion estimation and consistency checks to a single core of an Intel i9-10980XE CPU at 3.00\,GHz. Table\,\ref{tab:runtime} summarizes the results.
Timings for \mbox{R-FSR}{} + RME{} from \cite{Jonscher2016a} are not available, but its algorithmic complexity is identical to that of \mbox{D-FSR}{} + RME{}. The code for the pixel-wise motion estimation is identical for all cases and therefore the results are all close. The same is true for the FSR. The slight differences can be used as an estimate of the accuracy of the measurements and are considered to be acceptable. Taking a look at the runtimes of the consistency checks, we can see that NNC{} + \mbox{FRMC}{} is more than 13-fold faster than RME{} and RMC{}. Remarkably, combining \mbox{FRMC}{} and NNC{} is more than 8-fold faster than using only \mbox{FRMC}{}, since many motion vectors can be sorted out using solely the very fast NNC{}. In such cases, the slower \mbox{FRMC}{} is skipped. Relating these times to the total runtime of the reconstruction, an overall reduction of \SI{48}{\%} is achieved. \begin{table}[t] \footnotesize \vspace*{2mm} \caption{Runtimes of the different steps in seconds. The algorithmic complexity of \mbox{R-FSR}{} + RME{} from \cite{Jonscher2016a} is identical to that of \mbox{D-FSR}{} + RME{}.
ME: Motion estimation, CC: consistency check.} \scriptsize\vspace*{-2mm} \label{tab:runtime} \centering \setlength{\tabcolsep}{3pt} \begin{tabularx}{0.7\linewidth}{l||c|c|c||c} & ME & CC & \, FSR\, & Total \\ \hline \rule{0pt}{1\normalbaselineskip}\mbox{D-FSR}{} + RME{} & 21.56 & 33.80 & 6.02 & 61.37 \\ \mbox{D-FSR}{} + RMC{} & 23.11 & 40.19 & 6.11 & 69.41 \\ \mbox{D-FSR}{} + \mbox{FRMC}{} & 21.49 & 21.02 & 6.05 & 48.56 \\ \mbox{D-FSR}{} + NNC{} + \mbox{FRMC}{} & 23.14 & 2.56 & 6.07 & \textbf{31.77} \end{tabularx} \vspace*{-3mm} \end{table} \vspace*{-1mm} \section{CONCLUSION} \vspace*{-1mm} \label{sec:conclusion} Using recursive reconstruction algorithms, a pixel-wise motion estimation and projection between the current frame and its preceding frames is performed to enhance the reconstruction quality of the current frame. Since some motion vectors may be untrustworthy, it is required to perform a consistency check which sorts out such motion vectors. For this task, \mbox{R-FSR}{} from \cite{Jonscher2016a} relies on a computationally expensive reverse motion estimation (RME{}). In order to reduce the cost, we propose a new consistency check which is a combination of \mbox{FRMC}{} and NNC{}. Altogether, more relevant reverse motion vectors are tested in \mbox{FRMC}{} and most evaluations are skipped by comparing the locally neighboring motion vectors using NNC{}. The proposed \mbox{D-FSR}{} uses the projected pixels differently and can handle dynamic masks as well. With our proposed recursive reconstruction method and consistency checks, \mbox{D-FSR}{} + \mbox{FRMC}{} + NNC{}, we achieve a \SI[retain-explicit-plus]{+1.01}{dB} higher reconstruction quality in terms of PSNR compared to \mbox{R-FSR}{} + RME{} from \cite{Jonscher2016a} in case of the fixed mask and the \textit{Spincalendar} sequence. 
Testing a larger dataset of different video sequences, we find that \mbox{D-FSR}{} + \mbox{FRMC}{} + NNC{} performs better than \mbox{D-FSR}{} + RME{} by \SI[retain-explicit-plus]{+1.29}{dB} on average for the dynamic mask. The average PSNR gain with respect to the single frame FSR \cite{Seiler2015} is \SI[retain-explicit-plus]{+1.52}{dB}. At the same time, the proposed consistency check is 13-fold faster than RME{}, which reduces the total runtime by \SI{48}{\%}. \bibliographystyle{IEEEbib}
\section{Introduction\label{introduction}} The deterministic channel model for wireless networks proposed by Avestimehr, Diggavi and Tse \cite{amir2007_deterministicmodel} \cite{amir2007_wirelessnetworkinfoflow} (referred to as the ADT model hereafter) has been a useful tool for understanding the fundamental limitations of information transfer in wireless networks. The ADT model captures two main features present in wireless networks: broadcasting and interference. By making appropriate assumptions, it converts wireless networks into deterministic networks, which in turn lead to approximate capacity results. Consider a point-to-point Gaussian channel given by $y=\sqrt{{\mbox{SNR}}}x+z$ where $z\sim{\cal{N}}(0,1)$ (${\cal{N}}$ denotes the Gaussian distribution). Assume $x$ and $z$ are real numbers; then we can write $y\approx 2^n\sum_{i=1}^nx(i)2^{-i}+\sum_{i=1}^\infty (x(i+n)+z(i))2^{-i}$ where $n=\lceil\frac{1}{2}\log {\mbox{SNR}}\rceil$ (here we assume a peak power of $1$ for $x$ and $z$). If we think of the transmitted signal $x$ as a sequence of bits at different signal levels, then the ADT model truncates $x$ and passes only its bits above the noise level (the first $n$ most significant bits here), i.e., it converts the original Gaussian channel into a deterministic channel without noise. When applying the ADT model to wireless networks, the broadcasting is captured by the fact that in the resultant deterministic networks, all outgoing edges from the same signal level of any transmitting node carry the same unit information, and the interference is captured by the fact that at each signal level of any receiving node, only the modulo sum of all the received signals is available to the receiving node. This model is called the linear finite-field deterministic channel model in \cite{amir2007_deterministicmodel} \cite{amir2007_wirelessnetworkinfoflow}.
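The truncation underlying the model can be illustrated with a short numerical sketch (our own illustration under the stated assumptions, i.e., real signals with peak power $1$ and base-$2$ logarithms; the function names are hypothetical):

```python
import math

def adt_levels(snr):
    # Number of bit levels above the noise floor: n = ceil(0.5 * log2(SNR)).
    return math.ceil(0.5 * math.log2(snr))

def adt_truncate(x, n):
    # Deterministic-model output for a scalar x in [0, 1): keep only the
    # n most significant bits of the binary expansion of x.
    return math.floor(x * 2 ** n) / 2 ** n
```

For example, at $\mbox{SNR}=256$ we obtain $n=4$ bit levels, and $x=0.1101_2=0.8125$ truncated to its first two bits becomes $0.11_2=0.75$.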
We refer to it as the ADT model and denote the finite field of size $p$ associated with the ADT model as $\mathbb{F}_p$ in this paper. In \cite{amir2007_deterministicmodel} \cite{amir2007_wirelessnetworkinfoflow}, the unicast (i.e., with one source S and one destination D) capacity $C$ of any linear deterministic wireless relay network was characterized as the minimum rank of the adjacency matrices describing all its S-D cuts. An exhaustive search for the minimum rank of the adjacency matrix over all S-D cuts results in an algorithm with complexity exponential in the size of the network. Amaudruz \& Fragouli \cite{aurore2009_combinatorial_algo_deterministic} were the first to propose a polynomial-time algorithm for finding the unicast capacity of a linear deterministic wireless relay network (see also \cite{fragouli2009_journal}). In this work, we improve upon Amaudruz \& Fragouli's work and further reduce the computational complexity of the algorithm by fully exploiting the useful combinatorial features intrinsic to the problem. Our improvement applies generally to finite fields $\mathbb{F}_p$ of any size associated with the ADT model. Compared with other algorithms solving the same problem \cite{sadegh2009_combinatorialstudyofdeterministic} \cite{geomans2009}, our improved algorithm is very competitive in terms of complexity. This paper is organized as follows. In Section \ref{notation}, we briefly introduce the polynomial-time algorithm by Amaudruz \& Fragouli for finding the unicast capacity of linear deterministic wireless relay networks. Section \ref{ouralgo} gives a detailed description of our improvement upon the algorithm. First we introduce our improvement with an emphasis on the new components of our algorithm and how they fix the problems within the original algorithm. Then we explore several useful combinatorial features intrinsic to the problem.
Finally we explain how these combinatorial features can be combined with our new components to reduce the complexity of the algorithm. We also give comparison results between our improved algorithm and other algorithms solving the same problem. Section \ref{conclusion} concludes the paper. \section{Preliminaries and Background\label{notation}} \subsection{Notations and Definitions} In \cite{amir2007_wirelessnetworkinfoflow}, it is shown that an arbitrary deterministic relay network can be expanded over time to generate an asymptotically equivalent (in terms of transmission rate) layered network. Therefore, we focus on layered deterministic networks. Let ${\cal{G=(V,E)}}$ denote a layered deterministic wireless relay network, where ${\cal{V}}$ represents the set of nodes in the original wireless relay network, each node in $\mathcal{V}$ has several different levels of inputs and outputs, and ${\cal{E}}$ is the set of directed edges going from one input of some node to one output of some other node. For example, Fig. \ref{fig:subfig1} gives a graph representation of a layered deterministic wireless relay network where each node is labeled with a capital letter, and all inputs (outputs) of nodes are labeled as $\{x_i\}$ ($\{y_j\}$), $1\leqslant i,j\leqslant 8$. In the layered network ${\cal{G}}$, all paths from the source node S to the destination node D have equal lengths \cite{amir2007_wirelessnetworkinfoflow}. The set of nodes ${\cal{V}}$ is divided into different layers according to their distances to S. The first layer consists of S and the last layer consists of D. Let ${\cal{A}}(x_i)$ (or ${\cal{A}}(y_j)$) denote the node to which an input $x_i$ (or an output $y_j$) belongs. Let ${\cal{L}}(A)$ (or ${\cal{L}}(x_i)$, ${\cal{L}}(y_j)$) denote the layer number to which node $A$ (or $x_i$, $y_j$) belongs.
In this paper, denote by $M$ the maximum number of nodes in each layer, by $L$ the total number of layers and by $d$ the maximum number of outgoing edges from any input of any node in the network ${\cal{G}}$. A cut $\Omega$ in ${\cal{G}}$ is a partition of the nodes ${\cal{V}}$ into two disjoint sets $\Omega$ and $\Omega^c$ such that S $\in\Omega$ and D $\in\Omega^c$. A cut is called a layer cut if all edges across the cut are emanating from nodes of the same layer; otherwise it is called a cross-layer cut. An edge $(x_i,y_j)\in {\cal{E}}$ belongs to layer cut $l$ if ${\cal{L}}(x_i)=l$. The adjacency matrix $T(\textbf{x},\textbf{y})$ for the sets of inputs $\textbf{x}=\{x_1,x_2,...x_m\}$ and of outputs $\textbf{y}=\{y_1,y_2,...y_n\}$ in ${\cal{G}}$ is a matrix of size $m\times n$ with binary $\{0,1\}$ entries. The rows correspond to $\{x_i\in \textbf{x}\}$, the columns correspond to $\{y_j\in\textbf{y}\}$, and $T(i,j)=1$ if $(x_i,y_j)\in {\cal{E}}$. The adjacency matrix $T(E)$ for a set of edges, $E$, is the adjacency matrix for the sets of their inputs and their outputs. A set of edges, $E$, is said to be linearly independent (LI) if rank$(T(E))=|E|$ (where the rank is computed over GF$(2)$); otherwise it is said to be linearly dependent (LD). In ${\cal{G}}$, each S-D path is of length $L-1$ and crosses each layer cut exactly once. A set of S-D paths is said to be LI if the subsets of their edges crossing each layer cut are LI; otherwise it is said to be LD. In this work, we will consider a slightly more general adjacency matrix, where the non-zero entries can be from a finite field $\mathbb{F}_p$, and the rank is also computed over $\mathbb{F}_p$. Of course, all our results will also apply to the binary field case. Let ${\cal{E}}_{\Omega}$ be the set of edges crossing the cut $\Omega$ in ${\cal{G}}$.
The cut value of $\Omega$ is defined as rank$(T({\cal{E}}_{\Omega}))$, which based on the definition equals the maximum number of LI edges in ${\cal{E}}_{\Omega}$. Note that the cut value defined above is different than that for regular graphs (which is just the number of edges crossing the cut). It is proved \cite{amir2007_deterministicmodel}\cite{amir2007_wirelessnetworkinfoflow} that the unicast capacity of a linear deterministic wireless relay network is equal to the minimum cut value among all S-D cuts. \subsection{Algorithm by Amaudruz \& Fragouli}\label{originalalgo} The unicast algorithm by Amaudruz and Fragouli \cite{aurore2009_combinatorial_algo_deterministic} finds the maximum number $C$ of linearly independent S-D paths in a given layered linear deterministic relay network ${\cal{G}}$, where $C$ is the unicast capacity of the network. The algorithm is a path augmentation algorithm, operating in iterations. In each iteration, the algorithm tries to find an additional S-D path so that all S-D paths found are LI. Let ${\cal{P}}=\{{\cal{P}}_{1},...,{\cal{P}}_{k}\}$ denote the set of $k$ LI S-D paths found in the first $k$ iterations. In the process of finding the $(k+1)$-th S-D path ${\cal{P}}_{k+1}$ in iteration $k+1$, the algorithm may make modifications to ${\cal{P}}$ while still maintaining a set of $k$ LI complete S-D paths. The unicast algorithm determines ${\cal{P}}_{k+1}$ by exploring nodes in ${\cal{G}}$ in a certain order as outlined shortly. The algorithm is implemented in two recursive functions $E_A$ and $E_x$ that explore a node and input respectively. The exploration of a node $A$ takes place when ${\cal{P}}_{k+1}$ has been extended from S to A and needs to be completed from A to D. In iteration $k+1$, the unicast algorithm calls $E_A$ with the following inputs: ${\cal{G}}$, ${\cal{P}}=\{{\cal{P}}_{1},...,{\cal{P}}_{k}\}$, the indicator function ${\cal{M}}$ (that implements a marking mechanism for visiting nodes and inputs/outputs) and S. 
The function $E_A$ returns true with one more S-D path ${\cal{P}}_{k+1}$ recorded in ${\cal{P}}$ if it succeeds in finding ${\cal{P}}_{k+1}$, and false otherwise. Exploring node A implies exploring all unused inputs $\{x_i\}$ of A, so we explain the exploration of an input $x_i$ of A below. Hereafter, denote by $U^l$ the set of edges used by ${\cal{P}}$ in layer cut $l$, and by $U_x^l$ and $U_y^l$ the sets of inputs and outputs used by $U^l$. Let ${\cal{L}}(x_i)=l$. If $x_i\in U_x^l$, do nothing. Otherwise, consider each $y_j$ with $(x_i,y_j)\in{\cal{E}}$ as follows. \begin{itemize} \item [(a) ] {\it $y_j$ is used.} Let $L_{x_i}$ denote the smallest subset of $U_x^l$ with $s=|L_{x_i}|\leqslant|U_x^l|=k$ such that $T(\{L_{x_i},x_i\},U_y^l)$ has rank $s$. The authors prove that by replacing any $x_k\in L_{x_i}$ with $x_i$, the algorithm can still maintain $k$ LI S-D paths, and the task now is to complete ${\cal{P}}_{k+1}$ from ${\cal{A}}(x_k)$. So in this case the unicast algorithm first finds the set $L_{x_i}$ in function FindL. Then it replaces each $x_k\in L_{x_i}$ with $x_i$ and calls a Match function to find a new set of $k$ edges in layer cut $l$ to maintain $k$ LI S-D paths in ${\cal{P}}$, and tries to complete ${\cal{P}}_{k+1}$ from ${\cal{A}}(x_k)$ if ${\cal{A}}(x_k)$ is not marked or from $x_k$ if $x_k$ is not marked. We refer to this step as same-layer rewiring. \item [(b) ] {\it $y_j$ is not used.} A rank computation function is called on the matrix $T(\{U_x^l,x_i\},\{U_y^l,y_j\})$. If the matrix is not full rank or ${\cal{A}}(y_j)$ has been visited before, do nothing. If the matrix is full rank and ${\cal{A}}(y_j)$ has not been visited before, add $(x_i,y_j)$ to ${\cal{P}}_{k+1}$ and try to complete it from ${\cal{A}}(y_j)$ by exploring ${\cal{A}}(y_j)$. We refer to this step as forward move. If it fails to complete ${\cal{P}}_{k+1}$ from ${\cal{A}}(y_j)$, a $\phi$-function is called for each $y_k\in U_y^l$ with ${\cal{A}}(y_k)={\cal{A}}(y_j)$.
Let ${\cal{P}}_{y_k}$ be the path using $y_k$ and let $(x_k,y_k)\in U^l$ be the path edge. The idea of the $\phi$-function is to complete ${\cal{P}}_{k+1}$ from ${\cal{A}}(y_j)$ to D using the partial path of ${\cal{P}}_{y_k}$ from ${\cal{A}}(y_j)$ to D and then try to complete the path ${\cal{P}}_{y_k}$ from ${\cal{A}}(x_k)$. The $\phi$-function does the following: remove $(x_k,y_k)$ from the set of used edges and try to complete ${\cal{P}}_{y_k}$ from ${\cal{A}}(x_k)$. We refer to this step as backward rewiring. The $\phi$-function will be executed at most $M$ times. \end{itemize} We refer the reader to \cite{aurore2009_combinatorial_algo_deterministic} for more details. The complexity of the algorithm is $O(M\cdot|{\cal{E}}|\cdot C^5)$ and its computational parts include the FindL, Match and rank computation functions each with complexity $O(k^4)$, $O(k^3)$ and $O(k^3)$ respectively. \subsection{Other Related Algorithms} Yazdi \& Savari \cite{sadegh2009_combinatorialstudyofdeterministic} developed another polynomial time algorithm with complexity $O(L^8 M^{12} h_0^3+L M^6 C h_0^4)$ (where $h_0$ denotes the maximum total number of inputs/outputs at any layer) by relating matroids with this problem. Most recently, Goemans, Iwata and Zenklusen \cite{geomans2009} proposed a strongly polynomial time algorithm for this problem, whose complexity is $O(L M^3 \log M)$, i.e., it does not depend upon $C$. \section{Improved Unicast Algorithm\label{ouralgo}} In this section we outline certain improvements that can be made to the algorithm of \cite{aurore2009_combinatorial_algo_deterministic}. In particular, we elaborate on several useful combinatorial aspects that allow us to reduce the overall time complexity. Moreover, these improvements also fix certain issues with the original algorithm \cite{aurore2009_combinatorial_algo_deterministic}. As mentioned previously, our proposed improvements apply over arbitrary finite fields. 
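Since the linear-independence tests throughout the algorithm (in FindL, Match and the rank computation function) reduce to computing matrix ranks over $\mathbb{F}_p$, the following minimal sketch shows Gaussian elimination modulo a prime $p$ (illustrative only; the function name is ours):

```python
def rank_mod_p(mat, p):
    # Rank of an integer matrix over F_p (p prime) via Gaussian elimination.
    m = [[v % p for v in row] for row in mat]
    rank, rows, cols = 0, len(m), len(m[0]) if m else 0
    for c in range(cols):
        # Find a pivot in column c at or below row `rank`.
        piv = next((r for r in range(rank, rows) if m[r][c]), None)
        if piv is None:
            continue
        m[rank], m[piv] = m[piv], m[rank]
        inv = pow(m[rank][c], p - 2, p)  # inverse via Fermat's little theorem
        m[rank] = [(v * inv) % p for v in m[rank]]
        for r in range(rows):
            if r != rank and m[r][c]:
                f = m[r][c]
                m[r] = [(a - f * b) % p for a, b in zip(m[r], m[rank])]
        rank += 1
    return rank
```

A set of edges $E$ is then LI exactly when rank$(T(E))=|E|$; for $p=2$ this reduces to the GF$(2)$ rank used in \cite{aurore2009_combinatorial_algo_deterministic}.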
\subsection{Improving the Original Algorithm} \label{improvement} The main idea in \cite{aurore2009_combinatorial_algo_deterministic} is to find path ${\cal{P}}_{k+1}$ in iteration $k+1$ while maintaining linear independence among all S-D paths in ${\cal{P}}$. In this process, previous paths may be rewired. However, there are cases where the original algorithm may fail to find the exact unicast capacity. We illustrate this using the following examples. We point out that these issues seem to have been resolved in \cite{fragouli2009_journal}. However, our proposed algorithm has several differences from \cite{fragouli2009_journal}, as discussed at the end of Section \ref{comparison}. \noindent \textbf{Improved Backward Rewiring} We use the example in Fig. \ref{fig12} to show that there are cases where the $\phi$-function above is insufficient, causing failures of the original algorithm. Then we illustrate how this can be fixed by introducing an improved backward rewiring mechanism. In Fig. \ref{fig:subfig1}, three LI S-D paths colored red, green and blue are found in the first three iterations of the algorithm. Let us see how the algorithm proceeds in iteration four. Suppose the algorithm has extended ${\cal{P}}_4$ along the purple path to $y_{20}$. The call $E_A({\cal{G}},{\cal{P}},{\cal{M}},N)$ fails since the only input $x_{24}$ of N is used by paths in ${\cal{P}}$. So the $\phi$-function is called on $y_{19}$ and then node I is explored in $E_A({\cal{G}},{\cal{P}},{\cal{M}},I)$, but since there is only one path from all inputs of I to D, $E_A({\cal{G}},{\cal{P}},{\cal{M}},I)$ fails, and finally the algorithm returns false and reports a unicast capacity of $3$. However, the unicast capacity of the network is $4$ and a capacity-achieving transmission scheme is given by the four S-D paths in Fig. \ref{fig:subfig2} in different colors. We propose the following improved backward rewiring mechanism to fix the problem above and to replace the original $\phi$-function.
Let A denote a node in the network (not to be confused with A in the figure). First, the backward rewiring is allowed on every node A whenever it is explored in finding ${\cal{P}}_{k+1}$. Second, the backward rewiring on node A includes the following operations. Let ${\cal{L}}(A)=l+1$. For any output $y$ of A with $y\in U_y^l$ and $y$ used by a path in ${\cal{P}}$ at the beginning of the current iteration (if such $y$ exists), (1) find one $x\in U_x^l$ such that $T(U_x^l-x,U_y^l-y)$ has full rank, (2) then rematch $(U_x^l-x,U_y^l-y)$ to generate a new set of $k$ LI used path edges in layer cut $l$ and (3) finally try to complete the partial path from ${\cal{A}}(x)$. Lemma \ref{lemmmm22} guarantees that for a given $y\in U_y^l$ there is always one such $x$ and also a set of edges\footnote{We use the notation $P_{y \rightarrow x}$ since this set of edges can be interpreted as an alternating path, as we show in Section \ref{useful}} $P_{y\rightarrow x}=$ $\{(x_1,y_1=y)$, $(x_1,y_2)$, $(x_2,y_2)$, $(x_2,y_3)$, $...(x_{m'-1},y_{m'})$, $(x_{m'}=x,y_{m'})\}$ $=\{e_1,e_2,...,e_{2m'-1}\}$ with $(x_i,y_i),1\leq i\leq m'$ being edges used by ${\cal{P}}$, which can be found with complexity $O(k^3)$ and $O(k^2)$, respectively. Along the alternating path $P_{y\rightarrow x}$, the rematching of the used path edges in layer cut $l$ can be done easily as follows: $U^l=U^l-e_1+e_2-e_3+...-e_{2m'-1}$. Consider applying our improved backward rewiring in the example in Fig. \ref{fig12}. It happens on the outputs of nodes N and I. Its application to N is straightforward. Let us look at its application at the output $y_{14}$ of node I. First it finds $x_6\in U_x^2$ with $T(U_x^2-x_6,U_y^2-y_{14})$ having full rank and the alternating path $P_{y_{14}\rightarrow x_6}=\{(x_7,y_{14}),(x_7,y_{13}),(x_6,y_{13})\}$. The rematching is done by $U^2=U^2-(x_7,y_{14})+(x_7,y_{13})-(x_6,y_{13})$. Then node $B={\cal{A}}(x_6)$ is explored.
Finally, the improved algorithm returns four LI S-D paths in Fig. \ref{fig:subfig2} as expected. \vspace{-1mm} \begin{figure}[htbp] \centering \subfigure[]{ \includegraphics[scale=0.35]{new_example3_sm.eps} \label{fig:subfig1} } \subfigure[]{ \includegraphics[scale=0.35]{new_example3_3_sm.eps} \label{fig:subfig2} } \caption[]{Illustrative example for improved backward rewiring} \label{fig12} \end{figure} \vspace{-1mm} \noindent \textbf{Improved Same-Layer Rewiring} We use the example in Fig. \ref{fig34} to show that the same-layer rewiring in the original algorithm is insufficient. Suppose the red S-D path is found in the first iteration. In iteration two, suppose that the algorithm first extends ${\cal{P}}_2$ along the green path to $x_4$. The same-layer rewiring from $x_4$ will mark $x_3$. Since $T(x_3+x_4,y_5+y_6)$ does not have full rank, the algorithm fails to complete ${\cal{P}}_2$ along the green path. It continues to extend ${\cal{P}}_2$ along the blue path to $x_5$. Since $x_3$ is marked, the same-layer rewiring from $x_5$ will not be applied to $x_3$ and the call $E_A({\cal{G}},{\cal{P}},{\cal{M}},C)$ fails. The algorithm finally returns false and reports a unicast capacity of $1$. However, the network has a unicast capacity of $2$, as indicated by the two paths in Fig. \ref{fig:subfig4}. We develop our improved same-layer rewiring to fix the above problem as follows. First, an input $x_k$ should not be blocked from being visited via same-layer rewiring from any input $x_i$ just because it has been visited via same-layer rewiring from another input $x_j$. Consider the example in Fig. \ref{fig34}. If we allow $x_3$ to be visited via same-layer rewiring from $x_5$, the algorithm may succeed in finding two LI paths as indicated in Fig. \ref{fig:subfig4}. However, this needs to be done carefully. Consider again the example in Fig. \ref{fig34}.
If we allow same-layer rewirings from all inputs, then we might run into an infinite loop of going from $x_5$ to $x_3$ via same-layer rewiring, going from $x_3$ to $x_5$ via same-layer rewiring, and so on. The goal of a same-layer rewiring operation in iteration $k+1$ is to ensure that every input that allows the algorithm to maintain $k$ LI S-D paths and can further extend the current partial path has the opportunity to be explored, while ensuring that we do not enter an infinite loop. In this work we achieve this by assigning a pair of labels to each node. \vspace{-1mm} \begin{figure}[htbp] \centering \subfigure[]{ \includegraphics[scale=0.35]{new_example2_sm.eps} \label{fig:subfig3} } \subfigure[]{ \includegraphics[scale=0.35]{new_example2_2_sm.eps} \label{fig:subfig4} } \caption[]{Illustrative example for improved same-layer rewiring} \label{fig34} \end{figure} \vspace{-1mm} Each node has a label that takes the values ``explored'' or ``unexplored''. The other label is a type that takes the values $1$ or $2$. We initialize the type of every node to be $1$ at the beginning of the iteration. A type $1$ input is allowed to initiate same-layer rewirings. An input that is explored via a same-layer rewiring from a type $1$ input $x_i$ is assigned as type $2$. A type $2$ input is not allowed to initiate same-layer rewirings, to avoid the possibility of an infinite loop. If an input $x$ (of either type) is explored via a backward rewiring, it is re-assigned as type $1$ (since $U_x^l$ and $U_y^l$ have changed since the last time $x$ was explored). Consider applying our improved same-layer rewiring in the example in Fig. \ref{fig34}. $x_3$ is first visited via a same-layer rewiring from $x_4$ (of type $1$), at which point it is assigned type $2$.
Later on, $x_3$ is revisited via a same-layer rewiring from $x_5$ (of type $1$), at which point it is assigned type $2$ again, so it will not initiate a same-layer rewiring to $x_5$; instead it only looks for a possible forward move, which happens along the edge $(x_3,y_5)$ (and the improved algorithm finally succeeds in finding $2$ LI paths as in Fig. \ref{fig:subfig4}). \subsection{Useful Combinatorial Features}\label{useful} In this subsection, we introduce several useful combinatorial features intrinsic to the problem, which are used later in our improved algorithm to reduce its complexity. In the following, we define a set $\Lambda_{x_i}$ similar to but more general than $L_{x_i}$ in the original algorithm by Amaudruz and Fragouli. $\Lambda_{x_i}$ applies to any finite field $\mathbb{F}_p$ associated with the ADT model for the network. \begin{definition}\label{defin10} Define $\Lambda_{x_i}$ as a subset of $U_x^{{\cal{L}}(x_i)}$ when $x_i$ is explored such that \begin{eqnarray}\label{eqn21} T(x_i,U_y^{{\cal{L}}(x_i)})=\sum_{x_j\in\Lambda_{x_i}}a_{x_i}^j\cdot T(x_j,U_y^{{\cal{L}}(x_i)}), \end{eqnarray} where $\{a_{x_i}\}$ are non-zero coefficients from $\mathbb{F}_p$. \end{definition} \begin{lemma}\label{lemma00} $\Lambda_{x_i}$ and the set $\{a_{x_i}\}$ are unique and can be found with complexity $O(k^3)$ in iteration $k+1$. \end{lemma} \begin{proof} Since $T(U_x^{{\cal{L}}(x_i)},U_y^{{\cal{L}}(x_i)})$ has full rank, $\Lambda_{x_i}$ and the set $\{a_{x_i}\}$ are unique and can be found with complexity $O(k^3)$ by using Gaussian elimination. \end{proof} Let ${\cal{G}}_{x_i}$ denote the bipartite graph containing nodes $U_x^{{\cal{L}}(x_i)}\cup U_y^{{\cal{L}}(x_i)}$ when $x_i$ is explored in iteration $k+1$ and ${\cal{G}}_{x_i}^{+}$ denote the bipartite graph containing nodes $\{x_i\}\cup U_x^{{\cal{L}}(x_i)}\cup U_y^{{\cal{L}}(x_i)}$. In the following, we refer to an alternating path as a path in which the edges belong alternately to the set of used edges and the set of unused edges.
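As a concrete illustration of the Gaussian-elimination computation behind Lemma \ref{lemma00}, the following is a minimal Python sketch (our own illustrative helper, not the implementation of \cite{myhomepage}; the function name and data layout are assumptions). It solves $a\cdot T(U_x^l,U_y^l)=T(x_i,U_y^l)$ over $\mathbb{F}_p$ for a prime $p$; the indices of the non-zero entries of $a$ give $\Lambda_{x_i}$.

```python
def lambda_coeffs(M, t, p):
    """Solve a * M = t over GF(p) for prime p, where M is the full-rank
    k x k matrix T(U_x^l, U_y^l) (one row per used input) and t is the
    row T(x_i, U_y^l).  Lambda_{x_i} = {j : a[j] != 0}.  Plain Gaussian
    elimination on the transposed system M^T a = t, complexity O(k^3)."""
    k = len(M)
    # Augmented matrix of M^T a = t: row c collects column c of M plus t[c].
    A = [[M[j][c] % p for j in range(k)] + [t[c] % p] for c in range(k)]
    for col in range(k):
        piv = next(r for r in range(col, k) if A[r][col])  # exists: M full rank
        A[col], A[piv] = A[piv], A[col]
        inv = pow(A[col][col], p - 2, p)        # inverse modulo the prime p
        A[col] = [v * inv % p for v in A[col]]
        for r in range(k):
            if r != col and A[r][col]:
                f = A[r][col]
                A[r] = [(A[r][c] - f * A[col][c]) % p for c in range(k + 1)]
    return [A[r][k] for r in range(k)]
```

For example, over $\mathbb{F}_3$ with $M=\bigl(\begin{smallmatrix}1&1\\0&1\end{smallmatrix}\bigr)$ and $t=(2,1)$ the solver returns $a=(2,2)$, so both used inputs belong to $\Lambda_{x_i}$.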
\begin{lemma}\label{lemma5} There is an alternating path from $x_i$ to any $x_j\in\Lambda_{x_i}$ in the graph ${\cal{G}}_{x_i}^{+}$ of the form $P_{x_i\rightarrow x_j}=$ $\{(x_i,y_1)$, $(x_1,y_1)$, $(x_1,y_2)$, $(x_2,y_2)$, $...(x_{m-1},y_m)$, $(x_m=x_j,y_m)\}$ with $(x_q,y_q),1\leq q\leq m$ being edges used by ${\cal{P}}$. The complexity for finding these $|\Lambda_{x_i}|$ paths is bounded by $O(k^2)$ in iteration $k+1$. \end{lemma} \begin{proof} Let ${\cal{L}}(x_i)=l$. Given rank$(T(U_x^l,U_y^l))=k$, for any $x_j\in\Lambda_{x_i}$, rank$(T(U_x^l,U_y^l))=$ rank$(T(U_x^l+x_i-x_j,U_y^l))=k$, where $k=|{\cal{P}}|$ in iteration $k+1$. Introduce an auxiliary output $y'$ and an edge $(x_j,y')$. It is easy to see that rank$(T(U_x^l+x_i,U_y^l+y'))=k+1$. Let ${\cal{G}}_{x_i}^{++}$ denote the bipartite graph containing nodes $\{x_i\}\cup U_x^l\cup U_y^l\cup\{y'\}$. Since $T(U_x^l,U_y^l)$ has full rank, the determinant of the Edmonds matrix of the bipartite graph ${\cal{G}}_{x_i}$ is not identically zero as a polynomial, so there is a size $k$ perfect matching in ${\cal{G}}_{x_i}$ \cite{rajeev1995_randomizedalgorithm}, with $M_1=U^l$ being one such matching. Similarly, given rank$(T(U_x^l+x_i,U_y^l+y'))=k+1$, there is a size $k+1$ perfect matching in ${\cal{G}}_{x_i}^{++}$. By Berge's Lemma \cite{berge1957}, there is an alternating path, relative to the matching $M_1$, from the unused input $x_i$ to the unused output $y'$, alternating between edges not in the current matching $M_1$ and edges in the current matching $M_1$, i.e., there is a path $P_{x_i\rightarrow y'}=$ $\{(x_i,y_1)$, $(x_1,y_1)$, $(x_1,y_2)$, $(x_2,y_2)$, $...(x_{m-1},y_m)$, $(x_m,y_m)$, $(x_m=x_j,y')\}$ with $(x_q,y_q),1\leq q\leq m$ being edges in $M_1$. This proves that there is an alternating path $P_{x_i\rightarrow x_j}=$ $\{(x_i,y_1)$, $(x_1,y_1)$, $(x_1,y_2)$, $(x_2,y_2)$, $...(x_{m-1},y_m)$, $(x_m=x_j,y_m)\}$ with $(x_q,y_q),1\leq q\leq m$ being edges in $M_1=U^l$.
Since the number of nodes in ${\cal{G}}_{x_i}^{+}$ is bounded by $O(k)$, the number of its edges is bounded by $O(k^2)$. Finding $P_{x_i\rightarrow x_j}$ for all $x_j\in\Lambda_{x_i}$ in ${\cal{G}}_{x_i}^{+}$ can be done with complexity $O(k^2)$ using a well-known graph traversal algorithm such as breadth-first search \cite{CRLS_2001}. \end{proof} \begin{lemma}\label{lemmmm22} Let rank$(T(U_x^l,U_y^l))=|U_x^l|=|U_y^l|=k+1$. Given any $y\in U_y^l$, there exists at least one $x\in U_x^l$ such that rank$(T(U_x^l-x,U_y^l-y))=k$. Moreover, there is an alternating path from $y$ to $x$ of the form $P_{y\rightarrow x}=$ $\{(x_1,y_1=y)$, $(x_1,y_2)$, $(x_2,y_2)$, $(x_2,y_3)$, $...(x_{m'-1},y_{m'})$, $(x_{m'}=x,y_{m'})\}$ with $(x_q,y_q),1\leq q\leq m'$ being edges in $U^l$. The complexity of finding one such $x$ is bounded by $O(k^3)$ and the complexity of finding path $P_{y\rightarrow x}$ is bounded by $O(k^2)$. \end{lemma} Due to lack of space, we skip the proof here. The proof of the existence of $P_{y\rightarrow x}$ is similar to that of Lemma \ref{lemma5}: one introduces an auxiliary input $x'$ and output $y'$ and edges $(x',y),(x,y')$, leading to rank$(T(U_x^l+x',U_y^l+y'))=k+2$. Lemma \ref{lemma2} develops an equivalent but computationally simpler method to speed up the rank computation when $x_i$ is explored, given $\Lambda_{x_i}$ and the set of associated coefficients $\{a_{x_i}\}$. \begin{lemma}\label{lemma2} Let $T(U_x^l,U_y^l)$ have full rank $k$. The rank computation for checking rank$(T(U_x^l+x_i,U_y^l+y))=k$ or $k+1$ for any $x_i\not\in U_x^l$, ${\cal{L}}(x_i)=l$, $y\not\in U_y^l$ and $(x_i,y)\in{\cal{E}}$ is equivalent to checking whether $T(x_i,y)=\sum_{x_j\in\Lambda_{x_i}}a_{x_i}^j\cdot T(x_j,y)$ holds, with complexity bounded by $O(k)$ given $\Lambda_{x_i}$ and $\{a_{x_i}\}$.
\end{lemma} \begin{proof} Given $T(U_x^l,U_y^l)$ has full rank $k$, rank$(T(U_x^l+x_i,U_y^l+y))=k$ is equivalent to $T(x_i,U_y^l+y)=\sum_{x_j\in\Lambda_{x_i}'}a_{x_i'}^{j}\cdot T(x_j,U_y^l+y)$ for some $\Lambda_{x_i}'\subseteq U_x^l$ and $\{a_{x_i'}\}$. Since $\Lambda_{x_i}\subseteq U_x^l$ and the set $\{a_{x_i}\}$ are unique for which $T(x_i,U_y^l)=\sum_{x_j\in\Lambda_{x_i}}a_{x_i}^j\cdot T(x_j,U_y^l)$ holds (by Lemma \ref{lemma00}), we must have $\Lambda_{x_i}'=\Lambda_{x_i}$ and $\{a_{x_i}\}=\{a_{x_i'}\}$. It follows that rank$(T(U_x^l+x_i,U_y^l+y))=k$ is equivalent to $T(x_i,y)=\sum_{x_j\in\Lambda_{x_i}}a_{x_i}^j\cdot T(x_j,y)$. \end{proof} \begin{lemma} \label{lemmaappen16} Let $x'\in\Lambda_{x_i}$. If $x'$ is explored via a same-layer rewiring from $x_i$, then $\Lambda_{x'}=\Lambda_{x_i}+x_i-x'$ and the set of associated coefficients $\{a_{x'}\}$ can be computed from $\{a_{x_i}\}$ with complexity $O(k)$ in iteration $k+1$. \end{lemma} \begin{proof} Let ${\cal{L}}(x_i)=l$. Note that when $x'$ is explored via a same-layer rewiring from $x_i$, $U_x^l$ is updated as $U_x^l-x'+x_i$, $U_y^l$ is unchanged and $T(U_x^l-x'+x_i,U_y^l)$ has full rank. By definition, \begin{eqnarray}\label{eqn81} T(x_i,U_y^l)=\sum_{x_j\in\Lambda_{x_i}\setminus x'}a_{x_i}^j\cdot T(x_j,U_y^l)+a_{x_i}'\cdot T(x',U_y^l), \end{eqnarray} where $\{a_{x_i}\}$ are non-zero coefficients from $\mathbb{F}_p$. So we have \begin{eqnarray}\label{eqn82} T(x',U_y^l)=-\sum_{x_j\in\Lambda_{x_i}\setminus x'}\frac{a_{x_i}^j}{a_{x_i}'}\cdot T(x_j,U_y^l)+\frac{1}{a_{x_i}'}\cdot T(x_i,U_y^l). \end{eqnarray} Since $T(U_x^l-x'+x_i,U_y^l)$ has full rank, equation (\ref{eqn82}) is the unique way that the row $T(x',U_y^l)$ can be expressed as a linear combination of the rows in this matrix. So we conclude that $\Lambda_{x'}=\Lambda_{x_i}+x_i-x'$ and the set of associated coefficients $\{a_{x'}\}$ can be computed from $\{a_{x_i}\}$ with complexity $O(k)$.
Note that in iteration $k+1$, $|\Lambda_{x_i}|\leqslant|U_x^l|=k$. \end{proof} \subsection{Reducing the Complexity and the Overall Algorithm} As mentioned before, the computational parts of the algorithm in \cite{aurore2009_combinatorial_algo_deterministic} are the FindL (finding $L_{x_i}$), Match (updating $U$ after a same-layer rewiring from $x_i$) and rank computation functions. Now we explain how the combinatorial features from Section \ref{useful} can be used to further reduce the complexity of the unicast algorithm. First, Lemma \ref{lemma00} shows that $\Lambda_{x_i}$ and the set of associated coefficients $\{a_{x_i}\}$ for any type $1$ input $x_i$ can be computed with complexity $O(k^3)$ in iteration $k+1$. Lemma \ref{lemmaappen16} shows that for any type $2$ input $x'$, $x'\in\Lambda_{x_i}$, that is explored via a same-layer rewiring from a type $1$ input $x_i$, $\Lambda_{x'}$ and the set of associated coefficients $\{a_{x'}\}$ can be computed with complexity $O(k)$ given $\Lambda_{x_i}$ and the set of associated coefficients $\{a_{x_i}\}$. Second, based on Lemma \ref{lemma5}, the matching or updating of $U$ after same-layer rewirings from any type $1$ input $x_i$ can be done with complexity $O(k^2)$ in iteration $k+1$ as follows. First, find all $|\Lambda_{x_i}|$ paths $P_{x_i\rightarrow x_j}$, $x_j\in\Lambda_{x_i}$, with complexity $O(k^2)$. Let $P_{x_i\rightarrow x_j}=$ $\{(x_i,y_1)$, $(x_1,y_1)$, $(x_1,y_2)$, $...(x_{m-1},y_m)$, $(x_m=x_j,y_m)\}$ $=\{e_1,e_2,...,e_{2m}\}$ with $(x_q,y_q),1\leq q\leq m$ being edges used by ${\cal{P}}$ for any $x_j\in\Lambda_{x_i}$. Then the update of $U^{{\cal{L}}(x_i)}$ after a same-layer rewiring from $x_i$ to $x_j$ can be done by $U^{{\cal{L}}(x_i)}\leftarrow U^{{\cal{L}}(x_i)}+e_1-e_2+...-e_{2m}$.
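The rematching along an alternating path amounts to a single symmetric difference of edge sets, since consecutive edges of the path belong alternately to $U$ and to its complement. A minimal Python sketch (our own illustrative helper, not the paper's implementation):

```python
def rematch(U, alt_path):
    """Apply U <- U + e1 - e2 + e3 - ... along an alternating path.
    Edges of alt_path alternate between U and its complement, so the
    signed updates collapse to one symmetric difference, O(len(alt_path))."""
    in_U = alt_path[0] in U
    for e in alt_path:                 # sanity check: membership strictly alternates
        assert (e in U) == in_U
        in_U = not in_U
    return U.symmetric_difference(alt_path)
```

For instance, with $U^2=\{(x_7,y_{14}),(x_6,y_{13})\}$ and $P_{y_{14}\rightarrow x_6}=\{(x_7,y_{14}),(x_7,y_{13}),(x_6,y_{13})\}$ as in the backward-rewiring example of Section \ref{improvement}, the updated used edge set is $\{(x_7,y_{13})\}$.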
Third, Lemma \ref{lemma2} shows that the rank computation in a forward move from any $x_i$ (either of type $1$ or of type $2$), $x_i\not\in U_x^l$, ${\cal{L}}(x_i)=l$, for checking rank$(T(U_x^l+x_i,U_y^l+y))=k$ or $k+1$ for any $y\not\in U_y^l$ and $(x_i,y)\in{\cal{E}}$ is equivalent to checking whether $T(x_i,y)=\sum_{x_j\in\Lambda_{x_i}}a_{x_i}^j\cdot T(x_j,y)$ holds, with complexity bounded by $O(k)$ given $\Lambda_{x_i}$ and $\{a_{x_i}\}$ in iteration $k+1$. Finally, as mentioned before, in our improved backward rewiring from an output $y$, finding one $x$ with $T(U_x^l-x,U_y^l-y)$ having full rank and rematching $(U_x^l-x,U_y^l-y)$ can be done with complexity $O(k^3)$ in iteration $k+1$, as guaranteed by Lemma \ref{lemmmm22}. Table \ref{table1} gives an overall description of our improved unicast algorithm, which is implemented as a function $E_A({\cal{G}},{\cal{P}},{\cal{M}},A)$ whose inputs are the same as in the original algorithm. A complete software implementation of our improved unicast algorithm can be found in \cite{myhomepage}.
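The $O(k)$ rank test of Lemma \ref{lemma2} reduces to one linear-combination check per candidate edge. A hedged Python sketch (the function name and argument layout are ours; `t_cols[j]` stands for $T(x_j,y)$ with $x_j$ ranging over $\Lambda_{x_i}$):

```python
def forward_move_succeeds(t_xi_y, coeffs, t_cols, p):
    """Lemma-2 style O(k) test.  With T(U_x^l, U_y^l) of full rank k and
    coeffs the coefficients {a} expressing row T(x_i, U_y^l) through the
    rows indexed by Lambda_{x_i}, adding edge (x_i, y) raises the rank
    to k+1 -- i.e. the forward move succeeds -- exactly when
        T(x_i, y) != sum_j a_j * T(x_j, y)   (mod p)."""
    return t_xi_y % p != sum(a * t for a, t in zip(coeffs, t_cols)) % p
```

This replaces a fresh $O(k^3)$ rank computation by a dot product of length $|\Lambda_{x_i}|\leq k$, which is where the $C^5\to C^4$ improvement in the overall bound comes from.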
\begin{table}[htbp] \caption{Pseudo-code for our improved algorithm} \vspace{-4mm} \begin{center} \resizebox{1.0\columnwidth}{!}{ \begin{tabular}{l} \hline \hline \{(T,F)\}=$E_A({\cal{G}},{\cal{P}},{\cal{M}},A)$\\ $\left\{\begin{array}{l} {\cal{M}}(A)=T, {\cal{L}}(A)=l\\ U^l=\{{\mbox{used edges in layer cut }}l\}, U_x^l=\{x_i\in U^l\}, U_y^l=\{y_j\in U^l\}\\ {\mbox{for any }}x: {\cal{A}}(x)=A, x\not\in U_x^l, {\cal{M}}(x)=F, {\mbox{GetType}}(x)=2\\ \left\{\begin{array}{l} {\cal{M}}(x)=T\\ {\mbox{for any }}y: (x,y)\in {\cal{E}}, y\not\in U_y^l, {\cal{M}}({\cal{A}}(y))=F {\mbox{ //\emph{forward move}}}\\ \left\{\begin{array}{l} {\mbox{if }}T(x,y)\neq\sum_{x_j\in\Lambda_{x}}a_x^j\cdot T(x_j,y)\\ \left\{\begin{array}{l} {\mbox{Update}}({\cal{P}}); U^l\leftarrow U^l+e\\ {\mbox{if }}{\cal{A}}(y)=D, {\mbox{ return (T)}}\\ {\mbox{else if }}E_A({\cal{G}},{\cal{P}},{\cal{M}},{\cal{A}}(y))=T, {\mbox{ return(T)}}\\ U^l\leftarrow U^l-e; {\mbox{ Restore}}({\cal{P}})\\ \end{array}\right.\\ \end{array}\right.\\ \end{array}\right.\\ {\mbox{for any }}x: {\cal{A}}(x)=A, x\not\in U_x^l, {\cal{M}}(x)=F, {\mbox{GetType}}(x)=1\\ \left\{\begin{array}{l} {\cal{M}}(x)=T\\ {\mbox{Compute }}\Lambda_{x}{\mbox{ and the set of coefficients }}\{a_x\}\\ {\mbox{for any }}y: (x,y)\in {\cal{E}}, y\not\in U_y^l, {\cal{M}}({\cal{A}}(y))=F{\mbox{ //\emph{forward move}}}\\ \left\{\begin{array}{l} {\mbox{if }}T(x,y)\neq\sum_{x_j\in\Lambda_{x}}a_x^j\cdot T(x_j,y)\\ \left\{\begin{array}{l} {\mbox{Update}}({\cal{P}}); U^l\leftarrow U^l+e \\ {\mbox{if }}{\cal{A}}(y)=D, {\mbox{ return (T)}}\\ {\mbox{else if }}E_A({\cal{G}},{\cal{P}},{\cal{M}},{\cal{A}}(y))=T, {\mbox{ return(T)}}\\ U^l\leftarrow U^l-e; {\mbox{ Restore}}({\cal{P}})\\ \end{array}\right.\\ \end{array}\right.\\ {\mbox{Find all paths }}P_{x\rightarrow x_j}{\mbox{ for all }}x_j\in\Lambda_{x}\\ {\mbox{for any }}x_j: x_j\in\Lambda_{x}{\mbox{ with }}P_{x\rightarrow x_j}=\{e_1,e_2,...e_{2m}\}=\\
\{(x,y_1),(x_1,y_1),(x_1,y_2),...(x_m=x_j,y_m)\}{\mbox{ //\emph{same-layer rewiring}}}\\ \left\{\begin{array}{l} {\cal{M}}(x_j)=F; {\mbox{ SetType}}(x_j,2);\\ \Lambda_{x_j}=\Lambda_{x}-x_j+x\\ {\mbox{compute }}\{a_{x_j}\}{\mbox{ based on }}\{a_x\}{\mbox{ according to Lemma \ref{lemmaappen16}}}\\ {\mbox{Update}}({\cal{P}}); U^l\leftarrow U^l+e_1-e_2+...+e_{2m-1}-e_{2m}\\ {\mbox{if }}E_A({\cal{G}},{\cal{P}},{\cal{M}},{\cal{A}}(x_j))=T, {\mbox{ return(T)}}\\ U^l\leftarrow U^l-e_1+e_2-...-e_{2m-1}+e_{2m}; {\mbox{ Restore}}({\cal{P}})\\ \end{array}\right.\\ \end{array}\right.\\ {\mbox{for any }}y: {\cal{A}}(y)=A, y\in U_y^{l-1}, {\cal{M}}(y)=F \\ {\mbox{and }}y {\mbox{ is used by }}{\cal{P}} {\mbox{ at the beginning of the iteration}}{\mbox{ //\emph{backward rewiring}}}\\ \left\{\begin{array}{l} {\cal{M}}(y)=T\\ {\mbox{find one }}x\in U_x^{l-1}{\mbox{ with }}T(U_x^{l-1}-x,U_y^{l-1}-y){\mbox{ having full rank}}\\ {\mbox{and find }}P_{y\rightarrow x}=\{e_1,e_2,...e_{2m'-1}\}\\ =\{(x_1,y_1=y),(x_1,y_2),(x_2,y_2),...(x_{m'}=x,y_{m'})\}\\ {\cal{M}}(x)=F, {\mbox{SetType}}(x,1)\\ {\mbox{Update}}({\cal{P}}); U^{l-1}\leftarrow U^{l-1}-e_1+e_2-...-e_{2m'-1}\\ {\mbox{if }}E_A({\cal{G}},{\cal{P}},{\cal{M}},{\cal{A}}(x))=T,{\mbox{ return (T)}}\\ U^{l-1}\leftarrow U^{l-1}+e_1-e_2+...+e_{2m'-1};{\mbox{ Restore}}({\cal{P}})\\ \end{array}\right.\\ {\mbox{return (F)}}\\ \end{array}\right.$\\ \hline \hline \end{tabular} } \end{center} \label{table1} \end{table} \vspace{-3mm} \subsection{Complexity Analysis and Comparison with Existing Results}\label{comparison} To analyze the complexity, we first bound the total number of inputs of different types visited in each iteration of the algorithm. Note that once a node or input/output is visited/explored, it is labeled as explored (by ${\cal{M}}$) and is not allowed to be explored again unless it is relabeled as unexplored.
At the beginning of each iteration, all inputs are initialized as unexplored type $1$ inputs, whose number is bounded by $O(|{\cal{V}}_x|)$ (where ${\cal{V}}_x$ denotes the set of all inputs in the network). In each backward rewiring operation, one input will be assigned as an unexplored type $1$ input. From the definition of backward rewiring, the total number of valid outputs that initiate a backward rewiring is no more than $|{\cal{V}}_x|$, which means the total number of backward rewiring operations is bounded by $O(|{\cal{V}}_x|)$. So the total number of type $1$ inputs being visited is bounded by $O(|{\cal{V}}_x|)$ in each iteration. In each same-layer rewiring operation from a type $1$ input, one input will be assigned as an unexplored type $2$ input. The total number of same-layer rewiring operations from any type $1$ input $x$ is no more than $|\Lambda_{x}|\leqslant k$ in iteration $k+1$. So the total number of type $2$ inputs being visited is bounded by $O(k|{\cal{V}}_x|)$ in iteration $k+1$. The worst-case computations in iteration $k+1$ are no more than the following: (1) for each type $1$ input $x_i$, compute $\Lambda_{x_i}$ and $\{a_{x_i}\}$ with complexity $O(k^3)$ and find all paths $P_{x_i\rightarrow x_j}$ for all $x_j\in\Lambda_{x_i}$ with complexity $O(k^2)$, (2) for each type $2$ input $x_j$, compute $\Lambda_{x_j}$ and $\{a_{x_j}\}$ with complexity $O(k)$, (3) for each type $1$ or type $2$ input $x$, compute the rank of $T(U_x^l+x,U_y^l+y)$ for all $y\not\in U_y^l$, $(x,y)\in{\cal{E}}$ with complexity $O(k)$ given $\Lambda_x$ and $\{a_x\}$ (for any $x$, the total number of such $y$ is no larger than $d$) and (4) in each backward rewiring from a certain $y$, find one $x$ with $T(U_x^l-x,U_y^l-y)$ having full rank and rematch $(U_x^l-x,U_y^l-y)$ with complexity $O(k^3)$. Note that $k\leqslant C$. It follows that the total complexity of our improved algorithm is bounded by $O(|{\cal{V}}_x|\cdot C^4+d\cdot|{\cal{V}}_x|\cdot C^3)$.
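For completeness, the bookkeeping behind this bound can be sketched as follows (a sketch only; constants and boundary cases such as $k=0$ are absorbed into the $O$-notation):

```latex
\begin{eqnarray*}
\underbrace{O(|{\cal{V}}_x|)\cdot O(k^3)}_{\mbox{type 1: }\Lambda_{x_i},\{a_{x_i}\},P_{x_i\rightarrow x_j}}
+\underbrace{O(k|{\cal{V}}_x|)\cdot O(k)}_{\mbox{type 2: }\Lambda_{x_j},\{a_{x_j}\}}
+\underbrace{O(k|{\cal{V}}_x|)\cdot O(dk)}_{\mbox{rank checks}}
+\underbrace{O(|{\cal{V}}_x|)\cdot O(k^3)}_{\mbox{backward rewirings}}
=O(|{\cal{V}}_x|k^3+d|{\cal{V}}_x|k^2)
\end{eqnarray*}
```

per iteration; summing over $k=0,1,\dots,C-1$ yields the stated total bound $O(|{\cal{V}}_x|\cdot C^4+d\cdot|{\cal{V}}_x|\cdot C^3)$.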
Due to lack of space, we skip the proof of correctness for our improved algorithm; a complete and detailed proof can be found in \cite{myhomepage}. Table \ref{table2} compares different algorithms for finding the unicast capacity of linear deterministic wireless relay networks, especially their complexity. \begin{table}[htbp] \caption{Comparison of algorithm complexity} \vspace{-4mm} \begin{center} \resizebox{1.0\columnwidth}{!}{ \begin{threeparttable} \begin{tabular}{l|p{32mm}|p{58mm}} \hline \hline Algorithm&Complexity{\tnote{$\ast$}}&Notes\\ \hline \cite{aurore2009_combinatorial_algo_deterministic}&$O(M |{\cal{E}}| C^5)$&Always higher than ours\\ \cite{fragouli2009_journal}&$O(d|{\cal{V}}_x|C^5+|{\cal{V}}_y|C^5)$&especially when $C$ is large\\ \hline \cite{sadegh2009_combinatorialstudyofdeterministic}&$O(L^8 M^{12} h_0^3+L M^6 C h_0^4)$&Always higher than ours, especially when $M$ or $L$ is large\\ \hline \cite{geomans2009}&$O(L^{1.5} M^{3.5} \log(ML))$ or $O(L M^3 \log M)$&Straightforward comparison is not possible. \cite{geomans2009} will have lower complexity if $C$ is much larger than $M$\\ \hline Our work&$O(|{\cal{V}}_x| C^4+d |{\cal{V}}_x| C^3)$&-\\ \hline \hline \end{tabular} \begin{tablenotes} \item[$\ast$]{Denote $C$ as the unicast capacity, $M$ the maximum number of nodes in each layer, $L$ the total number of layers, $d$ the maximum number of inputs of any node, $h_0$ the maximum number of inputs/outputs at any layer, $|{\cal{E}}|$ the total number of edges, $|{\cal{V}}_x|$ the total number of inputs and $|{\cal{V}}_y|$ the total number of outputs.
Note that $M\geq d$ (since by definition each input can have at most one connection to each node in the next layer), $|{\cal{E}}|\geq|{\cal{V}}_x|$ (because of broadcasting) and $h_0\geq C$ (by definition).} \end{tablenotes} \end{threeparttable} } \end{center} \label{table2} \end{table} \vspace{-3mm} We note that the issues with the original algorithm \cite{aurore2009_combinatorial_algo_deterministic} mentioned in Section \ref{improvement} have been fixed in \cite{fragouli2009_journal}. The main difference between our improved algorithm and the algorithm in \cite{fragouli2009_journal} is that our improved algorithm utilizes the useful combinatorial features intrinsic to the problem described in Section \ref{useful}, which lead to reduced complexity. The other differences concern the same-layer rewiring and the backward rewiring. In \cite{fragouli2009_journal}, the same-layer rewiring starts on each input at most once (using the ML indicator function), while our algorithm allows multiple same-layer rewirings starting from certain inputs (that is, if an input is explored via a backward rewiring, it is reassigned as a type $1$ input and is allowed to initiate same-layer rewirings again). In \cite{fragouli2009_journal}, the backward rewiring (implemented in the $\phi$-function there) allows exploration of every $x_k\in U_x$ such that the resulting adjacency matrix of used path edges still remains full rank, while our algorithm only finds one such $x_k\in U_x$ and explores it. Note that it can be verified that the combined effects of the different same-layer and backward rewirings in the two algorithms are the same. \section{Conclusions\label{conclusion}} An improved algorithm for finding the unicast capacity of linear deterministic wireless networks is presented. Our algorithm improves upon the original algorithm by Amaudruz \& Fragouli. We amend the original algorithm so that it finds the unicast capacity correctly for any given deterministic network.
Moreover, we fully exploit several useful combinatorial features intrinsic to the problem, which lead to reduced complexity. Our improved algorithm applies to any size of finite field associated with the ADT model defining the network, and it proves very competitive in complexity when compared with other algorithms for the same problem. \bibliographystyle{IEEEtran}
\section{Introduction} A \textit{numerably contractible space} is a topological space $X$ which admits a numerable cover by sets $U\subset X$ for which the inclusions are nullhomotopic. Numerably contractible spaces are of importance in homotopy theory, as we will explain. Some important weak homotopy equivalence are strict ones if the spaces involved are numerably contractible. Let $k\mathop{\rm Top}^\ast$ denote the category of based $k$-spaces. For $X$ in $k\mathop{\rm Top}^\ast$ let $JX$ denote the James construction on $X$ in $k\mathop{\rm Top}^\ast$, i.e. the based free topological monoid on $X$. In \cite[(17.3)]{DKP} and \cite[Cor. 3.4]{Pup1} D. Puppe proved: \begin{theo}\label{1_1} If $X$ is $h$-wellpointed, path-connected and numerably contractible, then $JX\simeq\Omega\Sigma X$. \end{theo} For $X$ in $k\mathop{\rm Top}^\ast$ let $\mathbb{C}^\ast_n(X)$ denote the based free algebra over the operad $\mathcal{C}_n$ of little $n$-cubes. P. May constructed a weak equivalence $\mathbb{C}^\ast_n(X)\to\Omega^n\Sigma^nX$ for a path-connected $X$ \cite{May}. In his thesis H. Meiwes proved \cite{Meiwes}. \begin{theo}\label{1_2} If $X$ is as in Theorem \ref{1_1}, then May's map $\mathbb{C}^\ast_n(X)\to\Omega^n\Sigma^nX$ is a genuine homotopy equivalence. \end{theo} In the context of these theorems we share D. Puppe's point of view \cite{Pup1}: ``Frequently a weak homotopy equivalence is considered as good as a genuine one, because for spaces having the homotopy type of a $CW$-complex there is no difference and most interesting spaces in algebraic topology are of that kind. 
I am not going to argue against this because I agree with it, but I do think that the methods by which we establish the genuine homotopy equivalences give some new insight into homotopy theory.'' Indeed, constructing homotopy equivalences between spaces which are not necessarily of the homotopy type of $CW$-complexes deprives one of the algebraic side of homotopy theory, so that these constructions have a different, more geometric flavor. We do not know whether A. Dold introduced the notion of a numerably contractible space, but he was certainly among the first ones to work with them. Following J. Smrekar \cite{Smrekar}, we therefore also call such a space a Dold space. In his paper ``Partitions of Unity in the Theory of Fibrations'' Dold proved \cite[Thm. 6.3]{Dold}: \begin{theo}\label{1_3} Given a commutative diagram $$ \xymatrix{ E\ar[rr]^f\ar[rd]_p && E' \ar[ld]^{p'} \\ & B } $$ such that $p$ and $p'$ have the weak covering homotopy property and $B$ is a Dold space, then $f$ is a fiberwise homotopy equivalence iff its restriction to every fiber is a homotopy equivalence. \end{theo} As simple consequences of this result one has the following strengthened versions of well-known results about homotopy pullbacks (for simplicity we state the results for commutative squares; they also hold for homotopy commutative squares with a specified homotopy). \begin{prop}\label{1_4} Let $$ \xymatrix{ X_1 \ar[r]^u\ar[d]_f & Y_1 \ar[d]^g \\ X_0 \ar[r]^v & Y_0 } $$ be a homotopy pullback. If $v$ is a homotopy equivalence, so is $u$. Conversely, if $u$ is a homotopy equivalence, $g$ induces a surjection of sets of path-components, and $Y_0$ is a Dold space, then $v$ is a homotopy equivalence.
\end{prop} \begin{prop}\label{1_5} Given a commutative diagram, $$ \xymatrix{\ X_2 \ar[r]^{f'} \ar[d]^w \ar @{} [dr] |{\textrm{I}}& X_1 \ar[r]^f \ar[d]^v \ar @{} [dr] |{\textrm{II}} & X_0 \ar[d]^u \\ Y_2 \ar[r]^{g'} & Y_1 \ar[r]^g & Y_0 } $$ \begin{enumerate} \item Suppose that II is a homotopy pullback. Then I is a homotopy pullback iff the combined square I+II is a homotopy pullback. \item Suppose that I and I+II are homotopy pullbacks, that $g'$ induces a surjection of sets of path-components and that $Y_1$ is a Dold space; then II is a homotopy pullback. \end{enumerate} \end{prop} \begin{prop}\label{1_6} Let $$ \xymatrix{ X_1 \ar[r]^f\ar[d]_u & X_0 \ar[d]^v \\ Y_1 \ar[r]_g & Y_0 } $$ be a commutative square and $F(f,x)$ the homotopy fiber of $f$ over $x\in X_0$. \begin{enumerate} \item If the square is a homotopy pullback, then the induced map $$ F(f,x)\to F(g,v(x)) $$ is a homotopy equivalence for each $x\in X_0$. \item If for each $x\in X_0$ the map $F(f,x)\to F(g,v(x))$ is a homotopy equivalence and $X_0$ is a Dold space, the square is a homotopy pullback. \end{enumerate} \end{prop} We also have the following improved version of M. Mather's second cube theorem \cite{Mather}. \begin{prop}\label{1_7} Given a commutative cube diagram whose vertical faces are homotopy pullbacks, $$ \xymatrix@=1.5ex{ A_0 \ar[rrrr]\ar[dd] \ar[drr] &&&& A_1 \ar[dd]|!{[d];[d]}\hole \ar[drr] \\ && A_2 \ar[rrrr]\ar[dd] &&&& A_3 \ar[dd]^f \\ B_0 \ar[rrrr]|!{[rr];[rr]}\hole \ar[drr] &&&& B_1 \ar[drr] \\ && B_2 \ar[rrrr] &&&& B_3 } $$ then \begin{enumerate} \item the top face is a homotopy pushout if the bottom face is a homotopy pushout. \item the bottom face is a homotopy pushout if the top face is a homotopy pushout, $f$ induces a surjection on path-components, and $B_3$ is a Dold space. \end{enumerate} \end{prop} Homotopy pushouts and pullbacks have become increasingly important tools in homotopy theory and homological algebra. E.g.
there exist comparatively simple proofs of Theorem \ref{1_1} based on Propositions \ref{1_4} to \ref{1_6} (unpublished). Another example is the following result of G. Allaud \cite{Allaud}, which is an immediate consequence of Propositions \ref{1_4} and \ref{1_5}. \begin{prop}\label{1_8} Let $f:X\to Y$ be a based map of path-connected Dold spaces such that $\Omega f: \Omega X\to\Omega Y$ is a homotopy equivalence. Then $f$ is a homotopy equivalence. \end{prop} So we feel that it is time for a more systematic investigation of Dold spaces. In Section 2 we will recall the definition of Dold spaces and some facts about numerable covers. In Section 3 we will list a number of elementary facts about Dold spaces. Section 4 is the main part of the paper: we will study properties of simplicial Dold spaces and their realizations. We give a characterization of wellpointed connected Dold spaces and use it to derive results about the realization of maps of simplicial spaces which are dimensionwise fibrations. In Section 5 we apply these results to free algebras over topological operads. We close the paper with a section on counterexamples. Some of our results are well-known, some are known to specialists but have not appeared in print, and some are new. We derive most of the well-known facts as special cases of more general results. We have tried to give references as well as possible, but we are not sure that we always found the original source. We are indebted to A. Hatcher for bringing Example \ref{6_1} to our attention and to J. Smrekar for suggesting the name ``Dold space'' and for e-mail exchange about function space properties of Dold spaces. The latter turned out to be so restrictive that we did not include them in this paper. In fact, the category of Dold spaces is rather badly behaved with respect to function spaces. In particular, loop spaces of Dold spaces need not be Dold spaces (see Example \ref{6_3}).
\section{Dold covers} In this section we recall the basic definitions and list results related to coverings. Let $\{a_j; j\in J\}$ denote a set of elements of $\mathbb{R}_+=\{x\in\mathbb{R}; x\ge 0\}$. We define $$ \sum\limits_{j\in J} a_j=\sup\left\{\sum\limits_{j\in E} a_j;\ \ E \subset J \textrm{ finite}\right\} $$ \begin{defi}\label{2_1} A \textit{partition of unity} on a space $X$ is a set of maps $\{f_j:X\to[0,1]; j\in J\}$ such that $$ \sum\limits_{j\in J}f_j(x)=1\quad \textrm{ for all } x\in X. $$ \end{defi} \begin{defi}\label{2_2} Let $X$ be a space. A subset $A\subset X$ is called \textit{ambiently contractible} if the inclusion $A\to X$ is nullhomotopic. \end{defi} \begin{defi}\label{2_3} Let $\mathcal{U}=\{U_\alpha; \alpha\in A\}$ be a cover of $X$. \begin{enumerate} \item $\mathcal{U}$ is called \textit{locally finite} if each $x\in X$ has a neighborhood $V$ such that $V\cap U_\alpha\neq\emptyset$ for only finitely many $\alpha\in A$. \item A \textit{numeration of} $\mathcal{U}$ is a partition of unity $\{f_\alpha; \alpha\in A\}$ of $X$ such that $\{\mathop{\rm Supp} (f_\alpha); \alpha\in A\}$ is locally finite and $\mathop{\rm Supp} (f_\alpha)\subset U_\alpha$ for all $\alpha\in A$. (Recall that the support $\mathop{\rm Supp}(f)$ of a map $f:X\to I$ is the closure of $f^{-1}(]0,1])$.) If $\mathcal{U}$ admits a numeration, it is called a \textit{numerable cover}. \item $\mathcal{U}$ is called an \textit{ambiently contractible cover} if each $U_\alpha$ is ambiently contractible. \item $\mathcal{U}$ is called a \textit{Dold cover} if it is numerable and ambiently contractible. \end{enumerate} \end{defi} \begin{defi}\label{2_4} A space $X$ is called \textit{ambiently locally contractible} if it has an ambiently contractible open cover. We call $X$ \textit{numerably contractible} or a \textit{Dold space} if it has a Dold cover.
\end{defi} \begin{rema}\label{2_4a} In the literature the term ``weakly contractible'' is used for what we call ``ambiently contractible''. \end{rema} \begin{exa_n}\label{2_5} \begin{enumerate} \item Each discrete space is a Dold space. \item Each contractible space is a Dold space. \item Each paracompact ambiently locally contractible space is a Dold space. \item By \cite[Thm. II. 3]{DE} each paracompact $LEC$ space is a Dold space. \item Each $CW$-complex is a Dold space (see \ref{3_8}). The converse does not hold (see \ref{6_1}). \end{enumerate} \end{exa_n} We will make use of the following results. \begin{nr}\label{2_6} \textbf{Lemma} \cite[p. 347]{tD}: If $\{f_j; j\in J\}$ is a partition of unity on $X$, then $\{f^{-1}_j(]0,1]); j\in J\}$ is a numerable cover of $X$. \end{nr} \begin{nr}\label{2_7} \textbf{Lemma} \cite[p. 349]{tD}: Let $\{f_j:X\to\mathbb{R}_+; j\in J\}$ be a set of maps such that $\mathcal{U}=\{f^{-1}_j(]0,\infty[); j\in J\}$ is a locally finite cover of $X$. Then $\mathcal{U}$ is a numerable cover. \end{nr} \begin{coro}\label{2_8} The following are equivalent: \begin{enumerate} \item $X$ is a Dold space. \item $X$ has a partition of unity $\{f_j; j\in J\}$ such that $\{f^{-1}_j(]0,1]); j\in J\}$ is an ambiently contractible open cover of $X$. \item There is a set of maps $\{f_j:X\to\mathbb{R}_+; j\in J\}$ such that $\{f^{-1}_j(]0,\infty[); j\in J\}$ is a locally finite ambiently contractible cover of $X$. \end{enumerate} \end{coro} \begin{prop}\label{2_9} Let $\mathcal{U}=\{U_\alpha; \alpha\in A\}$ be a cover of $X$ by Dold spaces. Suppose that $\mathcal{U}$ has a numerable refinement $\mathcal{V}=\{V_j; j\in J\}$, i.e. a numerable cover of $X$ such that each $V_j$ is contained in some $U_\alpha$. Then $X$ is a Dold space. \end{prop} \begin{proof} Let $\{f_j; j\in J\}$ be a numeration of $\mathcal{V}$.
Since each $U_\alpha$ is a Dold space, there are partitions of unity $$ \{g_{\alpha,k}:U_\alpha\to I; k\in K_\alpha\} $$ such that $\{\mathop{\rm Supp}(g_{\alpha,k}); k\in K_\alpha\}$ is locally finite and $\mathop{\rm Supp}(g_{\alpha,k})$ is contractible in $U_\alpha$ and hence in $X$ for all $k\in K_\alpha$. Choose a function $\beta:J\to A$ such that $V_j \subset U_{\beta(j)}$. For $k\in K_{\beta(j)}$ define $f_{j,k}: X\to [0,1]$ by $$ f_{j,k}(x)=\left\{ \begin{array}{ll} f_j(x)\cdot g_{\beta(j),k}(x) \quad & \textrm{ for } x\in\mathop{\rm Supp}(f_j)\\ 0 & \textrm{ for } x\in X\backslash f^{-1}_j(]0,1]) \end{array} \right. $$ Then $f_{j,k}$ is well-defined and continuous, because $\mathop{\rm Supp}(f_j)$ and $X\backslash f^{-1}_j(]0,1])$ are closed in $X$. The collection $\{f_{j,k}; j\in J, k\in K_{\beta(j)}\}$ is a partition of unity and $$ f^{-1}_{j,k}(]0,1])=f^{-1}_j(]0,1])\cap g^{-1}_{\beta(j),k}(]0,1]) \subset\mathop{\rm Supp}(g_{\beta(j),k}). $$ Hence $f^{-1}_{j,k}(]0,1])$ is contractible in $X$. Now apply \ref{2_8}(2). \end{proof} \vspace{2ex} \begin{rema} The hypothesis in \ref{2_9} that $\mathcal{U}$ admit a numerable refinement is essential, as the following example shows. \end{rema} \begin{exa_n}\label{2_10} Let $X\subset\mathbb{R}^2$ be the cone on $M=\{(0,0)\}\cup\{(\frac{1}{n},0); n\in\mathbb{N}\backslash\{0\}\}$ with cone point $(0,1)$. Then $X$ is a Dold space. Now let $$ Y=(X\sqcup X)/(0,0)\sim (0,0). $$ The two copies of $X$ form a closed cover of $Y$, but $Y$ is not a Dold space, because no open neighborhood of $(0,0)$ is contractible in $Y$. \end{exa_n} \section{Elementary properties} Suppose $U\subset X$ is contractible in $X$ to a point $x_0$; then $U$ must lie in the path-component of $x_0$. We obtain \begin{prop}\label{3_1} (1) $X$ is a Dold space iff its path-components are open and Dold spaces.\\ (2) If $X=\coprod_{j\in J} X_j$, then $X$ is a Dold space iff each summand $X_j$ is a Dold space.
\end{prop} This observation allows us to restrict our attention to path-connected Dold spaces. \begin{prop}\label{3_2} A space $Y$ dominated by a Dold space $X$ is itself a Dold space. \end{prop} \begin{proof} (cf. \cite[p. 235]{DKP}) Let $\{V_\lambda; \lambda\in\Lambda\}$ be a Dold cover of $X$, and $f:X\to Y$ and $g:Y\to X$ be maps such that $f\circ g\simeq\mathop{\rm id}_Y$. Then $\{g^{-1}(V_\lambda); \lambda\in\Lambda\}$ is a numerable cover of $Y$ and each $g^{-1}(V_\lambda)$ is contractible in $Y$, because $$ \xymatrix{ g^{-1}(V_\lambda )\ar[r]^g & V_\lambda\subset X \ar[r]^(0.6)f & Y } $$ is nullhomotopic and homotopic to the inclusion $g^{-1}(V_\lambda)\subset Y$. \end{proof} \begin{coro}\label{3_3} If $X$ and $Y$ are homotopy equivalent, then $X$ is a Dold space iff $Y$ is a Dold space. \end{coro} \begin{prop}\label{3_4} Given a diagram $$ \xymatrix{ X & A \ar[l]_f\ar[r]^g & Y } $$ with $X$ and $Y$ Dold spaces, the double mapping cylinder $\widehat{M}(f,g)$ is a Dold space. \end{prop} \begin{proof} $\widehat{M}(f,g)=(X\sqcup A\times[0,1]\sqcup Y)/\sim$ with $(a,0)\sim f(a)$ and $(a,1)\sim g(a)$. Define $\alpha:\widehat{M}(f,g)\to [0,1]$ by $$ \alpha(z)=\left\{ \begin{array}{ll} 0 & z\in X\\ t & z=(a,t)\in A\times[0,1]\\ 1 & z\in Y \end{array}\right. $$ and $\beta: \widehat{M}(f,g)\to [0,1]$ by $\beta(z)=1-\alpha(z)$. Then $\{\alpha^{-1}(]0,1]),\beta^{-1}(]0,1])\}$ is a numerable cover of $\widehat{M}(f,g)$ by \ref{2_7}, with $\alpha^{-1}(]0,1])\simeq Y$ and $\beta^{-1}(]0,1])\simeq X$. Hence $\widehat{M}(f,g)$ is a Dold space by \ref{2_9}. \end{proof} Recall that a map $f:A\to X$ is an \textit{$h$-cofibration} if there is a commutative triangle $$ \xymatrix{ & A\ar[ld]_f\ar[rd]^j\\ X \ar[rr]^h && Y } $$ with $j$ a cofibration and $h$ a homotopy equivalence under $A$. Dually, an \textit{$h$-fibration} is a map $f:A\to X$ which is homotopy equivalent over $X$ to a fibration.
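To relate these notions to more familiar classes of maps we record two immediate examples (standard facts, included here only for orientation): every cofibration $f:A\to X$ is an $h$-cofibration, since we may take $Y=X$, $j=f$ and $h=\mathop{\rm id}_X$ in the triangle above, and dually every fibration is an $h$-fibration. A closed inclusion, however, need not be an $h$-cofibration: the space $Y$ of Example \ref{2_10} is a pushout of two Dold spaces along the closed inclusion of the point $(0,0)$ and fails to be a Dold space, so by \ref{3_5} below this inclusion cannot be an $h$-cofibration.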
\begin{coro}\label{3_5} Let $$ \xymatrix{ A\ar[r]^f\ar[d]_g & B\ar[d]\\ C\ar[r] & X } $$ be a pushout square with $f$ an $h$-cofibration and $B$ and $C$ Dold spaces. Then $X$ is a Dold space. \end{coro} \begin{proof} Since $f$ is an $h$-cofibration, the canonical map $\widehat{M}(f,g)\to X$ is a homotopy equivalence. \end{proof} \begin{coro}\label{3_6} \begin{enumerate} \item The unreduced suspension $\widehat{\Sigma} X$ of any space $X$ is a Dold space. \item \cite[Lemma 1.3]{Pup1} For any map $f:A\to X$ into a Dold space $X$, the unreduced mapping cone is a Dold space. \item If $f:A\to X$ is an $h$-cofibration and $X$ a Dold space, then $X/f(A)$ is a Dold space. \end{enumerate} \end{coro} \begin{prop}\label{3_7} \cite[Lemma 1.6]{Pup1} Let $\xymatrix{X_0\ar[r]^{f_0}& X_1\ar[r]^{f_1}& X_2\ar[r]^{f_2}& \ldots}$ be a sequence of maps of Dold spaces. Then \begin{enumerate} \item the mapping telescope $TX=(\coprod_{n\ge 0} X_n\times I)/\sim$ with $(x,1)\in X_n\times I$ related to $(f_n(x),0)\in X_{n+1}\times I$ is a Dold space. \item if each $f_i$ is an h-cofibration, $\mathop{\rm colim} X_n$ is a Dold space. \end{enumerate} \end{prop} \begin{proof} $TX$ is the double mapping cylinder of $$ \xymatrix{ \coprod\limits_{n\textrm{ even}} X_n & \coprod\limits_{n\ge 0}X_n \ar[l]_(0.4){(g_n)} \ar[r]^(0.4){(h_n)} & \coprod\limits_{n\textrm{ odd}} X_n } $$ with $g_n(x)=\left\{ \begin{array}{ll} x & n\textrm{ even}\\ f_n(x) & n\textrm{ odd} \end{array}\right. \qquad h_n(x) =\left\{ \begin{array}{ll} f_n(x) & n\textrm{ even}\\ x & n\textrm{ odd} \end{array}\right. $ If all the $f_n$ are h-cofibrations, the canonical map $TX\to \mathop{\rm colim} X_n$ is a homotopy equivalence. \end{proof} \begin{coro}\label{3_8} \cite[Prop. 6.7]{Dold} Each $CW$-complex $X$ and hence each space of the homotopy type of a $CW$-complex is a Dold space. \end{coro} \begin{proof} Let $X^{(n)}$ denote the $n$-skeleton of $X$. Then $X^{(n)}$ is a Dold space by induction on $n$ using \ref{3_5}.
Hence $X$ is a Dold space by \ref{3_7}.2. \end{proof} \begin{prop}\label{3_9} Let $p:E\to B$ be any map. Assume that $B$ and the homotopy fibers $F_b(p)$ of $p$ over $b$ are Dold spaces for all $b\in B$. Then $E$ is a Dold space. \end{prop} \begin{proof} By \ref{3_1} we may assume that $B$ is path-connected, and by \ref{3_3} we may assume that $p:E\to B$ is a fibration whose fiber $F$ over a fixed $b_0\in B$ is a Dold space. Let $\mathcal{U}=\{U_\lambda; \lambda\in\Lambda\}$ be an open Dold cover of $B$ and $\{f_\lambda: B\to[0,1];\lambda\in\Lambda\}$ a numeration of $\mathcal{U}$, and let $\mathcal{V}=\{V_\gamma, \gamma\in\Gamma\}$ with $\{g_\gamma: F\to[0,1];\gamma\in\Gamma\}$ be the corresponding data for $F$. Let $$ H_\lambda:U_\lambda\times I\to B $$ be a homotopy from the inclusion $i_\lambda:U_\lambda\subset B$ to the constant map to $b_0$. Since $p$ is a fibration there is a homotopy $K_\lambda$ $$ \xymatrix@R=1ex{ p^{-1}(U_\lambda)\times 0\; \ar@{^{(}->}[rr] && E \ar[dd]^p \\ \bigcap \\ p^{-1}(U_\lambda)\times I \ar[rr]^(0.6){H_\lambda\circ(p\times \mathop{\rm id})} \ar[uurr]^{K_\lambda} && B } $$ from the inclusion $j_\lambda:p^{-1}(U_\lambda)\subset E$ to a map $p^{-1}(U_\lambda)\stackrel{k_\lambda}{\to}F\subset E$. Define maps $\tau_{\lambda,\gamma}:E\to[0,1]$ by $$ \tau_{\lambda,\gamma}(e)=\left\{ \begin{array}{ll} f_\lambda(p(e))\cdot g_\gamma(k_\lambda(e)) & e\in p^{-1}(\mathop{\rm Supp}(f_\lambda))\\ 0 & e\notin p^{-1}(f^{-1}_\lambda(]0,1])) \end{array}\right. . $$ Since the $k^{-1}_\lambda(g^{-1}_\gamma(]0,1]))$, $\gamma\in\Gamma$, cover $p^{-1}(U_\lambda)$ and since the $p^{-1}(f^{-1}_\lambda(]0,1])) \linebreak \subset p^{-1}(U_\lambda)$, $\lambda\in\Lambda$, cover $E$, $$ \mathcal{W}=\{W_{\lambda,\gamma}=\tau^{-1}_{\lambda,\gamma}(]0,1]); \lambda\in\Lambda,\gamma\in\Gamma\} $$ covers $E$.
This cover is locally finite and ambiently contractible: $K_\lambda$ deforms $W_{\lambda,\gamma}$ into $k_\lambda(W_{\lambda,\gamma})\subset V_\gamma\subset F$, and $V_\gamma$ is ambiently contractible in $F$. Let $e\in E$. Then there is an open neighborhood $U$ of $p(e)$ such that $U\cap U_\lambda=\emptyset$ for all $\lambda$ outside a finite set $\{\lambda_1,\ldots,\lambda_n\}$. So $p^{-1}(U)$ only meets $p^{-1}(U_{\lambda_i})$, $i=1,\ldots,n$. Each $k_{\lambda_i}(e)$ has an open neighborhood $V_i$ such that $V_i\cap V_\gamma=\emptyset$ for all $\gamma$ outside a finite set $\{\gamma_{i1},\ldots,\gamma_{ir_i}\}$. Then $p^{-1}(U)\cap\bigcap\limits^n_{i=1}k^{-1}_{\lambda_i}(V_i)\cap \tau^{-1}_{\lambda,\gamma}(]0,1])\neq\emptyset$ only if $(\lambda,\gamma)\in\{(\lambda_i,\gamma_{ij}); 1\le i\le n, 1\le j\le r_i\}$. So $E$ is a Dold space by \ref{2_8}. \end{proof} \begin{coro}\label{3_10} \cite[Lemma 1.5]{Pup1} If $X$ and $Y$ are Dold spaces, so is $X\times Y$.\hfill$\square$ \end{coro} \begin{coro}\label{4_10a} Let $$ \xymatrix{ P \ar[rr]^g \ar[d]^q && E\ar[d]^p\\ X \ar[rr]^f && B } $$ be a homotopy pullback, and let $X$ and the homotopy fibers $F_b(p)$ of $p$ over all $b\in B$ be Dold spaces. Then $P$ is a Dold space. \end{coro} \begin{proof} The homotopy fiber $F_x(q)$ of $q$ over $x\in X$ is homotopy equivalent to $F_{f(x)}(p)$ and hence a Dold space. \end{proof} \begin{coro}\label{3_11} Consider a diagram $$ \xymatrix@R=1ex{ P \ar[rrr] \ar[ddd] &&& X \ar[ddd]\ar[ddl]\\ \\ && Q \\ Y \ar[rrr]\ar[rru] &&& B } $$ with $B$ a path-connected Dold space, whose outer square is a homotopy pullback and whose inner square is a homotopy pushout. Then $Q$ is a Dold space. \end{coro} \begin{proof} If $F(f)$ and $F(g)$ are the homotopy fibers of the maps $f:X\to B$ and $g:Y\to B$ respectively, the homotopy fiber of the induced map $r:Q\to B$ is homotopy equivalent to the join $F(f)\ast F(g)$ (e.g. see \cite[Prop. 5.5]{Vogt}). Since the join of two spaces is a Dold space, the result follows.
\end{proof} \section{Simplicial spaces} Let $\bigtriangleup$ denote the category of finite ordered sets $[n]=\{0<1<\ldots<n\}$ and order preserving maps and $\mathop{\rm Mon}\bigtriangleup$ the subcategory of injective order preserving maps. A \textit{simplicial space} is a functor $X_\bullet:\bigtriangleup^{\mathop{\rm op}}\to\mathop{\rm Top}$, $[n]\mapsto X_n$; a \textit{semisimplicial space} is a functor $X_\bullet: (\mathop{\rm Mon}\bigtriangleup)^{\mathop{\rm op}}\to\mathop{\rm Top}$. Let $|X_\bullet|$ denote the usual topological realization of the simplicial space $X_\bullet$, also called the \textit{slim realization}, and let $||X_\bullet||$ denote the realization of the semisimplicial space $X_\bullet$, also called the \textit{fat realization}. Since a simplicial space can be considered as a semisimplicial one, it has both a fat and a slim realization. A simplicial space $X_\bullet$ is called \textit{proper} if the inclusions $sX_n\subset X_n$ of the subspaces of degenerate elements of $X_n$ are cofibrations for all~$n$. \begin{prop}\label{4_1} \begin{enumerate} \item If $X_\bullet$ is a semisimplicial space such that $X_0$ is a Dold space, then $||X_\bullet||$ is a Dold space. \item If $X_\bullet$ is a proper simplicial space such that $X_0$ is a Dold space, then $|X_\bullet|$ is a Dold space. \end{enumerate} \end{prop} \begin{proof} (1) (The idea of the proof is probably due to D. Puppe. We learnt it many years ago from H. Meiwes \cite{Meiwes2}.) Let $||X||^{(n)}$ denote the $n$-skeleton of the fat realization. Since $||X||^{(n)}\subset ||X||^{(n+1)}$ is a cofibration, it suffices to show that each $||X||^{(n)}$ is a Dold space. Assume inductively that $||X||^{(n-1)}$ is a Dold space. Recall that $$ ||X||^{(n)}=||X||^{(n-1)}\cup_{X_n\times\partial\bigtriangleup^n} X_n\times\bigtriangleup^n, $$ where $\bigtriangleup^n$ is the standard $n$-simplex. Choose two different points $u_1\neq u_2$ in the interior of $\bigtriangleup^n$.
For a space $Y$ let $CY=(Y\times I)/(Y\times 0)$ be the cone on $Y$ with cone-point $\ast$, and $\varphi:CY\to I$ the map $(y,t)\mapsto t$. Define maps $$ \lambda_i:(\bigtriangleup^n,u_i)\stackrel{h_i}{\to}(C(\partial\bigtriangleup^n), \ast) \stackrel{\varphi}{\to}(I,0) \qquad i=1,2 $$ by choosing based homeomorphisms $h_i$. The maps $$ X_n\times\bigtriangleup^n \stackrel{\textrm{proj.}}{\to}\bigtriangleup^n \stackrel{\lambda_i}{\to} I $$ together with the constant map to 1 on $||X||^{(n-1)}$ define maps $$ f_i:||X||^{(n)}\to I. $$ Then $\{f^{-1}_1(]0,1]),f^{-1}_2(]0,1])\}$ is a numerable cover of $||X||^{(n)}$ by \ref{2_7}. The subspaces $$ f^{-1}_i(]0,1])=||X||^{(n-1)}\cup_{X_n\times\partial\bigtriangleup^n} X_n\times(\bigtriangleup^n\backslash\{u_i\}) $$ deformation retract onto $||X||^{(n-1)}$. Hence they are Dold spaces. So $||X||^{(n)}$ is a Dold space by \ref{2_9}. If $X$ is a proper simplicial space, the natural map $||X||\to |X|$ is a homotopy equivalence. Hence $|X|$ is a Dold space. \end{proof} \begin{coro}\label{4_2} Let $J$ be a small category and $D:J\to\mathop{\rm Top}$ a diagram of Dold spaces. Then $\mathop{\rm hocolim} D$ is a Dold space. \end{coro} \begin{proof} $\mathop{\rm hocolim} D$ is a topological realization of the proper simplicial space $$ [n]\mapsto \coprod\limits_{i,j\in J} J_n(i,j)\times D(i) $$ with $J_n(i,j)=\{(\alpha_1,\ldots,\alpha_n)\in(\mathop{\rm mor} J)^n$; $\alpha_1\circ\ldots\circ\alpha_n:i\to j\}$ for $n>0$ and $$ J_0(i,j)=\left\{ \begin{array}{ll} \{\mathop{\rm id}_i\} & i=j\\ \emptyset & i\neq j \end{array}\right. $$ Its $0$-th space is $\coprod\limits_{j\in J}D(j)$ and hence a Dold space. \end{proof} We now consider the based case. \begin{defi}\label{4_2a} We call a based space $(X,x_0)$ \textit{wellpointed} if the inclusion $\{x_0\}\subset X$ is a closed cofibration, and \textit{h-wellpointed} if it is an h-cofibration.
\end{defi} The homotopy colimit of a diagram $D$ will in general have a different homotopy type when taken in the category of based spaces. Let $BJ$ denote the classifying space of $J$. The inclusions of the basepoints define a map $$ BJ\to \mathop{\rm hocolim} D $$ and the based homotopy colimit is the quotient $(\mathop{\rm hocolim} D)/BJ$. \begin{coro}\label{4_3} Let $D: J\to\mathop{\rm Top}^\ast$ be a diagram of wellpointed Dold spaces and based maps. Then based-$\mathop{\rm hocolim} D$ is a wellpointed Dold space. \end{coro} \begin{proof} If $D: J\to\mathop{\rm Top}^\ast$ is a diagram of wellpointed spaces, then $BJ\to \mathop{\rm hocolim} D$ is a closed cofibration. Apply \ref{3_6}. \end{proof} \begin{nr}\label{4_3a}\textbf{Remark:} The condition that $X$ be wellpointed can be achieved functorially by a \textit{whiskering process}: For a based space $(X,x_0)$ we define $X_I=(X\sqcup I)/(x_0\sim 1)$ and choose $0\in I$ as basepoint of $X_I$. The natural map $q: X_I\to X$ mapping $I$ to $x_0$ is a homotopy equivalence. If $X$ is h-wellpointed, it is even a based homotopy equivalence. We most often state our results for wellpointed spaces because the pushout-product theorem for cofibrations requires one factor to be a closed cofibration, but for constructions which are homotopy invariant in the based category the results extend to h-wellpointed spaces. An example is the following corollary: \end{nr} \begin{coro}\label{4_4} \cite[Lemma 1.5]{Pup1} \begin{enumerate} \item Given a diagram $$ \xymatrix{ X & \ar[l]_f A\ar[r]^g &Y }$$ of h-wellpointed spaces and based maps with $X$ and $Y$ Dold spaces, the reduced double mapping cylinder $M(f,g)$ is a Dold space. \item Let $(X_{\alpha};\alpha\in A)$ be a family of h-wellpointed Dold spaces. Then $\bigvee\limits_{\alpha\in A} X_\alpha$ is a Dold space. \item Let $X$ and $Y$ be h-wellpointed Dold spaces. Then $X\wedge Y$ is a Dold space. \item The reduced suspension $SX$ of an h-wellpointed space is a Dold space.
\end{enumerate} \end{coro} \begin{rema}\label{4_5} Example \ref{2_10} shows that \ref{4_4}.2 does not hold without some assumptions on the basepoints. \end{rema} We next give a characterization of path-connected Dold spaces, which requires some preparation. Let $$ p:E\to X $$ be a map of based spaces and $F(p)$ the homotopy fiber of $p$ over the basepoint $\ast$. With $p$ we associate a map $$ q_\bullet: E_\bullet(p) \to \Omega_\bullet X $$ of simplicial spaces as follows: $\Omega_n X\cong \mathop{\rm Top}((\bigtriangleup^n,\bigtriangleup^n_0),(X,\ast))$ with the function space topology. Here $\bigtriangleup^n_0$ is the $0$-skeleton of $\bigtriangleup^n$. Boundaries and degeneracies are defined as for the singular functor. In fact, $\Omega_\bullet (\;)$ is a topologized version of the singular functor $$ {\mathop{\rm Top}}^\ast\to {r\mathop{\rm Top}}^{\bigtriangleup^{op}} $$ from ${\mathop{\rm Top}}^\ast$ into the category of reduced simplicial spaces, i.e. simplicial spaces $Y_\bullet$ with $Y_0$ a point. It is right adjoint to the realization functor ${r\mathop{\rm Top}}^{\bigtriangleup^{op}} \to\mathop{\rm Top}^\ast$. Let $C\bigtriangleup^n$ denote the cone on $\bigtriangleup^n$ with cone point $c_0$. We define $$ E_n(p)=\{(e,w)\in E\times\mathop{\rm Top}((C\bigtriangleup^n,\bigtriangleup^n_0),(X,\ast));\ w(c_0)=p(e)\}. $$ Boundaries and degeneracies are again defined by the corresponding maps of the standard simplices. Finally, we define $$ q_n:E_n(p)\to \Omega_nX, \quad (e,w)\mapsto w|\bigtriangleup^n. $$ Let $L_n\subset\bigtriangleup^n$ be the union of the edges joining the $i$-th to the $(i+1)$-st vertex of $\bigtriangleup^n$, $0\le i<n$. Since $L_n\subset\bigtriangleup^n$ is a strong deformation retract and the inclusion is a cofibration, there is a fibration and homotopy equivalence $$ \Omega_nX \to \mathop{\rm Top}((L_n,L_n\cap \bigtriangleup^n_0), (X,\ast))\cong(\Omega X)^n.
$$ Since $C\bigtriangleup^n\cong\bigtriangleup^{n+1}$ we have a similar homotopy equivalence $$ E_n(p) \to F(p)\times(\Omega X)^n, $$ and $$ \xymatrix{ E_n(p) \ar[rr] \ar[d]_{q_n} && F(p)\times(\Omega X)^n\ar[d]^{\textrm{proj.}} \\ \Omega_nX \ar[rr] &&(\Omega X)^n } $$ commutes. Keeping this in mind, it is easy to show that $q_\bullet: E_\bullet(p)\to\Omega_\bullet X$ is a simplicial object in the category Pull, whose objects are maps and whose maps are commutative squares which are homotopy pullbacks. A result of V.~Puppe \cite{VP} implies \begin{nr}\label{4_6} $$\xymatrix{ F(p)\ar[rr] \ar[d] && ||E_\bullet(p)|| \ar[d]^{||q_\bullet||} \\ \ast \ar[rr] && ||\Omega_\bullet X|| }$$ is a homotopy pullback. The horizontal maps are the inclusions of the $0$-skeleta. \end{nr} Let $P(p)=\{(e,\alpha)\in E\times \mathop{\rm Top}(I,X); \alpha(0)=p(e)\}$ be the mapping path-space of $p$. The maps $$ \begin{array}{rcl} E_n(p)\times\bigtriangleup^n & \longrightarrow & P(p)\\ (e,w,t) &\longmapsto &(e,\overline{w}) \end{array} $$ with $\overline{w}(s)=w(s,t)$ for $(s,t)\in C\bigtriangleup^n=(I\times\bigtriangleup^n)/(0\times\bigtriangleup^n)$, define a map $$ u:||E_\bullet (p)|| \longrightarrow P(p). $$ The counit $v:||\Omega_\bullet X||\to X$ of the adjoint pair \begin{nr}\label{4_6a} $$ \xymatrix{ ||\ ?\ ||:{r\mathop{\rm Top}}^{(\mathop{\rm Mon}\bigtriangleup)^{op}} \ar@<0.5ex>[r] & \ar@<0.5ex>[l] {\mathop{\rm Top}}^\ast:\Omega_\bullet } $$ \end{nr} is induced by maps $$ \Omega_nX\times \bigtriangleup^n\to X, \quad (\sigma,t)\mapsto \sigma(t), $$ and we obtain a map of fiber sequences \begin{nr}\label{4_7} $\xymatrix{ F(p) \ar[rr]\ar[d]^\mathop{\rm id} && ||E_\bullet(p)|| \ar[rr]^{||q_\bullet||}\ar[d]^u \ar @{} [drr] |{\textrm{I}} && ||\Omega_\bullet X|| \ar[d]^v \\ F(p) \ar[rr] && P(p) \ar[rr] && X }$ Since $||\Omega_\bullet X||$ is a Dold space by \ref{4_1}, the square I is a homotopy pullback by~\ref{1_6}. Consider the case where $E$ is a point. 
Then $P(p)$ is contractible, and so is $||E_\bullet (p)||$: Note that $E_n(p)\cong \Omega_{n+1}X$, so that $E_\bullet(p)$ is the based path-space construction $P\Omega_\bullet X$ in ${\mathop{\rm Top}}^{\bigtriangleup^{op}}$. It is well-known that $||P\Omega_\bullet X||\simeq \Omega_0X=\ast$. So $u$ is a homotopy equivalence. Hence $v$ is a homotopy equivalence by \ref{1_4}, provided $X$ is a path-connected Dold space. We obtain \end{nr} \begin{prop}\label{4_8} A path-connected space $X$ is a Dold space iff the counit $||\Omega_\bullet X||\to X$ of the adjoint pair \ref{4_6a} is a homotopy equivalence. \hfill $\square$ \end{prop} Now let $E$ be any based space. Since $v$ is a homotopy equivalence provided $X$ is a path-connected Dold space, $u$ is a homotopy equivalence. \begin{prop}\label{4_9} If $X$ is a path-connected Dold space, then for any based map $p:E\to X$ the maps $u$ and $v$ of \ref{4_7} are homotopy equivalences. \end{prop} \begin{rema}\label{4_10} From \ref{4_9} we obtain an alternative proof of Proposition \ref{3_9}. Let $p:E\to X$ be a map, $X$ a path-connected Dold space, and suppose the homotopy fiber $F(p)$ is also a Dold space; then $E$ is a Dold space: Consider $$ ||E_\bullet(p)||\stackrel{u}{\to} P(p) \stackrel{r}{\to}E $$ with $r(e,\alpha)=e$. The maps $u$ and $r$ are homotopy equivalences. Since $E_0(p)=F(p)$ the space $||E_\bullet(p)||$ is a Dold space by \ref{4_1}, and hence so is $E$. \end{rema} \begin{prop}\label{4_11} Let $p_\ast:E_\ast\to X_\ast$ be a map of based semisimplicial spaces. Let $F(p_n)$ denote the homotopy fiber of $p_n:E_n\to X_n$. If each $X_n$ is a path-connected Dold space, then $$ \xymatrix{ ||F(p_\ast)|| \ar[rr] \ar[d] && ||E_\ast|| \ar[d] \\ \ast \ar[rr] && ||X_\ast|| } $$ is a homotopy pullback.
\end{prop} \begin{proof} By the naturality of our constructions we have a commutative diagram of semisimplicial spaces $$ \xymatrix{ F(p_\ast) \ar[rr] \ar[d]^{\mathop{\rm id}} && ||E_\bullet(p_\ast)|| \ar[rr] \ar[d]^{u_\ast} && ||\Omega_\bullet X_\ast|| \ar[d]^{v_\ast} \\ F(p_\ast) \ar[rr] \ar[d]^{\mathop{\rm id}} && P(p_\ast) \ar[rr] \ar[d]^{r_\ast} && X_\ast \ar[d]^{\mathop{\rm id}} \\ F(p_\ast) \ar[rr] && E_\ast \ar[rr]^{p_\ast} && X_\ast } $$ Since the vertical maps are homotopy equivalences in each degree, they induce homotopy equivalences of fat realizations. Hence it suffices to show that $$ \xymatrix{ ||[k] \mapsto F(p_k)|| \ar[rr] \ar[d] && ||[k] \mapsto ||E_\bullet(p_k)||\;|| \ar@<-3ex>[d] \\ \ast \ar[rr] && ||[k] \mapsto ||\Omega_\bullet X_k||\;|| } $$ is a homotopy pullback. For this we study the map $$ q_{n,k}:E_n(p_k) \to \Omega_nX_k $$ of bi-semisimplicial spaces. Since its total fat realization is independent of the order in which we realize, we may first realize with respect to $k$ and obtain a map of semisimplicial spaces $$ \overline{q}_n:||E_n(p_\ast)|| \to ||\Omega_nX_\ast|| $$ \textbf{Claim:} $\overline{q}_n$ is a semisimplicial object in Pull, i.e. $$ \xymatrix{ ||E_n(p_\ast)|| \ar[rr]^{d^i}\ar[d]_{\overline{q}_n} && ||E_{n-1}(p_\ast)|| \ar[d]^{\overline{q}_{n-1}} \\ ||\Omega_nX_\ast|| \ar[rr]^{d^i} && ||\Omega_{n-1}X_\ast|| } $$ is a homotopy pullback for each $n$ and each $i$. \textbf{Proof:} Let $j\neq i$. There is a strong deformation retraction of $C\bigtriangleup^n$ to $\bigtriangleup^n\cup_{v_j}[c_0,v_j]$, where $[c_0,v_j]$ is the line from the cone point $c_0$ to the $j$-th vertex $v_j$ of $\bigtriangleup^n$. This deformation retraction can be chosen compatibly with $d^i$ yielding a commutative square $$ \xymatrix{ E_n(p_k) \ar[rr]\ar[d]_{d^i} && F(p_k)\times \Omega_n X_k \ar[d]^{\mathop{\rm id} \times d^i} \\ E_{n-1}(p_k) \ar[rr] && F(p_k)\times \Omega_{n-1} X_k } $$ whose horizontal maps are homotopy equivalences.
Since the fat realization preserves products up to homotopy, we obtain a commutative diagram $$ \xymatrix{ ||E_n(p_\ast)|| \ar[rr]\ar[d]_{d^i} && ||F(p_\ast)||\times ||\Omega_nX_\ast|| \ar[d]^{\mathop{\rm id}\times d^i} \\ ||E_{n-1}(p_\ast)|| \ar[rr] && ||F(p_\ast)||\times ||\Omega_{n-1}X_\ast|| } $$ whose horizontal maps are homotopy equivalences. So it suffices to show that $$ \xymatrix{ ||F(p_\ast)||\times ||\Omega_nX_\ast|| \ar[rr]^{\mathop{\rm id}\times d^i}\ar[d]^{\textrm{proj.}}&& ||F(p_\ast)||\times ||\Omega_{n-1}X_\ast|| \ar[d]^{\textrm{proj.}} \\ ||\Omega_nX_\ast|| \ar[rr]^{d^i} && ||\Omega_{n-1}X_\ast|| } $$ is a homotopy pullback. But this is evident. This proves the claim. We now apply V. Puppe's result \cite{VP} again: Since $||F(p_\ast)||$ is the $0$-skeleton of $||[n]\mapsto ||E_n(p_\ast)||\;||$, we obtain a homotopy pullback $$ \xymatrix{ ||F(p_\ast)|| \ar[rr]\ar[d] && ||E_\bullet(p_\ast)|| \ar[d] \\ \ast \ar[rr] && ||\Omega_\bullet X_\ast|| } $$ \end{proof} The basic idea of the argument of the previous proof is due to D. Puppe. He used it to show that the fat realization commutes with the loop space functor for path-connected semisimplicial Dold spaces. \begin{prop}\label{4_12} Let $X_\bullet$ be a based semisimplicial space such that each $X_n$ is a path-connected Dold space. Then there is a canonical homotopy equivalence $$ ||\Omega X_\bullet|| \to \Omega||X_\bullet||. $$ In particular, $\Omega||X_\bullet||$ is a Dold space if $\Omega X_0$ is a Dold space (e.g. if $X_0$ is based contractible). \end{prop} \begin{proof} Let $\ast$ denote the semisimplicial point. Apply \ref{4_11} to the map $p_\bullet: \ast\to X_\bullet$. Since $F(p_\bullet)=\Omega X_\bullet$ and $||\ast||$ is contractible, the statement follows. \end{proof} We will see that loop spaces of Dold spaces need not be Dold spaces. But we have \begin{prop}\label{4_13} If $X$ is a path-connected h-wellpointed Dold space, then $\Omega\Sigma X$ is a Dold space.
\end{prop} \begin{proof} By Remark \ref{4_3a} we may assume that $X$ is wellpointed. Let $X_\bullet$ be the simplicial space which is the $n$-fold wedge of $X$ in degree $n$. Boundaries $d^i$ are the folding maps for $0<i<n$ and projections for $i=0,n$. Degeneracies are the obvious injections. Apply \ref{4_11} to $p_\bullet:\ast\to X_\bullet$. Since $X_\bullet$ is proper, we have homotopy equivalences $$ ||F(p_\bullet)|| = ||\Omega X_\bullet|| \simeq \Omega ||X_\bullet||\simeq \Omega |X_\bullet|=\Omega\Sigma X. $$ Since $F(p_0)$ is a point, $||F(p_\bullet)||$ is a Dold space. \end{proof} In his proof of \ref{4_12} Puppe used the nerve $N_\bullet\Omega_M X$ of the Moore loop space $\Omega_MX$ of $X$ rather than $\Omega_\bullet X$. In fact, there is a simplicial map $$ \alpha_\bullet:N_\bullet\Omega_M X\to\Omega_\bullet X $$ which is degreewise a homotopy equivalence inducing a homotopy equivalence \cite[Appendix]{Brink} $$ ||\alpha_\bullet||: ||N_\bullet\Omega _MX||\to || \Omega_\bullet X||. $$ We note that $N_\bullet\Omega_M X$ is proper if $X$ is wellpointed, because $\Omega_MX$ is wellpointed (respectively h-wellpointed) if $X$ is \cite[(11.3)]{DKP}. We obtain \begin{coro}\label{4_14} Suppose $X$ is a wellpointed path-connected space. Then $X$ is a Dold space iff $v\circ||\alpha_\bullet||:\mathcal{B}\Omega_MX\to X$ is a homotopy equivalence, where $\mathcal{B}\Omega_MX=||N_\bullet\Omega_MX||$ is the classifying space of $\Omega_MX$ and $v$ is the map of \ref{4_8}. \hfill$\square$ \end{coro} The following result is an extension of Proposition \ref{4_11}. \begin{prop}\label{4_15} Consider a commutative diagram of based simplicial spaces $$ \xymatrix{ A_\ast \ar[rr]^{f_\ast} \ar[d]_{q_\ast} && E_\ast \ar[d]^{p_\ast} \\ B_\ast \ar[rr]^{g_\ast} && X_\ast } $$ which is a homotopy pullback in each dimension.
If each $B_n$ and each $X_n$ is a path-connected Dold space, then $$ \xymatrix{ ||A_\ast|| \ar[rr]^{||f_\ast||} \ar[d]^{||q_\ast||} && ||E_\ast|| \ar[d]^{||p_\ast||} \\ ||B_\ast|| \ar[rr]^{||g_\ast||} && ||X_\ast|| } $$ is a homotopy pullback. \end{prop} \begin{proof} From \ref{4_11} we obtain a diagram $$ \xymatrix@R=2ex@C=2ex{ ||F(q_\ast)|| \ar[rrr] \ar[dd] \ar[rd] && & ||A_\ast|| \ar[rd] \ar[dd] |!{[d];[d]}\hole \\ & ||F(p_\ast)|| \ar[rrr] \ar[dd] &&& ||E_\ast|| \ar[dd] \\ \ast \ar[rd] \ar[rrr] |!{[rr];[r]}\hole &&& ||B_\ast|| \ar[rd] \\ & \ast \ar[rrr] &&& ||X_\ast|| } $$ whose front face $(F)$ and back face $(B)$ are homotopy pullbacks. Since the map $F(q_\ast)\to F(p_\ast)$ is a homotopy equivalence in each dimension by assumption, its realization is a homotopy equivalence. Hence the left face $(L)$ is a homotopy pullback. If $(R)$ denotes the right face, we find that $(B)+(R)$ is a homotopy pullback, because $(L)+(F)$ is one. Hence $(R)$ is a homotopy pullback by \ref{1_5}. \end{proof} \section{$\mathbf{k}$-spaces and free algebras over operads} Throughout this section we work in the category $k\mathop{\rm Top}$ of $k$-spaces and its based version $k\mathop{\rm Top}^\ast$. Recall that $X$ is a $k$-space if a subset $U\subset X$ is open precisely when $f^{-1}(U)$ is open for all maps $f:C\to X$ and all compact Hausdorff spaces $C$. The inclusion functor $i:k\mathop{\rm Top}\subset\mathop{\rm Top}$ has a right adjoint $k:\mathop{\rm Top}\to k\mathop{\rm Top}$, where $k(X)$ is obtained from $X$ by declaring the subsets $U$ satisfying the condition above to be open. The counit of this adjunction $$ ik(X)\to X $$ is the identity on underlying sets. Hence the topology of $k(X)$ is finer than that of $X$, and we obtain \begin{prop}\label{5_1} If $X$ is a Dold space so is $k(X)$. \end{prop} Since $i$ preserves colimits and $k$ limits, we moreover have \begin{prop}\label{5_2} The results of the previous sections also hold in the categories $k\mathop{\rm Top}$ and $k\mathop{\rm Top}^\ast$.
\end{prop} We include $k\mathop{\rm Top}$ in our considerations because Theorems \ref{1_1} and \ref{1_2} are phrased in $k\mathop{\rm Top}^\ast$. In his proof of Theorem \ref{1_2} Meiwes needed to show that $\mathbb{C}^\ast_n(X)$ and the $k$-fold symmetric product $SP_k(X)$ are Dold spaces if $X$ is a Dold space (wellpointed in the case of $\mathbb{C}^\ast_n(X)$). He did this by explicitly constructing Dold covers. We will obtain these results from more general, easily proved facts. Let $\mathcal{P}$ be a topological operad. We call $\mathcal{P}$ \textit{reduced} if $\mathcal{P}(0)$ consists of a single element. If $X$ is a $\mathcal{P}$-space and $\mathcal{P}$ is reduced, the single element of $\mathcal{P}(0)$ determines a basepoint in $X$. Let $\mathcal{P}\mathop{\rm Top}$ be the category of $\mathcal{P}$-spaces. We have a forgetful functor $$ U:\mathcal{P}\mathop{\rm Top} \to k\mathop{\rm Top} $$ and, if $\mathcal{P}$ is reduced, $$ U^\ast: \mathcal{P}\mathop{\rm Top} \to {k\mathop{\rm Top}}^\ast. $$ They have left adjoints $$ \mathbb{P}:k\mathop{\rm Top}\to \mathcal{P}\mathop{\rm Top} \quad\textrm{ respectively }\quad \mathbb{P}^\ast:{k\mathop{\rm Top}}^\ast\to\mathcal{P}\mathop{\rm Top} $$ defined by $$ \mathbb{P}(X)=\coprod\limits^\infty_{n=0}\mathcal{P}(n)\times_{\Sigma_n}X^n \quad \textrm{ and }\quad \mathbb{P}^\ast(X)=\left(\coprod\limits^\infty_{n=0}\mathcal{P}(n)\times_{\Sigma_n} X^n\right) /\sim $$ The relation $\sim$ in the definition of $\mathbb{P}^\ast(X)$ is defined as follows: Let $\ast\in\mathcal{P}(0)$ denote the single element and let $$ \begin{array}{ll} \sigma_i:\mathcal{P}(k) \to \mathcal{P}(k-1), & \alpha\mapsto \alpha\circ(\mathop{\rm id}_i\times\ast\times \mathop{\rm id}_{k-i-1}) \\ s_i: X^{k-1} \to X^k, & (x_1,\ldots,x_{k-1})\mapsto (x_1,\ldots,x_i,\ast,x_{i+1},\ldots,x_{k-1}) \end{array} $$ Then $(\sigma_i(\alpha),x)\sim (\alpha,s_i(x))$.
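Two classical examples may help to keep these constructions concrete (the identifications below are standard; we record them only for orientation): \begin{exa_n} Let $\mathcal{C}om$ be the reduced operad with each $\mathcal{C}om(n)$ a single point. Then $$ \mathbb{P}(X)=\coprod\limits^\infty_{n=0} X^n/\Sigma_n=\coprod\limits^\infty_{n=0} SP_n(X) $$ is the free commutative topological monoid on $X$, and $\mathbb{P}^\ast(X)$ is the infinite symmetric product $\mathop{\rm colim}_k SP_k(X)$, the colimit being taken over the basepoint inclusions $SP_k(X)\to SP_{k+1}(X)$. For the operad of little $n$-cubes, $\mathbb{P}^\ast(X)$ is May's construction $\mathbb{C}^\ast_n(X)$ mentioned above. \end{exa_n}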
\begin{prop}\label{5_3} Let $\mathcal{P}$ be an operad such that each $\mathcal{P}(n)/\Sigma_n$ is a Dold space, and let $X\in k\mathop{\rm Top}$ be a path-connected Dold space. Then $\mathbb{P}(X)$ is a Dold space. \end{prop} \begin{prop}\label{5_4} Let $X\in k\mathop{\rm Top}^\ast$ be a wellpointed path-connected Dold space. Then $\mathbb{P}^\ast(X)$ is a Dold space for each reduced operad $\mathcal{P}$. \end{prop} The proofs make use of the following result of May \cite[Thm. 12.2]{May}: \begin{prop}\label{5_5} \begin{enumerate} \item Let $X_\bullet$ be a simplicial $k$-space. Then there is a natural homeomorphism $|\mathbb{P}(X_\bullet)|\to\mathbb{P}(|X_\bullet|)$. \item Let $X_\bullet$ be a wellpointed simplicial $k$-space and $\mathcal{P}$ a reduced operad. Then there is a natural homeomorphism $|\mathbb{P}^\ast(X_\bullet)|\to\mathbb{P}^\ast(|X_\bullet|)$. \end{enumerate} \end{prop} May proves the based case, but the proof applies verbatim to the non-based case as well. \textbf{Proof of \ref{5_3}:} Choose a basepoint $x_0\in X$ and let $q:X_I\to X$ be the map of \ref{4_3a}. Since $X_I$ is a wellpointed Dold space, the map $$ v\circ|\alpha_\ast|:|N_\ast\Omega_MX_I|\to X_I $$ of \ref{4_14} is a based homotopy equivalence. By \ref{5_5} we have a sequence of homotopy equivalences (we ignore basepoints) $$ |\mathbb{P}(N_\ast\Omega_MX_I)|\to\mathbb{P}(|N_\ast\Omega_MX_I|)\to\mathbb{P}(X_I) \to \mathbb{P}(X). $$ $N_0\Omega_MX_I$ is a single point. Hence $$ \mathbb{P}(N_0\Omega_MX_I)=\mathbb{P}(\ast)=\coprod\limits^\infty_{n=0}\mathcal{P}(n) /\Sigma_n $$ which is a Dold space. Hence $|\mathbb{P}(N_\ast\Omega_MX_I)|$ and $\mathbb{P}(X)$ are Dold spaces by \ref{4_1}, because $\mathbb{P}(N_\ast\Omega_MX_I)$ is proper. \hfill$\square$ \vspace{2ex} \textbf{Proof of \ref{5_4}:} If $X$ is wellpointed, the map $q:X_I\to X$ is a based homotopy equivalence.
We obtain a sequence of based homotopy equivalences $$ |\mathbb{P}^\ast(N_\ast\Omega_MX_I)|\to \mathbb{P}^\ast(|N_\ast\Omega_MX_I|) \to \mathbb{P}^\ast(X_I)\to \mathbb{P}^\ast(X). $$ Since $\mathbb{P}^\ast(\ast)=\ast$, all spaces are Dold spaces by \ref{4_1}, because $ \mathbb{P}^\ast(N_\ast\Omega_MX_I)$ is proper. \hfill$\square$ \begin{coro}\label{5_6} Let $X\in k\mathop{\rm Top}$ be a Dold space. Then $SP_k(X)$ is a Dold space. \end{coro} \begin{proof} Let $\{X_\alpha;\ \alpha\in A\}$ be the set of path-components of $X$. Then $SP_k(X)$ is the disjoint union of the spaces $$ SP_{r_1}(X_{\alpha_1})\times\ldots\times SP_{r_q}(X_{\alpha_q}), \qquad r_1+\ldots+r_q=k. $$ By \ref{3_1} and \ref{3_10} it suffices to prove the result for path-connected $X$. Let $\mathcal{C}om$ be the operad for commutative monoid structures, i.e. $\mathcal{C}om(n)$ is a single point for each $n$. Then $$ \mathcal{C}om(X)=\coprod\limits_n SP_n(X) $$ is a Dold space by \ref{5_3}. So $SP_n(X)$ is a Dold space by \ref{3_1}. \end{proof} \section{Counter examples} \begin{prop}\label{6_1} There are Dold spaces which are not of the homotopy type of a $CW$-complex. \end{prop} The following example was brought to our attention by A. Hatcher \cite{Hat}. Let $Y=\{\frac{1}{n};\ n\in\mathbb{N}\}\cup\{0\}\subset\mathbb{R}$ and let $\widehat{\Sigma} Y$ be the unreduced suspension of $Y$. Then $\widehat{\Sigma} Y$ is a Dold space. Let $f:\widehat{\Sigma} Y\to X$ be any map into a $CW$-complex. Since $\widehat{\Sigma} Y$ is compact, $f$ factors through a finite subcomplex $A\subset X$. Hence $f_\ast: H_1(\widehat{\Sigma} Y)\to H_1(X)$ factors through $H_1(A)$. Let $B_n$ be the unreduced suspension of $\{0\}\cup \{\frac{1}{i};\ 1\leq i\leq n\}$. Then $B_n$ is a retract of $\widehat{\Sigma} Y$ and $H_1(B_n)\cong \mathbb{Z}^n$. Hence $H_1(\widehat{\Sigma} Y)$ is not finitely generated, but $H_1(A)$ is. So the map $f$ cannot be a homotopy equivalence.
\begin{coro}\label{6_2} There are weak homotopy equivalences between Dold spaces which are not homotopy equivalences. \end{coro} \textbf{Example:} Let $g:R\widehat{\Sigma} Y\to \widehat{\Sigma} Y$ be a $CW$-approximation of the space $\widehat{\Sigma} Y$ of the previous example. Then $g$ is a weak equivalence but not a homotopy equivalence. \begin{prop}\label{6_3} The loop space of a Dold space need not be a Dold space. \end{prop} \textbf{Example:} $\mathbb{Q}$ with the subspace topology of $\mathbb{R}$ is not a Dold space. Let $N_\bullet\mathbb{Q}$ denote the nerve of $(\mathbb{Q},+)$. Since $(\mathbb{Q},+)$ is a topological group, there is a homotopy equivalence $\mathbb{Q}\to\Omega||N_\bullet\mathbb{Q}||$. Hence $\Omega||N_\bullet\mathbb{Q}||$ is not a Dold space, but $||N_\bullet\mathbb{Q}||$ is one. \hfill$\square$ \vspace{2ex} Recall that a \textit{closed class} $\mathcal{C}$ in the sense of Dror Farjoun is a full subcategory of the category $S_\ast$ of wellpointed spaces of the homotopy type of a $CW$-complex which is closed under homotopy equivalences and based homotopy colimits \cite[D1]{Dror}. The class of wellpointed Dold spaces is closed under homotopy equivalences and based homotopy colimits, but Dror Farjoun's results do not generalize to this class. \begin{exa}\label{6_4} Let $F\to E\to B$ be a fibration sequence with path-connected $B$. If $F$ and $E$ are in a closed class $\mathcal{C}$, then so is $B$ \cite[D. 11]{Dror}. This does not hold for Dold spaces. Consider the based path-space fibration $$ \Omega C\to PC\to C $$ over the Polish circle $C$ with a nice point $c_0\in C$. It is well-known that $\Omega C$ and $PC$ are contractible and hence Dold spaces, but $C$ is not a Dold space. \end{exa}
\section{Introduction} White dwarfs (WDs) are compact objects formed at the final evolutionary stage of low- and intermediate-mass main sequence stars. The majority of stars in our galaxy will, at the end of their evolution, form WDs as their ultimate end product. Their masses are around one solar mass and their sizes are of the same order as the size of the Earth. Unlike normal stars, there is no thermonuclear fusion in WDs, and all the thermal energy accumulated during their formation gradually dissipates in the form of light, heat, etc. Moreover, for WDs the larger the mass, the smaller the radius.\cite{shapiro1} In order to describe theoretically the structure and physical properties of WDs there exist at least three equations of state (EoS) in the literature: the well-known Chandrasekhar EoS,\cite{chandra1} the Salpeter EoS\cite{salp1,salp2} and the Relativistic Feynman-Metropolis-Teller (RFMT) EoS\cite{FMT1,rotondo,rotondo2} that generalizes both the Chandrasekhar and Salpeter EoS. All main features, advantages, drawbacks and applications of these EoS are outlined in Ref.~\refcite{rotondo2}. Although WDs are in general investigated within classical physics, the effects of general relativity (GR) become crucial for their stability close to the maximum mass (the Chandrasekhar mass limit).\cite{shapiro1,chandra1,rotondo2} In this work we construct equilibrium configurations of static and uniformly rotating WDs using the Chandrasekhar EoS and the RFMT EoS within GR for the sake of completeness.\cite{rotondo2,bosh2013} First we perform computations for zero-temperature uniformly rotating WDs at the mass-shedding limit by utilizing the RFMT EoS within the Hartle formalism.\cite{H1967, HT1968, bosh2013, bosh02} Afterwards, we investigate static WDs at finite temperatures by employing the Chandrasekhar EoS.
Finally, we superpose these theoretical results with the estimated data obtained from the Sloan Digital Sky Survey Data Release 4 (SDSS-E06 catalog) by Tremblay et al.\cite{tremb01} \section{Cold Rotating and Hot Static White Dwarfs} \begin{figure}[t] \centerline{\includegraphics[width=0.75\columnwidth,clip]{RotWDsData}} \caption{Mass-radius relation of uniformly rotating WDs obtained with the Chandrasekhar EoS and the RFMT EoS for the T $= 0$ K case and their superposition with the estimated masses and radii of WDs taken from the SDSS-E06 catalog (blue dots, see online version). Upper curves indicate rotating WDs and lower curves indicate static WDs.}\label{fig:vs} \end{figure} \begin{figure}[t] \centerline{\includegraphics[width=0.75\columnwidth,clip]{HotWDsData}} \caption{Mass-radius relation of static WDs obtained using the Chandrasekhar EoS for selected finite temperatures T = $[0,10^6,10^7,4\times10^7,10^8]$ K and their superposition with the estimated masses and radii of WDs taken from the SDSS-E06 catalog (blue dots, see online version).}\label{fig:vs2} \end{figure} In Fig.~\ref{fig:vs} we construct the mass-radius relation of uniformly rotating WDs using both the Chandrasekhar and RFMT EoS in GR, fulfilling their stability criteria.\cite{rotondo2,bosh2013} The radius of the WDs in this plot is defined as the average spherical radius $\left<R\right>=(1/3) \left(R_{p}+2R_{e}\right)$, where $R_{p}$ is the polar radius and $R_{e}$ is the equatorial radius. For the sake of generality we accounted for the nuclear composition of the WD matter in the RFMT EoS. As one can see from Fig.~\ref{fig:vs}, the consideration of the chemical composition along with rotation at the Keplerian limit is not sufficient to explain all the observational data. In order to tackle this relevant issue, S. M.~de Carvalho et al.\cite{car01} proposed to include the finite-temperature effects in the EoS.
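The average radius used above is a simple weighted mean of the polar and equatorial radii; a one-line helper (with made-up sample values) illustrates it:

```python
def average_radius(r_polar, r_equatorial):
    """Average spherical radius <R> = (R_p + 2 R_e) / 3 of a rotating
    configuration (same units as the inputs)."""
    return (r_polar + 2.0 * r_equatorial) / 3.0
```

For an oblate star $R_{e} > R_{p}$, so $\left<R\right>$ lies between the two radii, closer to $R_{e}$.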
Following this idea we performed a similar analysis for static WDs at finite temperatures with the simplest Chandrasekhar EoS by solving the Tolman-Oppenheimer-Volkoff (TOV) equation (see Ref.~\refcite{bosh01} for details). The results of Ref.~\refcite{bosh01} are illustrated in Fig.~\ref{fig:vs2}. As one can see, in analogy to Ref.~\refcite{car01}, the mere inclusion of the finite-temperature effects in the Chandrasekhar EoS leads to a mass-radius relation in better agreement with the observational data. It should be noted that from the astronomical observations of isolated WDs one infers the effective surface temperature and the surface gravity, but not the mass-radius relation. All main parameters are inferred and estimated by using certain models. However, for WDs in close eclipsing binaries there exist techniques to measure their masses. The data obtained from these binaries are considered to be more reliable and model-independent. Therefore, to perform more realistic computations one needs to take into account the effects of rotation, chemical composition and temperature together for selected WDs with known parameters. Only then can one perform more precise analyses and make further predictions. \section{Conclusion} In this work we calculated the mass-radius relations for cold uniformly rotating WDs within the Hartle formalism in GR using both the Chandrasekhar and RFMT EoS. We superposed our results with the estimated values of masses and radii obtained by Tremblay et al.\cite{tremb01} As a result we showed that rotation along with the chemical composition is not sufficient to explain all the observational data. Furthermore, to investigate WDs at finite temperatures we considered the static case and solved the TOV equation numerically by using the Chandrasekhar EoS.
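The static calculation reduces to integrating the TOV equation outward from the centre until the pressure vanishes. The following is only a structural sketch with a toy polytropic EoS and dimensionless geometrized units ($G=c=1$, made-up central values); the actual results use the finite-temperature Chandrasekhar EoS:

```python
import math

def tov_toy(rho_c=1.0, K=1.0, gamma=2.0, dr=1e-5):
    """Integrate the TOV equation
        dP/dr = -(rho + P) (m + 4 pi r^3 P) / (r (r - 2 m)),
        dm/dr = 4 pi r^2 rho,
    with a toy polytrope P = K rho^gamma (geometrized units, G = c = 1).
    Returns (radius, mass) of the resulting configuration."""
    r = dr
    P = K * rho_c ** gamma
    m = (4.0 / 3.0) * math.pi * r ** 3 * rho_c
    while P > 1e-10 * K * rho_c ** gamma:       # stop at the stellar surface
        rho = (P / K) ** (1.0 / gamma)
        dPdr = -(rho + P) * (m + 4.0 * math.pi * r ** 3 * P) / (r * (r - 2.0 * m))
        dmdr = 4.0 * math.pi * r ** 2 * rho
        P += dPdr * dr
        m += dmdr * dr
        r += dr
    return r, m
```

For the toy values above the integration terminates at a finite radius with $2m/r < 1$, i.e. well outside the horizon limit; in the real computation the EoS table replaces the polytropic closure.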
We compared and contrasted our results with the estimated data from the observations and showed that the data, including the range of low masses, can be covered and described by taking into account the effect of finite temperatures alone. We have considered the effects of finite temperatures and rotation separately. For more detailed analyses one needs to consider both effects together and work with model-independent data obtained from spectroscopic or photometric measurements of masses and radii. In all our computations we assumed a uniform temperature for the WD core. To link this temperature with the real surface temperature of a WD we exploited the Koester formula,\cite{koester1} which establishes the relationship between the effective surface temperature and the core temperature. We found that most of the observed WD core temperatures are lower than $10^8$~K (see Fig.~\ref{fig:vs2}). More data are still needed to confirm and extend our results. In view of the recent discoveries of WDs by Koester and Kepler et al.,\cite{koester2, kepler1} the theoretical study of these objects becomes more fascinating than ever before. \section*{Acknowledgments} This work was supported by Grant No 3101/GF4 IPC-11 and F.0679 of the Ministry of Education and Science of the Republic of Kazakhstan. B.K. acknowledges ICRANet for support and hospitality.
\section{Introduction} \par% The mass and the half-life of a nucleus are fundamental properties which result from the interaction of all nucleons \cite{Bohr}. Both quantities are essential for the understanding of nuclear structure and also for nucleosynthesis in astrophysics. \par New phenomena in nuclear physics concerning shell structure, pairing correlations, etc. have been discovered with precise nuclear masses. The driplines, which determine the borders of nuclear existence, are obtained from the mass differences of neighboring nuclei. The actual paths of nucleosynthesis in stars are governed by the nuclear binding energies and lifetimes. \par A very important motivation for measuring new masses of exotic nuclides is the test and improvement of nuclear theories. Although the progress of theoretical calculations has been enormous in recent years \cite{{Bender_RMP},{Lunney_RMP}}, especially for microscopic calculations, their predictive power is still up to a factor of 100 worse than our presently achieved experimental accuracy \cite{Li-2005, Li-PRL}. \par In this contribution we report on first results from a very recent experiment performed at GSI. The experiment consisted of two parts. In the first part the lifetimes of stored hydrogen-like $^{140}$Pr$^{58+}$ ions were measured, and the objective of the second part was to measure the masses of neutron-deficient $^{152}$Sm projectile fragments. \section{Experiment} \subsection{Production, separation, storage, and frequency measurement} \par% Proton-rich nuclides were produced via fragmentation of the 508 and 615~MeV/u $^{152}$Sm primary beam, provided by the heavy-ion synchrotron SIS \cite{Blasche}, in the 1032 and 4009~mg/cm$^2$ $^9$Be production targets, respectively. The first combination was used for the lifetime measurements of $^{140}$Pr$^{58+}$ ions and the second one was used in the mass measurements. The target was placed in front of the fragment separator facility FRS \cite{Ge-NIM24}.
The fragments were separated in flight and then injected, stored, and electron cooled in the storage ring ESR \cite{Franzke}. The experimental facility is schematically presented in Figure~\ref{facility}. The experimental conditions used in the first part of the experiment are indicated in the figure. \begin{figure}[t!] \includegraphics*[width=\textwidth]{facility.eps} \caption {Schematic layout of the secondary nuclear beam facility at GSI. The heavy-ion synchrotron SIS, the fragment separator FRS, and the storage-cooler ring ESR used in this experiment are highlighted. The indicated primary beam energy, production target, degrader, and energy of the fragment injected into the ESR are those used in the first part of the experiment, devoted to the lifetime measurements of stored hydrogen-like $^{140}$Pr$^{58+}$ ions.} \label{facility} \end{figure} \par% The injection into the ESR was optimized with the primary beam, and the electro-magnetic fields of the FRS-ESR facilities were set to a constant magnetic rigidity value during the measurements. A magnetic rigidity value of $B\rho$=7.655~Tm was required in the first part of the experiment, and $B\rho$=6.5~Tm was used in the part of the experiment aimed at mass measurements of neutron-deficient nuclei. In general, the different selections of the secondary fragments were made by changing the primary beam energy impinging on the production target. The selected reference fragment emerging from the target then has to match the prepared ion-optical setting. In this experiment our reference fragment was $^{108}$Sb$^{51+}$. In principle, all fragments within the same magnetic rigidity acceptance are transmitted as well. This is successfully exploited in our mass measurements by simultaneously storing nuclides with unknown masses together with nuclides of well-known masses for calibration. \par% However, by applying an additional separation criterion we can easily reduce and further select the number of nuclear species injected into the ESR.
Such a separation is possible with atomic energy-loss in matter and a two-fold magnetic rigidity analysis, the B$\rho$-$\Delta$E-B$\rho$ method \cite{Ge-NIM24}. The B$\rho$-$\Delta$E-B$\rho$ method was applied in this experiment for the half-life measurements of hydrogen-like $^{140}$Pr fragments. In this case we have to ensure that neither the mother nor the daughter nuclei are contaminated by other fragments, such as e.g. helium-like $^{140}$Nd$^{58+}$ ions. Moreover, in order to obtain the exact number of $^{140}$Pr$^{58+}$ ions decaying via nuclear electron capture to $^{140}$Ce$^{58+}$ ions, the amount of injected daughter ions should be kept as small as possible. A 731~mg/cm$^2$ aluminium degrader was used at the middle focal plane of the FRS (see Figure~\ref{facility}). The first half of the FRS before the degrader was set to transmit fully-ionized $^{140}$Pr$^{59+}$ ions. By applying this FRS setting, and also using the slit systems, no $^{140}$Ce$^{58+}$ ions can be transmitted up to the degrader. The atomic charge-state distribution after the production target is very similar for praseodymium and neodymium and amounts to about 86\% in the fully ionized state, about 13\% in the hydrogen-like state, and about 0.5\% in the helium-like state (GLOBAL \cite{GLOBAL} calculations). The corresponding charge-state distributions after the degrader are also very similar to the one above (the degrader thickness is well above the equilibrium thickness of about 255~mg/cm$^2$ \cite{GLOBAL}). Thus, setting the second half of the FRS and the ESR on the wanted $^{140}$Pr$^{58+}$ ions, we achieved an intensity of $^{140}$Nd$^{58+}$ in the ESR of less than 1 per mill of the praseodymium intensity. No other fragments were transmitted in this setting. \par% The ions injected and stored in the ESR were electron cooled.
The electron cooling process contracts the phase-space volume of the stored beams, and the initial velocity distribution is reduced to typically $\Delta{v}/v\approx5\times10^{-7}$. At injection only 25\% of the ESR acceptance is filled, but after electron cooling the circulating ions all have exactly the same mean velocity and thus, owing to their different mass-to-charge ratios, can occupy the entire storage acceptance of about $\pm$1.2\% \cite{Raidi}. By selecting the voltage of the cooler electrodes we define the velocity of the merged electrons and thus the velocity of the cooled ions. \par Besides the electron cooling, the ESR is also equipped with a stochastic cooling device \cite{No-NIMA} which provides fast precooling at a fixed fragment velocity, corresponding to 400 MeV/u, and allows access to shorter-lived nuclei, as demonstrated in our previous experiments~\cite{Ge-RNB6}. This fixed velocity results in a magnetic rigidity of $B\rho$=7.655~Tm in the part of the run devoted to lifetime studies, in which we applied the stochastic cooling. However, for the mass measurements we reduced the magnetic rigidity of the ESR to stay in the optimum operating domain of the electron cooler, which was employed throughout the present experiment. \par% The masses and lifetimes were measured with time-resolved Schottky mass spectrometry (SMS) \cite{Li-2005}. It is based on Schottky-noise spectroscopy \cite{Borer}, which is widely used for non-destructive beam diagnostics in circular accelerators and storage rings. The stored ions were circulating in the ESR with revolution frequencies of about 2~MHz. At each turn they induced mirror charges on two electrostatic pick-up electrodes. The 30$^{th}$-31$^{st}$ harmonics of the signals were down-shifted to the frequency range from 0 to 300~kHz, digitized with a 640 kHz sampling rate, and stored as 16-bit words on a hard-disk for the off-line analysis.
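As a toy illustration of this off-line analysis (the line position below is a made-up value; the actual procedure is described in Ref.~\cite{Li-2005}), a short numpy sketch recovering a down-shifted Schottky line from such sampled data might look as follows:

```python
import numpy as np

# Toy reconstruction of a revolution-frequency line from digitized pick-up
# data. The 640 kHz sampling rate and 0-300 kHz down-shifted band are from
# the text; the line position (123.4 kHz) is an assumed illustration.
f_sample = 640e3                 # digitizer sampling rate, Hz
f_line = 123.4e3                 # assumed down-shifted Schottky line, Hz
n = 1 << 16                      # number of samples analysed

t = np.arange(n) / f_sample
signal = np.cos(2.0 * np.pi * f_line * t)      # idealized pick-up signal
spectrum = np.abs(np.fft.rfft(signal))         # amplitude spectrum
freqs = np.fft.rfftfreq(n, d=1.0 / f_sample)
f_peak = freqs[np.argmax(spectrum)]            # recovered line position
```

With $n = 2^{16}$ samples the frequency resolution is $f_{\rm sample}/n \approx 10$~Hz, so the peak position reproduces the injected line frequency to within one bin.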
\par% Fast Fourier Transformation is applied to the stored data, leading to the revolution frequency spectra. The frequencies provide information about the mass-over-charge ratios of the ions~\cite{{Raidi},{Li-2005}}. The area of a frequency peak is proportional to the number of stored ions, which is the basis for lifetime measurements \cite{Ge-RNB6}. The details of the data acquisition system and of the data analysis can be found in Ref.~\cite{Li-2005} and references therein. \subsection{Magnetic rigidity acceptance of the ESR} \par% \begin{figure}[b!] \includegraphics*[width=14cm]{sm152_calibration.eps} \caption {Calibration of the ESR acceptance with the $^{152}$Sm primary beam at different velocities (see text). Beyond the presented data points no orbiting ions were observed.} \label{calibration} \end{figure} It is important to know the range of mass-over-charge values which can be simultaneously stored in the ESR with given electron cooler parameters such as electron current and cooler voltage. Therefore, we have measured the $B\rho$ acceptance of the ESR. \par% The $B\rho$ value of any stored ion is defined by its mass-over-charge ratio $m/q$ and its velocity $v$ via: $$ B\rho=\frac{mv\gamma}{q}, $$ where $\gamma$ is the relativistic Lorentz factor. For this calibration measurement we used primary-beam $^{152}$Sm$^{62+}$ ions with precisely known mass-over-charge ratio \cite{AW03}. Since the velocity is defined by the electron cooler voltage and electron current, the magnetic rigidity can be determined. \par% The revolution frequency $f$ of the primary beam has been measured with SMS for different beam velocities, and the length of the corresponding closed orbit $L$ was then calculated via $f=v/L$. \par% The experimental data points are shown in Figure~\ref{calibration}. The error bars of each point are within the symbol size. A linear fit was used to parameterize the data.
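As a quick numerical check of the rigidity relation above (our own sketch, approximating the ion mass by $A$ atomic mass units and neglecting electron binding), the quoted value $B\rho = 7.655$~Tm for $^{140}$Pr$^{58+}$ at the 400~MeV/u stochastic-cooling velocity is reproduced to better than 1\%:

```python
import math

U_MEV = 931.494          # atomic mass unit, MeV/c^2
BRHO_CONST = 3.3356      # T*m per (GeV/c per unit charge)

def brho(a_mass, q_charge, e_kin_per_u):
    """Magnetic rigidity B*rho = p/q = gamma*beta*m*c/q in T*m.
    a_mass: mass number (ion mass approximated as A*u),
    q_charge: charge state, e_kin_per_u: kinetic energy in MeV/u."""
    gamma = 1.0 + e_kin_per_u / U_MEV
    beta = math.sqrt(1.0 - 1.0 / gamma ** 2)
    p_gev = gamma * beta * a_mass * U_MEV / 1000.0   # total momentum, GeV/c
    return BRHO_CONST * p_gev / q_charge
```

At a fixed velocity the rigidity scales with $m/q$, which is exactly why ions with different mass-over-charge ratios populate different orbits within the $\pm1.2\%$ acceptance after cooling.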
\par% It is obvious that with this calibration one can select exactly the measured mass-over-charge range of stored fragments by varying the cooler parameters. \section{Preliminary results} \subsection{Mass measurements of neutron-deficient nuclides} \par% The present mass accuracy of the time-resolved SMS is about 30~$\mu{u}$ \cite{Li-2005}. Therefore, the objective of this experiment was the part of the mass surface which is presently unknown, or experimentally known only with error bars larger than the SMS accuracy. The present status of knowledge of nuclear masses was taken from the Atomic Mass Evaluation (AME) 2003~\cite{AW03}. \par% The production yields, ionic charge-state distribution, transmission through the FRS, and injection into the ESR were calculated with the MOCADI~\cite{MOCADI} and LISE++~\cite{LISE} codes. The optimum setting was obtained for $^{108}$Sb$^{51+}$ ions at the magnetic rigidity of 6.5~Tm, corresponding to the central orbit of the FRS. \par% Using the calibration curve from Figure~\ref{calibration}, the cooler voltages needed to cover the mass surface aimed at in the experiment were calculated. The voltage was varied in steps of 2~kV from 190~kV to 220~kV. The cooler current was kept constant at 0.4~A, a relatively high current chosen for fast electron cooling. With these parameters, nuclei with half-lives longer than one second are expected to be recorded in the frequency spectra. \par% A part of the chart of nuclides with the mass surface expected to be covered in this experiment is shown in Figure~\ref{nchart}. The developed single-particle method~\cite{Li-2005,GL-2005} is the basis for precise mass determination of even a single stored ion. Thus, particle yields in the ESR as low as one ion in one hundred injections could be measured. The analysis of the data is in progress. The presently identified nuclides are indicated in the figure with white crosses. \begin{figure}[h!]
\includegraphics*[width=\textwidth]{nchart.eps} \caption {Part of the chart of nuclides showing the mass surface which is expected to be covered in this experiment. The rp-process path~\cite{rp}, the stable nuclei, and the nuclides with very well known mass values~\cite{AW03} are indicated in the figure. Nuclides so far identified in the frequency spectra of this work are indicated with white crosses.} \label{nchart} \end{figure} \par% One can see from Figure~\ref{nchart} that the expected mass surface, as well as some of the presently identified nuclides, lies close to the calculated rp-process path~\cite{rp}. \par% An example of a measured Schottky frequency spectrum is shown in Figure~\ref{spectrum}. Two nuclides with presently unknown mass values~\cite{AW03} are indicated. \begin{figure}[h!] \includegraphics*[width=\textwidth]{spectrum_108Sb.eps} \caption {Part of a Schottky frequency spectrum. Known and unknown masses are indicated according to the AME 2003~\cite{AW03}.} \label{spectrum} \end{figure} \subsection{Lifetime Measurement with Single Stored Fragments} \par% Already in previous experiments we have proven that we are sensitive and selective to single particles \cite{Li-2005} stored and cooled in the ESR. SMS is ideally suited to measure decay properties of bare and few-electron fragments if the Q-value and the change in B$\rho$ do not exceed the storage acceptance of the ESR \cite{{Ir-PRL},{Li-PLB},{Oh-PRL}}. In case the change of the B$\rho$ value in the decay is too large, the daughter nuclei can be intercepted by detectors placed outside the storage orbits of the fragments. In this experiment we aimed at investigating the decay of a nucleus with a strong electron capture (EC) branch and a half-life of the order of a few minutes. The selected nucleus was $^{140}$Pr$^{58+}$, characterized by a Q$_{EC}$ value of 3388 keV.
Mother and daughter nuclei are well resolved in the Schottky spectrum, and we specifically aimed our measurements at the study of single-particle decays, observing discrete, correlated changes in the areas of the corresponding Schottky frequency peaks of the mother and daughter nuclei. This is a unique measurement which can only be performed with our facilities under the described conditions. An example of measured decays with only a few mother nuclei is illustrated in Figure~\ref{140Pr-decay}, where a series of subsequent-in-time Schottky frequency spectra are plotted. The goals of this experimental study are to check the SMS with nuclei of well-known lifetimes down to a few stored ions and to investigate the decay statistics under these extreme conditions. \begin{figure}[t!] \includegraphics*[width=\textwidth]{pr_ec.eps} \caption {A series of subsequent-in-time Schottky frequency spectra of mother $^{140}$Pr$^{58+}$ and daughter $^{140}$Ce$^{58+}$ ions. About six mother nuclei were initially stored. Two of them decayed via nuclear electron capture into $^{140}$Ce$^{58+}$. The correlated intensity changes are clearly seen. Other ions decayed via $\beta^+$ decay or were lost, e.g. due to interaction with the residual gas.} \label{140Pr-decay} \end{figure} \section{Towards pure isomeric beams for the ILIMA project} \par% \begin{figure}[b!] \includegraphics*[width=\textwidth]{scraping.eps} \caption {Schottky frequency spectra of well resolved mother and daughter ions characterized by a Q value of about 3 MeV. Note that we can inject monoisotopic fragment beams into the ESR, as we have proven many years ago. Left panel: undisturbed mother and daughter traces in time, recorded for about 520~s. Right panel: 170~s after the injection into the ESR, the primary beam of mother ions was eliminated by mechanical scraping. This demonstrates the feasibility of providing pure isomeric beams in the storage rings.
} \label{ILIMA} \end{figure} Although the present experimental program at the SIS-FRS-ESR facilities has been quite successful and has led to several basic discoveries, the field of research is expected to be drastically extended by the next-generation facility FAIR \cite{CDR}. It will consist of a more powerful driver accelerator, a large-acceptance in-flight separator, the Super-FRS \cite{Super-FRS}, and a new storage-cooler ring system specially adapted to the large phase space and short half-lives of the exotic nuclear beams \cite{Be-EPAC}. Within the FAIR framework, ILIMA \cite{ILIMA} is an accepted proposal which will be an extension of the present successful program at the FRS-ESR. One goal is to provide pure isomeric beams circulating in the new storage ring system, to be investigated and used in reactions with the internal target or in collision zones with other stored particles such as electrons or antiprotons. An important demonstration of the feasibility of this approach has been achieved in the present experiment by scraping off one component of the mother and daughter nuclei recorded with SMS. This was achieved with a precise mechanical scraper at a dispersive plane of the ESR. This mechanical separation in the micrometer range was successful, as demonstrated in Figure \ref{ILIMA}. The technique which has been applied is very similar to the one developed for the measurements of the horizontal beam size of cooled ion beams with micrometer resolution~\cite{St-EPAC04}. An even more sensitive micrometer separation can be achieved by moving the stored beam towards a fixed position of the scraper \cite{St-EPAC04}. \section{Conclusion} \par% Time-resolved Schottky Mass Spectrometry has again proven its great potential for precise mass determination of short-lived nuclides~\cite{Li-2005}. In this work the technique was applied to neutron-deficient nuclides below samarium. The covered mass surface is very close to the astrophysical rp-process path.
Thus our results will contribute to a better understanding of this nucleosynthesis process. \par% Unique results have been achieved in the present experiments with lifetime measurements of a few mother nuclei stored in the ESR. \par% An important step towards the future has been achieved with the demonstration of a method to provide pure isomeric beams. The spatial separation of ground states or isomeric states with excitation energies of at least 3.5~MeV is now a realistic perspective.
\section{Introduction} Chiral Perturbation Theory ($\chi PT$) \cite{weinberg} has proved very successful in describing the physics of mesons at very low energies. The key point of the whole approach is to identify the lightest pseudoscalar mesons $\pi , K$ and $\eta$ as the Goldstone bosons associated with the chiral symmetry breaking. These particles are the only degrees of freedom at low energies and their interactions can be described in terms of the most general effective Lagrangian which respects the chiral symmetry constraints. Since this is a low-energy approach, the amplitude of a given process is basically given as an expansion in the external momenta over the scale of symmetry breaking $4\pi f_\pi\simeq 1.2\,$GeV. The approach is known to provide a good description of meson interactions up to about 500 MeV. However, if one is interested in resonances in particular, as happens in meson spectroscopy, there is little that one can do with just plain $\chi PT$. The method that we expose here naturally leads to the low lying resonances and allows one to face many problems so far intractable within $\chi PT$. The method incorporates the following elements: 1) Unitarity is implemented exactly; 2) It can deal with the coupled channels allowed with pairs of particles from the lightest octets of pseudoscalar mesons and ($\frac{1}{2}^+$) baryons; 3) A chiral expansion in powers of the external four-momentum of the lightest pseudoscalars is done for Re $T^{-1}$, instead of for the $T$ matrix itself as is done in standard $\chi PT$. We sketch here the steps involved in this expansion for the meson meson interaction.
One starts from a $K$ matrix approach in coupled channels where unitarity is automatically fulfilled and writes \begin{equation} T^{-1} = K^{-1} - i\,\sigma , \end{equation} where $T$ is the scattering matrix, $K$ is a real matrix in the physical region and $\sigma$ is a diagonal matrix which measures the phase-space available for the intermediate states \begin{equation} \sigma_{nn}(s) = - \frac{k_n}{8\pi\sqrt{s}}\,\theta\left(s - (m_{1n} + m_{2n})^2\right), \end{equation} where $k_n$ is the on shell CM momentum of the meson in the intermediate state $n$ and $m_{1n}$, $m_{2n}$ are the masses of the two mesons in the state $n$. The meson meson states considered here are $K\bar{K}$, $\pi\pi$, $\pi\eta$, $\eta\eta$, $\pi K$, $\pi\bar{K}$, $\eta K$, $\eta\bar{K}$. Since $K$ is real in the physical region, from eq. (1) one sees that $K^{-1}$ = Re $T^{-1}$. In non-relativistic Quantum Mechanics, in the scattering of a particle from a potential, it is possible to expand $K^{-1}$ in powers of the momentum of the particle at low energies as follows (in the s-wave for simplicity) \begin{equation} \hbox{Re}\,T^{-1}\equiv K^{-1} = \sigma\cdot \cot\delta\, \propto -\frac{1}{a} + \frac{1}{2}r_0 k^2 , \end{equation} with $k$ the particle momentum, $a$ the scattering length and $r_0$ the effective range. The ordinary $\chi$PT expansion up to $O(p^4)$ is given by \cite{weinberg} \begin{equation} T = T_2 + T_4 , \end{equation} where $T_2$ is obtained from the lowest order chiral Lagrangian, $L^{(2)}$, and is of $O(p^2)$, whereas $T_4$ contains one loop diagrams in the s, t, u channels, constructed from the lowest order Lagrangian, tadpoles and the finite contribution from the tree level diagrams of the $L^{(4)}$ Lagrangian and is $O(p^4)$. This last contribution, after a suitable renormalization, is just a polynomial, $T^{(p)}$. Our $T^{-1}$ matrix, starting from eq.
(4) is given by \begin{eqnarray} \label{t-1} T^{- 1} &=& \left[T_2+T_4+...\right]^{-1}= T_2^{- 1} [1 + T_4 T_2^{- 1}+...]^{- 1}\nonumber \\ &=& T_2^{- 1} [1 - T_4 T_2^{- 1}+...]=T_2^{-1} [T_2-T_4] T_2^{-1} \end{eqnarray} Due to the fact that $\hbox{Im}\,T_4= T_2 \sigma T_2$, the above equation is nothing but eq. (1), but using eq. (4) to expand $K^{-1}=\hbox{Re}\, T^{-1}$. Inverting the former result, one obtains: \begin{equation} T = T_2\,[ T_2 - T_4]^{-1}\, T_2, \end{equation} which is the coupled channel generalization of the inverse amplitude method of \cite{dob}. Once this point is reached one has several options to proceed: a) A full calculation of $T_4$ within the same renormalization scheme as in $\chi PT$ can be done. The eight $L_i$ coefficients from $L^{(4)}$ are then fitted to the existing meson meson data on phase shifts and inelasticities up to 1.2 GeV, where 4 meson states are still unimportant. This procedure has been carried out in \cite{dob,gue}. The resulting $L_i$ parameters are compatible with those used in $\chi PT$. At low energies the $O(p^4)$ expansion for $T$ of eq. (6) is identical to that in $\chi PT$. However, at higher energies the nonperturbative structure of eq. (6), which implements unitarity exactly, allows one to extend the information contained in the chiral Lagrangians to much higher energy than in ordinary $\chi$ PT. Indeed it reproduces the resonances present in the L = 0, 1 partial waves. \vskip .2cm b) A technically simpler and equally successful additional approximation is generated by ignoring the crossed channel loops and tadpoles and reabsorbing them in the $L_i$ coefficients given the weak structure of these terms in the physical region. The fit to the data with the new $\hat{L}_i$ coefficients reproduces the whole meson meson sector, with the position, widths and partial decay widths of the $f_0(980)$, $a_0(980)$, $\kappa(900)$, $\rho(770)$, $K^\ast(900)$ resonances in good agreement with experiment \cite{oller1}. 
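As a minimal numerical illustration (with toy matrices, not fitted amplitudes), one can check that the coupled channel inverse amplitude formula of eq. (6) reproduces the plain chiral expansion $T_2+T_4$ at low orders while resumming the higher ones:

```python
import numpy as np

# illustrative symmetric 2x2 "amplitudes" for two coupled channels;
# the numbers are arbitrary and only serve to test the algebra
T2 = np.array([[0.10, 0.03], [0.03, 0.08]])
T4 = np.array([[0.010, 0.004], [0.004, 0.007]])

# inverse amplitude method, eq. (6): T = T2 (T2 - T4)^{-1} T2
T_iam = T2 @ np.linalg.inv(T2 - T4) @ T2

# expanding eq. (6) gives T2 + T4 + T4 T2^{-1} T4 + ..., so the
# deviation from plain ChPT, T2 + T4, is of higher chiral order
print(np.max(np.abs(T_iam - (T2 + T4))))
```

The difference from $T_2+T_4$ is of the size of $T_4\,T_2^{-1}T_4$, i.e. $O(p^6)$, which is what makes the unitarized amplitude agree with $\chi PT$ at low energies while extending its range of validity to the resonance region.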
A cut off regularization is used in \cite{oller1} for the loops in the s-channel. By taking the loop function with two intermediate mesons \begin{equation} G_{nn}(s) = i\int\frac{d^4 q}{(2\pi)^4}\, \frac{1}{q^2 - m_{1n}^2 + i\epsilon} \, \frac{1}{(P-q)^2 - m_{2n}^2 + i\epsilon}, \end{equation} where $P$ is the total meson meson momentum, one immediately notices that \begin{equation} \hbox{Im}\, G_{nn}(s) = \sigma_{nn}. \end{equation} Hence, we can write \begin{equation} \hbox{Re}\, T_4 = T_2\, \hbox{Re}\, G\, T_2 + T_4^{(p)}, \end{equation} where $\hbox{Re}\, G$ depends on the cut off chosen for $|\vec{q}|$. This means that the $\hat{L}_i$ coefficients of $T_4^{(p)}$ depend on the cut off choice, much as the $L_i$ coefficients in $\chi PT$ depend upon the regularization scale. \vskip .2cm c) For the L = 0 sector (also in L = 0, S = $-1$ in the meson baryon interaction) a further technical simplification is possible. In these cases it is possible to choose the cut off such that, given the relation between $\hbox{Re}\, G$ and $T_4^{(p)}$, this latter term can be neglected and $\hbox{Re}\, T_4$ is very well approximated by $T_2 \,\hbox{Re}\, G\, T_2$. This is possible in those cases because of the predominant role played by the unitarization of the lowest order $\chi PT$ amplitude, which by itself leads to the low lying resonances, and because other genuine QCD resonances appear at higher energies. In such a case eq. (6) becomes \begin{equation} T = T_2 \,[T_2 - T_2\, G \,T_2]^{-1} \,T_2 = [1 - T_2 \,G]^{-1}\, T_2, \end{equation} or, equivalently, \begin{equation} T = T_2 + T_2 \,G \,T, \end{equation} which is a Bethe-Salpeter equation with $T_2$ and $T$ factorized on shell outside the loop integral, with $T_2$ playing the role of the potential. This option has proved to be successful in the L = 0 meson meson sector in \cite{oller2} and in the L = 0, S = $-1$ meson baryon sector in \cite{osetra}.
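A sketch of how eqs. (10) and (11) work in practice, for a toy two channel system in which the imaginary part of the loop function is fixed by the phase-space matrix of eq. (2); the real kernel and the values of $\hbox{Re}\,G$ below are arbitrary placeholders, not the result of any cut off evaluation:

```python
import numpy as np

def sigma_nn(s, m1, m2):
    # diagonal phase-space factor of eq. (2); vanishes below threshold
    if s <= (m1 + m2) ** 2:
        return 0.0
    k = np.sqrt((s - (m1 + m2) ** 2) * (s - (m1 - m2) ** 2)) / (2 * np.sqrt(s))
    return -k / (8 * np.pi * np.sqrt(s))

s = 1.1 ** 2  # (GeV)^2, above both toy thresholds
sig = np.diag([sigma_nn(s, 0.138, 0.138),   # pi pi
               sigma_nn(s, 0.496, 0.496)])  # K Kbar

T2 = np.array([[1.5, 0.6], [0.6, 1.1]])     # toy real on-shell kernel
G = np.diag([-0.02, -0.03]) + 1j * sig      # Im G = sigma, eq. (8)

# eq. (10): T = (1 - T2 G)^{-1} T2, the algebraic Bethe-Salpeter solution
T = np.linalg.inv(np.eye(2) - T2 @ G) @ T2

print(np.max(np.abs(T - (T2 + T2 @ G @ T))))             # satisfies eq. (11)
print(np.max(np.abs(np.imag(np.linalg.inv(T)) + sig)))   # Im T^{-1} = -sigma
```

Exact unitarity, $\hbox{Im}\,T^{-1}=-\sigma$, holds by construction for any real kernel $T_2$ and any choice of $\hbox{Re}\,G$, which is the content of eq. (1).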
In the meson baryon sector with S = 0, given the disparity of the masses in the coupled channels $\pi N$, $\eta N$, $K\Sigma$, $K\Lambda$, the simple ``one cut off approach'' is not possible. In \cite{kaiser} higher order Lagrangians are introduced while in \cite{par} different subtraction constants in G are incorporated in each of the former channels leading in both cases to acceptable solutions when compared with the data. In fig. 1 we show the results obtained with the method of \cite{oller1} for some selected phase shifts and inelasticities in the meson meson sector, showing resonances in different channels (see \cite{phipipi} for an update of the results). The agreement with the meson meson data is quite good up to 1.2 GeV and the parameters $\hat{L_i}$ obtained from the fit are essentially compatible with those of $\chi PT$. \begin{figure}[ht] \vspace{-.3cm} \centerline{ \includegraphics[width=0.7\textwidth,angle=0]{pekin1.ps} } \caption{ Meson-meson scattering results of the non-perturbative chiral approach. For the data see references in \cite{dob,oller1,gue,ollernew}. Horizontal scale in MeV. } \label{fig3} \vspace*{-.3cm} \end{figure} \section{$\bar{K}N$ interaction in free space} The meson-baryon interaction Lagrangian at lowest order in momentum is given by \begin{equation} L_1^{(B)} = \langle \bar{B} i \gamma^{\mu} \frac{1}{4 f^2} [(\Phi \partial_{\mu} \Phi - \partial_{\mu} \Phi \Phi) B - B (\Phi \partial_{\mu} \Phi - \partial_{\mu} \Phi \Phi)] \rangle \ , \end{equation} where $\Phi$ represents the octet of pseudoscalar mesons and $B$ the octet of $1/2^+$ baryons. The symbol $\langle \rangle$ denotes the trace of SU(3) matrices. The coupled channel formalism requires evaluating the transition amplitudes between the different meson-baryon channels.
For $K^- p$ scattering there are ten channels, namely $K^-p$, $\bar{K}^0 n$, $\pi^0 \Lambda$, $\pi^0 \Sigma^0$, $\pi^+ \Sigma^-$, $\pi^- \Sigma^+$, $\eta \Lambda$, $\eta \Sigma^0$, $K^+ \Xi^-$ and $K^0 \Xi^0$, while in the case of $K^- n$ scattering there are six: $K^-n$, $\pi^0\Sigma^-$, $\pi^- \Sigma^0$, $\pi^- \Lambda$, $\eta \Sigma^-$ and $K^0 \Xi^-$. These amplitudes have the form \begin{equation} V_{ij}=-C_{ij}\frac{1}{4f^2}\overline{u}(p_i)\gamma^\mu u(p_j) (k_{i\mu}+k_{j\mu}) \ , \end{equation} where $p_j,p_i(k_j,k_i)$ are the initial, final momenta of the baryons (mesons) and $C_{ij}$ are SU(3) coefficients that can be found in Ref.~\cite{osetra}. At low energies the spatial components can be neglected and the amplitudes reduce to \begin{equation} V_{i j} = - C_{i j} \frac{1}{4 f^2} (k_j^0 + k_i^0) \ . \end{equation} The coupled-channel BS equations in the center of mass frame read \begin{equation} T_{i j} = V_{i j} + \overline{V_{i l} \; G_l \; T_{l j}} \ , \end{equation} \noindent where the indices $i,l,j$ run over all possible channels and $\overline{V_{i l} \; G_l \; T_{l j}}$ corresponds to the loop integral involving $V$, $T$ and the meson baryon propagators of $G$, all functions of the loop variable. However, as was shown in Ref.~\cite{osetra}, the off-shell part of $V_{il}$ and $T_{lj}$ goes into renormalization of coupling constants and $V_{il}$, $T_{lj}$ factorize outside the integral with their on-shell values, thus reducing the problem to one of inverting a set of algebraic equations. \begin{table}[ht] \caption{$K^-p$ threshold ratios and $K^-N$ scattering lengths} \begin{center} \begin{tabular}{|l|c|c|} \hline & This work & Exp. 
\\ \hline \rule[-6mm]{0mm}{15mm} $\gamma=\displaystyle\frac{\Gamma(K^-p\to \pi^+ \Sigma^-)}{ \Gamma(K^-p \to \pi^-\Sigma^+)}$ & 2.32 & 2.36$\pm$0.04 \cite{To71} \\ \hline \rule[-6mm]{0mm}{15mm} $R_c=\displaystyle\frac{\Gamma(K^-p\to {\rm charged})}{ \Gamma(K^-p \to {\rm all})}$ & 0.627 & 0.664$\pm$0.011 \cite{To71} \\ \hline \rule[-6mm]{0mm}{15mm} $R_n=\displaystyle\frac{\Gamma(K^-p\to \pi^0\Lambda)}{ \Gamma(K^-p \to {\rm neutral})}$ & 0.213 & 0.189$\pm$0.015 \cite{To71} \\ \hline $a_{K^-p}$ (fm) & $-$1.00 + i 0.94 & $-$0.67 + i 0.64 \cite{Adm81} \\ & & $-$0.98 (from Re($a$)) \cite{Adm81} \\ & & ($-$0.78$\pm$0.18) + i(0.49$\pm$0.37) \cite{Iw97}\\ & & \\ \hline $a_{K^-n}$ (fm) & 0.53 + i 0.62 & 0.37 + i 0.60 \cite{Adm81} \\ & & 0.54 (from Re($a$))\cite{Adm81} \\\hline \end{tabular} \end{center} \label{tab:table1} \end{table} The value of the cut-off, $q_{\rm max}=630$ MeV, was chosen to reproduce the $K^- p$ scattering branching ratios at threshold, while the weak decay constant, $f=1.15 f_\pi$, was taken in between the pion and kaon ones to optimize the position of the $\Lambda(1405)$ resonance. The predictions of the model for several scattering observables are summarized in Table \ref{tab:table1}. Cross sections for $K^- p$ scattering to different channels are also calculated in \cite{osetra} and good results are obtained for low energies of the kaons where the s-wave is dominant. A recent application of these methods in the S=0 sector is done in \cite{ramonet}, where the $\Delta(1232)$ resonance is also nicely reproduced. \section{Application to the photoproduction of meson baryon pairs in resonant states} As quoted above, a good description of the interaction of $K^-p$ and its coupled channels is obtained in terms of the lowest order Lagrangians and the Bethe Salpeter equation with a single cut off. One of the interesting features of the approach is the dynamical generation of the $\Lambda(1405)$ resonance just below the $K^-p$ threshold. 
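To give a feel for the numbers entering eq. (14), the sketch below evaluates the lowest order elastic $K^-p$ kernel at threshold. The SU(3) coefficient $C=2$ for the elastic $K^-p$ channel and the value $f_\pi=92.4$ MeV are assumptions quoted from the literature for illustration; only $f=1.15 f_\pi$ is taken from the text:

```python
import math

m_K  = 0.496            # kaon mass in GeV (illustrative input)
f_pi = 0.0924           # pion decay constant in GeV (assumed value)
f    = 1.15 * f_pi      # the choice of weak decay constant used in the text

C_KmP = 2.0             # assumed SU(3) coefficient for elastic K^-p

# eq. (14) at threshold, where k_i^0 = k_j^0 = m_K
V = -C_KmP * (m_K + m_K) / (4.0 * f ** 2)
print(V)  # in GeV^{-2}; negative in this convention (attractive channel)
```

The attraction in this channel is what allows the Bethe-Salpeter resummation of eq. (15) to generate the $\Lambda(1405)$ dynamically.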
The threshold behavior of the $K^-p$ amplitude is thus very much tied to the properties of this resonance. Modifications of these properties in a nuclear medium can substantially alter the $K^-p$ and $K^-$ nucleus interaction and experiments looking for these properties are most welcome. Some electromagnetic reactions appear well suited for these studies. Application of the chiral unitary approach to the $K^-p\rightarrow\gamma\Lambda$, $\gamma\Sigma^0$ reactions at threshold has been carried out in \cite{lee} and a fair agreement with experiment is found. In particular one sees there that the coupled channels are essential to get a good description of the data, increasing the $K^-p\rightarrow\gamma\Sigma^0$ rate by about a factor 16 with respect to the Born approximation. In a recent paper \cite{nac1} the $\gamma p\rightarrow K^+\Lambda(1405)$ reaction was proposed as a means to study the properties of the resonance, together with the $\gamma A\rightarrow K^+\Lambda(1405) A'$ reaction to see the modification of its properties in nuclei. The resonance $\Lambda(1405)$ is seen in its decay products in the $\pi\Sigma$ channel, but as shown in \cite{nac1} the sum of the cross sections for $\pi^0\Sigma^0$, $\pi^+\Sigma^-$, $\pi^-\Sigma^+$ production has the shape of the resonance $\Lambda(1405)$ in the I = 0 channel. Hence, the detection of the $K^+$ in the elementary reaction, looking at $d\sigma/dM_I$ ($M_I$ the invariant mass of the meson baryon system, which can be inferred from the $K^+$ momentum), is sufficient to get a clear $\Lambda(1405)$ signal. In nuclear targets Fermi motion blurs this simple procedure (just detecting the $K^+$), but the resonance properties can be reconstructed by observing the decay products in the $\pi\Sigma$ channel. In fig. 2 we show the cross sections predicted for the $\gamma p\rightarrow K^+ \Lambda(1405)$ reaction looking at $K^+\pi^0\Sigma^0$, $K^+ all$ and $K^+ \Lambda(1405)$ (alone).
All of them have approximately the same shape and strength given the fact that the I = 1 contribution is rather small. In the figure the dashed dotted line indicates what one should expect to see in nuclei, just detecting the $K^+$, from the effect of Fermi motion. \begin{figure}[ht] \centerline{ \includegraphics[width=0.4\textwidth,angle=-90]{combi.ps} } \caption{Cross section for $\gamma p\rightarrow K^+ X$ with $X=all$, $\pi^0\Sigma^0$, $\Lambda$(1405). } \label{fig3} \end{figure} The energy chosen for the photon is $E_\gamma$ = 1.7 GeV, which makes it suitable for experiments at SPring8/RCNP, where the experiment is planned \cite{nakano}, and TJNAF. One variant of this reaction is the crossed channel reaction $K^- p \rightarrow\Lambda(1405)\gamma$. This reaction, for a $K^-$ momentum in the 300 to 500 MeV/c range, shows clearly the $\Lambda(1405)$ resonant production \cite{nac2} and has the advantage that the analogous reaction in nuclei still allows the observation of the $\Lambda(1405)$ resonance with the mere detection of the photon, the Fermi motion effects being far more moderate than in the case of the $\gamma A\rightarrow K^+\Lambda(1405)X$ reaction, which requires larger photon momenta and induces a broad distribution of $M_I$ for a given $K^+$ momentum. \section{Photoproduction of resonant two meson states} Another application which can be done using the same reaction is the photoproduction of resonant two meson states, in particular the $f_0$(980) and $a_0$(980) resonances. These states appear in $L=0$ in isospin zero and one respectively. The scalar meson sector is very controversial and the chiral unitary theory has brought a new perspective on these states. In particular, it has been possible to identify the lightest scalar octet, made of the $\sigma$, $f_0(980)$, $a_0(980)$ and $\kappa(900)$ resonances. {\em All of them can be simply generated by unitarization of the lowest order ChPT}, with just a cutoff as a free parameter.
The $O(p^4)$ chiral parameters can be understood as the residual contact terms that appear when one integrates out heavier states and the resulting Lagrangian is that of ChPT. Hence, the values of the chiral constants can be related to the masses and widths of the preexisting heavier resonances (``Resonance Saturation Hypothesis''). Indeed, most of their values are saturated by vector resonances alone (that is vector meson dominance) but some other parameters still need the existence of scalar states. Recently \cite{ollernew}, using the N/D unitarization method with explicit resonances added to the lowest order ChPT Lagrangian, it has been established that these heavier scalar states should appear with a mass around 1.3 - 1.4 GeV for the octet and 1 GeV for the singlet. In addition, the $\sigma$, $\kappa$, $a_0$ and a strong contribution to the $f_0$ were also generated from the unitarization of the ChPT lowest order. These states still survive when the heavier scalars are removed. That agrees with our observation that the $\sigma$, $\kappa$, $f_0$ and $a_0$ are generated {\em independently of the chiral parameters}, that is, of the preexisting scalar nonet, which is heavier. In addition, it was also established in that work that the physical $f_0(980)$ resonance is a mixture between the discussed strong $K\bar{K}$ scattering contribution and the preexisting singlet resonance with a mass around 1 GeV. Since Chiral Perturbation Theory does not deal with quarks and gluons, it is very hard to make any conclusive statement about the nature of these states ($q\bar{q}$, four-quark, molecule, etc...), unless we make additional assumptions. However, any model of the nature of these states should be able to explain the different features of these resonances as they appear in the chiral unitary approach. In addition, it would be very interesting to obtain further information from other processes.
In the present case the reaction suggested is \cite{marco} $\gamma p\rightarrow p M$, where $M$ is either of the resonances $a_0(980)$ or $f_0(980)$. In practice the meson $M$ will decay into two mesons, $\pi\pi$ or $K\bar{K}$ in the case of the $f_0$(980) or $K\bar{K}$, $\pi\eta$ in the case of the $a_0$(980). \begin{figure}[ht] \centerline{ \includegraphics[width=0.4\textwidth,angle=-90]{fig6.ps} } \caption{Results for the photoproduction cross section on protons as a function of the invariant mass of the meson-meson system. } \label{fig3} \end{figure} In Fig.~3 we show the results for the 5 channels considered. We observe clear peaks for $\pi^+ \pi^-$, $\pi^0 \pi^0$ and $\pi^0 \eta$ production around 980 MeV. The peaks in $\pi^+ \pi^-$ and $\pi^0 \pi^0$ clearly correspond to the formation of the $f_0(980)$ resonance, while the one in $\pi^0 \eta$ corresponds to the formation of the $a_0(980)$. The $\pi^0 \pi^0$ cross section is $\frac{1}{2}$ of the $\pi^+ \pi^-$ one due to the symmetry factor. The $K^+ K^-$ and $K^0 \bar{K}^0$ production cross sections appear at energies higher than those of the resonances and hence do not show the resonance structure. Yet, final state interaction is very important and increases appreciably the $K^+ K^-$ production cross section for values close to threshold with respect to the Born approximation. It is interesting to notice that the $f_0$(980) resonance shows up as a peak in the reaction. This is in contrast to the cross section for $\pi \pi \rightarrow \pi \pi$ in $I=0$, which exhibits a minimum at the $f_0$ energy because of the interference between the $f_0$ contribution and the $\sigma(500)$ broad resonance. However, we should bear in mind that we have plotted there the contribution of the $f_0$ resonance alone. The tree level contact term and Bremsstrahlung diagrams, plus other contributions which would produce a background, are not considered there.
In any case it is interesting to quote in this respect that two reactions related from the dynamical point of view, which also involve the interaction of two mesons in the final state, the $\phi \rightarrow \pi^0 \pi^0 \gamma$ and $\phi \rightarrow \pi^0\eta \gamma$ decays, which have been measured recently at Novosibirsk \cite{Novo}, show clearly the $f_0$ and $a_0$ excitation, respectively, in the invariant mass spectra of the two mesons. A theoretical study along the lines reported here has been done in \cite{uge}, where a good description of the experimental spectra as well as the absolute rates is obtained. \section{ Summary} We have reported on the unitary approach to the meson meson and meson baryon interactions using chiral Lagrangians, which has proved to be an efficient method to extend the information contained in these Lagrangians to higher energies where $\chi PT$ cannot be used. This new approach has opened the doors to the investigation of many problems so far intractable with $\chi PT$, and a few examples have been reported here. We have applied these techniques to the photoproduction of the scalar mesons $f_0$(980) and $a_0$(980) and to the photoproduction of the $\Lambda(1405)$, a resonant meson baryon state in the $S=-1$ sector, and have found signals which are well within measuring range at present facilities. The realization of these experiments, confronted with the theoretical predictions, will provide new tests of these emerging pictures implementing chiral symmetry and unitarity, which for the moment represent the most practical approach to QCD at low energies. \section*{Acknowledgments.} \vspace{-0.4cm} This work is partly supported by DGICYT, contract number PB 96-0753.
\section{Introduction} The existence of solutions to the Poisson equation $$ \Delta u = f $$ on a complete Riemannian manifold $(M, g)$, for a given function $f$ on $M$, is a classical problem which has been the object of deep interest in the literature. Malgrange \cite{mal} obtained solvability of the Poisson equation for any smooth function $f$ with compact support, as a consequence of the existence of a Green's function for $-\Delta$ on every complete Riemannian manifold. Under integrability assumptions on $f$, existence of solutions has been established by Strichartz \cite{str} and Ni-Shi-Tam \cite[Theorem 3.2]{nst} (see also \cite[Lemma 2.3]{ni}). Moreover, in the same paper, the authors proved an existence result for the Poisson problem on manifolds with non-negative Ricci curvature under a sharp integral assumption involving suitable averages of $f$. This condition in particular is satisfied if $$ |f(x)|\leq \frac{C}{\big(1+r(x)\big)^{\alpha}} $$ for some $C>0$ and $\alpha>2$, where $r(x):=\operatorname{dist}(x,p)$ is the distance function of any $x\in M$ from a fixed reference point $p\in M$. In fact, they proved a more general result where the decay rate of $f$ is just assumed to be of order $1+\varepsilon$. Note that this result is sharp on the flat space $\mathbb{R}^{n}$. From now on let us consider solutions $u$ of the Poisson equation $\Delta u=f$ which can be represented as $$ u(x)=\int_{M} G(x,y)f(y)\,dy\,, $$ where $G(x,y)$ is a Green's function of $-\Delta$ on $M$ (see Section \ref{sect-prel} for further details). Munteanu-Sesum \cite{ms} addressed the case of manifolds with positive spectrum, i.e. $\lambda_1(M)>0$, and Ricci curvature bounded from below, obtaining existence of solutions under the pointwise decay assumption $$ |f(x)|\leq \frac{C}{\big(1+r(x)\big)^{\alpha}} $$ for some $C>0$ and $\alpha>1$. Note that this result is sharp on $\mathbb{H}^{n}$.
Their proof relies on very precise integral estimates on the minimal positive Green's function, which are inspired by the work of Li-Wang \cite{liwa1}. In \cite{cmp} the authors generalized the result in \cite{ms}, obtaining existence of solutions on manifolds with positive essential spectrum, i.e. $\lambda_1^{\text{ess}}(M)>0$, for source functions $f$ satisfying $$ \sum_{m=1}^{\infty}\frac{\theta_{R}(m+1)-\theta_{R}(m)}{\lambda_{1}\left(M\setminus B_{m}(p)\right)}\sup_{M\setminus B_m(p)}|f| < \infty, $$ for any $R>0$, where $\theta_{R}(m)$ is a function related to a lower bound on the Ricci curvature, locally on geodesic balls with center $p$ and radius $2R+m$. In particular, the authors showed in \cite[Corollary 1.3]{cmp} existence of solutions on Cartan-Hadamard manifolds with strictly negative Ricci curvature, whenever $$ -C\big(1+r(x)\big)^{\gamma_{1}} \leq \mathrm{Ric} \leq -\frac{1}{C}\big(1+r(x)\big)^{\gamma_{2}} ,\quad |f (x)| \leq \frac{C}{\big(1+r(x)\big)^{\alpha}}, $$ for some $C>0$ and $\gamma_{1},\gamma_{2}\geq 0$ with $\alpha>1+\frac{\gamma_{1}}{2}-\gamma_{2}$. Observe that the results in \cite{ms} and \cite{cmp} cannot be used whenever the Ricci curvature tends to zero at infinity fast enough (see \cite{jpw}) since, in this case, one has $\lambda_1^{\text{ess}}(M)=0$ (and so $\lambda_1(M)=0$). In particular the case of $\mathbb{R}^n$ is not covered. On the other hand, the result in \cite{nst} does not apply on manifolds with negative curvature. The purpose of our paper is to obtain a general result which includes, as special cases, both manifolds with strictly negative curvature and manifolds with Ricci curvature vanishing at infinity. Moreover, our result is sharp on spherically symmetric manifolds, and in particular on $\mathbb{R}^n$ and $\mathbb{H}^n$. Note that the condition $\lambda_1(M)>0$ is equivalent to the validity of the Poincar\'e inequality $$ \lambda_1(M)\int_M u^2\, dV \leq \int_M |\nabla u|^2\,dV $$ for any $u\in C^\infty_c(M)$. 
On the other hand, one has positive essential spectrum if and only if, for some compact subset $K\subset M$, one has $\lambda_1(M \setminus K)>0$ and $$ \lambda_1(M \setminus K)\int_M u^2\, dV \leq \int_M |\nabla u|^2\,dV $$ for any $u\in C^\infty_c(M\setminus K)$. Generalizing the previous inequalities, one says that $(M,g)$ satisfies a {\em weighted Poincar\'e inequality} with a non-negative weight function $\rho$ if \begin{equation}\label{wpi2} \int_M \rho \,v^2\, dV \leq \int_M |\nabla v|^2 \,dV \end{equation} for every $v\in C^\infty_c(M)$. If for any $R\geq R_0>0$ there exists a non-negative function $\rho_R$ such that \eqref{wpi2} holds for every $v\in C^\infty_c(M\setminus B_R(p))$ and for $\rho\equiv\rho_R$, we say that $(M,g)$ satisfies a {\em weighted Poincar\'e inequality at infinity}. In addition, inspired by \cite{liwa1}, we say that $(M,g)$ satisfies the property $\left(\mathcal{P}^{\infty}_{\rho_R}\right)$ if a weighted Poincar\'e inequality at infinity holds for the family of weights $\rho_R$ and the conformal $\rho_R$-metric defined by $$ g_{\rho_R} := \rho_R\, g $$ is complete for every $R\geq R_0$. The validity of a weighted Poincar\'e inequality on some classes of manifolds has been investigated in the literature. It is well known that on $\mathbb{R}^n$ inequality \eqref{wpi2} holds with $\rho(x)=\frac{(n-2)^2}{4}\frac{1}{r^2(x)}$. It is also called {\em Hardy inequality}. More in general, it holds on every Cartan-Hadamard manifold with $\rho(x)=\frac{C}{r^2(x)}$, for some $C>0$ (see \cite{car} and \cite{gri} for some refinement of this result). In order to state our main results, we need to introduce a (increasing) function $\omega(s)$ related to the value of the Ricci curvature on the annulus $B_{\frac{5}{4}s}(p)\setminus B_{\frac{3}{4}s}(p)$ (see \eqref{eq127} for the precise definition). In this paper we prove the following result. 
\begin{theorem} \label{teo1} Let $(M,g)$ be a complete non-compact Riemannian manifold satisfying the property $\left(\mathcal{P}^\infty_{\rho_R}\right)$ and let $f$ be a locally H\"older continuous function on $M$. If $$ \sum_{m=1}^{\infty}\Big(\omega(m+1)-\omega(m)+1\Big)\sup_{M\setminus B_m(p)}\left|\frac{f}{\rho_m}\right| < \infty, $$ then the Poisson equation \begin{equation*} \Delta u=f \quad\hbox{in } M \end{equation*} admits a classical solution $u$. \end{theorem} Assume that $\lambda_1^{\text{ess}}(M)>0$ and $$ \mathrm{Ric} \geq -C\big(1+r(x)\big)^{\gamma} $$ for some $\gamma\geq 0$. Then it is straightforward to see that $$ \omega(m+1)-\omega(m)\sim C\Big(\theta_{R}(m+1)-\theta_{R}(m)\Big) \sim C m^{\frac{\gamma}{2}} $$ for every $R>0$ and the property $\left(\mathcal{P}^\infty_{\rho_R}\right)$ holds for every $R$ with $\rho_R(x)=\lambda_1(M\setminus B_R(p))$. Thus $$ \Big(\omega(m+1)-\omega(m)+1\Big)\sup_{M\setminus B_m(p)}\left|\frac{f}{\rho_m}\right| \sim C\, \frac{\theta_{R}(m+1)-\theta_{R}(m)}{\lambda_{1}\left(M\setminus B_{m}(p)\right)}\sup_{M\setminus B_m(p)}|f| \,, $$ therefore our result is in accordance with those in \cite{ms} and \cite{cmp}. We recall that by \cite[Corollary 1.4, Lemma 1.5]{liwa1} the validity of a weighted Poincar\'e inequality \eqref{wpi2} on $M$ implies the non-parabolicity of the manifold; on the contrary, if $(M,g)$ is non-parabolic, then a weighted Poincar\'e inequality holds on $M$, with weight $$\rho(x):=\frac{|\nabla G(p,x)|^2}{4 G^2(p,x)},$$ where $G$ is the minimal positive Green's function on $(M,g)$. Exploiting this result, using similar techniques as in Theorem \ref{teo1}, we obtain the following refined result on complete non-compact non-parabolic manifolds. \begin{theorem} \label{teo2} Let $(M,g)$ be a complete non-compact non-parabolic Riemannian manifold with minimal positive Green's function $G$. Let $\rho(x)=\frac{|\nabla G(p,x)|^2}{4 G^2(p,x)}$ and let $f$ be a locally H\"older continuous function on $M$.
If $$ \sum_{m=1}^{\infty}\Big(\omega(m+1)-\omega(m)\Big)\sup_{M\setminus B_m(p)}\left|\frac{f}{\rho}\right| < \infty, $$ then the Poisson equation \begin{equation*} \Delta u=f \quad\hbox{in } M \end{equation*} admits a classical solution $u$. \end{theorem} \begin{rem} We explicitly observe that in Theorem \ref{teo2} the completeness of the conformal metric $g_\rho=\rho g$ is not required. As it was observed in \cite{liwa1}, the completeness of $g_\rho$ would hold if $G(p,x)\to 0$ as $r(x)\to \infty$, a condition that we do not need to assume here. \end{rem} It is well-known that $\mathbb{R}^n$ is a non-parabolic manifold if $n\geq3$, with minimal positive Green's function $G(x,y)=\frac{c_n}{|x-y|^{n-2}}$ for some positive constant $c_n$. Moreover the weighted Poincar\'e (Hardy) inequality holds on $\mathbb{R}^n$ with $$\rho(x)=\frac{|\nabla G(0,x)|^2}{4 G^2(0,x)}=\frac{(n-2)^2}{4}\frac{1}{|x|^2}.$$ In this case, using the definition \eqref{eq127} of the function $\omega(s)$, it is easy to see that $$ \omega(m+1)-\omega(m)\sim C \log \left(1+\frac{1}{m}\right) \sim \frac{C}{m}\,. $$ Hence we can apply Theorem \ref{teo2}, with $$ \Big(\omega(m+1)-\omega(m)\Big)\sup_{M\setminus B_m(p)}\left|\frac{f}{\rho}\right| \sim C\, m\, \sup_{M\setminus B_m(p)}\left|f\right| $$ and the convergence of the series follows whenever $|f(x)|\leq C/(1+r(x))^\alpha$ for some $\alpha>2$. This condition is optimal, as can be easily verified by explicit computations. In general, concerning Cartan-Hadamard manifolds, by using Theorem \ref{teo1} we improve \cite[Corollary 1.3]{cmp} by allowing the Ricci curvature to approach zero at infinity. \begin{cor}\label{cor-2} Let $(M,g)$ be a Cartan-Hadamard manifold and let $f$ be a locally H\"older continuous, bounded function on $M$.
If $$ -C\big(1+r(x)\big)^{\gamma_{1}} \leq \mathrm{Ric} \leq -\frac{1}{C}\big(1+r(x)\big)^{\gamma_{2}} ,\quad |f (x)| \leq \frac{C}{\big(1+r(x)\big)^{\alpha}}, $$ for some $C\geq 1$, $\gamma_1,\gamma_2\in\mathbb{R}$, $\gamma_{1}\geq\gamma_{2}$, $\gamma_1\geq0$ and $\alpha$ satisfying $$ \alpha > \begin{cases} 1+\frac{\gamma_1}{2}-\gamma_2 &\quad\hbox{if } \gamma_2\geq-2 \\ 3+\frac{\gamma_1}{2} &\quad\hbox{if } \gamma_2< -2 \end{cases} $$ then the Poisson equation \begin{equation*} \Delta u=f \quad\hbox{in } M \end{equation*} admits a classical solution $u$. \end{cor} \begin{rem}\label{rem-rot} In the special case $\gamma_{1}=\gamma_{2}=\gamma\geq 0$ the condition on $\alpha$ in the previous corollary becomes $$ \alpha > \begin{cases} 1-\frac{\gamma}{2}&\quad\hbox{if } \gamma\geq-2 \\ 2 &\quad\hbox{if } \gamma< -2\,. \end{cases} $$ In particular, if $(M,g)$ is the standard hyperbolic space $\mathbb{H}^n$, then $\gamma=0$. Thus we need $\alpha>1$, and this condition is sharp as observed above. We will also consider the case $\gamma<0$ in Subsection \ref{ssu} on model manifolds. \end{rem} The paper is organized as follows: in Section \ref{sect-prel} we collect some preliminary results and define precisely the function $\omega$; in Section \ref{sec-grad} we prove refined local gradient estimates for positive harmonic functions; in Section \ref{sec-est} we prove key estimates on the positive minimal Green's function $G(x,y)$ of a non-parabolic manifold, by means of the property $\left(\mathcal{P}^{\infty}_{\rho_R}\right)$; in Section \ref{sec-proofs} we prove Theorem \ref{teo1}; finally, in Section \ref{sec-ex} we prove Corollary \ref{cor-2} and show the optimality of the assumption in Theorem \ref{teo2} for rotationally symmetric manifolds. \ Finally we note that some results concerning the Poisson equation on manifolds satisfying a weighted Poincar\'e inequality have been very recently obtained in \cite{msw2}.
However, their assumptions and results are apparently completely different from ours. \ \section{Preliminaries} \label{sect-prel} Let $(M,g)$ be a complete non-compact $n$-dimensional Riemannian manifold. For any $x\in M$ and $R>0$, we denote by $B_{R}(x)$ the geodesic ball of radius $R$ centred at $x$ and let $\mathrm{Vol}(B_{R}(x))$ be its volume. We denote by $\mathrm{Ric}$ the Ricci curvature of $g$. For any $x \in M$, let $\mu(x)$ be the smallest eigenvalue of $\mathrm{Ric}$ at $x$. Thus, for any $V\in T_{x}M$ with $|V|=1$, $\mathrm{Ric}(V,V)(x) \geq \mu(x)$, and we assume that $\mu(x)\geq -\omega_0(r(x))$ for some $\omega_0\in C([0,\infty))$, $\omega_0\geq 0$. Hence, for any $x\in M$, we have \begin{equation}\label{eq3} \mathrm{Ric}(V,V)(x) \geq -(n-1) \frac{\varphi''(r(x))}{\varphi(r(x))}, \end{equation} for some $\varphi\in C^{\infty}((0,\infty))\cap C^{1}([0,\infty))$ with $\varphi(0)=0$ and $\varphi'(0)=1$. Note that $\varphi,\varphi',\varphi''$ are positive in $(0,\infty)$. We set $$ K_R(x):=\sup_{y\in B_{r(x)+R}\setminus B_{r(x)-R}}\frac{\varphi''(r(y))}{\varphi(r(y))} $$ for $r(x)>R>1$; $$ I_R(x):=\begin{cases} \sqrt{K_R(x)}\coth\left(\sqrt{K_R(x)} R/2\right)&\text{if }\,K_R(x)>0 \\ \frac{2}{R} &\text{if }\,K_R(x)=0; \end{cases} $$ \begin{align}\label{defQ} Q_{R}(x):=\max\left\{K_R(x), \frac{I_R(x)}{R}, \frac{1}{R^2}\right\}. \end{align} Note that $Q_{R}(x)\equiv Q_{R}(r(x))$. For any $z\in M$, let $\gamma$ be the minimal geodesic connecting $p$ to $z$. We define the function \begin{equation}\label{eq127} \omega(z)=\omega(r(z)):=\int_a^{r(z)} \sqrt{Q_{\frac{r(\gamma(s))}{4}}\big(r(\gamma(s))\big)}\,ds, \end{equation} for a given $a>0$. Note that $t\mapsto\omega(t)$ is increasing and so invertible. Under \eqref{eq3}, we know that \begin{equation}\label{eq6} \mathrm{Vol}(B_{R}(p)) \leq C \int_{0}^{R}\varphi^{n-1}(\xi)\,d\xi. \end{equation} Moreover, let $\operatorname{Cut}(p)$ be the {\em cut locus} of $p\in M$.
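To illustrate the definitions above, consider the model case $M=\mathbb{R}^n$, where one can take $\varphi(r)=r$, so that $\varphi''\equiv 0$. Then, for $R=r(x)/4$, $$ K_R(x)=0,\qquad I_R(x)=\frac{2}{R}=\frac{8}{r(x)},\qquad Q_R(x)=\max\left\{\frac{I_R(x)}{R},\frac{1}{R^2}\right\}=\frac{32}{r(x)^2}, $$ so that $\sqrt{Q_{r(x)/4}(x)}=C/r(x)$ and \eqref{eq127} gives $\omega(r)=C\log r+C'$; in particular $\omega(m+1)-\omega(m)\sim C/m$, as recalled in the Introduction.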
It is known that every complete Riemannian manifold admits a Green's function (see \cite{mal}), i.e.~a smooth function defined in $(M\times M)\setminus \{(x,y)\in M\times M:\,x=y\} $ such that $G(x,y)=G(y,x)$ and $\Delta_{y} G(x,y)=-\delta_{x}(y)$. We say that $(M,g)$ is non-parabolic if there exists a minimal positive Green's function $G(x,y)$ on $(M,g)$, and parabolic otherwise. We say that $(M,g)$ satisfies a {\em weighted Poincar\'e inequality} with a non-negative weight function $\rho$ if \begin{equation}\label{wpi} \int_M \rho \,v^2\, dV \leq \int_M |\nabla v|^2 \,dV \end{equation} for every $v\in C^\infty_c(M)$. If for any $R\geq R_0>0$ there exists a non-negative function $\rho_R$ such that \eqref{wpi} holds for every $v\in C^\infty_c(M\setminus B_R(p))$ and for $\rho\equiv\rho_R$, we say that $(M,g)$ satisfies a {\em weighted Poincar\'e inequality at infinity}. In addition, inspired by \cite{liwa1}, we say that $(M,g)$ satisfies the property $\left(\mathcal{P}^{\infty}_{\rho_R}\right)$ if a weighted Poincar\'e inequality at infinity holds for the family of weights $\rho_R$ and the conformal $\rho_R$-metric defined by $$ g_\rho := \rho_R\, g $$ is complete. With this metric we consider the $\rho$-distance function $$ r_\rho (x,y)=\inf_{\gamma} \, l_\rho (\gamma), $$ where the infimum of the lengths is taken over all curves joining $x$ and $y$, with respect to the metric $g_\rho$. For a fixed point $p\in M$, we write $$r_\rho(x) = r_\rho (p,x).$$ Note that $|\nabla r_\rho (x)|^2 = \rho(x)$. Finally, we denote by $$B^\rho_R(p)=\{x \in M: r_\rho(x)\leq R\}.$$ Let $\lambda_{1}(M)$ be the bottom of the $L^{2}$-spectrum of $-\Delta$. It is known that $\lambda_{1}(M)\in[0,+\infty)$ and that it is given by the variational formula $$ \lambda_{1}(M) = \inf_{v\in C^{\infty}_{c}(M)}\frac{\int_{M}|\nabla v|^{2}\,dV}{\int_{M}v^{2}\,dV}\,. $$ If $\lambda_{1}(M)>0$, then $(M,g)$ is non-parabolic (see \cite[Proposition 10.1]{gri1}).
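For instance, it is well known that the bottom of the spectrum of the hyperbolic space satisfies $$ \lambda_{1}(\mathbb{H}^{n})=\frac{(n-1)^{2}}{4}>0 \qquad \text{for every } n\geq 2, $$ so that $\mathbb{H}^{n}$ is non-parabolic by the above criterion.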
Whenever $(M,g)$ is non-parabolic, let $G_{R}(x,y)$ be the Green's function of $-\Delta$ in $B_{R}(z)$ satisfying zero Dirichlet boundary conditions on $\partial B_{R}(z)$, for some $z\in M$. We have that $R\mapsto G_{R}(x,y)$ is increasing and, for any $x,y\in M$, \begin{equation}\label{eq9} G(x,y) = \lim_{R\to\infty} G_{R}(x,y), \end{equation} locally uniformly in $(M\times M)\setminus \{(x,y)\in M\times M:\,x=y\} $. We define $\lambda_{1}(\Omega)$, with $\Omega$ an open subset of $M$, to be the first eigenvalue of $-\Delta$ in $\Omega$ with zero Dirichlet boundary conditions. It is well known that $\lambda_{1}(\Omega)$ is decreasing with respect to the inclusion of subsets. In particular $R\mapsto\lambda_{1}(B_{R}(x))$ is decreasing and $\lambda_{1}(B_{R}(x))\to \lambda_{1}(M)$ as $R\to\infty$. \ For any $x\in M$, for any $s>0$ and for any $0\leq a < b\leq +\infty$, we define \begin{align*} \mathcal{L}_{x}(s) &:= \{y \in M\,:\,G(x,y)=s \},\\ \mathcal{L}_{x}(a,b)&:= \{y \in M\,:\, a< G(x,y)< b \}. \end{align*} \ \section{Local gradient estimate for harmonic functions} \label{sec-grad} In this section we improve \cite[Lemma 3.1]{cmp}. We set $$ k_R(z):=\sup_{B_R(z)}\frac{\varphi''(r(y))}{\varphi(r(y))} $$ for $z\in M$ and $R>0$; $$ i_R(z):=\begin{cases} \sqrt{k_R(z)}\coth\left(\sqrt{k_R(z)} R/2\right)&\text{if }\,k_R(z)>0 \\ \frac{2}{R} &\text{if }\,k_R(z)=0. \end{cases} $$ \begin{lemma}\label{lemma00} Let $R>0$ and $z\in M$. Let $u\in C^{2}(B_{R}(z))$ be a positive harmonic function in $B_{R}(z)$. Then $$ |\nabla u(\xi)| \leq C \sqrt{\max\left\{k_R(z), \frac{i_R(z)}{R}, \frac{1}{R^2}\right\}}\, u(\xi)\quad\text{for any}\quad \xi\in B_{R/2}(z), $$ for some constant $C>0$. \end{lemma} \begin{proof} Following the classical argument of Yau, let $v:=\log u$. Then $$ \Delta v = - |\nabla v|^{2} .
$$ Let $\eta(\xi)=\eta(d(\xi))$, with $d(\xi):=\operatorname{dist}(\xi,z)$, be a smooth cutoff function such that $\eta(\xi)\equiv 1$ on $B_{R/2}(z)$, with support in $B_{R}(z)$, $0\leq \eta\leq 1$ and $$-\frac{4}{R}\leq \frac{\eta'}{\eta^{1/2}} \leq 0 \quad\text{and}\quad \frac{|\eta''|}{\eta} \leq \frac{8}{R^{2}}.$$ Let $w=\eta^{2}|\nabla v|^{2}$. Then \begin{align*} \frac12 \Delta w &= \frac12 \eta^{2} \Delta |\nabla v|^{2} + \frac12 |\nabla v|^{2} \Delta \eta^{2} + \langle \nabla |\nabla v|^{2},\nabla \eta^{2}\rangle. \end{align*} From the classical Bochner-Weitzenb\"ock formula and Newton's inequality, one has \begin{align*} \frac12 \Delta |\nabla v|^{2} & = |\nabla^{2} v|^{2} + \mathrm{Ric}(\nabla v,\nabla v) + \langle \nabla v,\nabla \Delta v\rangle \\ &\geq \frac1n (\Delta v)^{2} - (n-1) \frac{\varphi''}{\varphi} |\nabla v|^{2} - \langle \nabla |\nabla v|^{2},\nabla v\rangle \\ &= \frac1n |\nabla v|^{4} - (n-1) \frac{\varphi''}{\varphi} |\nabla v|^{2} - \langle \nabla |\nabla v|^{2},\nabla v\rangle . \end{align*} Moreover, by the Laplacian comparison, since $\mathrm{Ric}\geq -(n-1)k_R(z)$ in $B_R(z)$, we have \begin{align*} \frac12 \Delta \eta^{2} &= \eta \eta' \Delta d + \eta \eta'' + (\eta')^{2} \\ &\geq (n-1)i_R(z)\eta\eta' + \eta \eta''+ (\eta')^{2}\\ &\geq -\frac{4}{R} \left((n-1)i_R(z)+\frac{2}{R}\right)\eta \end{align*} pointwise in $B_{R}(z)\setminus (\{z\}\cup \operatorname{Cut}(z))$ and weakly on $B_{R}(z)$.
Thus, \begin{align*} \frac12 \Delta w &\geq \frac1n \frac{w^{2}}{\eta^{2}}-(n-1)\frac{\varphi''}{\varphi}w - \frac{4}{R}\left((n-1)i_R(z)+\frac{2}{R}\right)\frac{w}{\eta} \\ &-4\frac{|\eta'|^{2}}{\eta^{2}}w + \frac{2}{\eta}\langle \nabla w,\nabla \eta\rangle-\langle \nabla w,\nabla v\rangle + \frac{2}{\eta}\langle \nabla v,\nabla \eta \rangle w \\ &\geq \frac1n \frac{w^{2}}{\eta^{2}}-(n-1)\frac{\varphi''}{\varphi}w - \frac{4}{R}\left((n-1)i_R(z)+\frac{2}{R}\right)\frac{w}{\eta} \\ & + \frac{2}{\eta}\langle \nabla w,\nabla \eta\rangle-\langle \nabla w,\nabla v\rangle - \frac{64}{R^{2}}\frac{ w}{\eta}-\frac{8}{R}\frac{ w^{3/2}}{\eta^{3/2}}\\ &\geq \frac{1}{2n} \frac{w^{2}}{\eta^{2}}-(n-1)\frac{\varphi''}{\varphi}w - \frac{4}{R}\left((n-1)i_R(z)+\frac{18+8n}{R}\right)\frac{w}{\eta} \\ & + \frac{2}{\eta}\langle \nabla w,\nabla \eta\rangle-\langle \nabla w,\nabla v\rangle. \end{align*} Let $q$ be a maximum point of $w$ in $\overline{B}_{R}(z)$. Since $w\equiv0$ on $\partial B_{R}(z)$, we have $q\in B_{R}(z)$. First assume $q\notin \operatorname{Cut}(z)$. At $q$, we obtain \begin{align*} 0 &\geq \left[\frac{1}{2n} w - (n-1)\frac{\varphi''}{\varphi}-\frac{4}{R}\Big((n-1)i_R(z)+\frac{18+8n}{R}\Big)\right]w. \end{align*} So $$ w(q)\leq 2n(n-1)\frac{\varphi''\big(r(q)\big)}{\varphi\big(r(q)\big)}+\frac{8n(n-1)}{R}i_R(z)+\frac{144n+64n^2}{R^2}. $$ Thus, for any $\xi \in B_{R/2}(z)$, \begin{align*} |\nabla v(\xi)|^{2}&\leq 2n(n-1)\frac{\varphi''\big(r(q)\big)}{\varphi\big(r(q)\big)}+\frac{8n(n-1)}{R}i_R(z)+\frac{144n+64n^2}{R^2}\\ &\leq 2n(n-1)k_R(z)+\frac{8n(n-1)}{R}i_R(z)+\frac{144n+64n^2}{R^2}\,. \end{align*} We get $$ \frac{|\nabla u(\xi)|}{u(\xi)}=|\nabla v(\xi)| \leq C \sqrt{\max\left\{k_R(z), \frac{i_R(z)}{R}, \frac{1}{R^2}\right\}}\,, $$ for some constant $C>0$. By the standard Calabi trick (see \cite{cal, cy}), the same estimate can be obtained when $q\in \operatorname{Cut}(z)$. This concludes the proof of the lemma.
\end{proof} As a corollary we have the following \begin{cor}\label{lemma0} Let $(M,g)$ be non-parabolic. If $r(z)>R>0$, then $$ |\nabla G(p,z)| \leq C \sqrt{Q_{R}(z)}\, G(p,z), $$ for some constant $C>0$. \end{cor} \ \section{Green's function estimates} \label{sec-est} \subsection{Pointwise estimate} \begin{lemma}\label{lemma1} Let $(M,g)$ be non-parabolic and let $a>0$ and $y\in M\setminus B_{a}(p)$. Then $$ A^{-1} \exp \left(-B\, \omega(y)\right) \leq G(p,y) \leq A \exp \left(B\, \omega(y)\right), $$ with $A:=\max\{ \max_{\partial B_a(p)}G(p,\cdot), \left(\min_{\partial B_a(p)}G(p,\cdot)\right)^{-1}\}$ and $B=2n(n-1)$. \end{lemma} \begin{proof} Let $y\in M\setminus \overline{B_{a}(p)}$ with $a> 0$, consider the minimal geodesic $\gamma$ joining $p$ to $y$, and let $y_{0}\in\partial B_{a}(p)$ be a point of intersection of $\gamma$ with $\partial B_{a}(p)$. Since $G(p,\cdot)$ is harmonic in $B_{r(z)/4}(z)$ for every $z\in \gamma$ with $r(z)\geq a$, by Corollary \ref{lemma0} we get $$ |\nabla G(p,z)| \leq C \sqrt{Q_{r(z)/4}(z)}\,G(p,z) . $$ We have \begin{align*} G(p,y)&=G(p,y_0)+\int_{a}^{r(y)}\langle \nabla G(p,\gamma(s)), \dot{\gamma}(s)\rangle \,ds \\ &\leq G(p,y_0) + C\int_{a}^{r(y)} \sqrt{Q_{\frac{r(\gamma(s))}{4}}\big(r(\gamma(s))\big)} G(p,\gamma(s)) \,ds. \end{align*} By Gronwall's inequality, $$ G(p,y) \leq G(p,y_0) \exp\left(C\int_{a}^{r(y)} \sqrt{Q_{\frac{r(\gamma(s))}{4}}\big(r(\gamma(s))\big)}\,ds\right)\leq A \exp\left(B\,\omega(y)\right), $$ with $A:=\max\{ \max_{\partial B_a(p)}G(p,\cdot), \left(\min_{\partial B_a(p)}G(p,\cdot)\right)^{-1}\}$ and $B=2n(n-1)$. Similarly, $$ G(p,y) \geq A^{-1} \exp\left(-B\,\omega(y)\right). $$ \end{proof} \begin{rem} \label{remark101} We also note that $$ \mathcal{L}_{p}\left(A \exp \left(B\, \omega(a)\right),\infty\right) \subset B_{a}(p). $$ In fact, let $y\in M\setminus B_a(p)$ and take $j>r(y)$.
Since $G_{j}(p,y)\leq G(p,y)$ and $G_{j}(p,\cdot)\equiv 0$ on $\partial B_{j}(p)$, by Lemma \ref{lemma1} we have $$ G_{j}(p,y)\leq A \exp \left(B \omega(a)\right)\quad\text{on}\quad \partial\left(B_{j}(p)\setminus B_{a}(p)\right); $$ note that the right hand side is independent of $j$. Since $y\mapsto G_{j}(p,y)$ is harmonic in $B_{j}(p)\setminus B_{a}(p)$, by the maximum principle, $$ G_{j}(p,y)\leq A \exp \left(B \omega(a)\right)\quad\text{in}\quad B_{j}(p)\setminus B_{a}(p). $$ Sending $j\to\infty$, by \eqref{eq9}, we obtain $$ G(p,y)\leq A \exp \left(B \omega(a)\right)\quad\text{in}\quad M\setminus B_{a}(p), $$ and the claim follows. \end{rem} \subsection{Auxiliary estimates} \begin{lemma}\label{lemma2} Let $(M,g)$ be non-parabolic. For any $s>0$, there holds $$ \int_{\mathcal{L}_{p}(s)}|\nabla G(p,y)|\,dA(y) = 1, $$ where $dA(y)$ is the $(n-1)$-dimensional Hausdorff measure on $\mathcal{L}_{p}(s)$. As a consequence, by the co-area formula, for any $0<a<b$, there holds $$ \int_{\mathcal{L}_{p}(a,b)}\frac{|\nabla G(p,y)|^2}{G(p,y)}\,dy = \log\left(\frac{b}{a}\right) \,. $$ \end{lemma} For the proof see \cite{ms}. Moreover, we get the following weighted integrability property for the Green's function. \begin{lemma}\label{lemmastoc} Assume that $(M,g)$ satisfies the property $\left(\mathcal{P}^\infty_{\rho_R}\right)$. Fix $m\geq R_0$. Then, for any $R_1>0$ such that $B_m(p)\subset B^{\rho_m}_{R_1}(p)$, one has $$ \int_{M\setminus B^{\rho_m}_{2R_1}(p)} \rho_m(y)\,|G(p,y)|^2\,dy < \infty \,. $$ \end{lemma} \begin{rem} Note that $B_m(p)\subset B^{\rho_m}_{R_1}(p)$ for every $R_1$ large enough. \end{rem} \begin{proof} In order to simplify the notation, let $\rho\equiv \rho_m$. Fix $R_1>0$ such that $B_m(p)\subset B^\rho_{R_1}(p)$ and let $\phi$ be defined as \begin{equation*} \phi(x):=\begin{cases} 0 & \textrm{on } B^\rho_{R_1}(p) \\ \frac{r_\rho(x)-R_1}{R_1} & \textrm{on } B^\rho_{2R_1}(p)\setminus B^\rho_{R_1}(p)\\ 1 & \textrm{on } M\setminus B^\rho_{2R_1}(p) \,.
\end{cases} \end{equation*} Let $R>2R_1$ and let $G^{\rho}_{R}(p,y)$ be the Green's function of $-\Delta$ in $B^{\rho}_{R}(p)$ satisfying zero Dirichlet boundary conditions on $\partial B^{\rho}_{R}(p)$. Following the proof in \cite{liwa1}, since $G^{\rho}_R$ is harmonic in $B^{\rho}_{R}(p)\setminus\{p\}$ and $\phi$ vanishes in a neighbourhood of $p$, one has \begin{align*} \int_{B^{\rho}_R(p)}|\nabla \left(\phi \,G^{\rho}_R\right)|^2\,dV &= \int_{B^{\rho}_R(p)}|\nabla \phi|^2 \left(G^{\rho}_R\right)^2\,dV + \int_{B^{\rho}_R(p)}|\nabla G^{\rho}_R |^2 \phi^2\,dV\\ &+ 2 \int_{B^{\rho}_R(p)}\langle \nabla \phi, \nabla G^{\rho}_R \rangle \phi G^{\rho}_R \,dV \\ &= \int_{B^{\rho}_R(p)}|\nabla \phi|^2 \left(G^{\rho}_R\right)^2\,dV + \frac{1}{2}\int_{B^{\rho}_R(p)}\Delta\left(G^{\rho}_R\right)^2 \phi^2\,dV\\ &+ 2 \int_{B^{\rho}_R(p)}\langle \nabla \phi, \nabla G^{\rho}_R \rangle \phi G^{\rho}_R \,dV \\ &= \int_{B^{\rho}_R(p)}|\nabla \phi|^2 \left(G^{\rho}_R\right)^2\,dV, \end{align*} where the last equality follows by integration by parts and the fact that $G^{\rho}_{R}(p,y)$ vanishes on $\partial B^{\rho}_{R}(p)$. Hence, the weighted Poincar\'e inequality yields \begin{align*} \int_{M\setminus B^\rho_{R_1}(p)} \rho \,\left(G^{\rho}_R\right)^2\phi^2\,dV \leq \int_{B^{\rho}_R(p)}|\nabla \left(\phi \,G^{\rho}_R\right)|^2\,dV \leq \frac{1}{R_1^2}\int_{B^{\rho}_{2R_1}(p)\setminus B^{\rho}_{R_1}(p)}\rho\,\left(G^{\rho}_R\right)^2\,dV\,. \end{align*} Letting $R\rightarrow \infty$, by Fatou's lemma and the locally uniform convergence of $G_R^\rho$ to $G$, we get $$ \int_{M\setminus B^\rho_{2R_1}(p)} \rho \,G^2\,dV \leq \frac{1}{R_1^2}\int_{B^{\rho}_{2R_1}(p)\setminus B^{\rho}_{R_1}(p)}\rho\, G^{2}\,dV, $$ and the conclusion follows. \end{proof} We expect a decay estimate similar to the one in \cite[Theorem 2.1]{liwa1}; however, we omit this refinement, since it is not needed in our arguments.
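As a consistency check, in the Euclidean case $M=\mathbb{R}^{n}$, $n\geq 3$, with $p=0$, minimal positive Green's function $G(0,y)=c_{n}|y|^{2-n}$ and the weight $\rho(x)=\frac{(n-2)^{2}}{4}|x|^{-2}$ recalled in the Introduction, one computes directly $$ \int_{\{|y|\geq 1\}}\rho(y)\,G(0,y)^{2}\,dy = C\int_{1}^{\infty} r^{-2}\,r^{2(2-n)}\,r^{n-1}\,dr = C\int_{1}^{\infty} r^{1-n}\,dr<\infty, $$ since $n\geq 3$, in accordance with Lemma \ref{lemmastoc}.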
\subsection{Integral estimates on level sets} We begin by noting that, using Remark \ref{remark101} and the fact that $G(p,\cdot)\in L^1_{\text{loc}}(M)$, one has the following integral estimate on large level sets. \begin{proposition}\label{lemma3} Let $(M,g)$ be non-parabolic. Choose $A,B$ as in Lemma \ref{lemma1}. Then \begin{align*} \int_{\mathcal{L}_{p}\left(A \exp \left(B\, \omega(a)\right),\infty\right)} &G(p,y)\,dy <\infty. \end{align*} \end{proposition} For intermediate level sets, we get the following key inequality. \begin{proposition}\label{claim2} Assume that $(M,g)$ satisfies the property $\left(\mathcal{P}^\infty_{\rho_R}\right)$. Then, there exists a positive constant $C$ such that, for any function $f$ and any $0<\delta<1$, $\varepsilon >0$ satisfying $\mathcal{L}_p \left(\frac{\delta\varepsilon}{2},2\varepsilon\right) \subset M \setminus B_m(p)$ for some $m>R_0$, one has $$ \left|\int_{\mathcal{L}_{p}(\delta \varepsilon, \varepsilon)} G(p,y)\,f(y)\,dy \right| \leq C \left(-\log\delta +1\right) \sup_{\mathcal{L}_{p}(\delta \varepsilon, \varepsilon)} \left|\frac{f}{\rho_m}\right|\,. $$ \end{proposition} \begin{proof} We follow the general argument in \cite{liwa1} and \cite{ms}; however, some relevant differences arise, due to the use of the property $\left(\mathcal{P}^\infty_{\rho_R}\right)$.
Let $\phi:=\chi \psi$ with \begin{equation*} \chi(y):=\begin{cases} \frac{1}{\log 2} \log \left(\frac{2 G(p,y)}{\delta \varepsilon}\right) & \textrm{on } \mathcal{L}_p \left(\frac{\delta\varepsilon}{2},\delta\varepsilon\right)\\ 1 & \textrm{on } \mathcal{L}_p \left(\delta\varepsilon,\varepsilon\right)\\ \frac{1}{\log 2} \log \left(\frac{2 \varepsilon}{G(p,y)}\right) & \textrm{on } \mathcal{L}_p \left(\varepsilon,2\varepsilon\right) \\ 0 & \textrm{elsewhere} \end{cases} \end{equation*} and, for any fixed $R>0$, \begin{equation*} \psi(y):=\begin{cases} 1 & \textrm{on } B^{\rho_m}_{R}(p) \\ R+1-r_{\rho_m}(y) & \textrm{on } B^{\rho_m}_{R+1}(p)\setminus B^{\rho_m}_{R}(p)\\ 0 & \textrm{on } M\setminus B^{\rho_m}_{R+1}(p) \,. \end{cases} \end{equation*} By the weighted Poincar\'e inequality at infinity we get \begin{align*} \left|\int_{\mathcal{L}_{p}(\delta \varepsilon, \varepsilon)\cap B^{\rho_m}_{R}(p)} G(p,y)\,f(y)\,dy \right| &\leq \int_{\mathcal{L}_{p}(\delta \varepsilon, \varepsilon)\cap B^{\rho_m}_{R}(p)} G(p,y)\,|f(y)|\,dy \\ &\leq \sup_{\mathcal{L}_{p}(\delta \varepsilon, \varepsilon)\cap B^{\rho_m}_{R}(p)} \left|\frac{f}{\rho_m}\right| \, \int_{M} \rho_m(y)\,G(p,y) \phi^2(y)\,dy \\ &\leq \sup_{\mathcal{L}_{p}(\delta \varepsilon, \varepsilon)\cap B^{\rho_m}_{R}(p)} \left|\frac{f}{\rho_m}\right| \, \int_{M} \left|\nabla \left( \sqrt{G(p,y)} \phi(y)\right)\right|^2\,dy \,. \end{align*} We estimate \begin{align*} \int_{M} \left|\nabla \left( \sqrt{G(p,y)} \phi(y)\right)\right|^2\,dy &\leq \frac{1}{2} \int_{\mathcal{L}_{p}(\frac{\delta \varepsilon}{2}, 2\varepsilon)} \frac{|\nabla G(p,y)|^2}{G(p,y)}\,dy + 2 \int_M G(p,y)|\nabla \phi|^2 \,dy \\ &\leq C(-\log\delta+1) + 2 \int_M G(p,y)|\nabla \phi|^2 \,dy, \end{align*} where we used Lemma \ref{lemma2} in the last step.
On the other hand, \begin{align*} \int_M G(p,y)|\nabla \phi|^2 \,dy &\leq 2 \int_M G(p,y)|\nabla \chi|^2 \psi^2 \,dy + 2 \int_M G(p,y)|\nabla \psi |^2 \chi^2 \,dy \\ &\leq \frac{2}{(\log 2)^2} \int_{\mathcal{L}_{p}(\frac{\delta \varepsilon}{2}, 2\varepsilon)} \frac{|\nabla G(p,y)|^2}{G(p,y)}\,dy \\ &\quad\ + 2 \int_{B^{\rho_m}_{R+1}(p)\setminus B^{\rho_m}_{R}(p)} \rho_m(y) \,G(p,y) \chi^2 \,dy \\ &\leq C(-\log\delta+1)+ \frac{4}{\delta\varepsilon} \int_{B^{\rho_m}_{R+1}(p)\setminus B^{\rho_m}_{R}(p)} \rho_m(y) \,G^2(p,y) \,dy \,. \end{align*} Now we let $R\rightarrow \infty$ and use Lemma \ref{lemmastoc}; the claim follows. \end{proof} In the special case when $M$ is non-parabolic, with positive minimal Green's function $G$ and with the weight $\rho(x)=\frac{|\nabla G(p,x)|^2}{4 G^2(p,x)}$, we have the following refinement of Proposition \ref{claim2}. \begin{proposition}\label{claim3} Assume that $(M,g)$ is non-parabolic, with positive minimal Green's function $G$, and consider the weight $\rho(x)=\frac{|\nabla G(p,x)|^2}{4 G^2(p,x)}$. Then there exists a positive constant $C$ such that for any function $f$ and any $0<\delta<1$, $\varepsilon >0$ one has $$ \left|\int_{\mathcal{L}_{p}(\delta \varepsilon, \varepsilon)} G(p,y)\,f(y)\,dy \right| \leq C \left(-\log\delta \right) \sup_{\mathcal{L}_{p}(\delta \varepsilon, \varepsilon)} \left|\frac{f}{\rho}\right|\,.
$$ \end{proposition} \begin{proof} We have \begin{align*} \left|\int_{\mathcal{L}_{p}(\delta \varepsilon, \varepsilon)} G(p,y)\,f(y)\,dy \right| &\leq\sup_{\mathcal{L}_{p}(\delta \varepsilon, \varepsilon)} \left|\frac{f}{\rho}\right| \left(\int_{\mathcal{L}_{p}(\delta \varepsilon, \varepsilon)} G(p,y)\,\rho(y)\,dy\right)\\ &=\frac{1}{4}\sup_{\mathcal{L}_{p}(\delta \varepsilon, \varepsilon)} \left|\frac{f}{\rho}\right| \left(\int_{\mathcal{L}_{p}(\delta \varepsilon, \varepsilon)} \frac{|\nabla G(p,y)|^2}{G(p,y)}\,dy\right)\\ &=\frac{1}{4}\left(-\log\delta \right)\sup_{\mathcal{L}_{p}(\delta \varepsilon, \varepsilon)} \left|\frac{f}{\rho}\right|, \end{align*} where we have used Lemma \ref{lemma2} in the last equality. \end{proof} \ \section{Proof of Theorem \ref{teo1}} \label{sec-proofs} In order to prove Theorem \ref{teo1}, we will show that $$ |u(x)|=\left| \int_{M}G(x,y)f(y)\,dy \right| \leq v(x), $$ with $v\in C^{0}(M)$. We divide the proof into two parts: we first consider the case when $(M,g)$ is non-parabolic, and then the case when it is parabolic. \begin{proof}[Proof of Theorem \ref{teo1}] {\bf Case 1:} {\em $(M,g)$ non-parabolic.} \ By assumption, $(M,g)$ satisfies $\left(\mathcal{P}_{\rho_R}^\infty\right)$. Let $x\in M$ and choose $R=R(x)>R_0$ large enough so that $x\in B_R (p)$. One has \begin{align*} \left|\int_M G(x,y)\,f(y)\, dy\right| &\leq \left| \int_{B_R(p)} G(x,y)\,f(y)\,dy \right|+\left|\int_{M\setminus B_R(p)} G(x,y)\,f(y)\,dy\right|\\ &\leq C_1(x) + \int_{M\setminus B_R(p)} G(x,y)\,|f(y)|\,dy, \end{align*} since $G(x,\cdot)\in L^1_{\text{loc}}(M)$. Hence, by Harnack's inequality, we have \begin{align}\label{eq501} \left|\int_M G(x,y)\,f(y)\, dy\right| &\leq C_1(x) + C_2(x)\int_{M\setminus B_R(p)} G(p,y)\,|f(y)|\,dy \\ \nonumber &\leq C_1(x) + C_2(x)\int_{M} G(p,y)\,|f(y)|\,dy \,, \end{align} where $C_2(x)$ can be chosen as the constant in the Harnack inequality for the ball $B_{r(x)+1}(p)$.
Then we estimate \begin{align*} \int_{M}G(p,y)\,|f(y)|\,dy &= \int_{\mathcal{L}_{p}\left(0, \,A \exp \left(B\, \omega(a)\right)\right)} G(p,y)\,|f(y)|\,dy \\ &\,\,\,+ \int_{\mathcal{L}_{p}\left(A \exp \left(B\, \omega(a)\right),\infty\right)} G(p,y)\,|f(y)|\,dy \,. \end{align*} By Proposition \ref{lemma3} and Remark \ref{remark101} we get \begin{align}\label{eq128} \int_{M}G(p,y)\,|f(y)|\,dy &\leq \int_{\mathcal{L}_{p}\left(0, A \exp \left(B\, \omega(a)\right)\right)} G(p,y)\,|f(y)|\,dy + C_3(a) \end{align} for some positive constant $C_3(a)$. To estimate the first integral, we observe that, for any $m_{0}\geq a$ one has \begin{align}\label{eq129} \int_{\mathcal{L}_{p}\left(0, \,A \exp \left(B\, \omega(a)\right)\right)} &G(p,y)\,|f(y)|\,dy = \int_{\mathcal{L}_{p}\left(0, \,(2A)^{-1}\exp(-B\omega(m_{0}))\right)}G(p,y)\,|f(y)|\,dy \nonumber \\ &\quad+ \int_{\mathcal{L}_{p}\left((2A)^{-1}\exp(-B\omega(m_{0})),\,A \exp \left(B\, \omega(a)\right)\right)}G(p,y)\,|f(y)|\,dy \,. \end{align} We need the following lemma. \begin{lemma}\label{lemma5} Choose $A,B$ as in Lemma \ref{lemma1}. For any $m\geq m_0\geq a$ one has \begin{equation}\label{eq400}\mathcal{L}_{p}\left(0, A^{-1}\exp(-B\omega(m))\right) \subset M \setminus B_m(p). \end{equation} \end{lemma} \begin{proof} Since $m_{0}\geq a$, Remark \ref{remark101} implies \begin{equation}\label{eq300} \mathcal{L}_{p}\left(0, A^{-1}\exp(-B\omega(m_{0}))\right) \subset\mathcal{L}_{p}\left(0, A^{-1} \exp \left(-B\, \omega(a)\right)\right)\subset M\setminus B_{a}(p) . \end{equation} If $$ z\in \mathcal{L}_{p}\left(0, A^{-1}\exp(-B\omega(m))\right) \subset M\setminus B_{a}(p) \,, $$ then by Lemma \ref{lemma1} $$ A^{-1}\exp(-B\omega(m)) \geq G(p,z) \geq A^{-1}\exp(-B\omega(z)) \,. $$ Thus, $$ \omega(z)\geq \omega(m) $$ and, by monotonicity of $\omega$, we obtain $r(z)\geq m$.
\end{proof} In particular, we get $$ \mathcal{L}_{p}\left(0, (2A)^{-1}\exp(-B\omega(m_{0}))\right) \subset \mathcal{L}_{p}\left(0, A^{-1}\exp(-B\omega(m_{0}))\right) \subset M\setminus B_{m_0}(p). $$ Thus, $$ \mathcal{L}_{p}\left((2A)^{-1}\exp(-B\omega(m_{0})),\,A \exp \left(B\, \omega(a)\right)\right) \subset B_{m_{0}}(p)\,. $$ Then, since $G(p,\cdot)\in L^1_{\text{loc}}(M)$, we get \begin{align}\label{eq130} \int_{\mathcal{L}_{p}\left((2A)^{-1}\exp(-B\omega(m_{0})),\,A \exp \left(B\, \omega(a)\right)\right)}G(p,y)\,|f(y)|\,dy \leq C_4(a,m_0). \end{align} Now, for any $m\geq m_{0}$, let \begin{equation}\label{11} \varepsilon:=(2A)^{-1}\exp(-B\omega(m)),\quad\quad\delta:=\exp(B\omega(m)-B\omega(m+1)). \end{equation} By Lemma \ref{lemma5}, $$ \mathcal{L}_p(0,2\varepsilon) \subset M\setminus B_m(p). $$ Hence we can apply Proposition \ref{claim2}, obtaining \begin{align}\label{eq201} &\int_{\mathcal{L}_{p}\left(0, (2A)^{-1}\exp(-B\omega(m_{0}))\right)}G(p,y)\,|f(y)|\,dy \\\nonumber &= \sum_{m= m_{0}}^{\infty} \int_{\mathcal{L}_{p}\left((2A)^{-1}\exp(-B\omega(m+1)), (2A)^{-1}\exp(-B\omega(m))\right)}G(p,y)\,|f(y)|\,dy \\\nonumber &\leq C \sum_{m= m_{0}}^{\infty}\left(\omega(m+1)-\omega(m)+1\right)\sup_{\mathcal{L}_{p}\left((2A)^{-1}\exp(-B\omega(m+1)), (2A)^{-1}\exp(-B\omega(m))\right)}\left|\frac{f}{\rho_m}\right|\\ \nonumber &\leq C \sum_{m= m_{0}}^{\infty}\left(\omega(m+1)-\omega(m)+1\right)\sup_{\mathcal{L}_{p}\left(0, A^{-1}\exp(-B\omega(m))\right)}\left|\frac{f}{\rho_m}\right|\\ \nonumber &\leq C \sum_{m= m_{0}}^{\infty}\left(\omega(m+1)-\omega(m)+1\right)\sup_{M \setminus B_m(p)}\left|\frac{f}{\rho_m}\right| <\infty \,, \end{align} where in the last inequality we used Lemma \ref{lemma5}. The proof of Theorem \ref{teo1} is complete in this case. \ \noindent {\bf Case 2:} {\em $(M,g)$ parabolic.} \ Let $G(x,y)$ be a Green's function on $M$ (which is positive inside a certain ball, and negative outside). Fix any $R>0$ and let $\rho \equiv\rho_{R_0}$.
Note that, arguing as in the proof of \eqref{eq501}, it is sufficient to estimate \begin{align*} \int_{M}|G(p,y)||f(y)|\,dy &= \int_{M\setminus B^\rho_{R}(p)}|G(p,y)||f(y)|\,dy + \int_{B^\rho_{R}(p)}|G(p,y)||f(y)|\,dy \\ &\leq \int_{M\setminus B^\rho_{R}(p)}|G(p,y)||f(y)|\,dy+ C, \end{align*} since $G(p,\cdot)\in L^{1}_{\rm{loc}}(M)$ and $f$ is locally bounded. We have that $$ M\setminus B^\rho_{R}(p) = \bigcup_{i=1}^{N} E_{i}, $$ where each $E_{i}$ is an end with respect to $B^\rho_{R}(p)$. Note that every end $E_{i}$ is parabolic. In fact, if at least one end $E_{i}$ were non-parabolic, then $(M,g)$ would be non-parabolic (see \cite{li} for a nice overview), contradicting the assumption that $(M,g)$ is parabolic. Since every $E_{i}$ is parabolic, every $E_{i}$ has finite weighted volume (see \cite{liwa2}), i.e. $$ \int_{E_i} \rho\,dy < \infty \,. $$ Now choose $R$ large enough so that we can apply Lemma \ref{lemmastoc}; by the Cauchy--Schwarz inequality we obtain \begin{align*} &\int_{M\setminus B^\rho_{R}(p)}|G(p,y)||f(y)|\,dy \\ &\qquad\leq \left(\int_{M\setminus B^\rho_{R}(p)}\rho (y)|G(p,y)|^{2}\,dy\right)^{\frac{1}{2}}\left(\int_{M\setminus B^\rho_{R}(p)}\rho(y)\left(\frac{|f(y)|}{\rho(y)}\right)^{2}\,dy\right)^{\frac{1}{2}}\\ &\qquad\leq C\, \sup_{M\setminus B_{R_{0}}(p)} \left|\frac{f}{\rho}\right| \left(\int_{M\setminus B^\rho_{R}(p)}\rho\,dy\right)^{\frac{1}{2}} <\infty\,. \end{align*} This concludes the proof of Theorem \ref{teo1}. \end{proof} \begin{proof}[Proof of Theorem \ref{teo2}] We start as in the proof of Theorem \ref{teo1} using \eqref{eq501}, \eqref{eq128}, \eqref{eq129} and \eqref{eq130}.
Then, similarly to \eqref{eq201}, using Proposition \ref{claim3}, we obtain \begin{align*} &\int_{\mathcal{L}_{p}\left(0, (2A)^{-1}\exp(-B\omega(m_{0}))\right)}G(p,y)\,|f(y)|\,dy \\\nonumber &= \sum_{m= m_{0}}^{\infty} \int_{\mathcal{L}_{p}\left((2A)^{-1}\exp(-B\omega(m+1)), (2A)^{-1}\exp(-B\omega(m))\right)}G(p,y)\,|f(y)|\,dy \\ &\leq C \sum_{m= m_{0}}^{\infty}\left(\omega(m+1)-\omega(m)\right)\sup_{M \setminus B_m(p)}\left|\frac{f}{\rho}\right| <\infty \,. \end{align*} Hence $$ \left| \int_{M}G(x,y)f(y)\,dy \right| <\infty $$ and the proof of Theorem \ref{teo2} is complete. \end{proof} \ \section{Cartan-Hadamard and model manifolds} \label{sec-ex} We consider Cartan-Hadamard manifolds, i.e.~complete, non-compact, simply connected Riemannian manifolds with non-positive sectional curvatures everywhere. Observe that on Cartan-Hadamard manifolds the cut locus of any point $p$ is empty. Hence, for any $x\in M\setminus \{p\}$ one can define its polar coordinates with pole at $p$, namely $r(x) = \operatorname{dist}(x, p)$ and $\theta\in \mathbb S^{n-1}$. We have \begin{equation*} \textrm{meas}\big(\partial B_{r}(p)\big)\,=\, \int_{\mathbb S^{n-1}}A(r, \theta) \, d\theta^1d \theta^2 \ldots d\theta^{n-1}\,, \end{equation*} for a specific positive function $A$ which is related to the metric tensor \cite[Sect. 3]{gri1}. Moreover, it is easy to see that the Laplace-Beltrami operator in polar coordinates has the form \begin{equation*} \Delta \,=\, \frac{\partial^2}{\partial r^2} + m(r, \theta) \, \frac{\partial}{\partial r} + \Delta_{\theta} \, , \end{equation*} where $m(r, \theta):=\frac{\partial }{\partial r}(\log A)$ and $ \Delta_{\theta} $ is the Laplace-Beltrami operator on $\partial B_{r}(p)$. We have $$ m(r,\theta) =\Delta r(x).
$$ Let $$\mathcal A:=\left\{f\in C^\infty((0,\infty))\cap C^1([0,\infty)): \, f'(0)=1, \, f(0)=0, \, f>0 \ \textrm{in}\;\, (0,\infty)\right\} .$$ We say that $(M,g)$ is a rotationally symmetric manifold or a model manifold if the Riemannian metric is given by \begin{equation*}\label{e2} g \,=\, dr^2+\varphi(r)^2 \, d\theta^2, \end{equation*} where $d\theta^2$ is the standard metric on $\mathbb S^{n-1}$ and $\varphi\in \mathcal A$. In this case, \begin{equation*} \Delta \,=\, \frac{\partial^2}{\partial r^2} + (n-1) \, \frac{\varphi'}{\varphi} \, \frac{\partial}{\partial r} + \frac1{\varphi^2} \, \Delta_{\mathbb S^{n-1}} \, . \end{equation*} Note that $\varphi(r)=r$ corresponds to $M=\mathbb R^n$, while $\varphi(r)=\sinh r$ corresponds to $ M=\mathbb H^n $, namely the $n$-dimensional hyperbolic space. The Ricci curvature in the radial direction is given by $$ \mathrm{Ric}( \nabla r, \nabla r) (x) = -(n-1)\frac{\varphi''(r(x))}{\varphi(r(x))}. $$ \subsection{Cartan-Hadamard manifolds} Concerning the validity of the property $\left(\mathcal{P}_\rho^\infty\right)$ on a Cartan-Hadamard manifold we have the following result. \begin{lemma}\label{lemma-peso} Let $(M,g)$ be a Cartan-Hadamard manifold with $$ \mathrm{Ric}( \nabla r, \nabla r) (x)\leq -C\big(1+r(x)\big)^{\gamma} $$ for some $\gamma\in \mathbb{R}$, $C>0$ and any $x\in M\setminus\{p\}$. Then $(M,g)$ satisfies the property $\left(\mathcal{P}_{\rho_{R}}^\infty\right)$ with $$ \rho_R(x) = \begin{cases} C'\, r(x)^{\gamma} &\quad\hbox{if } \gamma\geq -2 \\ C'\, r(x)^{-2} &\quad\hbox{if } \gamma < -2 \end{cases} $$ for all $R>0$ large enough and some $C'>0$. \end{lemma} \begin{rem} As will be clear from the proof, we have a weighted Poincar\'e inequality on $M$ if $\gamma \leq 0$ and the weighted Poincar\'e inequality for functions with compact support in $M\setminus B_1(p)$ if $\gamma>0$.
\end{rem} \begin{proof} We can find $\varphi\in \mathcal{A}$ given by \begin{equation}\label{15} \varphi(r)= \begin{cases} \exp\big(B\,r^{1+\frac{\gamma}{2}}\big) &\quad\hbox{if } \gamma>-2 \\ r^\delta &\quad\hbox{if } \gamma=-2 \\ r &\quad\hbox{if } \gamma<-2 \end{cases} \end{equation} for $r$ large enough, with $B>0$ small and $\delta=\delta(C)>1$, such that $\mathrm{Ric}( \nabla r, \nabla r) (x) \leq -\frac{\varphi''(r(x))}{\varphi(r(x))}$. By the Laplacian comparison in strong form, which is valid on Cartan-Hadamard manifolds (see \cite[Theorem 2.15]{xin}), one has $$ \Delta r(x) \geq \begin{cases} C\, r(x)^{\gamma/2} &\quad\hbox{if } \gamma\geq-2 \\ C r(x)^{-1} &\quad\hbox{if } \gamma<-2 \,. \end{cases} $$ Suppose $\gamma\leq 0$ and let $\alpha:=\max\{\gamma,-2\}\leq 0$. For any $u\in C^\infty_c (M)$, since $|\nabla r|^2=1$, we have \begin{align*} &C \int_M r(y)^\alpha \,u(y)^2\,dy\\ &\qquad\leq \int_M u(y)^2 r(y)^{\alpha/2} \Delta r (y)\,dy \\ &\qquad= -2 \int_M \langle \nabla u, \nabla r\rangle u(y) r(y)^{\alpha/2}\,dy + \frac{\alpha}{2} \int_M u(y)^2 r(y)^{\alpha/2-1} |\nabla r(y)|^2\,dy \\ &\qquad\leq 2 \int_M |u(y)| |\nabla u(y)| r(y)^{\alpha/2}\,dy\\ &\qquad\leq \frac{C}{2} \int_M r(y)^\alpha \,u(y)^2\,dy + \frac{2}{C} \int_M |\nabla u(y)|^2\,dy \,. \end{align*} Thus $$ \int_M r(y)^\alpha \,u(y)^2\,dy \leq \frac{4}{C^2} \int_M |\nabla u(y)|^2\,dy $$ and the weighted Poincar\'e inequality on $M$ follows in this case. Suppose now $\gamma >0$. By a Barta-type argument (see e.g. \cite[Theorem 11.17]{gri2}), \[\lambda_1(M\setminus B_R(p)) \geq C^{2} R^{\gamma}\,. \] Thus, the Poincar\'e inequality reads \begin{align}\label{poineq} C R^\gamma \int_M u(y)^2\,dy \leq \int_M |\nabla u(y)|^2\,dy \end{align} for any $u$ with compact support in $M\setminus B_R(p)$.
Now let $R>1$ and, for every $k\in\mathbb{N}$, define the cutoff functions $$ \varphi_k(x):=\begin{cases} r(x)-k+1, &r(x)\in[k-1,k)\\ k+1-r(x), &r(x)\in[k,k+1)\\ 0 &\text{otherwise}.\end{cases} $$ Note that $|\nabla \varphi_k|\leq 1$ and, for all $x\in M\setminus B_1(p)$, $\sum_k \varphi_k =1$ and $x\in \operatorname{supp}\varphi_k$ for at most two integers $k$. If $\operatorname{supp} u \subset M \setminus B_1(p)$, we have \begin{align*} \int_M r(y)^\gamma \,u(y)^2\,dy &= \int_M r(y)^\gamma \,\left(\sum_k \varphi_k (y) u(y)\right)^2\,dy \\ &\leq 2\sum_k \int_M r(y)^\gamma \,\varphi_k (y)^2 u(y)^2\,dy \\ &\leq C\sum_k (k-1)^\gamma \int_M \varphi_k (y)^2 u(y)^2\,dy \\ &\leq C\sum_k \int_M |\nabla\left(\varphi_k (y) u(y)\right)|^2\,dy, \end{align*} where in the last step we used \eqref{poineq} with $R=k-1$. Thus \begin{align*} \int_M r(y)^\gamma \,u(y)^2\,dy &\leq C\sum_k \left(\int_M u(y)^2|\nabla \varphi_k (y)|^2\,dy+\int_M \varphi_k(y)^2|\nabla u(y)|^2\,dy\right)\\ &\leq C\int_M u(y)^2\,dy+C\int_M |\nabla u(y)|^2\,dy\\ &\leq C\int_M |\nabla u(y)|^2\,dy, \end{align*} where in the last step we used \eqref{poineq} with $R=1$. Hence the weighted Poincar\'e inequality holds for functions with support in $M\setminus B_1(p)$. \medskip Finally, the completeness of the metric $g_{\rho_R}:= {\rho_R}\, g$ follows. In fact, for any curve $\eta(s)$ parametrized by arclength with $0\leq s \leq T$, the length of $\eta$ with respect to $g_{\rho_R}$ satisfies $$ \int_\eta \sqrt{{\rho_R}}\,ds \to \infty \quad\hbox{as } T\to \infty \,. $$ \end{proof} \ \noindent Let us collect some estimates which will be useful both in the proof of Corollary \ref{cor-2} and in the last Subsection \ref{ssu}. 
Choose $\varphi\in\mathcal{A}$ as in \eqref{15} with $\gamma=\gamma_1$ obtaining $$ \frac{\varphi'(r(x))}{\varphi(r(x))}=\begin{cases} C\,r(x)^{\gamma_1/2} &\quad\hbox{if } \gamma_1\geq -2 \\ C\,r(x)^{-1} &\quad\hbox{if } \gamma_1< -2 \end{cases} $$ and $$ \frac{\varphi''(r(x))}{\varphi(r(x))} = \begin{cases} C\,r(x)^{\gamma_1}+C' r(x)^{\gamma_1/2-1} &\quad\hbox{if } \gamma_1\geq -2 \\ 0 &\quad\hbox{if } \gamma_1<-2\, \end{cases} $$ for $r(x)>R>1$. A simple computation shows that, for $R=r(x)/4$, one has $$ K_R(x) = \begin{cases} C\, r(x)^{\gamma_1/2} &\quad\hbox{if } \gamma_1\geq -2 \\ 0 &\quad\hbox{if } \gamma_1<-2\,, \end{cases} $$ $$ \frac{I_R(x)}{R} = \begin{cases} C\, r(x)^{\gamma_1/2-1}\coth\left(C'r(x)^{\gamma_1/2+1}\right) &\quad\hbox{if } \gamma_1\geq -2 \\ \frac{2}{r(x)^2} &\quad\hbox{if } \gamma_1<-2\, \end{cases} $$ and $$ Q_R(x) = \begin{cases} C\, r(x)^{\gamma_1} &\quad\hbox{if } \gamma_1\geq -2 \\ \frac{2}{r(x)^2} &\quad\hbox{if } \gamma_1<-2\,. \end{cases} $$ Thus $$ \omega(r) = \begin{cases} C\, r^{\gamma_1/2+1} &\quad\hbox{if } \gamma_1\geq -2 \\ C \log r &\quad\hbox{if } \gamma_1<-2\,, \end{cases} $$ and, as $m\to\infty$, \begin{equation}\label{asdf} \omega(m+1)-\omega(m) \sim\begin{cases} C\, m^{\gamma_1/2} &\quad\hbox{if } \gamma_1\geq -2 \\ C m^{-1} &\quad\hbox{if } \gamma_1<-2\,. \end{cases} \end{equation} On the other hand, using Lemma \ref{lemma-peso} with $\gamma=\gamma_2$, we get the estimate $$ \sup_{M\setminus B_m(p)} \frac{1}{\rho_m} \leq \begin{cases} C\,m^{-\gamma_2} &\quad\hbox{if }\gamma_2\geq -2 \\ C\, m^{2} &\quad\hbox{if } \gamma_2 < -2 \end{cases}\,. 
$$ \begin{proof}[Proof of Corollary \ref{cor-2}] For $\gamma_1\geq \gamma_2$ and $\gamma_1\geq 0$, we get $$ \sum_{m}^{\infty}\Big(\omega(m+1)-\omega(m)+1\Big)\sup_{M\setminus B_m(p)}\left|\frac{f}{\rho_m}\right| \leq \begin{cases} C \sum_{m}^{\infty} \,m^{\gamma_1/2-\gamma_2-\alpha} &\quad\hbox{if }\gamma_2\geq -2 \\ C \sum_{m}^{\infty}\, m^{2+\gamma_1/2-\alpha} &\quad\hbox{if } \gamma_2< -2. \end{cases} $$ and the conclusion immediately follows. \end{proof} \subsection{Optimality on rotationally symmetric manifolds}\label{ssu} We show that the assumptions in Theorem \ref{teo2} are sharp on model manifolds. Let $(M,g)$ be a rotationally symmetric manifold with $\varphi\in\mathcal{A}$ defined as in \eqref{15} for any $r>1$. One has $$ \int_{M}G(x,y)f(y)\,dy<\infty \quad\quad\hbox{for any }\, x \in M \quad \Longleftrightarrow \quad \int_{M}G(p,y)f(y) \,dy<\infty . $$ Hence a solution of $\Delta u = f$ in $M$ exists if and only if $$ u(p)=\int_{0}^{\infty}\left(\int_{r}^{\infty}\frac{1}{\varphi(t)^{n-1}}dt\right)f(r)\,\varphi(r)^{n-1}\,dr <\infty. $$ \noindent {\em Case 1:} $\gamma>-2$. With our choice of $\varphi$, by the change of variable $s=t^{1+\frac{\gamma}{2}}$, it is easily seen that, for any $r>0$ sufficiently large, \begin{equation}\label{asd} \int_{r}^{\infty}\frac{1}{\varphi(t)^{n-1}}dt \sim C r^{-\frac{\gamma}{2}}\exp\left(-(n-1)r^{1+\frac{\gamma}{2}}\right). \end{equation} Hence \begin{align*} \frac 1{C} \int_{1}^{\infty} & r^{-\frac{\gamma}{2}}\exp\left(-(n-1)r^{1+\frac{\gamma}{2}}\right) \frac{1}{\big(1+r\big)^{\alpha}}\exp\left((n-1)r^{1+\frac{\gamma}{2}}\right)\,dr \leq |u(p)|\\&\leq C \int_{1}^{\infty} r^{-\frac{\gamma}{2}}\exp\left(-(n-1)r^{1+\frac{\gamma}{2}}\right) \frac{1}{\big(1+r\big)^{\alpha}}\exp\left((n-1)r^{1+\frac{\gamma}{2}}\right)\,dr \end{align*} Therefore, \begin{align*} \frac 1 C\int_{1}^{\infty}\frac{1}{r^{\alpha+\frac{\gamma}{2}}}\,dr &\leq |u(p)|\leq C \int_{1}^{\infty}\frac{1}{r^{\alpha+\frac{\gamma}{2}}}\,dr\,. 
\end{align*} This yields that $$|u(p)|<\infty \quad \textrm{ if and only if} \quad \alpha>1-\frac{\gamma}{2}. $$ \ \noindent On the other hand, a direct computation, using \eqref{asd}, shows that $$ \rho(x)=\frac{|\nabla G(p,x)|^2}{4G^2(p,x)} \sim C r(x)^{\gamma}\,. $$ Furthermore, from \eqref{asdf}, the assumption of Theorem \ref{teo2} is satisfied if and only if $$ \alpha>1-\frac{\gamma}{2}, $$ and the optimality follows in this case. \ \noindent {\em Case 2:} $\gamma=-2$. We have, \begin{equation}\label{qwe} \int_{r}^{\infty}\frac{1}{\varphi(t)^{n-1}}dt = C\, r^{-\delta(n-1)+1}\,. \end{equation} Thus \begin{align*} \frac 1{C} \int_{1}^{\infty} r^{-\delta(n-1)+1}\frac{1}{\big(1+r\big)^{\alpha}}\,r^{\delta(n-1)}\,dr \leq |u(p)|\leq C \int_{1}^{\infty} r^{-\delta(n-1)+1}\frac{1}{\big(1+r\big)^{\alpha}}\,r^{\delta(n-1)}\,dr \end{align*} Therefore, \begin{align*} \frac 1 C\int_{1}^{\infty}\frac{1}{r^{\alpha-1}}\,dr &\leq |u(p)|\leq C \int_{1}^{\infty}\frac{1}{r^{\alpha-1}}\,dr\,, \end{align*} and $$|u(p)|<\infty \quad \textrm{ if and only if} \quad \alpha>2. $$ \ \noindent On the other hand, a direct computation, using \eqref{qwe}, shows that $$ \rho(x)=\frac{|\nabla G(p,x)|^2}{4G^2(p,x)} \sim C r(x)^{-2}\,. $$ Furthermore, from \eqref{asdf}, the assumption of Theorem \ref{teo2} is satisfied if and only if $$ \alpha>2, $$ and the optimality follows in this case. \ \noindent {\em Case 3:} $\gamma<-2$. We have, \begin{equation}\label{zxc} \int_{r}^{\infty}\frac{1}{\varphi(t)^{n-1}}dt = C\, r^{2-n}\,. \end{equation} Thus \begin{align*} \frac 1{C} \int_{1}^{\infty} r^{2-n}\frac{1}{\big(1+r\big)^{\alpha}}\,r^{n-1}\,dr \leq |u(p)|\leq C \int_{1}^{\infty} r^{2-n}\frac{1}{\big(1+r\big)^{\alpha}}\,r^{n-1}\,dr \end{align*} Therefore, \begin{align*} \frac 1 C\int_{1}^{\infty}\frac{1}{r^{\alpha-1}}\,dr &\leq |u(p)|\leq C \int_{1}^{\infty}\frac{1}{r^{\alpha-1}}\,dr\,, \end{align*} and $$|u(p)|<\infty \quad \textrm{ if and only if} \quad \alpha>2. 
$$ \ \noindent On the other hand, a direct computation, using \eqref{zxc}, shows that $$ \rho(x)=\frac{|\nabla G(p,x)|^2}{4G^2(p,x)} \sim C r(x)^{-2}\,. $$ Furthermore, from \eqref{asdf}, the assumption of Theorem \ref{teo2} is satisfied if and only if $$ \alpha>2, $$ and the optimality follows in this last case. \ \ \begin{ackn} The authors are members of the Gruppo Nazionale per l'Analisi Matematica, la Probabilit\`{a} e le loro Applicazioni (GNAMPA) of the Istituto Nazionale di Alta Matematica (INdAM). The first two authors are supported by the PRIN project ``Variational methods, with applications to problems in mathematical physics and geometry''. \end{ackn} \ \
\section*{Acknowledgements} \label{sec:ack} We would like to thank Tarun Kathuria for helpful discussions. This material is based upon work supported by the US Department of Energy, Office of Science under Award Numbers 7081675 and 1772593; Cray, under Award Number 47277; and DARPA, under Award Number FA8750-17-2-0091. \section{Large Indices} \label{sec:allarge} In this section, we review the techniques used to construct optimal tilings and find communication lower bounds for nested-loop programs with large indices. Our approach in this section will be the building block for what we do in the sections that follow. We limit our attention to cases where the index functions, $\phi_{i}$, are projections: that is, their output is a subset of the input. For convenience, let the indices of the output of $\phi_{i}$ be denoted $\supp(\phi_{i})$. For instance, if $\phi(x_{1},...,x_{5})=(x_{1},x_{4})$, then $\supp(\phi)=\{1,4\}$. In order to find the communication lower bound, it suffices to solve the LP (\ref{eq:hbl}). This linear program has one constraint for each subgroup $H\le\mathbb{Z}^{d}$; since the number of such subgroups is infinite, determining a finite closed form for the inequalities using a brute-force enumeration of all possible $H$ is impossible. Note, however, that since $\rank(\phi_{i}(H))\le\rank(H)\le d$, the number of distinct constraints is at most $(d+1)^{n+1}$. In the general, continuous case - that is, for arbitrary affine $\phi$, with $H$ ranging over subspaces of $\mathbb{R}^{d}$ rather than subgroups of $\mathbb{Z}^{d}$ - an algorithm guaranteed to terminate in finite (but unbounded) time was given by Valdimarsson \cite{Val10}. A separation oracle for the resulting polytope was given in \cite{GGOW16_bl}, which immediately implies an algorithm for enumerating the relevant constraints in double-exponential time. 
In the case where $\phi_{i}$ are projections, however, a simple, closed-form listing of the constraints is given by Theorem 6.6 of \cite{CDK+13}, which states that it suffices to check that the inequality $\sum_{i=1}^{n}s_{i}\rank(\phi_{i}(H))\ge\rank(H)$ holds for all $H$ in the set of subgroups $\{e_{1},...,e_{d}\}$, where $e_{i}$ is the subgroup consisting of all vectors with zero entries at all indices except for $i$. Therefore, this LP reduces to: \begin{eqnarray} \min\sum s_{j}\ \st\label{eq:lp_largeindex_projective}\\ 1 & \le & \sum_{j\text{ s.t. }\supp(\phi_{j})\ni i}s_{j}\qquad\forall i\in[1..d]\nonumber \end{eqnarray} Thinking of the $\phi_{i}$ as 0-1 vectors with $1$s in the indices contained in their supports, and letting $\boldsymbol{\vec{s}}$ denote the vector $[s_{1},...,s_{n}]^{T}$, we can rewrite the linear program (omitting nonnegativity constraints) as follows: minimize $\ov^{T}\sv$ subject to: \begin{equation} \begin{bmatrix}\vert & & \vert\\ \phi_{1} & \cdots & \phi_{n}\\ \vert & & \vert \end{bmatrix}\sv\ge\ov\ .\label{eq:hbl_largeindex_matrix_constraints} \end{equation} The solution to this linear program, which we denote $k_{HBL}$, immediately gives us the communication lower bound $\prod_{i}L_{i}/M^{k_{HBL}-1}$. Now that we have a lower bound, we would like to find an actual tiling that attains it in order to show that it is tight. Let us make the ansatz (following Loomis-Whitney, etc.) that the optimal tile is a hyperrectangle of dimensions $b_{1}\times...\times b_{d}$, where the $b_{i}$ are constants which we wish to determine. We wish to select a tile whose volume (that is, $\prod_{i\in\{1..d\}}b_{i}$) is as large as possible, but we are subject to memory limitations: the subsets of each array that are used must fit in cache. 
Since the subsets of array $A_{i}$ required to complete the operations in this hyperrectangle are of size $\prod_{j\in\supp(\phi_{i})}b_{j}$, we obtain the constraint (again, ignoring constant factors) $\prod_{j\in\supp(\phi_{i})}b_{j}\le M$. Taking logs base $M$ and letting $\lambda_{i}$ denote $\log_{M}b_{i}$, we obtain the following linear program: maximize $\ov^{T}[\lambda_{1},...,\lambda_{d}]^{T}$ subject to: \begin{equation} \begin{bmatrix}- & \phi_{1} & -\\ & \vdots\\ - & \phi_{n} & - \end{bmatrix}\begin{bmatrix}\lambda_{1}\\ \vdots\\ \lambda_{d} \end{bmatrix}\le\ov\ .\label{eq:largealglp} \end{equation} Taking the dual gives us (\ref{eq:hbl_largeindex_matrix_constraints}), which implies that this tiling obtains the lower bound. Notice that we did not encode the constraint that $b_{i}\le L_{i}$ in this linear program. Although this omission does not change the result when the $L_{i}$ are assumed to be very large, this assumption does not always hold, and the lower bound computed by (\ref{eq:hbl_largeindex_matrix_constraints}) is not always tight. In the following section, we modify this approach to give tight lower bounds for arbitrarily-sized inputs. \section{Discussion and Future Work} \label{sec:Discussion-and-Future} In this paper, we have shown a systematic, efficiently computable way of determining optimal tilings for projective loop nests of arbitrary size, and used it to rederive several tight lower bounds that have hitherto largely been computed by problem-specific approaches. Our approach reveals some structural properties of the tile as well: all such loop nests share an optimal tile shape (rectangles). Furthermore, as the optimal tile's dimensions for \emph{any} projective loop nest are the solution to a linearly parameterized linear program, its cardinality for a given loop nest must be of the form $M^{f(L_{1},...,L_{d})}$ for some piecewise linear function $f$. 
In fact, for a given loop nest, we may programmatically find a closed form of $f$ by feeding LP (\ref{eq:algfull}), which calculates the dimensions of the tile, into a multiparametric linear program solver, e.g. that of \cite{BBM03}, as in \cite{DD18}. This piecewise-linear structure has previously been shown to hold for convolutions \cite{DD18}, and \emph{we conjecture that this property holds in the general, non-projective case as well.} The immediate application we see for our approach is as a compiler optimization to automatically block projective nested loops. While many such common loops have already been extensively optimized in high-performance libraries (and some of these optimizations have been implemented in compilers, e.g. \texttt{icc}'s --opt-matmul flag), our techniques are fully general - applying to applications (e.g. pairwise interactions) that do not fit this mold - and do not require programmers to have any familiarity with specific high performance libraries, only access to a compiler with the right optimizations. Furthermore, as the memory model we use can be generalized to multiprocessor machines (as in \cite{Kni15}, following the approach of \cite{ITT04}), our work also provides evidence for the intuition that the best way to split projective loop-nest tasks up on a multiprocessor system is to assign each processor a rectangular subset of the iteration space. Our work is intended as a first step towards generally optimizing \emph{non-projective} nested loops, such as those found in neural nets, image processing, and other similar structured computations, many of which lack well-studied high-performance implementations \cite{BI19}. Algorithms to find such tilings - and the shapes thereof - are known\footnote{Such algorithms, which enumerate all the constraints of the HBL linear program, are in general hard (double exponential in $n$ and $d$, as of the time of publication of this paper). 
However, as the cost only needs to be incurred once (e.g. during a computation of a highly performance sensitive kernel), and as $n$ and $d$ tend to be relatively small in practice, this is less of an impediment than it might appear at first glance.} for problems with large indices \cite{DR16,CDK+13}; however, a general method for addressing the small-bound case, which occurs in many applications (including most machine learning ones, where, for instance, filter sizes tend to vary), is still unknown, and is left to future work. \section{Examples} \label{sec:Examples-and-Applications} We demonstrate several applications of our theory below. \subsection{Matrix-Matrix and Matrix-Vector Multiplication} We start by re-deriving the classical lower bound \cite{HK81} for the triply-nested-loop matrix multiplication \begin{align*} & {\rm for}\,\{x_{1},x_{2},x_{3}\}\in[L_{1}]\times[L_{2}]\times[L_{3}]\\ & \ \ \ \ \;\;A_{1}(x_{1},x_{3})+=A_{2}(x_{1},x_{2})\times A_{3}(x_{2},x_{3}) \end{align*} Our memory accesses are given by the functions: \begin{eqnarray*} \phi_{1}(x_{1},x_{2},x_{3}) & = & (x_{1},x_{3})\\ \phi_{2}(x_{1},x_{2},x_{3}) & = & (x_{1},x_{2})\\ \phi_{3}(x_{1},x_{2},x_{3}) & = & (x_{2},x_{3}) \end{eqnarray*} Therefore, the HBL LP is to minimize $s_{1}+s_{2}+s_{3}$ subject to \begin{equation} \begin{bmatrix}1 & 1 & 0\\ 0 & 1 & 1\\ 1 & 0 & 1 \end{bmatrix}\begin{bmatrix}s_{1}\\ s_{2}\\ s_{3} \end{bmatrix}\ge\begin{bmatrix}1\\ 1\\ 1 \end{bmatrix}\ .\label{eq:gemm-hbl-primal} \end{equation} The optimal value of this LP is obtained when all the $s_{i}$ are $1/2$, giving a tile size upper bound of $M^{1/2+1/2+1/2}=M^{3/2}$, which provides the standard $L_{1}L_{2}L_{3}/M^{1/2}$ lower bound. Now let us consider the case where $L_{3}$ may be small, which corresponds to problem sizes approaching matrix-vector multiplication (which occurs when $L_{3}=1$). 
In this case, our tile, which has length $M^{1/2}$ in the $L_{3}$ dimension, cannot fit in our iteration space. We first find a lower bound. Removing the row corresponding to $x_{3}$ from (\ref{eq:gemm-hbl-primal}), we get that given any $\hat{s}_{i}$ satisfying \begin{equation} \begin{bmatrix}1 & 1 & 0\\ 0 & 1 & 1 \end{bmatrix}\begin{bmatrix}\hat{s}_{1}\\ \hat{s}_{2}\\ \hat{s}_{3} \end{bmatrix}\ge\begin{bmatrix}1\\ 1 \end{bmatrix}\label{eq:gemm-hbl-primal-clipped} \end{equation} raising $M$ to the power \[ \max\left\{ \hat{s}_{1}+\hat{s}_{2}+\hat{s}_{3},\hat{s}_{1}+\hat{s}_{2}+\hat{s}_{3}+(\log_{M}L_{3})(1-\hat{s}_{1}-\hat{s}_{3})\right\} \] represents a valid upper bound on the tile size. Since (\ref{eq:gemm-hbl-primal-clipped}) is satisfied when $\hat{s}_{2}=1$ and $\hat{s}_{1},\hat{s}_{3}=0$, this term becomes \[ \max\left\{ 1,1+\log_{M}L_{3}\right\} \] giving an upper bound of $\max\left\{ M,ML_{3}\right\} =ML_{3}$ (as $L_{3}\ge1$); therefore the communication lower bound is given by \[ \frac{L_{1}L_{2}L_{3}}{ML_{3}}M=L_{1}L_{2}\ . \] This is as expected, since we need to read at least $L_{1}L_{2}$ words, the size of $A_{2}$, into fast memory to perform the operation. Now let us consider the question of finding the tile. Instantiating LP (\ref{eq:algfull}) with the relevant values of $\phi_{1,2,3}$, we get: \begin{equation} \begin{aligned}\max & \lambda_{1}+\lambda_{2}+\lambda_{3}\ \st\\ & \lambda_{1}+\lambda_{3}\le1\\ & \lambda_{1}+\lambda_{2}\le1\\ & \lambda_{2}+\lambda_{3}\le1\\ & \lambda_{3}\le\beta_{3}=\log_{M}L_{3} \end{aligned} \label{eq:gemmarray} \end{equation} There are two cases here: if $\beta_{3}\ge1$, then the last constraint is of no relevance, so the solution becomes $3/2$, as in the case above. 
On the other hand, if $\beta_{3}\le1$, then adding the second and fourth inequalities gives \begin{equation} \lambda_{1}+\lambda_{2}+\lambda_{3}\le1+\lambda_{3}\le1+\beta_{3}\ .\label{eq:sumineqmatmul} \end{equation} We again split based on whether or not $\beta_{3}\ge1/2$; intuitively, we may consider this a question of whether $L_{3}$ is sufficiently large (at least $\sqrt{M}$) to fit the $\sqrt{M}\times\sqrt{M}\times\sqrt{M}$ tile derived above, or whether we must modify the tile's shape to get it to fit in the $L_{3}$ dimension. If $\beta_{3}\ge1/2$, then the optimum for the LP without the fourth constraint, $\lambda_{1}=\lambda_{2}=\lambda_{3}=1/2$, satisfies the fourth constraint and is therefore optimal, leading to the same $\sqrt{M}\times\sqrt{M}\times\sqrt{M}$ tile as in the ``large loop bound'' cases discussed above. If $\beta_{3}\le1/2$, then we can set $\lambda_{3}=\beta_{3}$ to make the fourth inequality tight, and then set $\lambda_{1}=1-\beta_{3}$ and $\lambda_{2}=\beta_{3}$ to tighten (\ref{eq:sumineqmatmul}) in addition to the first inequality in the LP; as three irredundant inequalities are tight and we only have three variables, this solution must be optimal as well. This obtains a tile size of $M/L_{3}\times L_{3}\times L_{3}=ML_{3}$ (with a communication cost of $L_{1}L_{2}$, a quantity that is equal to the size of $A_{2}$ and therefore must be optimal) as expected. Alternatively, we could achieve the same tile size with a tile of size $\sqrt{M}\times\sqrt{M}\times L_{3}$ (corresponding to $\lambda_{1}=\lambda_{2}=1/2$, $\lambda_{3}=\beta_{3}$). 
In fact, the LP is optimized by any point between the two solutions we found previously; specifically, for any $0\le\alpha\le1$, \begin{eqnarray*} \lambda_{1} & = & \alpha/2+(1-\alpha)(1-\beta_{3})\\ \lambda_{2} & = & \alpha/2+(1-\alpha)\beta_{3}\\ \lambda_{3} & = & \beta_{3} \end{eqnarray*} optimizes LP (\ref{eq:gemmarray}); this corresponds to a tile size of: \[ \frac{M^{1-\alpha/2}}{L_{3}^{1-\alpha}}\times M^{\alpha/2}L_{3}^{1-\alpha}\times L_{3}\ . \] When attempting to optimize this matrix multiplication on a real-world system, we may select any tiling from the above $\alpha$-parameterized family of optimal tilings in order to find one that runs well in practice (e.g. inner loops being multiples of cache line lengths or vector units). As the communication cost's derivation is symmetrical (i.e. it continues to be valid when we swap the subscripts) and the tile for the small-$L_{3}$ case above remains a legal tiling if $L_{3}$ is the smallest loop index, we obtain the following \emph{tight} lower bound for matrix multiplication's communication cost: \[ \max(L_{1}L_{2}L_{3}/\sqrt{M},L_{1}L_{2},L_{2}L_{3},L_{1}L_{3}) \] \subsection{Tensor Contraction} Let $1\le j<k-1<d$. Let us consider a tensor contraction of the form \begin{align*} & {\rm for}\,\{x_{1},...,x_{d}\}\in[L_{1}]\times...\times[L_{d}]\\ & \ \ \ \ \;\;A_{1}(x_{1},...,x_{j},x_{k},...,x_{d})+=A_{2}(x_{1},...,x_{k-1})\times A_{3}(x_{j+1},...,x_{d}) \end{align*} This nested-loop model encapsulates several machine learning applications. 
For instance, \emph{pointwise convolutions} - convolutions with $1\times1$ filters, often used alongside depthwise-separable convolutions \cite{HZCKWWAA17} to mimic the effect of standard machine learning convolutions with less memory usage - may be represented as tensor contractions: \begin{align} & {\rm for}\,\{b,c,k,w,h\}=0:\{B,C,K,W,H\}-1\nonumber \\ & \ \ \ \ \;\;Out(k,h,w,b)+=Image(w,h,c,b)\times Filter(k,c)\label{eqn_CNN} \end{align} The same holds for fully connected layers. The communication lower bound for the large-loop bound case, as derived in \cite{CDK+13}, is $L_{1}...L_{d}/\sqrt{M}$. We instantiate LP (\ref{eq:algfull}) to get: \[ \max\lambda_{1}+...+\lambda_{d} \] subject to \begin{eqnarray*} \lambda_{1}+...+\lambda_{j}+\lambda_{k}+...+\lambda_{d} & \le & 1\\ \lambda_{1}+...+\lambda_{k-1} & \le & 1\\ \lambda_{j+1}+...+\lambda_{d} & \le & 1\\ \lambda_{1} & \le & \beta_{1}=\log_{M}L_{1}\\ & \vdots\\ \lambda_{d} & \le & \beta_{d}=\log_{M}L_{d} \end{eqnarray*} The structure of this linear program is much like that of matrix multiplication; indeed, it can be transformed into an identical one. Let $\gamma_{1}=\sum_{i\in[j]}\lambda_{i}$, $\gamma_{2}=\sum_{i\in[j+1,k-1]}\lambda_{i}$, and $\gamma_{3}=\sum_{i\in[k,d]}\lambda_{i}$. Then we can rewrite the linear program as maximizing $\gamma_{1}+\gamma_{2}+\gamma_{3}$ subject to: \begin{eqnarray*} \gamma_{1}+\gamma_{3} & \le & 1\\ \gamma_{1}+\gamma_{2} & \le & 1\\ \gamma_{2}+\gamma_{3} & \le & 1\\ \gamma_{1} & \le & \sum_{i\in[j]}\beta_{i}\\ \gamma_{2} & \le & \sum_{i\in[j+1,k-1]}\beta_{i}\\ \gamma_{3} & \le & \sum_{i\in[k,d]}\beta_{i} \end{eqnarray*} As this linear program is identical to that for matrix multiplication, it immediately follows that its optimum is either $3/2$ or $1+\min\left\{ \sum_{i\in[j]}\beta_{i},\sum_{i\in[j+1,k-1]}\beta_{i},\sum_{i\in[k,d]}\beta_{i}\right\} $, whichever is smaller for the given program. 
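The two regimes can be checked numerically. The sketch below is ours, not the paper's: it solves the matmul-shaped LP (pairwise sums at most $1$, plus per-variable caps $\beta_{i}$) by brute-force vertex enumeration in pure Python, which is adequate only for LPs of this tiny size.

```python
from itertools import combinations

def solve_square(M, v):
    """Solve the square system M x = v by Gauss-Jordan elimination;
    return None if M is (numerically) singular."""
    n = len(v)
    A = [row[:] + [v[i]] for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        if abs(A[piv][col]) < 1e-12:
            return None
        A[col], A[piv] = A[piv], A[col]
        for r in range(n):
            if r != col:
                f = A[r][col] / A[col][col]
                A[r] = [a - f * b for a, b in zip(A[r], A[col])]
    return [A[i][n] / A[i][i] for i in range(n)]

def lp_max_sum(A, b):
    """Maximize sum(x) subject to A x <= b, x >= 0.

    A bounded LP attains its optimum at a vertex, i.e. where n of the
    constraints (including x_i >= 0) hold with equality, so we enumerate
    all candidate vertices and keep the best feasible one.
    """
    n = len(A[0])
    rows = [[float(a) for a in r] for r in A]
    rows += [[-1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    rhs = [float(x) for x in b] + [0.0] * n
    best = None
    for idx in combinations(range(len(rows)), n):
        x = solve_square([rows[i] for i in idx], [rhs[i] for i in idx])
        if x is None:
            continue
        if all(sum(r[j] * x[j] for j in range(n)) <= c + 1e-9
               for r, c in zip(rows, rhs)):
            val = sum(x)
            if best is None or val > best:
                best = val
    return best

def tile_exponent(betas):
    """Optimal value of the matmul-shaped tile LP with caps betas."""
    A = [[1, 1, 0], [1, 0, 1], [0, 1, 1],
         [1, 0, 0], [0, 1, 0], [0, 0, 1]]
    return lp_max_sum(A, [1, 1, 1] + list(betas))
```

For caps $(10,10,10)$ (all in units of $\log_{M}$) this returns $3/2$; for $(10,10,0.3)$ it returns $1.3=1+\min\{\cdot\}$, matching the two regimes above.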
\subsection{$n$-body Pairwise Interactions} Suppose we have a list of $n$ objects, and each object interacts with every other object. This comes up frequently in many scientific computing applications (e.g. particle simulations), as well as database joins. The nested loops for this problem are (for some arbitrary function $f$): \begin{align*} & {\rm for}\,\{x_{1},x_{2}\}\in[L_{1}]\times[L_{2}]\\ & \ \ \ \ \;\;A_{1}[x_{1}]=f(A_{2}[x_{1}],A_{3}[x_{2}]) \end{align*} Instantiating (\ref{eq:algfull}), we get: \[ \begin{aligned}\max & \lambda_{1}+\lambda_{2}\ \st\\ & \lambda_{1}\le1\\ & \lambda_{2}\le1\\ & \lambda_{1}\le\beta_{1}=\log_{M}L_{1}\\ & \lambda_{2}\le\beta_{2}=\log_{M}L_{2} \end{aligned} \] which gives us a maximum tile size of $\min\left\{ M^{2},L_{1}M,L_{2}M,L_{1}L_{2}\right\} $ and a communication lower bound of $\max\left\{ L_{1}L_{2}/M,L_{2},L_{1},M\right\} $. The last term, $M$, is a result of the assumption in our model that each tile carries $M$ words of memory into cache. Therefore, it is important to note that \emph{if the total amount of memory required to execute the program without going back to main memory is less than $M$, the lower bound produced will still be $M$, even though the actual cost is in fact the sum of the sizes of the arrays.} \section{Introduction} Many structured computations, including dense linear algebra, $n$-body problems, and many machine learning kernels, can be expressed as a collection of \emph{nested loops}, where each iteration accesses elements from several multidimensional arrays, indexed by some function of the current loop indices: \begin{equation} \begin{aligned} & \text{for }x_{1}\in\left[L_{1}\right],...,\text{for }x_{d}\in\left[L_{d}\right]:\\ & \qquad\text{perform operations on }A_{1}\left[\phi_{1}\left(x_{1},...,x_{d}\right)\right],...,A_{n}\left[\phi_{n}\left(x_{1},...,x_{d}\right)\right] \end{aligned} \ .\label{eq:nested-loop-1} \end{equation} where $[L_{1}]$ represents the set 
$\{1,...,L_{1}\}$. For many such problems, the time and energy costs of communication - that is, moving data between different levels of the memory hierarchy, or between different cores or processors - can significantly outweigh the cost of computation in practice \cite{BCD+14}. For example, communication-optimized implementations of matrix multiply \cite{HK81,BCD+14}, n-body problems \cite{DGKSY13}, and convolutional neural nets \cite{GAB+18}, among others, have significantly outperformed their non-communication-optimized counterparts. Therefore, rearranging the order in which we perform these operations by dividing the nested loops into subsets called \emph{tiles} which are executed in sequence can lead to significantly improved results in practice. Most previous applied work, including that cited above, has been focused on finding communication-optimal tilings and lower bounds for \emph{specific} problems. While this is useful for commonly used kernels whose optimizations can impact performance across a large number of applications (e.g. matrix multiply, convolutions), it is less practicable to develop new theory for and hand-optimize algorithms whose applications fall into smaller niches. This has stymied research into, for instance, unconventional neural net architectures such as capsule networks \cite{HSF18}, which require optimized kernels to test at scale but lack such kernels due to being unproven and not widely used \cite{BI19}. Progress has also been made \cite{CDK+13,DR16} in generalizing some of these techniques by considering communication patterns via the \emph{Brascamp-Lieb inequalities}, which apply to any loop nest where the array indices are affine functions of the loop indices (i.e. the $\phi_{i}$ above are affine). These methods provide both communication lower bounds and constructions for tilings for such problems. Unfortunately, the above lines of work have largely ignored situations when certain loop bounds ($L_{i}$, above) are small. 
In this case, the methods can produce weak lower bounds and infeasible tilings. Take, for instance, the case of matrix multiplication: \begin{align*} & {\rm for}\,\{x_{1},x_{2},x_{3}\}\in[L_{1}]\times[L_{2}]\times[L_{3}]\\ & \ \ \ \ \;\;A_{1}(x_{1},x_{3})+=A_{2}(x_{1},x_{2})\times A_{3}(x_{2},x_{3}) \end{align*} Existing combinatorial and geometric techniques \cite{BCD+14} state that a lower bound on the communication between a cache of size $M$ and main memory required to execute this set of instructions is \[ \Omega\left(L_{1}L_{2}L_{3}/M^{1/2}\right) \] words of memory, and may be attained by rewriting the nested loops as follows: \begin{align*} & {\rm for}\,\{o_{1},o_{2},o_{3}\}\in[0..L_{1}/B_{1}-1]\times[0..L_{2}/B_{2}-1]\times[0..L_{3}/B_{3}-1]\\ & \ \ \ \ \;\;{\rm for}\,\{i_{1},i_{2},i_{3}\}\in[B_{1}]\times[B_{2}]\times[B_{3}]\\ & \ \ \ \ \;\;\ \ \ \ \;\;x_{1}=B_{1}o_{1}+i_{1}\\ & \ \ \ \ \;\;\ \ \ \ \;\;x_{2}=B_{2}o_{2}+i_{2}\\ & \ \ \ \ \;\;\ \ \ \ \;\;x_{3}=B_{3}o_{3}+i_{3}\\ & \ \ \ \ \;\;\ \ \ \ \;\;A_{1}(x_{1},x_{3})+=A_{2}(x_{1},x_{2})\times A_{3}(x_{2},x_{3}) \end{align*} where the \emph{tile} (the three inner loops) has dimensions $B_{1}=B_{2}=B_{3}\lessapprox\sqrt{M/3}$. However, when $L_{1}<\sqrt{M/3}$, this tiling becomes infeasible. Furthermore, the lower bound also ceases to be useful. For instance, when $L_{3}=1$, corresponding to a matrix-vector multiplication, the minimum communication needed to evaluate this multiplication is at least $L_{1}L_{2}$, since $A_{2}$ must be read in its entirety. However, the previous lower bound evaluates to $\Omega\left(L_{1}L_{2}/M^{1/2}\right)$, which is clearly unachievable. \cite{DD18} addresses this situation for convolutions, finding a separate lower bound (and a corresponding, feasible, tiling) for the case when filter sizes are small (as they often are in most CNNs). 
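The tiled rewriting above can be sanity-checked in isolation. The following Python sketch (ours, purely illustrative; it assumes each $B_{i}$ divides the corresponding $L_{i}$) executes the blocked loop nest and verifies that, being a pure reordering of the same operations, it produces the same result as the untiled loops:

```python
def matmul_tiled(A2, A3, B1, B2, B3):
    """Blocked matmul: A1[x1][x3] += A2[x1][x2] * A3[x2][x3],
    executed one B1 x B2 x B3 tile of the iteration space at a time."""
    L1, L2, L3 = len(A2), len(A3), len(A3[0])
    assert L1 % B1 == 0 and L2 % B2 == 0 and L3 % B3 == 0
    A1 = [[0] * L3 for _ in range(L1)]
    for o1 in range(L1 // B1):
        for o2 in range(L2 // B2):
            for o3 in range(L3 // B3):
                # one tile: touches B1*B3, B1*B2, B2*B3 entries of A1, A2, A3
                for i1 in range(B1):
                    for i2 in range(B2):
                        for i3 in range(B3):
                            x1, x2, x3 = B1 * o1 + i1, B2 * o2 + i2, B3 * o3 + i3
                            A1[x1][x3] += A2[x1][x2] * A3[x2][x3]
    return A1

def matmul_naive(A2, A3):
    """Reference: the untiled triple loop."""
    L1, L2, L3 = len(A2), len(A3), len(A3[0])
    return [[sum(A2[x1][x2] * A3[x2][x3] for x2 in range(L2))
             for x3 in range(L3)] for x1 in range(L1)]

# Integer inputs, so the reordering changes nothing, not even rounding.
A2 = [[(3 * i + j) % 7 for j in range(6)] for i in range(4)]
A3 = [[(2 * i + j) % 5 for j in range(6)] for i in range(6)]
assert matmul_tiled(A2, A3, 2, 3, 2) == matmul_naive(A2, A3)
```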
In this paper, we apply the techniques from \cite{DD18} to find a \emph{general} communication lower bound and optimal tiling for \emph{arbitrary loop bounds} in the case where the array accesses are all projections onto subsets of the loop indices (the so-called ``projective case'', which applies to most dense linear algebra applications, as well as pointwise convolutions), and in doing so we prove that the optimal tile shape for a projective loop nest is always a rectangle. We review the proof in the large-bound case in Section \ref{sec:allarge}, present a stronger communication lower bound that encompasses bounds of arbitrary size in Section \ref{sec:The-Lower-Bound}, and present a linear program that gives the actual tiling required to achieve this lower bound (proving that it is tight) in Section \ref{sec:Tiling-construction}. We then conclude with several examples and a discussion in Sections \ref{sec:Examples-and-Applications} and \ref{sec:Discussion-and-Future}. \section{Problem Setup, Preliminaries, Notation, and Definitions} Define $[n]$ to be the set $\{1,2,...,n\}$, and $[m,n]$ to be the set $\{m,m+1,...,n-1,n\}$. Formally, we will concern ourselves with the following $d$-level nested-loop program, which consists of operations on the elements of the $d_{1},...,d_{n}$-dimensional arrays $A_{1},...,A_{n}$ indexed by affine functions $\phi_{i}:\mathbb{Z}^{d}\rightarrow\mathbb{Z}^{d_{i}}$ for $i\in[n]$: \begin{equation} \begin{aligned} & \text{for }x_{1}\in\left[L_{1}\right],...,\text{for }x_{d}\in\left[L_{d}\right]:\\ & \qquad\text{perform operations on }A_{1}\left[\phi_{1}\left(x_{1},...,x_{d}\right)\right],...,A_{n}\left[\phi_{n}\left(x_{1},...,x_{d}\right)\right] \end{aligned} \label{eq:nested-loop} \end{equation} This representation includes many commonly used matrix and tensor operations, including most linear algebra operations, tensor contractions, and convolutional neural nets. 
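As a concrete illustration of the projective special case studied in this paper (the encoding below is ours, not an interface from the paper), a projection $\phi$ is determined entirely by $\supp(\phi)$, so the access functions of, e.g., matrix multiplication can be written as:

```python
def projection(supp):
    """Return the projection phi with phi(x) = (x[j] for j in supp).
    Indices are 0-based here, so the text's phi(x1,...,x5) = (x1, x4)
    corresponds to supp = (0, 3)."""
    return lambda x: tuple(x[j] for j in supp)

# Matrix multiplication's access functions as projections (0-based):
# A1(x1, x3), A2(x1, x2), A3(x2, x3)
supp_matmul = [(0, 2), (0, 1), (1, 2)]
phis = [projection(s) for s in supp_matmul]
```

Here `phis[0]((7, 8, 9))` evaluates to `(7, 9)`, the entry of $A_{1}$ touched by iteration $(7,8,9)$.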
We will assume that each $x_{i}$ is present in the support of at least one of the $\phi_{j}$; this assumption may be made without loss of generality as in \cite{CDK+13}. Let us formally model the machine as follows: suppose we have a processor attached to a cache of size $M$, which is in turn connected to a slow memory of unlimited size. The processor may only perform operations on elements of the arrays present in the cache, and we wish to find a reordering of the operations in (\ref{eq:nested-loop}) that minimizes the amount of communication between the cache and the slow memory. \cite{CDK+13} provides a tight lower bound for communication complexity in this model when $L_{1},...,L_{d}$ are sufficiently large, as follows: First, represent each operation in (\ref{eq:nested-loop}), indexed by $x_{1},...,x_{d}$, as the point indexed by the vector $(x_{1},...,x_{d})\in\mathbb{Z}^{d}$. As a result, the entire set of operations represented by (\ref{eq:nested-loop}) can be treated as the hyper-rectangle of points $\boldsymbol{x}\in\left[L_{1}\right]\times...\times\left[L_{d}\right]$. Furthermore, note that the element of array $A_{i}$ required for the operation indexed by $(x_{1},...,x_{d})$ is $\phi_{i}(x_{1},...,x_{d})$; in particular, given a set $S\subset\mathbb{Z}^{d}$ of operations, the elements of $A_{i}$ it requires are indexed by $\phi_{i}(S)$. As a result, it suffices to find a lower bound on the number of subsets (or an upper bound on the size of a single subset) needed to tile the hyper-rectangle, with each one corresponding to a segment of the program that can be executed without going back to main memory. To satisfy this condition, we require that each subset (``tile'') $S$ satisfy the condition: \[ \left|\phi_{i}(S)\right|\le M \] as we cannot use more than $M$ words of memory in a computation without going to slow memory. 
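For a candidate rectangular tile, the constraint $\left|\phi_{i}(S)\right|\le M$ can be checked directly; in the following sketch (ours, purely illustrative) the footprint of a $b_{1}\times...\times b_{d}$ tile under a projection comes out to $\prod_{j\in\supp(\phi_{i})}b_{j}$, as one would expect:

```python
from itertools import product

def footprint(tile_dims, supp):
    """|phi(S)| for the rectangular tile S = [b1] x ... x [bd],
    where phi projects onto the (0-based) loop indices in supp."""
    S = product(*(range(b) for b in tile_dims))
    return len({tuple(x[j] for j in supp) for x in S})

# Matmul tile b1 x b2 x b3: the three footprints are b1*b3, b1*b2, b2*b3.
assert footprint((4, 5, 6), (0, 2)) == 4 * 6
```

For instance, a $5\times5\times5$ matmul tile satisfies all three constraints whenever $M\ge25$.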
Invoking the discrete Brascamp-Lieb inequality \cite{BCCT10,CDK+13}, we get that any such tile has volume at most $M^{\sum_{i\in[n]}s_{i}}$, where the $s_{i}$ are the solutions to the linear program: \begin{eqnarray} \min\sum_{i\in[1..n]}s_{i}\st\label{eq:hbl}\\ \sum_{i=1}^{n}s_{i}\rank(\phi_{i}(H)) & \ge & \rank(H)\qquad\forall\text{ subgroups }H\le\mathbb{Z}^{d}\nonumber \end{eqnarray} This implies that the minimum number of tiles needed to cover the entire hyper-rectangle is at least $\prod_{i\in[1..d]}L_{i}/M^{\sum_{i\in[n]}s_{i}}$. Since each tile corresponds to an execution of a subset of operations without going back to slow memory, and we must complete all operations in the hyper-rectangle, the total number of words transferred between slow and fast memory must be at least \[ \Omega\left(\frac{\prod_{i\in[d]}L_{i}}{M^{\sum_{i\in[n]}s_{i}-1}}\right)\ . \] An explicit construction of a tile shape that achieves this lower bound is described in \cite{RD16}. We will review this later in the paper for the projective case. \subsection{Multiple small bounds} We now generalize the proof of Section \ref{sec:HBL-Setup,-One} to the case where multiple loop bounds are taken to be small. Suppose that the loops indexed by $x_{i}$ have bounds $L_{i}$. Let $R_{j}\subseteq\{1..n\}$ denote the set of indices $i$ such that $\supp(\phi_{i})$ contains $x_{j}$. As before, our approach considers the communication lower bound for a ``slice'' - that is, a subset of the iteration polytope formed by restricting certain loop indices to fixed values - and sums these slice lower bounds over all possible values of the fixed indices. This time, however, each slice will be formed by simultaneously fixing multiple indices, which we assume without loss of generality are $x_{1}$ through $x_{q}$ (the following argument holds for any $q$, and is independent of the actual value of $q$).
As in the single-variable case, an upper bound on the maximum tile size for a single slice is given by $M^{\sum_{j\in\{1..n\}}\hat{s}_{j}}$, where $\hat{s}_{j}$ are any nonnegative numbers that satisfy: \begin{eqnarray} 1 & \le & \sum_{j\text{ s.t. }\supp(\phi'_{j})\ni x_{i}}\hat{s}_{j}\label{eq:croppedmulti} \end{eqnarray} where $\phi'_{j}$ now corresponds to removing $x_{1},...,x_{q}$ from $\phi_{j}$ (or, alternatively, chopping off the first $q$ rows of the HBL LP constraint matrix (\ref{eq:hbl_largeindex_matrix_constraints})). We now develop an analog to Lemma \ref{lem:1dslicingdistro} in order to maximize the sum of the slices over $\{x_{1},...,x_{q}\}\in\{1..L_{1}\}\times...\times\{1..L_{q}\}$. Our main result is as follows: \begin{thm} \label{lem:slicing} Let $q\in[1..d]$, and let $\hat{s}_{i}$ be any nonnegative numbers satisfying \[ 1\le\sum_{j\text{ s.t. }\supp(\phi'_{j})\ni x_{i}}\hat{s}_{j} \] where $\phi'_{j}$ is obtained by removing $x_{1},...,x_{q}$ from $\phi_{j}$. Then $M^{k}$, where \[ k=\sum_{i=1}^{n}\hat{s}_{i}+\sum_{j\in[q]\st\sum_{i\in R_{j}}\hat{s}_{i}\le1}\left[\beta_{j}\left(1-\sum_{i\in R_{j}}\hat{s}_{i}\right)\right] \] is an upper bound on the tile size. \end{thm} Notice that this theorem holds for all possible $q$, as well as for all reorderings of the variables. As a result, it in fact generates $2^{d}$ separate upper bounds for tile size (one for each subset $\mathscr{Q}$ of indices that we hold to be small).
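To illustrate how Theorem \ref{lem:slicing} is applied (our own sketch on matrix multiplication, with hypothetical sizes): take arrays $A(x_{1},x_{3})$, $B(x_{3},x_{2})$, $C(x_{1},x_{2})$, and let only $L_{3}\le M$ be small. With the row for $x_{3}$ removed, $\hat{s}=(0,0,1)$ is feasible, since the support of $C$ covers both remaining indices; and since $R_{3}=\{A,B\}$ carries weight $\hat{s}_{A}+\hat{s}_{B}=0\le1$, the theorem gives $k=1+\beta_{3}$, i.e. a tile-size bound of $M\cdot L_{3}$.

```python
import math

def tile_exponent(s_hat, R, small, L, M):
    """Exponent k from the slicing theorem:
       k = sum_i s_hat_i
         + sum over small j with sum_{i in R_j} s_hat_i <= 1
           of beta_j * (1 - that sum), where beta_j = log_M L_j."""
    k = sum(s_hat)
    for j in small:
        weight = sum(s_hat[i] for i in R[j])
        if weight <= 1:
            k += math.log(L[j], M) * (1 - weight)
    return k

# Arrays A, B, C = indices 0, 1, 2; R[j] = arrays whose support has x_j.
R = {1: [0, 2], 2: [1, 2], 3: [0, 1]}
M, L = 2 ** 20, {3: 2 ** 10}      # hypothetical sizes; only L_3 is small
k = tile_exponent([0.0, 0.0, 1.0], R, small=[3], L=L, M=M)
# k = 1 + beta_3 = 1.5 here, so the tile-size bound is M**k = M * L_3.
```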
Therefore, the smallest upper bound on tile size (which corresponds to the largest lower bound on communication) we can achieve in this manner is $M^{\hat{k}}$ for \[ \hat{k}=\min_{\mathscr{Q}\subseteq[d]}\sum_{i=1}^{n}\hat{s}_{\mathscr{Q},i}+\sum_{j\in\mathscr{Q}\st\sum_{i\in R_{j}}\hat{s}_{\mathscr{Q},i}\le1}\left[\beta_{j}\left(1-\sum_{i\in R_{j}}\hat{s}_{\mathscr{Q},i}\right)\right] \] where $\hat{s}_{\mathscr{Q},i}$ is the solution to the HBL LP (\ref{eq:hbl_largeindex_matrix_constraints}) with the rows indexed by elements of $\mathscr{Q}$ removed. \begin{proof} By induction on $q$. The base case, $q=1$, is simply Lemma \ref{lem:1dslicingdistro}. Let $\hat{s}'_{i}$ be defined as $\hat{s}_{[q-1],i}$. Suppose for induction that $M^{k}$, for \[ k=\sum_{i=1}^{n}\hat{s}'_{i}+\sum_{j\in[q-1]\st\sum_{i\in R_{j}}\hat{s}'_{i}\le1}\left[\beta_{j}\left(1-\sum_{i\in R_{j}}\hat{s}'_{i}\right)\right] \] is an upper bound on the tile size. We start by finding an upper bound on the tile size, as before, by summing over several ``slices'', each defined as the subset of elements where $x_{1}$ through $x_{q}$ are set to fixed values. We begin by generalizing the notion of slices to the case where multiple indices may be small. As before, let $\phi_{i}\brokenvert_{\{\hat{x}_{1},...,\hat{x}_{q}\}}$ denote $\phi_{i}$ with $x_{j}$ fixed to $\hat{x}_{j}$ for all $j\in[q]$. By definition, as $\phi_{i}$ only depends on indices in its support, $\phi_{i}\brokenvert_{\{x_{1},...,x_{q}\}}$ must be identical to $\phi_{i}\brokenvert_{\{x_{1},...,x_{q}\}\cap\supp(\phi_{i})}$.
We wish to maximize the size of the entire tile - that is, the sum of the sizes of all the slices: \[ \sum_{x_{1}\in[1..L_{1}],...,x_{q}\in[1..L_{q}]}\vert\phi_{1}\brokenvert_{\{x_{1},...,x_{q}\}}(V)\vert^{\hat{s}_{1}}\dots\vert\phi_{n}\brokenvert_{\{x_{1},...,x_{q}\}}(V)\vert^{\hat{s}_{n}} \] subject to the memory constraints \[ \sum_{x_{k}\in[1..L_{k}]\text{ for }k\in[1..q]\cap\supp(\phi_{i})}\vert\phi_{i}\brokenvert_{\{x_{1},...,x_{q}\}}(V)\vert\le M\qquad\forall i\in\bigcup_{j\in[1..q]}R_{j}\ . \] As before, we will simplify our notation by defining $y_{j,\{x_{1},...,x_{q}\}}\coloneqq\vert\phi_{j}\brokenvert_{\{x_{1},...,x_{q}\}}(V)\vert$. Our optimization problem can therefore be rewritten as maximizing: \begin{equation} \sum_{x_{1}\in[1..L_{1}],...,x_{q}\in[1..L_{q}]}y_{1,\{x_{1},...,x_{q}\}}^{\hat{s}_{1}}\dots y_{n,\{x_{1},...,x_{q}\}}^{\hat{s}_{n}}\label{eq:multismallobjnointersect} \end{equation} subject to the constraints: \begin{equation} 1\le\sum_{x_{k}\in[1..L_{k}]\text{ for }k\in[q]\cap\supp(\phi_{i})}y_{i,\{x_{1},...,x_{q}\}}\le M\qquad\forall i\in\bigcup_{j\in[q]}R_{j}\ .\label{multismallconstraints} \end{equation} The definition of $\phi_{i}\brokenvert_{\{\hat{x}_{1},...,\hat{x}_{q}\}}$ (and therefore of $y_{j,\{x_{1},...,x_{q}\}}$) requires us to impose an additional constraint on the solution: for all $i$, the value of $y_{i,\{x_{1},...,x_{q}\}}$ must remain independent of indices not in the support of $\phi_{i}$. Formally, if $x_{k}\notin\supp(\phi_{i})$, then \begin{equation} y_{i,\{x_{1},...,x_{k-1},a,x_{k+1},...,x_{q}\}}=y_{i,\{x_{1},...,x_{k-1},b,x_{k+1},...,x_{q}\}}\label{eq:fixsuppconstraint} \end{equation} for any $a,b$. Our approach will be to find a candidate solution that ignores this constraint, and then to show that this candidate solution actually does satisfy (\ref{eq:fixsuppconstraint}) (i.e. that the constraint is redundant).
Furthermore, in order to make it easier to reason about the constraints (\ref{multismallconstraints}), we multiply them all by the appropriate values to ensure that each sum is over the same set of variables, $x_{1}$ through $x_{q}$: \begin{equation} \prod_{j\in[q]\backslash\supp(\phi_{i})}L_{j}\le\sum_{x_{1}\in[1..L_{1}],...,x_{q}\in[1..L_{q}]}y_{i,\{x_{1},...,x_{q}\}}\le M\prod_{j\in[q]\backslash\supp(\phi_{i})}L_{j}\qquad\forall i\in\bigcup_{j\in[q]}R_{j}\ .\label{eq:multi_small_constraint} \end{equation} Since our goal is to find an upper bound on the tile size, which is the result of this constrained maximization problem, we can remove the lower bound constraints on $\sum_{x_{1}\in[1..L_{1}],...,x_{q}\in[1..L_{q}]}y_{i,\{x_{1},...,x_{q}\}}$ (i.e. the leftmost inequality in (\ref{eq:multi_small_constraint})) without affecting correctness. The resulting problem is almost identical to that of Lemma \ref{lem:1dslicingdistro}, except with different limits (one may think of this as ``flattening'' the $q$-dimensional tensor $x_{1},...,x_{q}$ into a single vector in order to get a single sum, as we did in the previous section). Recall that none of the steps we used to compute the maximum in our proof of Lemma \ref{lem:1dslicingdistro} actually used the values of the right-hand sides of the constraints, since those constants were all differentiated away when taking gradients; as a result, the same result applies here. Specifically, the maximum is obtained at a point specified as follows: select some subset $\mathscr{S}\subseteq\{1..L_{1}\}\times...\times\{1..L_{q}\}$ of integer tuples, representing the index tuples for which $y_{i,\{x_{1},..,x_{q}\}}$ will be nonzero. For each $\{x_{1},..,x_{q}\}$ in $\mathscr{S}$, $y_{i,\{x_{1},..,x_{q}\}}$ must be equal to a constant value independent of $\{x_{1},...,x_{q}\}$.
In order to maximize (\ref{eq:multismallobjnointersect}), we saturate constraint (\ref{multismallconstraints}) to obtain: \begin{equation} y_{i,\{x_{1},...,x_{q}\}}=\frac{M}{\vert\mathscr{S}_{i}\vert}\ \forall i\label{eq:plug_in_multi_small} \end{equation} where $\mathscr{S}_{i}$ is $\phi_{i}$ (restricted to $x_{1}...x_{q}$) applied to $\mathscr{S}$. For indices not in $\mathscr{S}$, set $y_{i,\{x_{1},...,x_{q}\}}$ to zero for all $i$. The resulting upper bound for tile size is therefore: \begin{eqnarray} \sum_{x_{1},...,x_{q}\in\mathscr{S}}\prod_{i}\left(\frac{M}{\vert\mathscr{S}_{i}\vert}\right)^{\hat{s}_{i}} & = & \vert\mathscr{S}\vert\prod_{i}\left(\frac{M}{\vert\mathscr{S}_{i}\vert}\right)^{\hat{s}_{i}}\nonumber \\ & = & \frac{\vert\mathscr{S}\vert}{\prod_{i}\vert\mathscr{S}_{i}\vert^{\hat{s}_{i}}}M^{\sum_{i}\hat{s}_{i}}\label{eq:multismallobjintermsofs} \end{eqnarray} where the first equality is a result of the independence of the summand from $x$, with the number of nonzero terms in the sum being $\vert\mathscr{S}\vert$. \textbf{Claim:} without loss of generality, we can assume that $\mathscr{S}$ is a rectangle; that is, it can be written as a product $C_{1}\times\dots\times C_{q}$ for some sets $C_{i}\subseteq[L_{i}]$. \textbf{Proof of claim:} Suppose not. Then there exist points $x',x''\in\mathscr{S}$ such that there exists some point $x^{*}\notin\mathscr{S}$, where each $x_{j}^{*}$ is equal to $x_{j}'$ for all $j$ except a single value $j^{*}$, at which it takes on the value of $x_{j}''$. To see why this is true, take any two distinct $x',x''\in\mathscr{S}$, and repeatedly change one component of $x'$ to match the corresponding component of $x''$, stopping when either $x'=x''$ or $x'\notin\mathscr{S}$. In the latter case, set $x^{*}=x'$, and let $x'$ denote its immediate predecessor in this process. If we never end up with such an $x^{*}$ for any distinct pair $x',x''\in\mathscr{S}$, then $\mathscr{S}$ must be a rectangle.
Our goal will be to show that this configuration is suboptimal. Let us classify the functions $\phi_{i}$ into the following categories, distinguished by how $\phi_{i}$ maps $x'$, $x''$, and $x^{*}$. \begin{enumerate} \item $\phi_{i}(x')=\phi_{i}(x^{*})\ne\phi_{i}(x'')$. Notice that replacing $x''$ with $x^{*}$ in $\mathscr{S}$ either reduces $\left|\mathscr{S}_{i}\right|$ by one (if there is no other $x^{\dagger}$ such that $\phi_{i}(x^{\dagger})=\phi_{i}(x'')$) or keeps it the same (if such an $x^{\dagger}$ exists); notice that in the latter case, adding $x^{*}$ to $\mathscr{S}$ keeps $\left|\mathscr{S}_{i}\right|$ constant. We denote these cases (1a) and (1b) respectively. \item $\phi_{i}(x'')=\phi_{i}(x^{*})\ne\phi_{i}(x')$. Analogously to case (1), replacing $x'$ with $x^{*}$ either reduces $\left|\mathscr{S}_{i}\right|$ (case (2a)) or keeps it the same (case (2b)). \item $\phi_{i}$ maps $x'$, $x''$, and $x^{*}$ to three distinct points. \item $\phi_{i}(x')=\phi_{i}(x'')=\phi_{i}(x^{*})$. \item $\phi_{i}(x')=\phi_{i}(x'')\ne\phi_{i}(x^{*})$. Notice that this category must be empty: since $\phi_{i}(x')=\phi_{i}(x'')$, the points $x'$ and $x''$ agree on $\supp(\phi_{i})$; every coordinate of $x^{*}$ equals the corresponding coordinate of $x'$, except at $j^{*}$, where it equals that of $x''$, so $x^{*}$ agrees with both on $\supp(\phi_{i})$, and $\phi_{i}(x^{*})=\phi_{i}(x')$. \end{enumerate} In the above categories, $i$ falling into (1) or (4) implies that $i\notin R_{j^{*}}$, while $i$ falling into (2) or (3) implies that $i\in R_{j^{*}}$. We will show that $\mathscr{S}$ is suboptimal by providing strict improvements on it. \begin{enumerate} \item If there are any $i$ in category (1a), we replace $x''$ with $x^{*}$ in $\mathscr{S}$, reducing $\left|\mathscr{S}_{i}\right|$.
In order to see how this change affects the values of $\mathscr{S}_{i}$ for other $i$, we first note that for other $i\notin R_{j^{*}}$, $\phi_{i}(x')=\phi_{i}(x^{*})$, so this change can only keep constant or decrease $\left|\mathscr{S}_{i}\right|$ for such $i$. For all $i$ in any of the other categories - (1b), (2ab), (3), or (4) - $\left|\mathscr{S}_{i}\right|$ remains the same. Therefore, as the value of $\left|\mathscr{S}_{i}\right|$ either remains the same or decreases (with at least one strict decrease), and $\left|\mathscr{S}\right|$ remains constant, we obtain a strict increase in the value of (\ref{eq:multismallobjintermsofs}). \item If some $i$ falls into category (3): Denote the set of $i$ such that $\phi_{i}$ maps $x'$, $x''$, and $x^{*}$ onto different values as $Q$. We will split into two cases, based on the value of $\sum_{i\in R_{j^{*}}}\hat{s}_{i}$: \begin{enumerate} \item Suppose $\sum_{i\in R_{j^{*}}}\hat{s}_{i}\ge1$. Consider the assignment to the $y_{i,x}$ given by $\mathscr{S}$; its objective (\ref{eq:multismallobjnointersect}) is: \[ \sum_{x_{1}\in[L_{1}],...,x_{j^{*}-1}\in[L_{j^{*}-1}],x_{j^{*}+1}\in[L_{j^{*}+1}],...,x_{q}\in[L_{q}]}\left(\sum_{x_{j^{*}}\in[L_{j^{*}}]}y_{1,\{x_{1},...,x_{q}\}}^{\hat{s}_{1}}\dots y_{n,\{x_{1},...,x_{q}\}}^{\hat{s}_{n}}\right) \] Factoring the innermost term into terms that are constant w.r.t. $x_{j^{*}}$ and those that are not, we can rewrite this as: \[ \sum_{x_{1}\in[L_{1}],...,x_{j^{*}-1}\in[L_{j^{*}-1}],x_{j^{*}+1}\in[L_{j^{*}+1}],...,x_{q}\in[L_{q}]}\left(\prod_{i\in[n]\backslash R_{j^{*}}}y_{i,\{x_{1},...,x_{q}\}}^{\hat{s}_{i}}\sum_{x_{j^{*}}\in[L_{j^{*}}]}\prod_{i\in R_{j^{*}}}y_{i,\{x_{1},...,x_{q}\}}^{\hat{s}_{i}}\right)\ .
\] Let us restrict our attention to a single ``slice'': that is, an instance of the term \begin{equation} \sum_{x_{j^{*}}\in[L_{j^{*}}]}\prod_{i\in R_{j^{*}}}y_{i,\{x_{1},...,x_{q}\}}^{\hat{s}_{i}}\label{eq:innerterm1sum} \end{equation} with fixed values for $x_{1}$ through $x_{q}$, excluding $x_{j^{*}}$. By the equality constraints, all the nonzero values of $y_{i,\{x_{1},...,x_{q}\}}^{\hat{s}_{i}}$ must be equal to a constant independent of $x_{1},...,x_{q}$ (but dependent on $i$). Let $m_{i}=\sum_{x_{j^{*}}\in[L_{j^{*}}]}y_{i,\{x_{1},...,x_{q}\}}$, and let $\sigma$ denote the number of $x_{j^{*}}$ such that $(x_{1},...,x_{q})$, with all coordinates except $x_{j^{*}}$ set to our fixed values, is in $\mathscr{S}$ (and therefore contributes a nonzero term to the sum (\ref{eq:innerterm1sum})); this restricts the nonzero values of $y_{i,\{x_{1},...,x_{q}\}}$ to $m_{i}/\sigma$. Therefore, we may rewrite the above sum as: \begin{eqnarray*} \sum_{x_{j^{*}}\in[L_{j^{*}}]}\prod_{i\in R_{j^{*}}}y_{i,\{x_{1},...,x_{q}\}}^{\hat{s}_{i}} & = & \sigma\prod_{i\in R_{j^{*}}}\left(\frac{m_{i}}{\sigma}\right)^{\hat{s}_{i}}\\ & = & \sigma^{1-\sum_{i\in R_{j^{*}}}\hat{s}_{i}}\prod_{i\in R_{j^{*}}}m_{i}^{\hat{s}_{i}} \end{eqnarray*} As $\sum_{i\in R_{j^{*}}}\hat{s}_{i}\ge1$, the exponent of $\sigma$ in the above expression is nonpositive; therefore, this term is bounded above by \[ \prod_{i\in R_{j^{*}}}m_{i}^{\hat{s}_{i}} \] which we get when we set $\sigma$ to $1$. As this upper bound holds individually for each ``slice'', the value of the objective (\ref{eq:multismallobjintermsofs}) is upper bounded by setting $\sigma$ to $1$ for every slice, i.e. adding an additional constraint forcing $L_{j^{*}}$ to $1$, which is equivalent to removing $x_{j^{*}}$ from the problem entirely.
Applying our inductive hypothesis, we get that an upper bound is $M^{k'}$ where \[ k'=\sum_{i=1}^{n}\hat{s}'_{i}+\sum_{j\in[1..q]\backslash\{j^{*}\}\st\sum_{i\in R_{j}}\hat{s}'_{i}\le1}\left[\beta_{j}\left(1-\sum_{i\in R_{j}}\hat{s}'_{i}\right)\right]\ . \] Since $\sum_{i\in R_{j^{*}}}\hat{s}_{i}\ge1$, there is no difference between $\hat{s}'_{i}$ and $\hat{s}_{i}$ for all $i$: the only constraint that the former must satisfy and the latter need not is $\sum_{i\in R_{j^{*}}}\hat{s}_{i}\ge1$, which holds regardless in this case. Therefore, we can replace $\hat{s}'_{i}$ with $\hat{s}_{i}$, completing the induction for the entire proof of Theorem \ref{lem:slicing} in this particular case. \item Now suppose $\sum_{i\in R_{j^{*}}}\hat{s}_{i}<1$. As $Q\subseteq R_{j^{*}}$, it immediately follows that $\sum_{i\in Q}\hat{s}_{i}\le\sum_{i\in R_{j^{*}}}\hat{s}_{i}<1$. Consider the values of $y_{i,x'}$, $y_{i,x''}$, and $y_{i,x^{*}}$; we will show that a reassignment of these three values strictly increases the objective.\\ We may assume without loss of generality that $\prod_{i\notin Q}y_{i,x^{*}}^{\hat{s}_{i}}$ is nonzero. To see why, suppose $i'\notin Q$; then by definition $\phi_{i'}$ must map $x',x^{*}$ to the same point, or $x'',x^{*}$ to the same point. In the former case, we can set $y_{i',x^{*}}=y_{i',x'}$ without violating any constraint, as we can substitute the two terms freely in any constraint-sum involving them; in the latter case, the same applies if we set $y_{i',x^{*}}=y_{i',x''}$. As $x',x''\in\mathscr{S}$, both $y_{i',x'}$ and $y_{i',x''}$ must be nonzero, so $y_{i',x^{*}}$ must be nonzero as well.
As nonzero values of $y_{i,x}$ are independent of $x$ for all $i$, we must have \begin{equation} y_{i',x^{*}}=y_{i',x''}=y_{i',x'}\label{eq:notinqequality} \end{equation} For all $i$, let the value of $y_{i,x'}+y_{i,x''}+y_{i,x^{*}}$ be denoted $\mu_{i}$, and define $k',k'',k^{*}$ such that $y_{i,x'}=k'\mu_{i}$ (and likewise for $k''$, $k^{*}$); our starting configuration, with $\mathscr{S}$ containing $x',x''$ but not $x^{*}$, is $k'=k''=1/2$, $k^{*}=0$. So as not to break any constraints, we require that the value of $y_{i,x'}+y_{i,x''}+y_{i,x^{*}}$ stay constant, i.e. we enforce $k'+k''+k^{*}=1$. The contribution of these three points to the objective (\ref{eq:multismallobjnointersect}) is: \begin{align*} & \prod_{i\in Q}y_{i,x'}^{\hat{s}_{i}}\prod_{i\notin Q}y_{i,x'}^{\hat{s}_{i}}+\prod_{i\in Q}y_{i,x''}^{\hat{s}_{i}}\prod_{i\notin Q}y_{i,x''}^{\hat{s}_{i}}+\prod_{i\in Q}y_{i,x^{*}}^{\hat{s}_{i}}\prod_{i\notin Q}y_{i,x^{*}}^{\hat{s}_{i}}\\ & =\left(\prod_{i\in Q}y_{i,x'}^{\hat{s}_{i}}+\prod_{i\in Q}y_{i,x''}^{\hat{s}_{i}}+\prod_{i\in Q}y_{i,x^{*}}^{\hat{s}_{i}}\right)\prod_{i\notin Q}y_{i,x'}^{\hat{s}_{i}} \end{align*} with equality following from (\ref{eq:notinqequality}).
We substitute $y_{i,x'}=k'\mu_{i}$ and the corresponding definitions for $y_{i,x''},y_{i,x^{*}}$ to rewrite the above expression as: \begin{align*} & \left(\prod_{i\in Q}\left(k'\mu_{i}\right)^{\hat{s}_{i}}+\prod_{i\in Q}\left(k''\mu_{i}\right)^{\hat{s}_{i}}+\prod_{i\in Q}\left(k^{*}\mu_{i}\right)^{\hat{s}_{i}}\right)\prod_{i\notin Q}y_{i,x'}^{\hat{s}_{i}}\\ & =\left(k'^{\sum_{i\in Q}\hat{s}_{i}}\prod_{i\in Q}\mu_{i}^{\hat{s}_{i}}+k''^{\sum_{i\in Q}\hat{s}_{i}}\prod_{i\in Q}\mu_{i}^{\hat{s}_{i}}+\left(k^{*}\right)^{\sum_{i\in Q}\hat{s}_{i}}\prod_{i\in Q}\mu_{i}^{\hat{s}_{i}}\right)\prod_{i\notin Q}y_{i,x'}^{\hat{s}_{i}}\\ & =\left(k'^{\sum_{i\in Q}\hat{s}_{i}}+k''^{\sum_{i\in Q}\hat{s}_{i}}+\left(k^{*}\right)^{\sum_{i\in Q}\hat{s}_{i}}\right)\prod_{i\notin Q}y_{i,x'}^{\hat{s}_{i}}\prod_{i\in Q}\mu_{i}^{\hat{s}_{i}} \end{align*} We will leave $y_{i,x'}$ constant for all $i\notin Q$, and we will not vary $\mu_{i}$, the sum of $y_{i,x'}$, $y_{i,x''}$, and $y_{i,x^{*}}$, so it suffices to maximize \[ k'^{\sum_{i\in Q}\hat{s}_{i}}+k''^{\sum_{i\in Q}\hat{s}_{i}}+\left(k^{*}\right)^{\sum_{i\in Q}\hat{s}_{i}} \] subject to \[ k'+k''+k^{*}=1. \] As $\sum_{i\in Q}\hat{s}_{i}<1$, the solution to this maximization problem is obtained by setting $k'=k''=k^{*}=1/3$; all other assignments (including the current one) are suboptimal. As we do not vary any $y_{i,x}$ for $i\notin Q$ and any $x$, this change does not affect the constraints corresponding to any $\phi_{i}$ other than those in $Q$, and the latter all remain satisfied since we do not vary $\mu_{i}$; therefore, both assignments satisfy the constraints (\ref{multismallconstraints}). Therefore the current assignment under $\mathscr{S}$, with $k^{*}$ set to $0$ and $k',k''$ set to $1/2$, must be suboptimal, providing us with our contradiction in this case. \end{enumerate} \item If there exists $i$ in category (2a), but none in (1a) or (3), we will replace $x'$ with $x^{*}$ in $\mathscr{S}$, decreasing $\left|\mathscr{S}_{i}\right|$ by one.
The values of $\left|\mathscr{S}_{i}\right|$ for other $i$, in this case, either also decrease (for other $i$ falling in case (2a)) or remain the same (for $i$ falling in cases (1b), (2b), or (4)), therefore corresponding to a strict improvement in the value of (\ref{eq:multismallobjintermsofs}). \item If we have $i$ in categories (1b), (2b), or (4) (but none in categories (1a), (2a), or (3), all of which were dealt with in earlier cases), we add $x^{*}$ to $\mathscr{S}$; this does not change any value of $\vert\mathscr{S}_{i}\vert$, but increases $\vert\mathscr{S}\vert$ by $1$, leading to a strictly improved solution. \end{enumerate} Each of these cases (except case (3a), which uses the inductive hypothesis) presents a strict improvement to the value of (\ref{eq:multismallobjintermsofs}). Therefore, $\mathscr{S}$ cannot be optimal, providing a contradiction. We can therefore conclude that an optimal $\mathscr{S}$ must have no triple $x',x''\in\mathscr{S}$, $x^{*}\notin\mathscr{S}$ such that $x^{*}$ agrees with $x'$ everywhere except one coordinate, where it agrees with $x''$, and therefore $\mathscr{S}$ must be a rectangle, as desired. $\blacksquare$ Now that we've shown that $\mathscr{S}$ is a rectangle, let us assume that its dimensions are $\rho_{1},...,\rho_{q}$. Then $\mathscr{S}$ has cardinality $\prod_{i\in[q]}\rho_{i}$, and $\mathscr{S}_{i}$ has cardinality $\prod_{j\in[q]\cap\supp(\phi_{i})}\rho_{j}$.
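The inequality at the heart of case (3b) — that for exponent $\sigma=\sum_{i\in Q}\hat{s}_{i}<1$ the sum $k'^{\sigma}+k''^{\sigma}+\left(k^{*}\right)^{\sigma}$ over the simplex $k'+k''+k^{*}=1$ is maximized at the uniform point, so the starting configuration $(1/2,1/2,0)$ is strictly beaten — is just strict concavity of $t\mapsto t^{\sigma}$. A quick numeric sanity check of our own, with an arbitrary $\sigma$:

```python
# Concavity check for case (3b): with exponent sigma < 1, spreading the
# mass k' + k'' + k* = 1 evenly beats the starting point (1/2, 1/2, 0).
sigma = 0.7  # arbitrary exponent in (0, 1)

def objective(kp, kpp, kstar):
    return kp ** sigma + kpp ** sigma + kstar ** sigma

assert objective(1 / 3, 1 / 3, 1 / 3) > objective(1 / 2, 1 / 2, 0)

# Coarse grid search over the simplex confirms the uniform point wins.
grid = [i / 20 for i in range(21)]
best = max(objective(a, b, max(1 - a - b, 0.0))
           for a in grid for b in grid if a + b <= 1)
assert best <= objective(1 / 3, 1 / 3, 1 / 3) + 1e-12
```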
Substituting into (\ref{eq:multismallobjintermsofs}), we get: \begin{eqnarray*} \frac{\vert\mathscr{S}\vert}{\prod_{i}\vert\mathscr{S}_{i}\vert^{\hat{s}_{i}}}M^{\sum_{i}\hat{s}_{i}} & = & \frac{\prod_{i\in[q]}\rho_{i}}{\prod_{i\in[n]}\left(\prod_{j\in[q]\cap\supp(\phi_{i})}\rho_{j}\right)^{\hat{s}_{i}}}M^{\sum_{i}\hat{s}_{i}}\\ & = & \frac{\prod_{i\in[q]}\rho_{i}}{\prod_{j\in[q]}\left(\prod_{i\in R_{j}}\rho_{j}^{\hat{s}_{i}}\right)}M^{\sum_{i}\hat{s}_{i}}\\ & = & \prod_{j\in[q]}\rho_{j}^{1-\sum_{i\in R_{j}}\hat{s}_{i}}M^{\sum_{i}\hat{s}_{i}} \end{eqnarray*} Since we have full control over the value of each $\rho_{j}$, we can maximize the value of this expression by setting $\rho_{j}$ to its maximum possible value, $L_{j}$, if $1-\sum_{i\in R_{j}}\hat{s}_{i}\ge0$, and to its minimum possible value, $1$, if $1-\sum_{i\in R_{j}}\hat{s}_{i}\le0$. Therefore, the maximum value of our objective (\ref{eq:multismallobjnointersect}) is obtained at: \[ M^{\sum_{i}\hat{s}_{i}}\prod_{j\in[q]\st\sum_{i\in R_{j}}\hat{s}_{i}\le1}L_{j}^{1-\sum_{i\in R_{j}}\hat{s}_{i}} \] or equivalently, $M^{k}$ where \begin{equation} k=\sum_{i=1}^{n}\hat{s}_{i}+\sum_{j\in[q]\st\sum_{i\in R_{j}}\hat{s}_{i}\le1}\left[\beta_{j}\left(1-\sum_{i\in R_{j}}\hat{s}_{i}\right)\right]\ .\label{eq:allqlowerbound} \end{equation} as desired. Finally, we need to modify our solution to satisfy (\ref{eq:fixsuppconstraint}) with no change to the objective value. Let $y'_{i,\{x_{1},...,x_{q}\}}$ be $\max_{x_{j}\st j\notin\supp(\phi_{i})}y_{i,\{x_{1},...,x_{q}\}}$, which takes on the value $\frac{M}{\vert\mathscr{S}_{i}\vert}$ if there is some element of $\mathscr{S}$ that matches $(x_{1},...,x_{q})$ at the indices in the support of $\phi_{i}$, and is zero otherwise.
In order to show that this modification does not change the value of objective (\ref{eq:multismallobjnointersect}), it suffices to show that \begin{equation} y_{1,\{x_{1},...,x_{q}\}}^{\hat{s}_{1}}\dots y_{n,\{x_{1},...,x_{q}\}}^{\hat{s}_{n}}=\left(y'_{1,\{x_{1},...,x_{q}\}}\right)^{\hat{s}_{1}}\dots\left(y'_{n,\{x_{1},...,x_{q}\}}\right)^{\hat{s}_{n}}\label{eq:singleterm} \end{equation} Suppose $\left(x_{1},...,x_{q}\right)\in\mathscr{S}$. Both sides are nonzero, and by the equality constraints they must be the same. Now suppose $\left(x_{1},...,x_{q}\right)\notin\mathscr{S}$. Clearly the left-hand side is zero. Recall that $\mathscr{S}$ is a rectangle; that is, it can be written as the set $\{\left(x_{1},...,x_{q}\right):x_{i}\in C_{i}\ \forall i\}$ for some sets $C_{i}\subseteq[L_{i}]$. By definition, there must exist some $k$ such that $x_{k}\notin C_{k}$. There must be some $j'$ such that $\supp(\phi_{j'})$ contains $x_{k}$; by definition, $y'_{j',\{x_{1},...,x_{q}\}}$ - and therefore the entire right-hand side of (\ref{eq:singleterm}) - must be zero as well. Furthermore, in order to show that this solution does not violate any of the constraints, consider \[ \sum_{x_{k}\in[1..L_{k}]\text{ for }k\in[q]\cap\supp(\phi_{i})}y'_{i,\{x_{1},...,x_{q}\}} \] By definition, at most $\vert\mathscr{S}_{i}\vert$ of these terms may be nonzero, and since each must have value $M/\vert\mathscr{S}_{i}\vert$, this sum must be at most $M$, as desired. Notice that this proof works if we fix any subset of $[q]$ rather than the entire set. In other words, we can freely replace the sum from $1$ to $q$ with a sum over any subset of $[q]$ and still get a valid upper bound (by changing the sum from $j\in[q]$ to a sum over the corresponding subset of $[q]$ in equation (\ref{eq:allqlowerbound})).
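The rectangle characterization that the argument above rests on — a finite $\mathscr{S}$ is a product $C_{1}\times\dots\times C_{q}$ exactly when it is closed under single-coordinate exchanges between its members — is easy to sanity-check by brute force (a small sketch of our own):

```python
import itertools

def is_rectangle(S):
    """A finite set of q-tuples is a product C_1 x ... x C_q iff for all
    x', x'' in S, changing one coordinate of x' to match x'' stays in S."""
    S = set(S)
    for xp, xpp in itertools.permutations(S, 2):
        for j in range(len(xp)):
            xstar = xp[:j] + (xpp[j],) + xp[j + 1:]
            if xstar not in S:
                return False
    return True

# A product set passes; removing one corner creates exactly the
# forbidden triple (x', x'', x*) from the proof of the claim.
rect = set(itertools.product([0, 2], [1, 3]))
assert is_rectangle(rect)
assert not is_rectangle(rect - {(2, 3)})
```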
\end{proof} \section{The Lower Bound} \label{sec:The-Lower-Bound} \subsection{One Small Index} \label{sec:HBL-Setup,-One} We will start our approach to small loop bounds by considering the case when all loop indices but one are assumed to be bounded by arbitrarily large values. Our approach will be to (a) find an upper bound for a tile restricted to a single ``slice'' of the iteration space formed by fixing the loop index with a small bound, (b) calculate an upper bound for the entire tile by summing individual slice bounds together over all possible values of the same index, and (c) divide the total number of operations by the aforementioned quantity to achieve a communication lower bound. Let us first consider the case where a single loop bound - say, $L_{1}$, the upper bound on $x_{1}$ - is small, and the others are large. We may assume without loss of generality that $L_{1}\le M$; if the opposite were true, then $L_{1}$ would be large enough for the analysis of Section \ref{sec:allarge} to apply, as any tile whose memory footprint is at most $M$ would fit in the $L_{1}$ dimension. Furthermore, suppose without loss of generality that $\phi_{1},...,\phi_{p}$ (for some integer $p$) all contain $x_{1}$ and $\phi_{p+1},...,\phi_{n}$ do not. We will now find a communication lower bound for the subset of instructions whose $x_{1}$ index is fixed (since the loop bounds are constant and therefore independent of $x_{1}$, the result is the same for all possible values of $x_{1}$). Let $\phi'_{1},...,\phi'_{p}$ be the functions with $x_{1}$ removed. For instance, if $\phi_{1}=(x_{1},x_{2},x_{3})$, then $\phi'_{1}=(x_{2},x_{3})$. A communication lower bound for a single ``slice'' of operations with $x_{1}$ fixed can be found by using the LP (\ref{eq:lp_largeindex_projective}), with each $\phi$ replaced with $\phi'$, to compute an upper bound for the maximum tile size: \begin{eqnarray*} \min\sum\hat{s}_{j}\ \st\\ 1 & \le & \sum_{j\text{ s.t.
}\supp(\phi'_{j})\ni x_{i}}\hat{s}_{j}\qquad\forall i\in[2..d] \end{eqnarray*} This amounts to removing the first row in the constraint matrix of the LP (\ref{eq:hbl_largeindex_matrix_constraints}): \[ \begin{bmatrix}\multicolumn{3}{c}{\text{[remove first row]}}\\ \vert & & \vert\\ \phi_{1} & \cdots & \phi_{n}\\ \vert & & \vert \end{bmatrix}\begin{bmatrix}\hat{s}_{1}\\ \vdots\\ \hat{s}_{n} \end{bmatrix}\ge\ov\ \] To find an upper bound for the size of a tile, we sum the upper bounds for the sizes of its slices, each of which corresponds to a single value of $x_{1}$. Let $\phi_{1}\brokenvert_{x_{1}=k},...,\phi_{n}\brokenvert_{x_{1}=k}$ be the functions with $x_{1}$ fixed to $k$. Then, the maximum tile size is found by maximizing the following quantity (with $V$ representing the tile, and upper-bounding $\vert\phi_{j}\brokenvert_{x_{1}=i}(V)\vert$ by $M$ for $j>p$, since these functions do not depend on $x_{1}$): \begin{eqnarray} \sum_{i\in[L_{1}]}\vert\phi_{1}\brokenvert_{x_{1}=i}(V)\vert^{\hat{s}_{1}}\dots\vert\phi_{n}\brokenvert_{x_{1}=i}(V)\vert^{\hat{s}_{n}} & \le & M^{\sum_{i\in[p+1,n]}\hat{s}_{i}}\sum_{i\in[L_{1}]}\vert\phi_{1}\brokenvert_{x_{1}=i}(V)\vert^{\hat{s}_{1}}\dots\vert\phi_{p}\brokenvert_{x_{1}=i}(V)\vert^{\hat{s}_{p}}\label{eq:LagrangeSingleVarObjective} \end{eqnarray} subject to: \begin{equation} \sum_{i\in\left[L_{1}\right]}\vert\phi_{j}\brokenvert_{x_{1}=i}(V)\vert\le M\qquad\forall j\in[p]\ .\label{eq:lagrangesinglevar} \end{equation} We bound (\ref{eq:LagrangeSingleVarObjective}) subject to the constraints (\ref{eq:lagrangesinglevar}), and compute the maximum tile size, as follows: \begin{lem} \label{lem:1dslicingdistro}The maximum tile size for a tile $V$, subject to the constraints (a) that $\vert\phi_{i}(V)\vert\le M$ for all $i$ and (b) that the set of all distinct $x_{1}$-coordinates of elements of $V$ is at most $L_{1}$ in cardinality (i.e. the tile fits inside the loop bounds), is bounded above by $M^{\kappa}$, where \[ \kappa=\max\left\{ \sum_{i=1}^{n}\hat{s}_{i}+\beta_{1}\left(1-\sum_{i=1}^{p}\hat{s}_{i}\right),\sum_{i=1}^{n}\hat{s}_{i}\right\} \ .
\] \end{lem} \begin{proof} There are three cases: \begin{enumerate} \item If $\sum_{i\in[p]}\hat{s}_{i}<1$, the maximum of the quantity (\ref{eq:LagrangeSingleVarObjective}) is achieved when we distribute the weight across terms in the sum, i.e. for all $j\in[1..p]$, let $\vert\phi_{j}\brokenvert_{x_{1}=i}(V)\vert=M/L_{1}$ for all $i\in[1..L_{1}]$, which leads to a tile size of $M^{\kappa}$ where \begin{equation} \kappa\coloneqq\sum_{i=1}^{n}\hat{s}_{i}+\beta_{1}\left(1-\sum_{i=1}^{p}\hat{s}_{i}\right)\label{eq:sle1} \end{equation} and $\beta_{1}=\log_{M}L_{1}$. \item If $\sum_{i\in[p]}\hat{s}_{i}>1$, the maximum is achieved when we concentrate the entire weight into one term of the sum (i.e. for all $j\in[1..p]$, let $\vert\phi_{j}\brokenvert_{x_{1}=i'}(V)\vert=M$ for some $i'$ and let $\vert\phi_{j}\brokenvert_{x_{1}=i}(V)\vert=0$ for $i\ne i'$), which leads to a tile size of $M^{\kappa}$ where \begin{equation} \kappa\coloneqq\sum_{i=1}^{n}\hat{s}_{i}\ .\label{eq:sge1} \end{equation} \item If $\sum_{i\in[p]}\hat{s}_{i}=1$, then (\ref{eq:sle1}) and (\ref{eq:sge1}) are equal. Furthermore, since the only difference between $\hat{s}$ and $s$ is that the latter must satisfy the additional constraint $\sum_{i\in\{1..p\}}s_{i}\ge1$ (which is satisfied in this case by $\hat{s}$ as well), we get an upper bound of $M^{\sum_{i=1}^{n}\hat{s}_{i}}=M^{\sum_{i=1}^{n}s_{i}}$ immediately from (\ref{eq:lp_largeindex_projective}). \end{enumerate} For convenience, denote $\vert\phi_{i}\brokenvert_{x_{1}=x'_{1}}(V)\vert$, the slice of $V$ corresponding to $x'_{1}$, as $y_{i,x'_{1}}$. We want to maximize \[ \sum_{x_{1}=1}^{L_{1}}y_{1,x_{1}}^{\hat{s}_{1}}\dots y_{p,x_{1}}^{\hat{s}_{p}} \] subject to \[ \sum_{x_{1}=1}^{L_{1}}y_{i,x_{1}}-M\le0\qquad\forall i\in[p]\ . \] Without loss of generality, assume all the $\hat{s}_{i}$ are positive; if $\hat{s}_{i}=0$, then we can remove $y_{i,x_{1}}$ from both the statement of the maximization problem (e.g.
by setting it to $1$ for all $x_{1}$) and from the quantities (\ref{eq:sle1}) and (\ref{eq:sge1}) without affecting the rest of the proof. Since any slack in any one of the above inequalities can be removed by increasing one of the $y_{i,x_{1}}$, and doing so will only increase the quantity we're trying to maximize, we can take these inequalities to be equalities. The Lagrangian for this problem is: \begin{eqnarray*} \mathcal{L} & = & \sum_{x_{1}=1}^{L_{1}}y_{1,x_{1}}^{\hat{s}_{1}}\dots y_{p,x_{1}}^{\hat{s}_{p}}\\ & & -\lambda_{1}\left(\sum_{x_{1}=1}^{L_{1}}y_{1,x_{1}}-M\right)\\ & & \vdots\\ & & -\lambda_{p}\left(\sum_{x_{1}=1}^{L_{1}}y_{p,x_{1}}-M\right)\ . \end{eqnarray*} Setting the gradient (with respect to both $y_{i,j}$ and $\lambda_{i}$) to $0$, and looking at the derivative with respect to $y_{i,j}$, we get: \begin{eqnarray} \hat{s}_{i}y_{1,j}^{\hat{s}_{1}}...y_{i-1,j}^{\hat{s}_{i-1}}y_{i,j}^{\hat{s}_{i}-1}y_{i+1,j}^{\hat{s}_{i+1}}...y_{p,j}^{\hat{s}_{p}} & = & \lambda_{i}\ .\label{eq:lagrangemultderiv} \end{eqnarray} These equations are invariant in $j$: that is, no matter which value $j$ we fix $x_{1}$ to, the set of equations that $y_{i,j}$ must satisfy is identical (this intuitively follows from symmetry across the slices). We may also assume $\lambda_{i}\ne0$: if it were zero, then the quantity we're trying to maximize would be zero, which clearly cannot be the case, since we can construct a tile containing only one element (i.e. with our objective being $1$) that satisfies all the constraints of the maximization problem.
In particular, $\lambda_{i}y_{i,j}/\hat{s}_{i}=y_{1,j}^{\hat{s}_{1}}...y_{p,j}^{\hat{s}_{p}}$ must remain invariant as $i$ varies (with a fixed $j$), which implies that for any $i_{1},i_{2},j$, \[ \frac{\lambda_{i_{1}}y_{i_{1},j}}{\hat{s}_{i_{1}}}=\frac{\lambda_{i_{2}}y_{i_{2},j}}{\hat{s}_{i_{2}}} \] implying that the ratio between $y_{i,j}$ for two different values of $i$ is independent of the $j$ (i.e. slice) we choose, remaining fixed at \[ \frac{y_{i_{1},j}}{y_{i_{2},j}}=\frac{\lambda_{i_{2}}\hat{s}_{i_{1}}}{\lambda_{i_{1}}\hat{s}_{i_{2}}} \] Therefore, the point we're trying to solve for satisfies this relationship: \begin{equation} y_{i,j}=\frac{\lambda_{1}\hat{s}_{i}}{\lambda_{i}\hat{s}_{1}}y_{1,j}\label{eq:yijratio} \end{equation} For any given $j$, one of two cases must hold: either $y_{i,j}=0$ for all $i$ (in which case the tile does not intersect at all with the slice $x_{1}=j$) or all $y_{i,j}$ are nonzero, and we can substitute (\ref{eq:yijratio}) into (\ref{eq:lagrangemultderiv}) to get: \begin{eqnarray*} \frac{\lambda_{i}}{\hat{s}_{i}} & = & y_{1,j}^{\hat{s}_{1}}...y_{i-1,j}^{\hat{s}_{i-1}}y_{i,j}^{\hat{s}_{i}-1}y_{i+1,j}^{\hat{s}_{i+1}}...y_{p,j}^{\hat{s}_{p}}\\ & = & \frac{\prod_{k=1}^{p}y_{k,j}^{\hat{s}_{k}}}{y_{i,j}}\\ & = & \frac{\prod_{k=1}^{p}\left(\frac{\lambda_{1}\hat{s}_{k}}{\lambda_{k}\hat{s}_{1}}y_{1,j}\right)^{\hat{s}_{k}}}{\frac{\lambda_{1}\hat{s}_{i}}{\lambda_{i}\hat{s}_{1}}y_{1,j}}\\ & = & y_{1,j}^{-1+\sum_{k}\hat{s}_{k}}\frac{\lambda_{i}\hat{s}_{1}}{\lambda_{1}\hat{s}_{i}}\prod_{k=1}^{p}\left(\frac{\lambda_{1}\hat{s}_{k}}{\lambda_{k}\hat{s}_{1}}\right)^{\hat{s}_{k}}\\ & = & y_{1,j}^{-1+\sum_{k}\hat{s}_{k}}\frac{\lambda_{i}\hat{s}_{1}}{\lambda_{1}\hat{s}_{i}}\left(\frac{\lambda_{1}}{\hat{s}_{1}}\right)^{\sum_{k}\hat{s}_{k}}\prod_{k=1}^{p}\left(\frac{\hat{s}_{k}}{\lambda_{k}}\right)^{\hat{s}_{k}} \end{eqnarray*} Canceling $\frac{\lambda_{i}}{\hat{s}_{i}}$ from both sides, and moving the first term in the last expression over to the 
left, we get \begin{eqnarray*} y_{1,j}^{1-\sum_{k}\hat{s}_{k}} & = & \left(\frac{\lambda_{1}}{\hat{s}_{1}}\right)^{\sum_{k}\hat{s}_{k}-1}\prod_{k=1}^{p}\left(\frac{\hat{s}_{k}}{\lambda_{k}}\right)^{\hat{s}_{k}}\ . \end{eqnarray*} We may assume that $1-\sum_{k=1}^{p}\hat{s}_{k}$ is nonzero, as the case when it is zero is covered by case (3) above. Therefore, since the right hand side is independent of $j$, it follows that all nonzero values of $y_{1,j}$ are equal. Since $y_{1,j}$ determines the value of $y_{i,j}$ for all $i$ via (\ref{eq:yijratio}), it follows that each $y_{i,j}$ must either be (a) equal to some nonzero constant independent of $j$ or (b) equal to zero, which happens if and only if all $y_{i',j}$ for the same $j$ are also zero. Let the number of $j$ such that $y_{1,j}\ne0$ be $\vartheta$, which must fall between $1$ and $L_{1}$ inclusive (since the number of slices is at most equal to the loop bound corresponding to the dimension we're summing over). Therefore, in order to satisfy (a), the nonzero $y_{i,j}$ must be equal to \[ y_{i,j}=\frac{M}{\vartheta}\ . \] Substituting this into (\ref{eq:LagrangeSingleVarObjective}), we get that the maximum tile size is: \[ M^{\sum_{i\in[p+1,n]}\hat{s}_{i}}\vartheta\prod_{i=1}^{p}\left(\frac{M}{\vartheta}\right)^{\hat{s}_{i}}=M^{\sum_{i=1}^{n}\hat{s}_{i}}\vartheta^{1-\sum_{i=1}^{p}\hat{s}_{i}} \] so the log (base $M$) of the tile size is: \[ \sum_{i=1}^{n}\hat{s}_{i}+\left(\log_{M}\vartheta\right)\left(1-\sum_{i=1}^{p}\hat{s}_{i}\right)\ . \] Therefore, either $1-\sum_{i=1}^{p}\hat{s}_{i}$ is positive, in which case the maximum occurs when we set $\vartheta$ to $L_{1}$, giving (recall that $\beta_{1}=\log_{M}L_{1}$): \[ \sum_{i=1}^{n}\hat{s}_{i}+\beta_{1}\left(1-\sum_{i=1}^{p}\hat{s}_{i}\right)\ , \] or $1-\sum_{i=1}^{p}\hat{s}_{i}$ is negative, in which case the maximum occurs at $\vartheta=1$ and we get \[ \sum_{i=1}^{n}\hat{s}_{i}\ , \] as desired.
\end{proof} \section{Tiling construction} \label{sec:Tiling-construction} In this section, we describe an explicit construction of a tiling that achieves the upper bound on tile size (and therefore achieves the lower bound on computation) from Section \ref{sec:The-Lower-Bound}. The tiling is given by a linear program: we start with (\ref{eq:largealglp}) and add constraints requiring that the blocks be no larger than the loop bounds (in log-space, $\lambda_{i}\le\beta_{i}$): \begin{eqnarray} \max\sum_{i\in[d]}\lambda_{i}\st\label{eq:algfull}\\ \sum_{i\text{ s.t. }x_{i}\in\supp(\phi_{j})}\lambda_{i} & \le & 1\qquad\forall j\in[n]\nonumber \\ \lambda_{i} & \le & \beta_{i}\qquad\forall i\in[q]\nonumber \\ \lambda_{i} & \ge & 0\qquad\forall i\in[d]\nonumber \end{eqnarray} \begin{thm} The rectangular tile with dimensions given by the solution to (\ref{eq:algfull}) has cardinality equal to one of the upper bounds for tile size from Section \ref{sec:The-Lower-Bound} for a loop program defined by the $\phi_{j}$; in other words, the solution to (\ref{eq:algfull}) equals \begin{equation} \sum_{i=1}^{n}\hat{s}_{\mathscr{Q},i}+\sum_{j\in\mathscr{Q}\st\sum_{i\in R_{j}}\hat{s}_{\mathscr{Q},i}\le1}\left[\beta_{j}\left(1-\sum_{i\in R_{j}}\hat{s}_{\mathscr{Q},i}\right)\right]\label{eq:s5hblobj} \end{equation} for some $\mathscr{Q}\subseteq[q]$, where $\hat{s}_{\mathscr{Q},i}$ satisfies the constraints of the HBL LP (\ref{eq:hbl_largeindex_matrix_constraints}) with the rows indexed by elements of $\mathscr{Q}$ removed:\begin{equation} \label{eq:hblconstraintst3} \begin{bmatrix} \multicolumn{3}{c}{\text{remove rows in \ensuremath{\mathscr{Q}}}}\\ \vert & & \vert\\ \phi_{1} & \cdots & \phi_{n}\\ \vert & & \vert \end{bmatrix}\begin{bmatrix}\hat{s}_{1}\\ \vdots\\ \hat{s}_{n} \end{bmatrix}\ge\begin{bmatrix}1\\ \vdots\\ 1 \end{bmatrix} \end{equation} \end{thm} Let us write the constraints of (\ref{eq:algfull}) in the following fashion: \begin{equation}
\label{eq:algconstraints}\begin{tabular}{l} $\lefteqn{\phantom{\begin{matrix} \phi_1 \\ \vdots \\\phi_n \ \end{matrix}}}$\\ $\left.\lefteqn{\phantom{\begin{matrix} b_0\\ a\\ \ddots\\ b_0\ \end{matrix}}}q\right\{$ \end{tabular} \begin{bmatrix} & - & & \phi_{1} & & -\\ & & & \vdots\\ & - & & \phi_{n} & & -\\ 1 & 0 & \cdots & 0 & 0 & \cdots & 0\\ 0 & 1 & \cdots & 0 & 0 & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \mathclap{\hspace{-1em}\underbrace{\makebox[7em]{$\hspace{1em}\cdots$}}_{q}} & 1 & 0 & \cdots & 0 \end{bmatrix} \begin{bmatrix}\lambda_{1}\\ \vdots\\ \lambda_{d} \end{bmatrix}\le\begin{bmatrix}1\\ \vdots\\ 1\\ \beta_{1}\\ \vdots\\ \beta_{q} \end{bmatrix} \end{equation} The dual of this linear program, with variables $\zeta_{1},...,\zeta_{q},s_{1},...,s_{n}$, is to minimize \begin{equation} \sum_{i\in[q]}\beta_{i}\zeta_{i}+\sum_{j=1}^{n}s_{j}\label{eq:algdual_obj} \end{equation} subject to \begin{equation} \label{eq:algdualconstraints} \begin{tabular}{l} $\left.\lefteqn{\phantom{\begin{matrix} 1 \\ \vdots \\0 \end{matrix}}}q\right\{$\\ $\lefteqn{\phantom{\begin{matrix} \vdots \\ 0 \end{matrix}}}$ \end{tabular} \begin{bmatrix}1 & \cdots & 0\\ \vdots & \ddots & \vdots & \vert & & \vert\\ 0 & \cdots & 1 & \phi_{1} & \cdots & \phi_{n}\\ \vdots & & \vdots & \vert & & \vert\\ 0 & \cdots & 0 \end{bmatrix}\begin{bmatrix}\zeta_{1}\\ \vdots\\ \zeta_{q}\\ s_{1}\\ \vdots\\ s_{n} \end{bmatrix}\ge\begin{bmatrix}1\\ \vdots\\ 1 \end{bmatrix} \end{equation} (as well as the nonnegativity constraints $\zeta_{i}\ge0$ for all $i\in[q]$ and $s_{i}\ge0$ for all $i\in[n]$, which we omit from the matrix for brevity). We now show that the optimal value of (\ref{eq:algdual_obj}) is equivalent to (\ref{eq:s5hblobj}) for some $\hat{s}_{i}$ satisfying (\ref{eq:hblconstraintst3}). \begin{proof} By induction on $q$. For the base case, suppose $q=0$; this is precisely the case handled in Section \ref{sec:allarge}.
Suppose for induction that the solution to \begin{eqnarray} \max\sum_{i}\lambda_{i}\st\label{eq:algfull-inductive}\\ \sum_{i\text{ s.t. }x_{i}\in\supp(\phi_{j})}\lambda_{i} & \le & 1\qquad\forall j\in[n]\nonumber \\ \lambda_{i} & \le & \beta_{i}\qquad\forall i\in[q-1]\nonumber \end{eqnarray} takes the form \[ \sum_{i=1}^{n}\hat{s}_{\mathscr{Q},i}+\sum_{j\in\mathscr{Q}\st\sum_{i\in R_{j}}\hat{s}_{\mathscr{Q},i}\le1}\left[\beta_{j}\left(1-\sum_{i\in R_{j}}\hat{s}_{\mathscr{Q},i}\right)\right] \] for some $\mathscr{Q}\subseteq[q-1]$ and $\hat{s}_{\mathscr{Q},i}$ satisfying (\ref{eq:hblconstraintst3}). Consider the LP: minimize (\ref{eq:algdual_obj}) subject to (\ref{eq:algdualconstraints}). Denote its solution by $\zeta'_{i}$, $s'_{i}$; we wish to determine the minimum value of the objective (\ref{eq:algdual_obj}). We will rewrite the LP (\ref{eq:algdualconstraints}) in a way that preserves the optimal value of the objective. First, we remove one variable, say $\zeta_{q}$, from it. Since there is no benefit to setting $\zeta_{q}$ any larger than necessary (it increases the objective (\ref{eq:algdual_obj}), and appears in no other constraint), we can fix its value so as to ensure that either the $q$th constraint or the nonnegativity constraint $\zeta_{q}\ge0$ is tight. We have two cases: Case 1: $\sum_{i\in R_{q}}s'_{i}\ge1$. In this case, the $q$th constraint is satisfied at the optimal point regardless of the value of $\zeta'_{q}$, so we may set $\zeta_{q}$ to $0$. Now, the objective (\ref{eq:algdual_obj}) becomes: \[ \sum_{i=1}^{q-1}\beta_{i}\zeta_{i}+\sum_{j=1}^{n}s_{j} \] Since the $q$th constraint is the only one containing $\zeta_{q}$, we can delete the $q$th column in the left block of the constraint matrix (\ref{eq:algdualconstraints}) and remove $\zeta_{q}$ from the LP entirely.
The resulting LP is therefore exactly the dual of (\ref{eq:algfull-inductive}), which, by the inductive hypothesis, has an optimal objective value of the form: \[ \sum_{i=1}^{n}\hat{s}_{\mathscr{Q},i}+\sum_{j\in\mathscr{Q}\st\sum_{i\in R_{j}}\hat{s}_{\mathscr{Q},i}\le1}\left[\beta_{j}\left(1-\sum_{i\in R_{j}}\hat{s}_{\mathscr{Q},i}\right)\right] \] for $\mathscr{Q}\subseteq[q-1]\subset[q]$, and $\hat{s}_{\mathscr{Q},i}$ satisfying (\ref{eq:hblconstraintst3}), as desired. Case 2: $\sum_{i\in R_{q}}s'_{i}<1$. Without loss of generality, assume this holds for $R_{1}$ through $R_{q-1}$ as well (if not, find $j$ such that $\sum_{i\in R_{j}}s'_{i}\ge1$, permute the LP to swap the positions of $\zeta_{j}$ and $\zeta_{q}$, and proceed to case 1). Therefore, we may modify the LP by setting $\zeta_{1}$ to $1-\sum_{i\in R_{1}}s_{i}$ to keep the first constraint tight, and do the same with $\zeta_{2}$ through $\zeta_{q}$; this does not change the optimal objective value. Removing those constraints (since they have all been encoded into the objective), we get a new objective to replace (\ref{eq:algdual_obj}) in our linear program: \[ \min\sum_{i=1}^{n}s_{i}+\sum_{j=1}^{q}\left[\beta_{j}\left(1-\sum_{i\in R_{j}}s_{i}\right)\right] \] Furthermore, since $\sum_{i\in R_{j}}s'_{i}<1$ for all $j\in[q]$, this objective at its optimizer $s_{1}',...,s_{n}'$ is precisely equal to \[ \sum_{i=1}^{n}s'_{i}+\sum_{j\in[q]\st\sum_{i\in R_{j}}s'_{i}\le1}\left[\beta_{j}\left(1-\sum_{i\in R_{j}}s'_{i}\right)\right] \] which is of the same form as (\ref{eq:s5hblobj}). Furthermore, we may remove the first $q$ constraints from (\ref{eq:algdualconstraints}), since our choices for the values of $\zeta_{1},...,\zeta_{q}$ guarantee that they will be satisfied. The resulting constraint matrix is identical to (\ref{eq:hblconstraintst3}). Therefore, the tile whose dimensions are given by (\ref{eq:algfull}) attains the upper bound on tile size given by Lemma \ref{lem:slicing} with $\mathscr{Q}=[q]$, as desired. \end{proof}
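As a concrete illustration of (\ref{eq:algfull}), consider the familiar three-nested-loop matrix multiplication $C[i,j]\mathrel{+}=A[i,k]\cdot B[k,j]$, whose three subscript maps have supports $\{i,j\}$, $\{i,k\}$, and $\{k,j\}$. When all loop bounds are large (so the $\lambda_{i}\le\beta_{i}$ constraints are slack), the LP optimum is $3/2$, matching the well-known maximal tile size of $M^{3/2}$ for matrix multiply. The following brute-force check over a rational grid is our own sketch and is not part of the formal development:

```python
# Brute-force the tiling LP for three-nested-loop matrix multiplication.
# Variables (lam_i, lam_j, lam_k) range over the grid {0, 1/D, ..., 1};
# the support constraints are lam_i + lam_j <= 1 (for C),
# lam_i + lam_k <= 1 (for A), and lam_k + lam_j <= 1 (for B).
# Loop bounds are assumed large, so the lam <= beta constraints are inactive.
D = 20  # grid denominator
best = 0
for a in range(D + 1):
    for b in range(D + 1):
        for c in range(D + 1):
            if a + b <= D and a + c <= D and c + b <= D:
                best = max(best, a + b + c)

optimum = best / D  # attained at lam_i = lam_j = lam_k = 1/2
assert optimum == 1.5  # log_M of the maximal tile size M^(3/2)
```

In practice one would use an off-the-shelf LP solver; the grid search only serves to make the optimum easy to verify by hand.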
\section{Introduction} \subsection{Reduced Words and Standard Tableaux} Fix the \defn{symmetric group} $\mathfrak{S}_n$ and its generating set of \defn{simple transpositions} $S := \{s_i \mid 1\le i <n \}$. The simple transpositions $s_i := (i,i+1)$ satisfy the quadratic relations $s_i^2 = 1$, the \defn{commutations} $s_i s_j = s_j s_i$ for $|i-j|>1$, and the \defn{braid moves} \[ s_i s_{i+1} s_i = s_{i+1} s_i s_{i+1} \quad \text{for $1\le i\le n-2$.} \] The \defn{length} $\ell(w)$ of an element $w \in \mathfrak{S}_n$ is the smallest nonnegative integer $\ell$ for which there exists an expression $w = s_{i_1}s_{i_2}\cdots s_{i_\ell}$. The symmetric group $\mathfrak{S}_n$ has a \defn{longest element} $w_0$, whose length is $\ell(w_0) = N := \frac{n(n-1)}{2}.$ A \defn{reduced word} for $w$ is an expression $\mathbf{w} = s_{i_1}s_{i_2}\cdots s_{i_{\ell(w)}}.$ By Matsumoto's theorem, the set of reduced words for $w$ forms a connected graph ${{\sf Red}}(w)$ with edges given by commutation and braid moves. Figure~\ref{fig:matsumoto_graph_S4} illustrates the graph ${{\sf Red}}(w_0)$ for $\mathfrak{S}_4$. \begin{figure}[t] \begin{center} \includegraphics[height=3in]{a3w0reducedwordgraph} \end{center} \caption{The 16 reduced words in ${{\sf Red}}(w_0)$ for $\mathfrak{S}_4$. Solid lines denote braid relations, while dotted lines indicate commutation relations.} \label{fig:matsumoto_graph_S4} \end{figure} It is natural to ask how many edges in this graph correspond to braid moves. In~\cite{reiner.2005}, V.~Reiner proved the following striking theorem, relating the number of such edges to the number of vertices. \begin{theorem}[V. Reiner~\cite{reiner.2005}] The expected number of braid moves for a reduced word for $w_0 \in \mathfrak{S}_n$ is one. \label{thm:braid_moves_reiner} \end{theorem} In other words, there are $\frac{1}{2}|{{\sf Red}}(w_0)|$ edges that correspond to braid moves in the graph ${{\sf Red}}(w_0)$.
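Theorem~\ref{thm:braid_moves_reiner} is easy to verify computationally for small $n$. The following Python sketch (our own; not part of the original argument) generates ${{\sf Red}}(w_0)$ for $\mathfrak{S}_4$ by closing a starting reduced word under commutation and braid moves, then counts the positions at which a braid move applies:

```python
def commutation_neighbors(w):
    """Words obtained from w by one commutation move: swap adjacent
    letters a, b with |a - b| > 1 (the generators s_a, s_b commute)."""
    for p in range(len(w) - 1):
        a, b = w[p], w[p + 1]
        if abs(a - b) > 1:
            yield w[:p] + (b, a) + w[p + 2:]

def braid_positions(w):
    """Positions where a braid move s_a s_b s_a -> s_b s_a s_b (|a-b| = 1) applies."""
    return [p for p in range(len(w) - 2)
            if w[p] == w[p + 2] and abs(w[p] - w[p + 1]) == 1]

def apply_braid(w, p):
    a, b = w[p], w[p + 1]
    return w[:p] + (b, a, b) + w[p + 3:]

# Close the staircase word (s1 s2 s3)(s1 s2)(s1) for w0 in S_4 under both moves.
start = (1, 2, 3, 1, 2, 1)
red = {start}
stack = [start]
while stack:
    w = stack.pop()
    for v in list(commutation_neighbors(w)) + [apply_braid(w, p) for p in braid_positions(w)]:
        if v not in red:
            red.add(v)
            stack.append(v)

assert len(red) == 16  # |Red(w0)| for S_4, as in Figure 1
# Reiner's theorem: on average, one braid move per reduced word.
assert sum(len(braid_positions(w)) for w in red) == len(red)
```

Since each braid edge is counted once from each of its two endpoints, the final assertion is equivalent to the statement that exactly $\frac{1}{2}|{{\sf Red}}(w_0)| = 8$ of the edges are braid edges.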
V.~Reiner's proof relies on P.~Edelman and C.~Greene's equivariant bijection~\cite{edelmann.greene.1987} between reduced words for $w_0$ under the action \[ s_{i_N}\cdots s_{i_2} s_{i_1} \mapsto s_{n-i_1} s_{i_N}\cdots s_{i_2} \] and standard Young tableaux (SYT) of staircase shape $(n-1,n-2,\ldots,1)$ under promotion. Briefly, he rotates the desired braid move to the beginning of the reduced word, so that under the bijection to SYT the braid move is sent to a \defn{standard braid hook}---three cells arranged in the shape $(2,1)$, touching the diagonal, and labeled by consecutive numbers $i-1,i,i+1$. By excising these three cells, it is possible to compute the desired quantity as an explicit summation of a quotient of hook-length formulas. \subsection{Commutation Classes and Right-Justified Tableaux} Given a reduced word $\mathbf{w}$ for $w\in \mathfrak{S}_n$, we can form the subgraph ${{\sf Red}}(\mathbf{w})$ of ${{\sf Red}}(w)$ containing $\mathbf{w}$ and all reduced words connected to $\mathbf{w}$ using only commutations; this is called the \defn{commutation class} of $\mathbf{w}.$ We may now ask for the number of edges emanating from this subgraph (which, by construction, necessarily correspond to braid moves). Figure~\ref{fig:matsumoto_graph_commutation_class_S5} illustrates an example of such a subgraph for $\mathfrak{S}_5$. In general, it is unreasonable to expect as tidy an answer as the one given in Theorem~\ref{thm:braid_moves_reiner}. In fact, the expected number of braid moves is not equal to one on arbitrary commutation classes (this is already evident in Figure~\ref{fig:matsumoto_graph_S4}). However, there is a special commutation class where this is true, as stated in the following attractive specialization of our main result. 
\begin{figure}[htbp] \begin{center} \includegraphics[width=\textwidth]{a4w0reducedwordgraphcommutation} \end{center} \begin{comment} \begin{tikzpicture}[scale=1] \tikzstyle{rect}=[rectangle,draw,opacity=.5,fill opacity=1] \node[rect] (1) at (4,1) {1234123121}; \node[rect] (2) at (3,1) {1231423121}; \node[rect] (3) at (4,2) {1234121321}; \node[rect] (4) at (2,1) {1231243121}; \node[rect] (5) at ( 1,2) {1213423121}; \node[rect] (6) at (2.5,0) {1231421321}; \node[rect] (7) at (2.5,1.5) {1213243121}; \node[rect] (8) at (1.5,0) {1213241321}; \node[rect] (9) at ( 1,1) {1231241321}; \node[rect] (10) at (1.5,1.5) {1231214321}; \node[rect] (11) at (0,1) {1213214321}; \node[rect] (12) at (2,1) {1213421321}; \draw (1) to (2); \draw (1) to (3); \draw (2) to (4); \draw (2) to (5); \draw (2) to (6); \draw (3) to (6); \draw (4) to (9); \draw (5) to (12); \draw (6) to (12); \draw (7) to (4); \draw (7) to (8); \draw (7) to (5); \draw (8) to (11); \draw (8) to (12); \draw (9) to (8); \draw (9) to (10); \draw (9) to (6); \draw (10) to (11); \end{tikzpicture} \end{comment} \caption{The 12 reduced words in the commutation class ${{\sf Red}}(\mathbf{w_0})$ for $\mathbf{w_0} = (s_1s_2s_3s_4)(s_1s_2s_3)(s_1s_2)(s_1).$ Solid lines denote braid relations leaving the commutation class.} \label{fig:matsumoto_graph_commutation_class_S5} \end{figure} \begin{theorem} The expected number of braid moves for a reduced word in the commutation class of the word $\mathbf{w_0} := (s_1 s_2 \cdots s_{n-1})(s_1 s_2 \cdots s_{n-2})\cdots(s_1 s_2)(s_1)$ in $\mathfrak{S}_n$ is one. \label{thm:braid_moves_commutation_class} \end{theorem} We prove Theorem~\ref{thm:braid_moves_commutation_class} by providing a bijection from ${{\sf Red}}(\mathbf{w_0})$ to the set of all braid moves in elements of ${{\sf Red}}(\mathbf{w_0})$. 
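For $n=5$ the theorem can be checked directly: the sketch below (ours, not from the paper) closes $\mathbf{w_0}$ under commutation moves only, recovering the 12 words of Figure~\ref{fig:matsumoto_graph_commutation_class_S5}, and counts the braid moves available from the class:

```python
def commutation_neighbors(w):
    """One commutation move: swap adjacent letters a, b with |a - b| > 1."""
    for p in range(len(w) - 1):
        a, b = w[p], w[p + 1]
        if abs(a - b) > 1:
            yield w[:p] + (b, a) + w[p + 2:]

def braid_positions(w):
    """Positions carrying a factor s_a s_b s_a with |a - b| = 1."""
    return [p for p in range(len(w) - 2)
            if w[p] == w[p + 2] and abs(w[p] - w[p + 1]) == 1]

w0 = (1, 2, 3, 4, 1, 2, 3, 1, 2, 1)  # (s1 s2 s3 s4)(s1 s2 s3)(s1 s2)(s1)
cls = {w0}
stack = [w0]
while stack:
    w = stack.pop()
    for v in commutation_neighbors(w):
        if v not in cls:
            cls.add(v)
            stack.append(v)

assert len(cls) == 12  # the commutation class of Figure 2
# Expected number of braid moves over the class is one:
assert sum(len(braid_positions(w)) for w in cls) == len(cls)
```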
In a similar spirit to V.~Reiner's translation of Theorem~\ref{thm:braid_moves_reiner} to a statement on standard tableaux, in Section~\ref{section.heaps} we use X.~Viennot's theory of heaps~\cite{viennot.1986} to rephrase Theorem~\ref{thm:braid_moves_commutation_class} as a statement on \defn{shifted tableaux}. This bijection is illustrated in Figure~\ref{fig:matsumoto_graph_shifted_tableaux_S5}. We define a \defn{braid hook} to be a collection of three boundary cells arranged in the shifted shape $(2,1)$, labeled by consecutive numbers $i-1,i,i+1$ (see Definition~\ref{definition.braid hook}). In this language, Theorem~\ref{thm:braid_moves_commutation_class} becomes the following statement. \begin{theorem} The expected number of braid hooks in a shifted SYT of staircase shape is one. \label{thm:braid_hooks} \end{theorem} \begin{figure}[htbp] \begin{center} \includegraphics[width=\textwidth]{a4w0tableauxgraphcommutation} \end{center} \caption{The 12 shifted SYT of staircase shape $(4,3,2,1)$. Compare with Figure~\ref{fig:matsumoto_graph_commutation_class_S5}.} \label{fig:matsumoto_graph_shifted_tableaux_S5} \end{figure} In fact, we conclude Theorem~\ref{thm:braid_hooks} as a corollary of the much more general Theorem~\ref{thm:braid_moves_young_tableaux}, which applies to a certain class of \defn{right-justified} tableaux that contain the shifted staircases as a special case. \subsection{Half-Right-Justified Tableaux} Recall that the \defn{hyperoctahedral group} $B_n$ is the group generated by $\{s_i\}_{i=1}^{n-1}$ (where now $s_i = (i,i+1)(-i,-i-1)$), along with the generator $s_0 := (1,-1)$. In addition to commutations and braid moves, the hyperoctahedral group also satisfies the \defn{long braid move} \[ s_0 s_1 s_0 s_1 = s_1 s_0 s_1 s_0\;. \] M.~Haiman~\cite{haiman.1992} proved that reduced words for the longest element $w_0$ in type $B_n$ are equinumerous with SYT of shifted trapezoidal shape. 
W.~Kraskiewicz~\cite{kraskiewicz.1989} gave an explicit insertion procedure, which was used by S.~Billey and T.K.~Lam~\cite{billey_lam.1998} to give an interpretation in terms of pattern avoidance and a link to Stanley symmetric functions. Similarly to the case of $\mathfrak{S}_n$ and SYT of staircase shape, promotion on shifted trapezoids corresponds to the action \[ s_{i_1}s_{i_2}\cdots s_{i_N} \mapsto s_{i_2}\cdots s_{i_N}s_{i_1} \] on ${{\sf Red}}(w_0)$, where $N$ is now the length of the longest element $w_0$ in type $B_n$. Using this technology and a similar method to that in~\cite{reiner.2005}, B.~Tenner~\cite{tenner.2007} proved a type $B$ analogue of Theorem~\ref{thm:braid_moves_reiner}. \begin{theorem}[B.~Tenner~\cite{tenner.2007}] The expected number of braid moves in ${{\sf Red}}(w_0)$ in type $B_n$ is $2-4/n$. The expected number of long braid moves is $\frac{2}{n^2-2}$. \label{thm:tenner} \end{theorem} By considering \defn{half-right-justified tableaux}, which are certain tableaux that can be paired with themselves to produce right-justified tableaux (see Figure~\ref{figure.half right}) and which include shifted trapezoidal shapes, we provide a complementary result to Theorem~\ref{thm:tenner} in Section~\ref{sec:half_right_justified}. \begin{theorem} The expected number of braid hooks in a shifted SYT of trapezoidal shape is one half. \label{thm:braid_moves_half_young_tableaux} \end{theorem} \subsection{Homomesy} In Section~\ref{sec:even_odd_homomesy}, we reprove and refine Theorem~\ref{thm:braid_hooks} and its generalization Theorem~\ref{thm:braid_moves_young_tableaux} using \defn{homomesy}. Homomesy was introduced by Panyushev~\cite{panyushev.2009} and later by Propp and Roby~\cite{propp.roby.2013}. It involves partitioning the underlying set into orbits under some group action, and proving that the average of the statistic is the same on every orbit. Formally, let $S$ be a set, $s$ a statistic on $S$, and $G$ a group acting on $S$.
Then $s$ is \defn{homomesic} with respect to the action of $G$ if the average of $s$ over each orbit is the same for all orbits. In our case, $G$ is the dihedral group generated by a ``bipartite'' version of promotion, namely the odd and even operators $\tau_o$ and $\tau_e$. \begin{theorem} \label{thm:braid_hooks_homomesy_even_odd} The number of braid hooks is homomesic with respect to the action of the group $\langle \tau_o,\tau_e\rangle$ on shifted SYT of staircase shape. \end{theorem} This statement admits the same generalization, stated in Theorem~\ref{theorem.homomesy}, to right-justified tableaux as in Theorem~\ref{thm:braid_moves_young_tableaux}. Section~\ref{sec:even_odd_homomesy} provides a self-contained bijective proof of Theorem~\ref{theorem.homomesy words} (which is a reformulation of Theorem~\ref{theorem.homomesy}) in terms of reduced words. It turns out that in general the number of braid hooks is \emph{not} homomesic with respect to the abelian subgroups of our dihedral group $\langle \tau_o, \tau_e \rangle$. Hence Theorem~\ref{thm:braid_hooks_homomesy_even_odd} provides an example of homomesy with respect to a nonabelian group that is not implied by a homomesy of an abelian subgroup (see~\cite[Section~2]{roby.2015} for a discussion about this). In Section~\ref{subsection.homomesy poset} we give a homomesy result for more general posets, where the statistic is given by descents. \section*{Acknowledgements} This project began in March 2015 at the workshop ``Dynamical algebraic combinatorics'' at the American Institute of Mathematics (AIM). We are indebted to Z.~Hamaker and V.~Reiner, who were part of our working group at AIM, for many invaluable discussions and suggestions, and to H.~Thomas for his suggestions regarding Section~\ref{subsection.homomesy poset}. We thank AIM for financial support and a stimulating environment for collaboration, and the other organizers J.~Propp, T.~Roby, and J.~Striker for helping to organize the event.
The last author would like to thank G.~Panova for useful conversations. AS is partially supported by NSF grants OCI--1147247 and DMS--1500050. \section{Reduced words and Heaps} \label{section.heaps} In this section, we explain the bijection between reduced words in the commutation class of $\mathbf{w_0} := (s_1 s_2 \cdots s_{n-1})(s_1 s_2 \cdots s_{n-2})\cdots(s_1 s_2)(s_1)$ and shifted standard staircase tableaux. This uses X.~Viennot's \defn{heap model}~\cite{viennot.1986} to construct a poset whose linear extensions are in bijection with the reduced words in the commutation class. The linear extensions of the poset can then be interpreted as tableaux. To construct the poset for a reduced word $\mathbf{w} = s_{i_\ell} \cdots s_{i_1}$ of $w\in \mathfrak{S}_n$, associate a column to each simple transposition $s_i$ ($1\le i<n$) of $\mathfrak{S}_n$. We order the columns from left to right with increasing $i$, so that the column for $s_i$ is adjacent to the columns of $s_{i-1}$ and $s_{i+1}$ (whenever they exist). Starting with the rightmost generator $s_{i_1}$ in $\mathbf{w}$ and moving left generator by generator in $\mathbf{w}$, successively drop a ``heap'' in column $i$ for each $s_i$ encountered. These heaps are wide enough that two heaps in adjacent columns overlap. Note that a heap gets stuck above another heap when the two heaps are in adjacent columns, which happens precisely when the corresponding simple transpositions do not commute. The vertices of the poset $P_{\mathbf{w}}$ are precisely the heaps, and the covering relations are given by $v_2 \lessdot v_1$ if and only if $v_2$ is the lowest vertex above $v_1$ in a column adjacent to $v_1$. \begin{example} Figure~\ref{figure.heaps} shows the construction of the poset $P_{\mathbf{w}}$ and its linear extension for three reduced words in ${{\sf Red}}(w_0)$ in $\mathfrak{S}_5$.
The first word has no particular significance, the second word is $\mathbf{w_0}$, and the third one is in the commutation class ${{\sf Red}}(\mathbf{w_0})$. \end{example} \begin{comment} convention='French' suffix = "fr-" if convention == 'French' else '' word = [3, 2, 3, 1, 4, 3, 2, 3, 1, 4] name = 'Fig/viennot-random-'+suffix plot_reduced_word(word, convention=convention, lines=True, angle=90).save(name+'columns.pdf') plot_reduced_word(word, convention=convention, lines=True, angle=90, labels="pos", box="diamond").save(name+'boxes.pdf') t=.6; i=3 plot_reduced_word(word, convention=convention, lines=True, labels="pos", box="diamond", angle=90, pack=(t,i)).save(name+'falling.pdf') plot_reduced_word(word, convention=convention, lines=True, labels="pos", box="diamond", angle=90, pack=1).save(name+'heap.pdf') plot_reduced_word(word, convention=convention, labels="pos", box="diamond", angle=45, pack=1).save(name+'tableau.pdf') word = 1,2,3,4,1,2,3,1,2,1 name = 'Fig/viennot-min-'+suffix plot_reduced_word(word, convention=convention, lines=True, angle=90).save(name+'columns.pdf') plot_reduced_word(word, convention=convention, lines=True, angle=90, labels="pos", box="diamond").save(name+'boxes.pdf') t=.6; i=3 plot_reduced_word(word, convention=convention, lines=True, labels="pos", box="diamond", angle=90, pack=(t,i)).save(name+'falling.pdf') plot_reduced_word(word, convention=convention, lines=True, labels="pos", box="diamond", angle=90, pack=1).save(name+'heap.pdf') plot_reduced_word(word, convention=convention, labels="pos", box="diamond", angle=45, pack=1).save(name+'tableau.pdf') word = [1,2,1,3,4,2,3,1,2,1] name = 'Fig/viennot-shifted-'+suffix plot_reduced_word(word, convention=convention, lines=True, angle=90).save(name+'columns.pdf') plot_reduced_word(word, convention=convention, lines=True, angle=90, labels="pos", box="diamond").save(name+'boxes.pdf') t=.6; i=3 plot_reduced_word(word, convention=convention, lines=True, labels="pos", box="diamond", angle=90, 
pack=(t,i)).save(name+'falling.pdf') plot_reduced_word(word, convention=convention, lines=True, labels="pos", box="diamond", angle=90, pack=1).save(name+'heap.pdf') plot_reduced_word(word, convention=convention, labels="pos", box="diamond", angle=45, pack=1).save(name+'tableau.pdf') \end{comment} \begin{figure}[t] \newcommand{\fig}[1]{\includegraphics[height=3.5cm,trim=2cm 0cm 2cm 0cm,clip]{#1}} \begin{center} \fig{viennot-random-columns} \fig{viennot-random-boxes} \fig{viennot-random-falling} \fig{viennot-random-heap} \phantom{\fig{viennot-random-tableau}} \\ \fig{viennot-min-columns} \fig{viennot-min-boxes} \fig{viennot-min-falling} \fig{viennot-min-heap} \fig{viennot-min-tableau} \\ \fig{viennot-shifted-columns} \fig{viennot-shifted-boxes} \fig{viennot-shifted-falling} \fig{viennot-shifted-heap} \fig{viennot-shifted-tableau} \end{center} \caption{Incrementally building the heap posets (and shifted staircase tableaux when relevant) for the reduced words $s_3 s_2 s_3 s_1 s_4 s_3 s_2 s_3 s_1 s_4$, $(s_1 s_2 s_3 s_4)(s_1 s_2 s_3)(s_1 s_2)(s_1)$, and $s_1 s_2 s_1 s_3 s_4 s_2 s_3 s_1 s_2 s_1$ of $w_0$ in $\mathfrak{S}_5$. \label{figure.heaps}} \end{figure} As suggested by the above example, any reduced word in the same commutation class as $\mathbf{w}$ yields the same poset $P_{\mathbf{w}}$. In fact, keeping track of the order in which each heap (or vertex) is added gives a linear extension of this poset; it is not hard to see that the elements of ${{\sf Red}}(\mathbf{w})$ are in bijection with such linear extensions. Let $\mathbf{w}$ be any reduced word in the commutation class of $\mathbf{w_0}$. The poset $P_{\mathbf{w}}$ has NE-diagonals of sizes $\Delta_n:=(n,n-1,\ldots,1)$. Rotating this poset (resp.
linear extension of the poset) counterclockwise by $45^\circ$ yields a \defn{shifted staircase partition} (resp. \defn{standard shifted staircase tableau}). A shifted staircase tableau is characterized as increasing along rows from left to right and increasing along columns from top to bottom. We denote the set of all standard shifted staircase tableaux of shape $\Delta_n$ by ${{\sf ShSYT}}(\Delta_n)$. From the bijection \[ \nu \colon {{\sf Red}}(\mathbf{w}_0) \to {{\sf ShSYT}}(\Delta_n) \] we obtain the following result. \begin{lemma} \label{lemma.up down braid} The only possible braid moves in elements of ${{\sf Red}}(\mathbf{w}_0)$ are those of the form $s_1 s_2 s_1$. \end{lemma} \begin{proof} Observe that under the bijection $\nu$, braid moves $s_i s_{i+1} s_i$ and $s_{i+1} s_i s_{i+1}$ in a reduced word would result in hooks in the corresponding tableau of the form \begin{equation} \label{equation.hooks} \scalebox{.7}{\tableau[mbY]{k-1 &k\\ \bl &k+1}} \qquad \text{and} \qquad \scalebox{.7}{\tableau[mbY]{k-1& \bl\\ k&k+1}} \end{equation} respectively. Note that the first hook can sit on the diagonal, whereas the second hook has to appear inside the tableau. If the hook appears inside the tableau, there is a letter $a$ in \[ \scalebox{.7}{\tableau[mbY]{k-1 &k\\ a &k+1}} \qquad \text{or} \qquad \scalebox{.7}{\tableau[mbY]{k-1& a\\ k&k+1}} \] such that $k-1<a<k+1$ by the tableau conditions. This implies that $a=k$, which contradicts the fact that the tableau is standard and $k$ already appears. Hence the only possibility is for the first hook in~\eqref{equation.hooks} to appear on the diagonal. Under the bijection $\nu$ this corresponds precisely to a braid move $s_1 s_2 s_1$. \end{proof} \begin{definition} \label{definition.braid hook} Let $t \in {{\sf ShSYT}}(\Delta_n)$.
Then we say that $k$ is a \defn{braid hook} of $t$ if the consecutive letters $k-1,k,k+1$ appear in $t$ arranged as in the first picture in~\eqref{equation.hooks}, with no box below the box containing $k-1$. \end{definition} \begin{example} The following tableau in ${{\sf ShSYT}}(\Delta_6)$ \[ \tableau[sY]{1&2&3&7&9&14\\ \bl&\darkblue{4}&\darkblue{5}&8&12&15\\ \bl&\bl&\darkblue{6}&10&13&17\\ \bl&\bl&\bl&11&16&18\\ \bl&\bl&\bl&\bl&\darkred{19}&\darkred{20}\\ \bl&\bl&\bl&\bl&\bl&\darkred{21}} \] has braid hooks $5$ (involving the letters $4,5,6$ on the second position of the diagonal) and $20$ (involving $19,20,21$ on the fifth position of the diagonal). \end{example} \noindent By the results of this section, and using the bijection $\nu$, we can reformulate Theorem~\ref{thm:braid_moves_commutation_class} entirely in terms of tableaux. { \renewcommand{\thetheorem}{\ref{thm:braid_hooks}} \begin{theorem} The expected number of braid hooks of elements in ${{\sf ShSYT}}(\Delta_n)$ is one. \end{theorem} \addtocounter{theorem}{-1} } Theorem~\ref{thm:braid_hooks} (and, as a corollary, Theorem~\ref{thm:braid_moves_commutation_class}) will be proved in the next section in a more general setting. \section{Right-justified tableaux} \label{section.right-justified} The statement of Theorem~\ref{thm:braid_hooks} regarding the expected number of braid hooks in standard shifted tableaux of staircase shape extends to more general shapes. Let $\lambda = (\lambda_1,\lambda_2,\ldots,\lambda_\ell)$ be a partition, which means that $\lambda_1,\ldots,\lambda_\ell$ are integers satisfying $\lambda_1\ge \lambda_2 \ge \cdots \ge \lambda_\ell \ge 0$. We define ${{\sf rSYT}}(\lambda)$ to be the set of standard tableaux of the diagram given by $\lambda$, where we right-justify all rows. This definition requires as usual that all rows and columns are strictly increasing from left to right and top to bottom.
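Theorem~\ref{thm:braid_hooks} can be confirmed by exhaustive enumeration for small staircases. The following Python sketch (our own; cells are 0-indexed, with row $r$ of the shifted shape occupying columns $r,\ldots,r+\lambda_r-1$) generates ${{\sf ShSYT}}((4,3,2,1))$ and counts braid hooks:

```python
lam = (4, 3, 2, 1)  # shifted staircase: row r occupies columns r, ..., r + lam[r] - 1
shape = {(r, r + j) for r in range(len(lam)) for j in range(lam[r])}
N = len(shape)

def fillings(t):
    """Backtracking over standard fillings: place 1, 2, ..., N, always into
    a cell whose left and upper neighbors (when present) are already filled."""
    if len(t) == N:
        yield dict(t)
        return
    for (r, c) in sorted(shape):
        if (r, c) in t:
            continue
        if (r, c - 1) in shape and (r, c - 1) not in t:
            continue
        if (r - 1, c) in shape and (r - 1, c) not in t:
            continue
        t[(r, c)] = len(t) + 1
        yield from fillings(t)
        del t[(r, c)]

tableaux = list(fillings({}))
assert len(tableaux) == 12  # matches Figure 3

def braid_hooks(t):
    """Count k such that k-1, k, k+1 sit as in the first hook of (equation.hooks):
    k to the right of k-1, k+1 below k, and no box below k-1."""
    pos = {v: cell for cell, v in t.items()}
    hooks = 0
    for k in range(2, N):
        (r1, c1), (r2, c2), (r3, c3) = pos[k - 1], pos[k], pos[k + 1]
        if (r2, c2) == (r1, c1 + 1) and (r3, c3) == (r2 + 1, c2) \
                and (r1 + 1, c1) not in shape:
            hooks += 1
    return hooks

# Theorem: the expected number of braid hooks is one.
assert sum(braid_hooks(t) for t in tableaux) == len(tableaux)
```

The same backtracking generator applies verbatim to the right-justified shapes introduced above, after adjusting the set of cells.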
Note that \begin{equation} \label{equation.rSYT=shSYT} {{\sf rSYT}}(\Delta_n)={{\sf ShSYT}}(\Delta_n)\;. \end{equation} A braid hook for $t\in {{\sf rSYT}}(\lambda)$ is defined in the same way as in Definition~\ref{definition.braid hook}. \begin{example} Let $\lambda=(5,2,1)$. Then \[ {{\sf rSYT}}(\lambda) = \left\{ \; \tableau[sY]{1&2&3&4&5\\ \bl&\bl&\bl&\darkblue{6}&\darkblue{7}\\ \bl&\bl&\bl&\bl&\darkblue{8}} \;,\; \tableau[sY]{1&2&\darkblue{3}&\darkblue{4}&6\\ \bl&\bl&\bl&\darkblue{5}&7\\ \bl&\bl&\bl&\bl&8}\; \right\}, \] where the braid hooks are indicated in blue. Note that the expected number of braid hooks is one in this case. \end{example} \begin{theorem} Let $\lambda = (\lambda_1,\lambda_2,\ldots,\lambda_\ell)$ be a partition such that $\lambda_1 > \lambda_2$ and $\lambda_\ell = 1$. Then the expected number of braid hooks in ${{\sf rSYT}}(\lambda)$ is one. \label{thm:braid_moves_young_tableaux} \end{theorem} Note that by~\eqref{equation.rSYT=shSYT}, Theorem~\ref{thm:braid_hooks} is the special case of Theorem~\ref{thm:braid_moves_young_tableaux} for $\lambda=\Delta_n$. In this section we prove the latter by constructing a bijection \begin{equation} \label{equation.varphi} \varphi \colon \{(k,t) \mid \text{$k$ a braid hook in $t\in {{\sf rSYT}}(\lambda)$}\} \to {{\sf rSYT}}(\lambda)\;. \end{equation} The map $\varphi$ is defined using certain operators akin to the promotion operator on tableaux. For $1\le i< |\lambda|$, let \begin{equation} \label{equation.tau} \begin{split} \tau_i \colon {{\sf rSYT}}(\lambda) &\to {{\sf rSYT}}(\lambda)\\ t & \mapsto t. \tau_i \end{split} \end{equation} be the map that interchanges $i$ and $i+1$ in $t$ if the result is again in ${{\sf rSYT}}(\lambda)$ and otherwise leaves $t$ fixed. Define \[ \varphi(k,t) := t.\partial^*_k \partial_k\;, \] where $\partial_k := \tau_k \tau_{k+1} \cdots \tau_{|\lambda|-1}$ and $\partial^*_k := \tau_{k-1} \tau_{k-2} \cdots \tau_1$. 
Note that the operators $\partial_k$ and $\partial^*_k$ are partial \defn{promotion} and \defn{inverse promotion} operators, respectively. For example, as explained in~\cite{stanley.2009}, the operator \[ \partial = \partial_1 = \tau_1 \tau_2 \cdots \tau_{|\lambda|-1} \] coincides with M.P.~Sch\"utzenberger's \defn{promotion} on tableaux. This promotion operator is more commonly defined using \defn{jeu-de-taquin} as follows: given a tableau, remove the letter 1 and successively slide the smaller of the entries in the right and lower neighbor cells (if they exist) into the empty slot, until the empty slot occupies a cell with no nonempty right or lower neighbor cells. Now enter $|\lambda|+1$ into the empty cell and subtract one from each entry. Similarly, the inverse promotion operator $\partial^*=\partial^{-1}$ can be defined using a sliding algorithm starting from the largest letter in the tableau. The inverse promotion operator may be expressed as \[ \partial^* = \partial^*_{|\lambda|} = \tau_{|\lambda|-1}\tau_{|\lambda|-2}\cdots\tau_1\;. \] The sequence of empty slots in the jeu-de-taquin formulation of the promotion operator defines the \defn{promotion sliding path}, denoted $\mathcal{L}$. The \defn{inverse promotion sliding path} is denoted by $\mathcal{R}$. Their description might give the impression that $\mathcal{L}$ and $\mathcal{R}$ are oppositely directed (since $\mathcal{L}$ is defined by removing the letter 1 and then sliding into the empty slot, whereas for $\mathcal{R}$ one removes $|\lambda|$). However, we define them only as undirected paths. Later in this section we will treat them both as paths directed from the top left to bottom right.
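Both descriptions of the promotion operator are easy to experiment with on a computer. The following Python sketch is ours and not part of the paper: tableaux are encoded as dictionaries from cells $(r,c)$ to entries, the operator $\tau_i$ swaps $i$ and $i+1$ unless they occupy adjacent cells of a row or column (in which case the swap would break standardness), and the $\tau$-product and jeu-de-taquin descriptions of $\partial$ can then be compared directly.

```python
# Tableaux are dicts {(row, col): entry} with entries 1..N; the cell set is
# arbitrary, so right-justified and shifted shapes are handled uniformly.

def tau(t, i):
    """Interchange i and i+1 if the result is again standard, else fix t.
    The swap fails exactly when i and i+1 sit in adjacent cells of a row
    or of a column."""
    pos = {v: p for p, v in t.items()}
    (r1, c1), (r2, c2) = pos[i], pos[i + 1]
    if (r1 == r2 and abs(c1 - c2) == 1) or (c1 == c2 and abs(r1 - r2) == 1):
        return t
    s = dict(t)
    s[(r1, c1)], s[(r2, c2)] = i + 1, i
    return s

def promotion_tau(t):
    """Promotion as the product t.tau_1 tau_2 ... tau_{N-1}."""
    for i in range(1, len(t)):
        t = tau(t, i)
    return t

def promotion_jdt(t):
    """Promotion via jeu-de-taquin: remove 1, repeatedly slide the smaller
    of the right/lower neighbors into the hole, enter N+1, subtract one."""
    t, n = dict(t), len(t)
    hole = next(p for p, v in t.items() if v == 1)
    del t[hole]
    while True:
        r, c = hole
        nbrs = [p for p in ((r, c + 1), (r + 1, c)) if p in t]
        if not nbrs:
            break
        nxt = min(nbrs, key=t.get)
        t[hole], hole = t[nxt], nxt
        del t[nxt]
    t[hole] = n + 1
    return {p: v - 1 for p, v in t.items()}
```

On the two tableaux of ${{\sf rSYT}}((5,2,1))$ from the earlier example, both implementations agree, and promotion simply interchanges the two tableaux.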
\begin{comment} Pictures obtained with: sage: t = SkewTableau([[1,2,4,6,10,12], ....: [None,3,5,7,11,13], ....: [None,None,8,9,14,17], ....: [None,None,None,15,16,18], ....: [None,None,None,None,19,20], ....: [None,None,None,None,None,21]]) sage: print shifted_tableau_latex2(t, promotion_path=True, inverse_promotion_path=True) \end{comment} \begin{example} \label{example.sliding paths} We illustrate the promotion sliding path $\mathcal{L}$ by bold cells and the inverse promotion path $\mathcal{R}$ by shaded blue cells: \[ \mathcal{L}: \tableau[sY]{\tf 1&\tf 2&4&6&10&12\\ \bl& \tf 3&\tf 5&\tf 7&11&13\\ \bl&\bl&8&\tf 9& \tf 14& 17\\ \bl&\bl&\bl&15&\tf 16& \tf 18\\ \bl&\bl&\bl&\bl&19&\tf 20\\ \bl&\bl&\bl&\bl&\bl&\tf 21} \qquad \qquad \qquad \mathcal{R}: \tableau[sY]{\lightblue{\vrule height\squaresize width\squaresize} \overlay 1&\lightblue{\vrule height\squaresize width\squaresize} \overlay 2&\lightblue{\vrule height\squaresize width\squaresize} \overlay 4&6&10&12\\ \bl& 3&\lightblue{\vrule height\squaresize width\squaresize} \overlay5&7&11&13\\ \bl&\bl&\lightblue{\vrule height\squaresize width\squaresize} \overlay 8&\lightblue{\vrule height\squaresize width\squaresize} \overlay 9& 14& 17\\ \bl&\bl&\bl&\lightblue{\vrule height\squaresize width\squaresize} \overlay 15&\lightblue{\vrule height\squaresize width\squaresize} \overlay 16&18\\ \bl&\bl&\bl&\bl&\lightblue{\vrule height\squaresize width\squaresize} \overlay 19&\lightblue{\vrule height\squaresize width\squaresize} \overlay 20\\ \bl&\bl&\bl&\bl&\bl&\lightblue{\vrule height\squaresize width\squaresize} \overlay 21}\;. \] \end{example} \begin{lemma} \label{lemma.phi bijection} Let $\lambda = (\lambda_1,\lambda_2,\ldots,\lambda_\ell)$ be a partition such that $\lambda_1 > \lambda_2$ and $\lambda_\ell = 1$. Then $\varphi$ is a bijection. \end{lemma} \begin{proof} To show that $\varphi$ is a bijection, we explicitly construct its inverse. To this end, let $t\in {{\sf rSYT}}(\lambda)$. 
We want to associate to $t$ a pair $(k,t')$, where $k$ is a braid hook in $t' \in {{\sf rSYT}}(\lambda)$ and $t=\varphi(k,t')$. Given that each $\tau_i$ is a bijection, so is $\partial^*_k\partial_k$, and $t'=t.(\partial^*_k\partial_k)^{-1}$ is completely determined by $k$. Hence, all that is needed is to prove that there exists a unique $k$ such that $k$ is a braid hook of $t.(\partial^*_k\partial_k)^{-1}$. To achieve this, we use that $\partial_k$ and $\partial^*_k$ are the partial promotion and inverse promotion operators, respectively, and study the crossings of the promotion path $\mathcal{L}$ and the inverse promotion path $\mathcal{R}$ in $t$. Namely, note that $k$ is a braid hook of $t'=t.(\partial^*_k\partial_k)^{-1}$ if and only if the promotion path $\mathcal{L}$ and inverse promotion path $\mathcal{R}$ of $t$ cross in the left inner corner specified by $k$ according to the following configuration \begin{equation} \label{equation.allowed} \tableau[sY]{\tf $x$ & \tf \lightblue{\vrule height\squaresize width\squaresize} \overlay $k$\\ \bl & \lightblue{\vrule height\squaresize width\squaresize} \overlay $y$}\;, \end{equation} where $x$ and $y$ are any allowed values. This can be seen as follows: the action of $\partial_k^{-1}=\tau_{|\lambda|-1}\cdots\tau_k$ on $t$ performs jeu-de-taquin along the suffix of the inverse promotion path $\mathcal{R}$, down to value $k$. At the end, $y$ is replaced by $k+1$ if and only if $k$ moves into the cell of $y$ under jeu-de-taquin, that is, if the inverse promotion path $\mathcal{R}$ of $t$ is as in~\eqref{equation.allowed}. The same reasoning relates the replacement of $x$ by $k-1$ in $t'$ with the position of the promotion path $\mathcal{L}$ of $t$ as in~\eqref{equation.allowed}. It remains to prove that the paths $\mathcal{L}$ and $\mathcal{R}$ of $t$ admit exactly one such crossing.
First notice that the $2\times 2$ configuration \begin{equation} \label{equation.forbidden1} \tableau[sY]{\tf $x$ & b \\ \tf \lightblue{\vrule height\squaresize width\squaresize} \overlay $a$& \lightblue{\vrule height\squaresize width\squaresize} \overlay $y$} \end{equation} is forbidden in $t$. Namely, the conditions for $\mathcal{L}$ impose that $a<b$ whereas the conditions for $\mathcal{R}$ require that $a>b$, a contradiction. Symmetrically, the following $2\times 2$ configuration is forbidden: \begin{equation} \label{equation.forbidden2} \tableau[sY]{\tf $x$ & \tf \lightblue{\vrule height\squaresize width\squaresize} \overlay $b$\\ $a$ & \lightblue{\vrule height\squaresize width\squaresize} \overlay $y$}\;. \end{equation} If the letter $a$ below $x$ is missing, however, then~\eqref{equation.forbidden2} \emph{is} allowed, and this recovers configuration~\eqref{equation.allowed} with $b=k$. By the conditions on right-justified tableaux, the letter 1 is in the top leftmost cell of $t$ and the largest letter $|\lambda|$ is in the bottom rightmost cell of $t$. Hence, both sliding paths $\mathcal{L}$ and $\mathcal{R}$ reach from the top leftmost cell to the bottom rightmost cell of $t$. Whenever the two paths overlap on a horizontal step $\tableau[sY]{\tf \lightblue{\vrule height\squaresize width\squaresize} \overlay $a$ & \tf \lightblue{\vrule height\squaresize width\squaresize} \overlay $b$}$, let us consider $\mathcal{R}$ to be (locally) above $\mathcal{L}$. If, on the other hand, they overlap on a vertical step $\tableau[sY]{\tf \lightblue{\vrule height\squaresize width\squaresize} \overlay $a$\\ \tf \lightblue{\vrule height\squaresize width\squaresize} \overlay $b$}$, then we consider $\mathcal{L}$ to be (locally) above $\mathcal{R}$. If the two paths do not overlap, the northeastern path is considered to be (locally) above the other. 
Notice that the two paths $\mathcal{L}$ and $\mathcal{R}$ overlap on the initial horizontal step through the two top leftmost cells since $\lambda_1>\lambda_2$. Likewise, since $\lambda_\ell = 1$, the paths overlap on the final vertical step into the bottom rightmost cell. Hence (according to our conventions) the paths start out with $\mathcal{R}$ above $\mathcal{L}$, and finish with $\mathcal{L}$ above $\mathcal{R}$. The forbidden configurations~\eqref{equation.forbidden1} and~\eqref{equation.forbidden2} are exactly those that prevent the two paths from crossing from ($\mathcal{R}$ above $\mathcal{L}$) to ($\mathcal{L}$ above $\mathcal{R}$) or vice versa, with a single exception: configuration~\eqref{equation.allowed} allows for a crossing from ($\mathcal{R}$ above $\mathcal{L}$) to ($\mathcal{L}$ above $\mathcal{R}$) on a left inner corner, and corresponds to an instance of a braid hook (indeed, at this position the paths will not share any steps, but rather pass orthogonally through one another). Because of the initial and final conditions, such a crossing must happen exactly once.
\end{proof} \begin{example} \label{example.sliding paths1} Superimposing the two sliding paths of Example~\ref{example.sliding paths} \[ t= \tableau[sY]{\tf \lightblue{\vrule height\squaresize width\squaresize} \overlay 1&\tf \lightblue{\vrule height\squaresize width\squaresize} \overlay 2&\lightblue{\vrule height\squaresize width\squaresize} \overlay 4&6&10&12\\ \bl& \tf 3&\tf \lightblue{\vrule height\squaresize width\squaresize} \overlay5&\tf 7&11&13\\ \bl&\bl&\lightblue{\vrule height\squaresize width\squaresize} \overlay 8&\tf \lightblue{\vrule height\squaresize width\squaresize} \overlay 9& \tf 14& 17\\ \bl&\bl&\bl&\lightblue{\vrule height\squaresize width\squaresize} \overlay 15&\tf \lightblue{\vrule height\squaresize width\squaresize} \overlay 16&\tf 18\\ \bl&\bl&\bl&\bl&\lightblue{\vrule height\squaresize width\squaresize} \overlay 19&\tf \lightblue{\vrule height\squaresize width\squaresize} \overlay 20\\ \bl&\bl&\bl&\bl&\bl&\tf \lightblue{\vrule height\squaresize width\squaresize} \overlay 21} \] one notices that there is precisely one configuration of the form~\eqref{equation.allowed}, namely with $x=3$, $k=5$ and $y=8$. Hence $\varphi^{-1}(t)$ is the braid hook $k=5$ in \[ t'= \tableau[sY]{1&2&3&7&11&13\\ \bl& 4&5&8&12&14\\ \bl&\bl&6&9&15&18\\ \bl&\bl&\bl&10&16&19\\ \bl&\bl&\bl&\bl&17&20\\ \bl&\bl&\bl&\bl&\bl&21}\;. \] \end{example} \begin{proof}[Proof of Theorem~\ref{thm:braid_moves_young_tableaux}] Since by Lemma~\ref{lemma.phi bijection} $\varphi$ is a bijection, we have that \[ \# \{(k,t) \mid \text{$k$ a braid hook in $t\in {{\sf rSYT}}(\lambda)$}\} = \#{{\sf rSYT}}(\lambda)\;. \] This implies immediately that the expected number of braid hooks (which is the quotient of the two numbers) is one. \end{proof} We now study how the two partial (inverse) promotion operators $\partial_k$ and $\partial_k^*$, which are used in the bijection $\varphi$, interact.
This enables us to deduce a variant of Theorem~\ref{thm:braid_moves_young_tableaux} as a statement on full promotion paths in right-justified tableaux. Namely, let $t' \in {{\sf rSYT}}(\lambda)$, $k$ a braid hook in $t'$, and $t=\varphi(k,t')$. When starting at a braid hook $k$, the operators $\partial_k$ and $\partial^*_k$ commute: \begin{equation} \label{eq:promotion_diagram} \begin{tikzpicture} \matrix (m)[matrix of math nodes,row sep=3em,column sep=6em,minimum width=2em,text height=1.5ex,text depth=0.25ex] { t' & t_r \\ t_l & t=\varphi(k,t') \\ }; \path[->] (m-1-1) edge node [above] {$\partial_k$} (m-1-2); \path[->] (m-2-1) edge node [above] {$\partial_k$} (m-2-2); \path[->] (m-1-1) edge node [left ] {$\partial^*_k$} (m-2-1); \path[->] (m-1-2) edge node [right ] {$\partial^*_k$} (m-2-2); \end{tikzpicture} \end{equation} The nice feature of this diagram is that $t_r$ is obtained from $t_l$ by applying a full promotion operator: $t_r=t_l.\partial$. Hence, on the $t_l$ side, we can focus on the combinatorics of just the usual promotion path. 
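The commutativity in~\eqref{eq:promotion_diagram} can also be checked by machine on small cases. The following Python sketch is ours and not part of the paper: tableaux are encoded as dictionaries from cells $(r,c)$ to entries, the operators $\tau_i$ and the partial (inverse) promotions are implemented directly from their definitions, and the helper \texttt{shifted} (our name) builds a shifted staircase tableau from its list of rows.

```python
def tau(t, i):
    """Interchange i and i+1 if the result is again standard, else fix t."""
    pos = {v: p for p, v in t.items()}
    (r1, c1), (r2, c2) = pos[i], pos[i + 1]
    if (r1 == r2 and abs(c1 - c2) == 1) or (c1 == c2 and abs(r1 - r2) == 1):
        return t
    s = dict(t)
    s[(r1, c1)], s[(r2, c2)] = i + 1, i
    return s

def partial_promotion(t, k):
    """d_k = tau_k tau_{k+1} ... tau_{N-1}, applied in this order."""
    for i in range(k, len(t)):
        t = tau(t, i)
    return t

def partial_inverse_promotion(t, k):
    """d*_k = tau_{k-1} tau_{k-2} ... tau_1, applied in this order."""
    for i in range(k - 1, 0, -1):
        t = tau(t, i)
    return t

def shifted(rows):
    """Shifted staircase encoding: row r occupies columns r, r+1, ..."""
    return {(r, r + j): v for r, row in enumerate(rows) for j, v in enumerate(row)}

# t' and t from the running example; k = 5 is a braid hook of t'.
t_prime = shifted([[1, 2, 3, 7, 11, 13], [4, 5, 8, 12, 14],
                   [6, 9, 15, 18], [10, 16, 19], [17, 20], [21]])
t_both = shifted([[1, 2, 4, 6, 10, 12], [3, 5, 7, 11, 13],
                  [8, 9, 14, 17], [15, 16, 18], [19, 20], [21]])
```

Applying $\partial^*_5$ and $\partial_5$ to $t'$ in either order yields the same tableau $t$, in line with the diagram.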
\begin{example} \label{example.sliding paths2} Continuing Example~\ref{example.sliding paths1} we obtain the commutative diagram: \newsavebox{\shiftedtableautp} \sbox{\shiftedtableautp}{ t'= \tableau[sY]{1&2&3&7&11&13\\ \bl& 4&5&8&12&14\\ \bl&\bl&6&9&15&18\\ \bl&\bl&\bl&10&16&19\\ \bl&\bl&\bl&\bl&17&20\\ \bl&\bl&\bl&\bl&\bl&21} } \newsavebox{\shiftedtableautl} \sbox{\shiftedtableautl}{ \tableau[sY]{ \tf 1 & \tf 2 & 4 & 7 & 11 & 13\\ \bl & \tf 3 & \tf 5 & 8 & 12 & 14\\ \bl & \bl & \tf 6 & \tf 9 & 15 & 18\\ \bl & \bl & \bl & \tf 10 & \tf 16 & 19\\ \bl & \bl & \bl & \bl & \tf 17 & \tf 20\\ \bl & \bl & \bl & \bl & \bl & \tf 21} } \newsavebox{\shiftedtableautr} \sbox{\shiftedtableautr}{ \tableau[sY]{ \lightblue{\vrule height\squaresize width\squaresize}\overlay 1 & \lightblue{\vrule height\squaresize width\squaresize}\overlay 2 & 3 & 6 & 10 & 12\\ \bl & \lightblue{\vrule height\squaresize width\squaresize}\overlay 4 & \lightblue{\vrule height\squaresize width\squaresize}\overlay 5 & 7 & 11 & 13\\ \bl & \bl & \lightblue{\vrule height\squaresize width\squaresize}\overlay 8 & \lightblue{\vrule height\squaresize width\squaresize}\overlay 9 & 14 & 17\\ \bl & \bl & \bl & \lightblue{\vrule height\squaresize width\squaresize}\overlay 15 & \lightblue{\vrule height\squaresize width\squaresize}\overlay 16 & 18\\ \bl & \bl & \bl & \bl & \lightblue{\vrule height\squaresize width\squaresize}\overlay 19 & \lightblue{\vrule height\squaresize width\squaresize}\overlay 20\\ \bl & \bl & \bl & \bl & \bl & \lightblue{\vrule height\squaresize width\squaresize}\overlay 21} } \newsavebox{\shiftedtableaut} \sbox{\shiftedtableaut}{ t= \tableau[sY]{\tf \lightblue{\vrule height\squaresize width\squaresize} \overlay 1&\tf \lightblue{\vrule height\squaresize width\squaresize} \overlay 2&\lightblue{\vrule height\squaresize width\squaresize} \overlay 4&6&10&12\\ \bl& \tf 3&\tf \lightblue{\vrule height\squaresize width\squaresize} \overlay5&\tf 7&11&13\\ \bl&\bl&\lightblue{\vrule height\squaresize 
width\squaresize} \overlay 8&\tf \lightblue{\vrule height\squaresize width\squaresize} \overlay 9& \tf 14& 17\\ \bl&\bl&\bl&\lightblue{\vrule height\squaresize width\squaresize} \overlay 15&\tf \lightblue{\vrule height\squaresize width\squaresize} \overlay 16&\tf 18\\ \bl&\bl&\bl&\bl&\lightblue{\vrule height\squaresize width\squaresize} \overlay 19&\tf \lightblue{\vrule height\squaresize width\squaresize} \overlay 20\\ \bl&\bl&\bl&\bl&\bl&\tf \lightblue{\vrule height\squaresize width\squaresize} \overlay 21} } \begin{displaymath} \begin{tikzpicture} \matrix (m) [matrix of math nodes,row sep=3em,column sep=5em,minimum width=2em] { \usebox{\shiftedtableautp} & \usebox{\shiftedtableautr} \\ \usebox{\shiftedtableautl} & \usebox{\shiftedtableaut} \\ }; \path[->] (m-1-1) edge node [above] {$\partial_5$} (m-1-2); \path[->] (m-2-1) edge node [above] {$\partial_5$} (m-2-2); \path[->] (m-1-1) edge node [left ] {$\partial^*_5$} (m-2-1); \path[->] (m-1-2) edge node [right ] {$\partial^*_5$} (m-2-2); \end{tikzpicture} \end{displaymath} \begin{comment} sage: tp = SkewTableau([[1,2,3,6,11,12], ....: [None,4,5,7,13,14], ....: [None,None,8,9,15,17], ....: [None,None,None,10,16,18], ....: [None,None,None,None,19,20], ....: [None,None,None,None,None,21]]) sage: k = 8 sage: tr8 = to_shifted_tableau(promotion(to_linear_extension(tp),k)) sage: tl8 = to_shifted_tableau(promotion(to_linear_extension(tp),k,side="left", inverse=True)) sage: t8 = to_shifted_tableau(promotion(to_linear_extension(tl),k)) sage: print shifted_tableau_latex2(tp) sage: print shifted_tableau_latex2(tl, promotion_path=True) sage: print shifted_tableau_latex2(tr, inverse_promotion_path=True) sage: print shifted_tableau_latex2(t, promotion_path=True, inverse_promotion_path=True) \newsavebox{\shiftedtableautp} \sbox{\shiftedtableautp}{ \tableau[sY]{ 1 & 2 & 3 & 6 & 11 & 12\\ \bl & 4 & 5 & 7 & 13 & 14\\ \bl & \bl & 8 & 9 & 15 & 17\\ \bl & \bl & \bl & 10 & 16 & 18\\ \bl & \bl & \bl & \bl & 19 & 20\\ \bl & \bl & \bl & \bl 
& \bl & 21} } \newsavebox{\shiftedtableautl} \sbox{\shiftedtableautl}{ \tableau[sY]{ \tf 1 & \tf 2 & 4 & 7 & 11 & 12\\ \bl & \tf 3 & \tf 5 & 8 & 13 & 14\\ \bl & \bl & \tf 6 & \tf 9 & 15 & 17\\ \bl & \bl & \bl & \tf 10 & \tf 16 & \tf 18\\ \bl & \bl & \bl & \bl & 19 & \tf 20\\ \bl & \bl & \bl & \bl & \bl & \tf 21} } \newsavebox{\shiftedtableautr} \sbox{\shiftedtableautr}{ \tableau[sY]{ \lightblue{\vrule height\squaresize width\squaresize}\overlay 1 & \lightblue{\vrule height\squaresize width\squaresize}\overlay 2 & 3 & 6 & 10 & 11\\ \bl & \lightblue{\vrule height\squaresize width\squaresize}\overlay 4 & \lightblue{\vrule height\squaresize width\squaresize}\overlay 5 & 7 & 12 & 13\\ \bl & \bl & \lightblue{\vrule height\squaresize width\squaresize}\overlay 8 & \lightblue{\vrule height\squaresize width\squaresize}\overlay 9 & 14 & 16\\ \bl & \bl & \bl & \lightblue{\vrule height\squaresize width\squaresize}\overlay 15 & \lightblue{\vrule height\squaresize width\squaresize}\overlay 17 & \lightblue{\vrule height\squaresize width\squaresize}\overlay 19\\ \bl & \bl & \bl & \bl & 18 & \lightblue{\vrule height\squaresize width\squaresize}\overlay 20\\ \bl & \bl & \bl & \bl & \bl & \lightblue{\vrule height\squaresize width\squaresize}\overlay 21} } \newsavebox{\shiftedtableaut} \sbox{\shiftedtableaut}{ \tableau[sY]{ \tf \lightblue{\vrule height\squaresize width\squaresize}\overlay 1 & \tf \lightblue{\vrule height\squaresize width\squaresize}\overlay 2 & \lightblue{\vrule height\squaresize width\squaresize}\overlay 4 & \lightblue{\vrule height\squaresize width\squaresize}\overlay 7 & 10 & 11\\ \bl & \tf 3 & \tf 5 & \lightblue{\vrule height\squaresize width\squaresize}\overlay 8 & 12 & 13\\ \bl & \bl & \tf 6 & \tf \lightblue{\vrule height\squaresize width\squaresize}\overlay 9 & \tf 14 & \tf 16\\ \bl & \bl & \bl & \lightblue{\vrule height\squaresize width\squaresize}\overlay 15 & \lightblue{\vrule height\squaresize width\squaresize}\overlay 17 & \tf \lightblue{\vrule 
height\squaresize width\squaresize}\overlay 19\\ \bl & \bl & \bl & \bl & 18 & \tf \lightblue{\vrule height\squaresize width\squaresize}\overlay 20\\ \bl & \bl & \bl & \bl & \bl & \tf \lightblue{\vrule height\squaresize width\squaresize}\overlay 21} } \begin{displaymath} \begin{tikzpicture} \matrix (m) [matrix of math nodes,row sep=3em,column sep=5em,minimum width=2em] { \usebox{\shiftedtableautp} & \usebox{\shiftedtableautr} \\ \usebox{\shiftedtableautl} & \usebox{\shiftedtableaut} \\ }; \path[->] (m-1-1) edge node [above] {$\partial_9$} (m-1-2); \path[->] (m-2-1) edge node [above] {$\partial_9$} (m-2-2); \path[->] (m-1-1) edge node [left ] {$\partial^*_9$} (m-2-1); \path[->] (m-1-2) edge node [right ] {$\partial^*_9$} (m-2-2); \end{tikzpicture} \end{displaymath} \end{comment} Note that the promotion path of $t_l$ is made of the first half of the promotion path of $t$ and the second half of the inverse promotion path of $t$. Note also that, viewing the promotion path of $t_l$ as a Dyck path, it has a peak of height one with corresponding values in the tableau of the form $(*,k,k+1)$ (here $k=5$). \end{example} The tableau in Example~\ref{example.sliding paths2} was of shifted staircase shape, so that the promotion and inverse promotion paths could easily be viewed as Dyck paths. For a general right-justified tableau $t\in{{\sf rSYT}}(\lambda)$, we define the analogous notion of a \defn{left partial braid hook} to be an inner corner with values $(*,k,k+1)$ of $t_l$ that lies on the promotion path. The symmetric situation appears in $t_r$ and we define a \defn{right partial hook} to be an inner corner with values $(*,k,k+1)$ of $t_r$ that lies on the inverse promotion path. We thus obtain the following corollary to Theorem~\ref{thm:braid_moves_young_tableaux}. 
\begin{corollary} \label{corollary.partial hooks} The commutative diagram~\eqref{eq:promotion_diagram} gives bijections between: \begin{enumerate} \item Pairs $(k,t')$ where $t' \in {{\sf rSYT}}(\lambda)$ and $k$ is a braid hook of $t'$. \item Pairs $(k,t_l)$ where $t_l \in {{\sf rSYT}}(\lambda)$ and $k$ is a left partial braid hook of $t_l$. \item Pairs $(k,t_r)$ where $t_r \in {{\sf rSYT}}(\lambda)$ and $k$ is a right partial braid hook of $t_r$. \item Right-justified tableaux $t \in {{\sf rSYT}}(\lambda)$. \end{enumerate} In particular, the expected number of left partial braid hooks of a tableau in ${{\sf rSYT}}(\lambda)$ is one. \end{corollary} \section{Half-right-justified tableaux} \label{sec:half_right_justified} In this section, we turn our attention to shifted SYT of \defn{half-right-justified shape}. An SYT $t$ is half-right-justified of shape $\lambda = (\lambda_1,\lambda_2,\ldots,\lambda_\ell)$ if $\lambda_1>\lambda_2 > \cdots > \lambda_\ell$ are strictly decreasing and $t$ is justified so that the rightmost cell of each row is one step below and to the left of the rightmost cell of the previous row. We denote the set of half-right-justified SYT of shape $\lambda$ by ${{\sf hrSYT}}(\lambda)$. This definition is motivated by the fact that tableaux of these shapes can be adjoined to their reflection to create tableaux of right-justified shapes, to which the results of Section \ref{section.right-justified} apply. See Figure \ref{figure.half right} for an example. Braid hooks in half-right-justified tableaux are still defined as in Definition~\ref{definition.braid hook} (with ${{\sf ShSYT}}(\Delta_n)$ replaced by ${{\sf hrSYT}}(\lambda)$).
\begin{figure} \begin{tikzpicture}[scale=\squaresize / 1cm * 1pt] \draw (0,3) -- (6,3) -- ++(0,-1) -- ++(-1,0) -- ++(0,-1) -- ++(-1,0) -- ++(0,-1) -- ++(-1,0) -- ++(0,1) -- ++(-1,0) -- ++(-1,0) -- ++(0,1) -- ++(-1,0) -- ++(0,1); \draw (7,0) node {+}; \begin{scope}[xscale=-1, rotate = 90, shift={(-3,8)}, dotted] \draw (0,3) -- (6,3) -- ++(0,-1) -- ++(-1,0) -- ++(0,-1) -- ++(-1,0) -- ++(0,-1) -- ++(-1,0) -- ++(0,1) -- ++(-1,0) -- ++(-1,0) -- ++(0,1) -- ++(-1,0) -- ++(0,1); \end{scope} \draw (12.5,0) node {=}; \begin{scope}[shift={(13,0)}] \draw (0,3) -- (6,3) -- ++(0,-1) -- ++(-1,0) -- ++(0,-1) -- ++(-1,0) -- ++(0,-1) -- ++(-1,0) -- ++(0,1) -- ++(-1,0) -- ++(-1,0) -- ++(0,1) -- ++(-1,0) -- ++(0,1); \begin{scope}[xscale=-1, rotate = 90, shift={(-3,4)}, dotted] \draw (0,3) -- (6,3) -- ++(0,-1) -- ++(-1,0) -- ++(0,-1) -- ++(-1,0) -- ++(0,-1) -- ++(-1,0) -- ++(0,1) -- ++(-1,0) -- ++(-1,0) -- ++(0,1) -- ++(-1,0) -- ++(0,1); \end{scope} \end{scope} \end{tikzpicture} \caption{A half-right-justified shape is joined to its reflection to create a right-justified shape.} \label{figure.half right} \end{figure} As a specific example, it is natural to look at half-right-justified tableaux of trapezoidal shape, which coincide with shifted tableaux of trapezoidal shape. These are SYT of shape $\Delta_n^t = (2n+1, 2n-1, \dots, 5,3,1)$, justified so that the center cells of each row are in the same column. Example~\ref{example.trapezoidal} illustrates such an SYT. By a theorem of M.~Haiman, SYT of shifted trapezoidal shape are in bijection with the set of all reduced words for the longest element in type $B$~\cite{haiman.1992}, although we no longer have the interpretation as braid moves on these words. 
Regardless, by the heap construction of Section~\ref{section.heaps}, it is clear that the reduced words in the commutation class of $\prod_{i=1}^{2n-1}s_{i}s_{i-2}\cdots s_{2-(i\mod 2)} \in \mathfrak{S}_{2n}$ are in bijection with such tableaux (and that braid relations in the words correspond to braid hooks in the tableaux). \begin{example} \label{example.trapezoidal} The following trapezoidal tableau is in ${{\sf hrSYT}}(\Delta_2^t)$ \[ \tableau[sY]{1&2&3&4&9\\ \bl&\darkblue{5}&\darkblue{6}&8&\bl \\ \bl&\bl&\darkblue{7}&\bl&\bl \\}\;. \] It contains one braid hook, shown in blue. The letters $\{6,7,8\}$ do not form a braid hook --- by definition, braid hook configurations can only occur on the lower left boundary. \end{example} We prove the following theorem. \begin{theorem} The expected number of braid hooks in ${{\sf hrSYT}}(\lambda)$ is at most one half. If $\lambda_1 \geq \lambda_2 + 2$ and $\lambda_\ell = 1$, then the expected number of braid hooks in ${{\sf hrSYT}}(\lambda)$ is exactly one half. \label{thm:braid_moves_half_right_young_tableaux} \end{theorem} Note that half-right-justified tableaux of trapezoidal shape satisfy $\lambda_1=\lambda_2+2$ and $\lambda_\ell = 1$, so Theorem~\ref{thm:braid_moves_half_right_young_tableaux} implies Theorem~\ref{thm:braid_moves_half_young_tableaux}. \subsection{Proof of Theorem~\ref{thm:braid_moves_half_right_young_tableaux}: Injective Case} \label{subsection.injective} We use the techniques of Section~\ref{section.right-justified}. As in that section, we define a map \begin{equation} \psi \colon \{(k,t) \mid \text{$k$ a braid hook in $t\in {{\sf hrSYT}}(\lambda)$}\} \to {{\sf hrSYT}}(\lambda) \end{equation} using the partial promotion and inverse promotion operators $\partial_k := \tau_k \tau_{k+1} \cdots \tau_{|\lambda|-1}$ and $\partial^*_k := \tau_{k-1} \tau_{k-2} \cdots \tau_1$ such that \[ \psi(k,t) := t.\partial^*_k \partial_k\;. \] As before, we seek to understand the image of the map $\psi$. 
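As a sanity check, the expectations in Theorems~\ref{thm:braid_moves_young_tableaux} and~\ref{thm:braid_moves_half_right_young_tableaux} can be confirmed exhaustively for small shapes. The following brute-force Python sketch is ours and not part of the paper: a shape is an explicit set of cells $(r,c)$, standard fillings are enumerated as linear extensions, and braid hooks are detected directly from Definition~\ref{definition.braid hook}.

```python
def standard_fillings(cells):
    """Yield all standard fillings of the cell set: entries 1..N increase
    to the right along rows and downwards along columns."""
    cells, n = set(cells), len(cells)

    def extend(filling, nxt):
        if nxt > n:
            yield dict(filling)
            return
        for (r, c) in cells:
            # nxt may go into (r, c) if the cell is empty and its left and
            # upper neighbors (when they belong to the shape) are filled
            if (r, c) in filling:
                continue
            if (r, c - 1) in cells and (r, c - 1) not in filling:
                continue
            if (r - 1, c) in cells and (r - 1, c) not in filling:
                continue
            filling[(r, c)] = nxt
            yield from extend(filling, nxt + 1)
            del filling[(r, c)]

    yield from extend({}, 1)

def braid_hooks(t):
    """Letters k with k-1, k adjacent in a row, k+1 directly below k, and
    no cell of the shape below k-1."""
    pos = {v: p for p, v in t.items()}
    return [k for k in range(2, len(t))
            if pos[k] == (pos[k - 1][0], pos[k - 1][1] + 1)
            and pos[k + 1] == (pos[k][0] + 1, pos[k][1])
            and (pos[k - 1][0] + 1, pos[k - 1][1]) not in t]

# Right-justified shape (5,2,1): rows pushed against the rightmost column.
rj = [(0, c) for c in range(5)] + [(1, 3), (1, 4)] + [(2, 4)]

# Half-right-justified trapezoid Delta_2^t = (5,3,1): centers aligned.
hr = [(0, c) for c in range(5)] + [(1, c) for c in (1, 2, 3)] + [(2, 2)]
```

The braid hook counts come out as predicted: expectation one for the right-justified shape $(5,2,1)$ (its two tableaux carry one braid hook each), and one half for the trapezoid $\Delta_2^t=(5,3,1)$.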
Recall that on right-justified tableaux of shape $\lambda$, the map $\varphi$ is a bijection, which shows that the expected number of braid hooks in a tableau in ${{\sf rSYT}}(\lambda)$ is one. To prove Theorem~\ref{thm:braid_moves_half_right_young_tableaux}, we will show that on ${{\sf hrSYT}}(\lambda)$ the map $\psi$ is an injection whose image is at most half of ${{\sf hrSYT}}(\lambda)$, and that if $\lambda_1 \ge \lambda_2+2$ and $\lambda_\ell = 1$, then the image of $\psi$ is exactly half of ${{\sf hrSYT}}(\lambda)$. As in the proof of Lemma~\ref{lemma.phi bijection}, we consider an element $t \in {{\sf hrSYT}}(\lambda)$, and examine the promotion and inverse promotion paths $\mathcal{L}$ and $\mathcal{R}$. Each appearance of $t$ in the image of $\psi$ corresponds to a crossing from ($\mathcal{R}$ above $\mathcal{L}$) to ($\mathcal{L}$ above $\mathcal{R}$) on a left inner corner. It is impossible for the reverse crossing to occur, as configuration~\eqref{equation.forbidden1} is forbidden, so the paths $\mathcal{L}$ and $\mathcal{R}$ cross at most once in $t$, showing that $\psi$ is injective. However, for $t \in {{\sf hrSYT}}(\lambda)$, it is no longer true that the paths $\mathcal{L}$ and $\mathcal{R}$ must cross. In the top left corner, the two paths overlap, and so by our convention we consider $\mathcal{R}$ to be above $\mathcal{L}$. The path $\mathcal{R}$ will end in the cell containing the largest letter $|\lambda|$, while the path $\mathcal{L}$ could end in a different lower right cell. Example~\ref{example.trapezoidal no intersection} illustrates this behavior. 
\begin{example} \label{example.trapezoidal no intersection} In \[ \tableau[sY]{\tf \lightblue{\vrule height\squaresize width\squaresize} \overlay 1&\tf \lightblue{\vrule height\squaresize width\squaresize} \overlay 2&\lightblue{\vrule height\squaresize width\squaresize} \overlay 4&\lightblue{\vrule height\squaresize width\squaresize} \overlay 5&\lightblue{\vrule height\squaresize width\squaresize} \overlay 7&\lightblue{\vrule height\squaresize width\squaresize} \overlay 12&13 \\ \bl&\tf 3&\tf 6&\tf 8&11&\lightblue{\vrule height\squaresize width\squaresize} \overlay 16&\bl\\ \bl&\bl&9&\tf 10&\tf 14&\bl&\bl\\ \bl&\bl&\bl&15&\bl&\bl&\bl\\} \in {{\sf hrSYT}}(\Delta_3^t) \] the path $\mathcal{R}$ is always above the path $\mathcal{L}$. \end{example} A tableau $t$ appears in the image of $\psi$ if and only if the paths $\mathcal{L}$ and $\mathcal{R}$ of $t$ cross. Hence, to prove Theorem~\ref{thm:braid_moves_half_right_young_tableaux}, it suffices to show that $\mathcal{L}$ and $\mathcal{R}$ cross in at most half of the tableaux in ${{\sf hrSYT}}(\lambda)$, and exactly half when $\lambda_1 \ge \lambda_2+2$ and $\lambda_\ell = 1$. We will now work towards pairing elements of ${{\sf hrSYT}}(\lambda)$ in which the paths $\mathcal{L}$ and $\mathcal{R}$ cross with those in which the two paths do not cross. For general shapes $\lambda$, some tableaux in which the paths do not cross may remain unpaired, while if $\lambda_1\ge \lambda_2+2$ and $\lambda_\ell = 1$, every such tableau is paired. We will use the \defn{evacuation} and \defn{dual evacuation} maps, defined as \begin{equation*} \begin{split} \epsilon &= (\tau_1\tau_2\cdots\tau_{|\lambda|-1})(\tau_1\tau_2\cdots\tau_{|\lambda|-2})\cdots(\tau_1\tau_2)(\tau_1)\;,\\ \epsilon^* &= (\tau_{|\lambda|-1}\tau_{|\lambda|-2}\cdots\tau_1)(\tau_{|\lambda|-1}\tau_{|\lambda|-2}\cdots \tau_2)\cdots(\tau_{|\lambda|-1}\tau_{|\lambda|-2})(\tau_{|\lambda|-1})\;. 
\end{split} \end{equation*} Here the $\tau_i$ are as defined in~\eqref{equation.tau} with ${{\sf rSYT}}(\lambda)$ replaced by ${{\sf hrSYT}}(\lambda)$. In order to prove Theorem~\ref{thm:braid_moves_half_right_young_tableaux}, it suffices to show the following proposition. \begin{proposition} \label{proposition.cross pairing} If in the tableau $t \in {{\sf hrSYT}}(\lambda)$, the paths $\mathcal{L}$ and $\mathcal{R}$ cross, then in the tableau $t.\epsilon$, the paths $\mathcal{L}$ and $\mathcal{R}$ do not cross. If $\lambda_1 \geq \lambda_2 + 2$ and $\lambda_\ell = 1$, then the converse is also true. \end{proposition} Given an element $t \in {{\sf hrSYT}}(\lambda)$, we define the \defn{conjugate} of $t$, denoted by $t^\dagger$, as the tableau obtained by reflecting $t$ in the diagonal from bottom left to top right and then reversing the order of the entries (that is, replacing each entry $i$ by $|\lambda|+1-i$). The tableau thus obtained has rows and columns in increasing order because the reflection takes rows and columns in increasing order to columns and rows in decreasing order respectively, and then reversing the entries produces columns and rows in increasing order. Hence $t^\dagger$ is an SYT, although not of the same shape as $t$. Example~\ref{example.conjugation} illustrates this operation. \begin{example} \label{example.conjugation} \[ t = \tableau[sY]{1&2&3&4&9\\ \bl&5&6&8&\bl \\ \bl&\bl&7&\bl&\bl\\} \longrightarrow \tableau[sY]{\bl&\bl&9\\ \bl&8&4 \\ 7&6&3\\ \bl&5&2\\ \bl&\bl&1\\} \longrightarrow \tableau[sY]{\bl&\bl&1\\ \bl&2&6 \\ 3&4&7\\ \bl&5&8\\ \bl&\bl&9\\} = t^\dagger \;. \] \end{example} We will need the following relations between promotion, evacuation and conjugation.
\begin{lemma} \label{lemma.identities} The operators $\partial, \partial^*, \epsilon, \epsilon^*$ and $\dagger$ obey the following relations: \begin{align*} \partial^* & = \partial^{-1} \\ \dagger^2 & = 1 \\ \dagger\partial\dagger & = \partial^* \\ \dagger\epsilon\dagger & = \epsilon^* \\ \epsilon^2 & = (\epsilon^*)^2 = 1 \\ \epsilon\partial & = \partial^*\epsilon \\ \epsilon^*\partial & = \partial^*\epsilon^* \end{align*} \end{lemma} \begin{proof} That $\partial^* = \partial^{-1}$ is immediate from their definitions in terms of the involutions $\tau_i$. The conjugation map $\dagger$ is self-inverse because both reflecting the tableau and reversing the entries are self-inverse, and commute with one another. The map $\dagger$ reverses labels but otherwise preserves the poset structure, so we have that $\dagger\tau_i\dagger = \tau_{|\lambda|-i}$. Hence $\dagger\partial\dagger = \partial^*$ and $\dagger\epsilon\dagger = \epsilon^*$. It is a result of Sch\"{u}tzenberger that $\epsilon^2 = 1$, see for example~\cite[Theorem 2.1]{stanley.2009}. The dual evacuation operator $\epsilon^*$ is the conjugate of $\epsilon$ by $\dagger$, so it is also an involution. That $\epsilon\partial = \partial^*\epsilon$ is also stated in~\cite[Theorem 2.1]{stanley.2009}. The dual statement, that $\epsilon^*\partial = \partial^*\epsilon^*$, may be obtained by conjugating the previous identity by $\dagger$. \end{proof} Given $t \in {{\sf hrSYT}}(\lambda)$, let us define the \defn{staircase pair} $(t, (t.\epsilon)^\dagger)$ as follows. Take $t$ and $(t.\epsilon)^\dagger$, and add $|\lambda|$ to each entry in $(t.\epsilon)^\dagger$. As in Figure \ref{figure.half right}, align the two tableaux so that the top cell of $(t.\epsilon)^\dagger$ is to the right of the rightmost cell of $t$, and consider the union of these two tableaux as a larger tableau. Because $t$ and $(t.\epsilon)^\dagger$ are SYT, the staircase pair $(t, (t.\epsilon)^\dagger)$ is an SYT. This construction is illustrated in the next example.
\begin{example} \label{example.staircase pair} With $t$ as in Example~\ref{example.conjugation}, we have \[ t.\epsilon = \tableau[sY]{1&2&3&4&6\\ \bl&5&7&9&\bl \\ \bl&\bl&8&\bl&\bl\\}, \quad (t.\epsilon)^\dagger = \tableau[sY]{\bl&\bl&4\\ \bl&1&6 \\ 2&3&7\\ \bl&5&8\\ \bl&\bl&9\\}, \quad (t.\epsilon)^\dagger+9 = \tableau[sY]{\bl&\bl&13\\ \bl&10&15 \\ 11&12&16\\ \bl&14&17\\ \bl&\bl&18\\} \] so that \[ (t, (t.\epsilon)^\dagger) = \tableau[sY]{1&2&3&4&9&13\\ \bl&5&6&8&10&15 \\ \bl&\bl&7&11&12&16\\ \bl&\bl&\bl&\bl&14&17\\ \bl&\bl&\bl&\bl&\bl&18\\}\;. \] \end{example} \begin{remark} We could also have defined staircase pairs using dual evacuation, because $(t.\epsilon)^\dagger = t^\dagger.\epsilon^*$. \end{remark} Note that a staircase pair $(t, (t.\epsilon)^\dagger)$ is an SYT of right-justified shape, so that by the results of Section~\ref{section.right-justified} the paths $\mathcal{L}$ and $\mathcal{R}$ cross exactly once. For $t$ of general shape $\lambda$, this crossing might take place within the subtableau $t$, within the subtableau $(t.\epsilon)^\dagger$, or overlapping each of the two. For $t \in {{\sf hrSYT}}(\Delta_n^t)$, though, the crossing must be either entirely within the subtableau $t$, or entirely within the subtableau $(t.\epsilon)^\dagger$, because there are no braid hooks crossing the boundary between the two subtableaux. Let $t \in {{\sf hrSYT}}(\lambda)$. We will now examine the relation between the paths $\L$ and $\mathcal{R}$ of a staircase pair $(t, (t.\epsilon)^\dagger)$ and the paths $\L$ and $\mathcal{R}$ of the subtableaux $t$ and $(t.\epsilon)^\dagger$. 
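The evacuation computation in Example~\ref{example.staircase pair} can be replayed on linear extensions of the cell poset. In the sketch below (Python), the conventions are our own assumptions rather than statements from the paper: $\tau_i$ swaps the entries $i$ and $i+1$ exactly when their cells are incomparable, and the evacuation operator is realized as the product $(\tau_{n-1}\cdots\tau_1)(\tau_{n-1}\cdots\tau_2)\cdots(\tau_{n-1})$ applied left to right. With these choices the computation reproduces the tableau $t.\epsilon$ of the example.

```python
# Sketch (conventions are our assumptions, chosen to match the worked example):
# a tableau is a dict {cell: entry}; tau_i swaps entries i, i+1 iff their cells
# are incomparable in the cell poset; evacuation is realized as the product
# (tau_{n-1}...tau_1)(tau_{n-1}...tau_2)...(tau_{n-1}), applied left to right.

def leq(a, b, covers):                      # a <= b in the poset
    if a == b:
        return True
    return any(leq(u, b, covers) for (l, u) in covers if l == a)

def tau(t, i, covers):
    inv = {e: cell for cell, e in t.items()}
    p, q = inv[i], inv[i + 1]
    if not leq(p, q, covers) and not leq(q, p, covers):
        t = dict(t)
        t[p], t[q] = i + 1, i
    return t

def evacuation(t, covers):
    n = len(t)
    for j in range(1, n):                   # block tau_{n-1} tau_{n-2} ... tau_j
        for i in range(n - 1, j - 1, -1):
            t = tau(t, i, covers)
    return t

# t of Example (staircase pair); covers join horizontally/vertically adjacent cells
t = {(1, 1): 1, (1, 2): 2, (1, 3): 3, (1, 4): 4, (1, 5): 9,
     (2, 2): 5, (2, 3): 6, (2, 4): 8, (3, 3): 7}
covers = [(a, b) for a in t for b in t
          if b in ((a[0], a[1] + 1), (a[0] + 1, a[1]))]

t_eps = evacuation(t, covers)
expected = {(1, 1): 1, (1, 2): 2, (1, 3): 3, (1, 4): 4, (1, 5): 6,
            (2, 2): 5, (2, 3): 7, (2, 4): 9, (3, 3): 8}
assert t_eps == expected                    # matches t.epsilon in the example
```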
Let the promotion and inverse promotion paths of the staircase pair $(t, (t.\epsilon)^\dagger)$ be denoted by $\mathcal{L}_s$ and $\mathcal{R}_s$, while the promotion and inverse promotion paths in the subtableau $t$ are denoted by $\mathcal{L}_1$ and $\mathcal{R}_1$ and the promotion and inverse promotion paths in the subtableau $(t.\epsilon)^\dagger$ are denoted by $\mathcal{L}_2$ and $\mathcal{R}_2$. We prove Proposition~\ref{proposition.cross pairing} via the following sequence of lemmas. \begin{lemma} The restriction of $\mathcal{L}_s$ to the subtableau $t$ is exactly $\mathcal{L}_1$. Likewise, the restriction of $\mathcal{R}_s$ to the subtableau $(t.\epsilon)^\dagger$ is exactly $\mathcal{R}_2$. \end{lemma} \begin{proof} Both of the paths $\mathcal{L}_s$ and $\L_1$ can be constructed by starting at the cell containing 1 and continually moving down or to the right, to whichever cell has the smaller entry. Because every entry in the subtableau $t$ is smaller than every other entry of the staircase pair tableau, these two paths will overlap until $\L_s$ leaves the subtableau $t$, at which point $\L_1$ terminates. Hence the restriction of $\mathcal{L}_s$ to $t$ is exactly $\mathcal{L}_1$. In the same way, the paths $\mathcal{R}_s$ and $\mathcal{R}_2$ can both be constructed by starting at the cell containing $2|\lambda|$ and repeatedly moving up or to the left, to whichever cell has the larger entry. These two paths will overlap until $\mathcal{R}_s$ leaves the subtableau $(t.\epsilon)^\dagger$, at which point $\mathcal{R}_2$ terminates. Therefore the restriction of $\mathcal{R}_s$ to $(t.\epsilon)^\dagger$ is exactly $\mathcal{R}_2$. \end{proof} We now state a lemma of~\cite{pon.wang.2011}, and prove a very similar dual statement in Lemma~\ref{lemma.pw2}. 
Note that with respect to~\cite{pon.wang.2011}, we have interchanged the definitions of promotion and inverse promotion, and those of evacuation and dual evacuation, following~\cite{stanley.2009} rather than~\cite{edelmann.greene.1987}. \begin{lemma}{\cite[Lemma~3.4]{pon.wang.2011}} \label{lemma.pw1} If the letter $|\lambda|$ is in cell $(i,j)$ of $t$, then the promotion path $\L$ of $t.\epsilon$ ends on cell $(i,j)$ of $t.\epsilon$. \end{lemma} \begin{proof} From Lemma~\ref{lemma.identities}, we know that $t.(\epsilon\partial) = t.(\partial^*\epsilon)$. Working first with the right hand side, we see that \[ \partial^*\epsilon = (\tau_1\tau_2\cdots\tau_{|\lambda|-2})(\tau_1\tau_2\cdots\tau_{|\lambda|-3})\cdots(\tau_1\tau_2)(\tau_1)\;. \] Note that the operator $\partial^*\epsilon$ does not move the letter $|\lambda|$, as $\tau_{|\lambda|-1}$ does not appear. Therefore the position of $|\lambda|$ is the same in $t.(\partial^*\epsilon)$ as in $t$. But $t.(\epsilon\partial) = t.(\partial^*\epsilon)$, so the position of $|\lambda|$ must be the same in $t.(\epsilon\partial)$ as in $t$. The position of $|\lambda|$ in $t.(\epsilon\partial)$ is the lower right endpoint of the path $\L$ in $t.\epsilon$, by the sliding definition of promotion. This completes the proof. \end{proof} \begin{lemma} \label{lemma.pw2} If the letter $1$ is in cell $(i,j)$ of $(t.\epsilon)^\dagger$, then the promotion path $\mathcal{R}$ of $(t.\epsilon)^\dagger.\epsilon^*$ ends on cell $(i,j)$ of $(t.\epsilon)^\dagger.\epsilon^*$. \end{lemma} \begin{proof} The proof of this lemma is analogous to that of Lemma~\ref{lemma.pw1}. From Lemma~\ref{lemma.identities}, we know that $(t.\epsilon)^\dagger.(\epsilon^*\partial^*) = (t.\epsilon)^\dagger.(\partial\epsilon^*)$. We examine the right hand side, and see that \[ \partial\epsilon^* = (\tau_{|\lambda|-1}\tau_{|\lambda|-2}\cdots\tau_2)(\tau_{|\lambda|-1}\tau_{|\lambda|-2} \cdots\tau_3)\cdots(\tau_{|\lambda|-1}\tau_{|\lambda|-2})(\tau_{|\lambda|-1})\;. 
\] The operator $\partial\epsilon^*$ does not move the letter 1, as $\tau_1$ does not appear. Therefore the position of 1 is the same in $(t.\epsilon)^\dagger.(\partial\epsilon^*)$ as in $(t.\epsilon)^\dagger$. But $(t.\epsilon)^\dagger.(\epsilon^*\partial^*) = (t.\epsilon)^\dagger.(\partial\epsilon^*)$, so the position of 1 must be the same in $(t.\epsilon)^\dagger.(\epsilon^*\partial^*)$ as in $(t.\epsilon)^\dagger$. The position of 1 in $(t.\epsilon)^\dagger.(\epsilon^*\partial^*)$ is the upper left endpoint of the path $\mathcal{R}$ in $(t.\epsilon)^\dagger.\epsilon^*$, by the sliding definition of inverse promotion. This completes the proof. \end{proof} \begin{lemma} \label{lemma.mid overlap} The path $\mathcal{R}_s$ passes through the cell containing $|\lambda|$, the maximal entry in $t$. Likewise, the path $\L_s$ passes through the cell containing $|\lambda| + 1$, the minimal entry in $(t.\epsilon)^\dagger$. \end{lemma} \begin{proof} Let the letter $|\lambda|$ be in cell $(i,j)$ of $t$. From Lemma~\ref{lemma.pw1}, we have that the promotion path $\L$ of $t.\epsilon$ ends on cell $(i,j)$ of $t.\epsilon$. The conjugation map $\dagger$ reverses the labels and interchanges the notions of (below or to the left) and (above or to the right), so it takes the path $\L$ of $t.\epsilon$ to the path $\mathcal{R}$ of $(t.\epsilon)^\dagger$. Hence, the path $\mathcal{R}_2$ of $(t.\epsilon)^\dagger$ passes through the image under $\dagger$ of the cell $(i,j)$ in $(t.\epsilon)^\dagger$. As $\mathcal{R}_s$ agrees with $\mathcal{R}_2$ up to this point, we have that $\mathcal{R}_s$ passes through the cell $(i,j)^\dagger$. By the construction of the staircase pair, this cell is immediately to the right of the cell $(i,j)$, which is in $t$. The path $\mathcal{R}_s$ moves from the cell $(i,j)^\dagger$ to the cell above or to the left, whichever has the larger entry. But both of these cells (if they exist) are in the subtableau $t$, and $|\lambda|$ is the largest entry in $t$.
Therefore $\mathcal{R}_s$ passes through $(i,j)$, the cell containing $|\lambda|$. The proof of the second part of the lemma is similar. Let the letter $|\lambda|+1$ be in cell $(i',j')$ of $(t.\epsilon)^\dagger$. From Lemma~\ref{lemma.pw2}, we have that the inverse promotion path $\mathcal{R}$ of $(t.\epsilon)^\dagger.\epsilon^*$ ends on cell $(i',j')$ of $(t.\epsilon)^\dagger.\epsilon^*$. Note that $(t.\epsilon)^\dagger.\epsilon^* = t^\dagger$, by Lemma~\ref{lemma.identities}. The conjugation map $\dagger$ takes the path $\mathcal{R}$ of $(t.\epsilon)^\dagger.\epsilon^*$ to the path $\L$ of $((t.\epsilon)^\dagger.\epsilon^*)^\dagger = t$. Therefore the path $\L_1$ of $t$ passes through the image under $\dagger$ of the cell $(i',j')$ in $t$. As $\L_s$ agrees with $\L_1$ up to this point, we have that $\L_s$ passes through the cell $(i',j')^\dagger$. By the construction of the staircase pair, this cell is immediately to the left of the cell $(i',j')$, which is in $(t.\epsilon)^\dagger$. The path $\L_s$ moves from the cell $(i',j')^\dagger$ to the cell below or to the right, whichever has the smaller entry. But both of these cells (if they exist) are in the subtableau $(t.\epsilon)^\dagger$, and $|\lambda|+1$ is the smallest entry in $(t.\epsilon)^\dagger$. Therefore $\L_s$ passes through $(i',j')$, the cell containing $|\lambda|+1$. \end{proof} \begin{lemma} The restriction of $\mathcal{R}_s$ to $t$ is exactly $\mathcal{R}_1$, and the restriction of $\L_s$ to $(t.\epsilon)^\dagger$ is exactly $\L_2$. \end{lemma} \begin{proof} The path $\mathcal{R}_1$ starts at $|\lambda|$ and moves up or to the left, to whichever cell has the larger entry. The path $\mathcal{R}_s$ moves in the same way, and passes through the cell containing $|\lambda|$ by Lemma~\ref{lemma.mid overlap}. The subtableau $t$ has no cells below or to the right of the cell containing $|\lambda|$, so the restriction of $\mathcal{R}_s$ to $t$ is exactly $\mathcal{R}_1$.
Similarly, the path $\L_2$ starts at $|\lambda|+1$ and moves down or to the right, to whichever cell has the smaller entry. The path $\L_s$ moves in the same way, and passes through the cell containing $|\lambda|+1$ by Lemma~\ref{lemma.mid overlap}. The subtableau $(t.\epsilon)^\dagger$ has no cells above or to the left of the cell containing $|\lambda|+1$, so the restriction of $\L_s$ to $(t.\epsilon)^\dagger$ is exactly $\L_2$. \end{proof} \begin{corollary} Given $t \in {{\sf hrSYT}}(\lambda)$, the path $\mathcal{L}$ of the staircase pair $(t, (t.\epsilon)^\dagger)$ is the concatenation of the paths $\mathcal{L}$ in the subtableaux $t$ and $(t.\epsilon)^\dagger$. The same is true when each $\L$ is replaced by an $\mathcal{R}$. \end{corollary} \begin{corollary} \label{corollary.crossings} Given $t \in {{\sf hrSYT}}(\lambda)$, the paths $\mathcal{L}$ and $\mathcal{R}$ of $t$ cross if and only if in the staircase pair $(t, (t.\epsilon)^\dagger)$, the paths $\mathcal{L}$ and $\mathcal{R}$ of $(t, (t.\epsilon)^\dagger)$ cross in the subtableau $t$. Likewise, the paths $\mathcal{L}$ and $\mathcal{R}$ of the staircase pair $(t, (t.\epsilon)^\dagger)$ cross in the subtableau $(t.\epsilon)^\dagger$ if and only if the paths $\mathcal{L}$ and $\mathcal{R}$ of $(t.\epsilon)^\dagger$ cross. \end{corollary} \begin{lemma} For any $t \in {{\sf hrSYT}}(\lambda)$, if the paths $\L$ and $\mathcal{R}$ cross in $t$ then in $t.\epsilon$, the paths $\L$ and $\mathcal{R}$ do not cross. If $\lambda_1 \geq \lambda_2 + 2$ and $\lambda_\ell = 1$, then the converse is also true. That is, exactly one of $t$ and $t.\epsilon$ has its paths $\L$ and $\mathcal{R}$ cross. \end{lemma} \begin{proof} If the paths $\L$ and $\mathcal{R}$ did cross in $t.\epsilon$, then in $(t.\epsilon)^\dagger$ the paths $\mathcal{R}$ and $\L$ would cross, as the map $\dagger$ takes the paths $\L$ and $\mathcal{R}$ in $t.\epsilon$ to the paths $\mathcal{R}$ and $\L$ in $(t.\epsilon)^\dagger$. 
But then by Corollary~\ref{corollary.crossings}, in the staircase pair $(t, (t.\epsilon)^\dagger)$ the paths $\L$ and $\mathcal{R}$ would cross at least twice, which contradicts the fact that in a right-justified tableau, $\L$ and $\mathcal{R}$ may cross at most once. This completes the proof of the first part of the lemma. If $\lambda_\ell = 1$, then every braid hook in the staircase pair $(t, (t.\epsilon)^\dagger)$ is entirely contained within one of the subtableaux $t$ and $(t.\epsilon)^\dagger$. This is because a braid hook spanning both subtableaux would be formed only of cells in the bottom row of $t$ and in the left column of $(t.\epsilon)^\dagger$. If $\lambda_\ell = 1$, then there are only two such cells. If $\lambda_1 \geq \lambda_2 + 2$ then the staircase pair $(t, (t.\epsilon)^\dagger)$ is a right-justified tableau whose first row is longer than its second, and with a last row of a single cell. As in the proof of Lemma~\ref{lemma.phi bijection}, these are the conditions under which we know that the paths $\L$ and $\mathcal{R}$ of $(t, (t.\epsilon)^\dagger)$ cross exactly once. Because they must cross on a braid hook, this crossing must happen entirely within one of the subtableaux $t$ and $(t.\epsilon)^\dagger$. By Corollary~\ref{corollary.crossings}, in one of the tableaux $t$ and $(t.\epsilon)^\dagger$, the paths $\L$ and $\mathcal{R}$ cross. Finally, the paths $\L$ and $\mathcal{R}$ of $(t.\epsilon)^\dagger$ cross if and only if the paths $\L$ and $\mathcal{R}$ of $t.\epsilon$ cross, completing the proof. \end{proof} We have shown that the paths $\mathcal{L}$ and $\mathcal{R}$ cannot cross in both $t$ and $t.\epsilon$, and that if $\lambda_1 \geq \lambda_2 + 2$ and $\lambda_\ell = 1$, then they cross in exactly one of those tableaux. This completes the proof of Proposition~\ref{proposition.cross pairing} and thus of Theorem~\ref{thm:braid_moves_half_right_young_tableaux}.
\subsection{Surjective Case} The map $\varphi$ of Section~\ref{section.right-justified} is a bijection, because for the tableau shapes under consideration in that section, the paths $\L$ and $\mathcal{R}$ always cross exactly once. In Section~\ref{subsection.injective}, the map $\psi$ is an injection, because for the relevant shapes, the paths $\L$ and $\mathcal{R}$ may cross either 0 or 1 times. Understanding the image of $\psi$ allows us to determine the expected number of braid hooks for a tableau of, for example, trapezoidal shape. In this section, we consider tableaux of \defn{skew right-justified shape}. Let $\mu \subset \lambda=(\lambda_1,\ldots,\lambda_\ell)$ be two partitions. Then we may consider standard tableaux of skew right-justified shape $\lambda/\mu$, denoted by ${{\sf rSYT}}(\lambda/\mu)$. If the skew shape $\lambda/\mu$ is connected (i.e., for each pair of consecutive rows, there are at least two cells (one in each row) which have a common edge), $\lambda_1>\lambda_2$ and $\lambda_\ell=1$, then the paths $\L$ and $\mathcal{R}$ in a tableau $t \in {{\sf rSYT}}(\lambda/\mu)$ must cross at least once and potentially cross more than once. In this case, the corresponding map $\psi$ is surjective. An example of a connected skew right-justified shape and a skew right-justified tableau with paths $\L$ and $\mathcal{R}$ that cross more than once is given in Figure~\ref{figure.top.diagonal}. In general, the path $\mathcal{R}$ is above $\L$ in the top left corner if $\lambda_1>\lambda_2$. In the bottom right, $\L$ is above $\mathcal{R}$ if $\lambda_\ell=1$, so the paths cross at least once. Unlike the shapes we have previously considered, it is possible for the second hook of~\eqref{equation.hooks} to appear, in the top right corner. If this happens, then the paths cross in the other direction --- from ($\L$ above $\mathcal{R}$) to ($\mathcal{R}$ above $\L$). 
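Braid-move counts over a commutation class can be tallied by brute force. The sketch below (Python; words encoded as tuples of integers, helper names our own) generates the commutation class of the word $123123121$, that is $(s_1s_2s_3)(s_1s_2s_3)(s_1s_2)(s_1)$, and counts factors $a(a+1)a$ (`up') and $(a+1)a(a+1)$ (`down'): five up moves and one down move over four words, an average difference of one.

```python
# Brute-force sketch (our own helpers): generate a commutation class by
# swapping adjacent commuting letters (those differing by at least 2), then
# count braid factors a(a+1)a ("up") and (a+1)a(a+1) ("down") in each word.

def commutation_class(word):
    seen, stack = {word}, [word]
    while stack:
        w = stack.pop()
        for i in range(len(w) - 1):
            if abs(w[i] - w[i + 1]) >= 2:
                v = w[:i] + (w[i + 1], w[i]) + w[i + 2:]
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
    return seen

def braid_moves(w):
    up = sum(1 for i in range(len(w) - 2) if w[i] == w[i + 2] == w[i + 1] - 1)
    down = sum(1 for i in range(len(w) - 2) if w[i] == w[i + 2] == w[i + 1] + 1)
    return up, down

cls = commutation_class((1, 2, 3, 1, 2, 3, 1, 2, 1))     # the word 123123121
total_up = sum(braid_moves(w)[0] for w in cls)
total_down = sum(braid_moves(w)[1] for w in cls)
assert len(cls) == 4 and (total_up, total_down) == (5, 1)
```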
\begin{figure} \[ \tableau[sY]{&&&\bl\\ \bl&&&\\ \bl&\bl&& \\ \bl&\bl&\bl&\\} \qquad \qquad \qquad \tableau[sY]{\tf \lightblue{\vrule height\squaresize width\squaresize} \overlay 1& \tf \lightblue{\vrule height\squaresize width\squaresize} \overlay 2& \tf 3&\bl\\ \bl&\lightblue{\vrule height\squaresize width\squaresize} \overlay 4&\tf \lightblue{\vrule height\squaresize width\squaresize} \overlay 5 & \lightblue{\vrule height\squaresize width\squaresize} \overlay 7\\ \bl&\bl&\tf 6 & \tf \lightblue{\vrule height\squaresize width\squaresize} \overlay 8 \\ \bl&\bl&\bl& \tf \lightblue{\vrule height\squaresize width\squaresize} \overlay 9\\} \] \caption{Left: a connected skew right-justified shape $\lambda/\mu=(4,3,2,1)/(1)$. Right: a tableau in ${{\sf rSYT}}(\lambda/\mu)$ in which the paths $\L$ and $\mathcal{R}$ cross more than once due to the jagged top right boundary.} \label{figure.top.diagonal} \end{figure} While it is possible for there to be more than one crossing, the difference between the numbers of crossings of the two types must be exactly one. That is, there is exactly one more crossing on the lower left boundary than on the upper right boundary. The precise statement is given in the following proposition. \begin{proposition} Let $\mu\subset \lambda=(\lambda_1,\ldots,\lambda_\ell)$ be two partitions such that $\lambda/\mu$ is connected, $\lambda_1>\lambda_2$, and $\lambda_\ell=1$. Then the promotion and inverse promotion paths $\L$ and $\mathcal{R}$ in a tableau $t\in {{\sf rSYT}}(\lambda/\mu)$ cross at least once, and the number of crossings from ($\mathcal{R}$ above $\L$) to ($\L$ above $\mathcal{R}$) exceeds the number of crossings in the opposite direction by exactly one.
\end{proposition} Translated back via X.~Viennot's heap map $\nu^{-1}$ to commutation classes in $\mathfrak{S}_n$, this states that in commutation classes corresponding to connected skew shapes, the expected difference between the number of braid moves of the form $s_is_{i+1}s_i$ and the number of braid moves of the form $s_{i+1}s_is_{i+1}$ is one. Note that, since the shapes are skew, the words in $\mathfrak{S}_n$ are not necessarily reduced. \begin{example} The statement corresponding to the shape of Figure \ref{figure.top.diagonal} is that the expected difference between `up' and `down' braid moves in the commutation class of the word $\mathbf{w} := (s_1s_2s_3)(s_1s_2s_3)(s_1s_2)(s_1)$ is one. Note that $\mathbf{w}$ is not reduced. We may verify this by listing the four words in this commutation class, {\color{red}121}321321, {\color{red}121}{\color{blue}323}{\color{red}121}, 123{\color{red}121}321 and 123123{\color{red}121}. Observe that there are five `up' braid moves, colored red, and one `down' braid move, colored blue. \end{example} \section{Homomesy} \label{sec:homomesy} In Section~\ref{sec:even_odd_homomesy}, we prove a refinement of Theorem~\ref{thm:braid_moves_young_tableaux} by showing that the number of braid hooks is homomesic with respect to the action of the dihedral group $\langle \tau_o, \tau_e \rangle$, where $\tau_o=\prod_{i \text{ odd}}\tau_i$ and $\tau_e=\prod_{i \text{ even}}\tau_i$ are the odd and even promotion operators, respectively. We reformulate the result in terms of reduced words in Theorem~\ref{theorem.homomesy words} and prove it in this setting. In Section~\ref{subsection.homomesy poset} we prove an analogous result for more general posets, where the statistic of descents is proven to be homomesic with respect to an even-odd action.
\subsection{Homomesy with respect to even-odd--promotion} \label{sec:even_odd_homomesy} Consider the group $G$ generated by the \defn{odd} and \defn{even promotion operators} $\tau_o=\prod_{i \text{ odd}}\tau_i$ and $\tau_e=\prod_{i \text{ even}}\tau_i$, respectively. Note that, within each operator, the $\tau_i$'s commute. Hence their relative order is not relevant, which implies that $\tau_o$ and $\tau_e$ are involutions. In particular, $G$ is a dihedral group. In this section, we prove the following generalization of Theorem~\ref{thm:braid_hooks_homomesy_even_odd}, which corresponds to the special case $\lambda=\Delta_n$. \begin{theorem} \label{theorem.homomesy} The number of braid hooks is homomesic with respect to the action of the dihedral group $\langle \tau_o,\tau_e\rangle$ on ${{\sf rSYT}}(\lambda)$ if and only if $\lambda_1>\lambda_2$ and $\lambda_\ell=1$ for a partition $\lambda$ with $\ell$ parts. \end{theorem} We reformulate this result in terms of reduced words. Define ${{\sf rW}}(\lambda)$ to be the commutation class of reduced words which under Viennot's bijection correspond to ${{\sf rSYT}}(\lambda)$: \[ \nu \colon {{\sf rW}}(\lambda) \to {{\sf rSYT}}(\lambda)\;. \] For $\lambda=\Delta_n$ we recover the commutation class of the reduced word $\mathbf{w}_0$ for $w_0$, that is, ${{\sf rW}}(\Delta_n)={{\sf Red}}(\mathbf{w}_0)$. \begin{theorem} \label{theorem.homomesy words} The number of braid moves in ${{\sf rW}}(\lambda)$ has expected value at most one. Furthermore, the expected number of braid moves is one if and only if $\lambda$ satisfies \begin{equation} \label{eq.lambda_condition} \lambda_1>\lambda_2 \qquad \text{and} \qquad \lambda_\ell=1 \end{equation} for a partition $\lambda$ with $\ell$ parts or, equivalently, if every word $\mathbf{w} \in {{\sf rW}}(\lambda)$ satisfies $\mathbf{w}_1\leq \mathbf{w}_3$ and $\mathbf{w}_{N-2}\geq \mathbf{w}_N$ where $N=|\lambda|$. 
In this case, the number of braid moves is homomesic with respect to $\langle \tau_o, \tau_e \rangle$-orbits. \end{theorem} Note that the analogous statement fails for $n=7$ if one replaces the group $\langle \tau_o, \tau_e \rangle$ by the cyclic group generated by the gyration operator $\tau_o\tau_e$ or any order two subgroup. Hence, this theorem provides an example of a homomesy under a dihedral group action, which in general is not homomesic under the cyclic subgroup generated by $\tau_o \tau_e$ or order two subgroups (and hence any abelian subgroup by~\cite[Lemma 1]{roby.2015}). We also note that the bijection $\varphi$ of \eqref{equation.varphi} does not preserve $\langle \tau_o,\tau_e\rangle$-orbits, so one cannot use it to prove Theorem~\ref{theorem.homomesy words} (or equivalently, Theorem~\ref{theorem.homomesy}). To prove Theorem~\ref{theorem.homomesy words}, we define a $\langle \tau_e, \tau_o \rangle$-orbit preserving map \begin{equation} \Phi \colon \{ (k,\mathbf{w}) \mid \mathbf{w} \in {{\sf rW}}(\lambda), \text{$k$ a braid in $\mathbf{w}$} \} \to {{\sf rW}}(\lambda) \end{equation} by \begin{equation} \Phi(k,\mathbf{w}) := \mathbf{w}. \tau_{o(k-2)}\cdots \tau_{o(1)}\;, \end{equation} where for convenience: \begin{equation} \label{equation.tau oi} \tau_{o(i)} := \begin{cases} \tau_o & \text{if $i$ is odd},\\ \tau_e & \text{if $i$ is even.} \end{cases} \end{equation} Theorem~\ref{theorem.homomesy words} is then a direct consequence of the following lemma. \begin{lemma} \label{lemma.gyration} $\Phi$ is injective. Furthermore $\Phi$ is a bijection if and only if $\lambda$ satisfies Equation~\eqref{eq.lambda_condition}. \end{lemma} To prove Lemma~\ref{lemma.gyration}, we need some preliminary notation and results. For simplicity, we write all reduced words $s_{i_1} \cdots s_{i_k}$ simply as a word $i_1 \ldots i_k$. Take $\mathbf{w}\in {{\sf rW}}(\lambda)$.
Recall that $\mathbf{w}$ cannot contain a factor of the form $aa$ (which we call the \defn{quadratic rule}) and that, if it contains a factor of the form $aba$, then $aba=a(a+1)a$ (which we call the \defn{braid rule}) by a slight extension of Lemma~\ref{lemma.up down braid}. We say that $1<k<N$ is a \defn{braid} in $\mathbf{w}$ if there is a braid $a(a+1)a$ with the $a+1$ in position $k$ of $\mathbf{w}$. For $j\ge 0$, define \begin{equation} \label{equation.wj} \mathbf{w}^{(j)} := \mathbf{w}. \tau_{o(1)} \cdots \tau_{o(j)}\;. \end{equation} Note that $\mathbf{w}^{(j)}$ runs through the $\langle \tau_o, \tau_e \rangle$-orbit of $\mathbf{w}$. As it moves through the first half of the orbit, we follow what happens in a moving window of length $2$, setting $a_i:=\mathbf{w}^{(i-2)}_{i-1}$ and $c_i:=\mathbf{w}^{(i-2)}_{i+1}$. Here is an example for $\mathbf{w}=1231423121 \in {{\sf Red}}(\mathbf{w}_0)$: \renewcommand{\r}[1]{{\color{red}#1}} \begin{displaymath} \begin{array}{|c|c|c|c|c|} \hline i & \mathbf{w}^{(i-2)} & a_i & c_i & c_i-a_i \\\hline 2 & \r12\r31423121 & 1 & 3 & 2\\ 3 & 1\r21\r3241321 & 2 & 3 & 1\\ 4 & 12\r13\r214321 & 1 & 2 & 1\\ 5 & 123\r12\r14321 & 1 & 1 & 0\\ 6 & 1231\r24\r1321 & 2 & 1 & -1\\ 7 & 12134\r23\r121 & 2 & 1 & -1\\ 8 & 121342\r31\r21 & 3 & 2 & -1\\ 9 & 1231241\r32\r1 & 3 & 1 & -2\\\hline \end{array} \end{displaymath} Note that there exists a unique position $k$ where $a_k=c_k$, namely $k=5$; for $i<k$, $a_i<c_i$ while for $i>k$, $a_i>c_i$. In fact, $k$ is the position of a braid in $\mathbf{w}^{(k-2)}$. This implies that $\mathbf{w}$ admits exactly one preimage by $\Phi$, namely $\Phi^{-1}(\mathbf{w})=(k, \mathbf{w}^{(k-2)})$. We now move on to proving that this is a general feature whenever $\lambda$ satisfies Equation~\eqref{eq.lambda_condition}; this implies that $\Phi$ is indeed a bijection. 
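The moving-window computation in the table above can be reproduced mechanically. The sketch below (Python; words as tuples of integers, with $\tau_i$ swapping the letters in positions $i$ and $i+1$ when they differ by at least two, an encoding of our own) recomputes the orbit $\mathbf{w}^{(j)}$ and recovers the unique position $k=5$.

```python
# Sketch (our encoding): a word is a tuple of positive integers; tau_i swaps
# the letters in positions i, i+1 (1-based) when they differ by at least 2.

def tau(w, i):
    if abs(w[i - 1] - w[i]) >= 2:
        w = w[:i - 1] + (w[i], w[i - 1]) + w[i + 1:]
    return w

def tau_odd(w):
    for i in range(1, len(w), 2):        # tau_1 tau_3 ... (disjoint swaps)
        w = tau(w, i)
    return w

def tau_even(w):
    for i in range(2, len(w), 2):        # tau_2 tau_4 ...
        w = tau(w, i)
    return w

# w^{(j)} = w . tau_{o(1)} ... tau_{o(j)}, as in the definition above
w = (1, 2, 3, 1, 4, 2, 3, 1, 2, 1)
orbit = [w]
for j in range(1, len(w) - 2):
    orbit.append(tau_odd(orbit[-1]) if j % 2 == 1 else tau_even(orbit[-1]))

# a_i = w^{(i-2)}_{i-1} and c_i = w^{(i-2)}_{i+1}; find all i with a_i = c_i
ks = [i for i in range(2, len(w)) if orbit[i - 2][i - 2] == orbit[i - 2][i]]
assert ks == [5]                         # the unique position in the table
```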
When the conditions are not satisfied, uniqueness still holds but existence fails for at least one word $\mathbf{w}\in {{\sf rW}}(\lambda)$, and surjectivity will be lost. \begin{lemma} \label{lemma.comparison} Let $\mathbf{w} \in {{\sf rW}}(\lambda)$, $1<i<N-1$ and define $\mathbf{w}'=\mathbf{w}.\tau_{o(i+1)}$. Then, $\mathbf{w}_{i-1}<\mathbf{w}_{i+1}$ if and only if $\mathbf{w}'_{i}\le \mathbf{w}'_{i+2}$. \end{lemma} \begin{proof} Let $abcd$ and $xyzt$ be the subwords of $\mathbf{w}$ and $\mathbf{w}'$ at positions $i-1,\ldots,i+2$. With this notation, we want to prove that \begin{equation} \label{equation.sign} c-a > 0 \Longleftrightarrow t-y \geq 0\;. \end{equation} From the action of $\tau_{o(i+1)}$, we have $xy=ab$ if $b=a\pm 1$ and $xy=ba$ otherwise. Similarly, $zt=cd$ if $d=c\pm 1$ and $zt=dc$ otherwise. It follows that $t-y$ differs from $c-a$ by at most $\pm 2$. A counterexample to Equation~\eqref{equation.sign} can therefore only occur if $c-a$ is close to zero, namely in one of the following three cases: \smallskip \noindent \textbf{Case 1:} $c-a=-2$ and $t-y=0$; from the action of $\tau_{o(i+1)}$, one necessarily has $xyzt=abcd=ab(a-2)b$ with $b=a-1$; this is forbidden by the braid rule. \smallskip \noindent \textbf{Case 2:} $c-a=0$; then by the braid rule $abcd=a(a+1)ad$; since $\mathbf{w}$ is reduced $d\ne a+1$; if $d=a-1$ then $xyzt=a(a+1)a(a-1)$ and $t-y=-2<0$; otherwise $xyzt=a(a+1)da$ and $t-y=-1$; in both cases Equation~\eqref{equation.sign} is satisfied. \smallskip \noindent \textbf{Case 3:} $c-a=\epsilon$ with $\epsilon=\pm 1$; from the action of $\tau_{o(i+1)}$, $xyzt$ takes one of the following forms: \begin{equation} xyzt = \begin{cases} a (a\pm 1)(a+\epsilon)(a+\epsilon\pm 1),\\ a (a\pm 1)d(a+\epsilon),\\ ba(a+\epsilon)(a+\epsilon\pm 1),\\ bad(a+\epsilon).\\ \end{cases} \end{equation} If the third form is $ba(a+1)a$, then $\epsilon=1$, $t-y=0$, and Equation~\eqref{equation.sign} is satisfied.
Otherwise, using the quadratic and braid rules one further deduces that $y=a-\epsilon$ in the first two forms and that $t=a+2\epsilon$ in the third form; it follows that, in all forms, $t-y$ has the same sign as $\epsilon$ and Equation~\eqref{equation.sign} is satisfied. \end{proof} \begin{lemma} \label{lemma.unique} Let $\mathbf{w}\in {{\sf rW}}(\lambda)$ and define $\mathbf{w}^{(j)}$ as in~\eqref{equation.wj}. Then there exists at most one $1<k<N$ such that \begin{equation} \label{equation.equality} \mathbf{w}_{k-1}^{(k-2)} = \mathbf{w}_{k+1}^{(k-2)}. \end{equation} If $\lambda$ further satisfies Equation~\eqref{eq.lambda_condition}, then existence is guaranteed. \end{lemma} \begin{proof} Since $\tau_{o(i-1)}=\tau_{o(i+1)}$, the statement of Lemma~\ref{lemma.comparison} applied to $\mathbf{w}^{(i-2)}$ can be reformulated as: $\mathbf{w}^{(i-2)}_{i-1}\ge \mathbf{w}^{(i-2)}_{i+1}$ if and only if $\mathbf{w}^{(i-1)}_{i}>\mathbf{w}^{(i-1)}_{i+2}$. Hence, if $\mathbf{w}^{(k-2)}_{k-1}=\mathbf{w}^{(k-2)}_{k+1}$, then $\mathbf{w}^{(j-2)}_{j-1}> \mathbf{w}^{(j-2)}_{j+1}$ for all $j>k$. This implies uniqueness. Suppose now that $\lambda$ satisfies Equation~\eqref{eq.lambda_condition} and that there is no $k$ such that~\eqref{equation.equality} holds. Using that $\lambda_1>\lambda_2$ and $\lambda_\ell=1$, it follows that $\mathbf{w}^{(0)}_1<\mathbf{w}^{(0)}_3$ and $\mathbf{w}^{(N-3)}_{N-2}>\mathbf{w}^{(N-3)}_N$. Note that $\mathbf{w}^{(i+1)} = \mathbf{w}^{(i)}.\tau_{o(i+1)}$, so that we can move from $\mathbf{w}^{(0)}$ to $\mathbf{w}^{(N-3)}$ by successive applications of the operator $\tau_{o(i+1)}$ for $1<i<N-1$. By Lemma~\ref{lemma.comparison}, it is not possible to move directly from $\mathbf{w}_{i-1}^{(i-2)} < \mathbf{w}_{i+1}^{(i-2)}$ to $\mathbf{w}_{i}^{(i-1)} > \mathbf{w}_{i+2}^{(i-1)}$. This proves the existence of a $k$ such that~\eqref{equation.equality} holds.
\end{proof} \begin{proof}[Proof of Lemma~\ref{lemma.gyration}] Recall that, by the braid rule, for any $\mathbf{w}\in {{\sf rW}}(\lambda)$ and any position $i$, the equality $\mathbf{w}_{i-1}=\mathbf{w}_{i+1}$ occurs if and only if $i$ is a braid of $\mathbf{w}$. Assume first that $\lambda$ satisfies Equation~\eqref{eq.lambda_condition}. Take $\mathbf{w} \in {{\sf rW}}(\lambda)$. By Lemma~\ref{lemma.unique}, there exists a unique $k$ with $1<k<N$ such that $k$ is a braid of $\mathbf{w}^{(k-2)}$. Hence $(k, \mathbf{w}^{(k-2)})$ is the unique preimage of $\mathbf{w}$ by $\Phi$. Therefore, $\Phi$ is a bijection, as desired. Otherwise Lemma~\ref{lemma.unique} still guarantees that there exists at most one preimage of $\mathbf{w}$ by $\Phi$; hence $\Phi$ is still an injection. However, if $\lambda_1=\lambda_2$, there exists a word of the form $\mathbf{w}=120\cdots$ in ${{\sf rW}}(\lambda)$; for this word, $a_2>c_2$ and therefore $a_i>c_i$ for $2\leq i<N$; hence $k$ is never a braid of $\mathbf{w}^{(k-2)}$, and $\mathbf{w}$ is not in the image of $\Phi$. When instead $\lambda_\ell>1$ there exists some word of the form $\mathbf{w}=\cdots021$ in ${{\sf rW}}(\lambda)$, and the same argument applies. Therefore, in both cases, $\Phi$ is not surjective. \end{proof} \begin{remark} It would be interesting to explain the homomesy property stated in this section by finding an equivariant bijection from right-justified tableaux (equipped with the action of the even and odd promotion operators) to some other combinatorial model equipped with a natural dihedral action. \end{remark} \subsection{Homomesy for posets} \label{subsection.homomesy poset} As discussed in Section~\ref{section.heaps}, the set ${{\sf rSYT}}(\lambda)$ can be viewed as the set of linear extensions of a poset with a unique minimal and maximal element. In this section, we provide a homomesy result of similar nature for posets, where the statistic is descents with respect to order ideals.
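This descent statistic, and the counting identity it satisfies (Corollary~\ref{cor:edges} below), can be spot-checked by brute force. The sketch below (Python; a poset given by an explicit list of cover relations, with labels taken as $0,\ldots,n-1$ rather than $1,\ldots,n$, an encoding of our own) verifies on the four-element diamond poset that the number of linear extensions equals the total number of descents into the complement of a fixed proper order ideal.

```python
# Brute-force sketch (our own encoding): a poset is a list of cover relations
# (p, q) meaning p is covered by q; labels are 0..n-1 instead of 1..n, which
# leaves the descent condition "label of q = label of p + 1" unchanged.
from itertools import permutations

elements = ["0", "a", "b", "1"]                          # diamond poset
covers = [("0", "a"), ("0", "b"), ("a", "1"), ("b", "1")]

def linear_extensions():
    for perm in permutations(elements):
        rank = {x: i for i, x in enumerate(perm)}
        if all(rank[p] < rank[q] for p, q in covers):
            yield rank

def descents(rank, ideal):
    # p in the ideal, covered by q outside it, with q labeled rank(p) + 1
    return [p for p, q in covers
            if p in ideal and q not in ideal and rank[q] == rank[p] + 1]

ideal = {"0", "a"}                                       # a proper order ideal
exts = list(linear_extensions())
assert len(exts) == sum(len(descents(L, ideal)) for L in exts)
```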
Let $P$ be a finite poset with $n:=|P|$. Denote by $\mathcal{L}(P)$ the set of linear extensions of $P$ and by $\mathcal{J}(P)$ the set of order ideals of $P$. For $L \in \mathcal{L}(P)$ and $I \in \mathcal{J}(P)$, let \[ {\sf des}_I(L):=\{ p \in I \mid p \lessdot L^{-1}(L(p)+1) \not \in I \} \] be the set of elements $p$ of $I$ that are covered by an element not in $I$ whose labeling under $L$ is exactly one greater than the label of $p$. We call an element $p \in {\sf des}_I(L)$ a \defn{descent} of $L$. We can define operators $\tau_i$ for $1\le i < n$ on a linear extension $L$ by interchanging $i$ and $i+1$ in $L$ if the result is a linear extension of $P$, and by fixing $L$ otherwise. As before, $\tau_o=\prod_{i \text{ odd}}\tau_i$, $\tau_e=\prod_{i \text{ even}}\tau_i$, and $\tau_{o(i)}$ as in~\eqref{equation.tau oi}. \begin{theorem} Let $P$ be a poset with minimal element $\hat{0}$ and maximal element $\hat{1}$, and fix $I \in \mathcal{J}(P)\setminus \{ \emptyset,P\}$. Then there is a $\langle \tau_o,\tau_e\rangle$-orbit-preserving bijection between $\{(p,L) \mid L \in \mathcal{L}(P), p \in {\sf des}_I(L)\}$ and $\mathcal{L}(P)$. In particular, the number of descents in $\mathcal{L}(P)$ is homomesic with respect to $\langle \tau_o,\tau_e\rangle$-orbits, with expected value one. \label{thm:edges} \end{theorem} \begin{proof} Given $L \in \mathcal{L}(P)$, consider the sequence of linear extensions $L_1,L_2,\ldots,L_n$ defined as $L_1:=L$ and $L_{i+1}:=L_i.\tau_{o(i)}.$ As $i$ increases, the sequence of elements of $P$ labeled by $i$ in $L_i$ forms a path from $\hat{0}$ to $\hat{1}$ as follows. At each step from $L_i$ to $L_{i+1}$, there are two choices: \begin{itemize} \item if $\tau_{o(i)}$ swaps the labels $i$ and $i+1$, then our path remains constant; \item otherwise, $i+1$ covers $i$ and so we have extended the path.
\end{itemize} Since this is a path from $\hat{0}$ to $\hat{1}$, there is a unique position $k:=k(L)$ in the sequence $L_1,L_2,\ldots,L_n$ such that $L_{k-1}^{-1}(k-1) \in I$ but $L_{k}^{-1}(k) \not \in I$. We may therefore define the $\langle \tau_o,\tau_e\rangle$-orbit-preserving bijection \[ \Phi: \{(p,L) \mid L \in \mathcal{L}(P), p \in {\sf des}_I(L)\} \to \mathcal{L}(P) \] by \[ \Phi(p,L) := L.\tau_{o(L(p))}.\tau_{o(L(p)-1)}.\ldots.\tau_{o(1)}\;.\qedhere \] \end{proof} \begin{corollary} Let $P$ be a poset with $\hat{0}$ and $\hat{1}$, and fix $I \in \mathcal{J}(P)\setminus \{ \emptyset,P\}.$ Then \[ |\mathcal{L}(P)| = \sum_{L \in \mathcal{L}(P)} |{\sf des}_I(L)|\;. \] \label{cor:edges} \end{corollary} H.~Thomas has kindly provided a beautiful geometric proof of Corollary~\ref{cor:edges}. We recall that the \defn{order polytope} $\mathcal{O}(P)$ of $P$ is the $n$-dimensional polytope in $\mathbb{R}^P$, whose vertices are given by the points $\{\mathbbm{1}_I \mid I \in \mathcal{J}(P)\}.$ The volume of $\mathcal{O}(P)$ is equal to $|\mathcal{L}(P)|/n!$, and the facets of $\mathcal{O}(P)$ are indexed by covers $e:=p \lessdot q$ of $P$; restricting to a facet $F_e$, we see that its volume is given by $|\mathcal{L}(P_e)|/(n-1)!$, where $P_e$ is $P$ with the edge $e$ contracted. In other words, the volume of the facet $F_e$ counts the number of linear extensions of $P$ such that $L(p)+1=L(q)$. For more details, see~\cite{stanley.1986}. \begin{proof}[Proof of Corollary~\ref{cor:edges} (H.~Thomas)] Let $E$ be the set of covers $\{ p \lessdot q \mid p \in I, q \not \in I\}.$ Then the order polytope $\mathcal{O}(P)$ decomposes as the union of the cones with apex given by the vertex $\mathbbm{1}_I$ over the facet $F_e$, for $e \in E$: \[ \mathcal{O}(P) = \bigcup_{e \in E} \mathsf{Conv}(\mathbbm{1}_I,F_e)\;.
\] Since the volume of the order polytope is $|\mathcal{L}(P)|/n!$ and the cones all have height one, taking volumes in the decomposition above gives: \begin{align*} \frac{|\mathcal{L}(P)|}{n!} & = \mathsf{Vol}(\mathcal{O}(P)) = \sum_{e \in E} \mathsf{Vol}(\mathsf{Conv}(\mathbbm{1}_I,F_e)) \\ & = \sum_{e \in E} \frac{1}{n} \cdot \frac{|\mathcal{L}(P_e)|}{(n-1)!} = \frac{1}{n!}\sum_{p \lessdot q \in E} \sum_{L \in \mathcal{L}(P)} \mathbbm{1}_{L(p)+1=L(q)} = \frac{1}{n!} \sum_{L \in \mathcal{L}(P)} |{\sf des}_I(L)|\;. \end{align*} \end{proof} \begin{remark} It would be interesting to refine the previous proof to a bijection. \end{remark} It would be desirable to extend this geometric viewpoint to the previous parts of this paper. \begin{remark} Is there a geometric proof of Theorem~\ref{theorem.homomesy}? It is natural to interpret a braid hook as the codimension-2 face of the order polytope coming from the intersection of the two facets corresponding to the relevant edges. The problem is to again come up with a decomposition of the order polytope by coning (now twice!) over all such faces. \end{remark} \bibliographystyle{alpha}
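As an illustrative sanity check not present in the original argument, Corollary~\ref{cor:edges} can be verified by brute force on a small example. The following Python sketch (all function names are our own) enumerates the linear extensions of the diamond poset and counts $I$-descents:

```python
from itertools import permutations

# Diamond poset on {0, 1, 2, 3}: cover relations 0<1, 0<2, 1<3, 2<3,
# so 0 plays the role of 0-hat and 3 the role of 1-hat.
covers = {(0, 1), (0, 2), (1, 3), (2, 3)}
elements = [0, 1, 2, 3]

def linear_extensions():
    """All order-preserving labelings L: element -> {1, ..., n}."""
    exts = []
    for perm in permutations(elements):
        L = {p: i + 1 for i, p in enumerate(perm)}
        if all(L[a] < L[b] for a, b in covers):
            exts.append(L)
    return exts

def descents(L, I):
    """Elements p of I covered by some q not in I with L(q) = L(p) + 1."""
    return [p for p in I
            if any((p, q) in covers and q not in I and L[q] == L[p] + 1
                   for q in elements)]

I = {0, 1}  # a proper, nonempty order ideal
exts = linear_extensions()
total = sum(len(descents(L, I)) for L in exts)
# The corollary asserts that |L(P)| equals the total number of I-descents.
```

Here the diamond poset has two linear extensions, and for $I=\{0,1\}$ both $I$-descents sit on a single extension, matching the corollary.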
\section*{Acknowledgment} \footnotesize \fi Part of this work has been supported by Deutsche Forschungsgemeinschaft (DFG) within the Collaborative Research Center SFB 876 ``Providing Information by Resource--Constrained Analysis'', projects A4 and B4, as well as by the German Federal Ministry of Education and Research (BMBF) for the project A--DRZ (Establishment of the German Rescue Robotics Center, 13N14857) and the Ministry of Economic Affairs, Innovation, Digitalization and Energy of the state of North Rhine--Westphalia in the course of the Competence Center 5G.NRW under grant number 005--01903--0047. \ifacm \end{acks} \fi \section{\acs{RIS}--enhanced Hybrid Vehicular Communications} \label{sec:approach} As discussed earlier, the controlled utilization of additional reflected paths may not only enhance the multipath richness or rank of \ac{MIMO} channels, but also extend the network coverage in terms of an improved accessibility of \ac{NLOS} regions. This especially applies to wireless communications at the \ac{mmWave} and \si{\tera\hertz} bands, where shorter wavelengths allow for reduced size requirements of \ac{RIS} installations. However, vehicular communications comes with various challenges that could be met by utilizing the \ac{RIS} technology, as indicated in \autoref{fig:intro:usecases}. The utilization of the \ac{mmWave} and \si{\tera\hertz} bands relies heavily on directional antennas, primarily with an electrically steerable main lobe (e.g. through a phased array). Due to the directional propagation of these signals, the \textit{beams} need to track mobile users. For this reason, beam management is a crucial challenge of current \acs{5G} networks operating in the so--called \ac{FR2}, i.e. the \ac{mmWave} domain. With \acsp{RIS}, the available coverage area is extended, while beam management methods may still be applicable to the reflection beams. This allows mobile network supply to persist even during \ac{LOS} blockages.
Nevertheless, vehicular mobility still involves more complexity than so--called \ac{FWA}, e.g. for broadband provisioning of stationary network subscribers. For a proper alignment of the reflection direction, a precise estimation of the \ac{CSI} may be required and could be supported by geometry--based computations utilizing location and map information. Beyond that, even the volatile \ac{V2V} links may benefit from the reliable and predictable availability of static \acs{RIS} installations dedicated to the purpose of enhanced coverage or network availability. In this regard, a great advantage of \acp{RIS} could lie in their low deployment and operation costs. Due to their mostly passive nature, they have low energy demands and, unlike additional base stations, they do not require a high--capacity communication backhaul, but only some control link, which could be realized by in--band or out--of--band signaling. For this reason, the fixed installation of \acp{RIS} on building surfaces, noise barriers, and other static surfaces in road traffic would lead to a \ac{SRE} for several applications, including but not limited to vehicular ones. In addition, a dynamic and on--demand provisioning of \acp{RIS} by mobile entities like \acp{UAV} is conceivable, as proposed in~\cite{Abdalla/etal/2020a,Zhang/etal/2019}. Mobility in three--dimensional space allows for high flexibility and adaptability to varying requirements. For example, in case of a traffic accident, a \ac{UAV} with a mounted \ac{RIS} could increase the network capacity at the crash site. This may allow for forwarding comprehensive and detailed situation information and for preparing the rescue forces. In addition, such a \acs{UAV} may supply \acs{RIS}--enhanced coverage to absorb network load peaks for a cluster of vehicles within a traffic congestion.
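Such a geometry--based computation can be sketched as follows. This is a simplified, two--dimensional illustration under our own assumptions (positions known exactly, planar \ac{RIS} with a known surface normal), not the actual \ac{CSI} estimation procedure:

```python
import math

def steering_angles(tx, ris, rx, normal):
    """Angles of incidence and desired reflection (measured from the RIS
    surface normal) that a controller would configure, given 2-D positions.
    All inputs are (x, y) tuples; `normal` must be a unit vector."""
    def unit(v):
        n = math.hypot(*v)
        return (v[0] / n, v[1] / n)
    to_tx = unit((tx[0] - ris[0], tx[1] - ris[1]))
    to_rx = unit((rx[0] - ris[0], rx[1] - ris[1]))
    # Angle between each propagation direction and the surface normal.
    theta_in = math.acos(to_tx[0] * normal[0] + to_tx[1] * normal[1])
    theta_out = math.acos(to_rx[0] * normal[0] + to_rx[1] * normal[1])
    return math.degrees(theta_in), math.degrees(theta_out)

# Base station on the RIS boresight, vehicle offset to the south-east;
# the RIS faces east (+x). All coordinates are illustrative.
ti, to = steering_angles(tx=(100, 0), ris=(0, 0), rx=(50, -50), normal=(1, 0))
```

For this geometry, the controller would configure an anomalous reflection from $0^{\circ}$ incidence to $45^{\circ}$ departure.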
Thus, dynamic cluster hovering constitutes another challenge of dynamic coverage enhancement by means of dedicated \acs{RIS}--\acsp{UAV}. The dynamic placement of \acp{RIS} may even accelerate the exploration of locations best suited for a fixed and permanent deployment. In this way, machine learning is applicable not only to optimize the \ac{UAV} trajectory, but also to derive recommendations for static \acs{RIS} installations~\cite{Ntontin/etal/2020a,Venturini/etal/2021a}. Even cooperative mechanisms for proactively steering \acs{mmWave} beams and their \acs{RIS}--controlled reflections based on the predicted trajectories of the mobile vehicles constitute a promising field of research. Initial feasibility studies such as~\cite{Mavromatis/etal/2017a} could be extended towards predictive steering algorithms, which among other objectives may take a reduction of handoffs between \acs{LOS} and \acs{RIS} reflection paths into account. \fig{t}{fig/introduction} {Different \ac{RIS} integration options within hybrid vehicular networks. The application of~\acsp{RIS} is believed to boost the performance of vehicular communications not only in case of static deployments but also when mounted on~\acsp{UAV}. In addition to a (static or dynamic) placement exclusively dedicated to improving mobile networks, a ubiquitous \acs{RIS} deployment may be utilized opportunistically in the future.}{fig:intro:usecases} \wfig{t}{fig/integration} {Details on the \acs{RIS} model integration into and extension of our joint mobility and network simulation framework \acs{LIMoSim} with \mbox{\acs{ns3}/\textit{5G--LENA}}. On the basis of generic \acs{ns3} classes, the \acs{3GPP} channel model and \acs{LIMoSim} mobility framework are extended to account for \acs{RIS}--enabled propagation paths in addition to the direct paths, which might be obstructed by buildings in some circumstances.
}{fig:modeldetails}{0.95} Future vehicular communication environments may integrate the \ac{RIS} technology even deeper: As a vision, all kinds of vehicles could be coated with meta--material. Like the holographic \acs{MIMO} surfaces in~\cite{Huang/etal/2020}, this outer layer could be used both for active communication of the vehicle itself and for supporting the \ac{SRE} for wireless communications in its vicinity. This would lead to a vast amount of available \acsp{RIS} in urban and vehicular environments. In contrast to the previously discussed dedicated (static or dynamic) deployments, whose primary purpose is to enhance wireless communications, such a massive amount of \acsp{RIS} would enrich the \acs{SRE} as a by--product. At the same time, the vehicles and objects with integrated \acs{RIS} functionality pursue their usual transportation tasks and opportunistically offer a reflection occasion, like the bus depicted at the bottom right of \autoref{fig:intro:usecases}. In general, vehicular applications necessitate a timely control of the \acs{RIS} configuration (particularly its reflection direction) due to the highly mobile nature and the resulting volatile radio channel. Their opportunistic utilization requires a distributed and predictive control and exploitation of the \acs{RIS} resources. In particular, platoons of trucks and \ac{UAV} swarms may profit from such a design, since the former embody huge obstructions and the latter lack multipath richness in the aerial wireless channel. Even innovative business models are imaginable, where surface resources could be offered to be leveraged on demand for intelligent reflections. \section{Conclusion} \label{sec:conclusion} The \acs{RIS} technology is believed to bear great potential for future wireless communications. In particular, these intelligent surfaces can be leveraged to facilitate the utilization of the volatile \acs{mmWave} and \si{\tera\hertz} bands under obstructed \acs{LOS} conditions.
In this work, we introduced concepts for the application of \acsp{RIS} in vehicular environments, where not only a dedicated deployment of static or mobile \acs{RIS} resources enriches the \acs{SRE}, but also their opportunistic utilization when installed on any kind of building or vehicle surface that is not (primarily) dedicated to enhancing the network supply. The proposed system architecture model gives insights into applications like \acs{RIS} network planning, while our sophisticated simulation framework has been extended to allow for a joint mobility and network based system--level evaluation of \acs{RIS} reflections and \acsp{SRE}. Results show that static and dynamic deployments of \acsp{RIS} can successively eliminate \textit{dark zones} and reduce the path loss, allowing for a balance between the number of introduced \acsp{RIS} and the link performance. The discussed opportunistic utilization and the thereby required distributed control of the configured reflection direction through the network will be addressed in future work. \section{Introduction} \label{sec:introduction} \Acp{RIS} show disruptive potential in enabling beyond \ac{LOS} communications in vehicular networks for technologies with a critical dependency on unobstructed \ac{LOS} transmissions~\cite{Wu/etal/2020, AlHilo/etal/2021a}. The high data requirements of novel applications such as autonomous and connected driving have motivated the exploration of the \ac{mmWave} spectrum and \si{\tera\hertz} bands in search of higher bandwidths. Network communications in these bands, however, suffer greatly from obstacle--induced path loss, thereby posing an additional challenge to their utilization in urban environments. The \ac{RIS} concept has been presented as a promising solution for beyond \ac{LOS} coverage in \ac{mmWave} vehicular networks in~\cite{Heimann/etal/2020}.
The general potential for coverage enhancement was illustrated through the simulative investigation of a vehicular application use case using our \acf{LIMoSim} from~\cite{Sliwa/etal/2020f}, a mobility and network co--simulation framework with particular support for hybrid aerial and ground--based vehicular networks. Based on these preparatory works, \autoref{fig:architecture} depicts the proposed system architecture model. It combines capabilities for environment definitions, various mobility models, and an integrated network simulator based on ns--3 and its recent extension \textit{5G--LENA} for \acs{5G} networks from~\cite{Patriciello/etal/2019}, which has been extended by a \acs{RIS} channel model as elaborated in more detail in the further course. Besides contributing a system--level simulation framework, this work also presents novel solution approaches offered by the integration of the \acs{RIS} technology into future vehicular networks based on \autoref{fig:intro:usecases}. While the strategic deployment of \acp{RIS} in urban environments can enhance network coverage and facilitate \ac{V2V} communications, a more flexible exploitation of the \ac{RIS} potential lies in extending their mobility. By embedding \acp{RIS} on \acp{UAV}, which offer higher degrees of freedom regarding position and trajectory, a more flexible and responsive network provisioning can be realized. Reacting in a more timely manner to sudden situations or events such as accidents and traffic jams would also become possible. In addition, ubiquitous \acs{RIS} deployments in a \acf{SRE} would allow for a controlled but opportunistic utilization of \acs{RIS}--reflected communication paths. This utilization is considered opportunistic since, in contrast to the operation of a \ac{UAV} dedicated to offering a \ac{RIS} reflection occasion, most of the vehicles follow their objectives to deliver goods or persons, and thus their mounted \acp{RIS} are only available by chance.
The contributions provided by this work are as follows: \begin{itemize} \item Investigation of the potential of intelligent surfaces and \acp{SRE} for hybrid vehicular communications at the \acs{mmWave} and \si{\tera\hertz} bands. \item Presentation of realistic application use cases in the context of future \ac{ITS}. \item Development of a \acs{RIS}--enhanced system architecture model. \item Example evaluation of case studies by means of joint mobility and network simulations. \end{itemize} The remainder of this work is structured as follows: With reference to related work, insights are given into the fundamentals of intelligent surfaces and their prospective advantages for future mobile networks as well as the integration of \acsp{UAV} bolstering \ac{ITS}. \hyperref[sec:approach]{Section~\ref*{sec:approach}} elaborates on the main concept of \acsp{RIS}--aided hybrid vehicular communication networks and gives an outlook towards a ubiquitous deployment and opportunistic utilization, also considering open challenges. The following \autoref{sec:methods} illustrates the proposed simulation framework and contains the conducted simulation studies highlighting the envisaged potentials. Finally, a summary of the key findings concludes this work. \section{Simulation--Enabled Evaluation Approach} \label{sec:methods} As pointed out before, the wireless communications of \ac{ITS} --- especially in the \acs{mmWave} and \si{\tera\hertz} bands --- may highly profit from smart radio environments. By means of joint mobility and network simulations, this section elaborates on the suggested opportunities of the \acs{RIS} technology for hybrid vehicular communications. \subsection{Simulation Framework} The simulation environment is based on our \ac{LIMoSim} framework from~\cite{Sliwa/etal/2020f}, with its system architecture model extended by \acs{RIS} support as depicted in \autoref{fig:architecture}. In terms of vehicular mobility simulations, it implements \acs{UAV} and motor vehicle models.
The vehicles' environment includes a road topology, static obstacles like buildings, and a height profile of the terrain, allowing for the application of geometry--based radio channel and signal propagation models. While the road topology and the buildings can be imported from OpenStreetMap, the terrain's height profile may be available depending on the region, e.g. through the pan--European digital surface model\;\textit{EU--DEM}. As a model for the communication network, the \acf{ns3} and its \acs{5G} \ac{NR} module \textit{5G--LENA}~\cite{Patriciello/etal/2019} are coupled with the framework and extended by the \acs{RIS} path loss model of~\cite{Ozdogan/etal/2020} to account for \acs{RIS}--enabled \acs{NLOS} reflection paths. While this work focuses on the evaluation of \acs{RIS}--enhanced network coverage, the integration of control links into the \acs{5G}\;\acs{NR} signaling and the interaction with beam management procedures may be addressed in future work. Finally, alongside a graphical visualization of the simulation scenario, the event--driven system simulation model offers performance indicators for a detailed evaluation of the simulation results. With this, the model facilitates, on the one hand, the analysis of mobility-- and topology--aware applications like predictive \acs{mmWave} beam steering. On the other hand, \acs{RIS} network planning for the proposed dedicated or opportunistic utilization of both static and dynamic \acsp{RIS} can be applied, also taking the surface dimensioning and its construction design into account. \fig{t}{fig/process_path_loss} {Process of propagation loss calculation within the simulation model. The channel condition (\acs{LOS} or \acs{NLOS}) as well as the \acs{RIS} availability is determined by examination of the obstacles within the \acs{LIMoSim} \textit{world}.
With perfect knowledge of all available paths and their corresponding losses, the best--suited path can be selected to evaluate the general potential of utilizing \acs{RIS}--reflected paths for coverage enhancements. }{fig:process} Apart from this overview of the simulation model architecture, \autoref{fig:modeldetails} provides more details on the integration of the \acs{RIS} propagation loss model into \acs{LIMoSim}: In \acs{ns3}, basic classes are provided to allow for modeling the mobility of nodes, channel conditions, and the propagation loss of a wireless signal transmission. As one implementation, the \ac{3GPP} channel models~\cite{3gppTR38901} are available therein, also forming the basis for \acs{5G} network simulations in the \acs{mmWave} domain as elaborated in~\cite{Patriciello/etal/2019}. In particular, the \acs{3GPP}\;\ac{UMi} street canyon scenario is appropriate for simulations of the selected environment. However, the associated channel condition model is derived from a probability distribution and needs to be replaced, since \acs{LIMoSim} allows for a geometry--based assessment of the \acs{LOS} condition. To retrieve the required geometry information like the vehicle locations, \acs{LIMoSim} implements a derived mobility model, which is linked to its vehicle class. For the introduction of \acs{RIS} objects, further vehicle subclasses are generated. In doing so, \acsp{RIS} can be integrated either as static vehicles (i.e. they remain at a predefined position) or mounted onto some other vehicle (i.e. following its movement with a fixed offset). As a result, they are made available to the extended models for assessing the channel condition and calculating the propagation loss by means of the \acs{LIMoSim} vehicle manager class, which holds a list of all vehicle instances defined in the simulation. As further detailed in \autoref{fig:process}, the \acs{RIS} extension is thus able to screen the \acsp{RIS} in reach, i.e.
those with \acs{LOS} condition to both transmitter and receiver. To determine the availability of a certain \acs{RIS} in terms of existing \acs{LOS}, obstacles like buildings are taken into account by means of the \acs{LIMoSim} world singleton. This allows for the assessment of the channel conditions of both the direct path between transmitter and receiver as well as the reflection paths through a \acs{RIS} from the list. As already mentioned, the vehicles' mobility models are available for geometric considerations like computing their horizontal or \acs{3D} distance as required by the propagation loss model. Besides the computation of the propagation loss of the direct path, which may be \acs{LOS} or \acs{NLOS}, the \acs{RIS} path loss model from~\cite{Ozdogan/etal/2020} is applied for each \acs{RIS} with \acs{LOS} condition to both transmitter and receiver. The simulation solely takes first--order reflections into account for the time being, as this is currently a constraint of the implemented model from~\cite{Ozdogan/etal/2020}. Finally, within the simulation, all available \acs{RIS} reflection paths can be evaluated and compared to the supposedly obstructed direct path. \subsection{\acs{RIS} Application Scenarios} \fig{b}{fig/map2}{Excerpt of the simulation scenario (Map data: $\copyright$\;OpenStreetMap Contributors, CC BY-SA). At the university campus, both the static deployment of \acsp{RIS} and a \acs{RIS} mounted on a \acs{UAV} improve the coverage of a notional, centrally located base station. Especially obstructed areas are supplied with a \acs{RIS}--enhanced communication link mitigating the poor \acs{mmWave} propagation at \acs{NLOS} conditions.}{fig:visualization} For the case study, a university campus is chosen as the simulation area with a notional base station at a central location.
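The described best--path selection can be sketched as follows. The reflection loss below is a deliberately simplified stand--in (two free--space hops minus an aggregate \ac{RIS} gain) rather than the actual model of~\cite{Ozdogan/etal/2020}, and the \acs{NLOS} penalty and gain values are illustrative assumptions:

```python
import math

def fspl_db(d, f_hz):
    """Free-space path loss in dB at distance d (m) and frequency f (Hz)."""
    c = 3e8
    return 20 * math.log10(4 * math.pi * d * f_hz / c)

def best_path(direct, ris_paths, f_hz=28e9, nlos_penalty_db=30.0):
    """Pick the lowest-loss path among the (possibly obstructed) direct path
    and first-order RIS reflections with LOS to both endpoints.
    direct: (distance_m, has_los); ris_paths: list of (d1_m, d2_m, gain_db).
    The NLOS penalty and aggregate RIS gain are illustrative placeholders."""
    candidates = [("direct", fspl_db(direct[0], f_hz)
                   + (0.0 if direct[1] else nlos_penalty_db))]
    for i, (d1, d2, gain_db) in enumerate(ris_paths):
        # Simplified reflection loss: both hop losses add, minus the gain.
        loss = fspl_db(d1, f_hz) + fspl_db(d2, f_hz) - gain_db
        candidates.append((f"ris_{i}", loss))
    return min(candidates, key=lambda c: c[1])

# Obstructed 200 m direct path vs. a 100 m + 120 m RIS detour with an
# assumed 70 dB reflection gain at 28 GHz (all numbers illustrative).
best = best_path(direct=(200.0, False), ris_paths=[(100.0, 120.0, 70.0)])
```

With these example numbers the reflection path wins; with an unobstructed direct path, the direct path would be selected instead.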
As described in the previous section, the topology model includes buildings and roads from OpenStreetMap as well as a height profile to also account for the rather hilly terrain. Since the communication link performance highly depends on the \acs{LOS} conditions, this realistic model of the distribution of obstacles allows for a geometry--based evaluation of the network coverage with regard to the aforementioned channel and propagation loss models. As a first sample vehicular application, a campus shuttle bus trajectory covers the roads surrounding the campus area. The base station operates a \acs{5G} \ac{NR} mobile network in the \ac{mmWave} domain. Hence, the path loss experienced at the vehicle depends on the distance to the base station as well as on the \acs{LOS} condition. However, due to the numerous buildings, the \acs{LOS} coverage is initially rather poor, as evaluated in the next subsection. \autoref{fig:visualization} portrays the campus setting from an aerial perspective, highlighting in green color the currently utilized \acs{RIS} reflection path to the vehicle due to an obstructed \acs{LOS} (red line). In this scene, the \acs{RIS} is carried by a \acs{UAV}, but up to seven additional static \acsp{RIS} are deployed throughout the entire setting.
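The geometry--based \acs{LOS} classification can be illustrated with a minimal two--dimensional sketch. The actual \acs{LIMoSim} internals are not shown in the text, so the following stand--in treats building footprints as axis--aligned rectangles:

```python
def segment_intersects_box(p, q, box):
    """Slab test: does the 2-D segment p->q intersect the axis-aligned
    rectangle box = (xmin, ymin, xmax, ymax)? The rectangle stands in for
    a building footprint when classifying a link as LOS or NLOS."""
    xmin, ymin, xmax, ymax = box
    t0, t1 = 0.0, 1.0
    for axis, lo, hi in ((0, xmin, xmax), (1, ymin, ymax)):
        d = q[axis] - p[axis]
        if abs(d) < 1e-12:
            # Segment parallel to this slab: reject if outside it entirely.
            if p[axis] < lo or p[axis] > hi:
                return False
            continue
        ta, tb = (lo - p[axis]) / d, (hi - p[axis]) / d
        if ta > tb:
            ta, tb = tb, ta
        t0, t1 = max(t0, ta), min(t1, tb)
        if t0 > t1:
            return False
    return True

def has_los(tx, rx, buildings):
    """LOS holds iff the direct segment misses every building footprint."""
    return not any(segment_intersects_box(tx, rx, b) for b in buildings)
```

A link crossing a footprint is classified \acs{NLOS}; one passing beside it remains \acs{LOS}.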
\begin{table}[t] \centering \caption{Simulation details} \begin{tabularx}{\columnwidth}{lX} \hline Objective & Vehicular network coverage analysis with static or \ac{UAV} mounted \ac{RIS} and central base station\\ \hline Scenario & University campus as depicted in \autoref{fig:visualization},\\ & a) Base station to vehicle\\ & b) Vehicle to vehicle with \SI{400}{\meter} initial distance\\ \hline \multirow{2}{\widthof{Propagation loss}}{Propagation loss model} & Direct path: 3GPP UMi Street Canyon according to~\cite{3gppTR38901}, \\ & Reflection path: \ac{RIS} model from~\cite{Ozdogan/etal/2020}\\ \hline \multirow{2}{\widthof{Channel condition}}{Channel condition model} & deterministic based on geometry (direct\\ & path), considering buildings as obstacles\\ \hline Beam management & ideal, refer to \textit{5G--LENA}~\cite{Patriciello/etal/2019}\\ \hline \acs{RIS} related &\\ \hline ~~~Shape/dimension & square, $\SI{0.5}{\meter} \cdot \SI{0.5}{\meter}$, operated @\SI{28}{\giga\hertz}\\ \hline ~~~Deployment & a) Static, for enhanced street canyon coverage\\ & b) Mounted on \acs{UAV}, for dynamic supply\\ \hline ~~~Control & ideal/not considered\\ \hline ~~~\acs{RIS}--\acs{UAV} & Follows the vehicle at a height of \SI{60}{\meter}, applies fixed \acs{RIS} downtilt of \SI{30}{\degree}, horizontally aligns \acs{RIS} with\\ & a) base station or b) second vehicle\\ \hline \end{tabularx} \label{tab:simualtionParameters} \end{table} As a second vehicular use case, a \ac{V2V} \acs{mmWave} communication link is studied, where one vehicle tracks the other. Starting from different crossroads, their initial distance is about\;\SI{400}{\meter} and may vary during the pursuit due to the mobility and acceleration model. Such a mobile communication link could for example be leveraged to share sensor data for a collective perception of the vehicles' environment and to coordinate maneuvers in case of cooperative autonomous driving. 
While there is presumably a \acs{LOS} situation when driving straight ahead, buildings at corners may lead to obstructions during turns, degrading the link performance. For this reason, the introduction of \acsp{RIS} may lead to a coverage enhancement especially in the \ac{NLOS} regions. During preparatory tests of intuitively chosen \ac{RIS} locations, it turned out that for the first case with a static base station, a \acs{RIS} deployment near crossroads is effective and preferable, as it supplies two road sections (street canyons) with improved coverage by first--order reflections via the \acsp{RIS}. A total of seven \acsp{RIS} are introduced into the setting to allow for a comprehensive improvement of the experienced path loss, as evaluated in the subsequent section. Although the \acs{RIS} deployment is conducted in an arbitrary manner, works like~\cite{Ntontin/etal/2020a} propose sophisticated solutions for an automated placement. In contrast to a fixed placement of \acsp{RIS}, for example as installations on building walls or at light poles, a dynamic and situational deployment may be feasible by mounting a \ac{RIS} on a \ac{UAV}, leveraging its advantageous \acs{3D} movement abilities. To evaluate this approach, a \acs{RIS}--\acs{UAV} follows the designated vehicle at a fixed height of\;\SI{60}{\meter} in the subsequent simulations. This \acs{RIS} has a fixed downtilt of\;\SI{30}{\degree} and aligns horizontally towards the base station by adjusting the yaw angle of the sample quadrotor \acs{UAV} accordingly. Besides this approach of tracking the designated vehicle while aligning the \acs{RIS} towards the base station, a machine learning--based solution approach is presented in~\cite{Zhang/etal/2019}. \autoref{tab:simualtionParameters} summarizes the aforementioned details on the simulation model and its configuration.
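The described \acs{RIS}--\acs{UAV} behaviour (hovering above the tracked vehicle at \SI{60}{\meter}, fixed \SI{30}{\degree} downtilt, yaw towards the base station) can be sketched as follows; the helper function is our own illustration, not \acs{LIMoSim} code:

```python
import math

def ris_uav_pose(vehicle_xy, bs_xy, height_m=60.0, downtilt_deg=30.0):
    """Pose of a RIS-carrying UAV that hovers above the tracked vehicle
    and horizontally aligns its surface towards the base station. The
    defaults mirror the simulation parameters quoted above."""
    x, y = vehicle_xy
    # Hover directly above the vehicle at the configured altitude.
    position = (x, y, height_m)
    # Yaw the quadrotor so the RIS faces the base station horizontally.
    yaw_deg = math.degrees(math.atan2(bs_xy[1] - y, bs_xy[0] - x))
    return position, yaw_deg, downtilt_deg

# Vehicle east of the base station; the UAV yaws to face west (180 deg).
pos, yaw, tilt = ris_uav_pose(vehicle_xy=(200.0, 100.0), bs_xy=(0.0, 100.0))
```

The downtilt stays fixed while the yaw is recomputed per position update, matching the tracking behaviour described above.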
\subsection{Performance Evaluation} \fig{t}{fig/simulationresultsovertime.tex}{Exemplary excerpt of the path loss over time for the three cases: without \acs{RIS}, with deployed static \acsp{RIS}, and with additional support by a \acs{RIS} mounted on a \acs{UAV}. While the deteriorated \acs{NLOS} link is sporadically improved by means of transient \acs{LOS} phases, \acs{RIS}--enabled reflection paths are able to mitigate the path loss despite the absence of a \acs{LOS} condition. The colored areas measure the performance gain as the product of mitigated path loss and time.}{fig:resultovertime} During the joint mobility and network simulation, the path loss is evaluated according to the \acs{LOS} condition and the current geometry between transmitter, \acs{RIS}, and receiver. The underlying deterministic and geometry--based channel and propagation loss models thus provide new path loss values after a position update of any vehicle. \autoref{fig:resultovertime} depicts an excerpt of a time series of the experienced path loss for three different deployment strategies. Without any \acs{RIS} (red line), the link performance is severely degraded due to the predominant absence of \acs{LOS} conditions. A \acs{LOS} path is only sporadically available and rather transient due to the high blockage probability. However, the introduction of static \acsp{RIS} (dashed green line) mitigates the path loss by selecting a \acs{RIS} reflection path in case of \acs{NLOS} conditions. The additional utilization of a \acs{RIS} mounted on a \acs{UAV} mitigates the path loss even more significantly. It leads to a more consistent time response compared to the static \acsp{RIS} and drastically reduces the experienced path loss. With an assumed link budget of~\SI{142}{\decibel} according to~\cite{Kutty/etal/2016}, the \acs{RIS}--enhanced coverage is able to avoid outages, as subsequently analyzed in more detail.
In addition, the green and blue areas illustrate the performance gain in terms of the product of mitigated path loss and time when utilizing static \acsp{RIS} and a \acs{RIS}--\acs{UAV}, respectively. This means that the size of such an area measures the advantage of the associated \acs{RIS} deployment. \fig{b}{fig/simulationresultsECDF.tex}{Statistical analysis of the overall coverage of a \acs{RIS}--enhanced base station. While the link performance degrades without \acs{RIS}, the successive introduction of static \acsp{RIS} enables meeting the link budget. The additional utilization of a \acs{RIS} mounted on a \acs{UAV}, which follows the vehicle, significantly improves the link reliability and yields path losses even below \SI{127}{\decibel} in the underlying scenario.}{fig:statisticalresults} \fig{t}{fig/simulationresultsViolin.tex}{Statistical analysis of the \acs{NLOS} path loss of \acs{RIS}--enhanced \acs{V2V} communications. Due to the predominant \acs{LOS} condition, this analysis focuses on the \acs{NLOS} regions, which constitute about \SI{15}{\percent} of the evaluated trajectory. In general, the short distance between the vehicles leads to substantially lower path losses than in the base station scenario. However, the successive allocation of static \acsp{RIS} improves the experienced path loss during turns, where the \acs{LOS} is obstructed by buildings at the street corner. Again, the combined utilization of both the static \acsp{RIS} and the dynamic \acs{RIS}--\acs{UAV} leads to notable improvements with a maximum path loss of \SI{120}{\decibel}.}{fig:statisticalresultsV2V} As the next step, the overall coverage of a \acs{RIS}--enhanced base station is statistically analyzed in \autoref{fig:statisticalresults}. The \ac{ECDF} illustrates the statistical distribution of the experienced path loss levels for different \acs{RIS} deployment strategies.
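The two evaluation metrics used here, the \ac{ECDF} evaluated at the link budget (coverage fraction) and the shaded gain areas of the path loss time series, can be computed from path loss samples as follows; function names and sample values are our own illustration:

```python
def coverage_fraction(path_loss_db, budget_db=142.0):
    """Fraction of samples within the link budget, i.e. the ECDF of the
    path loss evaluated at the budget (142 dB per Kutty et al.)."""
    return sum(1 for pl in path_loss_db if pl <= budget_db) / len(path_loss_db)

def gain_area(t, pl_a, pl_b):
    """Performance gain of deployment b over a as the time integral of the
    mitigated path loss (trapezoidal rule), analogous to the shaded areas
    in the path-loss-over-time plot."""
    area = 0.0
    for i in range(1, len(t)):
        d0, d1 = pl_a[i - 1] - pl_b[i - 1], pl_a[i] - pl_b[i]
        area += 0.5 * (d0 + d1) * (t[i] - t[i - 1])
    return area

# Illustrative samples: two of four measurements meet the 142 dB budget.
cov = coverage_fraction([120.0, 150.0, 130.0, 160.0])
```

The same helpers apply unchanged to any of the three deployment strategies, since only the path loss samples differ.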
Initially, the central base station placement appears questionable when not using any \acs{RIS}, since it only covers~\SI{26}{\percent} of the whole track. However, the successive introduction of statically deployed \acsp{RIS} improves the path loss to meet the link budget requirements. A comprehensive coverage is achievable by deploying seven static \acsp{RIS}. Nevertheless, the number of added \acsp{RIS} can be balanced to cater for a desired coverage level and path loss distribution. Finally, with additional support by a dedicated \acs{RIS} mounted on a \acs{UAV}, the path loss is again significantly improved, as also seen before. However, such a \acs{UAV} can only supply a single vehicle or a cluster of vehicles within the same area. According to~\cite{Ozdogan/etal/2020}, the reduced path loss with the \acs{UAV}--\acs{RIS} is realizable due to a shorter distance to either the transmitter or receiver. In contrast to the varying distances to the static \acsp{RIS}, the \acs{UAV} is able to mostly keep the mounted \acs{RIS} at a short distance. In the \acs{V2V} use case, the path loss is lower in general due to the shorter distance between the vehicles and the higher \acs{LOS} probability as they drive behind one another. In spite of that, utilizing static \acsp{RIS} at the turns and the \acs{UAV}--\acs{RIS} leads again to improved propagation conditions, as evaluated in \autoref{fig:statisticalresultsV2V}. Since the predominant \acs{LOS} condition leads to the same path loss measurements regardless of the \acs{RIS} strategy, as already observed in \autoref{fig:resultovertime}, \autoref{fig:statisticalresultsV2V} focuses on the path loss under \acs{NLOS} conditions, which are especially predominant at street corners. In general, the introduction of a \acs{RIS} at a \acs{LOS}--obstructing street corner mitigates the path loss. However, a large number of \acsp{RIS} would be required to cover all occurring blockages along the track.
In case of the deployment of only some static \acsp{RIS}, the path loss distribution partially still exceeds\;\SI{140}{\decibel}. This is not the case for the \acs{UAV}--\acs{RIS}. When utilizing both the static and the \acs{UAV}--\acs{RIS}, an overall reduced path loss with a condensed spread limited to a maximum of\;\SI{120}{\decibel} can be seen as the blue (rightmost) violin of \autoref{fig:statisticalresultsV2V}. This further reduction can again be explained by the distance dependency of the path loss model: While the \acs{UAV}--\acs{RIS} may have a low mean distance to the vehicle, the static \acsp{RIS} on buildings and light poles have a reduced altitude compared to the \acs{UAV}, and thus they may also have an even lower minimum distance when the vehicle is passing by. In summary, the simulation results demonstrate the coverage advantages in terms of path loss when utilizing \acsp{RIS}. While a static deployment has only a limited range for coverage or path loss enhancement, the \acs{RIS}--\acs{UAV} is able to continuously supply reflection occasions dedicated to a single vehicle (or a local cluster of vehicles). Even \acs{V2V} communications may profit from the \acs{RIS} technology, since the path loss increases under \acs{NLOS} conditions, which are especially present at street corners. The combination of static and dynamic \acsp{RIS} turns out to join the advantages of both deployment strategies: On the one hand, a static \acs{RIS} is dedicated to a certain street canyon or crossroad and is thus able to drastically improve the experienced path loss, but only within a limited area. On the other hand, the dynamic \acs{RIS}, which is intentionally brought close to the vehicle by a \acs{UAV}, can supply improved coverage anywhere, albeit with possible restrictions due to no--fly zones, a constrained minimum altitude, and its power consumption.
As discussed earlier, future vehicular environments may even utilize \aclp{RIS} in an opportunistic manner, whenever the direct path degrades and a suitable \acs{RIS} mounted on a \acs{UAV} or bus is within reach, either intentionally or by chance. \section{Lessons Learned and Open Challenges} \label{sec:challenges} \section{Related Work} \label{sec:related} In~\cite{Akyildiz/etal/2020}, the authors provide an extensive literature survey on recent and future technology trends for next generation mobile networks. Among the various prospective use cases mentioned in this article, a smart infrastructure is believed to enable comprehensive network coverage. Thus, it facilitates the omnipresence of wireless systems with the aid of controllable wireless signal propagation. In addition, applications like multi--sensory holographic teleportation, autonomous cyber--physical systems, and intelligent industrial automation put high requirements on the performance of subsequent mobile networks. To meet these challenges, smart radio environments enabled by the \ac{RIS} technology are envisioned as one of the key drivers for \acs{6G} and beyond. Especially wireless communication in the \acs{mmWave} and \si{\tera\hertz} bands, which suffers from limited distances and sparse coverage, can be improved by controlled reflections. \acsp{RIS} are particularly suitable here, because their reflection gain strongly depends on their dimensions in relation to the wavelength of the radio signal. Consequently, the shorter wavelengths of these bands allow for a higher gain per \acs{RIS} area and thus a more efficient surface size utilization, or in turn, a smaller required surface size compared to the conventional sub\;\SI{6}{\giga\hertz} bands, for example. The \acs{RIS} technology originates from intelligent metasurfaces, whose development and state of the art, especially in terms of realization concepts and material physics considerations, are thoroughly elaborated in~\cite{Tsilipakos/etal/2020}. 
An overview of the \ac{RIS} technology --- also called \acf{IRS} --- and its applications in wireless networks is provided in~\cite{Wu/etal/2020}: With \acs{RIS}--controlled reflections, signal propagation can be guided around obstacles, leading to a virtually extended \acs{LOS} path. In this sense, the main application of \acsp{RIS} can be regarded as providing more comprehensive network coverage by circumventing blockages. However, physical layer security profits from controlled destructive signal superposition at eavesdroppers as well. Through the controlled superposition of direct and reflected signals at the receiving node, the rank of \ac{MIMO} channels can also be increased, or the interference caused by other base stations can be mitigated at the cell edge. For these applications, the deployment of \acsp{RIS} at proper locations is a crucial task to fully leverage the potential of \acs{RIS}--enhanced mobile networks, which might be supported by machine learning--based techniques. The article~\cite{Huang/etal/2020} gives an overview of holographic \acs{MIMO} surfaces, with \acs{RIS} as one manifestation. In doing so, different technology approaches and concepts, like discrete surfaces as passive reflectors, are categorized. Generally, four functionalities of passive reflectors are defined that allow for the control of wireless signal propagation: polarization, scattering, focusing, and absorption. Besides an improved radio link quality, especially for outdoor--to--indoor applications, the surfaces even bear the potential of accurate indoor positioning due to their spatial resolution. In addition to~\cite{Heimann/etal/2020}, the authors of~\cite{AlHilo/etal/2021a} have analyzed the potential of \ac{RIS}--enabled communications in challenging vehicular environments. 
For providing coverage in so--called \textit{dark zone} areas, which are affected by signal blockage, the authors utilize deep reinforcement learning for joint resource scheduling and passive beamforming. In \ac{ITS}, \acp{UAV} are expected to become a key enabler for fully automated transportation according to~\cite{Menouar/etal/2017}. Besides parcel delivery tasks, for which they can provide a performance boost while complying with complex constraints such as the sanitary measures of the \acs{COVID--19} pandemic as proposed in \cite{Patchou2021flying}, they can act as reporting agents supporting ambulance services by providing information on crash sites and establishing communication links to the persons involved. In addition, \acp{UAV} can act as communication relays and \acp{RSU} to offer better radio conditions for \ac{V2X} communications. Furthermore, the detailed tutorial~\cite{Zeng/etal/2019} emphasizes the advantageous opportunities of \acs{UAV}--assisted cellular networks. While the remote control of \acsp{UAV} connected via cellular networks has virtually no range limitation, applications like flying relays profit from a high \acs{LOS} probability to both base stations and user/terminal devices. In addition to \ac{GNSS}, cellular--aided localization may even enhance the robustness and performance of \acs{UAV} navigation. Although the authors of~\cite{Zeng/etal/2019} do not take \acp{SRE} and \acp{RIS} into account, sophisticated models for the air--to--air and air--to--ground communication performance are presented. The \acs{UAV}--aided cellular coverage and communication link performance might be further improved by means of \acp{SRE}, as studied in the subsequent sections. 
An optimization approach based on machine learning regarding the location and configuration of such a flying \acs{RIS} is proposed in~\cite{Zhang/etal/2019}. While the emerging topic of the \acs{RIS} technology and its application in vehicular networks is mostly addressed by means of analytical and numerical simulations in the literature, this work focuses on a system--level, discrete event simulation. \section{Results} \label{sec:results}
\section{Introduction} Let $\alpha$ be a real number. The {\em irrationality exponent} $\mu(\alpha)$ is defined as the supremum of the set of real numbers $\mu$ such that the inequality $$\left|\alpha-\frac{p}{q}\right|<\frac{1}{q^\mu}$$ has infinitely many solutions $(p,q)\in\mathbb{Z}\times\mathbb{N}.$ For example, Liouville \cite{L1844} proved that $\mu(\sum_{n\geqslant 0}10^{-n!})=\infty$, and Roth \cite{R1955} showed that if $\alpha$ is an irrational algebraic number, then $\mu(\alpha)=2$. Note also that $\mu(\alpha)\geqslant 2$ for all irrational $\alpha$. In this Addendum, we prove the following theorem. \begin{theorem}\label{main} Let $s_2(n)$ be the sum of the binary digits of $n$. Then for each integer $b\geqslant 2$ we have $$\mu\left(\sum_{n\geqslant 0}\frac{s_2(n)}{b^n}\right)=2.$$ \end{theorem} \noindent Transcendence of these numbers was proved by Toshimitsu \cite{T1998} using Mahler's method. \section{Preliminaries} As above, let $s_2(n)$ denote the sum of the binary digits of $n$, and set $\mathcal{S}(x):=\sum_{n\geqslant 0} s_2(n)x^n.$ Note that for all $n\geqslant 0$, we have both $s_2(2n)=s_2(n)$ and $s_2(2n+1)=s_2(n)+1.$ We prove our result by exploiting a connection between the sequences $\{s_2(n)\}_{n\geqslant 0}$ and $\{f(n)\}_{n\geqslant 1}$, where we define $f(n)$ by its generating series $$\mathcal{F}(x):=\sum_{n\geqslant 1}f(n)x^n=\sum_{n=0}^\infty\frac{x^{2^n}}{1+x^{2^n}}.$$ The series $\mathcal{F}(x)$ and its special values have been studied by many authors, including Golomb \cite{G1963}, Duverney \cite{D2001}, and Schwarz \cite{S1967}. This series is of special interest as $\mathcal{F}(1/2)$ is the sum of the reciprocals of the Fermat numbers. Our interest here is tied to the following result. \begin{proposition}[Coons \cite{C2012}]\label{IEF} Let $b\geqslant 2$ be a positive integer. 
Then $\mu(\mathcal{F}(1/b))=2.$ \end{proposition} To use Proposition \ref{IEF} to prove Theorem \ref{main}, we will use the relationship contained in the following lemma. \begin{lemma} For all $n\geqslant 1$ we have $f(n)=s_2(n)-s_2(n-1)$. \end{lemma} \begin{proof} Note that the generating function for $\mathcal{F}(x)$ implies that $f(n)$ is multiplicative, and on prime powers given by $$f(p^k)=\begin{cases} 1-k &\mbox{if $p=2$}\\ 1 &\mbox{if $p\neq 2$}.\end{cases}$$ Consider the function $v(n):=s_2(n)-s_2(n-1)$ for $n\geqslant 1$; we show that $v(n)=f(n)$. To this end, note that if $n$ is odd, say $n=2k+1$, then $$v(2k+1)=s_2(2k+1)-s_2(2k)=s_2(k)+1-s_2(k)=1=f(2k+1).$$ If $n$ is even, say $n=2^k(2\ell+1)$, then \begin{align*} v(2^k(2\ell+1))&=s_2(2^k(2\ell+1))-s_2(2^k(2\ell+1)-1)\\ &=s_2(2\ell+1)-s_2(2^{k+1}\ell+2^{k}-1)\\ &=s_2(\ell)+1-s_2(2^{k+1}\ell)-s_2(2^{k}-1)\\ &=s_2(\ell)+1-s_2(\ell)-k\\ &=1-k. \end{align*} From here it is easy to see that $v(n)$ is multiplicative and $f(p^k)=v(p^k)$ for all primes $p$ and integers $k\geqslant 1$. Thus $v(n)=f(n)$. \end{proof} \section{Proof of the main result} \begin{proof}[Proof of Theorem \ref{main}] Using the above lemma and the fact that $s_2(0)=0$, we have that \begin{align*} \mathcal{F}(x)=\sum_{n\geqslant 1}f(n)x^n&=\sum_{n\geqslant 1}(s_2(n)-s_2(n-1))x^n=\sum_{n\geqslant 1}s_2(n)x^n-\sum_{n\geqslant 1}s_2(n-1)x^n\\ &=\sum_{n\geqslant 0}s_2(n)x^n-x\sum_{n\geqslant 0}s_2(n)x^n=(1-x)\mathcal{S}(x),\end{align*} so that $\mathcal{S}(1/b)=\frac{b}{b-1}\cdot\mathcal{F}(1/b)$. Since $\mathcal{S}(1/b)$ is a (nonzero) rational multiple of $\mathcal{F}(1/b)$, they have the same irrationality exponent. Appealing to Proposition \ref{IEF} proves the theorem. \end{proof} \noindent{\em Remark.} Since $\mathcal{F}(1/b)$ is a (nonzero) rational multiple of $\mathcal{S}(1/b)$, the transcendence of $\mathcal{S}(1/b)$ as proved by Toshimitsu \cite{T1998} provides an alternative proof (to that of Duverney \cite{D2001}) of the transcendence of $\mathcal{F}(1/b)$. 
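Although not part of the proof, the identities above are easy to check numerically. The following sketch (the helper names are ours) verifies $f(n)=s_2(n)-s_2(n-1)$ directly against the defining series, using the expansion $x^{y}/(1+x^{y})=\sum_{j\geqslant 1}(-1)^{j-1}x^{jy}$, and then checks $\mathcal{S}(1/b)=\frac{b}{b-1}\mathcal{F}(1/b)$ for a sample base with truncated series.

```python
def s2(n):
    """Sum of the binary digits of n."""
    return bin(n).count("1")

def f(m):
    """Coefficient of x^m in sum_{n>=0} x^(2^n)/(1+x^(2^n)),
    via x^y/(1+x^y) = sum_{j>=1} (-1)^(j-1) x^(j*y) with y = 2^n."""
    total, power = 0, 1
    while power <= m:
        if m % power == 0:
            total += (-1) ** (m // power - 1)
        power *= 2
    return total

# Lemma: f(n) = s2(n) - s2(n-1) for n >= 1
assert all(f(n) == s2(n) - s2(n - 1) for n in range(1, 2000))

# S(1/b) = b/(b-1) * F(1/b), checked with truncated series at b = 3
b, N = 3, 60
S = sum(s2(n) / b**n for n in range(N))
F = sum(f(n) / b**n for n in range(1, N))
assert abs(S - b / (b - 1) * F) < 1e-12
print("identities verified")
```

The truncation at $N=60$ terms leaves a tail far below the tolerance, since $s_2(n)$ grows only logarithmically.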
\bibliographystyle{amsplain} \providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace} \providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR } \providecommand{\MRhref}[2]{% \href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2} } \providecommand{\href}[2]{#2}
\section{Introduction} Since the early part of the last century, estimates of Weyl sums have played crucial roles in many problems in additive number theory. The classical bounds for Weyl sums have stemmed from Weyl's method [$\ref{ref17}$] and Vinogradov's method [$\ref{ref18}$]. In particular, these bounds have been widely used in studying the distribution of polynomials modulo $1$, initiated by a question posed by Hardy and Littlewood [$\ref{ref19}$] asking, when $\alpha\in {\mathbb R}$, $k\in {\mathbb N}$ and $\epsilon>0$, whether there exists $\sigma>0$ not depending on $\alpha$ such that \begin{equation*} \min_{1\leq x\leq X}\|\alpha x^k\|\leq X^{-\sigma+\epsilon}, \end{equation*} where $\|\cdot\|$ denotes the distance to the nearest integer and $X$ is sufficiently large in terms of $k$ and $\epsilon.$ By exploiting such bounds for Weyl sums, Heilbronn [$\ref{ref20}$] and Danicic [$\ref{ref21}$] obtained $\sigma=2^{1-k}.$ Subsequently, the exponent $1/2$ in the case $k=2$ was improved to $\sigma=4/7$ by Zaharescu [$\ref{ref29}$]. By exploiting estimates for smooth Weyl sums, Wooley [$\ref{ref11}$] obtained the permissible exponent $\sigma=1/(k\log k+O(k\log\log k)).$ Furthermore, combined with the recent progress on bounds for Weyl sums stemming from the resolution of the Main Conjecture in Vinogradov's mean value theorem, Baker [$\ref{ref4}$] shows that $\sigma=1/(k(k-1))$ is permissible, and also derives the explicit exponent $\sigma(s,k)=s/(k(k-1))$ such that \begin{equation}\label{1} \min_{\substack{0\leq \boldsymbol{x}\leq X\\\boldsymbol{x}\neq \boldsymbol{0}}}\|\alpha_1x_1^k+\cdots+\alpha_s x_s^k\|\leq X^{-\sigma(s,k)+\epsilon}, \end{equation} for $1\leq s\leq k(k-1)$. 
Here and throughout, we write $0\leq \boldsymbol{x}\leq X$ and $\boldsymbol{x}\neq \boldsymbol{0}$ to abbreviate the conditions $0\leq x_1,\ldots,x_s\leq X$ and $(x_1,\ldots,x_s)\neq (0,\ldots,0).$ In this paper, we seek to make the bound $(\ref{1})$ sharper via mean values of exponential sums, rather than by exploiting bounds for Weyl sums. Furthermore, by applying new mean value estimates for exponential sums related to Vinogradov's mean value theorem, the method described here shall deliver bounds for small fractional parts of polynomials of the generalized shape $\varphi_1(x_1)+\varphi_2(x_2)+\cdots+\varphi_s(x_s),$ where $$\varphi_i(x)=\alpha_{1i}x^{k_1}+\alpha_{2i}x^{k_2}+\cdots+\alpha_{ti}x^{k_t}.$$ \bigskip \begin{te} Let $\epsilon>0$ and $s,k$ be natural numbers with $k\geq 6.$ Suppose that $X$ is sufficiently large in terms of $s,k$ and $\epsilon.$ Consider $\alpha_i\in {\mathbb R}$ with $1\leq i\leq s.$ Then, whenever $s\geq \frac{k(k+1)}{2}$, one has \begin{equation} \min_{\substack{0\leq \boldsymbol{x}\leq X\\ \boldsymbol{x}\neq \boldsymbol{0}}}\|\alpha_1x_1^k+\alpha_2x_2^k+\cdots+\alpha_sx_s^k\| \leq X^{-1+\epsilon}. \end{equation} \end{te} For comparison, the work of Baker [$\ref{ref4}$, Theorem 3] shows that $$\min_{\substack{0\leq \boldsymbol{x}\leq X\\ \boldsymbol{x}\neq \boldsymbol{0}}}\|\alpha_1x_1^k+\alpha_2x_2^k+\cdots+\alpha_sx_s^k\|\leq X^{-\frac{s}{k(k-1)}+\epsilon}$$ for $1\leq s\leq k(k-1)$. His work also gives results when $s> k(k-1)$, which are too complicated to state in full here. It suffices to report that the exponent $s/(k(k-1))$ is replaced by an exponent $\sigma$ in Baker [$\ref{ref4}$, Theorem 3], with $\sigma\rightarrow 2$ as $s\rightarrow \infty.$ Theorem 1.1 improves on this result when $\frac{k(k+1)}{2}\leq s< k(k-1)$. 
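For small parameters, the quality of such bounds can be explored by direct search. The sketch below (purely illustrative, not part of any proof; the helper name is ours) computes $\min_{1\leq x\leq X}\|\alpha x^k\|$ by brute force for a sample quadratic irrational and reports the exponent scale $\sigma$ realized at each $X$.

```python
import math

def min_fractional_part(alpha, k, X):
    """Brute-force minimum over 1 <= x <= X of the distance from
    alpha * x^k to the nearest integer."""
    best = 0.5
    for x in range(1, X + 1):
        t = alpha * x**k
        best = min(best, abs(t - round(t)))
    return best

alpha, k = math.sqrt(2), 2
for X in (10**2, 10**3, 10**4):
    m = min_fractional_part(alpha, k, X)
    # exponent sigma realized at this X, writing m = X^(-sigma)
    sigma = -math.log(m) / math.log(X)
    print(f"X = {X:>6}: min ||alpha x^2|| = {m:.3e} (sigma ~ {sigma:.2f})")
```

Such experiments only probe finite ranges of $X$, of course; they cannot distinguish between nearby permissible exponents such as $1/2$ and $4/7$.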
We note that with additional effort, for $s\geq k+2$ one may obtain $$\min_{\substack{0\leq \boldsymbol{x}\leq X\\ \boldsymbol{x}\neq \boldsymbol{0}}}\|\alpha_1x_1^k+\alpha_2x_2^k+\cdots+\alpha_sx_s^k\|\leq X^{-\sigma(s,k)+\epsilon},$$ where \begin{equation}\label{333} \sigma(s,k)=\min\biggl\{\frac{s}{k(k+1)-s}, 1\biggr\}. \end{equation} Notice that this improves on the result of Baker [$\ref{ref4}$] described above when $2k<s<k(k-1)$. We record this result in section 4 (see Theorem 4.1 below). We also note that experts may expect that the exponent $(\ref{333})$ can be improved for large $k$ by using estimates for smooth Weyl sums. However, to obtain results for $s>1$ one encounters a number of technical complications that threaten to obstruct useful conclusions. Consequently, we focus in this paper on conclusions made accessible by our new mean value estimates for exponential sums. \bigskip As we explained above, the method described here delivers bounds for small fractional parts of more general polynomials. Thus, in order to describe these polynomials and the following theorems, we require some notation. Consider a fixed $t$-tuple $\mathbf{k}=(k_1,\ldots,k_t)$ of positive integers satisfying \begin{equation*} k=k_1>k_2>\cdots>k_t\geq 1. \end{equation*} We denote $\{1,2,\ldots,k_1\}\setminus\{k_1,\ldots,k_t\}$ by $\{i_1,\ldots,i_{k-t}\}$ with $i_1>\cdots>i_{k-t}.$ Furthermore, we write $\sigma=\sigma(\mathbf{k})$ for \begin{align}\label{eq1.41.4} \sigma= \max_{1\leq l\leq k-t} \frac{l}{(k-i_l)(k-i_l+1)}. \end{align} \bigskip \begin{te} Let $\epsilon>0$. Suppose that $s, k_1,\ldots,k_t, t$ are natural numbers satisfying $k_1\geq 6$, $k_1>t\geq 2$ and $k_1>k_2>\cdots>k_t.$ Suppose that $X$ is sufficiently large in terms of $s,k_1$ and $\epsilon.$ Consider $\alpha_{ji}\in {\mathbb R}$ with $1\leq i\leq s$ and $1\leq j\leq t$. Define $\varphi_i(x)=\alpha_{1i}x^{k_1}+\cdots+\alpha_{ti} x^{k_t}$ with $1\leq i\leq s$. 
Then, whenever $s>k_1^2+k_1+2\lceil\sigma(1-k_1)\rceil$, one has \begin{equation}\label{4} \min_{\substack{0\leq \boldsymbol{x}\leq X\\\boldsymbol{x}\neq \boldsymbol{0}}}\|\varphi_1(x_1)+\varphi_2(x_2)+\cdots +\varphi_{s}(x_{s})\|\leq X^{-1+\epsilon}. \end{equation} \end{te} The reader will observe that the condition on $s$ in the conclusion of Theorem 1.2 is almost twice as restrictive as that in Theorem 1.1. The explanation for this reduction in strength lies with the generality of the polynomials $\varphi_i$, and the correspondingly weaker estimates available for the associated exponential sums. \bigskip To describe the following theorems regarding new mean values of exponential sums, we introduce some notation. Define the exponential sum $F(\alpha_{k_1},\boldsymbol{\alpha}^{t-1})=F_{\mathbf{k}}(\alpha_{k_1},\ldots,\alpha_{k_t};X)$ by $$F(\alpha_{k_1},\boldsymbol{\alpha}^{t-1})=\displaystyle\sum_{1\leq x \leq X}e(\alpha_{k_1}x^{k_1}+\alpha_{k_2}x^{k_2}+\cdots+\alpha_{k_t}x^{k_t}).$$ Denote $d\alpha_{k_t}d\alpha_{k_{t-1}}\cdots d\alpha_{k_2}$ by $d\boldsymbol{\alpha}^{t-1}$, and write $$\displaystyle\oint |F(\alpha_{k_1},\boldsymbol{\alpha}^{t-1})|^{2s}d\boldsymbol{\alpha}^{t-1}=\displaystyle\int_{[0,1)^{t-1}} |F(\alpha_{k_1},\boldsymbol{\alpha}^{t-1})|^{2s}d\alpha_{k_t}d\alpha_{k_{t-1}}\cdots d\alpha_{k_2}.$$ Furthermore, we write $$f(\alpha_{k_1},\boldsymbol{\alpha})=\displaystyle\sum_{1\leq x\leq X}e(\alpha_{k_1}x^{k_1}+\alpha_{k_1-1}x^{k_1-1}+\cdots+\alpha_1 x)$$ and $$ \displaystyle\oint|f(\alpha_{k_1},\boldsymbol{\alpha})|^{2s}d\boldsymbol{\alpha}=\displaystyle\int_{[0,1)^{k_1-1}}|f(\alpha_{k_1},\boldsymbol{\alpha})|^{2s} d\boldsymbol{\alpha}.$$ \begin{te} Let $s,t$ and $k$ be natural numbers with $t<k$. Let $l$ be an integer with $1\leq l\leq k-t$. Consider a rational approximation to $\alpha_{k}$ satisfying $|\alpha_{k}-a/q|\leq 1/q^2$ with $(q,a)=1$. 
Then, for $\epsilon>0$, one has \begin{equation*} \displaystyle\oint\left|F(\alpha_k,\boldsymbol{\alpha}^{t-1})\right|^{2s}d\boldsymbol{\alpha}^{t-1}\ll R_lX^{i_1+\cdots+i_{k-t}+\epsilon}\displaystyle\oint\left|f(\alpha_k,\boldsymbol{\alpha})\right|^{2s}d\boldsymbol{\alpha}, \end{equation*} where $$R_l=\displaystyle\prod_{j=1}^{l}\left(X^{-i_j}+X^{-k+i_j}+q^{-1}+qX^{-k}\right)^{\frac{1}{(k-i_l)(k-i_l+1)}}.$$ \end{te} As a consequence of Theorem 1.3, one finds that the mean value over all coefficients but the leading one has an upper bound in terms of the denominator of the rational approximation to $\alpha_k$. From this, we obtain mean value estimates by integrating over $\alpha_k$ lying over major arcs and minor arcs, respectively. In order to describe these estimates, which we record in Theorem 1.4, and for the argument used throughout this paper, we must introduce sets of major arcs and minor arcs. Define the major arcs $\mathfrak{M}_l$ with $l>0$ by \begin{equation}\label{eq1.6} \mathfrak{M}_l=\bigcup_{\substack{0\leq a\leq q \leq X\\(q,a)=1}}\mathfrak{M}_l(q,a), \end{equation} where $\mathfrak{M}_l(q,a)=\{\alpha\in [0,1)|\ |q\alpha-a|\leq (lk)^{-1}X^{-k+1}\}$. Define the minor arcs to be $\mathfrak{m}_l=[0,1)\setminus \mathfrak{M}_l.$ We abbreviate $\mathfrak{M}_2$ and $\mathfrak{m}_2$ simply to $\mathfrak{M}$ and $\mathfrak{m}.$ Throughout this paper, we use $\mathfrak{M}$ and $\mathfrak{m}$ without further comment, unless specified otherwise. Furthermore, we recall the definition $(\ref{eq1.41.4})$ of the exponent $\sigma,$ and write $D$ for \begin{equation}\label{1.7} D=k_1+k_2+\cdots+k_t. \end{equation} \begin{te}\label{thm1.2} One has the following: $(\romannumeral1)$ When $s$ is a natural number with $2s\geq k^2+(1-2\sigma)k+2\sigma$, one has \begin{equation}\label{7777} \displaystyle\int_{\mathfrak{M}}\displaystyle\oint\left|F(\alpha_{k},\boldsymbol{\alpha}^{t-1})\right|^{2s}d\boldsymbol{\alpha}^{t-1}d\alpha_k\ll X^{2s-D+\epsilon}. 
\end{equation} $(\romannumeral2)$ When $s$ is a natural number with $2s\geq k(k+1),$ one has \begin{equation}\label{8888} \displaystyle\int_{\mathfrak{m}}\displaystyle\oint\left|F(\alpha_{k},\boldsymbol{\alpha}^{t-1})\right|^{2s}d\boldsymbol{\alpha}^{t-1}d\alpha_k\ll X^{2s-D-\sigma+\epsilon}. \end{equation} \end{te} Wooley [$\ref{ref12}$, Theorem 1.3] provided mean value estimates of exponential sums over minor arcs, namely $(\ref{8888})$ with $F(\alpha_k,\boldsymbol{\alpha})=\sum_{1\leq x\leq X} e(\alpha_k x^k).$ This mean value estimate delivered improvements in the number of variables required to establish the asymptotic formula in Waring's problem, the density of integral solutions of diagonal Diophantine equations, and slim exceptional sets for the asymptotic formula in Waring's problem. Wooley [$\ref{ref13}$, Theorem 1.1] established an essentially optimal estimate for the ninth moment of the exponential sum having argument $\alpha x^3+\beta x$ (see also [$\ref{ref28}$, Theorem 1.3]), by introducing ($\ref{8888}$) with $F(\alpha_3,\boldsymbol{\alpha})=\sum_{1\leq x\leq X} e(\alpha_3x^3+\alpha_1x).$ Furthermore, Wooley [$\ref{ref15}$, Theorem 14.4] recorded bounds for $(\ref{7777})$ and $(\ref{8888})$ with $k_2<k_1-1.$ In Theorem 1.4, we provide mean values of $F(\alpha_{k},\boldsymbol{\alpha}^{t-1})=\sum_{1\leq x \leq X}e(\alpha_{k}x^{k_1}+\alpha_{k_2}x^{k_2}+\cdots+\alpha_{k_t}x^{k_t})$ with no restrictions on the exponents $k_1,\ldots,k_t.$ Combined with Theorem 1.4, the method described in the proof of Theorem 1.1 delivers the proof of Theorem 1.2. 
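Since the strength of these estimates depends on the exponent $\sigma(\mathbf{k})$ of (\ref{eq1.41.4}), it may help to evaluate it directly for sample tuples. The helper below is ours, written straight from the definition; it is a computational aid only.

```python
def sigma_of(ks):
    """Compute sigma(k) = max_{1<=l<=k-t} l / ((k - i_l)(k - i_l + 1)),
    where i_1 > ... > i_{k-t} lists {1,...,k} minus {k_1,...,k_t}
    (definition (1.4)); ks = (k_1, ..., k_t) with k_1 > ... > k_t."""
    k = ks[0]
    missing = sorted(set(range(1, k + 1)) - set(ks), reverse=True)  # i_1 > i_2 > ...
    return max(l / ((k - i) * (k - i + 1)) for l, i in enumerate(missing, start=1))

# Consecutive top exponents k = (k, k-1, ..., k-(t-1)):
# taking l = t realizes sigma >= t/((2t-1)(2t))
k, t = 12, 3
consecutive = tuple(range(k, k - t, -1))
assert sigma_of(consecutive) >= t / ((2 * t - 1) * (2 * t))

# When k - 1 is absent from the tuple, l = 1 already gives sigma = 1/2
assert sigma_of((12, 9, 4)) == 0.5
```

Enumerating the missing exponents in decreasing order mirrors the convention $i_1>\cdots>i_{k-t}$ of the paper, so each term of the maximum corresponds exactly to one value of $l$.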
We also note that by applying H\"older's inequality and the trivial bound $\left|F(\alpha_{k},\boldsymbol{\alpha}^{t-1})\right|\leq X$, it follows from Theorem 1.4 $(\romannumeral2)$ that there exists $s_0$ with $s_0<k(k+1)/2$ such that whenever $s\geq s_0$ we have $$\displaystyle\int_{\mathfrak{m}}\displaystyle\oint\left|F(\alpha_{k},\boldsymbol{\alpha}^{t-1})\right|^{2s}d\boldsymbol{\alpha}^{t-1}d\alpha_k\ll X^{2s-D+\epsilon}.$$ Therefore, we find that there exists $s_0$ with $s_0<\frac{k(k+1)}{2}$ such that whenever $s\geq s_0$ one has \begin{equation*} \begin{aligned} &\displaystyle\int\displaystyle\oint\left|F(\alpha_{k},\boldsymbol{\alpha}^{t-1})\right|^{2s}d\boldsymbol{\alpha}^{t-1}d\alpha_k\\ &=\displaystyle\int_{\mathfrak{M}}\displaystyle\oint\left|F(\alpha_{k},\boldsymbol{\alpha}^{t-1})\right|^{2s}d\boldsymbol{\alpha}^{t-1}d\alpha_k+\displaystyle\int_{\mathfrak{m}}\displaystyle\oint\left|F(\alpha_{k},\boldsymbol{\alpha}^{t-1})\right|^{2s}d\boldsymbol{\alpha}^{t-1}d\alpha_k\ll X^{2s-D+\epsilon}. \end{aligned} \end{equation*} This range of $s$ is superior to that trivially obtained by Vinogradov's mean value theorem. \bigskip The consequences of Theorem 1.2 and Theorem 1.4 are dependent on $\sigma$, which is the quantity determined by $\mathbf{k}=(k_1,\ldots,k_t)$. Thus, we shall see how this quantity $\sigma$ varies according to the number of exponents and their arrangement. Recall the definition ($\ref{eq1.41.4}$) of the exponent $\sigma$ and that $\{i_1,\ldots,i_{k-t}\}=\{1,2,\ldots,k_1\}\setminus\{k_1,\ldots,k_t\}$ with $i_1>\cdots>i_{k-t}.$ Then, we observe the following: \ (1) Let $\mathbf{k}=(k,k-1,\ldots,k-(t-1))$ with $t<k/2.$ Then, by taking $l=t,$ one obtains $$\sigma=\max_{1\leq l\leq k-t}\frac{l}{(k-i_l)(k-i_l+1)}\geq\frac{t}{(2t-1)(2t)}\gg t^{-1}.$$ \bigskip (2) Let $t=m_1+m_2$. 
Let $$\mathbf{k}=(k,k-1,\ldots,k-(m_1-1),m_2,\ldots,1)$$ with $m_1+m_2<k/2.$ Then, by taking $l=m_1,$ one has $$\sigma=\max_{1\leq l\leq k-t}\frac{l}{(k-i_l)(k-i_l+1)}\geq\frac{m_1}{(2m_1-1)(2m_1)}\gg m_1^{-1}.$$ \bigskip (3) Let $\mathbf{k}=(k,k_2,\ldots,k_t)$ with $k_1=k$ and $k_2\neq k-1.$ Then $i_1=k-1$, and by taking $l=1$, one has $\sigma=1/2.$ \bigskip Thus, if we assume that $\mathbf{k}=(k_1,\ldots,k_t)$ with $t<k/2,$ then one infers from the observations above that $\sigma\gg t^{-1}$. In section 2, we provide the proof of Theorem 1.3. The method of the proof of Theorem 1.3 mainly follows the argument in [$\ref{ref12}$] together with the argument used in [$\ref{ref5}$]. In section 3, we provide the proof of Theorem 1.4, by making use of Theorem 1.3. In section 4, we introduce applications of mean values of exponential sums to fractional parts of polynomials and provide the proof of Theorem 1.1. Furthermore, we record in Theorem 4.1 a more quantitative result than Theorem 1.1 and provide its proof at the end of section 4. In section 5, we give the proof of Theorem 1.2 by exploiting Theorem 1.4 and the method introduced in section 4. Throughout this paper, we use Vinogradov's well-known notations $\gg$ and $\ll$, and write $e(z)$ for $e^{2\pi iz}$. We adopt the convention that whenever $\epsilon$ appears in a statement, then the statement holds for each $\epsilon>0$, with implicit constants depending on $\epsilon.$ \section*{Acknowledgment} The author acknowledges support from NSF grant DMS-2001549 under the supervision of Trevor Wooley. The author is grateful for support from Purdue University. In particular, the author would like to thank Trevor Wooley for careful reading and helpful comments which have improved the exposition. \bigskip \section{Proof of Theorem 1.3} In this section, we provide three lemmas and combine all to prove Theorem 1.3. 
\subsection{Auxiliary lemmas} In order to describe Lemma 2.1, we recall that \begin{align*} F(\alpha_{k_1},\boldsymbol{\alpha}^{t-1})=\displaystyle\sum_{1\leq x \leq X}e(\alpha_{k_1}x^{k_1}+\alpha_{k_2}x^{k_2}+\cdots+\alpha_{k_t}x^{k_t}) \end{align*} and \begin{equation*} f(\alpha_{k_1},\boldsymbol{\alpha})=\displaystyle\sum_{1\leq x\leq X}e(\alpha_{k_1}x^{k_1}+\alpha_{k_1-1}x^{k_1-1}+\cdots+\alpha_1 x). \end{equation*} Furthermore, recall $\{i_1,\ldots,i_{k-t}\}=\{1,2,\ldots,k_1\}\setminus\{k_1,\ldots,k_t\}.$ In advance of the statement of the following lemma, we define $\mathcal{I}(\alpha_k)=\mathcal{I}(\alpha_k;l)$ with $1\leq l\leq k-t$ by $$\mathcal{I}(\alpha_k)=\displaystyle\sum_{|g_{i_1}|\leq sX^{i_1}}\cdots\displaystyle\sum_{|g_{i_l}|\leq sX^{i_l}}\displaystyle\oint|f(\alpha_{k},\boldsymbol{\alpha})|^{2s} e(-\boldsymbol{\alpha}^l\cdot\boldsymbol{g})d\boldsymbol{\alpha},$$ where $d\boldsymbol{\alpha}=d\alpha_{k-1}\cdots d\alpha_{1}$ and $\boldsymbol{\alpha}^l\cdot\boldsymbol{g}=\alpha_{i_1}g_{i_1}+\cdots+\alpha_{i_l}g_{i_l}.$ \begin{llll} We have \begin{align*} \displaystyle\oint|F(\alpha_{k},\boldsymbol{\alpha}^{t-1})|^{2s}d\boldsymbol{\alpha}^{t-1} \ll X^{i_{l+1}+i_{l+2}+\cdots+i_{k-t}}\mathcal{I}(\alpha_k). \end{align*} \end{llll} \begin{proof} Denote by $F(\alpha_{k},\boldsymbol{\alpha}^{t-1},\boldsymbol{\beta}^{k-t-l})=F(\alpha_{k_1},\ldots,\alpha_{k_t},\beta_{l+1},\ldots,\beta_{k-t};X)$ the exponential sum $$\displaystyle\sum_{1\leq x \leq X}e(\alpha_{k_1}x^{k_1}+\alpha_{k_2}x^{k_2}+\cdots+\alpha_{k_t}x^{k_t}+\beta_{l+1}x^{i_{l+1}}+\cdots+\beta_{k-t}x^{i_{k-t}}).$$ Furthermore, we denote \begin{equation*} \sigma_{s,j}(\mathbf{x})=\displaystyle\sum_{i=1}^{s}(x_i^j-x_{s+i}^j)\ \ \ \ \ \ (1\leq j\leq k) \end{equation*} and recall $k=k_1.$ We emphasize that in order to suppress multiple layers of suffixes, it is convenient to write $k$ in place of $k_1$ in many places. 
As a preliminary manoeuvre, we represent the mean value involving $F(\alpha_{k},\boldsymbol{\alpha}^{t-1})$ in terms of an analogous one involving $F(\alpha_{k},\boldsymbol{\alpha}^{t-1},\boldsymbol{\beta}^{k-t-l}).$ Observe that when $\mathbf{m}=(m_{l+1},\ldots, m_{k-t})\in {\mathbb Z}^{k-t-l},$ if we define \begin{equation*} G(\alpha_k, \mathbf{m}):=\displaystyle\oint|F(\alpha_{k},\boldsymbol{\alpha}^{t-1},\boldsymbol{\beta}^{k-t-l})|^{2s}e(-\beta_{l+1}m_{l+1}-\cdots-\beta_{k-t}m_{k-t})d\boldsymbol{\beta}^{k-t-l}d\boldsymbol{\alpha}^{t-1}, \end{equation*} then one has \begin{align}\label{99999} \begin{aligned} G(\alpha_k, \mathbf{m}) =\displaystyle\sum_{1\leq\mathbf{x}\leq X}\delta(\mathbf{x}, \mathbf{m})\displaystyle\oint e(\alpha_{k_1}\sigma_{s,k_1}(\mathbf{x})+\cdots+\alpha_{k_t}\sigma_{s,k_t}(\mathbf{x}))d\boldsymbol{\alpha}^{t-1}, \end{aligned} \end{align} where \begin{equation*} \delta(\mathbf{x}, \mathbf{m})=\displaystyle\prod_{j=l+1}^{k-t}\left(\displaystyle\int_0^1e(\beta_{j}(\sigma_{s,i_j}(\mathbf{x})-m_{j}))d\beta_{j} \right). \end{equation*} By orthogonality, one has \begin{equation*} \displaystyle\int_0^1e(\beta_{j}(\sigma_{s,i_j}(\mathbf{x})-m_{j}))d\beta_{j}=\left\{\begin{array}{l} 1,\ \ \textrm{when}\ \sigma_{s,i_j}(\mathbf{x})=m_{j},\\ 0,\ \ \textrm{when}\ \sigma_{s,i_j}(\mathbf{x})\neq m_{j}.\end{array}\right. 
\end{equation*} When $1\leq\mathbf{x}\leq X,$ moreover, one has $|\sigma_{s,i_j}(\mathbf{x})|\leq sX^{i_j}\ (l+1\leq j\leq k-t),$ and so $$\displaystyle\sum_{|m_{l+1}|\leq sX^{i_{l+1}}}\cdots\displaystyle\sum_{|m_{k-t}|\leq sX^{i_{k-t}}}\delta(\mathbf{x}, \mathbf{m})=1.$$ Consequently, on noting that $$\displaystyle\sum_{1\leq \mathbf{x}\leq X}e(\alpha_{k}\sigma_{s,k_1}(\mathbf{x})+\alpha_{k_2}\sigma_{s,k_2}(\mathbf{x})+\cdots+\alpha_{k_t}\sigma_{s,k_t}(\mathbf{x}))=|F(\alpha_{k},\boldsymbol{\alpha}^{t-1})|^{2s},$$ we deduce from ($\ref{99999}$) that \begin{equation}\label{9999} \begin{aligned} &\displaystyle\sum_{|m_{l+1}|\leq sX^{i_{l+1}}}\cdots\displaystyle\sum_{|m_{k-t}|\leq sX^{i_{k-t}}}G(\alpha_k,\mathbf{m})\\ &=\displaystyle\oint\displaystyle\sum_{1\leq\mathbf{x}\leq X}\biggl(\displaystyle\sum_{\mathbf{m}}\delta(\mathbf{x},\mathbf{m})\biggr)e(\alpha_{k}\sigma_{s,k_1}(\mathbf{x})+\alpha_{k_2}\sigma_{s,k_2}(\mathbf{x})+\cdots+\alpha_{k_t}\sigma_{s,k_t}(\mathbf{x}))d\boldsymbol{\alpha}^{t-1}\\ &=\displaystyle\oint|F(\alpha_{k},\boldsymbol{\alpha}^{t-1})|^{2s}d\boldsymbol{\alpha}^{t-1}. \end{aligned} \end{equation} Therefore, it follows from ($\ref{99999}$) and ($\ref{9999}$) with the triangle inequality that \begin{equation}\label{5'} \begin{aligned} & \displaystyle\oint|F(\alpha_{k},\boldsymbol{\alpha}^{t-1})|^{2s}d\boldsymbol{\alpha}^{t-1}\\ &\leq \displaystyle\sum_{|m_{l+1}|\leq sX^{i_{l+1}}}\cdots\displaystyle\sum_{|m_{k-t}|\leq sX^{i_{k-t}}} \displaystyle\oint|F(\alpha_{k},\boldsymbol{\alpha}^{t-1},\boldsymbol{\beta}^{k-t-l})|^{2s}d\boldsymbol{\beta}^{k-t-l}d\boldsymbol{\alpha}^{t-1} \\ &\ll X^{i_{l+1}+i_{l+2}+\cdots+i_{k-t}}\displaystyle\oint|F(\alpha_{k},\boldsymbol{\alpha}^{t-1},\boldsymbol{\beta}^{k-t-l})|^{2s} d\boldsymbol{\beta}^{k-t-l}d\boldsymbol{\alpha}^{t-1}. 
\end{aligned} \end{equation} Next, an argument similar to that used above allows us to show that \begin{equation}\label{5} \begin{aligned} &\displaystyle\oint|F(\alpha_{k},\boldsymbol{\alpha}^{t-1},\boldsymbol{\beta}^{k-t-l})|^{2s} d\boldsymbol{\beta}^{k-t-l}d\boldsymbol{\alpha}^{t-1}\\ &=\displaystyle\sum_{|g_{i_1}|\leq sX^{i_1}}\cdots\displaystyle\sum_{|g_{i_l}|\leq sX^{i_l}}\displaystyle\oint|f(\alpha_{k},\boldsymbol{\alpha})|^{2s} e(-\boldsymbol{\alpha}^l\cdot\boldsymbol{g})d\boldsymbol{\alpha}. \end{aligned} \end{equation} Thus, on substituting ($\ref{5}$) into ($\ref{5'}$), we complete the proof of Lemma 2.1. \end{proof} \bigskip \bigskip In order to describe Lemma 2.2, we require a preliminary step. Observe that by shifting the variable of summation, for each integer $y$ one has \begin{equation}\label{6} f(\alpha_{k},\boldsymbol{\alpha})=\displaystyle\sum_{1+y\leq x\leq X+y}e(\psi(x-y;\alpha_k,\boldsymbol{\alpha})), \end{equation} where $$\psi(z;\alpha_k,\boldsymbol\alpha)=\alpha_1z+\cdots+\alpha_kz^k.$$ But as a consequence of the Binomial Theorem, if we adopt the convention that $\alpha_0=0$, then we may write $\psi(x-y;\alpha_k,\boldsymbol\alpha)$ in the shape $$\psi(x-y;\alpha_k,\boldsymbol{\alpha})=\displaystyle\sum_{i=0}^k\beta_ix^i,$$ where $$\beta_i=\displaystyle\sum_{j=i}^k \binom{j}{i}(-y)^{j-i}\alpha_j\ \ (0\leq i \leq k).$$ Write \begin{equation}\label{2.62.6} K(\gamma)=\displaystyle\sum_{1\leq z \leq X}e(-\gamma z). 
\end{equation} Then we deduce from ($\ref{6}$) that when $1\leq y\leq X$, one has \begin{equation}\label{7} f(\alpha_k,\boldsymbol{\alpha})=\displaystyle\int_0^1f_y(\alpha_k,\boldsymbol{\alpha};\gamma)K(\gamma)d\gamma, \end{equation} where we have written $$f_y(\alpha_k,\boldsymbol{\alpha}; \gamma)=\displaystyle\sum_{1\leq x\leq 2X}e(\psi(x-y;\alpha_k,\boldsymbol{\alpha})+\gamma(x-y)).$$ Define \begin{equation*}\label{2.72.7} \mathcal{F}_y(\alpha_k,\boldsymbol{\alpha};\boldsymbol{\gamma})=\displaystyle\prod_{i=1}^{s}f_y(\alpha_k,\boldsymbol{\alpha};\gamma_i)f_y(-\alpha_k,-\boldsymbol{\alpha};-\gamma_{s+i}), \end{equation*} and $$\omega_{y,\boldsymbol{\gamma}}=e(-(\gamma_1+\cdots+\gamma_s-\gamma_{s+1}-\cdots-\gamma_{2s})y)=e(-\Gamma y).$$ To facilitate the statement of Lemma 2.2, it is convenient to introduce some notation. Recall $\{i_1,\ldots,i_{k-t}\}=\{1,2,\ldots,k_1\}\setminus\{k_1,\ldots,k_t\}.$ Furthermore, we adopt the notation $\alpha_i=0$ for $i\notin \{1,\ldots, k\}.$ Then, we define the exponential sum $\Xi(\alpha_k,\boldsymbol{\alpha})=\Xi(\alpha_k,\boldsymbol{\alpha};l;\boldsymbol{\gamma})$ with $1\leq l\leq k-t$ by \begin{equation*}\label{2.112.11} \Xi(\alpha_k,\boldsymbol{\alpha}) =X^{-1}\displaystyle\sum_{1\leq y\leq X}\displaystyle\sum_{|h_{i_1}|\leq sX^{i_1}}\cdots\displaystyle\sum_{|h_{i_l}|\leq sX^{i_l}}\omega_{y,\boldsymbol{\gamma}}e\biggl(-\displaystyle\sum_{ m=0}^{k-i_l}\delta_{m} y^{m}\biggr), \end{equation*} where \begin{equation}\label{2.82.8} \delta_{m}=\displaystyle\sum_{n=1}^l\alpha_{m+i_n}\binom{m+i_n}{i_n}h_{i_n}. \end{equation} Therefore, on recalling the definition of $\mathcal{I}(\alpha_k)$ in the statement of Lemma 2.1, we have the following lemma. 
\begin{llll} We have \begin{equation*} \mathcal{I}(\alpha_k)\ll \displaystyle\oint \displaystyle\oint \mathcal{F}_0(\alpha_k,\boldsymbol{\alpha};\boldsymbol{\gamma})\Xi(\alpha_k,\boldsymbol{\alpha})\tilde{K}(\boldsymbol{\gamma})d\boldsymbol{\alpha}d\boldsymbol{\gamma}, \end{equation*} where $\tilde{K}(\boldsymbol{\gamma})=\displaystyle\prod_{i=1}^s K(\gamma_i)K(-\gamma_{s+i}).$ \end{llll} \begin{proof} On substituting ($\ref{7}$) into $\mathcal{I}(\alpha_k)$, we deduce that when $1\leq y\leq X$, one has \begin{equation}\label{8} \begin{aligned} \mathcal{I}(\alpha_k)=\displaystyle\sum_{|g_{i_1}|\leq sX^{i_1}}\cdots\displaystyle\sum_{|g_{i_l}|\leq sX^{i_l}}\displaystyle\oint I_{\boldsymbol{g}}(\boldsymbol{\gamma},y)\tilde{K}(\boldsymbol{\gamma})d\boldsymbol{\gamma}, \end{aligned} \end{equation} where \begin{equation}\label{9} I_{\boldsymbol{g}}(\boldsymbol{\gamma},y)=\displaystyle\oint\mathcal{F}_y(\alpha_k,\boldsymbol{\alpha};\boldsymbol{\gamma})e(-\boldsymbol{\alpha}^l\cdot \boldsymbol{g})d\boldsymbol{\alpha}. \end{equation} By orthogonality, one finds that \begin{equation}\label{10} \displaystyle\oint\mathcal{F}_y(\alpha_k,\boldsymbol{\alpha};\boldsymbol{\gamma})e(-\boldsymbol{\alpha}^l\cdot \boldsymbol{g})d\boldsymbol{\alpha}=\displaystyle\sum_{1\leq \mathbf{x}\leq 2X}\Delta(\alpha_k,\boldsymbol{\gamma},\boldsymbol{g},y), \end{equation} where $\Delta(\alpha_k,\boldsymbol{\gamma},\boldsymbol{g},y)$ is equal to $$e\biggl(\displaystyle\sum_{i=1}^s(\alpha_k((x_{i}-y)^k-(x_{s+i}-y)^k)+\gamma_i(x_i-y)-\gamma_{s+i}(x_{s+i}-y))\biggr),$$ when \begin{equation}\label{2.102.10} \displaystyle\sum_{i=1}^s((x_i-y)^j-(x_{s+i}-y)^j)=h_j\ \textrm{with}\ 1\leq j\leq k-1, \end{equation} in which $h_j=g_{j}$ when $j\in \{i_1,\ldots,i_l\}$, and $h_j=0$ when $j\notin \{i_1,\ldots,i_l\}$. 
Otherwise, one finds that $\Delta(\alpha_k,\boldsymbol{\gamma},\boldsymbol{g},y)=0.$ By applying the Binomial Theorem within $(\ref{2.102.10})$, we have \begin{equation}\label{13} \displaystyle\sum_{i=1}^s(x_i^j-x_{s+i}^j)=\displaystyle\sum_{l=1}^{j}\binom{j}{l}h_ly^{j-l}\ \ \ (1\leq j\leq k-1), \end{equation} and \begin{equation}\label{15} \displaystyle\sum_{i=1}^s(x_i^k-x_{s+i}^k)=\displaystyle\sum_{l=1}^{k-1}\binom{k}{l}h_ly^{k-l}+\displaystyle\sum_{i=1}^s((x_i-y)^k-(x_{s+i}-y)^k). \end{equation} By orthogonality, one infers from ($\ref{10}$), ($\ref{13}$) and ($\ref{15}$) that, on putting $h_k=0$, \begin{equation*} \begin{aligned} \displaystyle\oint\mathcal{F}_y(\alpha_k,\boldsymbol{\alpha};\boldsymbol{\gamma})e(-\boldsymbol{\alpha}^l\cdot \boldsymbol{g})d\boldsymbol{\alpha}= \omega_{y,\boldsymbol{\gamma}}\displaystyle\oint \mathcal{F}_0(\alpha_k,\boldsymbol{\alpha};\boldsymbol{\gamma})e\biggl(-\displaystyle\sum_{j=1}^k\alpha_{j}\biggl(\displaystyle\sum_{ l=1}^j\binom{j}{l}h_ly^{j-l}\biggr)\biggr)d\boldsymbol{\alpha}, \end{aligned} \end{equation*} where $\omega_{y,\boldsymbol{\gamma}}=e(-\Gamma y)$ in which $\Gamma=\gamma_1+\cdots+\gamma_s-\gamma_{s+1}-\cdots-\gamma_{2s}.$ We now collect together terms corresponding to each power of $y$. On recalling that $h_n=0$ when $n\notin \{i_1,\ldots,i_l\}$, and since $j\leq k$, we see that the highest power of $y$ that occurs is $k-i_l$. Furthermore, on recalling that $\alpha_j=0$ for $j\notin\{1,\ldots,k\}$ and the definition ($\ref{2.82.8}$) of $\delta_m,$ we find that \begin{equation}\label{2.172.17} \begin{aligned} \displaystyle\sum_{ j=1}^k\alpha_{j}\biggl(\displaystyle\sum_{ l=1}^j\binom{j}{l}y^{j-l}h_l\biggr)&=\displaystyle\sum_{ m=0}^{k-i_l}\biggl(\displaystyle\sum_{n=1}^l\alpha_{m+i_n}\binom{m+i_n}{i_n}h_{i_n}\biggr)y^{m}=\displaystyle\sum_{ m=0}^{k-i_l}\delta_my^{m}. 
\end{aligned} \end{equation} Since $\alpha_{m+i_n}=0$ for $m+i_n>k,$ it is worth noting that terms with $i_n> k-m$ make no contribution to $\delta_m.$ From here, we are led from ($\ref{9}$) to the relation \begin{equation*} \begin{aligned} &\displaystyle\sum_{|g_{i_1}|\leq sX^{i_1}}\cdots\displaystyle\sum_{|g_{i_l}|\leq sX^{i_l}} I_{\boldsymbol{g}}(\boldsymbol{\gamma},y)\\ &=\displaystyle\oint \mathcal{F}_0(\alpha_k,\boldsymbol{\alpha};\boldsymbol{\gamma})\displaystyle\sum_{|h_{i_1}|\leq sX^{i_1}}\cdots\displaystyle\sum_{|h_{i_l}|\leq sX^{i_l}}\omega_{y,\boldsymbol{\gamma}}e\biggl(-\displaystyle\sum_{ m=0}^{k-i_l}\delta_{m}y^{m}\biggr)d\boldsymbol{\alpha}. \end{aligned} \end{equation*} Since this relation holds for each $y$ in $[1,X]$, on averaging over $y$ we deduce that \begin{equation}\label{17} \begin{aligned} X^{-1}\displaystyle\sum_{1\leq y\leq X}\displaystyle\sum_{|g_{i_1}|\leq sX^{i_1}}\cdots\displaystyle\sum_{|g_{i_l}|\leq sX^{i_l}} I_{\boldsymbol{g}}(\boldsymbol{\gamma},y)=\displaystyle\oint \mathcal{F}_0(\alpha_k,\boldsymbol{\alpha};\boldsymbol{\gamma})\Xi(\alpha_k,\boldsymbol{\alpha})d\boldsymbol{\alpha}. \end{aligned} \end{equation} Therefore, from ($\ref{8}$) and $(\ref{17})$, we conclude that \begin{equation*}\label{2.17} \begin{aligned} \mathcal{I}(\alpha_k) &\ll X^{-1}\displaystyle\sum_{1\leq y\leq X}\mathcal{I}(\alpha_k)\\ & =X^{-1}\displaystyle\sum_{1\leq y\leq X}\displaystyle\sum_{|g_{i_1}|\leq sX^{i_1}}\cdots\displaystyle\sum_{|g_{i_l}|\leq sX^{i_l}}\displaystyle\oint I_{\boldsymbol{g}}(\boldsymbol{\gamma},y)\tilde{K}(\boldsymbol{\gamma})d\boldsymbol{\gamma}\\ &=\displaystyle\oint \displaystyle\oint \mathcal{F}_0(\alpha_k,\boldsymbol{\alpha};\boldsymbol{\gamma})\Xi(\alpha_k,\boldsymbol{\alpha})\tilde{K}(\boldsymbol{\gamma})d\boldsymbol{\alpha}d\boldsymbol{\gamma}. 
\end{aligned} \end{equation*} \end{proof} \bigskip We recall that \begin{equation*} \Xi(\alpha_k,\boldsymbol{\alpha}) =X^{-1}\displaystyle\sum_{1\leq y\leq X}\displaystyle\sum_{|h_{i_1}|\leq sX^{i_1}}\cdots\displaystyle\sum_{|h_{i_l}|\leq sX^{i_l}}\omega_{y,\boldsymbol{\gamma}}e\biggl(-\displaystyle\sum_{ m=0}^{k-i_l}\delta_{m} y^{m}\biggr), \end{equation*} where \begin{equation*} \delta_{m}=\displaystyle\sum_{n=1}^l\alpha_{m+i_n}\binom{m+i_n}{i_n}h_{i_n}. \end{equation*} Furthermore, recall that $|\alpha_k-a/q|\leq q^{-2}$ with $(q,a)=1$ in the hypothesis of Theorem 1.3. We provide an upper bound for $\Xi(\alpha_k,\boldsymbol{\alpha})$ in terms of $q$, obtaining savings from all of the summations over $h_{i_1},\ldots, h_{i_l}.$ \begin{llll} We have $$\Xi(\alpha_k,\boldsymbol{\alpha})\ll X^{i_1+\cdots+i_l+\epsilon}\left(\displaystyle\prod_{j=1}^l\left(q^{-1}+X^{-i_j}+{X^{-k+i_j}}+qX^{-k}\right)\right)^{1/((k-i_l)(k-i_l+1))}.$$ \end{llll} In the proof of Lemma 2.3, we bound $\Xi(\alpha_k,\boldsymbol{\alpha})$ by mean value type estimates, which we in turn handle by means of Vinogradov's mean value theorem. The argument described here applies to all possible arrangements of exponents $\mathbf{k}=(k_1,\ldots,k_t)$ with $t<k$. In particular, this argument is useful in the case $k_1-1=k_2$ and $t<k_1/2.$ Even in the case $k_1-1>k_2$, experts will recognize that by taking $l=1$ the sum $\Xi(\alpha_k,\boldsymbol{\alpha})$ becomes an exponential sum with a phase linear in $y$, and in this case a variant of our arguments coincides with the proof of [$\ref{ref12}$, Theorem 1.3] and [$\ref{ref15}$, Theorem 14.4]. 
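For orientation, we record the shape of the bound in Lemma 2.3 in the simplest admissible case $l=1$ and $i_1=1$, so that $i_l=1$ and the exponent is $1/(k(k-1))$; this is merely a specialisation of the statement of the lemma: $$\Xi(\alpha_k,\boldsymbol{\alpha})\ll X^{1+\epsilon}\left(q^{-1}+X^{-1}+X^{-k+1}+qX^{-k}\right)^{1/(k(k-1))}.$$ In particular, when $X<q<X^{k-1}$ the bracketed expression is $O(X^{-1})$, so that one saves a factor $X^{-1/(k(k-1))}$ over the trivial bound $\Xi(\alpha_k,\boldsymbol{\alpha})\ll X^{i_1}=X$.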
\begin{proof}[Proof of Lemma 2.3] On recalling that $\omega_{y,\boldsymbol{\gamma}}=e(-\Gamma y),$ we may rewrite the summands in $\Xi(\alpha_k,\boldsymbol{\alpha})$ as $e(-\sum_{m=0}^{k-i_l}\delta_{m}'y^{m}),$ where $\delta'_{n}=\delta_n\ (n\neq 1)$ and $\delta'_1=\delta_1+\Gamma.$ Define $$S^*(\boldsymbol{\delta};X)=\sup_{I\subseteq [1,X]}\left|\displaystyle\sum_{ y\in I}e\biggl(-\displaystyle\sum_{1\leq m\leq k-i_l}\delta_{m}'y^{m}\biggr)\right|$$ where $I$ runs over all intervals in $[1,X].$ In particular, we write $S(\boldsymbol{\delta};X)$ for the sum with $I=[1,X]$. Here and later, we put $2p=(k-i_l)(k-i_l+1).$ Define \begin{equation}\label{19} \Upsilon_{p}(\boldsymbol{\delta};X)=\displaystyle\sum_{|h_{i_1}|\leq sX^{i_1}}\cdots\displaystyle\sum_{|h_{i_l}|\leq sX^{i_l}}\left|S^*(\boldsymbol{\delta};X)\right|^{2p}. \end{equation} Then, by applying H\"older's inequality to $\Xi(\alpha_k,\boldsymbol{\alpha})$, we have \begin{equation}\label{2.18} \begin{aligned} \Xi(\alpha_k,\boldsymbol{\alpha}) \leq X^{-1}(\Upsilon_{p}(\boldsymbol{\delta};X))^{1/(2p)}X^{(i_1+\cdots+i_l)\left(1-1/(2p)\right)}. \end{aligned} \end{equation} We first analyze $\Upsilon_{p}(\boldsymbol{\delta};X)$. Define $\Omega(X)$ to be the box $A_{1}\times A_{2}\times\cdots\times A_{k-i_l},$ where \begin{equation*} A_n:=A_n(\boldsymbol{\delta})=\{\theta_{n}\in [0,1): \|\delta_{n}'-\theta_{n}\|\leq 1/(4kX^{n})\}. \end{equation*} Then, by [$\ref{ref5}$, Lemma 1], one infers that \begin{equation}\label{21} S^*(\boldsymbol{\delta};X)^{2p} \ll\displaystyle (\text{vol}(\Omega(X)))^{-1} \displaystyle\int_{A_{1}}\displaystyle\int_{A_{2}}\cdots\displaystyle\int_{A_{k-i_l}} S^*(\boldsymbol{\theta};X)^{2p}d\boldsymbol{\theta}. \end{equation} Recall the definition of $\delta_n$ and the remark following ($\ref{2.172.17}$). Then, we see that $\delta_{k-i_j}$ is a linear combination of $h_{i_j},\ldots,h_{i_l}$. 
We define the quantity $H_l(\boldsymbol{\theta})$ to be the number of solutions $(h_{i_1},h_{i_2},\ldots,h_{i_l})$ with $|h_{i_j}|\leq sX^{i_j}$ of the system \begin{equation*} \|\delta_{n}'-\theta_{n}\|\leq 1/(4kX^{n})\ \ \ \ (n=k-i_1,k-i_2,\ldots,k-i_l), \end{equation*} and put \begin{equation*} H_l=\sup_{\boldsymbol{\theta}\in [0,1)^l}H_l(\boldsymbol{\theta}). \end{equation*} Therefore, on substituting ($\ref{21}$) into ($\ref{19}$), and expanding $A_j$ to $[0,1)$ for $$j\notin \{k-i_1,k-i_2,\ldots,k-i_l\},$$ we obtain the bound \begin{equation*} \begin{aligned} &\Upsilon_{p}(\boldsymbol{\delta};X)\\ &\ll\displaystyle(\text{vol}(\Omega(X)))^{-1}\displaystyle\int_0^1\cdots\displaystyle\int_0^1\displaystyle\sum_{|h_{i_1}|\leq sX^{i_1}}\cdots\displaystyle\sum_{|h_{i_l}|\leq sX^{i_l}} \displaystyle\int_{A_{k-i_1}}\displaystyle\int_{A_{k-i_2}}\cdots\displaystyle\int_{A_{k-i_l}} S^*(\boldsymbol{\theta};X)^{2p}d\boldsymbol{\theta}.\\ \end{aligned} \end{equation*} Since $(\text{vol}(\Omega(X)))^{-1}\asymp X^{1+\cdots+(k-i_l)}$ and by the definition of $H_l$, we infer that \begin{equation}\label{22} \begin{aligned} \Upsilon_{p}(\boldsymbol{\delta};X)\ll& X^{1+\cdots+(k-i_l)}H_l\displaystyle\int_0^1\cdots\displaystyle\int_0^1S^*(\boldsymbol{\theta};X)^{2p}d\boldsymbol{\theta}. \end{aligned} \end{equation} To bound $H_l$, we first analyse $H_l(\boldsymbol{\theta})$. Recall again the definition of $\delta_m$ and the remark following ($\ref{2.172.17}$). Then, we have $$ \delta_{k-i_j}=\alpha_k\binom{k}{i_j}h_{i_j}+\displaystyle\sum_{n=j+1}^{l}\alpha_{k-i_j+i_n}\binom{k-i_j+i_n}{i_n}h_{i_n},$$ for all $j=1,\ldots, l.$ Recall that $\delta'_{k-i_j}=\delta_{k-i_j}+\Gamma $ for $k-i_j= 1$, and $\delta'_{k-i_j}=\delta_{k-i_j},$ otherwise. 
Meanwhile, by [$\ref{ref5}$, Lemma 3], when $m\in {\mathbb N}$, $\alpha,\beta\in {\mathbb R}$ and $|\alpha-a/q|\leq q^{-2}$, the number of solutions of $$\|m\alpha x+\beta\|\leq 1/Y,$$ with $|x|\leq X$, is at most $(1+4q/Y)(1+4mX/q).$ Apply this with $\alpha=\alpha_k$, where $|\alpha_k-a/q|\leq q^{-2}$, with $m=\binom{k}{i_j},$ and with $sX^{i_j}$ and $4kX^{k-i_j}$ in place of $X$ and $Y$, respectively. Then, for fixed $h_{i_{j+1}},\ldots, h_{i_l}$, the number of solutions $h_{i_j}$ of $$\|\delta_{k-i_j}'-\theta_{k-i_j}\|\leq 1/(4kX^{k-i_j}),$$ with $|h_{i_j}|\leq sX^{i_j},$ is $\ll X^{i_j}(q^{-1}+X^{-i_j}+X^{-(k-i_j)}+qX^{-k}).$ Proceeding in descending order $j=l,l-1,\ldots,1$, we infer that \begin{equation}\label{27} H_l(\boldsymbol{\theta})\ll X^{i_1+i_2+\cdots+i_l}\displaystyle\prod_{j=1}^l\left(q^{-1}+X^{-i_j}+{X^{-k+i_j}}+qX^{-k}\right). \end{equation} By taking the supremum over $\boldsymbol{\theta},$ we may replace $H_l(\boldsymbol{\theta})$ with $H_l$ in $(\ref{27}).$ For concision, we write \begin{equation}\label{2.242.24} R_l=\displaystyle\prod_{j=1}^l\left(q^{-1}+X^{-i_j}+{X^{-k+i_j}}+qX^{-k}\right). \end{equation} Therefore, from ($\ref{22}$) and ($\ref{27}$), one has, on applying the Carleson-Hunt theorem, \begin{align*} \Upsilon_{p}(\boldsymbol{\delta};X)&\ll X^{1+\cdots+(k-i_l)}H_l\displaystyle\oint S^*(\boldsymbol{\theta};X)^{2p}d\boldsymbol{\theta}\\ &\ll X^{i_1+i_2+\cdots+i_l}R_lX^{1+\cdots+(k-i_l)}\displaystyle\oint S(\boldsymbol{\theta};X)^{2p}d\boldsymbol{\theta}. \end{align*} Hence, by Vinogradov's mean value theorem, the last expression is $O(X^{2p+\epsilon}X^{i_1+i_2+\cdots+i_l}R_l).$ Consequently, by ($\ref{2.18}$), we see that \begin{equation}\label{28} \Xi(\alpha_k,\boldsymbol{\alpha})\ll X^{i_1+i_2+\cdots+i_l+\epsilon}R_l^{1/(2p)}. \end{equation} On recalling the definition of $R_l$, we complete the proof of Lemma 2.3. \end{proof} \subsection{Proof of Theorem 1.3} \begin{proof} We combine all lemmas in section 2.1 to prove Theorem 1.3. 
On recalling ($\ref{2.242.24}$) and $2p=(k-i_l)(k-i_l+1)$, by Lemma 2.2 and Lemma 2.3, we have \begin{equation}\label{2.272.27} \mathcal{I}(\alpha_k)\ll X^{i_1+\cdots+i_l+\epsilon}R_l^{1/(2p)}\displaystyle\oint \displaystyle\oint \mathcal{F}_0(\alpha_k,\boldsymbol{\alpha};\boldsymbol{\gamma})\tilde{K}(\boldsymbol{\gamma})d\boldsymbol{\alpha}d\boldsymbol{\gamma}. \end{equation} Meanwhile, by applying H\"older's inequality and a change of variable, one sees that \begin{equation}\label{2.282.28} \displaystyle\oint \mathcal{F}_0(\alpha_k,\boldsymbol{\alpha};\boldsymbol{\gamma})d\boldsymbol{\alpha}\leq \sup_{\gamma\in [0,1)}\displaystyle\oint |f_0(\alpha_k,\boldsymbol{\alpha};\gamma)|^{2s}d\boldsymbol{\alpha}=\displaystyle\oint |f(\alpha_k,\boldsymbol{\alpha})|^{2s}d\boldsymbol{\alpha}. \end{equation} Furthermore, on recalling $(\ref{2.62.6}),$ we find that $$ \displaystyle\int_0^1|K(\gamma)|d\gamma\leq \displaystyle\int_0^1 \min\{X,\|\gamma\|^{-1}\}d\gamma\ll \log X,$$ and hence \begin{equation}\label{2.292.29} \displaystyle\oint|\tilde{K}(\boldsymbol{\gamma})|d\boldsymbol{\gamma}\ll (\log X)^{2s}. \end{equation} On substituting ($\ref{2.282.28}$) and ($\ref{2.292.29}$) into the right-hand side of ($\ref{2.272.27}$), we find that $$ \mathcal{I}(\alpha_k)\ll X^{i_1+\cdots+i_l+\epsilon}R_l^{1/(2p)}\displaystyle\oint |f(\alpha_k,\boldsymbol{\alpha})|^{2s}d\boldsymbol{\alpha}.$$ Therefore, we conclude from Lemma 2.1 that $$\displaystyle\oint|F(\alpha_{k_1},\boldsymbol{\alpha}^{t-1})|^{2s}d\boldsymbol{\alpha}^{t-1}\ll R_l^{1/(2p)} X^{i_1+i_2+\cdots+i_{k-t}+\epsilon}\displaystyle\oint \left|f(\alpha_k,\boldsymbol{\alpha})\right|^{2s}d\boldsymbol{\alpha}.$$ \end{proof} \section{Proof of Theorem 1.4} In this section, we provide the proof of Theorem 1.4. In the previous section, we obtained the mean value over all coefficients but the leading coefficient. Thus, Theorem 1.4 follows by integrating over $\alpha_k$ lying on the major arcs and on the minor arcs. 
To be specific, the minor arc estimates in Theorem 1.4 $(\romannumeral2)$ follow immediately from Theorem 1.3 and Diophantine approximation of the leading coefficient. For the major arc estimates in Theorem 1.4 $(\romannumeral1)$, we use a consequence of [$\ref{ref14}$, Theorem 14.4] with applications of H\"older's inequality. \bigskip \begin{proof}[Proof of Theorem 1.4] It follows from ($\ref{28}$) with $2p=(k-i_l)(k-i_l+1)$ that whenever $|\alpha_k-a/q|\leq q^{-2},$ one has \begin{equation}\label{30} \begin{aligned} \Xi(\alpha_k,\boldsymbol{\alpha})&\ll X^{i_1+i_2+\cdots+i_l+\epsilon}\left(\displaystyle\prod_{j=1}^l\left(q^{-1}+X^{-i_j}+{X^{-k+i_j}}+qX^{-k}\right)\right)^{1/(2p)}\\ &\ll X^{i_1+i_2+\cdots+i_l+\epsilon}\left(q^{-1}+X^{-1}+q{X^{-k}}\right)^{\sigma}, \end{aligned} \end{equation} where \begin{align*} \sigma= \frac{l}{(k-i_l)(k-i_l+1)}. \end{align*} We first provide estimates for the major arcs. Assume that $\alpha_k\in \mathfrak{M}$. Then, by recalling the definition of $\mathfrak{M}$ and by applying the transference principle [$\ref{ref28}$, Theorem 14.1], one deduces that whenever $b\in {\mathbb Z}$ and $r\in{\mathbb N}$ satisfy $(b,r)=1$ and $|\alpha_k-b/r|\leq r^{-2}$, one has \begin{equation*} \Xi(\alpha_k,\boldsymbol{\alpha})\ll X^{i_1+i_2+\cdots+i_l+\epsilon}(\lambda^{-1}+X^{-1}+\lambda X^{-k})^{\sigma}, \end{equation*} where $\lambda=r+X^k|r\alpha_k-b|.$ Moreover, when $\alpha_k\in \mathfrak{M}(r,b)\subseteq\mathfrak{M},$ one has $r\leq X$ and $X^k|r\alpha_k-b|\leq X$, so that $\lambda\leq 2X.$ Therefore, we see that one has \begin{equation*} \Xi(\alpha_k,\boldsymbol{\alpha})\ll X^{i_1+i_2+\cdots+i_l+\epsilon}\Psi(\alpha_k), \end{equation*} where $\Psi(\alpha_k)$ is the function taking the value $(q+X^k|q\alpha_k-a|)^{-\sigma},$ when one has $\alpha_k\in \mathfrak{M}(q,a)\subseteq \mathfrak{M},$ and otherwise $\Psi(\alpha_k)=0.$ Hence, one has \begin{equation}\label{eq373737} \begin{aligned} \displaystyle\int_{\mathfrak{M}}\displaystyle\oint 
|f(\alpha_k,\boldsymbol{\alpha})|^{2s} \Xi(\alpha_k,\boldsymbol{\alpha})d\boldsymbol{\alpha}d\alpha_k\ll X^{i_1+\cdots+i_l+\epsilon}\displaystyle\int_{\mathfrak{M}}\displaystyle\oint |f(\alpha_k,\boldsymbol{\alpha})|^{2s} \Psi(\alpha_k)d\boldsymbol{\alpha}d\alpha_k. \end{aligned} \end{equation} Let us first assume that $2s\geq k(k+1).$ Then, since $\Psi(\alpha_k)\leq 1$, one finds by Vinogradov's mean value theorem that \begin{equation}\label{32'} \displaystyle\int_{\mathfrak{M}}\displaystyle\oint |f(\alpha_k,\boldsymbol{\alpha})|^{2s} \Psi(\alpha_k)d\boldsymbol{\alpha}d\alpha_k\ll \displaystyle\int_{0}^1\displaystyle\oint |f(\alpha_k,\boldsymbol{\alpha})|^{2s} d\boldsymbol{\alpha}d\alpha_k\ll X^{2s-k(k+1)/2+\epsilon}. \end{equation} Next, let us assume that $k^2+(1-2\sigma)k+2\sigma\leq 2s< k(k+1)$. By applying H\"older's inequality, one obtains that \begin{equation}\label{32} \begin{aligned} & \displaystyle\int_{\mathfrak{M}}\displaystyle\oint |f(\alpha_k,\boldsymbol{\alpha})|^{2s} \Psi(\alpha_k)d\boldsymbol{\alpha}d\alpha_k\\ & \ll \left(\displaystyle\int_{\mathfrak{M}}\displaystyle\oint |f(\alpha_k,\boldsymbol{\alpha})|^{2s_0} \Psi(\alpha_k)^{\frac{1}{\sigma}}d\boldsymbol{\alpha}d\alpha_k\right)^{\sigma}\left(\displaystyle\int_{\mathfrak{M}}\displaystyle\oint |f(\alpha_k,\boldsymbol{\alpha})|^{k(k+1)} d\boldsymbol{\alpha}d\alpha_k\right)^{1-\sigma}, \end{aligned} \end{equation} with $s_0=(2s-k(k+1)(1-\sigma))/(2\sigma)$. 
Notice from the range of $2s$ that $k(k-1)\leq 2s_0\leq k(k+1).$ As a consequence of [$\ref{ref6}$, Lemma 2], one finds that, when $2s_0$ is an even number, \begin{equation}\label{3.63.6} \displaystyle\int_{\mathfrak{M}}\displaystyle\oint |f(\alpha_k,\boldsymbol{\alpha})|^{2s_0} \Psi(\alpha_k)^{\frac{1}{\sigma}}d\boldsymbol{\alpha}d\alpha_k\ll X^{\epsilon-k}(XI_1+I_2), \end{equation} where \begin{equation*} I_1=\displaystyle\int_0^1\displaystyle\oint|f(\alpha_k,\boldsymbol{\alpha})|^{2s_0}d\boldsymbol{\alpha}d\alpha_k,\ \textrm{and}\ I_2=\displaystyle\oint |f(0,\boldsymbol{\alpha})|^{2s_0}d\boldsymbol{\alpha}. \end{equation*} By Vinogradov's mean value theorem, whenever $k(k-1)\leq 2s_0\leq k(k+1),$ we have $I_1\ll X^{s_0+\epsilon}.$ On the other hand, when $2s_0\geq k(k-1),$ we have $ I_2\ll X^{2s_0-k(k-1)/2+\epsilon}.$ Thus, for all even numbers $2s_0$ with $k(k-1)\leq 2s_0\leq k(k+1)$, we find from ($\ref{3.63.6}$) that \begin{equation}\label{33} \begin{aligned} \displaystyle\int_{\mathfrak{M}}\displaystyle\oint |f(\alpha_k,\boldsymbol{\alpha})|^{2s_0} \Psi(\alpha_k)^{\frac{1}{\sigma}}d\boldsymbol{\alpha}d\alpha_k\ll X^{s_0-k+1+\epsilon}+X^{2s_0-k(k+1)/2+\epsilon}. \end{aligned} \end{equation} Notice here that the two terms in the bound ($\ref{33}$) coincide when $2s_0=k^2-k+2$, which is an even number. Thus, by interpolation between even numbers $2s_0$, one finds that ($\ref{33}$) also holds for any real number $2s_0$ between $k(k-1)$ and $k(k+1).$ On substituting ($\ref{33}$) into ($\ref{32}$) and applying Vinogradov's mean value theorem, one has \begin{equation*} \begin{aligned} \displaystyle\int_{\mathfrak{M}}\displaystyle\oint |f(\alpha_k,\boldsymbol{\alpha})|^{2s} \Psi(\alpha_k)d\boldsymbol{\alpha}d\alpha_k&\ll \left(X^{s_0-k+1+\epsilon}+X^{2s_0-k(k+1)/2+\epsilon}\right)^{\sigma}(X^{k(k+1)/2})^{1-\sigma}. 
\end{aligned} \end{equation*} Since we have $2s_0\sigma+k(k+1)(1-\sigma)=2s,$ this bound is seen to be $$ X^{s-\sigma(k-1)+\epsilon}+X^{2s-k(k+1)/2+\epsilon}.$$ Furthermore, since $2s\geq k^2+(1-2\sigma)k+2\sigma,$ this bound can be replaced by $O(X^{2s-k(k+1)/2+\epsilon}).$ Thus, one concludes that whenever $k^2+(1-2\sigma)k+2\sigma\leq 2s<k(k+1)$, one has \begin{equation}\label{36'} \displaystyle\int_{\mathfrak{M}}\displaystyle\oint |f(\alpha_k,\boldsymbol{\alpha})|^{2s} \Psi(\alpha_k)d\boldsymbol{\alpha}d\alpha_k\ll X^{2s-k(k+1)/2+\epsilon}. \end{equation} Thus, by ($\ref{eq373737}$), ($\ref{32'}$) and ($\ref{36'}$), whenever $2s\geq k^2+(1-2\sigma)k+2\sigma$, we find that \begin{equation*} \displaystyle\int_{\mathfrak{M}}\displaystyle\oint |f(\alpha_k,\boldsymbol{\alpha})|^{2s} \Xi(\alpha_k,\boldsymbol{\alpha})d\boldsymbol{\alpha}d\alpha_k\ll X^{i_1+\cdots+i_l+\epsilon}X^{2s-k(k+1)/2+\epsilon}. \end{equation*} Then, on recalling the definition of $\mathcal{F}_0(\alpha_k,\boldsymbol{\alpha};\boldsymbol{\gamma})$, it follows from H\"older's inequality and a change of variable that \begin{equation*} \displaystyle\int_{\mathfrak{M}} \displaystyle\oint \mathcal{F}_0(\alpha_k,\boldsymbol{\alpha};\boldsymbol{\gamma})\Xi(\alpha_k,\boldsymbol{\alpha})d\boldsymbol{\alpha}d\alpha_k\ll X^{i_1+\cdots+i_l+\epsilon}X^{2s-k(k+1)/2+\epsilon}. 
\end{equation*} Consequently, combining this with Lemma 2.1 and Lemma 2.2, we deduce that \begin{align*} &\displaystyle\int_{\mathfrak{M}}\displaystyle\oint\left|F(\alpha_{k_1},\boldsymbol{\alpha}^{t-1})\right|^{2s}d\boldsymbol{\alpha}^{t-1}d\alpha_{k_1}\\ &\ll X^{i_{l+1}+\cdots+i_{k-t}}\displaystyle\int_{\mathfrak{M}}\mathcal{I}(\alpha_k)d\alpha_k\\ &\ll X^{i_{l+1}+\cdots+i_{k-t}}\displaystyle\int_{\mathfrak{M}}\displaystyle\oint\displaystyle\oint\mathcal{F}_0(\alpha_k,\boldsymbol{\alpha};\boldsymbol{\gamma})\Xi(\alpha_k,\boldsymbol{\alpha})\tilde{K}(\boldsymbol{\gamma})d\boldsymbol{\alpha}d\boldsymbol{\gamma}d\alpha_k\\ &\ll X^{2s-D+\epsilon}, \end{align*} where we have used $(\ref{2.292.29})$. Next, we provide estimates for the minor arcs. When $\alpha_k\in \mathfrak{m}$, there exist $q$ and $a$ with $(q,a)=1$ such that $|\alpha_k-a/q|\leq (2k)^{-1}q^{-1}X^{-k+1}$ with $X<q<X^{k-1}.$ Thus, on recalling ($\ref{30}$), when $\alpha_k\in \mathfrak{m},$ we deduce that $\Xi(\alpha_k,\boldsymbol{\alpha})\ll X^{i_1+\cdots+i_{l}-\sigma+\epsilon}.$ Therefore, by applying Theorem 1.3 together with Vinogradov's mean value theorem, whenever $2s\geq k_1(k_1+1)$ one has \begin{align*} \displaystyle\int_{\mathfrak{m}}\displaystyle\oint\left|F(\alpha_{k_1},\boldsymbol{\alpha}^{t-1})\right|^{2s}d\boldsymbol{\alpha}d\alpha_{k_1}&\ll X^{i_1+\cdots+i_{k-t}-\sigma+\epsilon}\displaystyle\int_{0}^1\displaystyle\oint\left|f(\alpha_k,\boldsymbol{\alpha})\right|^{2s}d\boldsymbol{\alpha}d\alpha_{k_1}\\ &\ll X^{2s-D-\sigma+\epsilon}. \end{align*} Therefore, by taking the value of $l$ that maximizes the exponent $\sigma,$ the conclusion of Theorem 1.4 follows. \end{proof} \bigskip \section{Proof of Theorem 1.1} \bigskip In this section, we provide Theorem 4.1, which is more quantitative than Theorem 1.1. It is worth noting that Theorem 1.1 immediately follows from Theorem 4.1. The main ingredients of the proof in this section are the arguments in [$\ref{ref13}$, Theorem 1.3]. 
Wooley [$\ref{ref13}$, Theorem 1.3] provided upper bounds for exponential sums by bounding pointwise estimates by mean value estimates over major and minor arcs. Meanwhile, a classical approach widely used in the study of fractional parts of polynomials is closely related to upper bounds for the associated exponential sums. Thus, we exploit the argument in [$\ref{ref13}$] to obtain upper bounds for the associated exponential sums in terms of mean values of exponential sums. Upper bounds for these mean values then deliver the conclusion of Theorem 4.1. \bigskip \begin{te} Let $\epsilon>0$ and $s,k$ be natural numbers with $k\geq 6.$ Suppose that $X$ is sufficiently large in terms of $s,k$ and $\epsilon$. Consider $\alpha_i\in{\mathbb R}$ with $1\leq i\leq s.$ Then, for $s\geq k+2$ one has $$\min_{\substack{0\leq \boldsymbol{x}\leq X\\ \boldsymbol{x}\neq \boldsymbol{0}}}\|\alpha_1x_1^k+\alpha_2x_2^k+\cdots+\alpha_sx_s^k\|\leq X^{-\sigma(s,k)+\epsilon},$$ where \begin{equation*} \sigma(s,k)=\min\biggl\{\frac{s}{k(k+1)-s}, 1\biggr\}. \end{equation*} \end{te} \begin{proof}[Proof of Theorem 1.1] Note that whenever $s\geq k(k+1)/2$ the exponent $\sigma(s,k)$ in Theorem 4.1 becomes $1$. Therefore, Theorem 1.1 immediately follows from Theorem 4.1. \end{proof} \subsection{Outline of the proof of Theorem 4.1} We provide an outline of the proof of Theorem 4.1. We begin by stating a classical lemma from the theory of fractional parts of polynomials [$\ref{ref2}$, Theorem 2.2], which relates fractional parts of a sequence of real numbers to the associated exponential sum. \begin{llll} Let $x_1,\ldots,x_N$ be real numbers. Suppose that $\|x_n\|\geq H^{-1}$ for every $n$ with $1\leq n\leq N$. Then, \begin{equation*} \displaystyle\sum_{1\leq h\leq H}\bigl|\displaystyle\sum_{n=1}^Ne(hx_n)\bigr|\gg N. \end{equation*} \end{llll} Let $H$ be a positive number with $H\leq X^{1-\nu}$ for sufficiently small $\nu>0$. 
Suppose that \begin{equation}\label{4.14.1} \min_{\substack{0\leq \boldsymbol{x}\leq X\\ \boldsymbol{x}\neq \boldsymbol{0}}}\|\alpha_1x_1^k+\alpha_2x_2^k+\cdots+\alpha_sx_s^k\| > H^{-1}. \end{equation} Then, by Lemma 4.2, we have \begin{equation}\label{34} \displaystyle\sum_{1\leq h\leq H}\bigl|\displaystyle\sum_{1\leq \boldsymbol{x}\leq X}e(h(\alpha_1x_1^k+\alpha_2x_2^k+\cdots+\alpha_sx_s^k))\bigr|\gg X^{s}. \end{equation} For concision, here and throughout, we write $[1,H]=[1,H]\cap {\mathbb Z}.$ Recall the definition ($\ref{eq1.6}$) of $\mathfrak{M}$ and $\mathfrak{m}$. On observing that each real number $h\alpha_j$ lies either in $\mathfrak{M}$ or in $\mathfrak{m},$ one can decompose the set $ [1,H]$ into $2^{s}$ sets, $H_1,\ldots,H_{2^s}$, such that the set $\{h\alpha_j|\ h\in H_i\}\subseteq\mathfrak{M}$ or $\{h\alpha_j|\ h\in H_i\}\subseteq\mathfrak{m}$, for all $1\leq j\leq s$ and $1\leq i\leq 2^s.$ Our goal is to show that for every $H_i\ (i=1,\ldots,2^s)$, we have \begin{equation}\label{35}\displaystyle\sum_{h\in H_i}\bigl|\displaystyle\sum_{1\leq \boldsymbol{x}\leq X}e(h(\alpha_1x_1^k+\alpha_2x_2^k+\cdots+\alpha_sx_s^k))\bigr|\ll X^{s-\eta}\ \textrm{for some}\ \eta=\eta(k,\nu)>0, \end{equation} which contradicts ($\ref{34}$) for sufficiently large $X$ in terms of $\eta$ and $s$. This forces us to conclude that for sufficiently large $X$, we have \begin{equation*} \min_{\substack{0\leq \boldsymbol{x}\leq X\\ \boldsymbol{x}\neq \boldsymbol{0}}}\|\alpha_1x_1^k+\alpha_2x_2^k+\cdots+\alpha_sx_s^k\| \leq H^{-1}. \end{equation*} Therefore, on letting $\nu\rightarrow 0$, we complete the proof of Theorem 4.1. \bigskip \subsection{Preliminary manoeuvre} Under the assumption ($\ref{4.14.1}$), we can obtain extra information about $\alpha_1,\ldots,\alpha_s$. 
In order to describe this information, we define $\mathfrak{M}_{H}$ by $$\mathfrak{M}_{H}=\bigcup_{\substack{0\leq a\leq q \leq X\\(q,a)=1}}\mathfrak{M}_{H}(q,a),$$ where $\mathfrak{M}_H(q,a)=\left\{\alpha\in[0,1):\ \left|q\alpha-a\right|<X^{1-k}H^{-1}\right\}.$ Define $\mathfrak{m}_H$ by $[0,1)\setminus \mathfrak{M}_H$. Note that if there exists $\alpha_j$ contained in $\mathfrak{M}_H$, it follows by putting $x_j=q$ and $x_i=0\ (i\neq j)$ that $$\min_{\substack{0\leq \boldsymbol{x}\leq X\\ \boldsymbol{x}\neq \boldsymbol{0}}}\|\alpha_1x_1^k+\alpha_2x_2^k+\cdots+\alpha_sx_s^k\|\leq\min_{1\leq x_j\leq X}\|\alpha_jx_j^k\|\leq \|\alpha_j q^k\|\leq q^{k-1}\|\alpha_jq\|\leq H^{-1},$$ which contradicts ($\ref{4.14.1}$). Hence, from the assumption ($\ref{4.14.1}$), we may assume that all $\alpha_j\ (j=1,\ldots,s)$ are in $\mathfrak{m}_H.$ Furthermore, whenever $\alpha_j\in \mathfrak{m}_H$ with $H\leq X^{1-\nu}$ for sufficiently small $\nu>0,$ one has for all $h\in [1,H]\cap {\mathbb Z}$ \begin{equation}\label{46} \displaystyle\sum_{1\leq x\leq X}e(h\alpha_j x^k)\ll X^{1-\delta_1} \end{equation} for some positive number $\delta_1=\delta_1(k,\nu)$. Indeed, suppose that there exists $h\in [1,H]$ such that \begin{equation*} \displaystyle\sum_{1\leq x\leq X}e(h\alpha_j x^k)\geq X^{1-\delta_1}. 
\end{equation*} Then Weyl's inequality [$\ref{ref10}$, Lemma 2.4] readily confirms that there exist $q\in {\mathbb N}$ and $a\in {\mathbb Z}$ such that $q<X^{\eta}$ and $$|h\alpha_j-a/q|\leq q^{-1}X^{\eta-k},$$ where $\eta=\eta(\delta_1).$ This gives $$|\alpha_j-a/(qh)|\leq (qh)^{-1}X^{\eta-k}.$$ Taking $\delta_1>0$ sufficiently small that $\eta=\eta(\delta_1)<\nu,$ one has $qh<X^{\eta}X^{1-\nu}<X$ and $$|\alpha_j-a/(qh)|\leq (qh)^{-1}X^{1-k}H^{-1}.$$ This yields that $\alpha_j\in \mathfrak{M}_H$, which contradicts $\alpha_j\in \mathfrak{m}_H.$ \bigskip \subsection{Lemma and proposition} To prove ($\ref{35}$), we require arguments used in [$\ref{ref13}$, Theorem 1.3], which relate pointwise estimates of exponential sums to mean value type estimates using the following classical lemma. \begin{llll}[Gallagher-Sobolev inequality]$\label{lem4.2}$ Let $f\ :\ [a,b]\rightarrow{\mathbb C}$ be continuously differentiable. Then $$|f(u)|\leq (b-a)^{-1}\displaystyle\int_a^b|f(x)|dx+\displaystyle\int_a^b|f'(x)|dx $$ for any $u\in[a,b].$ \end{llll} \bigskip In order to describe the following proposition, we define the sets $\mathcal{D}_1=\mathcal{D}_1(\alpha)$ and $\mathcal{D}_2=\mathcal{D}_2(\alpha)$ with $\alpha\in{\mathbb R}$ by \begin{equation*} \mathcal{D}_1=\{h\in [1,H]\cap {\mathbb Z}|\ h\alpha\in \mathfrak{M}\ \text{mod}\ 1\} \end{equation*} and \begin{equation*} \mathcal{D}_2=\{h\in [1,H]\cap {\mathbb Z}|\ h\alpha\in \mathfrak{m}\ \text{mod}\ 1\}. 
\end{equation*} \begin{pr} Let $\alpha\in {\mathbb R}$, and $H>0.$ Suppose that $|q\alpha-a|\leq q^{-1}$ with $(q,a)=1.$ Then, we have \begin{equation}\label{4.34.3} \displaystyle\sum_{h\in \mathcal{D}_1}\bigl|\displaystyle\sum_{1\leq x\leq X}e(h\alpha x^k)\bigr|^{k+1}\ll H\left(q^{-1}+H^{-1}+qH^{-1}X^{-k}\right) X^{k+1+\epsilon}, \end{equation} and \begin{equation}\label{4.44.4} \displaystyle\sum_{h\in \mathcal{D}_2}\bigl|\displaystyle\sum_{1\leq x\leq X}e(h\alpha x^k)\bigr|^{k(k+1)}\ll H\left(q^{-1}+H^{-1}+qH^{-1}X^{-k}\right)X^{k(k+1)-1+\epsilon}. \end{equation} \end{pr} \begin{proof}[Proof of Proposition 4.4] We shall first derive ($\ref{4.34.3}$). Define a set $\Gamma(h)$ to be $$\Gamma(h)=\{\gamma\in [0,1)|\ \|h\alpha-\gamma\|<(4k)^{-1}X^{-k}\}.$$ By applying Lemma $\ref{lem4.2}$ to $\sum_{1\leq x\leq X}e(h\alpha x^k)$, one has \begin{equation}\label{37} \begin{aligned} &\displaystyle\sum_{h\in \mathcal{D}_1}\bigl|\displaystyle\sum_{1\leq x\leq X}e(h\alpha x^k)\bigr|^{k+1} \\ &\ll \displaystyle\sum_{h\in \mathcal{D}_1}\left(X^k\displaystyle\int_{\Gamma(h)}\bigl|\displaystyle\sum_{1\leq x\leq X}e(\gamma x^k)\bigr|d\gamma+\displaystyle\int_{\Gamma(h)}\bigl|\displaystyle\sum_{1\leq x\leq X}x^ke(\gamma x^k)\bigr|d\gamma\right)^{k+1}\\ &\ll \displaystyle\sum_{h\in \mathcal{D}_1}\left(X^k\displaystyle\int_{\Gamma(h)}\bigl|\displaystyle\sum_{1\leq x\leq X}e(\gamma x^k)\bigr|d\gamma\right)^{k+1}+\displaystyle\sum_{h\in \mathcal{D}_1}\left(\displaystyle\int_{\Gamma(h)}\bigl|\displaystyle\sum_{1\leq x\leq X}x^ke(\gamma x^k)\bigr|d\gamma\right)^{k+1}, \end{aligned} \end{equation} where we used $(A+B)^{k+1}\ll A^{k+1}+B^{k+1}$ for the second inequality. For concision, we write $\Xi_1$ and $\Xi_2$ for the first term and the second term in the bound ($\ref{37}$). Furthermore, for the sake of the next discussion, we freely assume that $X$ is an integer. 
We first analyse the sum $\Xi_2.$ By applying partial summation, we have \begin{equation*} \begin{aligned} &\displaystyle\sum_{1\leq x\leq X}x^k e(\gamma x^k)=X^kS_{X+1}-S_{1}- \displaystyle\sum_{2\leq x \leq X}(x^k-(x-1)^k)S_x, \end{aligned} \end{equation*} where $$S_x=\displaystyle\sum_{x\leq m\leq 2X}e(\gamma m^k).$$ Then, we find that $\Xi_2$ is \begin{equation}\label{38} \begin{aligned} &\ll \displaystyle\sum_{h\in \mathcal{D}_1}\biggl(\left(X^k\displaystyle\int_{\Gamma(h)}|S_{X+1}|d\gamma\right)^{k+1}+\left(X^{k-1}\displaystyle\sum_{2\leq x\leq X}\displaystyle\int_{\Gamma(h)}|S_x|d\gamma\right)^{k+1}+\left(\displaystyle\int_{\Gamma(h)}|S_{1}|d\gamma\right)^{k+1}\biggr). \end{aligned} \end{equation} Meanwhile, on noting that mes$(\Gamma(h))\asymp X^{-k}$ and by applying H\"older's inequality, we have \begin{equation*} \left(\displaystyle\int_{\Gamma(h)}|S_x|d\gamma\right)^{k+1}\leq X^{-k^2}\displaystyle\int_{\Gamma(h)}|S_x|^{k+1}d\gamma. \end{equation*} Thus, we deduce from ($\ref{38}$) that \begin{equation}\label{39} \begin{aligned} & \Xi_2\ll X^k \sup_{1\leq x\leq X+1}\displaystyle\sum_{h\in \mathcal{D}_1}\displaystyle\int_{\Gamma(h)}|S_x|^{k+1}d\gamma. \end{aligned} \end{equation} Note that if $h\alpha\in\mathfrak{M}$, there exists $q\in{\mathbb N}$ with $1\leq q\leq X$ such that $\|qh\alpha\|\leq (2k)^{-1}X^{1-k}$. Thus, when $\|h\alpha-\gamma\|\leq (4k)^{-1}X^{-k}$ and $h\alpha \in \mathfrak{M}$, one has $\left\|q\gamma\right\|\leq \|qh\alpha\|+\|q(h\alpha-\gamma)\|\leq (2k)^{-1}X^{1-k}+(4k)^{-1}qX^{-k}\leq k^{-1}X^{1-k}$. Thus, on recalling the definition ($\ref{eq1.6}$) of $\mathfrak{M}_l$, one finds that $h\alpha\in \mathfrak{M}$ and $\|h\alpha-\gamma\|<(4k)^{-1}X^{-k}$ implies $\gamma\in \mathfrak{M}_1$. 
Let us write $$M(H,\gamma)=|\{h\in [1,H]\cap {\mathbb Z}|\ \|h\alpha-\gamma\|<(4k)^{-1}X^{-k}\}|$$ and $$M(H)=\displaystyle\sup_{\gamma\in [0,1)}M(H,\gamma).$$ Hence, by the discussion above, we infer from ($\ref{39}$) that \begin{equation}\label{40} \Xi_2\ll X^kM(H)\sup_{1\leq x\leq X+1}\displaystyle\int_{\mathfrak{M}_1}|S_x|^{k+1}d\gamma. \end{equation} Meanwhile, by applying [$\ref{ref8}$, Lemma 6], one has $$M(H)\ll H\left(q^{-1}+H^{-1}+qH^{-1}X^{-k}\right).$$ Furthermore, the Hardy-Littlewood method [$\ref{ref10}$, Theorem 4.4] readily confirms that $$\displaystyle\int_{\mathfrak{M}_1}|S_x|^{k+1}d\gamma\ll X^{1+\epsilon}.$$ Therefore, we see from ($\ref{40}$) that \begin{equation}\label{41} \begin{aligned} &\Xi_2\ll H\left(q^{-1}+H^{-1}+qH^{-1}X^{-k}\right)X^{k+1+\epsilon}. \end{aligned} \end{equation} Next, it remains to estimate $\Xi_1$. By applying H\"older's inequality, we deduce that \begin{equation}\label{42} \begin{aligned} \Xi_1&\ll X^k\displaystyle\sum_{h\in \mathcal{D}_1}\displaystyle\int_{\Gamma(h)}|S_1-S_{X+1}|^{k+1}d\gamma\ll X^k \sup_{1\leq x\leq X+1}\displaystyle\sum_{h\in \mathcal{D}_1}\displaystyle\int_{\Gamma(h)} |S_x|^{k+1}d\gamma. \end{aligned} \end{equation} Then, by the same argument from ($\ref{39}$) to ($\ref{41}$), we have \begin{equation}\label{4222} \Xi_1\ll H(q^{-1}+H^{-1}+qH^{-1}X^{-k})X^{k+1+\epsilon}. \end{equation} Therefore, by $(\ref{37})$, $(\ref{41})$ and $(\ref{4222}),$ we conclude that \begin{equation}\label{43} \displaystyle\sum_{h\in \mathcal{D}_1}\biggl|\displaystyle\sum_{1\leq x\leq X}e(h\alpha x^k)\biggr|^{k+1}\ll H\left(q^{-1}+H^{-1}+qH^{-1}X^{-k}\right)X^{k+1+\epsilon}. \end{equation} This confirms the estimate $(\ref{4.34.3}).$ We next derive ($\ref{4.44.4}$). Recall the definition $(\ref{eq1.6})$ of $\mathfrak{M}_l$ and $\mathfrak{m}_l=[0,1)\setminus \mathfrak{M}_l$. Note that if $h\alpha\in \mathfrak{m}$ and $\|h\alpha-\gamma\|<(4k)^{-1}X^{-k}$, then $\gamma\in\mathfrak{m}_4$.
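The counting function $M(H)$ introduced above can be computed by brute force for small parameters, which illustrates the shape of the bound from [$\ref{ref8}$, Lemma 6]: for $\alpha$ near a rational with small denominator $q$ the count has size about $H/q$, while for a badly approximable $\alpha$ it collapses to $1$. The parameters below are illustrative assumptions, not values from the text.

```python
from math import sqrt

def M(H, alpha, eps):
    # M(H) = sup_gamma #{1 <= h <= H : ||h*alpha - gamma|| < eps},
    # computed as the largest number of points {h*alpha mod 1} lying in
    # some interval of length 2*eps on the circle (sliding-window scan)
    pts = sorted((h * alpha) % 1.0 for h in range(1, H + 1))
    ext = pts + [p + 1.0 for p in pts]  # unwrap R/Z
    best, j = 0, 0
    for i in range(len(pts)):
        if j < i:
            j = i
        while j < len(ext) and ext[j] < ext[i] + 2 * eps:
            j += 1
        best = max(best, j - i)
    return best

X, k, H = 10, 3, 100
eps = 1 / (4 * k * X**k)

# alpha within 1e-9 of 1/3: the points cluster, M(H) is about H/q with q = 3
m_rat = M(H, 1 / 3 + 1e-9, eps)               # 34
bound = H * (1 / 3 + 1 / H + 3 / (H * X**k))  # about 34.3

# a badly approximable alpha (golden ratio): the points spread out, M(H) = 1
m_irr = M(H, (sqrt(5) - 1) / 2, eps)          # 1
```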
Indeed, if $\gamma\in \mathfrak{M}_4$, there exists $q\in {\mathbb N}$ with $1\leq q\leq X$ such that $\|q\gamma\|\leq (4k)^{-1}X^{1-k}$, and thus one has $\|qh\alpha\|\leq \|q(h\alpha-\gamma)\|+\|q\gamma\|\leq q(4k)^{-1}X^{-k}+(4k)^{-1}X^{1-k} \leq (2k)^{-1}X^{1-k} ,$ which contradicts $h\alpha\in \mathfrak{m}.$ Therefore, the same treatment leading from ($\ref{37}$) to ($\ref{42}$) with the exponent $k(k+1)$ in place of $k+1$ gives the upper bound \begin{equation}\label{44} \displaystyle\sum_{h\in \mathcal{D}_2}\biggl|\displaystyle\sum_{1\leq x\leq X}e(h\alpha x^k)\biggr|^{k(k+1)}\ll X^kH\left(q^{-1}+H^{-1}+qH^{-1}X^{-k}\right)\sup_{1\leq x\leq X+1}\displaystyle\int_{\mathfrak{m}_4}|S_x|^{k(k+1)}d\gamma. \end{equation} An application of the argument used in [$\ref{ref12}$, Theorem 2.1] confirms that \begin{align*} &\displaystyle\int_{\mathfrak{m}_4}|S_x|^{k(k+1)}d\gamma\ll X^{k(k+1)-k-1+\epsilon}. \end{align*} Thus, on substituting this estimate into ($\ref{44}$), we obtain $(\ref{4.44.4})$. This completes the proof of Proposition 4.4. \end{proof} \begin{rmk} Recall from section 4.2 that under the assumption ($\ref{4.14.1}$), we may assume that $\alpha_j\in\mathfrak{m}_H$ with $1\leq j\leq s$. For a given index $j$ with $1\leq j\leq s$, it follows from Dirichlet's approximation theorem that there exist $a\in {\mathbb Z}$ and $q\in {\mathbb N}$ with $1\leq q\leq HX^{k-1}$ and $(q,a)=1$ such that $|q\alpha_j-a|\leq H^{-1}X^{1-k}.$ Moreover, since $\alpha_j\in \mathfrak{m}_H$, one has $q>X.$ Thus, Proposition 4.4 with the assumption ($\ref{4.14.1}$) delivers that for $1\leq j\leq s$ one has \begin{equation}\label{45} \displaystyle\sum_{h\in \mathcal{D}_1(\alpha_j)}\biggl|\displaystyle\sum_{1\leq x\leq X}e(h\alpha_j x^k)\biggr|^{k+1}\ll (1+H/X)X^{k+1+\epsilon}, \end{equation} and \begin{equation}\label{4646} \displaystyle\sum_{h\in \mathcal{D}_2(\alpha_j)}\biggl|\displaystyle\sum_{1\leq x\leq X}e(h\alpha_j x^k)\biggr|^{k(k+1)}\ll (1+H/X)X^{k(k+1)-1+\epsilon}.
\end{equation} \end{rmk} \bigskip \subsection{Proof of Theorem 4.1} \begin{proof} Let $H=X^{\sigma(s,k)-\nu}$ for sufficiently small $\nu>0.$ Suppose that \begin{equation}\label{ineq4.18} \min_{\substack{0\leq \boldsymbol{x}\leq X\\ \boldsymbol{x}\neq \boldsymbol{0}}}\|\alpha_1x_1^k+\alpha_2x_2^k+\cdots+\alpha_sx_s^k\| > H^{-1}. \end{equation} From section 4.1, recall that the sets $H_1,\ldots, H_{2^s}$ are such that, for all $1\leq j\leq s$ and $1\leq i\leq 2^s$, either $\{h\alpha_j|\ h\in H_i\}\subseteq\mathfrak{M}$ or $\{h\alpha_j|\ h\in H_i\}\subseteq\mathfrak{m}.$ By relabelling the $\alpha_i$, we may assume that for $1\leq i\leq m$, the set $\{h\alpha_i|\ h\in H_1\}\subseteq \mathfrak{M}$, and for $m+1\leq i\leq s$, the set $\{h\alpha_i|\ h\in H_1\}\subseteq \mathfrak{m}.$ Note from the remark following the proof of Proposition 4.4 that we have $(\ref{45})$ and $(\ref{4646})$. We first consider the case when $m\geq k+1.$ Recall from section 4.2 that the assumption ($\ref{ineq4.18}$) implies that $\alpha_j\in \mathfrak{m}_H$ with $1\leq j\leq s.$ Then, by making use of our hypothesis $s\geq k+2$, together with H\"older's inequality and ($\ref{46}$), we deduce that \begin{equation}\label{4.184.18} \begin{aligned} & \displaystyle\sum_{h\in H_1}\biggl|\displaystyle\sum_{1\leq \boldsymbol{x}\leq X}e(h(\alpha_1x_1^k+\alpha_2x_2^k+\cdots+\alpha_sx_s^k))\biggr|\\ & \ll X^{s-(k+1)-\delta_1}\displaystyle\prod_{1\leq j\leq k+1}\biggl(\displaystyle\sum_{h\in H_1}\biggl|\displaystyle\sum_{1\leq x_j\leq X}e(h\alpha_jx_j^k)\biggr|^{k+1}\biggr)^{\frac{1}{k+1}}.
\end{aligned} \end{equation} Meanwhile, on recalling the definition of $H_1$ and $\mathcal{D}_1$ following Lemma 4.3, we notice that $H_1\subseteq \mathcal{D}_1(\alpha_j)$ for $1\leq j\leq k+1.$ Then, by applying ($\ref{45}$) with $H\leq X$, it follows from ($\ref{4.184.18}$) that \begin{equation*} \begin{aligned} & \displaystyle\sum_{h\in H_1}\biggl|\displaystyle\sum_{1\leq \boldsymbol{x}\leq X}e(h(\alpha_1x_1^k+\alpha_2x_2^k+\cdots+\alpha_sx_s^k))\biggr|\\ &\ll X^{s-(k+1)-\delta_1}\displaystyle\prod_{1\leq j\leq k+1}\biggl(\displaystyle\sum_{h\in \mathcal{D}_1(\alpha_j)}\biggl|\displaystyle\sum_{1\leq x_j\leq X}e(h\alpha_jx_j^k)\biggr|^{k+1}\biggr)^{\frac{1}{k+1}}\ll X^{s-\eta}, \end{aligned} \end{equation*} for some $\eta=\eta(\delta_1)>0.$ Next, consider the case when $m<k+1.$ We write \begin{equation*} A_i=\displaystyle\sum_{h\in H_1}\bigl|\displaystyle\sum_{1\leq x_i\leq X}e(h\alpha_i x_i^k)\bigr|^{k+1},\ B_i=\displaystyle\sum_{h\in H_1}\bigl|\displaystyle\sum_{1\leq x_i\leq X}e(h\alpha_i x_i^k)\bigr|^{k(k+1)}, \end{equation*} and put $m_1=\min\{k(k+1-m), s-m\}$. Then it follows from H\"older's inequality that \begin{equation}\label{ineq48} \begin{aligned} & \displaystyle\sum_{h\in H_1}\biggl|\displaystyle\sum_{1\leq \boldsymbol{x}\leq X}e(h(\alpha_1x_1^k+\alpha_2x_2^k+\cdots+\alpha_sx_s^k))\biggr|\\ &\ll \left(\displaystyle\sum_{h\in H_1}1\right)^{1-\frac{km+m_1}{k(k+1)}}A_1^{\frac{1}{k+1}}\cdots A_m^{\frac{1}{k+1}}B_{m+1}^{\frac{1}{k(k+1)}}\cdots B_{m+m_1}^{\frac{1}{k(k+1)}}X^{s-(m+m_1)}.
\end{aligned} \end{equation} On recalling the definitions of $H_1$, $\mathcal{D}_1$ and $\mathcal{D}_2$ following Lemma 4.3, notice that $H_1\subseteq \mathcal{D}_1({\alpha_i})$ for $1\leq i\leq m$, and $H_1\subseteq \mathcal{D}_2({\alpha_i})$ for $m+1\leq i\leq m+m_1.$ Thus, for $1\leq i\leq m$ we have $$A_i\leq \displaystyle\sum_{h\in \mathcal{D}_1(\alpha_i)}\bigl|\displaystyle\sum_{1\leq x_i\leq X}e(h\alpha_i x_i^k)\bigr|^{k+1}$$ and for $m+1\leq i\leq m+m_1$ we have $$B_i\leq \displaystyle\sum_{h\in \mathcal{D}_2(\alpha_i)}\bigl|\displaystyle\sum_{1\leq x_i\leq X}e(h\alpha_i x_i^k)\bigr|^{k(k+1)}.$$ Then, on substituting these inequalities into ($\ref{ineq48}$), it follows by applying ($\ref{45}$) and ($\ref{4646}$) that \begin{equation}\label{4.194.19} \begin{aligned} & \displaystyle\sum_{h\in H_1}\biggl|\displaystyle\sum_{1\leq \boldsymbol{x}\leq X}e(h(\alpha_1x_1^k+\alpha_2x_2^k+\cdots+\alpha_sx_s^k))\biggr|\\ & \ll H^{1-\frac{km+m_1}{k(k+1)}}X^{m}X^{m_1-\frac{m_1}{k(k+1)}}X^{s-(m+m_1)}X^{\epsilon}. \end{aligned} \end{equation} Recall that $H=X^{\sigma(s,k)-\nu}.$ Then, the right hand side in ($\ref{4.194.19}$) is $O(X^{\phi})$, where \begin{equation}\label{4.22} \phi=s+\left(1-\frac{km+m_1}{k(k+1)}\right)(\sigma(s,k)-\nu)-\frac{m_1}{k(k+1)}+\epsilon. \end{equation} We shall show that $\phi\leq s-\eta$ for some $\eta>0$. Recall the definition of $m_1$. When $m\geq\frac{k(k+1)-s}{k-1}$, one has $m_1=k(k+1-m)$. Thus, one has $\phi= s-1+\frac{m}{k+1}+\epsilon< s-\eta$ for some $\eta>0$, since $m< k+1$. When $m< \frac{k(k+1)-s}{k-1}$, one has $m_1=s-m$. In this case, we have $1-\frac{km+m_1}{k(k+1)}>0$, and thus it follows from ($\ref{4.22}$) that \begin{equation}\label{eq4.22} \phi=s+\left(1-\frac{km+m_1}{k(k+1)}\right)\sigma(s,k)-\frac{m_1}{k(k+1)}-\eta, \end{equation} for some $\eta=\eta(\nu)>0$. First, consider the case $s\geq k(k+1)/2$. Then, it follows from ($\ref{333}$) that $\sigma(s,k)=1$.
Hence, since $m_1=s-m,$ it follows from ($\ref{eq4.22}$) that $$\phi= s+\left(1-\frac{(k-2)m+2s}{k(k+1)}\right)-\eta,$$ for some $\eta=\eta(\nu)>0.$ Hence, it follows by $s\geq k(k+1)/2$ and $m\geq 0$ that $\phi\leq s-\eta$ for some $\eta>0.$ Next, recall the hypothesis $s\geq k+2$ in the statement of Theorem 4.1, and consider next the case $k+2\leq s\leq k(k+1)/2.$ Then, it follows from $(\ref{333})$ that $\sigma(s,k)=\frac{s}{k(k+1)-s}.$ Hence, since $m_1=s-m$, it follows from $(\ref{eq4.22})$ that \begin{equation} \begin{aligned} \phi&=s+\biggl(\frac{k(k+1)-s}{k(k+1)}+\frac{-km+m}{k(k+1)}\biggr)\left(\frac{s}{k(k+1)-s}\right)-\frac{s-m}{k(k+1)}-\eta\\ &=s+\frac{s}{k(k+1)}+\left(\frac{(-km+m)s}{k(k+1)(k(k+1)-s)}\right)-\frac{s-m}{k(k+1)}-\eta\\ &=s+\frac{m}{k(k+1)}\left(1-\frac{(k-1)s}{k(k+1)-s}\right)-\eta, \end{aligned} \end{equation} for some $\eta=\eta(\nu)>0$. Hence, it follows by $s\geq k+2$ and $m\geq 0$ that $\phi\leq s-\eta$ for some $\eta>0.$ Therefore, in all cases, we have \begin{equation} \displaystyle\sum_{h\in H_1}\biggl|\displaystyle\sum_{1\leq \boldsymbol{x}\leq X}e(h(\alpha_1x_1^k+\alpha_2x_2^k+\cdots+\alpha_sx_s^k))\biggr|\ll X^{s-\eta}, \end{equation} for some $\eta>0.$ Then, by the same treatment, we have ($\ref{35}$) for every $H_i\ (i=1,\ldots,2^s)$, which contradicts ($\ref{34}$) stemming from ($\ref{ineq4.18}$). Therefore, we are forced to conclude that \begin{equation*} \min_{\substack{0\leq\boldsymbol{x}\leq X\\ \boldsymbol{x}\neq \boldsymbol{0}}}\|\alpha_1x_1^k+\alpha_2x_2^k+\cdots+\alpha_sx_s^k\| \leq H^{-1}. \end{equation*} Hence, by letting $\nu\rightarrow 0$, we complete the proof of Theorem 4.1. \end{proof} \section{Proof of Theorem 1.2} \bigskip In this section, we provide the proof of Theorem 1.2. 
We recall the major arcs $\mathfrak{M}=\mathfrak{M}_2$ defined in ($\ref{eq1.6}$), and their complement $\mathfrak{m}=\mathfrak{m}_2.$ In the proof of Theorem 4.1, we used the major arc estimate [$\ref{ref10}$, Theorem 4.4] \begin{equation}\label{51} \displaystyle\int_{\mathfrak{M}}\biggl|\displaystyle\sum_{1\leq x\leq X}e(\alpha x^k)\biggr|^{k+1}d\alpha \ll X^{1+\epsilon} \end{equation} and the minor arc estimate [$\ref{ref12}$, Theorem 2.1] \begin{equation}\label{52} \displaystyle\int_{\mathfrak{m}}\biggl|\displaystyle\sum_{1\leq x\leq X}e(\alpha x^k)\biggr|^{k(k+1)}d\alpha \ll X^{k(k+1)-1+\epsilon}. \end{equation} To prove Theorem 1.2, we replace the mean values ($\ref{51}$) and ($\ref{52}$) with those in Theorem 1.4, and follow the same argument as in the proof of Theorem 4.1. \subsection{Outline of the proof of Theorem 1.2} Let $s>k_1^2+k_1+2\lceil\sigma(1-k_1)\rceil$. Throughout this section, we put $H=X^{1-\nu}$ for sufficiently small $\nu>0$ unless specified otherwise. Recall $\varphi_j(x)=\alpha_{1j}x^{k_1}+\cdots+\alpha_{tj}x^{k_t}$. Suppose that \begin{equation}\label{ineq5.35.3} \min_{\substack{0\leq \boldsymbol{x}\leq X\\\boldsymbol{x}\neq \boldsymbol{0}}}\|\varphi_1(x_1)+\varphi_2(x_2)+\cdots +\varphi_{s}(x_{s})\|> H^{-1}. \end{equation} Then, by Lemma 4.2, we have \begin{equation}\label{53} \displaystyle\sum_{1\leq h\leq H}\bigl|\displaystyle\sum_{1\leq \boldsymbol{x}\leq X}e(h(\varphi_1(x_1)+\cdots+\varphi_{s}(x_{s})))\bigr|\gg X^{s}.
\end{equation} On observing that each real number $h\alpha_{1j}$ lies either in $\mathfrak{M}$ or in $\mathfrak{m}$, one can decompose the set $[1,H]\cap {\mathbb Z}$ into $2^s$ sets, $H_1,\ldots,H_{2^{s}}$, such that, for all $1\leq j\leq s$ and $1\leq i\leq 2^s$, either $\{h\alpha_{1j}|\ h\in H_i\}\subseteq \mathfrak{M}$ or $\{h\alpha_{1j}|\ h\in H_i\}\subseteq \mathfrak{m}.$ Our goal is to show that for every $H_i\ (i=1,\ldots,2^s)$, we have \begin{equation}\label{54} \displaystyle\sum_{h\in H_i}\bigl|\displaystyle\sum_{1\leq \boldsymbol{x}\leq X}e(h(\varphi_1(x_1)+\cdots+\varphi_s(x_{s})))\bigr|\ll X^{s-\eta}, \end{equation} for some $\eta=\eta(k,\nu)>0$. This contradicts ($\ref{53}$) for sufficiently large $X$ in terms of $\eta$ and $s$, and thus forces us to conclude that whenever $s>k_1^2+k_1+2\lceil\sigma(1-k_1)\rceil$ and $X$ is sufficiently large, one has \begin{equation*} \min_{\substack{0\leq \boldsymbol{x}\leq X\\ \boldsymbol{x}\neq \boldsymbol{0}}}\|\varphi_1(x_1)+\varphi_2(x_2)+\cdots +\varphi_{s}(x_{s})\|\leq H^{-1}. \end{equation*} Therefore, by letting $\nu\rightarrow 0$, we obtain Theorem 1.2. \bigskip \subsection{Preliminary manoeuvre} As in the previous section, we can obtain extra information about the $\alpha_{ij}$ with $1\leq i\leq t, 1\leq j\leq s$, under the assumption ($\ref{ineq5.35.3}$). In order to describe this information, we define $\widetilde{\mathfrak{M}}_H$ by \begin{equation*} \widetilde{\mathfrak{M}}_H=\bigcup_{\substack{0\leq a_1,\ldots,a_t\leq q\leq X\\(q,a_1,\ldots,a_t)=1}} \widetilde{\mathfrak{M}}_H(q,a_1,\ldots,a_t), \end{equation*} where $$\widetilde{\mathfrak{M}}_H(q,a_1,\ldots,a_t)=\{(\alpha_1,\ldots,\alpha_t)\in [0,1)^t|\ |\alpha_i-a_i/q|\leq t^{-1}q^{-1}X^{-k_i+1}H^{-1}\ \text{for}\ 1\leq i\leq t\}.$$ Define $\widetilde{\mathfrak{m}}_H=[0,1)^t\setminus \widetilde{\mathfrak{M}}_H$.
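For concreteness, membership in $\widetilde{\mathfrak{M}}_H$ can be tested by brute force for small parameters. The sketch below is an illustration under assumed parameters; it drops the coprimality normalisation, which does not affect membership.

```python
from math import sqrt

def in_M_tilde(alphas, ks, X, H):
    # brute-force membership test for \widetilde{M}_H: scan all q <= X and
    # take a_i as the nearest integer to q * alpha_i; the coprimality
    # normalisation is dropped, since dividing out a common factor of q and
    # the a_i only decreases q, which weakens the constraint, so membership
    # is preserved
    t = len(alphas)
    for q in range(1, X + 1):
        if all(abs(al - round(q * al) / q) <= 1 / (t * q * X**(k - 1) * H)
               for al, k in zip(alphas, ks)):
            return True
    return False

# rationals with a common denominator q <= X lie in the major arcs
assert in_M_tilde([1 / 3, 2 / 7], [2, 3], X=21, H=50)

# badly approximable coordinates put the point in the minor arcs
assert not in_M_tilde([sqrt(2) - 1, sqrt(3) - 1], [2, 3], X=21, H=50)
```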
Note that if there exists $j$ such that $(\alpha_{1j},\ldots,\alpha_{tj})\in \widetilde{\mathfrak{M}}_H$, it follows by putting $x_j=q$ and $x_i=0\ (i\neq j)$ that \begin{multline*} \min_{\substack{0\leq \boldsymbol{x}\leq X\\ \boldsymbol{x}\neq \boldsymbol{0}}}\|\varphi_1(x_1)+\cdots+\varphi_{s}(x_{s})\|\leq \min_{1\leq x_j\leq X}\|\varphi_j(x_j)\|\leq \|\varphi_j(q)\|\\ \leq q^{k_1-1}\|q\alpha_{1j}\|+q^{k_2-1}\|q\alpha_{2j}\|+\cdots+q^{k_t-1}\|q\alpha_{tj}\|\leq H^{-1}, \end{multline*} which contradicts ($\ref{ineq5.35.3}$). Hence, under the assumption ($\ref{ineq5.35.3}$), we may assume that $(\alpha_{1j},\ldots,\alpha_{tj})$ is in $\widetilde{\mathfrak{m}}_H$ for every $j=1,\ldots,s.$ Furthermore, whenever $(\alpha_{1j},\alpha_{2j},\ldots,\alpha_{tj})\in \widetilde{\mathfrak{m}}_H$ with $H\leq X^{1-\nu}$ for sufficiently small $\nu>0$, one has for all $h\in [1,H]\cap {\mathbb Z}$ \begin{equation}\label{62} \displaystyle\sum_{1\leq x\leq X}e(h(\alpha_{1j}x^{k_1}+\cdots+\alpha_{tj}x^{k_t}))\ll X^{1-\delta_1} \end{equation} for some positive number $\delta_1=\delta_1(k_1,\nu)$. Indeed, suppose that there exists $h\in [1,H]\cap {\mathbb Z}$ such that \begin{equation*} \biggl|\displaystyle\sum_{1\leq x\leq X}e(h(\alpha_{1j}x^{k_1}+\cdots+\alpha_{tj}x^{k_t}))\biggr|\geq X^{1-\delta_1}.
\end{equation*} Then, by [$\ref{ref2}$, Theorem 4.3] and [$\ref{ref2}$, Lemma 4.6], there exist $q,a_1,\ldots,a_t$ such that $q<X^{\eta}$ and $$|h\alpha_{ij}-a_i/q|<q^{-1}X^{\eta-k_i}\ (i=1,\ldots,t),$$ where $\eta=\eta(\delta_1,k_1).$ This gives $$|\alpha_{ij}-a_i/(qh)|<(qh)^{-1}X^{\eta-k_i}\ (i=1,\ldots,t).$$ For sufficiently small $\delta_1$, so that $\eta$ is smaller than $\nu,$ one has $qh<X^{\eta}X^{1-\nu}<X$ and $$|\alpha_{ij}-a_i/(qh)|<(qh)^{-1}X^{1-k_i}H^{-1}\ (i=1,\ldots,t).$$ By dividing out the greatest common divisor of the $a_i$ and $qh$, this readily confirms that ($\alpha_{1j},\ldots,\alpha_{tj}$) $\in \widetilde{\mathfrak{M}}_H,$ which contradicts ($\alpha_{1j},\ldots,\alpha_{tj}$) $\in \widetilde{\mathfrak{m}}_H. $ \bigskip \subsection{Auxiliary proposition} Recall the definition ($\ref{eq1.41.4}$) of $\sigma$ with $\mathbf{k}=(k_1,\ldots,k_t)$. To show ($\ref{54}$), we require the following proposition, analogous to Proposition 4.4. In order to state it, it is convenient to define $N(H,\boldsymbol{\gamma},\alpha_1,\ldots,\alpha_t)$ with $\boldsymbol{\gamma}\in [0,1)^t$, $(\alpha_1,\ldots,\alpha_t)\in [0,1)^t$ and $H>0$ by \begin{equation*} N(H,\boldsymbol{\gamma},\alpha_1,\ldots,\alpha_t)=|\{h\in [1,H]\cap {\mathbb Z}|\ \|h\alpha_j-\gamma_j\|<(4k)^{-1}X^{-k_j}\ \textrm{for}\ j=1,\ldots,t\}|, \end{equation*} and define $N(H):=N(H,\alpha_1,\ldots,\alpha_t)=\sup_{\boldsymbol{\gamma}\in [0,1)^t}N(H,\boldsymbol{\gamma},\alpha_1,\ldots,\alpha_t).$ We recall the definition of $\mathcal{D}_1=\mathcal{D}_1(\alpha)$ and $\mathcal{D}_2=\mathcal{D}_2(\alpha)$ with $\alpha\in {\mathbb R}$, following Lemma 4.3. Furthermore, let us put $L=(k_1^2+k_1)/2+\lceil\sigma(1-k_1)\rceil$. \begin{pr} Let $H>0.$ Suppose that $\alpha_j\in {\mathbb R}$ with $t\geq 2$ and $1\leq j\leq t$.
Then, we have \begin{equation}\label{5.75.7} \displaystyle\sum_{h\in \mathcal{D}_1(\alpha_1)}\biggl|\displaystyle\sum_{1\leq x\leq X}e(h(\alpha_1x^{k_1}+\alpha_2x^{k_2}+\cdots+\alpha_t x^{k_t}))\biggr|^{2L}\ll N(H)X^{2L+\epsilon}, \end{equation} and \begin{equation}\label{5.85.8} \displaystyle\sum_{h\in \mathcal{D}_2(\alpha_1)}\biggl|\displaystyle\sum_{1\leq x\leq X}e(h(\alpha_1x^{k_1}+\alpha_2x^{k_2}+\cdots+\alpha_t x^{k_t}))\biggr|^{k_1(k_1+1)}\ll N(H)X^{k_1(k_1+1)-\sigma+\epsilon}. \end{equation} \end{pr} \begin{proof} For simplicity, throughout this proof, we write $\mathcal{D}_1=\mathcal{D}_1(\alpha_1)$ and $\mathcal{D}_2=\mathcal{D}_2(\alpha_1).$ Define $\Gamma(h)$ to be $$\Gamma(h)=\{(\gamma_1,\ldots,\gamma_t)\in [0,1)^t|\ \|h\alpha_j-\gamma_j\|<(4k)^{-1}X^{-k_j}\ \text{for}\ j=1,\ldots,t\}.$$ Recall the definition ($\ref{1.7}$) of $D$. By applying [$\ref{ref5}$, Lemma 1] to $\sum_{1\leq x\leq X}e(h(\alpha_1x^{k_1}+\cdots+\alpha_tx^{k_t}))$, we infer that \begin{equation*} \displaystyle\sum_{h\in \mathcal{D}_1}\biggl|\displaystyle\sum_{1\leq x\leq X}e(h(\alpha_1x^{k_1}+\alpha_2x^{k_2}+\cdots+\alpha_t x^{k_t}))\biggr|^{2L} \end{equation*} \begin{equation}\label{58'} \ll X^D \displaystyle\sum_{h\in \mathcal{D}_1}\displaystyle\int_{\Gamma(h)}\sup_{I\subseteq [1,X]}\biggl|\displaystyle\sum_{x\in I}e(\gamma_1x^{k_1}+\gamma_2x^{k_2}+\cdots+\gamma_t x^{k_t})\biggr|^{2L} d\boldsymbol{\gamma}, \end{equation} where $I$ runs over all intervals in $[1,X].$ In the proof of Proposition 4.4, we have seen that for $h\alpha_1\in \mathfrak{M}$, the set $\{\gamma_1|\ \|h\alpha_1-\gamma_1\|<(4k)^{-1}X^{-k_1}\}$ is a subset of $\mathfrak{M}_1$. Then, by making use of $N(H)$, we deduce that the bound ($\ref{58'}$) is \begin{equation}\label{81} \ll N(H)X^D \displaystyle\int_{\mathfrak{M}_1}\displaystyle\int_0^1\cdots\displaystyle\int_0^1\sup_{I\subseteq [1,X]}\biggl|\displaystyle\sum_{x\in I}e(\gamma_1x^{k_1}+\gamma_2x^{k_2}+\cdots+\gamma_t x^{k_t})\biggr|^{2L}d\boldsymbol{\gamma}.
\end{equation} Therefore, by applying the Carleson-Hunt theorem with respect to the integral over $\gamma_t$ and Theorem 1.4 ($i$) with $\mathfrak{M}=\mathfrak{M}_1$, one concludes that the bound ($\ref{81}$) is $O(N(H)X^{2L+\epsilon}).$ This confirms ($\ref{5.75.7}$). Similarly, in the proof of Proposition 4.4, we have seen that for $h\alpha_1\in \mathfrak{m}$, the set $$\{\gamma_1|\ \|h\alpha_1-\gamma_1\|<(4k)^{-1}X^{-k_1}\}$$ is a subset of $ \mathfrak{m}_4$. Thus, we infer that \begin{align*} &\displaystyle\sum_{h\in \mathcal{D}_2}\biggl|\displaystyle\sum_{1\leq x\leq X}e(h(\alpha_1x^{k_1}+\alpha_2x^{k_2}+\cdots+\alpha_t x^{k_t}))\biggr|^{k_1(k_1+1)}\\ & \ll N(H)X^D \displaystyle\int_{\mathfrak{m}_4}\displaystyle\int_0^1\cdots\displaystyle\int_0^1\sup_{I\subseteq [1,X]}\biggl|\displaystyle\sum_{x\in I}e(\gamma_1x^{k_1}+\gamma_2x^{k_2}+\cdots+\gamma_t x^{k_t})\biggr|^{k_1(k_1+1)}d\boldsymbol{\gamma}, \end{align*} where $I$ runs over all intervals in $[1,X].$ Thus, by applying the Carleson-Hunt theorem with respect to the integral over $\gamma_t$ and Theorem 1.4 ($ii$) with $\mathfrak{m}=\mathfrak{m}_4$, we find that the last expression is $O(N(H)X^{k_1(k_1+1)-\sigma+\epsilon}).$ This confirms ($\ref{5.85.8}$). \end{proof} \begin{rmk} The Carleson-Hunt theorem could be avoided at the cost of a factor $\log (6X)$ by standard use of a Dirichlet kernel argument (see, for example, [$\ref{ref30}$, Lemma 7.1]). \end{rmk} \begin{rmk} Recall from section 5.2 that under the assumption ($\ref{ineq5.35.3}$), we may assume that $(\alpha_{1j},\ldots,\alpha_{tj})$ is in $\widetilde{\mathfrak{m}}_H$ for every $j\ (j=1,\ldots,s)$.
We see that whenever $(\alpha_{1j},\ldots,\alpha_{tj})\in \widetilde{\mathfrak{m}}_H$, we have $N(H,\alpha_{1j},\ldots,\alpha_{tj})\leq 1.$ Indeed, if $N(H)>1$, there exist $h_1$, $h_2$ ($1\leq h_1,h_2\leq H$, $h_1\neq h_2$) and $\boldsymbol{\gamma}=(\gamma_1,\ldots,\gamma_t)\in [0,1)^t$ such that \begin{equation*} \|h_1\alpha_{ij}-\gamma_i\|< X^{-k_i},\ \|h_2\alpha_{ij}-\gamma_i\|<X^{-k_i} \ \ \ (i=1,\ldots,t). \end{equation*} By the triangle inequality, \begin{equation}\label{58} \|(h_1-h_2)\alpha_{ij}\|\leq \|h_1\alpha_{ij}-\gamma_i\|+\|h_2\alpha_{ij}-\gamma_i\|< 2X^{-k_i} \end{equation} for all $i$ $ (1\leq i \leq t).$ Since $2X^{-k_i}<t^{-1}X^{-k_i+1}H^{-1}$ for sufficiently large $X$, it follows from ($\ref{58}$) that for every $i\ (1\leq i \leq t)$ \begin{equation} \|(h_1-h_2)\alpha_{ij}\|<t^{-1}X^{-k_i+1}H^{-1}. \end{equation} Since $0<|h_1-h_2|< X,$ one has $(\alpha_{1j},\alpha_{2j},\ldots,\alpha_{tj})\in \widetilde{\mathfrak{M}}_H.$ This contradicts our assumption that $(\alpha_{1j},\alpha_{2j},\ldots,\alpha_{tj})\in \widetilde{\mathfrak{m}}_H.$ Hence, Proposition 5.1 with the assumption ($\ref{ineq5.35.3}$) delivers that for every $j\ (j=1,\ldots,s)$ one has \begin{equation}\label{60} \displaystyle\sum_{h\in \mathcal{D}_1(\alpha_{1j})}\biggl|\displaystyle\sum_{1\leq x\leq X}e(h(\alpha_{1j}x^{k_1}+\cdots+\alpha_{tj}x^{k_t}))\biggr|^{2L}\ll X^{2L+\epsilon}, \end{equation} and \begin{equation}\label{61} \displaystyle\sum_{h\in \mathcal{D}_2(\alpha_{1j})}\biggl|\displaystyle\sum_{1\leq x\leq X}e(h(\alpha_{1j}x^{k_1}+\cdots+\alpha_{tj}x^{k_t}))\biggr|^{k_1(k_1+1)}\ll X^{k_1(k_1+1)-\sigma+\epsilon}. \end{equation} \end{rmk} \bigskip \subsection{Proof of Theorem 1.2} \begin{proof}[Proof of Theorem 1.2] Suppose that ($\ref{ineq5.35.3}$) holds.
From section 5.1, recall that the set $\{h\alpha_{1j}|\ h\in H_i\}\subseteq\mathfrak{M}$ or $\{h\alpha_{1j}|\ h\in H_i\}\subseteq\mathfrak{m}$, for all $1\leq j\leq s$ and $1\leq i\leq 2^s.$ By relabelling $\alpha_{1j}$, we may assume that for $1\leq i\leq m$, the set $\{h\alpha_{1i}|\ h\in H_1\}\subseteq \mathfrak{M}$, and for $m+1\leq i\leq s$, the set $\{h\alpha_{1i}|\ h\in H_1\}$ is a subset of $\mathfrak{m}.$ We put again $L=(k_1^2+k_1)/2+\lceil\sigma(1-k_1)\rceil$ and recall that $(\alpha_{1j},\ldots,\alpha_{tj})$ is in $\widetilde{\mathfrak{m}}_{H}$ for every $j=1,\ldots,s$. Note from Remark 2 above and section 5.2 that we have ($\ref{60}$), ($\ref{61}$) and ($\ref{62}$). We first consider the case $m\geq 2L.$ By making use of our hypothesis $s>2L$ together with H\"older's inequality and ($\ref{62}$), we deduce that \begin{equation}\label{ineq5.15} \begin{aligned} &\displaystyle\sum_{ h\in H_1}\bigl|\displaystyle\sum_{1\leq \boldsymbol{x}\leq X}e(h(\varphi_1(x_1)+\cdots+\varphi_s(x_{s})))\bigr|\\ &\ll X^{s-2L-\delta_1}\displaystyle\prod_{l=1}^{2L}\biggl(\displaystyle\sum_{h \in H_1}\bigl|\displaystyle\sum_{1\leq x_l\leq X}e(h\varphi_l(x_l))\bigr|^{2L}\biggr)^{1/2L}. \end{aligned} \end{equation} Meanwhile, on recalling the definition of $H_1$ and $\mathcal{D}_1$, we notice that $H_1\subseteq \mathcal{D}_1(\alpha_{1l})$ for $1\leq l\leq 2L$. 
Then, by applying ($\ref{60}$), it follows from $(\ref{ineq5.15})$ that \begin{equation*} \begin{aligned} &\displaystyle\sum_{ h\in H_1}\bigl|\displaystyle\sum_{1\leq \boldsymbol{x}\leq X}e(h(\varphi_1(x_1)+\cdots+\varphi_s(x_{s})))\bigr|\\ &\ll X^{s-2L-\delta_1}\displaystyle\prod_{l=1}^{2L}\biggl(\displaystyle\sum_{h \in \mathcal{D}_1(\alpha_{1l})}\bigl|\displaystyle\sum_{1\leq x_l\leq X}e(h\varphi_l(x_l))\bigr|^{2L}\biggr)^{1/2L}\ll X^{s-\eta}, \end{aligned} \end{equation*} for some $\eta=\eta(\delta_1)>0.$ Next, consider the case $m<2L.$ We write \begin{equation*} \begin{aligned} & A_l=\displaystyle\sum_{h\in H_1}\bigl|\displaystyle\sum_{1\leq x_l\leq X}e(h\varphi_l(x_l))\bigr|^{2L}\\ & B_l=\displaystyle\sum_{h \in H_1}\bigl|\displaystyle\sum_{1\leq x_l\leq X}e(h\varphi_l(x_l))\bigr|^{k_1(k_1+1)}, \end{aligned} \end{equation*} and put $m_1=2L-m.$ Then, it follows from H\"older's inequality that \begin{equation}\label{64} \begin{aligned} &\displaystyle\sum_{h\in H_1}\bigl|\displaystyle\sum_{1\leq \boldsymbol{x}\leq X}e(h(\varphi_1(x_1)+\cdots+\varphi_{s}(x_{s})))\bigr|\\ &\ll \bigl(\displaystyle\sum_{h\in H_1}1\bigr)^{1-\left(\frac{m}{2L}+\frac{m_1}{k_1(k_1+1)}\right)}\biggl(\displaystyle\prod_{l=1}^{m}A_l^{1/2L}\biggr)\biggl(\displaystyle\prod_{l=m+1}^{m+m_1}B_l^{1/(k_1(k_1+1))}\biggr)X^{s-(m+m_1)}. 
\end{aligned} \end{equation} On recalling the definitions of $H_1,$ $\mathcal{D}_1$ and $\mathcal{D}_2$, notice that $H_1\subseteq \mathcal{D}_1(\alpha_{1l})$ for $1\leq l\leq m$, and $H_1\subseteq \mathcal{D}_2(\alpha_{1l})$ for $m+1\leq l\leq m+m_1.$ Thus, we have for $1\leq l\leq m$ the bound $$A_l\leq \displaystyle\sum_{h\in \mathcal{D}_1(\alpha_{1l})}\bigl|\displaystyle\sum_{1\leq x_l\leq X}e(h\varphi_l(x_l))\bigr|^{2L}, $$ and for $m+1\leq l\leq m+m_1$ the bound $$ B_l\leq \displaystyle\sum_{h \in \mathcal{D}_2(\alpha_{1l})}\bigl|\displaystyle\sum_{1\leq x_l\leq X}e(h\varphi_l(x_l))\bigr|^{k_1(k_1+1)}.$$ Then, on substituting these inequalities into ($\ref{64}$), it follows by ($\ref{60}$), ($\ref{61}$) and $|H_1|\leq H\ll X^{1-\nu}$ that \begin{equation}\label{65} \begin{aligned} & \displaystyle\sum_{h\in H_1}\bigl|\displaystyle\sum_{1\leq \boldsymbol{x}\leq X}e(h(\varphi_1(x_1)+\cdots+\varphi_{s}(x_{s})))\bigr| \\ & \ll X^{1-(\frac{m}{2L}+\frac{m_1}{k_1(k_1+1)})}X^{m}X^{m_1-\frac{m_1\sigma}{k_1(k_1+1)}}X^{s-(m+m_1)-\eta}=X^{\phi-\eta}, \end{aligned} \end{equation} where $\eta$ is a suitably small positive number in terms of $\nu$, and $$\phi=1-\left(\frac{m}{2L}+\frac{m_1}{k_1(k_1+1)}\right)-\frac{m_1\sigma}{k_1(k_1+1)}+s.$$ Since $m_1=2L-m$ with $m,m_1\geq 0,$ $$\phi=1-\frac{m}{2L}-\frac{(2L-m)(1+\sigma)}{k_1(k_1+1)}+s.$$ On noting $2L\geq k_1^2+(1-2\sigma)k_1+2\sigma,$ simple calculations lead to the lower bound $2L(1+\sigma)\geq k_1(k_1+1).$ Hence, since $\phi$ is a linear function in $m$ with positive slope, we find that the function $\phi$ attains its maximum at $m=2L$, and thus $\phi\leq s.$ Thus, in all cases, we have $$\displaystyle\sum_{h\in H_1}|\displaystyle\sum_{1\leq \boldsymbol{x}\leq X}e(h(\varphi_1(x_1)+\cdots+\varphi_s(x_{s})))|\ll X^{s-\eta}.$$ Then, by the same treatment, it follows that for every $H_i\ (i=1,\ldots,2^s)$, we have ($\ref{54}$). This contradicts ($\ref{53}$) stemming from ($\ref{ineq5.35.3}$).
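The final exponent computation can be sanity-checked numerically. Since $m_1=2L-m$, one has $\phi-s=1-\frac{m}{2L}-\frac{m_1(1+\sigma)}{k_1(k_1+1)}$; the sketch below (a consistency check over sample parameters, not part of the proof) confirms that this quantity is non-positive on a grid, with equality exactly at the endpoint $m=2L$.

```python
from math import ceil

def phi_minus_s(k1, sigma, m):
    # phi - s = 1 - m/(2L) - m1*(1 + sigma)/(k1*(k1 + 1)), with m1 = 2L - m
    # and L = (k1^2 + k1)/2 + ceil(sigma*(1 - k1))
    L = (k1 * k1 + k1) // 2 + ceil(sigma * (1 - k1))
    m1 = 2 * L - m
    return 1 - m / (2 * L) - m1 * (1 + sigma) / (k1 * (k1 + 1))

for k1 in range(2, 8):
    for sigma in (j / 10 for j in range(1, 11)):
        L = (k1 * k1 + k1) // 2 + ceil(sigma * (1 - k1))
        for m in range(2 * L + 1):
            assert phi_minus_s(k1, sigma, m) <= 1e-9
        # equality exactly at the endpoint m = 2L
        assert phi_minus_s(k1, sigma, 2 * L) == 0.0
```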
Thus, we are forced to conclude that whenever $s>k_1^2+k_1+2\lceil\sigma(1-k_1)\rceil$, one has \begin{equation*} \min_{\substack{0\leq \boldsymbol{x}\leq X\\ \boldsymbol{x}\neq \boldsymbol{0}}}\|\varphi_1(x_1)+\varphi_2(x_2)+\cdots +\varphi_{s}(x_{s})\|\leq H^{-1}. \end{equation*} Hence, by letting $\nu\rightarrow 0$, we complete the proof of Theorem 1.2. \end{proof}
\section{Introduction} The symmetric and the alternating groups are ubiquitous in the study of the monodromy of curves. In particular, they are the only possible monodromy groups in the case of an indecomposable cover $X \rightarrow \P^1$ with $X$ a generic complex curve of genus greater than $3$ (see for instance \cite{GS} and \cite{GM}). The proof of the existence of such a covering for the general curve with symmetric monodromy is classical and, for alternating monodromy, can be found in \cite{MV} and \cite{AP}. Coverings with odd ramification have been studied starting from the seminal works of Serre in \cite{SerreRelevament} and Fried in \cite{FriedAlternating}. This is particularly interesting due to the relations with theta characteristics and modular towers studied in \cite{BF}. Serre proved in \cite{SerreRelevament} that the moduli spaces of absolute and inner covers of $\P^1$ of genus zero with monodromy group $A_n$ and elements in conjugacy classes of odd order cycles are connected. For a good reference about alternating groups, in which one can find all the precise definitions of conjugacy classes and coverings, see \cite{WG}. It is proved in \cite{FriedAlternating} that the spin structure determines the irreducible components of the Hurwitz space of coverings of $\P^1$ of degree $n$ branched on $r$ points with $r \geq n \geq 5$, and monodromy given by the conjugacy class of $3$-cycles in $A_n$. In both cases, the crucial point consists in the construction of the so-called lifting invariant, described algebraically by using a lifting to the double cover of $A_n$. This strategy does not work if one considers elements of even order. The cases of the alternating groups $A_6$ and $A_7$ are exceptional because they are the only two examples of alternating groups that admit coverings of degree three and six. This makes it possible to construct lifting invariants in the case of other ramification types, for instance of order two.
Bogomolov and Kulikov prove in \cite{BK} that the number of connected components of the Hurwitz scheme is asymptotically determined by the ambiguity index, defined in terms of group coverings. The cases of $A_6$ and $A_7$ turn out to be different, as expected, with respect to the other alternating groups. This makes it interesting to study the irreducible components of the Hurwitz schemes in lower genus for these cases. In this work, we study the irreducible components of the Hurwitz scheme in the case of the conjugacy class of products of two disjoint cycles of $A_6$. The strategy consists of an induction on the number of branch points of the covering, using the lifting invariant to distinguish the different components. Our main result is Theorem \ref{teor:finalresult}, which asserts that the Hurwitz scheme has two irreducible components when the genus is greater than zero, and only one component for genus zero. We also study in Theorem \ref{teor:finalresultinner} the case of the inner moduli space, proving that there are three irreducible components for genus greater than zero and two for genus zero. This provides an explicit example in which the space of the Galois closures of coverings of $\P^1$ has more connected components than the Hurwitz scheme. As for the $3$-cycles treated in \cite{FriedAlternating}, we get that the minimal bound on the genus $g$ of $X$ from which the results of \cite{BK} hold is $g > 0$ in the case of the product of two disjoint cycles of $A_6$. \textbf{The plan of the paper.} Some preliminaries concerning coverings, monodromy and Hurwitz spaces are given in Section \ref{sec:prel}. Theorem \ref{teor:finalresult}, concerning the absolute moduli space, is mainly proved in Section \ref{sec_absolute}. The two base cases of the induction, namely the cases of five and six points, are treated in Section \ref{App:MonGenZero} and Section \ref{App:MonGenOne}, respectively.
The proof of Theorem \ref{teor:finalresultinner}, which is the main result in the case of inner moduli spaces and is carried out in Section \ref{sec_inner}, uses the results of the previous sections. Some open problems and some ideas for further work are stated in Section \ref{sec:open}. Finally, Appendix \ref{Apx:code} contains all the MAGMA code used in the proofs. \section{Preliminaries} \label{sec:prel} In this paper all the varieties will be defined over the complex numbers. Let $X \xrightarrow{f} \P^1$ be a covering of the sphere of degree $n$. From now on, $k$ will always denote the number of ramification points of such a covering. A branch point for $f$ is a point $\bar{z} \in \P^1$ such that the fibre over $\bar{z}$ consists of strictly fewer than $n$ points. Let $Z$ be the branch locus of $f$; the fundamental group $\pi_1(\P^1 \smallsetminus Z, z_0)$ is generated by loops $[\gamma_1], \ldots, [\gamma_r]$ around the branch points modulo the relation $\prod [\gamma_i]=1$. Denote by $F_0$ the fibre $f^{-1}(z_0)$, $z_0 \notin Z$. For every $p_j \in F_0$, by the uniqueness of path lifting, each element $[\gamma_i]$ lifts in a unique way to a path $\bar{\gamma}_i$ in $X \smallsetminus f^{-1}(Z)$ starting at $p_j$. This gives a well-defined map \begin{align*} m:\pi_1(\P^1 \smallsetminus Z, z_0) &\rightarrow \operatorname{Aut}(F_0)\\ [\gamma_i] &\mapsto (m([\gamma_i]): p_j \mapsto \bar{\gamma}_i(1)). \end{align*} The image of $m$ is called the \textbf{monodromy group} of $f$. If a labelling of the points of the fibre $F_0$ is given, one can identify the automorphism $m([\gamma_i])$ with an element of the symmetric group on $n$ elements. From the geometric point of view, one can consider the Hurwitz space of coverings $\{X \rightarrow \P^1\}$ with fixed ramification type. There are several equivalence relations on this space.
In a natural way, two coverings $\{X \xrightarrow{f} \P^1\}$ and $\{Y \xrightarrow{g} \P^1\}$ are considered equivalent if there exists a biholomorphism $\psi:X \rightarrow Y$ such that $f=g \circ \psi$. It is easy to prove that this equivalence relation is the strongest one that fixes the branch points, the ramification type, and, up to conjugation, the monodromy group $G$ of the cover. This defines the \textbf{Hurwitz spaces} $\H(G,C)^{abs}$ or $\H([G],C)^{abs}$, depending on whether one wants to fix the monodromy group $G$ or to consider it only up to conjugation in $S_n$. Another useful equivalence relation is obtained by considering the Galois closure of such coverings. In this case one has to choose a connected component of the $n$-fold fibre product of $f$, and the \textbf{inner Hurwitz space} $\H(G,C)^{in}$ is defined by considering all such choices to be equivalent. An algebraic description of these spaces can be given by using Nielsen classes. An introduction to the theory of these classes can be found in \cite{FriedOnline}, and \cite{MS} contains some background on Hurwitz spaces. Let $G$ be a transitive subgroup of $S_n$ and consider $r$ conjugacy classes $C:=(c_i)_{i=1}^r$ of $G$. An element $\pmb{g}$ in the \textbf{Nielsen class} $\operatorname{Ni}(G,C)$ is given by $r$ elements $g_i$ of $G$ such that $\prod g_i = 1$, $g_i \in c_i$ and the subgroup $\langle g_i \rangle$ generated by the $g_i$ is $G$. \textbf{Riemann's existence theorem} guarantees that, given a covering $X \xrightarrow{f} \P^1$, the $r$ elements of $S_n$ associated to the images $m([\gamma_i])$ belong to a certain Nielsen class. Conversely, given an element $\pmb{g}$ in a Nielsen class, there exists a covering $X \xrightarrow{f} \P^1$ that is associated to $\pmb{g}$. Some background on Riemann's existence theorem can be found in \cite{FEnhanced}, \cite{FTwist}, \cite{MM}, \cite{SGal} and \cite{V}. 
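The conditions defining a Nielsen class are finitely checkable. As a sketch (in Python with sympy, whereas the paper's computations use MAGMA), the following verifies that a tuple of five elements of the class ${C_{2 \times 2}}$ has product one, lies in the class, and generates a transitive copy of $A_6$; the tuple is one that appears later in the paper.

```python
from sympy.combinatorics import Permutation, PermutationGroup

def a(x, y, w, z):
    """The permutation (x,y)(w,z) of S_6, 1-indexed as in the paper."""
    return Permutation([[x - 1, y - 1], [w - 1, z - 1]], size=6)

# A candidate for Ni(A_6, C_{2x2}^5); sympy multiplies permutations
# left-to-right, matching the convention prod g_i = 1.
g = [a(1, 2, 3, 4), a(1, 3, 2, 4), a(1, 4, 2, 5), a(1, 6, 2, 3), a(1, 6, 3, 5)]

prod = g[0]
for h in g[1:]:
    prod = prod * h
assert prod.is_Identity                                          # prod g_i = 1
assert all(h.order() == 2 and len(h.support()) == 4 for h in g)  # g_i in C_{2x2}

G = PermutationGroup(g)
assert G.order() == 360 and G.is_transitive()                    # <g_i> = A_6
```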
There are two group actions that can be defined on $\operatorname{Ni}(G,C)$: choosing a different labelling of the elements of the fibre $F_0$ gives a right action of $S_n$. In this way only the conjugacy class of $G$ in $S_n$ is fixed and we will denote the Nielsen space by $\operatorname{Ni}([G],C)$. If instead we want to keep the group $G$ fixed, we have to act only by elements of $N_{S_n}(G)$, the normalizer of $G$ in $S_n$. This is called the \textbf{absolute action}. The group $G$ also acts on $\operatorname{Ni}(G,C)$ by conjugation, giving the so-called \textbf{inner action}. If $\pmb{g}:=(g_1, \ldots, g_r)$ is an element in $\operatorname{Ni}(G,C)$ and $s$ is the acting element in $N_{S_n}(G)$ or in $G$, the action, denoted by $\phi_s$, is $$\phi_s(\pmb{g}) = \phi_s(g_1, \ldots, g_r) = (s^{-1} g_1 s, \ldots, s^{-1} g_r s).$$ Hurwitz spaces are related to Nielsen spaces via the quotient by these equivalence relations: $\operatorname{Ni}(G,C)/N_{S_n}(G)$ is related to $\H(G,C)^{abs}$, and $\operatorname{Ni}(G,C)/G$ is related to $\H(G,C)^{in}$. As specified before, it is also possible to consider only the conjugacy class of $G$ in $S_n$, obtaining the relation between $\operatorname{Ni}([G],C)/{S_n}$ and $\H([G],C)^{abs}$. There is another action on these spaces, called the \textbf{Hurwitz action}. The number of orbits of this action equals the number of the connected components of the Hurwitz spaces. From the geometric point of view, one can imagine exchanging two branch points by a continuous movement, whereas from the algebraic point of view, this action is described by the following \begin{defn} \label{defn:HAC} Let $\pmb{g}:=(g_1, \ldots, g_r)$ be an element in $\operatorname{Ni}(A_n,C^r)$. The braid group $B_r$ on $r$ elements acts on the right on this set. Following the notation of \cite{HBC}, consider a generator $\sigma_i$ of $B_r$. 
The action of $\sigma_i$ on $\pmb{g}$ is given by $$\sigma_i(\pmb{g}):=(g_1, \ldots, g_{i+1}, g_{i+1}^{-1} g_i g_{i+1}, \ldots, g_r).$$ \end{defn} \noindent A description of the Hurwitz action, together with examples of monodromies, can be found in \cite{HM}, and a point of view on the study of the connected components of the Hurwitz schemes by means of semigroups over groups is carried out in \cite{KK}. Notice that if the group $G$ is the alternating group $A_n$, then $N_{S_n}(A_n)$ is the whole $S_n$, because $A_n$ is normal in $S_n$. We will need the following lemma, which immediately implies that the inner action on $\operatorname{Ni}(G,C)$ can actually be obtained by using the Hurwitz action only. \begin{lemma}[Lemma 2.6 of \cite{FriedAlternating}] \label{lemma:friedproductone} Let $\pmb{g}:=(g_1, \ldots, g_r)$ be an element in $Ni(G,C)$ such that there exist indices $i \leq j < r$ for which the consecutive entries $\{g_i, g_{i+1}, \ldots, g_j\}$ satisfy $\prod_{h=i}^j g_h =1$. If we denote by $\gamma$ an element in the subgroup generated by $\{g_i, g_{i+1}, \ldots, g_j\}$, then there exists an element $Q \in B_r$ such that $$Q(\pmb{g}) = (g_1, \ldots, g_{i-1}, \gamma g_i \gamma^{-1}, \gamma g_{i+1} \gamma^{-1}, \ldots, \gamma g_j \gamma^{-1}, g_{j+1}, \ldots, g_r)$$ \end{lemma} The triple cover of $A_6$ is a group of $1080$ elements called the \textbf{Valentiner group}. This group was discovered by Valentiner in \cite{V}, and then studied by Wiman and Gerbaldi in \cite{W} and \cite{Ger}. This cover is described by the following exact sequence, where $V$ is the Valentiner group and $C_3$ is the cyclic group of order three. $$0 \rightarrow C_3 \rightarrow V \rightarrow A_6 \rightarrow 0$$ There exist many explicit descriptions of the Valentiner group, together with the covering map to $A_6$; see for instance \cite{At}. 
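The braid move of Definition \ref{defn:HAC} and its basic invariants are easy to sketch in code. The fragment below (Python with sympy; an illustration, not the paper's MAGMA code) implements $\sigma_i$ and its inverse and checks that the total product and the multiset of cycle types are preserved, which is why the Hurwitz action descends to Nielsen classes.

```python
from sympy.combinatorics import Permutation

def sigma(g, i):
    """Braid generator: (g_i, g_{i+1}) -> (g_{i+1}, g_{i+1}^-1 g_i g_{i+1})."""
    g = list(g)
    g[i], g[i + 1] = g[i + 1], g[i + 1]**-1 * g[i] * g[i + 1]
    return g

def sigma_inv(g, i):
    """Inverse braid move: (g_i, g_{i+1}) -> (g_i g_{i+1} g_i^-1, g_i)."""
    g = list(g)
    g[i], g[i + 1] = g[i] * g[i + 1] * g[i]**-1, g[i]
    return g

def product(g):
    p = g[0]
    for h in g[1:]:
        p = p * h
    return p

# An illustrative tuple with product one (not from the paper's lists).
g = [Permutation([[0, 1], [2, 3]], size=6),
     Permutation([[0, 2], [1, 3]], size=6),
     Permutation([[0, 3], [1, 2]], size=6)]
h = sigma(g, 0)
assert product(h) == product(g)                  # the product is invariant
assert sigma_inv(h, 0) == g                      # sigma_i is invertible
cycle_types = lambda t: sorted((p.order(), len(p.support())) for p in t)
assert cycle_types(h) == cycle_types(g)          # conjugacy classes preserved
```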
In this work we will identify $A_6$ with the subgroup of $S_6$ generated by $$\{s_1, s_2\}=\{(1,2)(3,4),(1,2,4,5)(3,6)\}\text{.}$$ The Valentiner group is described as the subgroup of $S_{18}$ generated by \begin{align*} \{v_1,v_2\}=\{&(2, 6)(4, 11)(7, 9)(8, 13)(10, 14)(12, 16), \\ &(1, 2, 7, 4)(3, 8, 6, 10)(5, 9, 13, 12)(11, 15)(14, 17)(16, 18)\}\text{.} \end{align*} The covering map $V \xrightarrow{\pi} A_6$ is defined on the generators by $$v_1 \mapsto s_1 \qquad v_2 \mapsto s_2.$$ Let ${C_{2 \times 2}}$ denote the conjugacy class of $A_6$ given by the product of two disjoint cycles. An element $x$ in ${C_{2 \times 2}}$ admits a unique lift $\hat{x}$ to $V$ of order $2$. Let $\pmb{g}:=(g_1,\ldots, g_k)$ be an element in the Nielsen class $\operatorname{Ni}(A_6,{C_{2 \times 2}}^k)$; for each $g_i$, let $\hat{g}_i$ be its lift of order $2$ to the Valentiner group. Since the product of the $g_i$ is the identity, the product of the liftings $\gamma(\pmb{g}):= \prod{\hat{g}_i}$ belongs to the preimage $\pi^{-1}(1_{A_6})$. \begin{prop} \label{prop:InnerAbsolute} Let $\pmb{g}:=(g_1,\ldots, g_k)$ be an element in $\operatorname{Ni}(A_6,{C_{2 \times 2}}^k)$. \begin{enumerate} \item The Hurwitz action commutes with the absolute and inner actions. \item The element $\gamma(\pmb{g})$ is an invariant of the Hurwitz action. \item The element $\gamma(\pmb{g})$ is an invariant of the inner action. \item The order of the element $\gamma(\pmb{g})$ is an invariant of the absolute action. \end{enumerate} The element $\gamma(\pmb{g})$ will be called the lifting invariant of $\pmb{g}$. \end{prop} \begin{proof} Although part (1) can be proved by direct computation, it is interesting to look at it from the geometric point of view: since the absolute and inner actions amount to a choice of labels for the fibre, the claim is straightforward. 
\noindent To prove (2), consider one of the generators of the braid group $B_r$, say $\sigma_1$, which acts on $\pmb{g}$ by $\sigma_1(g_1,\ldots, g_k)=(g_2, g_2^{-1} g_1 g_2, \ldots)$. Then one obtains $$\gamma(\sigma_1(\pmb{g}))=\hat{g}_2 \cdot \hat{g}_2^{-1} \cdot \hat{g}_1 \cdot \hat{g}_2 \cdot \ldots = \gamma(\pmb{g})\text{.}$$ \noindent To prove (3), let $t$ be an element in $A_6$, and let $\hat{t}$ be a lifting of $t$ in $V$. Then one obtains $$\gamma(t^{-1} \pmb{g} t) = \gamma(t^{-1} g_1 t, t^{-1} g_2 t, \ldots) = \hat{t}^{-1} \hat{g}_1 \hat{t} \cdot \hat{t}^{-1} \hat{g}_2 \hat{t} \cdot \ldots = \hat{t}^{-1} \gamma(\pmb{g}) \hat{t} = \gamma(\pmb{g})\text{.}$$ The last equality holds because $\gamma(\pmb{g})$ is in the center of $V$. \noindent Part (4) holds because all the lifting maps in $$0 \rightarrow C_3 \rightarrow V \rightarrow A_6 \rightarrow 0$$ can be chosen in a natural way; hence the order of $\gamma(\pmb{g})$ is well defined and does not change under the absolute action. \end{proof} Notice that the number of possible choices for $\gamma(\pmb{g})$ coincides with the ambiguity index $a(A_6,{C_{2 \times 2}})$ used in \cite{BK}. \begin{ese} Code \ref{prog:liftinginvariant} computes the lifting invariant of an element in $\operatorname{Ni}(A_6,{C_{2 \times 2}}^k)$; the strategy is a direct computation using the definition. As an example of the fact that only the order of the lifting is an invariant in $\operatorname{Ni}(A_6,{C_{2 \times 2}}^k)^{abs}$, one can take these two elements of $\operatorname{Ni}(A_6,{C_{2 \times 2}}^5)$ $$\{\a{1234},\a{1324},\a{1425},\a{1623},\a{1635}\}\text{,}$$ $$\{\a{1234},\a{1324},\a{1426},\a{1523},\a{1536}\}\text{.}$$ The two lifting invariants are the two liftings of the identity of order three, and one can see immediately that the automorphism $\phi_{\b{56}}$ sends one element to the other. 
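The same computation can also be sketched outside MAGMA. The following Python fragment (using sympy) rebuilds the lifting invariant from the generators of $V \le S_{18}$ given above; the word-rewriting via `generator_product` is an implementation choice of this sketch, not the method of Code \ref{prog:liftinginvariant}.

```python
from sympy.combinatorics import Permutation, PermutationGroup

# 0-indexed versions of the generators of A_6 < S_6 and V < S_18 above;
# the covering map pi: V -> A_6 sends v1 -> s1 and v2 -> s2.
s1 = Permutation([[0, 1], [2, 3]], size=6)
s2 = Permutation([[0, 1, 3, 4], [2, 5]], size=6)
v1 = Permutation([[1, 5], [3, 10], [6, 8], [7, 12], [9, 13], [11, 15]], size=18)
v2 = Permutation([[0, 1, 6, 3], [2, 7, 5, 9], [4, 8, 12, 11],
                  [10, 14], [13, 16], [15, 17]], size=18)
A6, V = PermutationGroup([s1, s2]), PermutationGroup([v1, v2])
assert A6.order() == 360 and V.order() == 1080
z = next(c for c in V.center().elements if not c.is_Identity)  # central, order 3

def prod(seq, identity):
    p = identity
    for q in seq:
        p = p * q
    return p

def lift(x):
    """The unique order-2 lift of x in C_{2x2}: write x as a word in s1, s2,
    read the word in v1, v2, then correct by the central element z."""
    word = A6.generator_product(x, original=True)
    if prod(word, Permutation(5)) != x:      # generator_product may return the
        word = list(reversed(word))          # factors in reversed order
    assert prod(word, Permutation(5)) == x
    def to_v(q):                             # transport each letter to V
        for gen, v in ((s1, v1), (s2, v2), (s2**-1, v2**-1)):
            if q == gen:
                return v
        raise ValueError(q)
    vhat = prod([to_v(q) for q in word], Permutation(17))
    return next(c for c in (vhat, vhat * z, vhat * z * z) if c.order() == 2)

def a(x, y, w, t):
    return Permutation([[x - 1, y - 1], [w - 1, t - 1]], size=6)

tup1 = [a(1,2,3,4), a(1,3,2,4), a(1,4,2,5), a(1,6,2,3), a(1,6,3,5)]
tup2 = [a(1,2,3,4), a(1,3,2,4), a(1,4,2,6), a(1,5,2,3), a(1,5,3,6)]
inv1 = prod([lift(h) for h in tup1], Permutation(17))
inv2 = prod([lift(h) for h in tup2], Permutation(17))
assert inv1.order() == 3 and inv2.order() == 3 and inv1 != inv2
```

The last line confirms that the two tuples above have the two distinct liftings of the identity of order three as invariants.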
\end{ese} The main result in the case of the absolute moduli space is provided by the following theorem, and it is completely analogous to the results of \cite{FriedAlternating}: there is only one connected component for genus zero and exactly two connected components for higher genera. \begin{teor} \label{teor:finalresult} The spaces $\H(A_6,{C_{2 \times 2}}^k)^{abs}$, for $k$ greater than or equal to six, have exactly two connected components $$\H_+(A_6,{C_{2 \times 2}}^k)^{abs} \text{ and } \H_-(A_6,{C_{2 \times 2}}^k)^{abs}\text{.}$$ The space $\H(A_6,{C_{2 \times 2}}^5)^{abs}$ is connected. That is, the Hurwitz scheme has exactly one irreducible component if the genus is equal to zero and two irreducible components for genus greater than zero. \end{teor} The case of the inner moduli space is slightly different from the results of \cite{FriedAlternating}, due to the fact that the lifting invariant has order three. The connected components turn out to be two for genus zero and three for higher genera. \begin{teor} \label{teor:finalresultinner} Fix once and for all a lifting $\sigma$ of the identity of order three. The spaces $\H(A_6,{C_{2 \times 2}}^k)^{in}$, for $k$ greater than or equal to six, have exactly three connected components \begin{itemize} \item $\H_{\conn{0}}(A_6,{C_{2 \times 2}}^k)^{in}$, the elements that lift to $\sigma^0=1_V$. \item $\H_{\conn{1}}(A_6,{C_{2 \times 2}}^k)^{in}$, the elements that lift to $\sigma^1$. \item $\H_{\conn{2}}(A_6,{C_{2 \times 2}}^k)^{in}$, the elements that lift to $\sigma^2$. \end{itemize} The space $\H(A_6,{C_{2 \times 2}}^5)^{in}$ has two connected components \begin{itemize} \item $\H_{\conn{1}}(A_6,{C_{2 \times 2}}^5)^{in}$, the elements that lift to $\sigma^1$. \item $\H_{\conn{2}}(A_6,{C_{2 \times 2}}^5)^{in}$, the elements that lift to $\sigma^2$. \end{itemize} \end{teor} For the sake of simplicity we will denote in the same way an element in a Nielsen class and the class of that element in the Hurwitz space. 
Choosing an element $\pmb{g}$ in $\H_{\conn{1}}(A_6,{C_{2 \times 2}}^k)^{in}$ means choosing an element in $Ni(A_6,{C_{2 \times 2}}^k)$ with lifting invariant $\sigma^1$, so that the corresponding covering belongs to the connected component $\H_{\conn{1}}(A_6,{C_{2 \times 2}}^k)^{in}$. \section{The space $\H(A_6,{C_{2 \times 2}}^r)^{abs}$} \label{sec_absolute} The study of the space $\H(A_6,{C_{2 \times 2}}^r)^{abs}$ for all the genera relies on the study of the monodromy group for genus zero and one. It is then possible to carry out an induction on the number of branch points in order to conclude the classification. Algebraically this means passing from $k$ to $k-1$ elements of the conjugacy class by multiplying two of them; geometrically, if these elements correspond to two points $P_1$ and $P_2$, this corresponds to considering the loop in the fundamental group obtained by composing a loop around $P_1$ and a loop around $P_2$. This gives rise to a subgroup of the fundamental group that describes a monodromy on a smaller number of points. In order to carry out the induction step, two further results are necessary. First, one has to prove that it is possible to reduce the number of points without changing the monodromy group, and second, one has to prove that it is possible to perform such a reduction while keeping all the elements in the conjugacy class ${C_{2 \times 2}}$. In order to solve the first issue it is convenient to consider the problem of finding the minimum number of generators contained in a sequence of elements of $A_6$. \begin{defn} Let $G$ be a finite group. The max-length of $G$ is the maximum possible length of a chain of subgroups of $G$. \end{defn} \noindent The following proposition holds in general for every finite group. \begin{prop} \label{prop:fivegenerators} Let $G$ be a finite group and let $l$ be the max-length of $G$. 
If $S:=\{s_1, \ldots, s_n\}$ generates $G$, and $n \geq l$, then there are $l$ elements of $S$ that are still generators. \end{prop} \begin{proof} The proof is an induction on the cardinality of $S$. If $n=l$ the claim is trivially true. Assume $n>l$ and let the result be true for $n-1$. Consider $S_1 := S \smallsetminus \{s_1\}$. If $\langle S_1 \rangle =\langle S \rangle$, one can use the induction hypothesis on $S_1$. If $\langle S_1 \rangle \subsetneq \langle S \rangle$, then one can consider $S_2 := S_1 \smallsetminus \{s_2\}$ and proceed as before, constructing a chain of subgroups of $G$, which has maximum possible length $l$. This means that, at the $l$-th step at the latest, $\langle S_{l+1} \rangle$ must be equal to $\langle S_l \rangle$, and then one can use the induction hypothesis to conclude the proof. \end{proof} The max-length of $A_6$ is five, thus it is always possible to find five generators in a generating set of cardinality $n \geq 5$. Notice that there exists a set of five generators of $A_6$ such that it is not possible to find among them $4$ elements that still generate the whole $A_6$. However, the following proposition shows that it is possible to get a better result if one considers only elements in the conjugacy class ${C_{2 \times 2}}$. \begin{lemma} Let $H$ be a subgroup of $G$ of order $h$, and $g$ an element in $G \smallsetminus H$ of order $2$. Then the subgroup $\langle H,g\rangle$ has order $2hx$ for a certain natural number $x$. \end{lemma} \begin{lemma} \label{lemma:fourgenerators} Let $S:=\{s_1, \ldots, s_n\}$ generate $A_6$, with $n \geq 4$, and let the $s_i$ belong to the conjugacy class ${C_{2 \times 2}}$. Then there exists a subset of four elements of $S$ that still generates the whole $A_6$. \end{lemma} \begin{proof} By Proposition \ref{prop:fivegenerators} one can assume $n$ to be equal to five. Then Code \ref{prog:from5to4} shows that the claim holds. 
The strategy is just a case by case analysis listing all the possible sets of $5$ generators, which can be chosen to be disjoint and ordered, and then checking that the claim holds. \end{proof} \noindent The reduction from $4$ generators to $3$ generators is not as simple as the previous step. In fact there exist sets of $4$ generators in the conjugacy class ${C_{2 \times 2}}$ that cannot be reduced to cardinality three just by taking one element out; an example is provided by \begin{equation} \label{Eqn:exampleNonRed4To3} \{\a{1234}, \a{1235}, \a{1246}, \a{1324}\}\text{.} \end{equation} In order to proceed further, one has to use the Hurwitz action defined in Definition \ref{defn:HAC} to modify the elements. \begin{prop} \label{Prop:threeGeneartors} Let $S:=\{s_1, \ldots, s_n\}$ be an ordered set of generators of $A_6$ with cardinality $n \geq 4$, such that all the $s_i$ belong to the conjugacy class ${C_{2 \times 2}}$. Then, up to the Hurwitz action on $S$, it is possible to find three elements that still generate $A_6$. \end{prop} \begin{proof} By Lemma \ref{lemma:fourgenerators} one can assume $n$ equal to four. Code \ref{prog:final4to3} concludes the proof using a case by case analysis. \end{proof} The second issue concerns finding two elements $g_1$ and $g_2$ that can be used to reduce the number of branch points. As described before, from the algebraic point of view the monodromy type can be computed just by multiplying the two elements $g_1$ and $g_2$. Since we are working in ${C_{2 \times 2}}$, it is necessary that $g_1 \cdot g_2$ still belongs to ${C_{2 \times 2}}$. \begin{prop} \label{Prop:FixedPointReduction} Let $g_1$ and $g_2$ be two elements in the conjugacy class ${C_{2 \times 2}}$ of $A_6$. If $g_1$ and $g_2$ have the same fixed points, then $g_1 g_2$ either belongs to ${C_{2 \times 2}}$, or is the identity. 
\end{prop} \begin{proof} Up to an external automorphism of even parity, one can assume the first element to be $\a{1234}$ and the second to be either $\a{1234}$ or $\a{1324}$. The claim is then straightforward. \end{proof} The best-case scenario for the induction would be finding two elements with the same fixed points in a set of cardinality four. \begin{prop} \label{Prop:reductionfour} Let $\{g_1, g_2, g_3, g_4\}$ be an ordered set of elements of $A_6$ in the conjugacy class ${C_{2 \times 2}}$. Then, up to the Hurwitz action and to external automorphisms, either it is possible to find two elements with the same fixed points or the set is $$\{\a{1234}, \a{1235}, \a{1634}, \a{1645}\}\text{.}$$ \end{prop} \begin{proof} One can perform a case by case analysis in which the Hurwitz action is used. This is done with Code \ref{prog:find2fixedpoints}: the strategy is to list all the possible sets of four elements and then use the Hurwitz action to check whether two elements with the same fixed points can be obtained. Its output consists of all the sets for which the Hurwitz action does not work. Up to external automorphisms all these sets are equivalent to $$\{\a{1234}, \a{1235}, \a{1634}, \a{1645}\}\text{.}$$ \end{proof} The previous proposition shows that it is not always possible to find the expected reduction in a set of cardinality four. As a consequence, the proof for $k=7$ will not be part of the induction step, and is carried out at the beginning of the proof of Theorem \ref{teor:finalresult}. Luckily, the following proposition shows that it is always possible to find two elements with the same fixed points in a set of cardinality five. \begin{prop} \label{Prop:reductionfive} Let $S:=\{g_1, \ldots, g_5\}$ be an ordered set of elements of $A_6$ in the conjugacy class ${C_{2 \times 2}}$. Up to the Hurwitz action, it is possible to find two elements with the same fixed points. \end{prop} \begin{proof} This is done with a case by case analysis carried out with Code \ref{prog:find2fixedpoints5}. 
\end{proof} Let $\pmb{g}:=\{g_1, \ldots, g_k\}$ be an ordered set of elements in ${C_{2 \times 2}}$, with $k \geq 5$. By Proposition \ref{Prop:reductionfive}, there are two elements with the same fixed points. Up to the Hurwitz action one can assume them to be $g_1$ and $g_2$. By multiplying these elements, as proved in Proposition \ref{Prop:FixedPointReduction}, two cases can arise. If $g_1 = g_2$, that is, if the product of $g_1$ and $g_2$ is the identity, one can reduce $\pmb{g}$ to $\bar{\pmb{g}}:=\{g_3, \ldots, g_k\}$, an element of length $k-2$; this is called \textbf{2-reduction}. If $g_1 \neq g_2$, one can reduce $\pmb{g}$ to $\bar{\pmb{g}}:=\{g_1 \cdot g_2, g_3, \ldots, g_k\}$, an element of length $k-1$; this is called \textbf{1-reduction}. The following proposition describes the behaviour of the lifting invariant under such reductions. \begin{prop} \label{prop:liftingandreduction} Let $\pmb{g}:=\{g_1, \ldots, g_k\}$ be an ordered set of $k \geq 5$ elements in the conjugacy class ${C_{2 \times 2}}$ such that the product of the $g_i$ is equal to the identity. Let $\bar{\pmb{g}}$ be the reduction of $\pmb{g}$. Then, $\pmb{g}$ and $\bar{\pmb{g}}$ have the same lifting invariant in the Valentiner group. \end{prop} \begin{proof} If $\bar{\pmb{g}}$ is a $2$-reduction, then $g_1$ was equal to $g_2$, and hence the liftings $\hat{g}_1$ and $\hat{g}_2$ are also equal. Then $\hat{g}_1 \cdot \hat{g}_2$ is the identity and the lifting invariants of $\pmb{g}$ and $\bar{\pmb{g}}$ are the same. If $\bar{\pmb{g}}$ is a $1$-reduction, one has to consider the lift of the element $g_1 \cdot g_2$. Since $g_1 \cdot g_2$ is still in ${C_{2 \times 2}}$, the lifting of $g_1 \cdot g_2$ is the product of the liftings of $g_1$ and $g_2$; thus the lifting invariants of $\pmb{g}$ and $\bar{\pmb{g}}$ are the same. 
\end{proof} Neither $\pmb{g}$ nor $\bar{\pmb{g}}$ is assumed to be transitive in Proposition \ref{prop:liftingandreduction}, because the canonical lift can be defined for every sequence of elements with product one. Every time this reduction is used on an element $\pmb{g}$ in a Nielsen class, one should check that the result $\bar{\pmb{g}}$ is still transitive. In the induction step of the proof of Theorem \ref{teor:finalresult}, this is guaranteed by using Proposition \ref{Prop:threeGeneartors} to select three generators, which ensures transitivity. Notice that, in order to obtain an element of ${C_{2 \times 2}}$ from the multiplication of two other elements, it is sufficient that the two elements generate a subgroup of order at most four. Despite this more general result, in the proof of Theorem \ref{teor:finalresult} it is convenient to show that it is possible to consider only reductions of type $1$, in order to apply the following \begin{prop} \label{prop:liftinghurwitz} Let $\pmb{g}$ and $\pmb{h}$ belong to $Ni(A_6,{C_{2 \times 2}}^k)$, $k > 5$, with $1$-reductions to $\bar{\pmb{g}}$ and $\bar{\pmb{h}}$. If $\bar{\pmb{g}}$ and $\bar{\pmb{h}}$ are equivalent under the Hurwitz action, then this action can be lifted to obtain an equivalence between $\pmb{g}$ and $\pmb{h}$. \end{prop} \begin{proof} One can always assume that the $1$-reduction takes place between the first two elements of $\pmb{g}$ and $\pmb{h}$. Fix an external automorphism such that the reduction $\pmb{g} \rightarrow \bar{\pmb{g}}$ is the following $$(\a{1324}, \a{1423}, g_3, \ldots, g_k)\rightarrow (\a{1234}, g_3, \ldots, g_k)$$ By hypothesis, there exists a Hurwitz action on $\bar{\pmb{h}}$ that makes $$(h_1\cdot h_2, h_3, \ldots, h_k)$$ equal to $$(\a{1234}, g_3, \ldots, g_k)$$ This action can be extended to an action on $\pmb{h}$ by considering the same action on $h_3, \ldots, h_k$ and by always conjugating the elements $h_1$ and $h_2$ together. 
This action makes $\pmb{h}$ equal to $$(\bar{h}_1, \bar{h}_2, g_3, \ldots, g_k)$$ where $\bar{h}_1\bar{h}_2=\a{1234}$, and $\bar{h}_1$ and $\bar{h}_2$ form a $1$-reduction. Up to the Hurwitz action on $\bar{h}_1$ and $\bar{h}_2$ alone, it is possible to choose $\bar{h}_1=\a{1324}$ and $\bar{h}_2=\a{1423}$. This gives an equivalence between $\pmb{g}$ and $\pmb{h}$ and concludes the proof. \end{proof} Let us recall the base steps of the induction, which are carried out in Sections \ref{App:MonGenZero} and \ref{App:MonGenOne}. \begin{prop} \label{Prop:FivePoints} The monodromy arising from a degree $6$ covering of $\P^1$ ramified on $5$ points with ramification in the conjugacy class ${C_{2 \times 2}}$ of $A_6$ can generate three possible subgroups of $A_6$: one subgroup of order $24$, denoted by $G_{24}$, that corresponds to the unique, up to conjugation, transitive embedding of $S_4$ inside $A_6$; one of order $60$, denoted by $G_{60}$, that corresponds to the unique, up to conjugation, transitive embedding of $A_5$ inside $A_6$; and the whole $A_6$. Each space $\H(A_6,{C_{2 \times 2}}^5)^{abs}$, $\H([G_{24}],{C_{2 \times 2}}^5)^{abs}$ and $\H([G_{60}],{C_{2 \times 2}}^5)^{abs}$ is connected. \end{prop} \begin{prop} \label{Prop:SixPoints} The space $\H(A_6,{C_{2 \times 2}}^6)^{abs}$ has exactly two connected components, denoted by $$\H_+(A_6,{C_{2 \times 2}}^6)^{abs} \text{ and } \H_-(A_6,{C_{2 \times 2}}^6)^{abs}$$ \end{prop} The following is the proof of the main theorem for the absolute moduli spaces. \begin{proof}[Proof of Theorem \ref{teor:finalresult}] Let us proceed by induction on the number of points $k$. Propositions \ref{Prop:FivePoints} and \ref{Prop:SixPoints} prove the claim for $k=5$ and $k=6$, respectively. Assume the claim holds for fewer than $k$ points; let us prove it for $k$. It is easy to prove that all the considered components are non-empty. This is done explicitly for $k=5$ and $6$ in the related sections. 
In the general case, take for instance the element $$(\a{1234}, g_2, \ldots, g_{k-1} ) \in \H_+(A_6,{C_{2 \times 2}}^{k-1})^{abs}\text{.}$$ By Proposition \ref{prop:liftingandreduction}, the element $$(\a{1324},\a{1423}, g_2, \ldots, g_{k-1} )$$ has the same lifting invariant, hence belongs to $\H_+(A_6,{C_{2 \times 2}}^{k})^{abs}$. Proposition \ref{prop:InnerAbsolute} makes clear that $\H_+(A_6,{C_{2 \times 2}}^{k})^{abs}$ and $\H_-(A_6,{C_{2 \times 2}}^{k})^{abs}$ are two different irreducible components. It remains to prove that these components are connected, namely that if $\pmb{g}$ and $\pmb{g}'$ belong to the same component, that is, they have the same lifting invariant, then they are equivalent under the Hurwitz and the absolute actions. The strategy is to use a $1$-reduction in order to obtain a monodromy on $k-1$ points, then use the induction hypothesis, and finally lift the equivalence using Proposition \ref{prop:liftinghurwitz}. Let us work on the element $\pmb{g}$. By using Proposition \ref{Prop:threeGeneartors} one can assume the last three elements, which we will denote by $\{h_1, h_2, h_3\}$, to be generators of $A_6$. This ensures that the reductions will have maximal monodromy. Let us now focus on the remaining $k-3$ elements. If $k > 7$, there are enough elements to apply Proposition \ref{Prop:reductionfive} and get a reduction. For $k=7$, the only possibility is to apply Proposition \ref{Prop:reductionfour} to the remaining $4$ elements. One gets that either there are still two elements with the same fixed points, or the element is of the form $$(\a{1234}, \a{1235}, \a{1634}, \a{1645},h_1, h_2, h_3)$$ Notice that the product of the first four elements is the identity, hence the product of the last three elements must also be the identity; but then $h_3 = (h_1 h_2)^{-1}$, so $\langle h_1, h_2, h_3 \rangle = \langle h_1, h_2 \rangle$ is generated by two involutions, hence dihedral, and this is not compatible with the last three elements generating $A_6$. Then a reduction is possible even for the case $k=7$. 
Assume now that we are facing a $2$-reduction; if $k>5$ the element is of the form $$\pmb{g}:=(x,x,g_1,\ldots,g_{k-5},h_1, h_2, h_3)$$ In this case the product $g_1 \cdots g_{k-5} h_1 h_2 h_3$ is the identity; since the $h_i$ are generators, there exists an element $\gamma$ in $\langle g_1,\ldots,g_{k-5},h_1, h_2, h_3 \rangle$ such that $x$ and $\gamma g_1 \gamma^{-1}$ form a $1$-reduction. By Lemma \ref{lemma:friedproductone}, there exists an element $Q \in B_r$ such that $$Q(\pmb{g}):=(x,x,\gamma \cdot g_1 \cdot \gamma^{-1},\ldots,\gamma \cdot g_{k-5} \cdot \gamma^{-1},\gamma \cdot h_1\cdot \gamma^{-1}, \gamma \cdot h_2 \cdot \gamma^{-1}, \gamma \cdot h_3 \cdot \gamma^{-1})$$ The last three elements are still generators, and the pair $(x,\gamma \cdot g_1 \cdot \gamma^{-1})$ shows that a $1$-reduction always exists. To conclude the proof: we have just shown that the two elements $\pmb{g}$ and $\pmb{g}'$ admit $1$-reductions to $\bar{\pmb{g}}$ and $\bar{\pmb{g}}'$. By Proposition \ref{prop:liftingandreduction}, $\bar{\pmb{g}}$ and $\bar{\pmb{g}}'$ belong to the same component of $\H(A_6,{C_{2 \times 2}}^{k-1})^{abs}$ and then, by the induction hypothesis, they are equivalent under the Hurwitz action. Proposition \ref{prop:liftinghurwitz} ensures that this action can be lifted to obtain an equivalence also between $\pmb{g}$ and $\pmb{g}'$. \end{proof} \section{Curve of genus zero, case of five points} \label{App:MonGenZero} This section aims to classify the elements of $Ni([G],{C_{2 \times 2}}^5)$ with $G$ a subgroup of $A_6$. We will use the notation $\a{xy--}$ to emphasize that we are focusing on a specific part of the permutation while letting the other part vary. For example, $\a{1---}$ denotes all the elements with $1$ in the first place. Recall that the external action of an element $s$ of $S_n$ is denoted by $\phi_s$. There is a natural notion of lexicographic order on $S_n$ that we will use in the calculations. 
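Before entering the case analysis of this section, the key reduction of Proposition \ref{Prop:FixedPointReduction} can also be confirmed by brute force. The following sketch (Python with sympy, whereas the paper's Codes are in MAGMA) checks all pairs in ${C_{2 \times 2}}$ with the same fixed points.

```python
from itertools import combinations
from sympy.combinatorics.named_groups import AlternatingGroup

A6 = AlternatingGroup(6)
# The 45 elements of C_{2x2}: the involutions of A_6, all of type (2,2).
C22 = [p for p in A6.elements if p.order() == 2]
assert len(C22) == 45

fixed = lambda p: frozenset(range(6)) - frozenset(p.support())

# Proposition (FixedPointReduction): if g1 and g2 share their fixed points,
# then g1*g2 is again in C_{2x2} or is the identity.
for g1, g2 in combinations(C22, 2):
    if fixed(g1) == fixed(g2):
        p = g1 * g2
        assert p.is_Identity or (p.order() == 2 and len(p.support()) == 4)
```

Indeed, two distinct elements of ${C_{2 \times 2}}$ with the same fixed points lie in a Klein four-group on the four moved points, so their product is the third involution of that group.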
The following lemma ensures that one can always restrict to working with ordered elements. \begin{lemma} \label{Lemma:ordering} Up to the Hurwitz action, every element of $Ni([G],{C_{2 \times 2}}^k)$ is equivalent to an ordered one. \end{lemma} \begin{proof} Let $\pmb{g}:=\{g_1, \ldots, g_k\}$ be in $Ni([G],{C_{2 \times 2}}^k)$; if it is not ordered, then there is an index $i$ with $g_i > g_{i+1}$. Then one can use the Hurwitz action to obtain a new element in which the pair $g_{i+1}, g_{i+1}^{-1} g_i g_{i+1}$ appears. This procedure can be repeated until an ordered element is reached. The process must end because each such move strictly decreases the tuple in the lexicographic order, and the number of elements in $Ni([G],{C_{2 \times 2}}^k)$ is finite. \end{proof} Notice that this procedure does not necessarily provide a minimal element of $Ni([G],{C_{2 \times 2}}^k)$. The following element $X$ is ordered, but the Hurwitz action on the first two elements produces $Y$, which is still ordered, with $Y < X$. $$X:= \a{1235}, \a{1245}, \a{1326}, \a{1345}, \a{2635}\text{,}$$ $$Y:= \a{1234}, \a{1235}, \a{1326}, \a{1345}, \a{2635}\text{.}$$ The action of the external automorphisms can also change the ordering. The following element $X$ is ordered but, by applying $\phi_{\b{56}}$, one finds an element $Y$ that is still ordered with $Y<X$. $$X:= (\a{1234} , \a{1236} , \a{1345} , \a{1435}, \a{1546})\text{,}$$ $$Y:= (\a{1234} , \a{1235} , \a{1346} , \a{1436}, \a{1645})\text{.}$$ Up to the action of an external automorphism, one can assume the first element to be $\a{1234}$. This assumption reduces the external automorphisms that one can use further to $\phi_{\b{12}}$, $\phi_{\b{34}}$, $\phi_{\b{56}}$ and $\phi_{\a{1324}}$. Let us now choose an element $\pmb{g}$ in $Ni([G],{C_{2 \times 2}}^5)$. The following lemmas apply the Hurwitz action and the external automorphisms in order to find the different classes of these equivalence relations in $Ni([G],{C_{2 \times 2}}^5)$, and hence the different connected components of $\H([G],{C_{2 \times 2}}^5)^{abs}$. 
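The ordering procedure of Lemma \ref{Lemma:ordering} can be sketched as follows (Python with sympy; the scrambled tuple is an illustrative choice, and the lexicographic order is taken on the array forms of the permutations).

```python
from sympy.combinatorics import Permutation

def a(x, y, w, z):
    """(x,y)(w,z) in S_6, 1-indexed as in the paper's notation."""
    return Permutation([[x - 1, y - 1], [w - 1, z - 1]], size=6)

key = lambda p: tuple(p.array_form)   # lexicographic order on S_6

def order_tuple(g):
    """Sort a tuple using only Hurwitz moves: while g_i > g_{i+1}, replace
    (g_i, g_{i+1}) by (g_{i+1}, g_{i+1}^-1 g_i g_{i+1}).  Each move strictly
    decreases the tuple lexicographically, so the loop terminates."""
    g, changed = list(g), True
    while changed:
        changed = False
        for i in range(len(g) - 1):
            if key(g[i]) > key(g[i + 1]):
                g[i], g[i + 1] = g[i + 1], g[i + 1]**-1 * g[i] * g[i + 1]
                changed = True
    return g

def product(g):
    p = g[0]
    for h in g[1:]:
        p = p * h
    return p

g = [a(1, 6, 3, 5), a(1, 2, 3, 4), a(1, 4, 2, 5), a(1, 6, 2, 3), a(1, 3, 2, 4)]
h = order_tuple(g)
assert all(key(h[i]) <= key(h[i + 1]) for i in range(len(h) - 1))
assert product(h) == product(g)       # Hurwitz moves preserve the product
assert all(p.order() == 2 and len(p.support()) == 4 for p in h)
```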
Without writing it explicitly, every assumption will be made up to the Hurwitz action and external automorphisms. \begin{lemma} \label{Lemma:SpecDependingFive} The element $\pmb{g}=(g_1, \ldots, g_5)$ falls into one of the following cases \begin{enumerate} \item[$\operatorname{[1]}$] $(\a{1234}, \a{1234}, \a{1325}, \a{1346}, \a{2546})$, \item[$\operatorname{[2]}$] $(\a{1234}, \a{1324}, \a{1423}, \a{1526}, \a{1526})$, \item[$\operatorname{[3]}$] $(\a{1234}, \a{1324}, \a{1425}, \ldots)$, \item[$\operatorname{[4]}$] $(\a{1234}, \a{1324}, \a{1456}, \ldots)$, \item[$\operatorname{[5]}$] $(\a{1234}, \a{1324}, \a{15--}, \ldots)$. \end{enumerate} \end{lemma} \begin{proof} Assume $g_1$ to be $\a{1234}$. By Proposition \ref{Prop:reductionfive}, the second element can be either $\a{1234}$, treated in Part $1$ of the proof, or $\a{1324}$, treated in Part $2$. \textbf{Part 1.} The product of the last three components of $\pmb{g}$ must be the identity. By transitivity, there must be at least another $1$ or $2$; up to $\phi_{\b{12}}$, the third element has the form $\a{1abc}$. This gives rise to only three possibilities for the fourth element. If $g_4$ is $\a{1bac}$ or $\a{1cab}$, we can reduce to Part $2$ of the proof. If $g_4$ is $\a{1axy}$ with $\{x,y\} \neq \{b,c\}$, the element is of the form $$\pmb{g} = (\a{1234}, \a{1234}, \a{1abc}, \a{1axy}, \a{bcxy})\text{.}$$ The case $a=2$ cannot occur: $\pmb{g}$ has to be transitive, so again one needs $1$ or $2$ in the last element, but this gives rise to a contradiction. If $a=4$ the external automorphism $\phi_{\b{34}}$ allows one to reduce to $a=3$, and similarly $a=6$ reduces to $a=5$. It remains to prove that the case $a=5$ reduces to $a=3$. So let us now consider $$\pmb{g} = (\a{1234}, \a{1234}, \a{15bc}, \a{15xy}, \a{bcxy})\text{.}$$ Since it has to be transitive, $3$ or $4$ must appear more than once; up to $\phi_{\b{34}}$, assume $b=3$. Transitivity also shows that $c$ must be different from $4$. 
The only two possibilities for $c$ are $2$ and $6$. If $c=6$ then $\{x,y\}$ must be equal to $\{2,4\}$, and by the Hurwitz action and $\phi_{\b{34}}$ one can reduce to the case $c=2$. This gives the final form $$\pmb{g} = (\a{1234}, \a{1234}, \a{1532}, \a{1546}, \a{3246})\text{.}$$ But then the automorphism $\phi_{\a{1324}}$ gives an equivalence with an element in the case $a=3$. It remains to treat the case $a=3$, that is, $$\pmb{g} = (\a{1234}, \a{1234}, \a{13bc}, \a{13xy}, \a{bcxy})\text{.}$$ Since $\{b,c,x,y\}$ is $\{2,4,5,6\}$, one can assume $b=2$. By transitivity it follows that $c$ is either $5$ or $6$, and then up to $\phi_{\b{56}}$ one gets the final claim for Case (1): $$\pmb{g} = (\a{1234}, \a{1234}, \a{1325}, \a{1346}, \a{2546})\text{.}$$ \textbf{Part 2.} As before, there must be at least one more $1$. Let us first prove that $g_3$ is not of the form $\a{13--}$. In such a case, there would have to be at least one more $1$. So, if there are exactly four $1$'s, $\pmb{g}$ would be $$\pmb{g} = (\a{1234}, \a{1324}, \a{13--}, \a{1---}, \a{----})\text{,}$$ but the product being the identity easily gives a contradiction. It follows that there must be five $1$'s in total: $$\pmb{g} = (\a{1234}, \a{1324}, \a{13--}, \a{1---}, \a{1---})\text{.}$$ Observing that the composition of the last three elements has to be $\a{1423}$, a short computation shows that this case can not occur either. By the external automorphism $\phi_{\b{23}}$, the case $g_3=\a{12--}$ is not possible either. Using external automorphisms, it is easy to show that one can always reduce to $g_3$ being equal to $\a{1423}$, $\a{1425}$, $\a{1456}$ or $\a{15--}$; this gives rise to Cases $[2]$, $[3]$, $[4]$ and $[5]$. \textbf{Final form of Case 2.} Let us specify the form of an element in case $[2]$. Such an element must be of the form $$\left(\a{1234}, \a{1324}, \a{1423}, \a{abcd}, \a{abcd}\right)$$ and the external automorphisms that can act on it are all the permutations of $\{1,2,3,4\}$ and $\phi_{\b{56}}$. 
We can then assume $a=1$ and $b=5$ in order to keep the element transitive; it follows that $d$ must be equal to $6$ and, up to external automorphisms, one can choose $c$ to be $2$. \end{proof} \begin{lemma} \label{Lemma:List} After the simplifications of the previous lemmas, the only possible elements that generate the monodromy group are \begin{enumerate} \item[$\operatorname{[1]}$] $(\a{1234},\a{1234},\a{1325},\a{1346},\a{2546})$, \item[$\operatorname{[2]}$] $(\a{1234},\a{1324},\a{1423},\a{1526},\a{1526})$, \item[$\operatorname{[3.1]}$] $(\a{1234},\a{1324},\a{1425},\a{1623},\a{1635})$, \item[$\operatorname{[3.2]}$] $(\a{1234},\a{1324},\a{1425},\a{2346},\a{3546})$, \item[$\operatorname{[4]}$] $(\a{1234},\a{1324},\a{1456},\a{2536},\a{2635})$, \item[$\operatorname{[5]}$] $(\a{1234},\a{1324},\a{1546},\a{1645},\a{2356})$. \end{enumerate} \end{lemma} \begin{proof} Code \ref{prog:list5points} lists all the possible ordered elements belonging to cases $[3]$, $[4]$ and $[5]$ of Lemma \ref{Lemma:SpecDependingFive}. By the Hurwitz action on $g_4$ and $g_5$, the list shrinks to the one presented in the lemma. \end{proof} \begin{prop} \label{prop:finalformelements5} Up to conjugation and external automorphisms, every element of $Ni([G],{C_{2 \times 2}}^5)$ with $G$ a subgroup of $A_6$ can be reduced to one of the following cases \begin{enumerate} \item[$\operatorname{[1]}$] $(\a{1234},\a{1234},\a{1325},\a{1346},\a{2546})$, \item[$\operatorname{[2]}$] $(\a{1234},\a{1324},\a{1423},\a{1526},\a{1526})$, \item[$\operatorname{[3.1]}$] $(\a{1234},\a{1324},\a{1425},\a{1623},\a{1635})$, \end{enumerate} and these objects are not connected by the action of the braid group because they generate groups of orders $60$, $24$ and $360$, respectively. \end{prop} \begin{proof} By Lemma \ref{Lemma:List}, it remains to prove that $[3.2]$, $[4]$ and $[5]$ can be reduced to one of the three cases above. 
Cases $[4]$ and $[5]$ are equivalent to Case $[2]$; this can be achieved by conjugating $\a{1456},\a{2536}$ in $[4]$ and $\a{1645},\a{2356}$ in $[5]$. Finally, $[3.2]$ is equivalent to $[3.1]$ by using $\phi_{\b{14}}$. \end{proof} As an immediate consequence of this proposition, one obtains the following \begin{proof}[Proof of Proposition \ref{Prop:FivePoints}] Each space $\H(A_6,{C_{2 \times 2}}^5)^{abs}$, $\H([G_{24}],{C_{2 \times 2}}^5)^{abs}$ and $\H([G_{60}],{C_{2 \times 2}}^5)^{abs}$ is not empty, thanks to the elements of Cases $[3.1]$, $[2]$ and $[1]$, respectively. The constructions in the lemmas explicitly describe an equivalence between two elements of the same space, giving the connectedness. \end{proof} \begin{oss} In the classification carried out in this section we used the external action of the whole group $S_6$, and so we considered only the conjugacy classes of the monodromy groups $[G_{60}]$ and $[G_{24}]$. This is sufficient for the aim of proving Theorems \ref{teor:finalresult} and \ref{teor:finalresultinner}. However, the spaces $\H(G_{60},{C_{2 \times 2}}^5)^{abs}$ and $\H(G_{24},{C_{2 \times 2}}^5)^{abs}$ are also connected for every choice of $G_{60}$ and $G_{24}$ in their conjugacy classes. The strategy consists in proving the connectedness for a particular choice of $G_{24}$ or $G_{60}$, and then using Proposition \ref{prop:InnerAbsolute} to extend the result to all the other cases by conjugation. \end{oss} \section{Curve of genus one, case of six points} \label{App:MonGenOne} This appendix is devoted to proving Proposition \ref{Prop:FivePoints}. The idea is to exploit Proposition \ref{Prop:reductionfive}, taking into account the issues that make this case different from the induction step. \begin{ese} The space $\H(A_6,{C_{2 \times 2}}^6)^{abs}$ has at least two connected components. 
To see this, it is sufficient to compute the lifting invariant of the following two elements: $$\a{1234}, \a{1234}, \a{1236}, \a{1236}, \a{1325}, \a{1325}$$ $$\a{1234}, \a{1234}, \a{1236}, \a{1256}, \a{1435}, \a{1456}$$ \end{ese} One can try to reduce the problem to five points, but the situation here is not as easy as for $8$ or more points, since there are not enough elements to use both Propositions \ref{Prop:reductionfive} and \ref{Prop:threeGeneartors}; this is the reason why one needs to treat this case separately, and not as part of the induction step of Theorem \ref{teor:finalresult}. The strategy is still to use Proposition \ref{Prop:reductionfive} in order to reduce the number of points from $6$ to $5$ and then deal with the issues that arise. \begin{oss} \label{rmk:situations} Assume that Proposition \ref{Prop:reductionfive} is used on the first five elements of $\pmb{g}$ in $Ni(A_6,{C_{2 \times 2}}^6)$. The following situations can arise: \begin{enumerate} \item A $1$-reduction is possible, and the resulting element is a valid monodromy on $5$ points. \item A $1$-reduction is possible, but the transitivity is lost after the reduction. \item A $2$-reduction is possible, but in this case the resulting element would certainly not be transitive, because of the Riemann--Hurwitz theorem. \end{enumerate} \end{oss} \begin{lemma} If $\pmb{g}$ belongs to case $(3)$ of Remark \ref{rmk:situations}, then it is always possible to reduce to case $(1)$ or $(2)$ by using Hurwitz actions and Proposition \ref{Prop:reductionfour}. \end{lemma} \begin{proof} Since $\pmb{g}$ belongs to case $(3)$, we can assume it to be $$\{g_1, g_2, g_3, g_4, g_5, g_5\}\text{.}$$ One can apply Proposition \ref{Prop:reductionfour} to the ordered set $\{g_1, g_2, g_3, g_4\}$. If a $1$-reduction is obtained, the proof is concluded. 
If not, $\pmb{g}$ has one of these forms $$\{h_1, h_1, h_2, h_2, g_5, g_5\}$$ $$\{\a{1234}, \a{1235}, \a{1634}, \a{1645}, g_5, g_5 \}$$ The first form consists of three pairs of elements $h_1, h_2, g_5$ such that $\langle h_1, h_2, g_5 \rangle$ is the whole $A_6$. The second form is simply the third possible outcome of Proposition \ref{Prop:reductionfour}. Code \ref{prog:solve6pointscase2} shows that every case can be reduced to $(1)$ or $(2)$. The strategy consists in listing all the possibilities for such forms, and then using the Hurwitz action until a $1$-reduction is found. \end{proof} The following lemma shows that it is always possible to obtain case $(1)$ of Remark \ref{rmk:situations}. \begin{lemma} If $\pmb{g}$ belongs to case $(2)$ of Remark \ref{rmk:situations}, then it is always possible to reduce to case $(1)$ by using Hurwitz actions and Proposition \ref{Prop:reductionfour}. \end{lemma} \begin{proof} This is done with a case-by-case analysis carried out with Code \ref{prog:list6pointscase1}. \end{proof} Then one has only to deal with case $(1)$ of Remark \ref{rmk:situations}. Proposition \ref{Prop:FivePoints} shows that three cases can arise, with $(g_1 g_2, g_3, \ldots, g_6)$ belonging to $\H(A_6,{C_{2 \times 2}}^5)^{abs}$, $\H([G_{60}],{C_{2 \times 2}}^5)^{abs}$ or $\H([G_{24}],{C_{2 \times 2}}^5)^{abs}$, respectively. Let us define $\H_-(A_6,{C_{2 \times 2}}^6)^{abs}$ as the space of elements $(g_1 g_2, g_3, \ldots, g_6)$ that can be reduced to an element in $\H(A_6,{C_{2 \times 2}}^5)^{abs}$ and $\H_+(A_6,{C_{2 \times 2}}^6)^{abs}$ as the space of elements $(g_1 g_2, g_3, \ldots, g_6)$ that can be reduced to an element in $\H([G_{60}],{C_{2 \times 2}}^5)^{abs}$. It remains to study what happens if the element reduces to $\H([G_{24}],{C_{2 \times 2}}^5)^{abs}$. 
\begin{lemma} Let $\pmb{g}$ be an element in $Ni(A_6,{C_{2 \times 2}}^6)$ of the form $$(\a{1234}, \a{1324}, g_3, g_4, g_5, g_6)$$ and assume that the reduction $$\bar{\pmb{g}} := (\a{1423}, g_3, g_4, g_5, g_6)$$ is a valid monodromy. Then $\bar{\pmb{g}}$ can not generate a group of order $24$. \end{lemma} \begin{proof} By Proposition \ref{prop:finalformelements5}, the element $\bar{\pmb{g}}$ is equivalent to $$(\a{1234}, \a{1324}, \a{1425}, \a{1623}, \a{1635})$$ and then the element $\pmb{g}$ is equivalent to one of the form $$(\a{1423}, \a{1324}, \a{1324}, \a{1425}, \a{1623}, \a{1635})$$ with the $1$-reduction taking place in the first element. But the group generated by this element does not have order $360$, and this gives a contradiction. \end{proof} Now it is possible to conclude that $\H(A_6,{C_{2 \times 2}}^6)^{abs}$ has exactly two connected components, proving Proposition \ref{Prop:FivePoints}. \begin{proof}[Proof of Proposition \ref{Prop:FivePoints}] From the previous lemmas and propositions, one knows that every element of $\H(A_6,{C_{2 \times 2}}^6)^{abs}$ falls either in $\H_+(A_6,{C_{2 \times 2}}^6)^{abs}$, the space of the elements that admit a reduction to an element of $\H([G_{60}],{C_{2 \times 2}}^5)^{abs}$, or in $\H_-(A_6,{C_{2 \times 2}}^6)^{abs}$, the space of the elements that admit a reduction to an element of $\H(A_6,{C_{2 \times 2}}^5)^{abs}$. These two spaces are well defined because, as proved in Proposition \ref{prop:liftingandreduction}, the lifting invariant does not change under this kind of reduction, and the lifting invariants of $\H(A_6,{C_{2 \times 2}}^5)^{abs}$ and $\H([G_{60}],{C_{2 \times 2}}^5)^{abs}$ are different. Finally, the connectedness of $\H(A_6,{C_{2 \times 2}}^5)^{abs}$ and $\H([G_{60}],{C_{2 \times 2}}^5)^{abs}$, together with Proposition \ref{prop:liftinghurwitz}, ensures that $\H_+(A_6,{C_{2 \times 2}}^6)^{abs}$ and $\H_-(A_6,{C_{2 \times 2}}^6)^{abs}$ are also connected. 
\end{proof} \section{The space $\H(A_6,{C_{2 \times 2}}^r)^{in}$} \label{sec_inner} If one considers two elements in $\H(A_6,{C_{2 \times 2}}^r)^{in}$, the right action of an external automorphism can be performed only with elements of $A_6$. Proposition \ref{prop:InnerAbsolute} shows that in this setting, two elements with two different lifting invariants of order three are no longer equivalent. The number of different connected components should then increase. Fix a lifting $\sigma$ of the identity of order three. Notice that the expected result is different from the result of \cite{FriedAlternating}, because, in that case, the lifting invariant is an order-two lifting of the identity. The following is the proof of the final theorem in the case of the inner moduli space. \begin{proof}[Proof of Theorem \ref{teor:finalresultinner}] Proposition \ref{prop:InnerAbsolute} shows that there are at least three connected components, which will be denoted by $\H_{\conn{0}}(A_6,{C_{2 \times 2}}^k)^{in}$, $\H_{\conn{1}}(A_6,{C_{2 \times 2}}^k)^{in}$ and $\H_{\conn{2}}(A_6,{C_{2 \times 2}}^k)^{in}$, depending on the lifting invariant being $1_V$, $\sigma$ and $\sigma^2$, respectively. In order to exploit Theorem \ref{teor:finalresult}, notice that an element in $\H_{\conn{1}}(A_6,{C_{2 \times 2}}^k)^{in}$ becomes an element in $\H_{\conn{2}}(A_6,{C_{2 \times 2}}^k)^{in}$ after the right action of a single $2$-cycle. It remains to prove that there are no more components. That is straightforward for $\H_{\conn{1}}(A_6,{C_{2 \times 2}}^k)^{in}$ and $\H_{\conn{2}}(A_6,{C_{2 \times 2}}^k)^{in}$, so let us check only the case $\H_{\conn{0}}(A_6,{C_{2 \times 2}}^6)^{in}$. 
Consider first the following element in $\H_{\conn{0}}(A_6,{C_{2 \times 2}}^6)^{in}$: $$\pmb{g} := \a{1234},\a{1324}, \a{1236},\a{1245},\a{1526},\a{4536}.$$ Applying the odd external automorphism $\phi_{\a{1234}\b{56}}$, we obtain the following element $$\pmb{g}' := \a{1234},\a{1324}, \a{1245},\a{1236},\a{1526},\a{4536}\text{.}$$ Then, by the Hurwitz action, we can exchange the third and the fourth elements, going back again to the first element. Hence these two elements in $\H_{\conn{0}}(A_6,{C_{2 \times 2}}^6)^{abs}$ differ by an external automorphism of odd parity and are still related by the Hurwitz action. Now let $s$ be another element of $S_6$, and consider $\pmb{h} := s^{-1} \pmb{g} s$. If $s$ has even parity, Proposition \ref{prop:InnerAbsolute} shows that $\pmb{h}$ still belongs to $\H_{\conn{0}}(A_6,{C_{2 \times 2}}^6)^{in}$; if $s$ has odd parity, then $\pmb{h}$ will differ from $\pmb{g}'$ by the action of $s \circ \phi_{\a{1234}\b{56}}$, which has even parity. In this case too, then, $\pmb{h}$ still belongs to $\H_{\conn{0}}(A_6,{C_{2 \times 2}}^6)^{in}$. The same reasoning can be applied to the cases of $\H_{\conn{1}}(A_6,{C_{2 \times 2}}^6)^{in}$ and $\H_{\conn{2}}(A_6,{C_{2 \times 2}}^6)^{in}$, and also in the case of $5$ points. Consider for example the following element of $\H_{\conn{1}}(A_6,{C_{2 \times 2}}^6)^{in}$ $$(\a{1234},\a{1234}, \a{1236},\a{1256},\a{1435},\a{1456})\text{.}$$ Applying the external automorphism $\phi_{\a{1234}\b{56}}$, which is of odd parity, one obtains the following element $$(\a{1234},\a{1234}, \a{1245},\a{1256},\a{2346},\a{2356})\text{,}$$ which belongs to $\H_{\conn{2}}(A_6,{C_{2 \times 2}}^6)^{in}$. The final part follows exactly as in the case of $\H_{\conn{0}}(A_6,{C_{2 \times 2}}^6)^{in}$. Finally, for the induction part to be true, it is sufficient to show that the computations of Section \ref{sec_absolute} can be performed with external automorphisms of even parity. 
These automorphisms are used in Proposition \ref{Prop:FixedPointReduction}, in which an even automorphism is used, and in Proposition \ref{Prop:reductionfour}, which is used only to show that a particular case does not arise for $k=7$ in the proof of Theorem \ref{teor:finalresult}; the result is still valid even if one composes with a cycle. Then the proof follows. \end{proof} \section{Open problems} \label{sec:open} The problem of studying the Hurwitz spaces is still widely open. The same technique used in this paper can in principle be used to study all the other cases described asymptotically in \cite{BK}. \begin{que} Complete the study of the lower genus cases described in \cite{BK}, Theorem 4.14, Theorem 4.15 and Proposition 4.16. Find a generalization of the lifting invariant suitable for all the possible conjugacy classes of $A_n$, and use it to classify spaces of mixed monodromy type. \end{que} A possible strategy to attack the case of mixed monodromy type is deforming the base by making different branch points collide. That could in principle allow us to start from a space with homogeneous monodromy type and then deform it to one with mixed monodromy type. The difficult part would be describing the degenerate situation, in which the monodromy group is no longer transitive. \begin{que} Study from the algebraic point of view what happens if the transitivity hypothesis is dropped in the definition of the Nielsen classes. \end{que} Finally, even knowing the connected components of the Hurwitz spaces, it is very difficult to explicitly exhibit functions with an assigned type of monodromy. An intriguing question is then the following \begin{que} For each component provided by Theorem \ref{teor:finalresult} and Theorem \ref{teor:finalresultinner}, find an explicit example of a rational function defined over $\P^1$ that gives rise to a monodromy belonging to such a component. 
\end{que} \medskip {\small\noindent{\bf Acknowledgements.} The first named author was supported by the Department of Mathematics and Natural Sciences of University of Stavanger in the framework of the grant 230986 of the Research Council of Norway. The second named author is partially supported by INdAM (GNSAGA); PRIN 2012 \emph{``Moduli, strutture geometriche e loro applicazioni''} and FAR 2014 (PV) \emph{``Variet\`a algebriche, calcolo algebrico, grafi orientati e topologici''.} The authors are especially grateful to Professor Michael Fried and Alice Cuzzucoli for their valuable comments on a preliminary version of this paper and to Fedor A. Bogomolov for drawing the work \cite{BK} to our attention. }
\section{Introduction} Deep neural networks are demonstrating a large impact on Natural Language Processing. Neural machine translation (NMT) \cite{bahdanau2014neural,luong2015effective,wu2016google,ashish2017google} has especially gained increasing popularity, as it can leverage neural networks to directly perform translations with a simple end-to-end architecture. NMT has shown remarkable results in several shared tasks \cite{denkowski2017stronger,nakazawa2017overview}, and its effective approach has had a strong influence on other related NLP tasks such as dialog generation \cite{vinyals20152} and automatic summarization \cite{rush2015neural}. Although NMT can potentially perform end-to-end translation, many NMT systems are still relying on language-dependent pre- and postprocessors, which have been used in traditional statistical machine translation (SMT) systems. Moses\footnote{\url{http://www.statmt.org/moses/}}, a de-facto standard toolkit for SMT, implements a reasonably useful pre- and postprocessor. However, it is built upon hand-crafted and language dependent rules whose effectiveness for NMT has not been proven. In addition, these tools are mainly designed for European languages where words are segmented with whitespaces. To train NMT systems for non-segmented languages such as Chinese, Korean and Japanese, we need to run word segmenters independently. Such language-dependent processing also makes it hard to train multilingual NMT models \cite{johnson2016google}, as we have to carefully manage the configurations of pre- and postprocessors per language, while the internal deep neural architectures are language-independent. As NMT approaches are standardized and moving forward to more language-agnostic architectures, it is becoming more important for the NLP community to develop a simple, efficient, reproducible and language independent pre- and postprocessor that can easily be integrated into Neural Network-based NLP systems, including NMT. 
In this demo paper, we describe SentencePiece, a simple and language independent text tokenizer and detokenizer mainly for Neural Network-based text generation systems where the size of vocabulary is predetermined prior to the Neural model training. SentencePiece implements two subword segmentation algorithms, byte-pair-encoding (BPE) \cite{sennrichneural} and unigram language model \cite{kudo2018}, with the extension of direct training from raw sentences. SentencePiece enables building a purely end-to-end system that does not depend on any language-specific processing. \section{System Overview} SentencePiece comprises four main components: {\bf Normalizer}, {\bf Trainer}, {\bf Encoder}, and {\bf Decoder}. Normalizer is a module to normalize semantically-equivalent Unicode characters into canonical forms. Trainer trains the subword segmentation model from the normalized corpus. We specify a type of subword model as the parameter of Trainer. Encoder internally executes Normalizer to normalize the input text and tokenizes it into a subword sequence with the subword model trained by Trainer. Decoder converts the subword sequence into the normalized text. The roles of Encoder and Decoder correspond to preprocessing (tokenization) and postprocessing (detokenization) respectively. However, we call them encoding and decoding as SentencePiece manages the vocabulary-to-id mapping and can directly convert the text into an id sequence and vice versa. Direct encoding and decoding to/from id sequences are useful for most NMT systems as their input and output are id sequences. Figure \ref{fig1:command} presents an end-to-end example of SentencePiece training (\verb+spm_train+), encoding (\verb+spm_encode+), and decoding (\verb+spm_decode+). We can see that the input text is reversibly converted through \verb+spm_encode+ and \verb+spm_decode+. 
\begin{figure}[t] \begin{lstlisting}[language=sh, caption=Commandline usage of SentencePiece, label=fig1:command, basicstyle={\normalsize\ttfamily}]
% spm_train --input=input.txt --model_prefix=spm --vocab_size=1000
% echo "Hello world." | spm_encode --model=spm.model
_He ll o _world .
% echo "Hello world." | spm_encode --model=spm.model --output_format=id
151 88 21 887 6
% echo "151 88 21 887 6" | spm_decode --model=spm.model --input_format=id
Hello world.
\end{lstlisting} \vspace*{-4mm} \end{figure} \section{Library Design} This section describes the design and implementation details of SentencePiece with command line and code snippets. \subsection{Lossless Tokenization} The following raw and tokenized sentences are an example of language-dependent preprocessing. \begin{itemize} \item {\bf Raw text:}\,\, Hello world. \item {\bf Tokenized:} [Hello] [world] [.] \end{itemize} One observation is that the raw text and tokenized sequence are not reversibly convertible. The information that no space exists between ``world'' and ``.'' is not kept in the tokenized sequence. Detokenization, a process to restore the original raw input from the tokenized sequence, has to be language-dependent due to these irreversible operations. For example, while the detokenizer usually puts whitespaces between the primitive tokens in most European languages, no spaces are required in Japanese and Chinese. \begin{itemize} \setlength{\leftskip}{-1em} \item {\bf Raw text:} \begin{CJK}{UTF8}{ipxm}[こんにちは世界。]\end{CJK} ({\it Hello world.}) \item {\bf Tokenized:} \begin{CJK}{UTF8}{ipxm}[こんにちは] [世界] [。]\end{CJK} \end{itemize} Such language specific processing has usually been implemented in manually crafted rules, which are expensive to write and maintain. SentencePiece implements the Decoder as an inverse operation of Encoder, i.e., \begin{eqnarray*} \Decode(\Encode(\Normalize(text))) = \\ \Normalize(text). 
\end{eqnarray*} We call this design {\bf lossless tokenization}, in which all the information to reproduce the normalized text is preserved in the encoder's output. The basic idea of lossless tokenization is to treat the input text just as a sequence of Unicode characters. Even whitespace is handled as a normal symbol. For the sake of clarity, SentencePiece first escapes the whitespace with a meta symbol \textunderscore\,\,(U+2581), and tokenizes the input into an arbitrary subword sequence, for example: \begin{itemize} \item {\bf Raw text:}\,\,\,\,Hello\textunderscore world. \item {\bf Tokenized:} [Hello] [\textunderscore wor] [ld] [.] \end{itemize} As the whitespace is preserved in the tokenized text, we can detokenize the tokens without any ambiguities with the following Python code. {\small \begin{Verbatim} detok = ''.join(tokens).replace('_', ' ') \end{Verbatim} } It should be noted that subword-nmt\footnote{\url{https://github.com/rsennrich/subword-nmt}} adopts a different representation for subword units. It focuses on how the word is segmented into subwords and uses \verb+@@+ as an intra-word boundary marker. \begin{itemize} \item {\bf Tokenized:} [Hello] [wor] [\verb+@@+ld] [\verb+@@+.] \end{itemize} This representation can not always perform lossless tokenization, as an ambiguity remains in the treatment of whitespaces. More specifically, it is not possible to encode consecutive whitespaces with this representation. \subsection{Efficient subword training and segmentation} Existing subword segmentation tools train subword models from pre-tokenized sentences. Such pre-tokenization was introduced for an efficient subword training \cite{sennrichneural}. However, we can not always assume that pre-tokenization is available, especially for non-segmented languages. In addition, pre-tokenization makes it difficult to perform lossless tokenization. 
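As a toy round trip illustrating the lossless tokenization scheme above (a hypothetical sketch, not the SentencePiece implementation — the cut positions stand in for whatever segmentation a trained subword model would produce):

```python
META = '\u2581'  # the whitespace meta symbol U+2581

def encode(text, cuts):
    # Escape whitespace with the meta symbol, then split at the given
    # cut positions.  Any choice of cuts is losslessly decodable.
    escaped = text.replace(' ', META)
    bounds = [0] + sorted(cuts) + [len(escaped)]
    return [escaped[a:b] for a, b in zip(bounds, bounds[1:])]

def decode(pieces):
    # Inverse of encode: concatenate and restore the whitespace.
    return ''.join(pieces).replace(META, ' ')
```

Consecutive whitespaces also survive the round trip, e.g., `decode(encode('a  b', [1]))` gives back `'a  b'`, which the `@@` marker representation cannot guarantee.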
SentencePiece employs several speed-up techniques for both training and segmentation to perform lossless tokenization on a large amount of raw data. For example, given an input sentence (or word) of length $N$, BPE segmentation requires $O(N^2)$ computational cost when we naively scan the pair of symbols in every iteration. SentencePiece adopts an $O(N\log(N))$ algorithm in which the merged symbols are managed by a binary heap (priority queue). In addition, the training and segmentation complexities of unigram language models are linear in the size of input data. \subsection{Vocabulary id management} SentencePiece manages the vocabulary-to-id mapping to directly convert the input text into an id sequence and vice versa. The size of vocabulary is specified with the \verb+--vocab_size=<size>+ flag of \verb+spm_train+. While subword-nmt specifies the number of merge operations, SentencePiece specifies the final size of vocabulary, as the number of merge operations is a BPE-specific parameter and is not applicable to other segmentation algorithms, e.g., unigram language model \cite{kudo2018}. SentencePiece reserves vocabulary ids for special meta symbols, e.g., unknown symbol (\verb+<unk>+), BOS (\verb+<s>+), EOS (\verb+</s>+) and padding (\verb+<pad>+). Their actual ids are configured with command line flags. We can also define custom meta symbols to encode contextual information as virtual tokens. Examples include the language-indicators, \verb+<2ja>+ and \verb+<2de>+, for multilingual models \cite{johnson2016google}. \subsection{Customizable character normalization} Character normalization is an important preprocessing step for handling real world text, which consists of semantically-equivalent Unicode characters. For example, Japanese fullwidth Latin characters can be normalized into ASCII Latin characters. Lowercasing is also an effective normalization, depending on the application. Character normalization has usually been implemented as hand-crafted rules. 
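Returning to the segmentation algorithm discussed at the start of this subsection: the heap-based BPE merge loop can be sketched as follows (an illustration of the idea, not SentencePiece's actual code; the merge table `merge_rank` and its contents are hypothetical). Symbols live in a doubly-linked list, candidate merges sit in a priority queue, and stale heap entries are skipped lazily.

```python
import heapq

def bpe_segment(word, merge_rank):
    # Sketch of heap-based BPE segmentation.  `merge_rank` maps a
    # symbol pair to its priority (lower rank = merged earlier).
    syms = list(word)
    nxt = list(range(1, len(syms))) + [None]   # linked list: next index
    prv = [None] + list(range(len(syms) - 1))  # linked list: prev index
    alive = [True] * len(syms)

    heap = []
    def push(i):
        # Push the pair starting at position i if it is mergeable.
        j = nxt[i]
        if j is not None and (syms[i], syms[j]) in merge_rank:
            heapq.heappush(heap, (merge_rank[(syms[i], syms[j])], i, syms[i], syms[j]))

    for i in range(len(syms)):
        push(i)

    while heap:
        rank, i, a, b = heapq.heappop(heap)
        j = nxt[i]
        # Lazy deletion: skip entries invalidated by earlier merges.
        if not alive[i] or j is None or syms[i] != a or syms[j] != b:
            continue
        syms[i] = a + b              # merge the pair in place at i
        alive[j] = False
        nxt[i] = nxt[j]
        if nxt[j] is not None:
            prv[nxt[j]] = i
        if prv[i] is not None:
            push(prv[i])             # new pair with the left neighbor
        push(i)                      # new pair with the right neighbor

    return [s for i, s in enumerate(syms) if alive[i]]
```

Each symbol is pushed and popped a bounded number of times, giving the $O(N\log N)$ behavior mentioned above instead of rescanning all pairs on every merge.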
Recently, Unicode standard Normalization Forms, e.g., NFC and NFKC, have been widely used in many NLP applications because of their better reproducibility and strong support as a Unicode standard. By default, SentencePiece normalizes the input text with the Unicode NFKC normalization. The normalization rules are specified with the \verb+--normalization_rule_name=nfkc+ flag of \verb+spm_train+. The normalization in SentencePiece is implemented with string-to-string mapping and leftmost longest matching. The normalization rules are compiled into a finite state transducer (Aho-Corasick automaton) to perform an efficient normalization\footnote{ The original NFKC normalization requires CCC (Canonical Combining Class) reordering, which is hard to model in a finite state transducer. SentencePiece does not handle the full CCC reordering and only implements a subset of NFKC normalization.}. SentencePiece supports custom normalization rules defined as a TSV file. Figure \ref{fig1:tsv} shows an example TSV file. \begin{figure}[t] \begin{lstlisting}[language=C, caption=Custom normalization rule in TSV, label=fig1:tsv] U+41 U+302 U+300 <tab> U+1EA6 U+41 U+302 U+301 <tab> U+1EA4 ... \end{lstlisting} \vspace{-6mm} \end{figure} In this example, the Unicode sequence [U+41 U+302 U+300] is converted into U+1EA6\footnote{Note that tabs are used as the delimiter for source and target sequences and spaces are used as the delimiter for individual characters.}. When there are ambiguities in the conversion, the longest rule is applied. User-defined TSV files are specified with the \verb+--normalization_rule_tsv=<file>+ flag of \verb+spm_train+. Task-specific rules can be defined by extending the default NFKC rules provided as a TSV file in the SentencePiece package. 
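The TSV format and the leftmost-longest matching semantics can be sketched as follows (a naive illustration; the actual implementation compiles the rules into an Aho-Corasick automaton, and the extra two-character rule used in the test is a hypothetical addition, not one of the default NFKC rules):

```python
def parse_rules(tsv_text):
    # Parse rules in the TSV format of the figure above: codepoints
    # are space-separated "U+xxxx" hex values, source and target are
    # tab-separated.
    rules = {}
    for line in tsv_text.strip().splitlines():
        src, tgt = line.split('\t')
        key = ''.join(chr(int(cp[2:], 16)) for cp in src.split())
        val = ''.join(chr(int(cp[2:], 16)) for cp in tgt.split())
        rules[key] = val
    return rules

def normalize(text, rules):
    # Naive leftmost-longest matching: at each position, try the
    # longest possible source sequence first.
    out, i = [], 0
    max_len = max(map(len, rules), default=0)
    while i < len(text):
        for l in range(min(max_len, len(text) - i), 0, -1):
            if text[i:i + l] in rules:
                out.append(rules[text[i:i + l]])
                i += l
                break
        else:
            out.append(text[i])  # no rule applies: copy the character
            i += 1
    return ''.join(out)
```

With the first rule of the figure, the three-character sequence [U+41 U+302 U+300] is rewritten to U+1EA6; when a shorter rule overlaps a longer one, the longer match wins, matching the ambiguity resolution described above.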
\cite{post2018} reported that subtle differences in preprocessing schemes can widely change BLEU scores. Even using the Moses toolkit, it is not guaranteed to reproduce the same settings unless the configurations of Moses (e.g., version and command line flags) are clearly specified. Strictly speaking, NFKC normalization may yield different results depending on the Unicode version. Ideally, all the rules and parameters for preprocessing must be embedded into the model file in a self-contained manner so that we can reproduce the same experimental setting as long as we are using the same model file. The SentencePiece model is designed to be purely self-contained. The model file includes not only the vocabulary and segmentation parameters, but also the pre-compiled finite state transducer for character normalization. The behavior of SentencePiece is determined only by the model file and has no external dependencies. This design guarantees perfect reproducibility as well as allowing the SentencePiece model file to be distributed as part of an NMT model. In addition, the developers of SentencePiece can refine the (default) normalization rules without having to worry about breaking existing preprocessing behaviors. The SentencePiece model is stored as a binary wire format Protocol buffer\footnote{\url{https://developers.google.com/}\\\hspace*{30mm}\url{protocol-buffers/}}, a platform-neutral and extensible mechanism for serializing structured data. Protocol buffers help to safely serialize structured data while keeping backward compatibility as well as extensibility. \subsection{Library API for on-the-fly processing} Text preprocessing is usually considered as offline processing. Prior to the main NMT training, raw input is preprocessed and converted into an id sequence with a standalone preprocessor. Such off-line preprocessing has two problems. 
First, standalone tools are not directly integrated into the user-facing NMT applications which need to preprocess user input on-the-fly. Second, off-line preprocessing makes it hard to employ sub-sentence level data augmentation and noise injection, which aim at improving the accuracy and robustness of the NMT models. There are several studies to inject noise to input sentences by randomly changing the internal representation of sentences. \cite{kudo2018} proposes a subword regularization that randomly changes the subword segmentation during NMT training. \cite{guillaume2017,mikel2017} independently proposed a denoising autoencoder in the context of sequence-to-sequence learning, where they randomly alter the word order of the input sentence and the model is trained to reconstruct the original sentence. It is hard to emulate this dynamic sampling and noise injection only with the off-line processing. \begin{figure}[t] \begin{lstlisting}[language=C++, caption=C++ API usage {\small (The same as Figure 1.)}, label=fig1:c++] #include <sentencepiece_processor.h> #include <sentencepiece_trainer.h> SentencePieceTrainer::Train( "--input=input.txt " "--model_prefix=spm " "--vocab_size=1000"); SentencePieceProcessor sp; sp.Load("spm.model"); std::vector<std::string> pieces; sp.Encode("Hello world.", &pieces); std::vector<int> ids; sp.Encode("Hello world.", &ids); std::string text; sp.Decode({151, 88, 21, 887, 6}, &text); \end{lstlisting} \vspace*{-3mm} \end{figure} \begin{figure}[t] \begin{lstlisting}[language=Python, caption=Python API usage {\small (The same as Figure 1.)}, label=fig1:python] import sentencepiece as spm params = ('--input=input.txt ' '--model_prefix=spm ' '--vocab_size=1000') spm.SentencePieceTrainer.Train(params) sp = spm.SentencePieceProcessor() sp.Load('spm.model') print(sp.EncodeAsPieces('Hello world.')) print(sp.EncodeAsIds('Hello world.')) print(sp.DecodeIds([151, 88, 21, 887, 6])) \end{lstlisting} \vspace*{-5mm} \end{figure} \begin{figure}[t] 
\begin{lstlisting}[language=Python, label=fig1:tf, caption=TensorFlow API usage]
import tensorflow as tf
import tf_sentencepiece as tfs

model = tf.gfile.GFile('spm.model', 'rb').read()

input_text = tf.placeholder(tf.string, [None])
ids, lens = tfs.encode(input_text, model_proto=model,
                       out_type=tf.int32)
output_text = tfs.decode(ids, lens, model_proto=model)

with tf.Session() as sess:
  text = ['Hello world.', 'New York']
  ids_, lens_, output_text_ = sess.run(
      [ids, lens, output_text],
      feed_dict={input_text: text})
\end{lstlisting}
\vspace*{-3mm}
\begin{spacing}{0.5}
{\footnotesize The SentencePiece model (model proto) is an attribute of the TensorFlow operation and embedded into the TensorFlow graph, so the model and graph become purely self-contained.}
\end{spacing}
\vspace*{4mm}
\end{figure}

SentencePiece not only provides a standalone command line tool for off-line preprocessing but also supports C++, Python and TensorFlow library APIs for on-the-fly processing, which can easily be integrated into existing NMT frameworks. Figures \ref{fig1:c++}, \ref{fig1:python} and \ref{fig1:tf} show example usages of the C++, Python and TensorFlow API\footnote{As the Python and TensorFlow wrappers call the native C++ API, there is no performance drop in their interfaces.}. Figure \ref{fig1:python2} presents example Python code for subword regularization, where one subword sequence is sampled according to the unigram language model. One can see that the text ``New York'' is tokenized differently on each \verb+SampleEncodeAsPieces+ call. Please see \cite{kudo2018} for the details on subword regularization and its sampling hyperparameters.

\begin{figure}[t]
\begin{lstlisting}[language=Python, caption=Subword sampling with Python API, label=fig1:python2,lineskip=-0.3ex]
>>> sp.Load('spm.model')
>>> for n in range(5):
...
sp.SampleEncodeAsPieces('New York', -1, 0.1)
['_', 'N', 'e', 'w', '_York']
['_', 'New', '_York']
['_', 'New', '_Y', 'o', 'r', 'k']
['_', 'New', '_York']
['_', 'New', '_York']
\end{lstlisting}
\vspace*{-5mm}
\end{figure}

\section{Experiments}
\subsection{Comparison of different preprocessing}
We validated the performance of the different preprocessing schemes on English-Japanese translation of Wikipedia articles, as specified by the Kyoto Free Translation Task (KFTT)\footnote{\url{http://www.phontron.com/kftt}}. The training, development and test data of KFTT consist of 440k, 1166 and 1160 sentences respectively. We used GNMT \cite{wu2016google} as the implementation of the NMT system in our experiments. We generally followed the settings and training procedure described in \cite{wu2016google}; however, we changed the node and layer size of the LSTM to 512 and 6 respectively. A word model is used as a baseline system. We compared it to SentencePiece (unigram language model) with and without pre-tokenization. SentencePiece with pre-tokenization is essentially the same as the common NMT configuration with subword-nmt. SentencePiece without pre-tokenization directly trains the subword model from raw sentences and does not use any external resources. We used the Moses tokenizer\footnote{\url{http://www.statmt.org/moses/}} and KyTea\footnote{\url{http://www.phontron.com/kytea}} for English and Japanese pre-tokenization respectively. The same tokenizers are applied to the word model. We used the case-sensitive BLEU score \cite{papineni2002bleu} as an evaluation metric. As the output sentences are not segmented in Japanese, we segmented them with KyTea before calculating BLEU scores.
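The BLEU metric used for evaluation can be sketched in a few lines of Python. The following is a minimal illustration of corpus-level BLEU \cite{papineni2002bleu} with uniform 4-gram weights and the standard brevity penalty; it is not the exact evaluation script used in these experiments, which additionally depends on the tokenization choices discussed above:

```python
import math
from collections import Counter

def ngram_counts(tokens, n):
    """Count the n-grams of a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def corpus_bleu(hypotheses, references, max_n=4):
    """Corpus-level BLEU with uniform n-gram weights and brevity penalty."""
    matches = [0] * max_n          # clipped n-gram matches, per order
    totals = [0] * max_n           # hypothesis n-gram counts, per order
    hyp_len = ref_len = 0
    for hyp, ref in zip(hypotheses, references):
        hyp_len += len(hyp)
        ref_len += len(ref)
        for n in range(1, max_n + 1):
            h, r = ngram_counts(hyp, n), ngram_counts(ref, n)
            matches[n - 1] += sum(min(c, r[g]) for g, c in h.items())
            totals[n - 1] += sum(h.values())
    if min(matches) == 0:          # any zero precision drives BLEU to 0
        return 0.0
    log_prec = sum(math.log(m / t) for m, t in zip(matches, totals)) / max_n
    bp = min(1.0, math.exp(1.0 - ref_len / hyp_len))  # brevity penalty
    return bp * math.exp(log_prec)
```

For identical hypothesis and reference the score is 1; a single substituted word lowers all four n-gram precisions at once, which is one reason BLEU is so sensitive to preprocessing differences.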
\begin{table}[t]
\renewcommand{\arraystretch}{0.9}
\begin{center}
\begin{tabular}[c]{c|l|l|l}
\hline
{\scriptsize Lang pair} & {\small setting (source/target)} & {\small\shortstack{\# vocab.}} & BLEU \\ \hline
{\small ja$\rightarrow$en} & {\small Word model (baseline)} & 80k/80k & 28.24 \\
& {\small SentencePiece} & 8k {\scriptsize (shared)}& 29.55 \\
& {\small SentencePiece w/ pre-tok.\!\!} & 8k {\scriptsize (shared)}& 29.85 \\
& {\small Word/SentencePiece} & 80k/8k & 27.24 \\
& {\small SentencePiece/Word} & 8k/80k & 29.14 \\ \hline
{\small en$\rightarrow$ja} & {\small Word model (baseline)} & 80k/80k & 20.06 \\
& {\small SentencePiece} & 8k {\scriptsize (shared)} & 21.62 \\
& {\small SentencePiece w/ pre-tok.\!\!} & 8k {\scriptsize (shared)}& 20.86 \\
& {\small Word/SentencePiece} & 80k/8k & 21.41\\
& {\small SentencePiece/Word} & 8k/80k & 19.94 \\ \hline
\end{tabular}
\end{center}
\vspace*{-4mm}
\caption{Translation Results (BLEU(\%))}
\label{result}
\vspace*{-5mm}
\end{table}

Table \ref{result} shows the experimental results. First, as can be seen in the table, subword segmentations with SentencePiece consistently improve the BLEU scores compared to the word model. This result is consistent with previous work \cite{sennrichneural}. Second, it can be seen that pre-tokenization is not always necessary to boost the BLEU scores. In Japanese to English, the improvement is marginal and not statistically significant. In English to Japanese, the BLEU score is degraded with pre-tokenization. We find larger improvements in BLEU when 1) SentencePiece is applied to Japanese, and 2) the target sentence is Japanese. As Japanese is a non-segmented language, pre-tokenization acts as a strong constraint on the final vocabulary. This suggests that unsupervised segmentation from raw input works effectively to find the domain-specific vocabulary in Japanese.
\subsection{Segmentation performance}
Table \ref{result2} summarizes the training and segmentation performance of various configurations. We can see that the training and segmentation speeds of SentencePiece and subword-nmt are almost comparable on the English data set regardless of the choice of pre-tokenization. This is expected, as English is a segmented language and the search space for the vocabulary extraction is largely restricted. On the other hand, SentencePiece shows larger performance improvements when applied to raw Japanese data (w/o pre-tok.). The segmentation speed of SentencePiece is about 380 times faster than that of subword-nmt in this setting. This result strongly supports our claim that SentencePiece is fast enough to be applied to raw data and that pre-tokenization is not always necessary. Consequently, SentencePiece helps to build a purely data-driven and language-independent system. The segmentation speed of SentencePiece is around 21k and 74k sentences/sec. in English and Japanese respectively, which is fast enough to be executed on-the-fly.
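Throughput figures of this kind (sentences/sec.) can be obtained with a small timing harness. The sketch below is illustrative only: a whitespace tokenizer stands in for an actual SentencePiece model, an assumption made to keep the example self-contained.

```python
import time

def throughput(tokenize, sentences, repeat=5):
    """Return sentences/sec. for `tokenize`, best of `repeat` runs."""
    best = float('inf')
    for _ in range(repeat):
        start = time.perf_counter()
        for s in sentences:
            tokenize(s)
        best = min(best, time.perf_counter() - start)
    return len(sentences) / best

# Whitespace segmentation as a stand-in tokenizer.
sentences = ['Hello world .'] * 10000
rate = throughput(str.split, sentences)
```

Taking the best of several runs reduces the impact of interpreter warm-up and background load on the measured rate.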
\begin{table}[t]
\renewcommand{\arraystretch}{0.9}
\begin{center}
\begin{tabular}[c]{l|c|c|r|r}
\hline
& & & \multicolumn{2}{c}{time (sec.)} \\ \cline{4-5}
{\small Task} & {\footnotesize Tool} & {\small Pre-tok.} & {\small Japanese} & {\small English} \\ \hline
{\small Train} & {\small subword-nmt} & {\small yes} & 56.9 & 54.1 \\
& {\small SentencePiece} & {\small yes} & 10.1 & 16.8 \\
& {\small subword-nmt} & {\small no} & 528.0 & 94.7 \\
& {\small SentencePiece} & {\small no} & 217.3 & 21.8 \\ \hline
{\small Seg.} & {\small subword-nmt} & {\small yes} & 23.7 & 28.6 \\
& {\small SentencePiece} & {\small yes} & 8.2 & 20.3 \\
& {\small subword-nmt} & {\small no} & 216.2 & 36.1 \\
& {\small SentencePiece} & {\small no} & 5.9 & 20.3 \\ \hline
\multicolumn{3}{c|}{{\small Pre-tokenization}{\scriptsize\,\,KyTea(ja)/Moses(en)}} & 24.6 & 15.8 \\ \hline
\end{tabular}
\vspace*{-4mm}
\caption{Segmentation performance. {\footnotesize The KFTT corpus (440k sentences) is used for evaluation. Experiments are executed on Linux with Xeon 3.5GHz processors. The size of the vocabulary is 16k. Moses and KyTea tokenizers are used for English and Japanese respectively. Note that we have to take the time of pre-tokenization into account to make a fair comparison with and without pre-tokenization. Because subword-nmt is based on BPE, we used the BPE model in SentencePiece. We found that BPE and unigram language models show almost comparable performance.}}
\label{result2}
\vspace*{-6mm}
\end{center}
\end{table}

\section{Conclusions}
In this paper, we introduced SentencePiece, an open-source subword tokenizer and detokenizer designed for neural text processing. SentencePiece not only performs subword tokenization, but directly converts the text into an id sequence, which helps to develop a purely end-to-end system without relying on language-specific resources.
The model file of SentencePiece is designed to be self-contained to guarantee perfect reproducibility of the normalization and subword segmentation. We hope that SentencePiece will provide a stable and reproducible text processing tool for production use and help the research community to move to more language-agnostic and multilingual architectures.
\section{S\lowercase{amples and experimental setups}\label{exp}}
The active parts of our samples consist of a monolayer (ML) of a semiconducting transition metal dichalcogenide (S-TMD), $i.e.$ WSe$_2$, MoS$_2$, WS$_2$, or MoSe$_2$, which has been encapsulated in hexagonal boron nitride (hBN) and deposited on a bare Si substrate. They were fabricated by two-stage polydimethylsiloxane (PDMS)-based~\cite{gomezSM} mechanical exfoliation of S-TMD and hBN bulk crystals. The encapsulating hBN layers were rather thick: the bottom hBN layers were 35-40 nm thick in the case of the WS$_2$ and WSe$_2$ monolayers and 60-65 nm thick in the case of the MoS$_2$ and MoSe$_2$ monolayers. The thickness of the top hBN was about 20~nm in all structures studied. The $\mu$-photoluminescence ($\mu$-PL) and $\mu$-reflectance contrast ($\mu$-RC) experiments were performed using a $\lambda$=515~nm CW laser diode and a 100~W tungsten halogen lamp, respectively. Micro-magneto-PL measurements were performed in the Faraday configuration using an optical-fiber-based insert placed in a superconducting magnetic coil producing magnetic fields up to 14~T. The sample was mounted on top of an $x-y-z$ piezo-stage kept in gaseous helium at $T$=4.2~K. The excitation light was coupled to an optical fiber with a core of 5~$\mu$m diameter and focused on the sample by an aspheric lens (spot diameter around 1~$\mu$m). The signal was collected by the same lens, injected into a second optical fiber of 50~$\mu$m diameter, and analyzed by a \mbox{0.5 m} long monochromator equipped with a CCD camera. A combination of a quarter wave plate and a polarizer is used to analyse the circular polarization of the signals. The measurements were performed with a fixed circular polarization, whereas reversing the direction of the magnetic field yields the information corresponding to the other polarization component due to time-reversal symmetry.
Investigations at zero magnetic field were carried out with the aid of a continuous flow cryostat mounted on $x-y$ motorized positioners. The sample was placed on a cold finger of the cryostat. The excitation light was focused by means of a 50x long-working-distance objective with a 0.5 numerical aperture producing a spot of about 1~$\mu$m. The signal was collected via the same microscope objective, sent through a 0.5 m monochromator, and then detected by a CCD camera.

\section{E\lowercase{xcitonic spectrum and eigenfunctions in the} K\lowercase{ratzer potential}\label{kratzer}}
We solve the two-dimensional (2D) Schr\"{o}dinger equation with the Kratzer potential~\cite{kratzerSM} for the wave-function $\psi(\mathbf{r})=\psi(r,\varphi)$
\begin{equation}
\Big\{-\frac{\hbar^2}{2\mu}\Big[\frac{\partial^2}{\partial r^2}+\frac{1}{r}\frac{\partial}{\partial r}+ \frac{1}{r^2}\frac{\partial^2}{\partial\varphi^2}\Big]+U_{ext}(r) - \epsilon\Big\}\psi(r,\varphi)=0,
\end{equation}
in which $U_{ext}(r)=-e^2/r_0(r_0^*/r-g^2r_0^{*2}/r^2)$ is the modified Kratzer potential. Here $r$ is the in-plane electron-hole distance, $\mu$~denotes the reduced electron-hole mass, $\varepsilon$ represents the dielectric constant of the material surrounding the monolayer, $r_0^*=r_0/\varepsilon$ is the reduced screening length, and $g$ is a tunable parameter. Taking $\psi_m(r,\varphi)=e^{im\varphi}\phi_m(r)/\sqrt{2\pi}$ and introducing the new variable $\xi=r/r_0^*$, we obtain the equation
\begin{equation}
\Big\{\frac{d^2}{d\xi^2}+\frac{1}{\xi}\frac{d}{d\xi}+ \frac{-k^2\xi^2+\kappa^2\xi-g^2\kappa^2-m^2}{\xi^2}\Big\}\phi_m(\xi)=0
\end{equation}
with $k^2=-2\mu\epsilon r_0^{*2}/\hbar^2>0$ and $\kappa^2=2\mu r_0e^2/\hbar^2\varepsilon^2>0$.
The solution to this eigenvalue problem is
\begin{equation}
\label{eq:full_wave_function}
\phi_{n,m}(r)=\frac{\beta_{n,m}}{\sqrt{2n+2\delta_m-1}}\sqrt{\frac{(n-|m|-1)!}{\Gamma(n+|m|+2\delta_m)}}\times (\beta_{n,m}r)^M e^{-\beta_{n,m}r/2} L_{n-|m|-1}^{2M}(\beta_{n,m}r),
\end{equation}
with $M=\sqrt{m^2+g^2\kappa^2}$, $\delta_m=M-|m|$ and $\beta_{n,m}=2\mu e^2/\hbar^2\varepsilon(n+\delta_m-1/2)$, respectively. Here $n$=1, 2\dots~is the principal quantum number, $m$=0, $\pm1$, $\pm2$\dots~is the angular momentum quantum number, and $L_n^\alpha(x)$ is the generalized Laguerre polynomial. The energy spectrum of such a system is given by
\begin{equation}
\label{eq:full_spectrum}
\epsilon_{n,m}=-\frac{\mu e^4}{2\hbar^2\varepsilon^2}\frac{1}{(n+\delta_m-\frac12)^2}.
\end{equation}
For $g$=0, our result coincides with the 2D hydrogen model~\cite{YangSM}. In the case of $s$-type states ($n$=1, 2\dots~and $m$=0), the excitonic spectrum simplifies to
\begin{equation}
\label{eq:partial_spectrum}
\epsilon_n=-\frac{\mu e^4}{2\hbar^2\varepsilon^2}\frac{1}{(n+g\kappa-\frac12)^2}.
\end{equation}
We mention the following consequences of this model: (i) the energy scale (the prefactor in $\epsilon_n$) does not depend on the screening length $r_0$. It coincides with the Rydberg constant $Ry^*$ for an exciton with reduced mass $\mu$ in an environment with dielectric constant $\varepsilon$, $i.e.$ $Ry^*=\mu e^4/2\hbar^2\varepsilon^2$; (ii) the information about the relative positions of the energy levels of the system is encoded in the denominators in Eqs.~\ref{eq:full_spectrum} and \ref{eq:partial_spectrum}; (iii) the Kratzer potential lifts the Coulomb degeneracy of the $s$- ($m$=0) and $p$-type ($m$=$\pm$1) states, as can be noticed from Eq.~(\ref{eq:full_spectrum}); (iv) since $Ry^*\propto 1/\varepsilon^2$ and $\delta+1/2\propto 1/\varepsilon$, the energy ladder of the excitons can be progressively tuned by changing the dielectric constant $\varepsilon$ of the surrounding medium.
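These consequences can be verified numerically from the spectrum above. The short Python sketch below evaluates $\epsilon_{n,m}$ in units of $Ry^*$; the value $g\kappa=0.5$ is hypothetical, chosen only to expose the lifting of the $s$--$p$ degeneracy:

```python
import math

def delta_m(m, g_kappa):
    """Quantum defect delta_m = sqrt(m^2 + (g*kappa)^2) - |m| of the Kratzer model."""
    return math.sqrt(m * m + g_kappa * g_kappa) - abs(m)

def epsilon(n, m, g_kappa):
    """Energy epsilon_{n,m} in units of the effective Rydberg Ry*."""
    return -1.0 / (n + delta_m(m, g_kappa) - 0.5) ** 2

# g*kappa = 0: the 2D hydrogen limit, where the 2s and 2p states are degenerate.
e2s_h, e2p_h = epsilon(2, 0, 0.0), epsilon(2, 1, 0.0)

# A finite g*kappa lifts the degeneracy: the repulsive 1/r^2 core affects the
# s states most, pushing them up relative to the p states.
e2s_k, e2p_k = epsilon(2, 0, 0.5), epsilon(2, 1, 0.5)
```

The same two functions also reproduce consequence (i): rescaling $Ry^*$ changes only the overall energy scale, not the relative level positions encoded in $\delta_m$.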
Surprisingly, the results of numerical simulations performed with the Rytova-Keldysh potential~\cite{rytovaSM,keldyshSM}, discussed in the next section, demonstrate a similar behavior. This fact can be interpreted as an indirect confirmation that the Kratzer potential is a good approximation for the considered model. Using the wave-functions obtained in Eq.~(\ref{eq:full_wave_function}), we calculate the mean value of $r^2$, which is useful for the analysis of the diamagnetic shift of excitons. It reads
\begin{equation}
\langle r^2\rangle_{n,m}=\frac{2}{(\beta_{n,m})^2}[3-3m^2+5n(n-1)-5\delta_m-6|m|\delta_m+2\delta_m(5n+\delta_m)].
\end{equation}
For $s$-type states characterized by $m$=0, it takes the form
\begin{equation}
\langle r^2\rangle_{n,0}=\frac{2}{(\beta_{n,0})^2}(2g^2\kappa^2+10g\kappa n-5g\kappa+5n^2-5n+3).
\end{equation}
Moreover, for the special case $g\kappa=1/2$, in which Eq.~\ref{eq:partial_spectrum} resembles the three-dimensional (3D) hydrogen model, the mean value of the $r^2$ parameter is given by
\begin{equation}
\langle r^2\rangle_{n,0}|_{g\kappa=1/2}=(a_0^*)^2n^2(5n^2+1)/2\approx 5(a_0^*)^2n^4/2,
\label{eq:diamag}
\end{equation}
where $a_0^*$=$\hbar^2\varepsilon/\mu e^2$ is the effective Bohr radius. It is interesting to note that the latter formula coincides with the mean value of $r^2$ for the 3D hydrogen atom~\cite{betheSM}. The eigenfunctions of the $s$-states of the Schr\"{o}dinger Hamiltonian with the Kratzer potential (\ref{eq:full_wave_function}) tend to zero at $r\rightarrow 0$. This is a consequence of the repulsive part of the potential at short distances. Therefore, such solutions cannot be a good approximation for the $s$-state exciton wave-functions at small distances, since the Rytova-Keldysh potential is attractive. In order to improve the current result, one needs to modify the Kratzer potential at small distances.
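The closed-form expression for $\langle r^2\rangle$ can be cross-checked by direct numerical integration of the eigenfunctions of Eq.~(\ref{eq:full_wave_function}). The sketch below does this for the special case $g\kappa=1/2$ and $m=0$, where $\beta_{n,0}=2/(a_0^*n)$ and the polynomial factor reduces to the generalized Laguerre polynomial $L_{n-1}^{1}$; lengths are measured in units of $a_0^*$:

```python
import math

def laguerre(k, alpha, x):
    """Generalized Laguerre polynomial L_k^alpha(x) via the standard recurrence."""
    if k == 0:
        return 1.0
    prev, cur = 1.0, 1.0 + alpha - x
    for j in range(2, k + 1):
        prev, cur = cur, ((2 * j - 1 + alpha - x) * cur - (j - 1 + alpha) * prev) / j
    return cur

def mean_r2(n, r_max=150.0, steps=150_000):
    """<r^2> for the ns state at g*kappa = 1/2, in units of a0*^2 (m = 0)."""
    beta = 2.0 / n                  # beta_{n,0} = 2/(a0* n) for g*kappa = 1/2
    num = den = 0.0
    dr = r_max / steps
    for i in range(1, steps + 1):   # simple Riemann sum; integrand vanishes at 0
        r = i * dr
        phi = r ** 0.5 * math.exp(-beta * r / 2) * laguerre(n - 1, 1.0, beta * r)
        w = phi * phi * r * dr      # 2D radial measure r dr
        den += w
        num += w * r * r
    return num / den

# Closed form (Eq. above): <r^2> = n^2 (5 n^2 + 1) / 2, i.e. 3 for n=1, 42 for n=2.
```

Normalization constants drop out of the ratio, so only the radial shape of the wave-function enters.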
\section{N\lowercase{umerical analysis of the excitonic spectrum in the} R\lowercase{ytova-}K\lowercase{eldysh potential}\label{keldysh}}
\label{sec:S3}
\begin{figure}[t]
\includegraphics[width=8cm]{Fig_S1.eps}
\caption{\label{fig:spectrum_RK} Interpolation lines for $(-W_n)^{-1/2}$ for $n$=1, 2,\dots, 5 and different values of the dielectric constant from $\varepsilon_{min}=1$ (top blue line) to $\varepsilon_{max}=5$ (bottom purple line) with step $\Delta\varepsilon=0.5$. Blue, yellow, and green circles represent the values of the function $(-W_n)^{-1/2}$ for different $n$, when $\varepsilon$=1, 1.5, 2, respectively. }
\end{figure}
We solve numerically the eigenvalue problem for the 2D Schr\"{o}dinger Hamiltonian with the Rytova-Keldysh potential~\cite{rytovaSM,keldyshSM}. We analyse the scaling laws for the spectrum both as a function of the principal quantum number $n$ and of the dielectric constant $\varepsilon$ of the surrounding medium. We start from the radial equation for the wave function $\phi(r)$ of the $s$-type states characterized by zero angular momentum:
\begin{equation}
\Big\{-\frac{\hbar^2}{2\mu}\frac{1}{r}\frac{d}{dr}\Big[r\frac{d}{dr}\Big]-\frac{\pi e^2}{2 r_0}\Big[\text{H}_0\Big(\frac{r\varepsilon}{r_0}\Big)-Y_0\Big(\frac{r\varepsilon}{r_0}\Big)\Big] - \epsilon\Big\}\phi(r)=0,
\end{equation}
where $\mathrm{H}_0(x)$ and $Y_0(x)$ are the zeroth-order Struve and Neumann functions. Introducing the new variables $\xi=r\varepsilon/r_0=r/r_0^*$ and $\epsilon=(\mu e^4/2\hbar^2\varepsilon^2)W=WRy^*$, we rewrite the equation as
\begin{equation}
\Big\{-b^2\frac{1}{\xi}\frac{d}{d\xi}\Big[\xi\frac{d}{d\xi}\Big]-\pi b\Big[\text{H}_0(\xi)-Y_0(\xi)\Big] - W\Big\}\phi(\xi)=0
\label{shred}
\end{equation}
with $b=\hbar^2\varepsilon^2/(\mu e^2r_0)=a_B^*/r_0^*$ -- the ratio of the natural length scales in the system. We derive the spectrum of this differential equation as a function of $\varepsilon$ and $n$.
\begin{figure}[b]
\includegraphics[width=8cm]{Fig_S2.eps}
\caption{\label{fig:spectrum_RK_diff} Interpolation lines for $(-W_n)^{-1/2}-n$ for $n$=1, 2, \dots, 5 and different values of the dielectric constant from $\varepsilon_{min}=1$ (top blue line) to $\varepsilon_{max}=5$ (bottom purple line) with step $\Delta\varepsilon=0.5$. Blue, yellow, and green circles represent the values of the function $(-W_n)^{-1/2}-n$ for different $n$, when $\varepsilon$=1, 1.5, 2, respectively.}
\end{figure}
According to our hypothesis described in the main text, the excitonic spectrum should be described by
\begin{equation}
\label{eq:energy_ladder}
\epsilon_n=Ry^*W_n=-\frac{Ry^*}{(\alpha n+\beta)^2}=-Ry^*\frac{\gamma}{(n+\delta)^2},
\end{equation}
where $\gamma=\alpha^{-2}\simeq1$, while $\beta$ (and hence $\delta=\beta/\alpha$) strongly depends on $\varepsilon$. Therefore, we expect a linear behaviour of $(-W_n)^{-1/2}$ with $n$. The results of numerical simulations (performed in ``Mathematica'') for the case of WSe$_2$ ($\mu$=0.2~$m_0$~\cite{stierSM}, $r_0=45$~$\mbox{{\AA}}$~\cite{berkelbachSM}) are presented in Figs.~\ref{fig:spectrum_RK} and \ref{fig:spectrum_RK_diff}. Indeed, the linear growth of $(-W_n)^{-1/2}$ as a function of $n$ for different values of $\varepsilon$ can be appreciated in Fig.~\ref{fig:spectrum_RK}. Fig.~\ref{fig:spectrum_RK_diff} qualitatively confirms that $\alpha\simeq1$. The precision of this result (the relative deviation of the curve $(-W_n)^{-1/2}-n$ from its average value) becomes higher for larger values of $\varepsilon$. Let us generalize the aforementioned result to the case of the other S-TMD monolayers and determine the limits of applicability of our model. First, we derive numerically the spectrum $W_n\,(n=1,2\dots 5)$ from Eq.~\ref{shred} for different values of the parameter $b$. Then we fit the obtained data with the formula $\mathcal{W}_n=-\gamma/(n+\delta)^2$ and extract the parameters $\gamma$ and $\delta$ for each $b$.
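Since $(-W_n)^{-1/2}=(n+\delta)/\sqrt{\gamma}$, this fit reduces to a linear regression of $(-W_n)^{-1/2}$ on $n$. The sketch below demonstrates the extraction on a synthetic spectrum generated with known (hypothetical) $\gamma$ and $\delta$, standing in for the numerically computed levels:

```python
def fit_gamma_delta(levels):
    """Fit W_n = -gamma/(n+delta)^2 via linear regression of (-W_n)^(-1/2) on n."""
    ns = list(range(1, len(levels) + 1))
    ys = [(-w) ** -0.5 for w in levels]       # (n + delta)/sqrt(gamma)
    n_mean = sum(ns) / len(ns)
    y_mean = sum(ys) / len(ys)
    slope = (sum((n - n_mean) * (y - y_mean) for n, y in zip(ns, ys))
             / sum((n - n_mean) ** 2 for n in ns))
    intercept = y_mean - slope * n_mean
    gamma = 1.0 / slope ** 2                  # slope = 1/sqrt(gamma)
    delta = intercept / slope                 # intercept = delta/sqrt(gamma)
    return gamma, delta

# Synthetic spectrum with known parameters (hypothetical values).
gamma0, delta0 = 0.95, -0.12
levels = [-gamma0 / (n + delta0) ** 2 for n in range(1, 6)]
gamma, delta = fit_gamma_delta(levels)
```

Because the synthetic data are exactly of the fitted form, the regression recovers $\gamma$ and $\delta$ to machine precision; for the real $W_n$ the residuals quantify the quality of the hypothesis.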
In the following, we focus mainly on the range of parameters $b\in[0.05,2.5]$. Such a domain contains all values of $b$ accessible in experiment for all S-TMD monolayers and average dielectric constants of the surrounding medium (such as PDMS, sapphire, Si/SiO$_2$ or hBN). The parameters $\gamma$ and $\delta$, as functions of $b$, are presented in Figs.~\ref{fig:gamma} and \ref{fig:delta}, respectively.
\begin{figure}[t]
\includegraphics[width=8cm]{gamma.eps}
\caption{\label{fig:gamma} The parameter $\gamma$, extracted from $\mathcal{W}_n\,(n=1,2\dots5)$ for different values of the parameter $b$. }
\end{figure}
\begin{figure}[b]
\includegraphics[width=8cm]{delta.eps}
\caption{\label{fig:delta} The parameter $\delta$, extracted from $\mathcal{W}_n\,(n=1,2\dots5)$ for different values of the parameter $b$. }
\end{figure}
Note that the values of $\gamma$ deviate only slightly from $1$ at $b>0.3$ and approach the limit $\gamma_\infty=1$ at $b\rightarrow\infty$. Therefore, at relatively large $b$ the energy ladder of our problem coincides with the experimentally observed spectrum of excitons, $\epsilon_n \approx -Ry^*/(n+\delta)^2$. The values of $\delta$ follow a monotonically decreasing function with the limit $\delta_\infty=-1/2$ at $b\rightarrow\infty$, which is nothing but the case of the 2D hydrogen atom. It is interesting to note that for $b\approx 0.9$ the parameter $\delta$ becomes zero and the exciton spectrum reproduces the energy ladder of the 3D hydrogen atom. In order to define the limits of applicability of our model, we calculate the relative deviations $[(\mathcal{W}_n-W_n)/W_n]\times 100\%$ for $n=1,2,\dots 5$ as a function of the parameter $b$. The corresponding plots are presented in Fig.~\ref{fig:deviation}. One can see that, starting from $b>0.3$, all the relative deviations become smaller than $2\%$ and tend to zero at $b\rightarrow\infty$.
The variational procedure, described in the next section of the Supplementary Materials, predicts the ground state energy of the 2D exciton in the Rytova-Keldysh potential with the same precision. Therefore, in order to have a self-consistent picture of all our calculations, we choose the ``$2\%$ deviation rule'' as a formal criterion that defines the limits of applicability of our model.
\begin{figure}
\includegraphics[width=8cm]{deviation.eps}
\caption{\label{fig:deviation} The relative deviation (in $\%$) between the spectrum $W_n$ and its fit $\mathcal{W}_n$ as a function of the parameter $b$ for $n=1,2\dots5$. }
\end{figure}
Finally, let us discuss the specific symmetry of the studied Hamiltonian
\begin{equation}
H(\mu,\varepsilon,r_0)=-\frac{\hbar^2}{2\mu}\Delta_{2D}-\frac{\pi e^2}{2 r_0}\Big[\text{H}_0\Big(\frac{r\varepsilon}{r_0}\Big)-Y_0\Big(\frac{r\varepsilon}{r_0}\Big)\Big],
\end{equation}
where $\Delta_{2D}$ is the two-dimensional Laplacian. The Hamiltonian is a homogeneous function, $H(\lambda\mu,\lambda\varepsilon,\lambda r_0)=\lambda^{-1}H(\mu,\varepsilon,r_0)$, of its parameters $\{\mu,\varepsilon,r_0\}$ for any $\lambda>0$. Therefore the spectrum of the problem should have the same property: $\epsilon_n(\lambda\mu,\lambda\varepsilon,\lambda r_0)=\lambda^{-1}\epsilon_n(\mu,\varepsilon,r_0)$. One can demonstrate that our expression for the energy ladder (Eq.~\ref{eq:energy_ladder}) satisfies this scaling law too.

\section{D\lowercase{erivation of the excitonic spectrum in} WS\lowercase{e}$_2$ \lowercase{monolayer encapsulated in h}BN\label{spectrum}}
We consider the solution of the 2D Schr\"{o}dinger equation in the potential defined in the main text, which reads
\begin{equation}
U_{app}(\xi)=\left\{\begin{array}{cc} -U_0\Big[\frac{1}{\xi}-\frac{0.21}{\xi^2}\Big], & \text{for}\quad \xi>\xi_0; \\ -1.71134\,U_0, & \text{for}\quad \xi<\xi_0. \end{array}\right.
\end{equation} We restrict our consideration only to the $s$-type states characterized by zero angular momentum. The regular radial solution of the Schr\"{o}dinger equation in the region $\xi<\xi_0$ is \begin{equation} \phi_1(\xi)\sim J_0\big(\kappa\sqrt{v_0-|\mathcal{E}|}\,\xi\big), \end{equation} where $J_0(x)$ is the zero-order Bessel function of the first kind, $\mathcal{E}=\epsilon/U_0$ and $v_0=1.71134$. The solution for the region $\xi>\xi_0$ has the form \begin{equation} \phi_2(\xi)\sim e^{-k\xi}\xi^{g\kappa}\Psi\Big(-\frac{\kappa^2}{2k}+g\kappa+\frac12,1+2g\kappa; 2k\xi\Big) \end{equation} with $g^2$=0.21, $k^2=-2\mu\epsilon r_0^{*2}/\hbar^2>0$ and $\Psi(a,c;z)$ is the Tricomi's function, which is regular at $\xi\rightarrow\infty$ and solves the degenerate hypergeometric equation \cite{batemanSM} \begin{equation} \Big\{z\frac{d^2}{dz^2}+(c-z)\frac{d}{dz}-a\Big\}\Psi(a,c;z)=0. \end{equation} \begin{figure}[b] \includegraphics[width=8.5cm]{Fig_S3.eps} \caption{\label{fig:4} The normalized logarithmic derivatives for core solution $f_1(\mathcal{E})$ (red curve), external solution $f_2(\mathcal{E})$ (purple curve) and for regular solution $f_3(\mathcal{E})$ for the Kratzer potential with $g^2=0.21$ (blue curve) as a function of dimensionless parameter $|\mathcal{E}|=|\epsilon|/U_0$. 
The dashed purple lines represent the asymptotes of $f_2(\mathcal{E})$.}
\end{figure}
Introducing the normalized logarithmic derivatives of both solutions, $f_n(\mathcal{E})=\kappa^{-1}[d\ln\phi_n(\xi)/d\xi]_{\xi=\xi_0},\, n=1,2$,
\begin{equation}
f_1(\mathcal{E})=-\sqrt{v_0-|\mathcal{E}|}\,\,\frac{J_1(2g^2\kappa\sqrt{v_0-|\mathcal{E}|})} {J_0(2g^2\kappa\sqrt{v_0-|\mathcal{E}|})},
\end{equation}
\begin{equation}
f_2(\mathcal{E})=-\sqrt{|\mathcal{E}|}+\frac{1}{2g}+ \big[\kappa-(2g\kappa+1)\sqrt{|\mathcal{E}|}\big]\times \frac{\Psi\Big(-\frac{\kappa}{2\sqrt{|\mathcal{E}|}}+g\kappa+\frac32,2+2g\kappa; 4g^2\kappa\sqrt{|\mathcal{E}|}\Big)}{\Psi\Big(-\frac{\kappa}{2\sqrt{|\mathcal{E}|}}+g\kappa+\frac12,1+2g\kappa; 4 g^2\kappa\sqrt{|\mathcal{E}|}\Big)},
\end{equation}
one derives the energy spectrum from the continuity condition $f_1(\mathcal{E})=f_2(\mathcal{E})$. We solve the latter equation for a WSe$_2$ ML encapsulated in hBN with the set of parameters $\mu$=0.2~$m_0$~\cite{stierSM}, $\varepsilon=4.5$~\cite{geickSM}, and $r_0=45\,\mbox{{\AA}}$~\cite{berkelbachSM}. The derivatives as a function of $|\mathcal{E}|$ are presented in Fig.~\ref{fig:4}, and the excitonic spectrum is defined by the intersection points of the $f_1(\mathcal{E})$ and $f_2(\mathcal{E})$ curves. The excitonic spectrum obtained within the method described above can be presented in the same form as for the Kratzer potential (compare with Eq.~5 in the main text) and is given by
\begin{equation}
\epsilon_n=-\frac{134\,\text{meV}}{(n-0.099)^2}.
\end{equation}
This result is in good agreement with the excitonic spectrum reported in Ref.~\citenum{stierSM}, with relative errors of $8\%$ for $n=1$, $3.5\%$ for $n=2$, and less than $2\%$ for the higher excited states. In order to check the precision of the graphical method, we applied it to the case in which the core potential is described by the Kratzer one with the same parameter $g^2=0.21$.
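The continuity condition $f_1(\mathcal{E})=f_2(\mathcal{E})$ is an instance of the generic logarithmic-derivative matching technique. As a self-contained illustration, with elementary functions in place of the Bessel and Tricomi functions, the sketch below finds the ground state of a 1D finite square well by bisection on the same type of matching condition; the well strength is an arbitrary illustrative value:

```python
import math

def ground_state(u):
    """Ground state of a 1D square well of dimensionless strength u = a*sqrt(2*m*V0)/(2*hbar).

    Even solutions satisfy xi*tan(xi) = sqrt(u^2 - xi^2) with xi = k*a/2, i.e.
    the log-derivatives of the interior and exterior solutions match at the edge.
    Returns E/V0 (negative for a bound state).
    """
    def mismatch(xi):
        return xi * math.tan(xi) - math.sqrt(u * u - xi * xi)

    # Bracket the first root below the tan singularity at pi/2 and bisect.
    lo, hi = 1e-9, min(u, math.pi / 2) - 1e-9
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mismatch(lo) * mismatch(mid) <= 0:
            hi = mid
        else:
            lo = mid
    xi = 0.5 * (lo + hi)
    return -(1.0 - (xi / u) ** 2)   # E/V0, since xi^2 = u^2 (1 - |E|/V0)

e_ratio = ground_state(1.0)
```

The actual calculation in the text replaces the sine/cosine interior solution by $J_0$ and the exponential exterior solution by the Tricomi function, but the root-finding structure is identical.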
For the Kratzer potential, the graphical solution provides the excitonic spectrum that follows Eq.~\ref{eq:partial_spectrum} of the SM. In this case, the logarithmic derivative is
\begin{equation}
f_3(\mathcal{E})=-\sqrt{|\mathcal{E}|}+\frac{1}{2g}- \frac{\big[\kappa-(2g\kappa+1)\sqrt{|\mathcal{E}|}\big]}{1+2g\kappa}\times \frac{_1F_1\Big(-\frac{\kappa}{2\sqrt{|\mathcal{E}|}}+g\kappa+\frac32,2+2g\kappa; 4g^2\kappa\sqrt{|\mathcal{E}|}\Big)}{_1F_1\Big(-\frac{\kappa}{2\sqrt{|\mathcal{E}|}}+g\kappa+\frac12,1+2g\kappa; 4 g^2\kappa\sqrt{|\mathcal{E}|}\Big)},
\end{equation}
where $_1F_1(a,c;z)$ is the confluent hypergeometric function of the first kind; it corresponds to the blue curve in Fig.~\ref{fig:4}. Note that the calculation with the potential $U_{app}(\xi)$ does not perfectly reproduce the $1s$ exciton state. From the technical point of view, such a discrepancy can be a consequence of the modification of the Rytova-Keldysh potential at small distances. In order to check this hypothesis, we estimate the ground state energy of the exciton using another method. Namely, we derive the ground state energy of S-TMD excitons using the Ritz variational procedure. We take the variational wave-function in the form $\psi_0(\mathbf{r})=\beta\exp(-\beta r/2)/\sqrt{2\pi}$ and evaluate the average of the Hamiltonian
\begin{equation}
H=-\frac{\hbar^2}{2\mu}\Delta_{2D}-\frac{\pi e^2}{2 r_0}\Big[\text{H}_0\Big(\frac{r\varepsilon}{r_0}\Big)-Y_0\Big(\frac{r\varepsilon}{r_0}\Big)\Big].
\end{equation}
The kinetic energy can be calculated directly:
\begin{equation}
T_0(\beta)=-\frac{\hbar^2}{2\mu}\int_0^\infty\!\! dr\,\psi_0(\mathbf{r})\frac{d}{dr}\Big[r\frac{d\psi_0(\mathbf{r})}{dr}\Big]=\frac{\hbar^2\beta^2}{8\mu}.
\end{equation}
To determine the potential energy, a few steps need to be performed.
First, we present the Rytova-Keldysh potential in the integral form~\cite{CudazzoSM}
\begin{equation}
U(r)=-\frac{e^2}{2\pi\varepsilon}\int_0^{2\pi} d\theta\int_0^\infty dk \frac{e^{ikr\cos\theta}}{1+kr_0^*}.
\end{equation}
Then, we substitute it into the expression for the average potential energy
\begin{equation}
U_0(\beta)=-\frac{e^2\beta^2}{2\pi\varepsilon}\int_0^\infty\!\!\!\frac{dk}{1+kr_0^*}\int_0^{2\pi}\!\!d\theta \int_0^\infty\!\! drre^{(ik\cos\theta-\beta)r}.
\end{equation}
Consequently, after evaluating the integrals and adding the kinetic energy part, we obtain the formula
\begin{equation}
\epsilon(a)=\frac{e^2}{r_0}\Big[\frac{\hbar^2\varepsilon^2}{8\mu r_0 e^2}a^2+af(a)\Big],
\end{equation}
where $a=\beta r^*_0$ is a dimensionless parameter and
\begin{equation}
f(a)=\frac{(a-1)\sqrt{1+a^2}-2a^2\text{Arcoth}\big(\frac{1+a}{\sqrt{1+a^2}}\big)}{(1+a^2)^{3/2}}.
\end{equation}
The minimum of $\epsilon(a)$ can be found straightforwardly (with the help of ``Mathematica'', for example). For the case of $\mu=0.2$~$m_0$~\cite{stierSM}, we obtained the exciton ground state energy $\epsilon_0$=-157~meV, which is in good agreement with the results of the numerical simulations reported in Ref.~\citenum{stierSM}, obtained for the same values of the parameters. Moreover, the variational exciton ground-state energy, calculated for different values of $\varepsilon$, deviates from the numerical results discussed in Ref.~\citenum{stierSM} by less than $2\%$.
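The variational estimate can be reproduced in a few lines. The sketch below evaluates $\epsilon(a)$ for the WSe$_2$ parameters used above ($\mu=0.2~m_0$, $\varepsilon=4.5$, $r_0=45$~\AA, taking $e^2/4\pi\epsilon_0\approx14.4$~eV\,\AA~and $\hbar^2/m_0\approx7.62$~eV\,\AA$^2$) and minimizes it on a grid; it is a sketch of the Ritz procedure, not the original computation:

```python
import math

# Material parameters and constants (energies in eV, lengths in angstrom).
E2 = 14.3996          # e^2/(4 pi eps_0)
HBAR2_M0 = 7.6200     # hbar^2/m_0
MU, EPS, R0 = 0.2, 4.5, 45.0

def f(a):
    """Auxiliary function from the averaged Rytova-Keldysh potential."""
    s = math.sqrt(1.0 + a * a)
    z = (1.0 + a) / s                          # argument of Arcoth, always > 1
    arcoth = 0.5 * math.log((z + 1.0) / (z - 1.0))
    return ((a - 1.0) * s - 2.0 * a * a * arcoth) / (1.0 + a * a) ** 1.5

def energy(a):
    """Variational energy eps(a) in eV for the 2D exciton trial state."""
    b = HBAR2_M0 * EPS ** 2 / (MU * E2 * R0)   # b = a_B*/r_0*
    return (E2 / R0) * (b / 8.0 * a * a + a * f(a))

# Grid minimization over the dimensionless variational parameter a = beta*r_0*.
a_opt = min((0.01 * i for i in range(1, 500)), key=energy)
e0_meV = 1000.0 * energy(a_opt)
```

The minimum lies near $a\approx1.3$ and reproduces the quoted ground-state energy of about $-157$~meV to within a few meV, the residual difference reflecting the chosen constants and grid.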
Surprisingly, the numerical simulations for the WSe$_2$ monolayer encapsulated in hBN with an effective mass $\mu$=0.21~$m_0$ are in better agreement with the experimentally obtained excitonic spectrum, $\epsilon_n=-140.5\,\mbox{meV}/(n-0.083)^2$ (see the main text), than those for $\mu$=0.20~$m_0$, and lead to the energy ladder of excitons given by
\begin{equation}
\epsilon_n=-141\,\mbox{meV}/(n-0.087)^2.
\end{equation}
The excitonic binding energy, $E_b$=162~meV, calculated with the aid of the Ritz variational method also nicely matches the experimental one, $E_b$=167~meV, obtained in the main text.

\section{M\lowercase{agneto-photoluminescence investigation of} M\lowercase{o}S$_2$ \lowercase{and} WS$_2$ \lowercase{monolayers}\label{inne}}
\begin{center}
\begin{figure*}[h]
\begin{minipage}[c]{0.25\linewidth}
\includegraphics[width=1\linewidth]{inne_a.eps}%
\end{minipage}\hfill
\begin{minipage}[c]{0.25\linewidth}
\includegraphics[width=1\linewidth]{inne_b.eps}%
\end{minipage}\hfill
\begin{minipage}[c]{0.25\linewidth}
\includegraphics[width=1\linewidth]{inne_c.eps}%
\end{minipage}\hfill
\begin{minipage}[c]{0.25\linewidth}
\includegraphics[width=1\linewidth]{inne_d.eps}%
\end{minipage}\hfill
\caption{(a)/(c) Helicity-resolved ($\sigma^{\pm}$) PL spectra of MoS$_2$/WS$_2$ MLs at selected magnetic fields. The separate parts of the spectra are normalized to the intensity of the 1$s$ and 2$s$ lines. (b)/(d) False-colour maps of the corresponding PL spectra from 0 to 14~T.}
\label{fig:inne}
\end{figure*}
\end{center}
To study the ladder of excitonic states in S-TMD monolayers, we additionally performed helicity-resolved magneto-photoluminescence experiments on MoS$_2$ and WS$_2$ monolayers encapsulated in hBN, see Fig.~\ref{fig:inne}. For these materials, however, only the emissions related to the ground 1$s$ state and the first excited 2$s$ state of the A exciton are observed over the whole range of applied magnetic fields.
This is in contrast to the WSe$_2$ monolayer studied in the main text, in which the emission from higher excitonic states becomes more clearly visible at larger magnetic fields. \section{D\lowercase{ependence of excitonic diamagnetic coefficients in} WS\lowercase{e}$_2$ \lowercase{monolayer}\label{sigma}} To test our assumption that the excitonic spectrum in the WSe$_2$ ML encapsulated in hBN resembles that of a 3D hydrogen atom ($\sim-1/n^2$), we investigate the dependence of the obtained diamagnetic coefficients $\sigma$ of the excitonic states, $ns$, in this system. We found theoretically (see Eq.~\ref{eq:diamag} in the SM) that the mean value of $r^2$ calculated with the aid of our Kratzer-potential approach approximates that of the 3D hydrogen atom, for which the $r^2$ parameters of the excitonic states scale with $n^4$. With the aid of Eq.~\ref{eq:diamag} and $\sigma=(e r)^2/(8 \mu)$, we calculate theoretical $\sigma$ values of excitons for the WSe$_2$ ML encapsulated in hBN with the parameters $\mu$=0.2~$m_0$~\cite{stierSM} and $\varepsilon=4.5$~\cite{geickSM}. The theoretical values are compared with the experimental diamagnetic coefficients in Fig.~\ref{fig:fig_diamag}. The theoretical dependence fits the experimental data very well up to the 4$s$ state. This additionally confirms that the excitonic spectrum of the WSe$_2$ monolayer encapsulated in hBN corresponds to that of a 3D hydrogen atom. The apparent discrepancy between theory and experiment for the 5$s$ state results, in our opinion, from the narrow range of the low-field limit, which affects the determined $\sigma$ value for this state. \begin{figure}[h] \centering \includegraphics[width=0.35\linewidth]{Fig_S4.eps}% \caption{Diamagnetic coefficients $\sigma$ for the excitonic states as a function of $n^4$, obtained (crosses) experimentally from the analysis performed in the main text and (circles) theoretically using Eq.~\ref{eq:diamag}.
The black line connects the theoretical points as a guide to the eye.} \label{fig:fig_diamag} \end{figure} \section{L\lowercase{ow temperature reflectance contrast} \lowercase{spectra of} S-TMD \lowercase{monolayers}\label{rc}} The low temperature RC spectra of WSe$_2$, MoS$_2$, WS$_2$, and MoSe$_2$ encapsulated in hBN are presented in Fig.~\ref{fig:fig_S1}. We define the RC spectrum as $RC(E)=\frac{R(E)-R_0(E)}{R(E)+R_0(E)}\times 100\%$, where $R(E)$ and $R_0(E)$ denote, respectively, the reflectance of the dielectric stack composed of a monolayer encapsulated in hBN supported by a bare Si substrate, and that of the two hBN layers alone on top of the Si substrate. Note that the presented spectra correspond to the PL ones shown in Fig.~4 in the main text. First, the spectra display two pronounced resonances, labelled 1$s_{\text{A,B}}$, which arise from the ground states of the so-called A and B excitons~\cite{liSM,aroraWSe2SM,arorMoSe2SM,molasSM}. In addition, less pronounced features, labelled 2$s_{\text{A,B}}$ and 3$s_{\text{A}}$, appear about 200~meV higher in energy than the 1$s_{\text{A,B}}$ ones. The assignment of the 2$s_{\text{A}}$ and 3$s_{\text{A}}$ features to the first and the second excited state of the A exciton is straightforward and is consistent with many other investigations of S-TMD MLs encapsulated in hBN~\cite{stierSM,robertSM,hanSM,slobodeniukSM,gerberSM}. The origin of the 2$s_{\text{B}}$ feature is less clear, as it has not been reported so far. Due to the similar energy separation between 1$s_{\text{B}}$ and 2$s_{\text{B}}$ as compared with that between 1$s_{\text{A}}$ and 2$s_{\text{A}}$, we tentatively ascribe the 2$s_{\text{B}}$ feature to the first excited state of the B exciton, which, however, requires further investigation. \begin{figure*}[h] \centering \includegraphics[width=0.75\linewidth]{Fig_S5.eps} \caption{Low temperature RC spectra of S-TMD monolayers measured at $T$=5~K.
The spectral regions around the 1$s_{\text{A}}$ resonance are scaled and shifted for clarity.} \label{fig:fig_S1} \end{figure*} \newpage \section{E\lowercase{stimation of the band-gap energy in} M\lowercase{o}S\lowercase{e}$_2$ \lowercase{monolayer}\label{bandgap}} \begin{figure}[h] \centering \includegraphics[width=0.36\linewidth]{Fig_S6.eps} \caption{Low temperature photoluminescence spectrum of the MoSe$_2$ monolayer at $T$=5~K, limited to the high-energy PL signal around the 2$s$ line of the A exciton. The blue, green, and red curves display fits of Gaussian profiles to the corresponding 2$s_{\text{A}}$, $ns_{\text{A}}$, and 1$s_{\text{B}}$ lines. The pink vertical arrow denotes the estimated band-gap energy $E_g$.} \label{fig:fig_S2} \end{figure} As discussed in the main text, the estimation of the band-gap energy is essential for our analysis of the excitonic ladder in S-TMD MLs. This estimation can easily be carried out for WSe$_2$, MoS$_2$, and WS$_2$ MLs; the issue is, however, more complex for the MoSe$_2$ one. This is because the PL related to the ground state of the B exciton, $1s_{\text{B}}$, appears in the spectral range of the emissions related to the $2s$ and higher $ns$ states of the A exciton, labelled $2s_{\text{A}}$ and $ns_{\text{A}}$ in Fig.~\ref{fig:fig_S2}. To determine the band-gap energy of the MoSe$_2$ ML, we use the procedure described in the main text: the PL intensity at the band-gap energy of the WSe$_2$ ML equals 5$\%$ of the intensity of the 2$s$ exciton PL peak. In order to apply the same approach to the MoSe$_2$ ML, we deconvolute the spectrum shown in Fig.~\ref{fig:fig_S2} into three Gaussian profiles (the PL due to the $ns_{\text{A}}$ and $1s_{\text{B}}$ lines is resolved spectrally). We set the linewidth of the $1s_{\text{B}}$ emission equal to 28~meV, as obtained from the upconversion PL spectrum of a MoSe$_2$ ML reported in Ref.~\citenum{hanSM}. The result of the procedure is presented in Fig.~\ref{fig:fig_S2}.
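This deconvolution-plus-threshold procedure can be illustrated with synthetic data. The sketch below uses hypothetical peak positions, amplitudes, and widths (not the measured ones); it fits three Gaussians to a noisy composite spectrum and reads off the energy above the 2$s$ maximum at which the fitted PL drops to 5\% of that maximum:

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(E, A, E0, w):
    return A * np.exp(-0.5 * ((E - E0) / w) ** 2)

def spectrum(E, *p):
    # Sum of three Gaussians: 2s_A, 1s_B, and a weak higher-ns_A component
    return gauss(E, *p[0:3]) + gauss(E, *p[3:6]) + gauss(E, *p[6:9])

# Synthetic "measured" spectrum; positions/amplitudes/widths are hypothetical.
E = np.linspace(1.75, 1.95, 2000)
true_p = [1.0, 1.800, 0.010, 0.6, 1.815, 0.014, 0.3, 1.840, 0.012]
y = spectrum(E, *true_p) + np.random.default_rng(0).normal(0.0, 0.002, E.size)

popt, _ = curve_fit(spectrum, E, y, p0=[1, 1.80, 0.01, 0.5, 1.82, 0.01, 0.3, 1.84, 0.01])

# Band-gap estimate: energy above the fitted maximum where the PL
# falls to 5% of that maximum.
fit = spectrum(E, *popt)
i2s = int(np.argmax(fit))
above = E > E[i2s]
Eg = E[above][np.argmax(fit[above] < 0.05 * fit[i2s])]
print(f"estimated E_g = {Eg:.3f} eV")
```

The same threshold rule applied to the real deconvoluted spectrum yields the value quoted below.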
The band-gap energy in the MoSe$_2$ ML determined using this procedure equals 1.861~eV and is marked in Fig.~\ref{fig:fig_S2} with a pink arrow. \section{A\lowercase{pplication of the proposed model to the data available in the literature}\label{test}} In the main text of this report, we have clearly demonstrated that the simple $E_{ns}=E_g-Ry^*/(n+\delta)^2$ ansatz can be successfully applied to reproduce the energy ladder of excitonic $s$-resonances in the WSe$_2$ monolayer encapsulated in hBN. Recently, such ladders have also been extrapolated from observations of series of excitonic resonances in very high magnetic fields for other monolayers encapsulated in hBN, namely MoS$_2$, WS$_2$, and MoSe$_2$~\cite{gorycaSM}. Moreover, the observation of spectral series of $s$-type excitons has also been inferred from a refined analysis of the reflectance spectra in early reports on monolayer WS$_2$ deposited on a Si/SiO$_2$ substrate~\cite{chernikovSM}. \begin{figure}[h] \centering \includegraphics[width=0.92\linewidth]{goryca.eps}% \caption{Experimental transition energies for the exciton states as a function of their index, $n$, measured on S-TMD monolayers encapsulated in hBN flakes. The black curves show fits to the data with the model described by Eq.~1 in the main text. The experimental data are taken from Ref.~\citenum{gorycaSM}.} \label{fig:fig_S5} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.322\linewidth]{chernikov.eps}% \caption{Experimental transition energies for the exciton states as a function of their index, $n$, measured on WS$_2$ monolayers exfoliated onto a Si/SiO$_2$ substrate. The black curve shows a fit to the data with the model described by Eq.~1 in the main text. The experimental data are taken from Ref.~\citenum{chernikovSM}.} \label{fig:fig_S3} \end{figure} It is interesting to test the applicability of our formula against the data reported in the above references.
Thus, the energies of the $s$-excitonic resonances, extracted from Refs.~\citenum{gorycaSM} and \citenum{chernikovSM}, have been fitted with the $E_{ns}=E_g-Ry^*/(n+\delta)^2$ formula, treating $E_g$, $Ry^*$, and $\delta$ as independent, adjustable parameters. As shown in Figs.~\ref{fig:fig_S5} and \ref{fig:fig_S3}, our formula fits the data for monolayers encapsulated in hBN perfectly and also describes well the data for the WS$_2$ on Si/SiO$_2$ system. The extracted $E_g^{\text{model}}$, $\delta^{\text{model}}$ and $E_b^{\text{model}}$ parameters ($E_b= Ry^*/(1+\delta)^2$) are listed in Table~\ref{tab:model}. The corresponding energy values $E_g^{\text{data}}$ and $E_b^{\text{data}}$, originally reported in Refs.~\citenum{chernikovSM} and \citenum{gorycaSM}, are in perfect agreement with our findings. According to our preceding discussion (see section \ref{sec:S3}), the applicability of our simple approach to the case of monolayers in hBN is well understood. The fact that our formula is operational for the data for the WS$_2$ on Si/SiO$_2$ structure is surprising. As a matter of fact, these data follow our formula only in a sort of ``effective'' way. Describing these data within our approach, and notably within the Rytova-Keldysh formalism of Ref.~\citenum{chernikovSM} as well, implies the use of rather unrealistic structure parameters, {\it e.g.}, an underestimated dielectric constant ($\varepsilon=1$) and an overestimated screening radius ($r_0=75\,\mbox{{\AA}}$). \begin{center} \begin{table*}[t] \centering \begin{tabular}{cccccccccc} \hline \multicolumn{1}{c}{Publication} & \multicolumn{1}{|c}{S-TMD ML} & \multicolumn{1}{|c}{top medium} & \multicolumn{1}{|c}{bottom medium} & \multicolumn{1}{|c}{$E^{\text{data}}_g$ (eV)} & \multicolumn{1}{|c}{$E^{\text{model}}_g$ (eV)} & \multicolumn{1}{|c}{$E^{\text{data}}_b$ (meV)} & \multicolumn{1}{|c}{$E^{\text{model}}_b$ (meV)} & \multicolumn{1}{|c}{$\delta^{\text{model}}$} \\ \hline \hline \multicolumn{1}{c}{Ref.
\citenum{gorycaSM} } & \multicolumn{1}{|c}{MoS$_2$} & \multicolumn{1}{|c}{hBN} & \multicolumn{1}{|c}{hBN} & \multicolumn{1}{|c}{2.16} & \multicolumn{1}{|c }{2.16} & \multicolumn{1}{|c }{221} & \multicolumn{1}{|c }{223} & \multicolumn{1}{|c}{-0.063} \\ \multicolumn{1}{c}{Ref. \citenum{gorycaSM} } & \multicolumn{1}{|c}{WS$_2$} & \multicolumn{1}{|c}{hBN} & \multicolumn{1}{|c}{hBN} & \multicolumn{1}{|c}{2.238} & \multicolumn{1}{|c }{2.24} & \multicolumn{1}{|c }{180} & \multicolumn{1}{|c }{180} & \multicolumn{1}{|c}{-0.11} \\ \multicolumn{1}{c}{Ref. \citenum{gorycaSM} } & \multicolumn{1}{|c}{MoSe$_2$} & \multicolumn{1}{|c}{hBN} & \multicolumn{1}{|c}{hBN} & \multicolumn{1}{|c}{1.874} & \multicolumn{1}{|c }{1.87} & \multicolumn{1}{|c }{231} & \multicolumn{1}{|c }{232} & \multicolumn{1}{|c}{0.044} \\ \hline \hline \multicolumn{1}{c}{Ref. \citenum{chernikovSM} } & \multicolumn{1}{|c}{WS$_2$} & \multicolumn{1}{|c}{air} & \multicolumn{1}{|c}{Si/SiO$_2$} & \multicolumn{1}{|c}{2.41} & \multicolumn{1}{|c }{2.42} & \multicolumn{1}{|c }{320} & \multicolumn{1}{|c }{328} & \multicolumn{1}{|c}{1.58} \\ \hline \end{tabular} \caption{Series of fitting parameters ($E^{\text{model}}_g$, $E^{\text{model}}_b$ and $\delta^{\text{model}}$) obtained from the analysis of data available in the literature~\cite{chernikovSM,gorycaSM} compared with the reported ones ($E^{\text{data}}_g$ and $E^{\text{data}}_b$).} \label{tab:model} \end{table*} \end{center}
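The three-parameter fit described above can be reproduced with a standard least-squares routine. A minimal sketch using a synthetic series generated from the MoS$_2$ entries of Table~\ref{tab:model} ($E_g=2.16$~eV, $E_b=221$~meV, $\delta=-0.063$, hence $Ry^*=E_b(1+\delta)^2\approx194$~meV):

```python
import numpy as np
from scipy.optimize import curve_fit

def ladder(n, Eg, Ry, delta):
    # E_ns = E_g - Ry*/(n + delta)^2  (Eq. 1 of the main text)
    return Eg - Ry / (n + delta) ** 2

# Synthetic series from the MoS2 parameters of the table (Eg, Ry in eV), 0.1 meV noise
n = np.arange(1, 6, dtype=float)
E = ladder(n, 2.16, 0.194, -0.063) + np.random.default_rng(1).normal(0.0, 1e-4, n.size)

popt, _ = curve_fit(ladder, n, E, p0=[2.2, 0.2, 0.0])
Eg, Ry, delta = popt
Eb = Ry / (1.0 + delta) ** 2  # binding energy implied by the fit
print(f"E_g = {Eg:.3f} eV, E_b = {1e3*Eb:.0f} meV, delta = {delta:.3f}")
```

With only five states and three free parameters, $\delta$ and $Ry^*$ are strongly correlated, so clean input energies (or more states) are needed for a stable fit.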
\section{Introduction} The study of exclusive processes in quantum chromodynamics (QCD), where intact hadrons are explicitly measured in the final state, provides important insights into the mechanisms of confinement and into the dynamics of hadronic bound states \cite{Brodsky:2003dq,Brodsky:2004wx}. Among the multitude of exclusive processes, two-photon annihilation into baryon-antibaryon pairs is particularly interesting, because it is one of the simplest calculable large-angle hadronic scattering reactions involving two hadrons. Therefore, $\gamma \gamma \rightarrow \mbox{B} \bar{\mbox{B}}$ has recently received considerable experimental \cite{exp} and theoretical \cite{theor,Berger:2002vc,Karliner:2001ut} attention. In a recent paper \cite{Berger:2002vc} we studied baryon pair production in two-photon collisions for baryons belonging to the lowest-lying flavor octet. In the present note we extend our work to reactions involving spin-$3/2$ decuplet baryons. Previously, two-photon annihilation into decuplet baryons has been studied in Refs. \cite{Farrar:gv,Anselmino:1987gu,Farrar:1988vz,Karliner:2001ut, Karliner:2002nk} within different frameworks and with differing conclusions. Thus an experimental analysis could shed light on the relative importance of the underlying mechanisms considered here and in the aforementioned references. Our model is a modification of the perturbative hard-scattering picture (HSP) for exclusive processes \cite{Lepage:1980fj,Efremov:1979qk}. While the HSP is exactly valid only at asymptotically large momentum transfer, the interplay of perturbatively calculable and nonperturbative effects renders theoretical analyses quite intricate at energies where data are currently available. In order to parameterize such possible non-perturbative effects within a perturbative framework, an effective formalism was developed in Ref.~\cite{Anselmino:1987vk}, where baryons are treated as quark-diquark systems.
In the sequel this model has been successfully applied to a variety of exclusive reactions \cite{Berger:2002vc,Anselmino:1987gu,Jakob:1993th,prediqu1,prediqu2, Berger:1999gx}. In the following, we start with a brief review of the quark-diquark model. Then we go on to describe the new ingredients necessary for the study of processes involving decuplet baryons. In Sec.~\ref{sec:results} we present and discuss model predictions with emphasis on the $\Delta$ cross sections, for which experimental upper bounds are available~\cite{Argus}. Following concluding remarks, supplementary analytical expressions for the scattering amplitudes are tabulated in the Appendix. \section{Exclusive Reactions in the \\ Quark-Diquark Picture} Here we briefly summarize the modified hard-scattering formalism with diquarks, and elaborate on the aspects specific to the treatment of decuplet baryons. For a full account of all details we refer to our recent work \cite{Berger:2002vc,Berger:1999gx}. \subsection{Review of the Model}\label{sec:hsp} As in the conventional hard-scattering picture \cite{Lepage:1980fj,Efremov:1979qk}, an exclusive reaction amplitude $\mathcal{M}$ is convolutively factorized into a process-dependent, perturbative hard-scattering amplitude $\hat{T}$ and process-independent, non-perturbative distribution amplitudes $\Psi$. The latter are probability amplitudes for finding the pertinent valence Fock states, here quarks and diquarks, in the scattering hadrons. The amplitude for two-photon annihilation into a baryon-antibaryon pair is given by \begin{eqnarray} \overline{{\mathcal{M}}}_{\{\lambda\} }\! \left(\hat{s}, \hat{t}\right) \! & = & \!\!\! \int\limits_0^1 \!\! d x_1 \!\! \int\limits_0^1 \!\! d y_1 \Psi_{\mbox{\scriptsize B}}^\dagger \left(x_1\right) \Psi_{\overline{\mbox{\scriptsize B}}}^\dagger \left(y_1\right) \hat{T}_{\{\lambda\}\!\! 
} \left(x_1, y_1; \hat{s}, \hat{t}\right), \nonumber \\ \label{HSP} \end{eqnarray} where Lorentz and color indices are suppressed for convenience. Furthermore, the dependence on renormalization and factorization scales is neglected since we are only interested in a rather restricted range of momentum transfer. The subscript $\left\{ \lambda \right\}$ denotes all possible configurations of photon and baryon helicities. In the following we use the label $\mathrm{B}$ to denote spin-1/2 octet baryons and $\mbox{B}_{10}$ to label spin-3/2 decuplet baryons. For the process $\gamma \gamma \rightarrow \mbox{B}_{10} \overline{\mbox{B}}_{10}$, there are 19 independent helicity amplitudes, $\overline{\mathcal{M}}_{\lambda_{{\mbox{\scriptsize B}_{10}}},\,\lambda_{\overline{\mbox{\scriptsize B}}_{10}}; \,\lambda_1,\,\lambda_2}$, where the $\lambda_{{\mbox{\scriptsize B}_{10}}},\,\lambda_{\overline{\mbox{\scriptsize B}}_{10}}$ are the helicities of the outgoing baryon and antibaryon, respectively, and $\lambda_{1},\,\lambda_{2}$ label the helicities of the two photons. Only 13 out of these 19 helicity amplitudes involve a zero or single flip of the hadronic helicity. Double flip amplitudes vanish in our approach. 
We use the following convention for the nonvanishing amplitudes: \begin{samepage} \begin{equation} \begin{array}{lcl} \overline{\phi}_1 = \overline{\mathcal{M}}_{-\frac{1}{2},\,\frac{1}{2};\,1,\,-1}, & \quad & \overline{\phi}_7 = \overline{\mathcal{M}}_{-\frac{1}{2},\,\frac{3}{2};\,1,\,-1}, \\ \overline{\phi}_2 = \overline{\mathcal{M}}_{-\frac{1}{2},\,-\frac{1}{2};\,1,\,1}, & \quad & \overline{\phi}_8 = \overline{\mathcal{M}}_{\frac{1}{2},\,-\frac{3}{2};\,1,\,1}, \\ \overline{\phi}_3 = \overline{\mathcal{M}}_{\frac{1}{2},\,-\frac{1}{2};\,1,\,1}, & \quad & \overline{\phi}_9 = \overline{\mathcal{M}}_{\frac{1}{2},\,-\frac{3}{2};\,1,\,-1}, \\ \overline{\phi}_4 = \overline{\mathcal{M}}_{\frac{1}{2},\,\frac{1}{2};\,1,\,-1}, & \quad & \overline{\phi}_{10} = \overline{\mathcal{M}}_{-\frac{1}{2},\,\frac{3}{2};\,1,\,1}, \\ \overline{\phi}_5 = \overline{\mathcal{M}}_{\frac{1}{2},\,-\frac{1}{2};\,1,\,-1}, & \quad & \overline{\phi}_{11} = \overline{\mathcal{M}}_{-\frac{3}{2},\,\frac{3}{2};\,1,\,-1}, \\ \overline{\phi}_6 = \overline{\mathcal{M}}_{\frac{1}{2},\,\frac{1}{2};\,1,\,1}, & \quad & \overline{\phi}_{12} = \overline{\mathcal{M}}_{\frac{3}{2},\,-\frac{3}{2};\,1,\,1}, \\ & \quad & \overline{\phi}_{13} = \overline{\mathcal{M}}_{\frac{3}{2},\,-\frac{3}{2};\,1,\,-1}. \label{annamps} \end{array} \end{equation} \end{samepage} Other helicity configurations are related to these via parity and/or time-reversal invariance. Our normalization of the amplitudes is such that the differential cross section for two-photon annihilation into decuplet baryons is given by \begin{equation} \frac{d \sigma}{d t} = \frac{1}{64 \pi s^2} \sum_{\left\{\lambda\right\}} \left| \overline{\mathcal{M}}_{\left\{\lambda\right\}} \right|^2, \end{equation} where the sum is over all possible helicity configurations $\{\lambda\}$. In (\ref{HSP}), $\hat{T}$ consists of all possible tree diagrams that contribute to the elementary scattering process $\gamma \gamma \rightarrow q D \bar{q} \bar{D}$.
The momenta carried by quarks $q$ and diquarks $D$ are assumed to be collinear to those of their parent hadrons, $\mbox{B}$. The quark and antiquark carry momentum fractions $x_1$ and $y_1$ in the baryon and antibaryon, respectively, while the diquark and antidiquark carry momentum fractions $x_2 = 1-x_1$ and $y_2 = 1-y_1$, respectively. Since we assume that every baryonic constituent has a four-momentum $x \, p_{\mbox{\scriptsize B}}$ proportional to the four-momentum of its parent hadron $p_{\mbox{\scriptsize B}}$ \cite{Anselmino:vs}, it acquires an effective mass $x m_{\mbox{\scriptsize B}}$, where $m_{\mbox{\scriptsize B}}$ denotes the baryon mass. These effective masses are taken into account for all internal and external legs of the Feynman diagrams contributing to the hard-scattering amplitude $\hat{T}$. The hard-scattering amplitude is then expanded in powers of the small parameter $(m_{\mbox{\scriptsize B}}/\sqrt{s} )$ up to next-to-leading order, at fixed center-of-mass scattering angle $\hat{\theta}$. The result is reexpressed in terms of massless Mandelstam variables, $\hat{s}$, $\hat{t}$, and $\hat{u}$ which are obtained from the usual massive Mandelstam variables, $s,\,t,\,u$, again by expansion in $(m_{\mbox{\scriptsize B}}/\sqrt{\hat{s}} )$. In the hard scattering diagrams, the composite nature of the diquarks is taken into account by diquark form factors. These are parameterized such that asymptotically the scaling behavior of the pure quark HSP emerges. The complete parameterization of the model, including form factors and octet-baryon wave functions can be found in \cite{Berger:2002vc}. These parameters were fixed in \cite{Jakob:1993th} by fitting elastic electron-nucleon scattering data. With the same set of parameters a variety of other processes has been computed, and the results have successfully met experimental comparison \cite{Berger:2002vc,Jakob:1993th,prediqu2,Berger:1999gx}. 
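The expansion of the massive Mandelstam variables can be illustrated numerically. A small sketch (with hypothetical kinematics) compares the massive $t$ for two-body $\gamma\gamma\rightarrow \mbox{B}\bar{\mbox{B}}$ kinematics with its massless counterpart $\hat{t}=-\hat{s}(1-\cos\hat{\theta})/2$, showing that the difference is ${\mathcal{O}}(m_{\mbox{\scriptsize B}}^2)$:

```python
import numpy as np

# gamma gamma -> B Bbar two-body kinematics (hypothetical numbers for illustration)
def t_massive(s, m, cth):
    beta = np.sqrt(1.0 - 4.0 * m**2 / s)   # baryon velocity in the CM frame
    return m**2 - 0.5 * s * (1.0 - beta * cth)

def t_hat(s, cth):
    return -0.5 * s * (1.0 - cth)          # massless limit

s, cth = 9.0, 0.5                          # W = 3 GeV, theta = 60 deg
for m in (1.232, 0.5, 0.1):                # GeV; 1.232 GeV is the Delta mass
    dt = t_massive(s, m, cth) - t_hat(s, cth)
    print(f"m = {m:5.3f} GeV:  t - t_hat = {dt:+.4f} GeV^2,"
          f"  m^2(1 - cos theta) = {m**2 * (1.0 - cth):.4f}")
```

The difference approaches $m^2(1-\cos\hat{\theta})$ as $m^2/s\rightarrow 0$, consistent with the expansion described above; for $m=m_\Delta$ at $W=3$~GeV the higher-order terms are clearly still sizeable.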
\subsection{Decuplet Baryons}\label{decuplet} The diquark model comprises spin-0 (scalar) and spin-1 (vector) diquarks. While both scalar (S) and vector (V) diquarks contribute to processes involving spin-$1/2$ octet baryons, the valence Fock states of spin-$3/2$ decuplet baryons consist only of quarks and vector diquarks. We recall that the valence Fock state of an octet baryon $\mathrm{B}$ with mass $m_{\mbox{\scriptsize B}}$, momentum $p_{\mbox{\scriptsize B}}$, and helicity $\lambda$ can be described by the following quark-diquark wave function \begin{equation} \Psi_{\mbox{\scriptsize B}}(p_{\mbox{\scriptsize B}}, x, \lambda) = f_{S}^{\mbox{\scriptsize B}} \Phi_{S}^{\mbox{\scriptsize B}} (x) \chi_{S}^{\mbox{\scriptsize B}} \,u(p_{\mbox{\scriptsize B}},\lambda) + f_{V}^{\mbox{\scriptsize B}} \Phi_{V}^{\mbox{\scriptsize B}}(x) \chi_{V}^{\mbox{\scriptsize B}} \frac{1} { \sqrt{3}}\left( \gamma^\mu + \frac{p_{\mbox{\scriptsize B}}^\mu }{m_{\mbox{\scriptsize B}} } \right) \gamma_5\,u(p_{\mbox{\scriptsize B}},\lambda) \label{wave8}\end{equation} when transverse momenta of the constituents are neglected. $x$ is the longitudinal momentum fraction of the quark, whereas the diquark carries the longitudinal momentum fraction $1-x$. 
Analogously, the wave function of a decuplet baryon may be written as \begin{equation} \Psi_{{\mbox{\scriptsize B}_{10}}}^\mu (p_{{\mbox{\scriptsize B}_{10}}},x,\lambda) = f_V^{{\mbox{\scriptsize B}_{10}}} \Phi_V^{{\mbox{\scriptsize B}_{10}}}(x) \chi_V^{{\mbox{\scriptsize B}_{10}}} u^\mu\left(p_{{\mbox{\scriptsize B}_{10}}},\lambda\right), \label{wave} \end{equation} with the Rarita-Schwinger spinors \cite{Rarita:mf} \begin{eqnarray} u^\mu \left(p,\lambda = \pm 3/2 \right) & = & \varepsilon^\mu\left(\pm 1 \right) \,u\left(p,\lambda = \pm 1/2 \right) \, ,\nonumber \\ u^\mu\left(p,\lambda = \pm 1/2 \right) & = & \left[\sqrt{\frac{3}{2}} \varepsilon^\mu(0) - \frac{2 \lambda}{\sqrt{6}} \left( \gamma^\mu + \frac{p^\mu}{m_{{\mbox{\scriptsize B}_{10}}}} \right) \gamma_5 \right] u(p, \lambda). \end{eqnarray} Recall that all Lorentz indices have been suppressed in Eq.~(\ref{HSP}); the open index $\mu$ of the vector-diquark polarization vector in (\ref{wave8}) and (\ref{wave}) is contracted appropriately in the convolution integral (\ref{HSP}). $\chi_D^{\mbox{\scriptsize B}}$, $\chi_D^{{\mbox{\scriptsize B}_{10}}}$ ($D=S,V$) denote the pertinent SU(3) quark-diquark flavor wave functions, and $\Phi_D^{\mbox{\scriptsize B}}$, $\Phi_D^{{\mbox{\scriptsize B}_{10}}}$ represent the nonperturbative probability amplitudes for finding these constituents with momentum fractions $x$ and $1-x$, respectively, in the (decuplet) baryon. These probability amplitudes are normalized such that \begin{equation} \int\limits_0^1 dx \Phi_D^{\mbox{\scriptsize B}}(x) = 1, \label{DAnorm} \end{equation} and analogously for $\Phi_D^{{\mbox{\scriptsize B}_{10}}}$. The constants $f_D^{\mbox{\scriptsize B}}$, $f_D^{{\mbox{\scriptsize B}_{10}}}$ result from integrating out intrinsic transverse momenta in the full wave function to produce Eqs. (\ref{wave8}) and (\ref{wave}), respectively.
The numerical values of $f_D^{\mbox{\scriptsize B}}$ and $f_D^{{\mbox{\scriptsize B}_{10}}}$ are furthermore determined by the overall probability of finding the $|qD\rangle$-state in the baryon $\mathrm{B}$ or decuplet baryon $\mbox{B}_{10}$, respectively. For unbroken SU(6) spin-flavor symmetry octet- and decuplet baryon wave functions are related, specifically, $\Phi_{S}^{\mbox{\scriptsize B}}=\Phi_{V}^{\mbox{\scriptsize B}}=\Phi_{V}^{\mbox{\scriptsize B}_{10}}$ and $f_{S}^{\mbox{\scriptsize B}}=f_{V}^{\mbox{\scriptsize B}}=f_{V}^{\mbox{\scriptsize B}_{10}}/\sqrt{2}$. In the actual parameterization of the diquark model~\cite{Jakob:1993th} the asymptotic SU(6) symmetry is systematically broken down to SU(3) flavor symmetry. Thus the above SU(6) relations are by no means satisfied, and $\Phi_{S}^{\mbox{\scriptsize B}}$ and $\Phi_{V}^{\mbox{\scriptsize B}}$ as well as $f_{S}^{\mbox{\scriptsize B}}$ and $f_{V}^{\mbox{\scriptsize B}}$ have quite different values. Since SU(6) symmetry is thus already broken within the baryon octet we cannot use SU(6) symmetry for deriving quark-diquark wave functions of decuplet baryons. Instead, we will apply another strategy to fix $\Phi_{V}^{\mbox{\scriptsize B}_{10}}$ and $f_{V}^{\mbox{\scriptsize B}_{10}}$. The lowest moments of three-quark wave functions of octet and decuplet baryons are restricted by QCD sum rules~\cite{Farrar:1988vz,Chernyak:1989}. Model wave functions that satisfy the QCD sum-rule constraints (for a typical factorization scale of about $1$~GeV) are very asymmetric in the longitudinal momentum fractions $x_i,\,i=1,2,3$ of the quarks for octet baryons and nearly symmetric ($\sim x_1 x_2 x_3$) for the $\Delta$s and the $\Omega$~\cite{Farrar:1988vz}. 
By regrouping terms in the three-quark wave function such that, for example, quarks 2 and 3 are in a specific spin-flavor state, and by integrating over one of the momentum fractions of the two quarks that build up this \lq\lq diquark\rq\rq, we can convert the three-quark wave function into a quark-diquark wave function that nearly has the form (\ref{wave8}) or (\ref{wave}) for octet or decuplet baryons, respectively. For more information on this conversion we refer to \cite{Anselmino:1987gu}. The probability amplitudes $\Phi_{V}^{\mbox{\scriptsize B}}$ and $\Phi_{V}^{{\mbox{\scriptsize B}_{10}}}$ for general three-quark wave functions are different in the cases of helicity-0 and helicity-1 V diquarks. For the octet and decuplet model wave functions that we employ, this difference turns out to be negligible. We then arrive at Eq.~(\ref{wave8}) or (\ref{wave}), respectively. We apply the above procedure to the three-quark wave function of the $\Delta$ that has been proposed in Ref.~\cite{Farrar:1988vz} based on QCD sum-rule constraints. We obtain the following quark-diquark wave function for a $\Delta$ with helicity $\pm1/2$ \begin{equation} \Phi_{V}^{\Delta, |\lambda|=1/2}(x) = N x (1-x)^3 (1 - 2.95 x + 3.86 x^2) \exp \left\{-b^2 \left[ \frac{m_q^2}{x} + \frac{m_V^2 }{1-x} \right] \right\} \, . \label{huangfull} \end{equation} Analogous to the standard parameterization of the diquark model for octet baryons~\cite{Jakob:1993th}, we have introduced an additional exponential factor that damps the end-point regions $x\rightarrow 0,1$. Such an exponential factor results if the transverse-momentum dependence of the full wave function, which is integrated over, is assumed to be of Gaussian form. The parameters $b^2=0.248$~GeV$^{-2}$, $m_q=0.33$~GeV, and $m_V=0.58$~GeV are taken to be the same as for octet baryons. The normalization factor $N$ is determined by Eq. (\ref{DAnorm}).
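The normalization $N$ can be fixed by a simple numerical quadrature. A sketch (masses in GeV; the integration bounds sit slightly inside $[0,1]$ to avoid the $1/x$ and $1/(1-x)$ poles, where the integrand vanishes anyway):

```python
import numpy as np
from scipy.integrate import quad

# Unnormalized Delta DA of Eq. (huangfull); b^2 = 0.248 GeV^-2, masses in GeV
b2, mq, mV = 0.248, 0.33, 0.58

def phi_unnorm(x):
    return (x * (1 - x) ** 3 * (1 - 2.95 * x + 3.86 * x**2)
            * np.exp(-b2 * (mq**2 / x + mV**2 / (1 - x))))

I, _ = quad(phi_unnorm, 1e-8, 1 - 1e-8)
N = 1.0 / I   # enforces the normalization condition (DAnorm)

# Sanity checks: the normalized DA integrates to one; mean momentum fraction of the quark
norm, _ = quad(lambda x: N * phi_unnorm(x), 1e-8, 1 - 1e-8)
xmean, _ = quad(lambda x: x * N * phi_unnorm(x), 1e-8, 1 - 1e-8)
print(f"N = {N:.2f}, check norm = {norm:.6f}, <x> = {xmean:.3f}")
```

The polynomial $1-2.95x+3.86x^2$ has no real roots, so the distribution amplitude stays positive over the whole interval.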
The expression for $\Phi_{V}^{\Delta, |\lambda|=3/2}(x)$ differs, in general, from $\Phi_{V}^{\Delta, |\lambda|=1/2}(x)$. However, we refrain from quoting it here, because our explicit calculations show that the production of helicity-3/2 $\Delta$s is suppressed within the diquark model. The only remaining open parameter is now the normalization $f_{V}^{\Delta,|\lambda|=1/2}$ of the helicity-$1/2$ $\Delta$ wave function. Since the normalization $f_{V}^{\mbox{\scriptsize B}}$ of the octet-baryon wave function was taken as a free parameter in the diquark model, we normalize the $\Delta$ wave function relative to the proton wave function. This means that we convert the three-quark wave functions for the proton and the $\Delta$ into quark-diquark wave functions of the form (\ref{wave8}) and (\ref{wave}), respectively, and consider the resulting ratio $f_{V}^{\Delta,|\lambda|=1/2}/f_{V}^{\mathrm{p}}$. For the QCD sum-rule based wave functions of Refs.~\cite{Farrar:1988vz} and \cite{Chernyak:1989} this ratio becomes $f_{V}^{\Delta,|\lambda|=1/2}/f_{V}^{\mathrm{p}}=0.898$. With $f_{V}^{\mathrm{p}}=127.7$~MeV, the value obtained in a fit of elastic electron-nucleon scattering data~\cite{Jakob:1993th}, we thus find \begin{equation} f_{V}^{\Delta,\,|\lambda|= 1/2} = 125.1 \mbox{ MeV} \, . \label{fVD} \end{equation} This completes the parameterization of our model for decuplet baryons. For the sake of completeness we quote the flavor wave functions entering (\ref{wave}) for the differently charged $\Delta$s: \begin{eqnarray} \chi_V^{\Delta^{++}} &=& u V_{\{uu\}} \, ,\nonumber\\ \chi_V^{\Delta^{+\phantom{+}}} &=& \left[ \sqrt{2}u V_{\{ud\}}+d V_{\{uu\}}\right]/\sqrt{3}\, , \nonumber \\ \chi_V^{\Delta^{0\phantom{+}}} &=& \left[ \sqrt{2}d V_{\{ud\}}+u V_{\{dd\}}\right]/\sqrt{3}\, ,\nonumber \\ \chi_V^{\Delta^{-\phantom{+}}} &=& d V_{\{dd\}}\, .
\label{flavorwf}\end{eqnarray} \section{Results} \label{sec:results} We list analytical results for the hard-scattering amplitudes $\hat{T}_{\{\lambda\} }$ contributing to $\gamma \gamma \rightarrow \mbox{B}_{10} \bar{\mbox{B}}_{10}$ in the Appendix. These results have been checked via crossing relations \cite{Bourrely:mr} against the separately computed amplitudes for the crossed process, Compton scattering $\gamma \mbox{B}_{10} \rightarrow \gamma \mbox{B}_{10}$. Comparing the spinor structure of the decuplet-baryon wave function (\ref{wave}) with that for octet baryons (\ref{wave8}), we find that the leading, non-flip, hard amplitudes for decuplet baryons with helicity 1/2 are related by a factor of 2 to those for octet baryons. From the analytical expressions we also observe that the hard-scattering amplitudes for decuplet baryons with helicity $\pm 3/2$ are suppressed by ${\mathcal{O}}(m_{{\mbox{\scriptsize B}_{10}}}^2/\hat{s})$ or higher, even if these amplitudes conserve the hadronic helicity or flip it by one unit. The only 4-point contribution that is not suppressed enters the helicity amplitude $\bar{\phi}_{2}$. In the numerical calculations this contribution, however, turns out to be nearly negligible. 5-point functions with both photons attaching to the diquark do not contribute at all, since these are also suppressed by ${\mathcal{O}}(m_{{\mbox{\scriptsize B}_{10}}}^2/\hat{s})$ or even higher. As a consequence of this observation, cross-section ratios of different decuplet-baryon channels can easily be estimated, provided that the corresponding probability amplitudes $\Phi_V^{{\mbox{\scriptsize B}_{10}}}$ are not too different. The cross-section ratios are then essentially determined by the corresponding charge-flavor factors $C_{\mathrm{cf}}^{(3)}$ (see Eq.(\ref{hsa})) and the wave function normalizations $f_V^{{\mbox{\scriptsize B}_{10}}}$.
For the $\Delta$-quartet $\Phi_V^{\Delta}$ and $f_V^{\Delta}$ are the same for all members due to isospin symmetry. From the flavor wave functions (\ref{flavorwf}) the charge-flavor factors $C_{\mathrm{cf}}^{(3)}$ are seen to be $4/9$, $3/9$, $2/9$, and $1/9$ for the $\Delta^{++}$, $\Delta^{+}$, $\Delta^{0}$, and $\Delta^{-}$, respectively. The cross section ratios become (approximately) \begin{equation} \sigma(\Delta^{++}): \sigma(\Delta^{+}): \sigma(\Delta^{0}): \sigma(\Delta^{-}) = 16 : 9 : 4 : 1 \, . \label{ratios}\end{equation} This is the first interesting prediction of the diquark model. In Fig.~\ref{fig:deltas} we show the integrated cross sections ($|\cos(\theta_{CM})|<0.6$, where $\theta_{CM}$ is the center-of-mass scattering angle) for the $\Delta$ channels. The plot exhibits numerical predictions obtained with the standard parameterization of the diquark model~\cite{Berger:2002vc} and the $\Delta$ wave function derived in Sec.~\ref{decuplet}. It confirms Eq.~(\ref{ratios}) within 1 percent. \begin{figure}[h!] \begin{center} \epsfig{file=deltas.eps,width=12cm,angle=0,clip=0} \caption{ Integrated cross sections for $\gamma \gamma \rightarrow \Delta^{++} \bar{\Delta}^{--}$ (solid line), $\Delta^{+} \bar{\Delta}^{-}$ (dotted), $\Delta^{0} \bar{\Delta}^{0}$ (dashed), $\Delta^{-} \bar{\Delta}^{+}$ (dash-dotted line) ($|\cos(\theta_{CM})|<0.6$) versus center-of-mass energy $W = \sqrt{s}$ predicted with the standard parameterization of the diquark model~\cite{Berger:2002vc} and the $\Delta$ DA defined in the text (see Eqs.~(\ref{wave}), (\ref{huangfull}), and (\ref{fVD})). \label{fig:deltas}} \end{center} \end{figure} This prediction is to be contrasted with the ratios $16:1:0:1$ that result if the photons couple to the total charge of the $\Delta$s. Also within the pure quark HSP the ratios for the $\Delta^{+}$ and the $\Delta^{0}$ channels differ from ours. 
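The ratios in Eq.~(\ref{ratios}) can be checked directly from the flavor wave functions (\ref{flavorwf}). The sketch below assumes, for illustration only (the full definition enters via Eq.~(\ref{hsa})), that $C_{\mathrm{cf}}^{(3)}$ is the probability-weighted squared quark charge of each quark-diquark component; this reading reproduces the quoted factors $4/9$, $3/9$, $2/9$, $1/9$:

```python
from fractions import Fraction as F

e = {'u': F(2, 3), 'd': F(-1, 3)}   # quark charges

# Quark content of the flavor wave functions (\ref{flavorwf}): (weight, quark)
delta = {
    '++': [(F(1), 'u')],
    '+':  [(F(2, 3), 'u'), (F(1, 3), 'd')],
    '0':  [(F(2, 3), 'd'), (F(1, 3), 'u')],
    '-':  [(F(1), 'd')],
}

# Illustrative reading of C_cf^(3): probability-weighted squared quark charge
ccf = {k: sum(w * e[q] ** 2 for w, q in comps) for k, comps in delta.items()}
ratios = [(ccf[k] / ccf['-']) ** 2 for k in ('++', '+', '0', '-')]

print('C_cf:', {k: str(v) for k, v in ccf.items()})   # 4/9, 1/3, 2/9, 1/9
print('sigma ratios:', [str(r) for r in ratios])      # 16 : 9 : 4 : 1
```

Squaring the charge-flavor factors immediately yields the $16:9:4:1$ pattern, since the wave function and its normalization are common to the whole quartet.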
Within the pure quark HSP the cross section ratios for the different $\Delta$ channels are predicted to be $\sigma(\Delta^{++}): \sigma(\Delta^{+}): \sigma(\Delta^{0}): \sigma(\Delta^{-}) \approx 16 : 2 : 1/3 : 1 $~\cite{Farrar:1988vz}. Note that all the above predictions agree with our result for the cross section ratio $\sigma(\Delta^{++}) : \sigma(\Delta^{-}) \approx 16:1 $. This result is also found in a more general QCD analysis~\cite{Karliner:2002nk}. However, yet another possible production mechanism via multi-pion intermediate states predicts $\sigma(\Delta^{++})= \sigma(\Delta^{-})$ and $\sigma(\Delta^{+})= \sigma(\Delta^{0})$~\cite{Karliner:2001ut}. An experimental determination of such cross section ratios could therefore provide important clues about the underlying production mechanisms, especially because in ratios of cross sections for different $\Delta$ channels the sensitivity to the specific form of the $\Delta$ wave function should be greatly reduced. If we assume SU(3)-flavor symmetry, that is, if we take the same $\Phi_V^{{\mbox{\scriptsize B}_{10}}}$ and $f_V^{{\mbox{\scriptsize B}_{10}}}$ for all decuplet baryons, we are also able to give estimates for the pair production of strange decuplet baryons. Aside from appropriate phase space factors, SU(3) symmetry implies \begin{eqnarray} \sigma(\Delta^{+}) & = & \sigma(\Sigma^{\ast +}) \, , \nonumber \\ \sigma(\Delta^{0}) & = & \sigma(\Sigma^{\ast 0}) = \sigma(\Xi^{\ast 0})\, , \nonumber \\ \sigma(\Delta^{-}) & = & \sigma(\Sigma^{\ast -}) = \sigma(\Xi^{\ast -}) = \sigma(\Omega^{\ast -}) \, . \end{eqnarray} However, since it is experimentally very difficult to measure pair-production cross sections for decuplet baryons, we refrain from giving quantitative results for the strange decuplet baryons. In the following we rather concentrate on the $\Delta^{++}$ channel, which might have the best chance of being measured due to its comparatively large cross section. \begin{figure}[h!] 
\begin{center} \epsfig{file=deltapp.eps,width=12cm,angle=0,clip=0} \caption{ Integrated cross section for $\gamma \gamma \rightarrow \Delta^{++} \bar{\Delta}^{--}$ ($|\cos(\theta_{CM})|<0.6$) versus $W = \sqrt{s}$ for the same $\Delta$ DA as in Fig.~\ref{fig:deltas}. The solid line corresponds to the full diquark-model calculation. The contribution to the cross section that comes from the hadronic-helicity conserving amplitudes $\bar{\phi}_{1}$ and $\bar{\phi}_{5}$ is represented by the dashed line. For comparison we also show the integrated cross section for $\gamma \gamma \rightarrow \mathrm{p} \bar{\mathrm{p}}$ (dotted line) calculated within the same model~\cite{Berger:2002vc}. The shaded boxes indicate experimental upper bounds as obtained by the ARGUS collaboration~\cite{Argus}.\label{fig:deltapp} } \end{center} \end{figure} In Fig.~\ref{fig:deltapp} we show for comparison with the $\gamma \gamma \rightarrow \Delta^{++} \bar{\Delta}^{--}$ cross section the $\gamma \gamma \rightarrow \mathrm{p} \bar{\mathrm{p}}$ cross section that we have obtained with the same parameterization~\cite{Berger:2002vc}. Surprisingly, we find that the $\Delta^{++}$ cross section is of the same order of magnitude as the proton cross section. This prediction seems to be very stable against (reasonable) changes of the $\Delta$ wave function. With a $\Delta$ wave function that satisfies the SU(6) relations $\Phi_{V}^{\Delta}=\Phi_{V}^{\mathrm{p}}$ and $f_{V}^{\Delta}=\sqrt{2} f_{V}^{\mathrm{p}}$ we obtain, for example, a result which is only about 20\% to 30\% smaller \footnote{In a previous attempt to estimate $\sigma(\Delta^{++})/\sigma(\mathrm{p})$ within a diquark model a ratio of $\approx 0.1$ was found~\cite{Anselmino:1987gu}. This, however, was obtained with an incomplete version of the diquark model, where V diquarks were not taken fully into account and mass effects have been neglected.}. 
Under the naive assumption that the photons couple directly to the charges of the baryons one would expect the $\Delta^{++}$ cross section to be about 16 times larger than the proton cross section. From the viewpoint of the pure quark hard-scattering picture, the ratio of $\gamma \gamma \rightarrow \Delta^{++} \bar{\Delta}^{--}$ to the $\gamma \gamma \rightarrow \mathrm{p} \bar{\mathrm{p}}$ cross section depends strongly on the choice of the proton wave function~\cite{Farrar:1988vz}. Not surprisingly, a result for the ratio comparable to ours is obtained with the QCD sum-rule wave functions of Refs.~\cite{Farrar:1988vz} and \cite{Chernyak:1989} for $\Delta$ and proton, respectively, which we have used in Sec.~\ref{decuplet} to derive and normalize our quark-diquark wave function of the $\Delta$. However, if the asymptotic wave function $\sim x_1 x_2 x_3$ is taken for both the proton and the $\Delta$, the cross section ratio $\sigma(\Delta^{++})/\sigma(\mathrm{p})$ can be as large as 50 within the pure quark HSP~\cite{Farrar:gv}. On the other hand, soliton models involving multi-pion channels predict a much smaller ratio \cite{Brodsky:2004wx,Karliner:2001ut}, comparable to our findings. An experimental determination of the ratio $\sigma(\Delta^{++})/\sigma(\mathrm{p})$ could therefore help to explore the importance of the various mechanisms that result in these quite different predictions. Unfortunately, it is very difficult experimentally to isolate the signal of the broad $\Delta^{++}$ resonance from the background and to disentangle the $\Delta^{++}$ and the $\Delta^{0}$ contributions in the $\gamma \gamma \rightarrow \mathrm{p} \bar{\mathrm{p}} \pi^+ \pi^-$ cross sections which are actually measured. Therefore only upper limits for the $\gamma \gamma \rightarrow \Delta^{++} \bar{\Delta}^{--}$ cross section have been extracted up to now by the ARGUS collaboration~\cite{Argus}. As can be seen in Fig. 
\ref{fig:deltapp}, our results lie well below these upper limits. More recent attempts to constrain the $\gamma \gamma \rightarrow \Delta^{++} \bar{\Delta}^{--}$ cross section using the data taken by the L3 group are afflicted with the same problems, but a preliminary assessment indicates compatibility with the ARGUS results and our predictions~\cite{Echenard}. A better chance to determine the cross section for $\Delta^{++} \bar{\Delta}^{--}$ pair production would perhaps exist for the BABAR or BELLE experiments, which enjoy a much higher luminosity. Finally, let us comment on the treatment of mass effects within our approach. Fig.~\ref{fig:deltapp} displays the effect of taking into account the finite $\Delta$ mass. As explained in Sec.~\ref{sec:hsp}, the $\Delta$ mass is taken into account in the hard-scattering amplitudes via an expansion in the small parameter $(m_{\mbox{\scriptsize B}}/\sqrt{\hat{s}} )$, where only the leading and next-to-leading order terms are kept. As expected, mass correction terms do not contribute to the hadronic helicity-conserving amplitudes $\overline{\phi}_1$ and $\overline{\phi}_5$. Only the amplitudes that involve a single flip of the hadronic helicity, which vanish if masses are neglected, become nonzero due to the mass correction terms. The comparison of the solid and the dashed lines in Fig.~\ref{fig:deltapp} shows that these mass effects can be sizable in the few-GeV region. At $W=2.5$~GeV the leading-order contributions provide only about 30\% of the full cross section. This ratio increases, of course, with increasing energy and becomes roughly 70\% at $W=5$~GeV. \section{Concluding Remarks} In this work we have computed $\gamma \gamma \rightarrow \mbox{B}_{10} \bar{\mbox{B}}_{10}$ cross sections at intermediate momentum transfer for the case of spin-3/2 decuplet baryons $\mbox{B}_{10}$. 
We have employed a modification of the hard-scattering picture for exclusive reactions, where baryons are treated as quark-diquark systems, thereby effectively parameterizing nonperturbative contributions which are undoubtedly present at currently experimentally accessible energies. Using the same model parameters as in previous studies of other photon-induced reactions, and constraining the quark-diquark wave function of the $\Delta$ with the help of QCD sum-rule results, we are able to give absolute predictions for $\gamma \gamma \rightarrow \Delta \bar{\Delta}$ without introducing new parameters. We find that the cross section for $\gamma \gamma \rightarrow \Delta^{++} \bar{\Delta}^{--}$ is of the same order of magnitude as the cross section for proton pair production, $\gamma \gamma \rightarrow p \bar{p}$. Furthermore, we observe that the pair production of decuplet baryons is almost completely determined within our model by those graphs where both photons couple to the quark line. This enables us to estimate production ratios for different decuplet-baryon channels independent of the choice of the wave function, provided that the wave functions are similar for all baryons within the decuplet. This is certainly the case for the $\Delta$-quartet, for which we predict the ratios $\sigma(\Delta^{++}): \sigma(\Delta^{+}): \sigma(\Delta^{0}): \sigma(\Delta^{-}) = 16 : 9 : 4 : 1$. There are various other estimates of these cross section ratios in the literature, based on different viewpoints and production mechanisms, which differ in their predictions from ours. It would therefore be important to compare with experimental analyses in order to determine the relative importance of the considered production mechanisms, and to learn more about the degree of symmetry among constituents in decuplet-baryon distribution amplitudes. Such an experimental analysis should be quite feasible at a high-luminosity $e^+e^-$ collider. 
We therefore hope that our experimental colleagues will study this interesting problem in the near future. \begin{appendix} \section{Elementary Helicity Amplitudes \\ for $\gamma \gamma \rightarrow q V \bar{q} \bar{V}$} There are 30 Feynman graphs that contribute to the hard-scattering amplitudes $\hat{T}$ for $\gamma \gamma \rightarrow q V \bar{q} \bar{V}$. Their general structure is \begin{equation} \hat{T}_{\{\lambda\} }(\hat{t},\hat{u}) = C_{\mathrm{cf}}^{(3)} \,\overline{T}^{(3, V)}_i (\hat{t},\hat{u}) F_V^{(3)} + C_{\mathrm{cf}}^{(4)} \,\overline{T}^{(4, D)}_i(\hat{t},\hat{u}) F_V^{(4)} + C_{\mathrm{cf}}^{(5)}\, \overline{T}^{(5, V)}_i (\hat{t},\hat{u}) F_V^{(5)}, \label{hsa}\end{equation} where $C_{\mathrm{cf}}^{(n)}$ are the appropriate charge-flavor factors. The subscript $i = 1,\dots, 13$ labels the helicity combinations according to Eq.~(\ref{annamps}). Each $n$-point contribution $\overline{T}^{(n,V)}$ is found from a separately gauge-invariant set of Feynman diagrams, where $(n-2)$ gauge bosons couple to the diquark. The $\overline{T}^{(n,V)}$ are multiplied with the appropriate diquark form factors $F_V^{(n)}$, parameterizing the composite nature of diquarks. For further details we refer to \cite{Berger:2002vc}. The analytical results for $\overline{T}^{(n,V)}_i$ are presented in the following. For their calculation we employed the algebraic computer program \textit{Mathematica}~\cite{math} with the package \texttt{FeynCalc}~\cite{Mertig:1990an}. We do not list $n$-point contributions that are suppressed by at least ${\mathcal{O}}(m_{{\mbox{\scriptsize B}_{10}}}^2/\hat{s})$, since they are neglected in our numerical calculations. These include, for example, all 5-point functions $\overline{T}^{(5, V)}_i$ and all amplitudes with (anti)baryon helicity $\pm \frac{3}{2}$. Note that the parameterization of the form factors $F_V^{(n)}$ for 4- and 5-point functions provides additional inverse powers of $\hat{s}$ as compared to $F_V^{(3)}$. 
We abbreviate $C = (4 \pi)^2 C_F \alpha \,\alpha_s$, where $C_F = \frac{4}{3}$ is the color factor, and $\alpha$ denotes the fine structure constant $\alpha \approx 1/137$. $\kappa_{\mbox{\scriptsize V}}$ is the anomalous magnetic moment of the vector diquark. \begin{eqnarray*} \overline{T}_1^{(3,V)} (\hat{t},\hat{u}) & = & -\frac{4}{3} C \frac{\kappa_{\mbox{\scriptsize V}}}{m_{{\mbox{\scriptsize B}_{10}}}^2 \sqrt{\hat{u} \hat{t} }} \left( \frac{\hat{u} }{x_1 y_1 } + \frac{\hat{t} }{x_2 y_2} \right) \\ \overline{T}_2^{(3,V)} (\hat{t},\hat{u}) & = & \frac{2}{3} C \frac{1}{m_{{\mbox{\scriptsize B}_{10}}} } \frac{ \sqrt{\hat{s}} \,\hat{s}}{\hat{u} \hat{t} } \frac{x_1 + y_1 }{x_1 y_1 } \\ \overline{T}_4^{(3,V)} (\hat{t},\hat{u}) & = & C\frac{2 }{3\, m_{{\mbox{\scriptsize B}_{10}}} } \frac{1}{\sqrt{\hat{s}}\,\hat{u} \hat{t}} \frac{1 }{x_1 x_2 y_1 y_2 } \Bigg\{ \\ & & - \kappa_{\mbox{\scriptsize V}} \left[ (2 x_1 -3) y_2 \hat{t}^2 + (2 y_1 - 3) x_2 \hat{u}^2 - 4 x_1 y_1 \hat{u} \hat{t} \right]+ \\ & & + \,\left[ \left( x_2 + y_2 \right) \left( \hat{t}^2 x_1 y_2 + \hat{u}^2 x_2 y_1 - 2 x_1 y_2 \hat{u} \hat{t} \right) - y_2 \hat{t}^2 - x_2 \hat{u}^2 \right] \Bigg\} \\ \overline{T}_5^{(3,V)} (\hat{t},\hat{u}) & = & \overline{T}_1^{(3,V)} (\hat{u},\hat{t}) \\ \overline{T}_6^{(3,V)} (\hat{t},\hat{u}) & = & - C \frac{2}{3} \frac{1 }{m_{{\mbox{\scriptsize B}_{10}}}} \frac{ \sqrt{\hat{s}} \, \hat{s}}{\hat{u} \hat{t} } \left[ - (1 + \kappa_{\mbox{\scriptsize V}}) \frac{x_1 y_2^2+ y_1 x_2^2 }{x_1 x_2 y_1 y_2 } + \kappa_{\mbox{\scriptsize V}} \frac{ x_1 + y_1}{x_2 y_2 } \right] \\ \overline{T}_2^{(4,V)} (\hat{t},\hat{u}) & = & - \frac{2}{3} C \frac{ \kappa_{\mbox{\scriptsize V}} (1 - \kappa_{\mbox{\scriptsize V}}) \sqrt{\hat{s}}}{m_{{\mbox{\scriptsize B}_{10}}}^3} \frac{1}{x_1 x_2^2 y_1 y_2^2 }. \end{eqnarray*} The hard-scattering amplitudes for Compton scattering off decuplet baryons are related to the amplitudes listed above via crossing \cite{Bourrely:mr}. 
The corresponding elementary helicity amplitudes for $\gamma q D \rightarrow \gamma q D$ have been computed separately as a check. They can be obtained from the authors upon request. \end{appendix} \subsection*{Acknowledgements} We thank Mauro Anselmino, Mariaelena Boglione, Stan Brodsky, Lance Dixon, and Marek Karliner for stimulating and helpful discussions. We are also grateful to Bertrand Eche\-nard for sharing his insights from an experimental viewpoint.
\section{Introduction} In this paper we characterize the notion of uniform rectifiability in the sense of David and Semmes \cite{DS2} in terms of the $L^2$ boundedness of the $\rho$-variation for the Riesz transform, with $\rho>2$. Given $1\leq n<d$ integers and a Borel measure $\mu$ in ${\mathbb R}^d$, one defines the $n$-dimensional Riesz transform of a function $f\in L^1(\mu)$ by $R^\mu f(x)=\lim_{\epsilon\searrow0}R^\mu_\epsilon f(x)$ (whenever the limit exists), where $$R^\mu_\epsilon f(x)=\int_{|x-y|>\epsilon}\frac{x-y} {|x-y|^{n+1}}\,f(y)\,d\mu(y),\qquad x\in{\mathbb R}^d.$$ We will use the notation ${\mathcal R}^\mu f(x):=\{R_{\epsilon}^\mu f(x)\}_{\epsilon>0}$. When $d=2$ (i.e., $\mu$ is a Borel measure in ${\mathbb C}$), one defines the Cauchy transform of $f\in L^1(\mu)$ by $C^\mu f(x)=\lim_{\epsilon\searrow0}C^\mu_\epsilon f(x)$ (whenever the limit exists), where $$C^\mu_\epsilon f(x)=\int_{|x-y|>\epsilon}\frac{f(y)} {x-y}\,d\mu(y),\qquad x\in{\mathbb C}.$$ To avoid the problem of existence of the preceding limits, it is useful to consider the maximal operators $R^\mu_* f(x)=\sup_{\epsilon>0}|R^\mu_\epsilon f(x)|$ and $C^\mu_* f(x)=\sup_{\epsilon>0}|C^\mu_\epsilon f(x)|$. Notice that the Cauchy transform coincides with the $1$-dimensional Riesz transform in ${\mathbb R}^2$ modulo conjugation, since $1/x=\overline x/|x|^{2}$ for all $x\in{\mathbb C}\setminus\{0\}$. The Cauchy and Riesz transforms are two very important examples of singular integral operators with a Calder\'{o}n-Zygmund kernel. 
Given $d\geq2$, the kernels $K:{\mathbb R}^d\setminus\{0\}\to{\mathbb R}$ that we consider in this paper satisfy \begin{equation}\label{4eq333} |K(x)|\leq \frac{C}{|x|^{n}},\quad|\partial_{x^i}K(x)|\leq \frac{C}{|x|^{n+1}}\quad\text{and}\quad|\partial_{x^i}\partial_{x^j}K(x)|\leq \frac{C}{|x|^{n+2}}, \end{equation} for all $1\leq i,j\leq d$ and $x=(x^1,\ldots,x^d)\in{\mathbb R}^d\setminus\{0\}$, where $1\leq n<d$ is some integer and $C>0$ is some constant; and moreover $K(-x)=-K(x)$ for all $x\neq0$ (i.e. $K$ is odd). Notice that the $n$-dimensional Riesz transform corresponds to the vector kernel $(x^1,\ldots,x^d)/|x|^{n+1}$, and the Cauchy transform to $(x^1,-x^2)/|x|^{2}$ (so, we may consider $K$ to be any scalar component of these vector kernels). For $f\in L^1(\mu)$ and $x\in{\mathbb R}^d$, we set \begin{equation*} T_{\epsilon}^\mu f(x)\equiv T_{\epsilon}(f\mu)(x):=\int_{|x-y|>\epsilon}K(x-y)f(y)\,d\mu(y), \end{equation*} and we denote ${\mathcal T}^\mu f(x)=\{ T_{\epsilon}^\mu f(x)\}_{\epsilon>0}$. \begin{defi}[$\rho$-variation and oscillation] Let $\mathcal{F}:=\{F_\epsilon\}_{\epsilon>0}$ be a family of functions defined on ${\mathbb R}^d$. Given $\rho>0$, the $\rho$-{\em variation} of ${\mathcal F}$ at $x\in{\mathbb R}^d$ is defined by \begin{equation*} {\mathcal V}_{\rho}({\mathcal F})(x):=\sup_{\{\epsilon_{m}\}}\bigg(\sum_{m\in{\mathbb Z}} |F_{\epsilon_{m+1}}(x)-F_{\epsilon_{m}}(x)|^{\rho}\bigg)^{1/\rho}, \end{equation*} where the pointwise supremum is taken over all decreasing sequences $\{\epsilon_{m}\}_{m\in{\mathbb Z}}\subset(0,\infty)$. Fix a decreasing sequence $\{r_{m}\}_{m\in{\mathbb Z}}\subset(0,\infty)$. 
The {\em oscillation} of ${\mathcal F}$ at $x\in{\mathbb R}^d$ is defined by \begin{equation*} {\mathcal O}({\mathcal F})(x):=\sup_{\{\epsilon_m\},\{\delta_{m}\}}\bigg(\sum_{m\in{\mathbb Z}} |F_{\epsilon_m}(x)-F_{\delta_m}(x)|^{2}\bigg)^{1/2}, \end{equation*} where the pointwise supremum is taken over all sequences $\{\epsilon_m\}_{m\in{\mathbb Z}}$ and $\{\delta_{m}\}_{m\in{\mathbb Z}}$ such that $r_{m+1}\leq\epsilon_{m}\leq\delta_{m}\leq r_{m}$ for all $m\in{\mathbb Z}$. \end{defi} The $\rho$-variation and oscillation for martingales and some families of operators have been studied in many recent papers on probability, ergodic theory, and harmonic analysis (see \cite{Lepingle}, \cite{Bourgain}, \cite{JKRW-ergodic}, \cite{CJRW-Hilbert}, \cite{JSW}, \cite{Lacey}, and \cite{OSTTW}, for example). In this paper we are interested in the $\rho$-variation and oscillation of the family ${\mathcal T}^\mu f$. That is, given a Borel measure $\mu$ in ${\mathbb R}^d$ and $f\in L^1(\mu)$ we will deal with \begin{equation*} \begin{split} &({\mathcal V}_{\rho}\circ{\mathcal T}^\mu)f(x):={\mathcal V}_{\rho}({\mathcal T}^\mu f)(x),\quad ({\mathcal O}\circ{\mathcal T}^\mu)f(x):={\mathcal O}({\mathcal T}^\mu f)(x). \end{split} \end{equation*} We are especially interested in the case ${\mathcal T}^\mu={\mathcal R}^\mu$. Notice, by the way, that $T_*^\mu f(x) \leq ({\mathcal V}_{\rho}\circ{\mathcal T}^\mu)f(x)$ for any compactly supported function $f\in L^1(\mu)$ and all $x\in{\mathbb R}^d$. When $\mu$ coincides with the Lebesgue measure in the real line and $K(x)=1/x$ is the kernel of the Hilbert transform, Campbell, Jones, Reinhold and Wierdl \cite{CJRW-Hilbert} showed that ${\mathcal V}_{\rho}\circ{\mathcal T}^\mu$ and ${\mathcal O}\circ{\mathcal T}^\mu$ are bounded in $L^p(\mu)$, for $1<p<\infty$, and of weak type $(1,1)$. This result was extended to other singular integral operators in higher dimensions in \cite{CJRW-singular integrals}. 
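For a family sampled at finitely many truncation parameters, the supremum in the definition of ${\mathcal V}_\rho$ above can be computed exactly by a dynamic program over subsequences. Note that summing consecutive differences is not enough: for $\rho>1$ and a monotone stretch, a single coarse jump dominates, since $(a+b)^\rho\geq a^\rho+b^\rho$ for $a,b\geq0$. A minimal sketch (the function name and discretization are our own illustration, not part of the paper):

```python
def rho_variation(values, rho):
    """Exact rho-variation of a finite sequence F_{eps_1}, ..., F_{eps_N}
    (values listed along a decreasing sequence of truncation parameters):
    the supremum over all subsequences of sum_k |F_{j_{k+1}} - F_{j_k}|^rho,
    raised to the power 1/rho.  Computed by an O(N^2) dynamic program."""
    n = len(values)
    if n == 0:
        return 0.0
    # best[i] = largest sum of |differences|^rho over subsequences ending at i
    best = [0.0] * n
    for i in range(n):
        for j in range(i):
            cand = best[j] + abs(values[i] - values[j]) ** rho
            if cand > best[i]:
                best[i] = cand
    return max(best) ** (1.0 / rho)
```

For an oscillating sample such as $[0,1,0,1]$ with $\rho=2$ the consecutive differences are optimal (value $\sqrt3$), while for a monotone sample $[0,1,2]$ the single coarse jump wins (value $2$), which is why the supremum over sequences matters.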
The case of the Cauchy transform and other odd Calder\'on-Zygmund operators on Lipschitz graphs was studied recently in \cite{MT}. Let us turn our attention to uniform rectifiability now. Recall that a Borel measure $\mu$ in ${\mathbb R}^d$ is called $n$-rectifiable if there exists a countable family of $n$-dimensional $C^1$ submanifolds $\{M_i\}_{i\in{\mathbb N}}$ in ${\mathbb R}^d$ such that $\mu(E\setminus\bigcup_{i\in{\mathbb N}}M_i)=0$. Moreover, $\mu$ is said to be $n$-dimensional Ahlfors-David regular, or simply AD regular, if there exists some constant $C>0$ such that $C^{-1}r^n\leq\mu(B(x,r))\leq Cr^n$ for all $x\in{\operatorname{supp}}\mu$ and $0<r\leq{\operatorname{diam}}({\operatorname{supp}}\mu)$. One also says that $\mu$ is uniformly $n$-rectifiable if there exist $\theta,M>0$ so that, for each $x\in{\operatorname{supp}}\mu$ and $r>0$, there is a Lipschitz mapping $g$ from the $n$-dimensional ball $B^n(0,r)\subset{\mathbb R}^n$ into ${\mathbb R}^d$ such that ${\operatorname{Lip}}(g)\leq M$ and $\mu\big(B(x,r)\cap g(B^n(0,r))\big) \geq \theta r^n,$ where ${\operatorname{Lip}}(g)$ stands for the Lipschitz constant of $g$. In particular, uniform rectifiability implies rectifiability. Given a set $E\subset{\mathbb R}^d$, we denote by ${\mathcal H}^n_E$ the $n$-dimensional Hausdorff measure restricted to $E$. Then $E$ is called, respectively, $n$-rectifiable, AD regular, or uniformly $n$-rectifiable if ${\mathcal H}^n_E$ is so. By the Lebesgue differentiation theorem, any $n$-dimensional AD regular measure $\mu$ is of the form $\mu=f{\mathcal H}^n_{{\operatorname{supp}}\mu}$ with $C^{-1}\leq f(x)\leq C$ for some constant $C>0$ and all $x\in{\operatorname{supp}}\mu$. G. David and S. 
Semmes asked more than twenty years ago the following question, which is still open (see, for example, \cite[Chapter 7]{Pajot}): \begin{ques}\label{ques David-Semmes} Is it true that an $n$-dimensional AD regular measure $\mu$ is uniformly $n$-rectifiable if and only if $R_*^\mu$ is bounded in $L^2(\mu)$? \end{ques} Some comments are in order. By the results in \cite{DS1}, the ``only if'' implication of the question above is already known to hold. Also in \cite{DS1}, G. David and S. Semmes gave a positive answer to Question \ref{ques David-Semmes} if one replaces the $L^2$ boundedness of $R^\mu_*$ by the $L^2$ boundedness of $T^\mu_*$ for a wide class of odd kernels $K$. In the case $n=1$ (in particular, for the Cauchy transform), the ``if'' implication was proved by P. Mattila, M. Melnikov and J. Verdera in \cite{MMV} using the notion of curvature of measures. Later on, G. David and J. C. L\'{e}ger \cite{Leger} proved that the $L^2$ boundedness of $C^\mu_*$ implies that $\mu$ is rectifiable, even without the AD regularity assumption (with $n=1$). When $\mu$ is the $n$-dimensional Hausdorff measure on a set $E\subset{\mathbb R}^d$ such that $\mu(E)<\infty$, the rectifiability of $\mu$ is also related to the existence $\mu$-a.e. of the principal value of the Riesz transform of $\mu$, that is, the existence of $R^\mu1(x)=\lim_{\epsilon\searrow0}R^\mu_\epsilon1(x)$ for $\mu$-a.e. $x\in E$. In \cite{Mattila-Preiss}, P. Mattila and D. Preiss proved that, under the additional assumption that \begin{equation}\label{eqjo88} \liminf_{r\to 0}r^{-n}\mu(B(x,r))>0 \qquad\mbox{for $\mu$-a.e. $x\in E$,} \end{equation} the rectifiability of $E$ is equivalent to the existence of $R^\mu1(x)$ for $\mu$-a.e. $x\in E$. Later on, in \cite{Tolsa8} X. Tolsa removed the assumption \eqref{eqjo88} and proved the result in full generality. 
Let us mention that, for the case $n=1$ and $d=2$ (that is, for the Cauchy transform), the analogous results had been obtained previously in \cite{Mattila2} under the assumption \eqref{eqjo88}, and in \cite{Tolsa7}, in full generality, by using the notion of curvature of measures. In this paper we prove the following: \begin{teo}\label{teo88} Let $1\leq n<d$ and $\rho>2$. An $n$-dimensional AD regular Borel measure $\mu$ in ${\mathbb R}^d$ is uniformly $n$-rectifiable if and only if ${\mathcal V}_\rho\circ{\mathcal R}^\mu$ is a bounded operator in $L^2(\mu)$. Moreover, if $\mu$ is uniformly $n$-rectifiable, then for any kernel $K$ satisfying \eqref{4eq333}, the operator ${\mathcal V}_\rho\circ{\mathcal T}^\mu$ is bounded in $L^2(\mu)$. \end{teo} Let us compare this result with the David-Semmes Question \ref{ques David-Semmes}. Notice that the preceding theorem asserts that if we replace the $L^2(\mu)$ boundedness of $R_*^\mu$ by the stronger assumption that ${\mathcal V}_\rho\circ{\mathcal R}^\mu$ is bounded in $L^2(\mu)$, then $\mu$ must be uniformly rectifiable. On the other hand, the theorem claims that, whenever $\mu$ is uniformly $n$-rectifiable, the variation for odd singular integral operators with any kernel satisfying \eqref{4eq333}, in particular for the $n$-dimensional Riesz transforms, is bounded in $L^2(\mu)$. A natural question then arises. Given an arbitrary measure $\mu$ on ${\mathbb R}^d$, without atoms say, does the $L^2(\mu)$ boundedness of $R_*^\mu$ imply the $L^2(\mu)$ boundedness of ${\mathcal V}_\rho\circ{\mathcal R}^\mu$, for $\rho>2$? By the results of \cite{MMV} and Theorem \ref{teo88}, this is true in the case $n=1$ if $\mu$ is a $1$-dimensional AD regular measure. Clearly, a positive answer in the general case $n\geq1$ would solve the David-Semmes problem in the affirmative. Nevertheless, such an approach to try to solve this problem looks quite difficult. 
In fact, we recall that it is not even known whether the $L^2(\mu)$ boundedness of $R_*^\mu$ ensures the $\mu$-a.e.\ existence of the principal values of $R^\mu1$, which is a necessary condition for the $L^2(\mu)$ boundedness of ${\mathcal V}_\rho\circ{\mathcal R}^\mu$. Concerning the proof of Theorem \ref{teo88}, in our previous paper \cite{MT} we showed that, if $\mu$ stands for the $n$-dimensional Hausdorff measure on an $n$-dimensional Lipschitz graph, then the $\rho$-variation for Riesz transforms and odd Calder\'on-Zygmund operators with smooth truncations is bounded in $L^2(\mu)$. This is a fundamental step in proving that ${\mathcal V}_\rho\circ{\mathcal R}^\mu$ and, more generally, ${\mathcal V}_\rho\circ{\mathcal T}^\mu$, are bounded in $L^2(\mu)$ if $\mu$ is uniformly $n$-rectifiable. Another basic tool in our arguments is the geometric corona decomposition of uniformly rectifiable measures introduced by David and Semmes in \cite{DS1}, which, roughly speaking, describes how ${\operatorname{supp}}(\mu)$ can be approximated at different scales by $n$-dimensional Lipschitz graphs. The proof of the fact that the $L^2(\mu)$ boundedness of ${\mathcal V}_\rho\circ{\mathcal R}^\mu$ implies the uniform rectifiability of $\mu$ is less laborious than that of the converse implication. As remarked above, if ${\mathcal V}_\rho\circ{\mathcal R}^\mu$ is bounded in $L^2(\mu)$, then the principal values of $R^\mu1$ exist $\mu$-a.e., which implies the $n$-rectifiability of $\mu$, by the results of \cite{Mattila-Preiss} or \cite{Tolsa8}. However, this is not enough to ensure the {\em uniform} $n$-rectifiability of $\mu$. We will prove the uniform $n$-rectifiability by arguments partially inspired by some of the techniques in \cite{To}. Finally, let us remark that Theorem \ref{teo88} follows from a more general result, namely Theorem \ref{4main theorem} below, which also deals with the variation for Riesz transforms and odd Calder\'on-Zygmund operators with smooth truncations. 
As usual, in the paper the letter `$C$' stands for some constant which may change its value at different occurrences, and which quite often only depends on $n$ and $d$. The notation $A\lesssim B$ ($A\gtrsim B$) means that there is some fixed constant $C$ such that $A\leq CB$ ($A\geq CB$), with $C$ as above. Also, $A\approx B$ is equivalent to $A\lesssim B\lesssim A$. \section{Preliminaries} \subsection{The main theorem} \begin{defi}[families of truncations]\label{4defi varphi} Let $\chi_{\mathbb R}:=\chi_{[1,\infty)}$ and let $\varphi_{\mathbb R}:[0,+\infty)\to[0,+\infty)$ be a non-decreasing ${\mathcal C}^2$ function with $\chi_{[4,\infty)}\leq\varphi_{\mathbb R}\leq \chi_{[1/4,\infty)}$. Suppose moreover that $|\varphi'_{\mathbb R}|$ is bounded away from zero in $[1/3,3]$, i.e., $\chi_{[1/3,3]} \leq C|\varphi'_{\mathbb R}|$ for some $C>0$. Given $x\in{\mathbb R}^d$, and $0<\epsilon\leq\delta$, we set \begin{gather*} \chi_\epsilon(x):=\chi_{{\mathbb R}}(|x|/\epsilon)\quad\text{and}\quad \chi_\epsilon^\delta(x):=\chi_\epsilon(x)-\chi_\delta(x),\\ \varphi_\epsilon(x):=\varphi_{{\mathbb R}}(|x|^2/\epsilon^2)\quad\text{and}\quad \varphi_\epsilon^\delta(x):=\varphi_\epsilon(x)-\varphi_\delta(x). \end{gather*} Notice that, for any finite Borel measure $\mu$, $T_{\epsilon}\mu(x) = (K\chi_\epsilon*\mu)(x)$. Given $x=(x^1,\ldots,x^d)\in{\mathbb R}^d$, we denote $\widetilde x=(x^1,\ldots,x^n,0,\ldots,0)\in{\mathbb R}^d$, and we set $\widetilde\varphi_\epsilon(x):=\varphi_\epsilon(\widetilde x)$ and $\widetilde\varphi_\epsilon^\delta(x):=\varphi_\epsilon^\delta(\widetilde x)$. 
Finally, for $f\in L^1(\mu)$ we set ${\mathcal T}^\mu f\equiv {\mathcal T}(f\mu):=\{T_\epsilon^\mu f\}_{\epsilon>0}$, \begin{gather*} T_{\varphi_\epsilon}^\mu f(x)\equiv T_{\varphi_\epsilon}(f\mu)(x):=(K\varphi_\epsilon*\mu)(x) \quad\text{and}\quad{\mathcal T}_\varphi^\mu f\equiv {\mathcal T}_\varphi(f\mu):=\{T_{\varphi_\epsilon}^\mu f\}_{\epsilon>0},\\ T_{\widetilde\varphi_\epsilon}^\mu f(x)\equiv T_{\widetilde\varphi_\epsilon}(f\mu)(x):=(K\widetilde\varphi_\epsilon*\mu)(x) \quad\text{and}\quad{\mathcal T}_{\widetilde\varphi}^\mu f\equiv {\mathcal T}_{\widetilde\varphi}(f\mu):=\{T_{\widetilde\varphi_\epsilon}^\mu f\}_{\epsilon>0}. \end{gather*} \end{defi} \begin{remarko}{\em In the definition, the choice of $[4,\infty)$, $[1/4,\infty)$, and $[1/3,3]$ is not especially relevant; it is just for definiteness. One can replace the preceding intervals by other suitable intervals, and all the proofs in the paper remain almost the same.} \end{remarko} We will prove the following. \begin{teo}[Main Theorem]\label{4main theorem} Let $1\leq n<d$ be integers. Let $\mu$ be an $n$-dimensional AD regular Borel measure on ${\mathbb R}^d$. The following are equivalent: \begin{itemize} \item[$(a)$] $\mu$ is uniformly $n$-rectifiable. \item[$(b)$] For any $K$ satisfying \eqref{4eq333} and any $\rho>2$, the operator ${\mathcal V}_\rho\circ{\mathcal T}^\mu_\varphi$ is bounded in $L^p(\mu)$ for all $1<p<\infty$, and from $L^1(\mu)$ into $L^{1,\infty}(\mu)$. \item[$(c)$] For any $K$ satisfying \eqref{4eq333} and any $\rho>2$, the operator ${\mathcal V}_\rho\circ{\mathcal T}^\mu$ is bounded in $L^2(\mu)$. \item[$(d)$] For some $\rho>0$, the operator ${\mathcal V}_\rho\circ{\mathcal R}^\mu$ is bounded in $L^2(\mu)$. \item[$(e)$] For $K(x)=x/|x|^{n+1}$ and some $\rho>0$, the operator ${\mathcal V}_\rho\circ{\mathcal T}^\mu_\varphi$ is bounded in $L^2(\mu)$. \end{itemize} \end{teo} Clearly, Theorem \ref{teo88} is a direct consequence of the preceding result. 
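A profile $\varphi_{\mathbb R}$ satisfying the constraints of Definition~\ref{4defi varphi} can be written down explicitly: glue the constants $0$ and $1$ on $[0,1/4]$ and $[4,\infty)$ with a quintic smoothstep, which joins ${\mathcal C}^2$-smoothly at both endpoints and has strictly positive derivative on $(1/4,4)$, hence a derivative bounded away from zero on the compact subinterval $[1/3,3]$. A minimal numerical sketch (the quintic choice is ours; any function meeting the constraints works):

```python
def phi_R(t):
    """A C^2 non-decreasing profile with chi_[4,inf) <= phi_R <= chi_[1/4,inf)
    and phi_R' bounded away from zero on [1/3, 3]: the quintic smoothstep
    6u^5 - 15u^4 + 10u^3 rescaled to the transition interval [1/4, 4]."""
    a, b = 0.25, 4.0
    if t <= a:
        return 0.0
    if t >= b:
        return 1.0
    u = (t - a) / (b - a)  # normalize to [0, 1]
    # Value, first and second derivatives all match the constants at u=0,1.
    return u * u * u * (10.0 - 15.0 * u + 6.0 * u * u)
```

The derivative of the smoothstep is $30u^2(1-u)^2$, positive on the open interval, so $\chi_{[1/3,3]}\leq C|\varphi'_{\mathbb R}|$ holds with a finite (if large) constant $C$.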
\begin{remarko}\label{4remark oscil2}{\em Let $\{r_m\}_{m\in{\mathbb Z}}\subset(0,\infty)$ be a fixed decreasing sequence defining ${\mathcal O}$. Then, the implications $(a)\Rightarrow(b),\ldots,(e)$ in the theorem above still hold if one replaces ${\mathcal V}_\rho$ by ${\mathcal O}$. If there exists $C>0$ such that $C^{-1}r_m\leq r_m-r_{m+1}\leq Cr_m$ for all $m\in{\mathbb Z}$, then the implications $(b),\ldots,(e)\Rightarrow(a)$ also hold (so Theorem \ref{4main theorem} remains true replacing ${\mathcal V}_\rho$ by ${\mathcal O}$), but we do not know if they are still true without this additional assumption (see Remark \ref{4remark oscil}).} \end{remarko} Notice that, by Theorem \ref{4main theorem}, besides ${\mathcal V}_\rho\circ{\mathcal R}^\mu$ and ${\mathcal O}\circ{\mathcal R}^\mu$, the operators ${\mathcal V}_\rho\circ{\mathcal T}^\mu_\varphi$ and ${\mathcal O}\circ{\mathcal T}^\mu_\varphi$ for $K(x)=x/|x|^{n+1}$ characterize completely the $n$-AD regular measures $\mu$ which are uniformly $n$-rectifiable. One of the main ingredients for the proof of Theorem \ref{4main theorem} is the following result, which strengthens one of the endpoint estimates obtained in \cite{MT}. Let $M({\mathbb R}^d)$ be the space of finite real Borel measures on ${\mathbb R}^d$, with the norm induced by the variation of measures. \begin{teo}\label{4unif rectif teo3} Let $\rho>2$ and let $\mu$ be the $n$-dimensional Hausdorff measure restricted to an $n$-dimensional Lipschitz graph $\Gamma$. Then, ${\mathcal V}_\rho\circ{\mathcal T}_\varphi$ is a bounded operator from $M({\mathbb R}^d)$ to $L^{1,\infty}(\mu)$. In particular, ${\mathcal V}_\rho\circ{\mathcal T}_\varphi^{\mu}$ is of weak type $(1,1)$. The bound of the norm of this operator only depends on $n$, $d$, $K$, $\rho$, $\varphi_{\mathbb R}$, and the maximal slope of $\Gamma$. 
\end{teo} By an $n$-dimensional Lipschitz graph $\Gamma\subset{\mathbb R}^d$ we mean any translation and rotation of a set of the type $\{x\in{\mathbb R}^d\,:\,x=(y,A(y)),\, y\in{\mathbb R}^n\}$, where $A:{\mathbb R}^n\to{\mathbb R}^{d-n}$ is some Lipschitz function with Lipschitz constant ${\operatorname{Lip}}(A)$, which coincides with the maximal slope of $\Gamma$. \begin{remarko}{\em The theorem above remains valid if one replaces ${\mathcal V}_\rho$ by ${\mathcal O}$. Moreover, the norm of ${\mathcal O}\circ{\mathcal T}_\varphi^{\mu}$ is bounded independently of the sequence that defines ${\mathcal O}$.} \end{remarko} The plan to prove Theorem \ref{4main theorem} is the following: in Section \ref{4sec unif rectif teo3} we deal with Theorem \ref{4unif rectif teo3}, which is used in the subsequent Section \ref{4sec acotacio unif rectif} to obtain the implication $(a)\,\Longrightarrow\,(b)$ of Theorem \ref{4main theorem}. In Section \ref{5s var no suau} we prove $(a)\,\Longrightarrow\,(c)$ in Theorem \ref{5teo var no suau acotada L2}, and in Section \ref{4sec acotacio implica rectif} we prove Theorem \ref{4rectif teorema}, which gives $(d)\,\Longrightarrow\,(a)$ and $(e)\,\Longrightarrow\,(a)$, and finishes the proof of Theorem \ref{4main theorem}, taking into account that the implications $(b)\,\Longrightarrow\,(e)$ and $(c)\,\Longrightarrow\,(d)$ are trivial. Theorems \ref{4main theorem} and \ref{4unif rectif teo3} are stated in terms of ${\mathcal V}_\rho$, but they also hold for ${\mathcal O}$, as remarked above. However, we will only give the proof of these results for ${\mathcal V}_\rho$, because the case of ${\mathcal O}$ follows by very similar arguments and computations. \subsection{Calder\'{o}n-Zygmund decomposition for measures}\label{4secdob} Given a cube $Q\subset{\mathbb R}^d$ and $a>0$, we denote by $\ell(Q)$ the side length of $Q$ and by $aQ$ the cube concentric with $Q$ with side length $a\ell(Q)$. 
The cubes that we consider in this paper have sides parallel to the coordinate axes in ${\mathbb R}^d$. A proof of the following result can be found in \cite[Chapter 2]{Tolsa-llibre} or \cite[Lemma 5.1.2]{Mas-thesis}. \begin{lema}[Calder\'{o}n-Zygmund decomposition]\label{4lema CZ} Assume that $\mu:={\mathcal H}^n_{\Gamma\cap B}$, where $\Gamma$ is an $n$-dimensional Lipschitz graph and $B\subset{\mathbb R}^d$ is some fixed ball. For any $\nu\in M({\mathbb R}^d)$ with compact support and any $\lambda>2^{d+1}\|\nu\|/\|\mu\|$, the following holds: \begin{itemize} \item[$(a)$] There exists a finite or countable collection of almost disjoint cubes $\{Q_j\}_j\subset{\mathbb R}^d$ (that is, $\sum_j\chi_{Q_j}\leq C$) and a function $f\in L^1(\mu)$ such that \begin{gather} |\nu|(Q_j)>2^{-d-1}\lambda\mu(2Q_j),\label{4lema CZ 1}\\ |\nu|(\eta Q_j)\leq2^{-d-1}\lambda\mu(2\eta Q_j)\quad\text{for }\eta>2,\label{4lema CZ 2}\\ \nu=f\mu\text{ in }{\mathbb R}^d\setminus\textstyle{\bigcup_j}Q_j\text{ with }|f|\leq\lambda\;\,\mu\text{-a.e.}\label{4lema CZ 3} \end{gather} \item[$(b)$] For each $j$, let $R_j:=6Q_j$ and denote $w_j:=\chi_{Q_j}\big(\sum_k\chi_{Q_k}\big)^{-1}$. Then, there exists a family of functions $\{b_j\}_j$ with ${\operatorname{supp}} b_j\subset R_j$ and with constant sign satisfying \begin{gather} \int b_j\,d\mu=\int w_j\,d\nu,\label{4lema CZ 4}\\ \| b_j\|_{L^\infty(\mu)}\mu(R_j)\leq C|\nu|(Q_j),\text{ and}\label{4lema CZ 5}\\ {\textstyle\sum_j}|b_j|\leq C_0\lambda\quad \text{(where $C_0$ is some absolute constant).}\label{4lema CZ 6} \end{gather} \end{itemize} \end{lema} \subsection{Dyadic lattices}\label{dyadic lattice} For the study of uniformly rectifiable measures we will use the ``dyadic cubes'' built by G. David in \cite[Appendix 1]{David-LNM} (see also \cite[Chapter 3 of Part I]{DS2}). These dyadic cubes are not true cubes but, in a sense, they play that role with respect to a given $n$-dimensional AD regular Borel measure $\mu$.
To distinguish them from the usual cubes, we will call them {\em $\mu$-cubes}. Let us state the precise properties of the lattice of dyadic $\mu$-cubes. Given an $n$-dimensional AD regular Borel measure $\mu$ in ${\mathbb R}^d$ (for simplicity, we may assume ${\operatorname{diam}}({\operatorname{supp}}\mu)=\infty$), for each $j\in{\mathbb Z}$ there exists a family ${\mathcal D}_j$ of Borel subsets of ${\operatorname{supp}}\mu$ (the dyadic $\mu$-cubes of the $j$-th generation) such that: \begin{itemize} \item[$(a)$] each ${\mathcal D}_j$ is a partition of ${\operatorname{supp}}\mu$, i.e.\ ${\operatorname{supp}}\mu=\bigcup_{Q\in {\mathcal D}_j} Q$ and $Q\cap Q'=\emptyset$ whenever $Q,Q'\in{\mathcal D}_j$ and $Q\neq Q'$; \item[$(b)$] if $Q\in{\mathcal D}_j$ and $Q'\in{\mathcal D}_k$ with $k\leq j$, then either $Q\subset Q'$ or $Q\cap Q'=\emptyset$; \item[$(c)$] for all $j\in{\mathbb Z}$ and $Q\in{\mathcal D}_j$, we have $2^{-j}\lesssim{\operatorname{diam}}(Q)\leq2^{-j}$ and $\mu(Q)\approx 2^{-jn}$; \item[$(d)$] there exists $C>0$ such that, for all $j\in{\mathbb Z}$, $Q\in{\mathcal D}_j$, and $0<\tau<1$, \begin{equation}\label{small boundary condition} \begin{split} \mu\big(\{x\in Q:\, &{\operatorname{dist}}(x,{\operatorname{supp}}\mu\setminus Q)\leq\tau2^{-j}\}\big)\\&+\mu\big(\{x\in {\operatorname{supp}}\mu\setminus Q:\, {\operatorname{dist}}(x,Q)\leq\tau2^{-j}\}\big)\leq C\tau^{1/C}2^{-jn}. \end{split} \end{equation} This property is usually called the {\em small boundaries condition}. From (\ref{small boundary condition}), it follows that there is a point $z_Q\in Q$ (the center of $Q$) such that ${\operatorname{dist}}(z_Q,{\operatorname{supp}}\mu\setminus Q)\gtrsim 2^{-j}$ (see \cite[Lemma 3.5 of Part I]{DS2}). \end{itemize} We denote ${\mathcal D}:=\bigcup_{j\in{\mathbb Z}}{\mathcal D}_j$. For $Q\in {\mathcal D}_j$, we define the side length of $Q$ as $\ell(Q)=2^{-j}$. Notice that $\ell(Q)\lesssim{\operatorname{diam}}(Q)\leq \ell(Q)$.
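To get some intuition for these properties, consider the model case $\mu={\mathcal H}^n_{{\mathbb R}^n\times\{0\}^{d-n}}$ (this is only an illustrative example of ours; David's construction is needed for general measures): one can take for ${\mathcal D}_j$ the standard dyadic cubes of ${\mathbb R}^n$ of side length $2^{-j}$, regarded as subsets of ${\operatorname{supp}}\mu$ (after a harmless rescaling to match the normalization of ${\operatorname{diam}}(Q)$ in $(c)$). Properties $(a)$, $(b)$, and $(c)$ are then immediate, and the small boundaries condition (\ref{small boundary condition}) holds with exponent $1$ in $\tau$: the first set appearing there is contained in the $\tau2^{-j}$-neighbourhood of the boundary of $Q$ in ${\mathbb R}^n$, so that
\begin{equation*}
\mu\big(\{x\in Q:\,{\operatorname{dist}}(x,{\operatorname{supp}}\mu\setminus Q)\leq\tau2^{-j}\}\big)\leq 2n\,\tau2^{-j}\,(2^{-j})^{n-1}\lesssim\tau\,2^{-jn},
\end{equation*}
and the second term in (\ref{small boundary condition}) is estimated in the same way.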
Actually, it may happen that a $\mu$-cube $Q$ belongs to ${\mathcal D}_j\cap {\mathcal D}_k$ with $j\neq k$. In this case, $\ell(Q)$ is not well defined. However, this problem can be solved in many ways. For example, the reader may think that a $\mu$-cube is not only a subset of ${\operatorname{supp}}\mu$, but a pair $(Q,j)$, where $Q$ is a subset of ${\operatorname{supp}}\mu$ and $j\in{\mathbb Z}$ is such that $Q\in{\mathcal D}_j$. Given $a>1$ and $Q\in{\mathcal D}$, we set $a Q:= \bigl\{x\in {\operatorname{supp}}\mu: {\operatorname{dist}}(x,Q)\leq (a-1)\ell(Q)\bigr\}.$ Observe that ${\operatorname{diam}}(a Q)\leq {\operatorname{diam}}(Q) + 2(a-1)\ell(Q)\leq (2a-1)\ell(Q)$. \subsection{Corona decomposition}\label{5ss corona decomposition} Given an $n$-dimensional AD regular Borel measure $\mu$ on ${\mathbb R}^d$, let ${\mathcal D}:=\{Q\in{\mathcal D}_j\,:\,j\in{\mathbb Z}\}$ be the dyadic lattice associated to $\mu$ introduced in Subsection \ref{dyadic lattice}. Following \cite[Definitions 3.13 and 3.19 of Part I]{DS2}, one says that $\mu$ admits a corona decomposition if, for each $\eta>0$ and $\theta>0$, one can find a triple $({\mathcal B},{\mathcal G},{\operatorname{Trs}})$, where ${\mathcal B}$ and ${\mathcal G}$ are two subsets of ${\mathcal D}$ (the ``bad $\mu$-cubes'' and the ``good $\mu$-cubes'') and ${\operatorname{Trs}}$ is a family of subsets $S\subset{\mathcal G}$ (that we will call {\em trees}), which satisfy the following conditions: \begin{itemize} \item[$(a)$] ${\mathcal D}={\mathcal B}\cup{\mathcal G}$\quad and\quad${\mathcal B}\cap{\mathcal G}=\emptyset.$ \item[$(b)$] ${\mathcal B}$ satisfies a Carleson packing condition, i.e., $\sum_{Q\in{\mathcal B}:\, Q\subset R}\mu(Q)\lesssim\mu(R)$ for all $R\in{\mathcal D}$. \item[$(c)$] ${\mathcal G}=\biguplus_{S\in{\operatorname{Trs}}}S$, i.e., any $Q\in{\mathcal G}$ belongs to only one $S\in{\operatorname{Trs}}$. \item[$(d)$] Each $S\in{\operatorname{Trs}}$ is {\em coherent}.
This means that each $S\in{\operatorname{Trs}}$ has a unique maximal element $Q_S$ which contains all other elements of $S$ as subsets, that $Q'\in S$ as soon as $Q'\in{\mathcal D}$ satisfies $Q\subset Q'\subset Q_S$ for some $Q\in S$, and that if $Q\in S$ then either all of the children of $Q$ lie in $S$ or none of them do (if $Q\in{\mathcal D}_j$, the {\em children} of $Q$ are defined as the collection of $\mu$-cubes $Q'\in{\mathcal D}_{j+1}$ such that $Q'\subset Q$). \item[$(e)$] The maximal $\mu$-cubes $Q_S$, for $S\in{\operatorname{Trs}}$, satisfy a Carleson packing condition. That is, $\sum_{S\in{\operatorname{Trs}}:\, Q_S\subset R}\mu(Q_S)\lesssim\mu(R)$ for all $R\in{\mathcal D}$. \item[$(f)$] For each $S\in{\operatorname{Trs}}$, there exists an $n$-dimensional Lipschitz graph $\Gamma_S$ with constant smaller than $\eta$ such that ${\operatorname{dist}}(x,\Gamma_S)\leq\theta\,{\operatorname{diam}}(Q)$ whenever $x\in2Q$ and $Q\in S$ (one can replace ``$x\in2Q$'' by ``$x\in C_{cor}Q$'' for any constant $C_{cor}\geq2$ given in advance, by \cite[Lemma 3.31 of Part I]{DS2}). \end{itemize} It is shown in \cite{DS1} (see also \cite{DS2}) that if $\mu$ is uniformly rectifiable then it admits a corona decomposition for all parameters $\eta,\theta>0$. Conversely, the existence of a corona decomposition for a single choice of parameters $\eta,\theta>0$ implies that $\mu$ is uniformly rectifiable. \subsection{The $\alpha$ and $\beta$ coefficients} Let $\mu$ be an $n$-dimensional AD regular Borel measure in ${\mathbb R}^d$ and ${\mathcal D}$ as in Subsection \ref{dyadic lattice}. Given $1\leq p<\infty$ and a $\mu$-cube $Q\in{\mathcal D}$, one sets (see \cite{DS2}) \begin{equation*} \beta_{p,\mu}(Q) = \inf_L \biggl\{ \frac1{\ell(Q)^n}\int_{2Q} \biggl(\frac{{\operatorname{dist}}(y,L)}{\ell(Q)}\biggr)^pd\mu(y)\biggr\}^{1/p}, \end{equation*} where the infimum is taken over all $n$-planes $L$ in ${\mathbb R}^d$.
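Let us record an elementary observation (not needed in the sequel, but useful to compare the coefficients for different exponents): by H\"older's inequality and the upper AD regularity of $\mu$, for $1\leq p\leq q<\infty$ one has, testing the infimum defining $\beta_{p,\mu}(Q)$ with the same $n$-plane $L$ as for $\beta_{q,\mu}(Q)$,
\begin{equation*}
\beta_{p,\mu}(Q)\leq\beta_{q,\mu}(Q)\biggl(\frac{\mu(2Q)}{\ell(Q)^n}\biggr)^{1/p-1/q}\lesssim\beta_{q,\mu}(Q).
\end{equation*}
In particular, if $\mu={\mathcal H}^n_L$ for a fixed $n$-plane $L$, then $\beta_{p,\mu}(Q)=0$ for all $Q\in{\mathcal D}$ and all $p$.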
For $p=\infty$ one replaces the $L^p$ norm by the supremum norm. The $\beta_{\infty,\mu}$ coefficients were first introduced by P. Jones in his celebrated work on rectifiability \cite{Jones-salesman}, while the $\beta_{p,\mu}$'s for $1\leq p<\infty$ were introduced by G. David and S. Semmes in their pioneering work on uniform rectifiability (see \cite{DS1} for example). Other coefficients that have been proved useful in the study of uniform rectifiability and boundedness of Calder\'on-Zygmund operators are the $\alpha$ coefficients introduced in \cite{To}. Let $F\subset{\mathbb R}^d$ be the closure of an open set. Given two finite Borel measures $\sigma$, $\nu$ on ${\mathbb R}^d$, one sets ${\operatorname{dist}}_F(\sigma,\nu):= \sup\bigl\{ \bigl|{\textstyle \int f\,d\sigma - \int f\,d\nu}\bigr|:\,{\rm Lip}(f) \leq1,\,{\operatorname{supp}} f\subset F\bigr\}.$ Finally, given a $\mu$-cube $Q\in{\mathcal D}$, consider the closed ball $B_Q:=B(z_Q,6\sqrt{d}\ell(Q))$, where $z_Q$ denotes the center of $Q$. Then one defines \begin{equation*} \alpha_\mu(Q):=\frac1{\ell(Q)^{n+1}}\,\inf_{c\geq0,L} \,{\operatorname{dist}}_{B_Q}(\mu,\,c{\mathcal H}^n_{L}), \end{equation*} where the infimum is taken over all constants $c\geq0$ and all $n$-planes $L$ in ${\mathbb R}^d$. The following result characterizes the uniform rectifiability of $\mu$ in terms of the $\alpha$ and $\beta$ coefficients (see \cite{DS1} for $(a)\Longleftrightarrow(b)$ and \cite{To} for $(a)\Longleftrightarrow(c)$). \begin{teo}\label{alphas unifrectif} Let $p\in[1,2]$ and let $\mu$ be an $n$-dimensional AD regular Borel measure in ${\mathbb R}^d$. The following are equivalent: \begin{itemize} \item[$(a)$] $\mu$ is uniformly $n$-rectifiable. \item[$(b)$] $\sum_{Q\in{\mathcal D}:\,Q\subset R}\beta_{p,\mu}(Q)^2\ell(Q)^n\lesssim\ell(R)^n$ for all $\mu$-cubes $R\in{\mathcal D}$. \item[$(c)$] $\sum_{Q\in{\mathcal D}:\,Q\subset R}\alpha_{\mu}(Q)^2\ell(Q)^n\lesssim\ell(R)^n$ for all $\mu$-cubes $R\in{\mathcal D}$. 
\end{itemize} \end{teo} For the case $\mu={\mathcal H}^n_\Gamma$ for some Lipschitz graph $\Gamma=\{x\in{\mathbb R}^d\,:\,x=(y,A(y)),\, y\in{\mathbb R}^n\}$, one can take ${\mathcal D}=\{(\widetilde Q\times{\mathbb R}^{d-n})\cap\Gamma:\,\widetilde Q\in{\mathcal D}({\mathbb R}^n)\}$, where ${\mathcal D}({\mathbb R}^n)$ denotes the standard dyadic lattice of ${\mathbb R}^n$. For $Q= (\widetilde Q\times{\mathbb R}^{d-n})\cap\Gamma\in{\mathcal D}$, we set \begin{equation*} \widetilde\alpha_\mu(Q):=\frac1{\ell(\widetilde Q)^{n+1}}\,\inf_{c\geq0,L} \,{\operatorname{dist}}_{6\widetilde Q\times{\mathbb R}^{d-n}}(\mu,\,c{\mathcal H}^n_{L}), \end{equation*} where the infimum is taken over all constants $c\geq0$ and all $n$-planes $L$ in ${\mathbb R}^d$. Then, it is easy to show that $\widetilde\alpha_\mu(Q)\approx\alpha_\mu(Q)$ for all $Q\in{\mathcal D}$. One can also define $\widetilde\beta_{p,\mu}(Q)$ in an analogous manner. By Theorem \ref{alphas unifrectif}, \begin{equation}\label{pack alpha graph} \sum_{Q\in{\mathcal D}:\,Q\subset R}(\widetilde\beta_{p,\mu}(Q)^2+\widetilde\alpha_\mu(Q)^2)\ell(Q)^n\leq C\ell(R)^n \end{equation} for all $R\in{\mathcal D}$, with $C$ independent of $R$. Moreover, one can show that this last inequality also holds replacing $Q$ and $R$ by $k_1Q$ and $k_2R$ for any $k_1,k_2\geq1$ given in advance, where $kQ:=(k\widetilde Q\times{\mathbb R}^{d-n})\cap\Gamma$ for $k>0$. \section{If $\Gamma$ is an $n$-dimensional Lipschitz graph, then\\ ${\mathcal V}_\rho\circ{\mathcal T}_\varphi:\,M({\mathbb R}^d)\to L^{1,\infty}({\mathcal H}^n_\Gamma)$ is a bounded operator}\label{4sec unif rectif teo3} The following result is contained in \cite[Theorem 1.1]{MT} (see also \cite[Main Theorem 3.0.1]{Mas-thesis}). \begin{teo}\label{main teo lip} Let $\rho>2$ and let $\mu$ be the $n$-dimensional Hausdorff measure restricted to an $n$-dimensional Lipschitz graph.
Then, the operator ${\mathcal V}_\rho\circ{\mathcal T}^\mu_{\widetilde\varphi}$ is bounded in $L^{2}(\mu)$. The bound of the norm only depends on $n$, $d$, $K$, $\rho$, $\varphi_{\mathbb R}$, and the slope of the graph. \end{teo} By very similar techniques to the ones used in the proof of the theorem above, one can prove the following. \begin{teo}\label{theorem lip} Let $\rho>2$ and let $\mu$ be the $n$-dimensional Hausdorff measure restricted to an $n$-dimensional Lipschitz graph. Then, the operator ${\mathcal V}_\rho\circ{\mathcal T}^\mu_{\varphi}$ is bounded in $L^{2}(\mu)$. The bound of the norm only depends on $n$, $d$, $K$, $\rho$, $\varphi_{\mathbb R}$, and the slope of the graph. \end{teo} \begin{proof}[{\bf{\em Sketch of the proof}}] The first step consists in obtaining the following basic estimate. Fix a cube $\widetilde P\subset{\mathbb R}^n$. Set $\Gamma:=\{x\in {\mathbb R}^d\,:\,x=(y,A(y)),\,y\in{\mathbb R}^n\},$ where $A:{\mathbb R}^n\to{\mathbb R}^{d-n}$ is a Lipschitz function supported in $\widetilde P$, and set $P:=(\widetilde P\times {\mathbb R}^{d-n})\cap\Gamma$. Set $\mu:=f{\mathcal H}^n_{\Gamma}$, where $f(x)=1$ for all $x\in \Gamma\setminus P$ and $C_0^{-1}\leq f(x)\leq C_0$ for all $x\in P$, for some constant $C_0>0$. For each $x\in\Gamma$, define \begin{equation}\label{definicio Wmu} W\mu(x)^2:=\sum_{m\in{\mathbb Z}}|(K\varphi_{2^{-m}}*\mu)(x)-(K\widetilde \varphi_{2^{-m}}*\mu)(x)|^{2} \end{equation} and \begin{equation}\label{definicio Smu} S\mu(x)^2:=\sup_{\{\epsilon_{m}\}}\sum_{j\in{\mathbb Z}} \,\sum_{m\in{\mathbb Z}:\,\epsilon_{m},\epsilon_{m+1}\in I_j} |(K\varphi_{\epsilon_{m+1}}^{\,\epsilon_{m}}*\mu)(x)|^{2}, \end{equation} where $I_j=[2^{-j-1},2^{-j})$ and the supremum is taken over all decreasing sequences of positive numbers $\{\epsilon_{m}\}_{m\in{\mathbb Z}}$.
Then, we claim that \begin{equation}\label{basic estimate} \left\|W\mu\right\|^{2}_{L^{2}(\mu)}+\left\|S\mu\right\|^{2}_{L^{2}(\mu)}\lesssim \sum_{Q\in{\mathcal D}}\big(\,\widetilde\alpha_\mu(C_1Q)^{2}+\widetilde\beta_{2,\mu}(Q)^2\,\big)\ell(Q)^n, \end{equation} where $C_1>0$ only depends on $C_0$, $n$, $d$, $K$, $\varphi_{\mathbb R}$, and ${\operatorname{Lip}}(A)$, and where ${\mathcal D}$ denotes the dyadic lattice associated to ${\mathcal H}^n_\Gamma$ defined below Theorem \ref{alphas unifrectif}. Let us prove the claim. If we define $\widetilde S\mu$ like $S\mu$ but replacing $\varphi_{\epsilon_{m+1}}^{\,\epsilon_{m}}$ by $\widetilde\varphi_{\epsilon_{m+1}}^{\,\epsilon_{m}}$, in the proof of Theorem \ref{main teo lip} in \cite{MT} it is shown that $\|\widetilde S\mu\|^{2}_{L^{2}(\mu)}$ is bounded above by the right hand side of (\ref{basic estimate}). The proof for $\|S\mu\|^{2}_{L^{2}(\mu)}$ is almost the same. Let us deal now with $W\mu$. Fix $D:=(\widetilde D\times{\mathbb R}^{d-n})\cap\Gamma\in{\mathcal D}$ with $\ell(D)=2^{-m}$ and $x\in D$. Let $L_D$ be an $n$-plane that minimizes $\widetilde\alpha_\mu(C_1D)$, where $C_1>0$ is some constant big enough which will be fixed later, and let $\sigma_D:=c_D{\mathcal H}^n_{L_D}$ be a minimizing measure for $\widetilde\alpha_\mu(C_1D)$. Let $L_D^x$ be the $n$-plane parallel to $L_D$ which contains $x$, and set $\sigma^x_D:=c_D{\mathcal H}^n_{L^x_D}$. Since $x\in D$ and $\ell(D)=2^{-m}$, $(\varphi_{2^{-m}}(x-\cdot)-\widetilde\varphi_{2^{-m}}(x-\cdot))K(x-\cdot)$ is a function supported in $C_1\widetilde D\times{\mathbb R}^{d-n}$ (for some constant $C_1$ big enough) and with Lipschitz constant smaller than $C2^{m(n+1)}$. 
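Let us briefly indicate why the support and Lipschitz bounds for this function hold (a routine verification, which we sketch under the standing assumptions that $|K(t)|\lesssim|t|^{-n}$ and $|\nabla K(t)|\lesssim|t|^{-n-1}$, and that, by the construction of $\varphi$ and $\widetilde\varphi$, the difference $\varphi_{2^{-m}}-\widetilde\varphi_{2^{-m}}$ is bounded, supported in an annulus $|t|\approx2^{-m}$, and satisfies $\|\nabla(\varphi_{2^{-m}}-\widetilde\varphi_{2^{-m}})\|_\infty\lesssim2^{m}$). Writing $\psi(y):=(\varphi_{2^{-m}}(x-y)-\widetilde\varphi_{2^{-m}}(x-y))K(x-y)$, the product rule yields, for $y$ in the support of $\psi$,
\begin{equation*}
|\nabla\psi(y)|\lesssim2^{m}\,|K(x-y)|+|\nabla K(x-y)|\lesssim2^{m}\,2^{mn}+2^{m(n+1)}\approx2^{m(n+1)},
\end{equation*}
and the support condition follows because $|x-y|\lesssim2^{-m}$ and $x\in D$ force $y\in C_1\widetilde D\times{\mathbb R}^{d-n}$ if $C_1$ is chosen big enough.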
Moreover, by the antisymmetry of the function $(\varphi_{2^{-m}}(x-\cdot)-\widetilde\varphi_{2^{-m}}(x-\cdot))K(x-\cdot)$, and since $\sigma_D^x$ is a multiple of the $n$-dimensional Hausdorff measure on an $n$-plane which contains $x$, we have $(K\varphi_{2^{-m}}*\sigma_D^x)(x)-(K\widetilde \varphi_{2^{-m}}*\sigma_D^x)(x)=0.$ Therefore, \begin{equation}\label{basic eq1} \begin{split} (K\varphi_{2^{-m}}&*\mu)(x)-(K\widetilde \varphi_{2^{-m}}*\mu)(x)= (K(\varphi_{2^{-m}}-\widetilde \varphi_{2^{-m}})*\mu)(x)\\ &=(K(\varphi_{2^{-m}}-\widetilde \varphi_{2^{-m}})*(\mu-\sigma_D))(x) +(K(\varphi_{2^{-m}}-\widetilde \varphi_{2^{-m}})*(\sigma_D-\sigma_D^x))(x). \end{split} \end{equation} Using the definition of $\widetilde\alpha_\mu$, we get \begin{equation}\label{dsds1} \begin{split} |(K(\varphi_{2^{-m}}-\widetilde \varphi_{2^{-m}})*(\mu-\sigma_D))(x)| \lesssim2^{m(n+1)}{\operatorname{dist}}_{C_1\widetilde D\times{\mathbb R}^{d-n}}(\mu,\sigma_D)\lesssim\widetilde\alpha_\mu(C_1 D). \end{split} \end{equation} Since $L_D^x$ is a translation of $L_D$, by standard estimates it is not hard to show that \begin{equation}\label{dsds} |(K(\varphi_{2^{-m}}-\widetilde \varphi_{2^{-m}})*(\sigma_D-\sigma_D^x))(x)|\lesssim2^{m}{\operatorname{dist}}(x,L_D)={\operatorname{dist}}(x,L_D)/\ell(D). \end{equation} Let ${\operatorname{dist}}_{\mathcal H}(E,F)$ denote the Hausdorff distance of two given sets $E,F\subset{\mathbb R}^d$, and set $\widetilde B_D:=6\widetilde D\times {\mathbb R}^{d-n}$. If $L_D^1$ and $L_D^2$ denote a minimizing $n$-plane for $\widetilde\beta_{1,\mu}(D)$ and $\widetilde\beta_{2,\mu}(D)$, respectively, one can show that ${\operatorname{dist}}_{\mathcal H}(L_D\cap \widetilde B_D,L_D^1\cap \widetilde B_D)\lesssim\widetilde\alpha_\mu(D)\ell(D)$ and that ${\operatorname{dist}}_{\mathcal H}(L_D^1\cap \widetilde B_D,L_D^2\cap \widetilde B_D)\lesssim\widetilde\beta_{2,\mu}(D)\ell(D)$. 
This easily implies that ${\operatorname{dist}}(x,L_D)\lesssim{\operatorname{dist}}(x,L^2_D)+\widetilde\beta_{2,\mu}(D)\ell(D)+\widetilde\alpha_\mu(D)\ell(D)$ for all $x\in D$. Applying this to (\ref{dsds}), and using also (\ref{dsds1}) and (\ref{basic eq1}), we obtain \begin{equation*} \begin{split} \left\|W\mu\right\|^{2}_{L^{2}(\mu)}&= \int\sum_{m\in{\mathbb Z}}|(K(\varphi_{2^{-m}}-\widetilde \varphi_{2^{-m}})*\mu)(x)|^{2}\,d\mu(x)\\ &=\sum_{m\in{\mathbb Z}}\,\sum_{D\in{\mathcal D}:\,\ell(D)=2^{-m}}\int_D|(K(\varphi_{2^{-m}}-\widetilde \varphi_{2^{-m}})*\mu)(x)|^{2}\,d\mu(x)\\ &\lesssim\sum_{m\in{\mathbb Z}}\,\sum_{D\in{\mathcal D}:\,\ell(D)=2^{-m}}\int_D\big({\operatorname{dist}}(x,L^2_D)/\ell(D)+\widetilde\beta_{2,\mu}(D)+\widetilde\alpha_\mu(C_1D)\big)^2\,d\mu(x)\\ &\lesssim\sum_{D\in{\mathcal D}}\big(\widetilde\alpha_\mu(C_1 D)^2+\widetilde\beta_{2,\mu}(D)^2\big)\ell(D)^n, \end{split} \end{equation*} which proves (\ref{basic estimate}). Let now $\mu$ be as in Theorem \ref{theorem lip}. Using (\ref{basic estimate}) and Theorem \ref{main teo lip}, one can show that there exists $C>0$ such that, for any cube $\widetilde D\subset{\mathbb R}^n$ and any $g\in L^\infty(\mu)$ supported in $D$ (where $D:=\widetilde D\times{\mathbb R}^{d-n}$), \begin{equation*} \int_{D}\big(({\mathcal V}_\rho\circ{\mathcal T}_{\varphi}^{\mu})g\big)^2\,d\mu\leq C\|g\|^2_{L^\infty(\mu)}\mu(D). \end{equation*} This yields the endpoint estimates ${\mathcal V}_\rho\circ{\mathcal T}^\mu_{\varphi}:H^1(\mu)\to L^1(\mu)$ and ${\mathcal V}_\rho\circ{\mathcal T}^\mu_{\varphi}:L^\infty(\mu)\to BMO(\mu)$, where $H^1(\mu)$ denotes the atomic Hardy space related to $\mu$. Then, by interpolation, one finally deduces that ${\mathcal V}_\rho\circ{\mathcal T}^\mu_{\varphi}$ is bounded in $L^2(\mu)$. Since this part of the proof is analogous to the one in the proof of Theorem \ref{main teo lip} (see \cite[Theorem 1.1]{MT}), we omit it. 
\end{proof} \subsection{Proof of Theorem \ref{4unif rectif teo3}} The proof of Theorem \ref{4unif rectif teo3} uses the Calder\'{o}n-Zygmund decomposition of Lemma \ref{4lema CZ} and rather standard arguments. Set $\mu:={\mathcal H}^n_{\Gamma\cap B}$, where $B\subset{\mathbb R}^d$ is some fixed ball. Let $\nu\in M({\mathbb R}^d)$ be a finite Radon measure with compact support and $\lambda>2^{d+1}\|\nu\|/\|\mu\|$. We will show that \begin{equation}\label{4cota mesuresL1debil} \mu\big(\big\{x\in{\mathbb R}^d\,:\,({\mathcal V}_\rho\circ{\mathcal T}_\varphi)\nu(x)>\lambda\big\}\big)\leq\frac{C}{\lambda}\,\|\nu\|, \end{equation} where $C>0$ depends on $n$, $d$, $K$, $\rho$ and $\Gamma$, but not on $B$. Let us check that this implies that ${\mathcal V}_\rho\circ{\mathcal T}_\varphi$ is bounded from $M({\mathbb R}^d)$ into $L^{1,\infty}({\mathcal H}^n_\Gamma)$. First, we show that (\ref{4cota mesuresL1debil}) also holds for $\nu$ without compact support. Set $\nu_N = \chi_{B(0,N)}\,\nu$ and let $N_0$ be such that ${\operatorname{supp}}\mu\subset B(0,N_0)$. Then it is not hard to show that, for $x\in{\operatorname{supp}}\mu$, $$|({\mathcal V}_\rho\circ{\mathcal T}_\varphi)\nu(x)-({\mathcal V}_\rho\circ{\mathcal T}_\varphi)\nu_N(x)|\leq C\,\frac{|\nu|({\mathbb R}^d\setminus B(0,N))}{N-N_0},$$ thus $({\mathcal V}_\rho\circ{\mathcal T}_\varphi)\nu_N(x)\to ({\mathcal V}_\rho\circ{\mathcal T}_\varphi)\nu(x)$ for all $x\in {\operatorname{supp}}\mu$, and since the estimate (\ref{4cota mesuresL1debil}) holds for the compactly supported measures $\nu_N$, letting $N\to\infty$, we deduce that it also holds for $\nu$. Now, by increasing the size of the ball $B$ and by monotone convergence, we deduce that ${\mathcal H}^n_\Gamma\big(\big\{x\in{\mathbb R}^d\,:\,({\mathcal V}_\rho\circ{\mathcal T}_\varphi)\nu(x)>\lambda\big\}\big)\leq C\lambda^{-1}\|\nu\|$, as desired.
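Let us justify the inequality $|({\mathcal V}_\rho\circ{\mathcal T}_\varphi)\nu(x)-({\mathcal V}_\rho\circ{\mathcal T}_\varphi)\nu_N(x)|\leq C|\nu|({\mathbb R}^d\setminus B(0,N))/(N-N_0)$ used above (a short verification; here we use that, as we may assume, $\varphi_{\mathbb R}$ is monotone with values in $[0,1]$, so that for each fixed $t$ the total variation of $\epsilon\mapsto\varphi_\epsilon(t)$ is at most $1$). For $x\in{\operatorname{supp}}\mu$ and any decreasing sequence $\{\epsilon_m\}_{m\in{\mathbb Z}}$ of positive numbers,
\begin{equation*}
\bigg(\sum_{m\in{\mathbb Z}}|(K\varphi_{\epsilon_{m+1}}^{\epsilon_m}*(\nu-\nu_N))(x)|^\rho\bigg)^{1/\rho} \leq\int\sum_{m\in{\mathbb Z}}|\varphi_{\epsilon_{m+1}}^{\epsilon_m}(x-y)|\,|K(x-y)|\,d|\nu-\nu_N|(y) \lesssim\frac{|\nu|({\mathbb R}^d\setminus B(0,N))}{(N-N_0)^{n}},
\end{equation*}
because $\sum_m|\varphi_{\epsilon_{m+1}}^{\epsilon_m}(x-y)|\leq1$ and $|K(x-y)|\lesssim|x-y|^{-n}\leq(N-N_0)^{-n}$ for $y\in{\operatorname{supp}}(\nu-\nu_N)\subset{\mathbb R}^d\setminus B(0,N)$ and $x\in{\operatorname{supp}}\mu\subset B(0,N_0)$. Taking the supremum over all such sequences and using the sublinearity of ${\mathcal V}_\rho\circ{\mathcal T}_\varphi$, the stated bound follows for $N-N_0\geq1$, since then $(N-N_0)^{-n}\leq(N-N_0)^{-1}$.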
To prove (\ref{4cota mesuresL1debil}) for $\nu\in M({\mathbb R}^d)$ with compact support, let $\{Q_j\}_j$ be the almost disjoint family of cubes of Lemma \ref{4lema CZ}, and set $\Omega:=\bigcup_jQ_j$ and $R_j:=6Q_j$. Then we can write $\nu=g\mu+\nu_b$, with $$g\mu=\chi_{{\mathbb R}^d\setminus\Omega}\nu+ \sum_j b_j\mu\quad\text{and}\quad \nu_b=\sum_j\nu_b^j:=\sum_j\left(w_j\nu-b_j\mu\right),$$ where the functions $b_j$ satisfy (\ref{4lema CZ 4}), (\ref{4lema CZ 5}), (\ref{4lema CZ 6}) and $w_j=\chi_{Q_j}\big(\sum_k \chi_{Q_k}\big)^{-1}$. By the subadditivity of ${\mathcal V}_\rho\circ{\mathcal T}_\varphi$, we have \begin{equation}\label{4cota mesuresL1debil 2} \begin{split} \mu\big(&\big\{x\in{\mathbb R}^d\,:\,({\mathcal V}_\rho\circ{\mathcal T}_\varphi)\nu(x)>\lambda\big\}\big)\\ &\leq\mu\big(\big\{x\in{\mathbb R}^d\,:\,({\mathcal V}_\rho\circ{\mathcal T}_\varphi^\mu)g(x)>\lambda/2\big\}\big) +\mu\big(\big\{x\in{\mathbb R}^d\,:\,({\mathcal V}_\rho\circ{\mathcal T}_\varphi)\nu_b(x)>\lambda/2\big\}\big). \end{split} \end{equation} Since ${\mathcal V}_\rho\circ{\mathcal T}_\varphi^{{\mathcal H}^n_\Gamma}$ is bounded in $L^2({\mathcal H}^n_\Gamma)$ by Theorem \ref{theorem lip}, it is easy to show that ${\mathcal V}_\rho\circ{\mathcal T}_\varphi^{\mu}$ is bounded in $L^2(\mu)$, with a bound independent of $B$. Notice that $|g|\leq C\lambda$ by (\ref{4lema CZ 3}) and (\ref{4lema CZ 6}). 
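Let us also justify that ${\mathcal V}_\rho\circ{\mathcal T}_\varphi^{\mu}$ is bounded in $L^2(\mu)$ with a bound independent of $B$ (a one-line verification): for $f\in L^2(\mu)$ one has $f\mu=(f\chi_{\Gamma\cap B}){\mathcal H}^n_\Gamma$, hence ${\mathcal T}_\varphi^\mu f={\mathcal T}_\varphi^{{\mathcal H}^n_\Gamma}(f\chi_{\Gamma\cap B})$, and moreover $\mu\leq{\mathcal H}^n_\Gamma$. Therefore,
\begin{equation*}
\|({\mathcal V}_\rho\circ{\mathcal T}_\varphi^{\mu})f\|_{L^2(\mu)} \leq\|({\mathcal V}_\rho\circ{\mathcal T}_\varphi^{{\mathcal H}^n_\Gamma})(f\chi_{\Gamma\cap B})\|_{L^2({\mathcal H}^n_\Gamma)} \lesssim\|f\chi_{\Gamma\cap B}\|_{L^2({\mathcal H}^n_\Gamma)}=\|f\|_{L^2(\mu)},
\end{equation*}
where the middle inequality is Theorem \ref{theorem lip}, whose constant does not depend on $B$.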
Then, using (\ref{4lema CZ 5}), \begin{equation}\label{4cota mesuresL1debil 3} \begin{split} \mu\big(\big\{x\in{\mathbb R}^d\,:\,({\mathcal V}_\rho\circ{\mathcal T}_\varphi^\mu)g(x)>\lambda/2\big\}\big) &\lesssim\frac{1}{\lambda^{2}}\int|({\mathcal V}_\rho\circ{\mathcal T}_\varphi^\mu)g|^2\,d\mu \lesssim\frac{1}{\lambda^{2}}\int|g|^2\,d\mu\\ &\lesssim\frac{1}{\lambda}\int|g|\,d\mu \lesssim\frac{1}{\lambda}\bigg(|\nu|({\mathbb R}^d\setminus\Omega)+\sum_j\int_{R_j}|b_j|\,d\mu\bigg)\\ &\lesssim\frac{1}{\lambda}\bigg(|\nu|({\mathbb R}^d\setminus\Omega)+\sum_j|\nu|(Q_j)\bigg)\lesssim\frac{\|\nu\|}{\lambda}. \end{split} \end{equation} Let $\widehat\Omega:=\bigcup_j2Q_j$. By (\ref{4lema CZ 1}), we have $\mu(\widehat\Omega)\leq\sum_j\mu(2Q_j)\lesssim\lambda^{-1}\sum_j|\nu|(Q_j)\lesssim\lambda^{-1}\|\nu\|$. We are going to show now that \begin{equation}\label{4cota mesuresL1debil 1} \mu\big(\big\{x\in{\mathbb R}^d\setminus\widehat\Omega\,:\,({\mathcal V}_\rho\circ{\mathcal T}_\varphi)\nu_b(x)>\lambda/2\big\}\big)\leq\frac{C}{\lambda}\,\|\nu\|, \end{equation} and then (\ref{4cota mesuresL1debil}) is a direct consequence of (\ref{4cota mesuresL1debil 2}), (\ref{4cota mesuresL1debil 3}), (\ref{4cota mesuresL1debil 1}) and the estimate $\mu(\widehat\Omega)\lesssim\lambda^{-1}\|\nu\|$. Since ${\mathcal V}_\rho\circ{\mathcal T}_\varphi$ is sublinear, \begin{equation}\label{4cota mesuresL1debil equa1} \begin{split} \mu\big(\big\{x\in{\mathbb R}^d\setminus\widehat\Omega\,:\,({\mathcal V}_\rho&\circ{\mathcal T}_\varphi)\nu_b(x)>\lambda/2\big\}\big) \lesssim\frac{1}{\lambda}\sum_{j}\int_{{\mathbb R}^d\setminus\widehat\Omega}({\mathcal V}_\rho\circ{\mathcal T}_\varphi)\nu_b^j\,d\mu\\ &\leq\frac{1}{\lambda}\sum_{j}\int_{{\mathbb R}^d\setminus 2R_j}({\mathcal V}_\rho\circ{\mathcal T}_\varphi)\nu_b^j\,d\mu +\frac{1}{\lambda}\sum_{j}\int_{2R_j\setminus 2Q_j}({\mathcal V}_\rho\circ{\mathcal T}_\varphi)\nu_b^j\,d\mu. 
\end{split} \end{equation} We are going to estimate the two terms on the right of (\ref{4cota mesuresL1debil equa1}) separately. Let us start with the first one. Given $j$ and $x\in{\operatorname{supp}}\mu\setminus2R_j$, let $\{\epsilon_m\}_{m\in{\mathbb Z}}$ be a decreasing sequence of positive numbers (which depends on $j$ and $x$, i.e. $\epsilon_m\equiv\epsilon_m(j,x)$) such that \begin{equation}\label{4cota mesuresL1debil equa3} \begin{split} ({\mathcal V}_\rho\circ{\mathcal T}_\varphi)\nu_b^j(x)& \leq2\bigg(\sum_{m\in{\mathbb Z}}|(K\varphi_{\epsilon_{m+1}}^{\epsilon_m}*\nu_b^j)(x)|^\rho\bigg)^{1/\rho}. \end{split} \end{equation} If we set $I_k:=[2^{-k-1},2^{-k})$, we can decompose ${\mathbb Z}={\mathcal S}\cup{\mathcal L}$, where \begin{equation*} \begin{split} &{\mathcal L}:=\{m\in{\mathbb Z}\,:\,\epsilon_m\in I_k,\,\epsilon_{m+1}\in I_i,\text{ for }i>k\},\\ &{\mathcal S}:=\bigcup_{k\in{\mathbb Z}}{\mathcal S}_k,\quad{\mathcal S}_k:=\{m\in{\mathbb Z}\,:\,\epsilon_{m},\epsilon_{m+1}\in I_k\}. \end{split} \end{equation*} Let $z_j$ denote the center of $Q_j$ (and of $R_j$). Then, since $\nu_b^j(R_j)=0$ and ${\operatorname{supp}}\nu_b^j\subset R_j$, \begin{equation}\label{4cota mesuresL1debil equa2} \begin{split} |(K\varphi_{\epsilon_{m+1}}^{\epsilon_m}*\nu_b^j)(x)| &=\bigg|\int\varphi_{\epsilon_{m+1}}^{\epsilon_m}(x-y)K(x-y)\,d\nu_b^j(y)\bigg|\\ &\leq\int\big|\varphi_{\epsilon_{m+1}}^{\epsilon_m}(x-y)K(x-y) -\varphi_{\epsilon_{m+1}}^{\epsilon_m}(x-z_j)K(x-z_j)\big|\,d|\nu_b^j|(y). \end{split} \end{equation} If $m\in{\mathcal L}$, it is easy to see that $|\nabla(\varphi_{\epsilon_{m+1}}^{\epsilon_m}K)(t)|\leq|\nabla(\varphi_{\epsilon_{m+1}}K)(t)|+|\nabla(\varphi_{\epsilon_m}K)(t)|\lesssim|t|^{-n-1}$ for all $t\in{\mathbb R}^d\setminus\{0\}$. 
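Let us detail this elementary computation. Since $|\nabla\varphi_\epsilon(t)|\lesssim\epsilon^{-1}$ and, by the construction of $\varphi_{\mathbb R}$, $\nabla\varphi_\epsilon$ is supported where $|t|\approx\epsilon$, the size and smoothness assumptions on $K$ give, for every $\epsilon>0$ and $t\neq0$,
\begin{equation*}
|\nabla(\varphi_\epsilon K)(t)|\leq|\nabla\varphi_\epsilon(t)|\,|K(t)|+\varphi_\epsilon(t)\,|\nabla K(t)| \lesssim\epsilon^{-1}|t|^{-n}\chi_{\{|t|\approx\epsilon\}}(t)+|t|^{-n-1}\lesssim|t|^{-n-1}.
\end{equation*}
Hence, by the mean value theorem applied in (\ref{4cota mesuresL1debil equa2}), for $y\in R_j$, $x\in{\operatorname{supp}}\mu\setminus2R_j$, and $m\in{\mathcal L}$,
\begin{equation*}
\big|\varphi_{\epsilon_{m+1}}^{\epsilon_m}(x-y)K(x-y)-\varphi_{\epsilon_{m+1}}^{\epsilon_m}(x-z_j)K(x-z_j)\big|\lesssim|y-z_j|\,|x-z_j|^{-n-1}\lesssim\ell(R_j)\,|x-z_j|^{-n-1},
\end{equation*}
since every point in the segment from $x-y$ to $x-z_j$ has modulus comparable to $|x-z_j|$.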
Moreover, since $x\in{\mathbb R}^d\setminus 2R_j$ and ${\operatorname{supp}}\nu_b^j\subset R_j$, there are finitely many $m\in{\mathcal L}$ such that $(K\varphi_{\epsilon_{m+1}}^{\epsilon_m}*\nu_b^j)(x)\neq0$, and their number depends only on $n$ and $d$. On the other hand, if $m\in{\mathcal S}_k$, it is not hard to show that $|\nabla(\varphi_{\epsilon_{m+1}}^{\epsilon_m}K)(t)|\lesssim2^{k}|\epsilon_m-\epsilon_{m+1}||t|^{-n-1}$. Actually, this follows from the fact that $(\varphi_{\epsilon_{m+1}}^{\epsilon_m}K)(t)\neq0$ only if $|t|\approx 2^{-k}$, together with the estimates \begin{equation}\label{Short eq1} \begin{split} |\varphi_{\epsilon_{m+1}}^{\,\epsilon_{m}}(t)|&=\left|\varphi_{\mathbb R}\bigg(\frac{|t|}{\epsilon_{m+1}}\bigg)-\varphi_{\mathbb R}\bigg(\frac{|t|}{\epsilon_{m}}\bigg)\right| \leq\|\varphi'_{\mathbb R}\|_{L^\infty({\mathbb R})}\left|\frac{|t|}{\epsilon_{m+1}}-\frac{|t|}{\epsilon_{m}}\right|\\ &=\|\varphi'_{\mathbb R}\|_{\infty}|t|\,\frac{\epsilon_{m}-\epsilon_{m+1}}{\epsilon_{m}\epsilon_{m+1}}\lesssim2^{k}|\epsilon_{m}-\epsilon_{m+1}| \end{split} \end{equation} and \begin{equation}\label{Short eq2} \begin{split} \big|\partial_{t^i}\big(\varphi_{\epsilon_{m+1}}^{\,\epsilon_m}(t)\big)\big| &\leq\left|\varphi'_{\mathbb R}\left(\frac{|t|}{\epsilon_{m}}\right)\frac{1}{\epsilon_m}- \varphi'_{\mathbb R}\left(\frac{|t|}{\epsilon_{m+1}}\right)\frac{1}{\epsilon_{m+1}}\right|\\ &\leq\left|\varphi'_{\mathbb R}\left(\frac{|t|}{\epsilon_{m}}\right)\right|\left|\frac{1}{\epsilon_{m}}-\frac{1}{\epsilon_{m+1}}\right| +\left|\varphi'_{\mathbb R}\left(\frac{|t|}{\epsilon_{m}}\right)-\varphi'_{\mathbb R}\left(\frac{|t|}{\epsilon_{m+1}}\right)\right|\frac{1}{\epsilon_{m+1}}\\ &\leq\left(\|\varphi_{\mathbb R}'\|_{\infty}+\|\varphi''_{\mathbb R}\|_{\infty}\frac{|t|}{\epsilon_{m+1}}\right)\frac{\epsilon_{m}-\epsilon_{m+1}}{\epsilon_{m}\epsilon_{m+1}}\lesssim2^{k}(\epsilon_{m}-\epsilon_{m+1})|t|^{-1}, \end{split} \end{equation} where $1\leq i\leq
d$ and $t^i$ denotes the $i$-th coordinate of $t\in{\mathbb R}^d$ (recall that $\epsilon_m\approx\epsilon_{m+1}\approx 2^{-k}$ for $m\in{\mathcal S}_k$ and we assumed $|t|\approx 2^{-k}$). Similarly to the case $m\in{\mathcal L}$, there are finitely many $k\in{\mathbb Z}$ such that ${\operatorname{supp}}\varphi_{2^{-k-1}}^{2^{-k}}(x-\cdot)\cap R_j\neq\emptyset$, and the number only depends on $n$ and $d$ (notice that ${\operatorname{supp}}\varphi_{\epsilon_{m+1}}^{\epsilon_m}(x-\cdot)\subset{\operatorname{supp}}\varphi_{2^{-k-1}}^{2^{-k}}(x-\cdot)$ for all $m\in{\mathcal S}_k$). From these estimates and remarks, and (\ref{4cota mesuresL1debil equa3}), (\ref{4cota mesuresL1debil equa2}), we obtain \begin{equation*} \begin{split} ({\mathcal V}_\rho\circ{\mathcal T}_\varphi)\nu_b^j(x) &\lesssim \sum_{k\in{\mathbb Z}}\,\sum_{m\in{\mathcal S}_k}|(K\varphi_{\epsilon_{m+1}}^{\epsilon_m}*\nu_b^j)(x)|+\sum_{m\in{\mathcal L}}|(K\varphi_{\epsilon_{m+1}}^{\epsilon_m}*\nu_b^j)(x)|\\ &\lesssim \sum_{{\begin{subarray}{c}k\in{\mathbb Z}:\, {\operatorname{supp}}\varphi_{2^{-k-1}}^{2^{-k}}(x-\cdot)\cap R_j\neq\emptyset \end{subarray}}}\,\sum_{m\in{\mathcal S}_k} 2^{k}|\epsilon_m-\epsilon_{m+1}||x-z_j|^{-n-1}\ell(R_j)\|\nu_b^j\|\\ &\quad+\sum_{\begin{subarray}{c}m\in{\mathcal L}:\, {\operatorname{supp}}\varphi_{\epsilon_{m+1}}^{\epsilon_m}(x-\cdot)\cap R_j\neq\emptyset \end{subarray}}|x-z_j|^{-n-1}\ell(R_j)\|\nu_b^j\| \lesssim|x-z_j|^{-n-1}\ell(R_j)\|\nu_b^j\| \end{split} \end{equation*} for all $j$ and $x\in{\operatorname{supp}}\mu\setminus 2R_j$. Therefore, using that $\mu$ has $n$-dimensional growth, that $\|\nu_b^j\|\lesssim|\nu|(Q_j)$, and that the $Q_j$'s are almost disjoint, \begin{equation}\label{4cota mesuresL1debil equa5} \begin{split} \sum_{j}\int_{{\mathbb R}^d\setminus 2R_j}({\mathcal V}_\rho\circ{\mathcal T}_\varphi)\nu_b^j\,d\mu\lesssim \sum_{j}\ell(R_j)\|\nu_b^j\|\int_{{\mathbb R}^d\setminus 2R_j}|x-z_j|^{-n-1}\,d\mu\lesssim \sum_{j}\|\nu_b^j\|\lesssim\|\nu\|.
\end{split} \end{equation} Let us now estimate the second term on the right hand side of (\ref{4cota mesuresL1debil equa1}). As above, given $j$ and $x\in 2R_j\setminus2Q_j$, let $\{\epsilon_m\}_{m\in{\mathbb Z}}$ be a decreasing sequence of positive numbers such that \begin{equation*} \begin{split} ({\mathcal V}_\rho\circ{\mathcal T}_\varphi)(w_j\nu)(x)& \leq2\bigg(\sum_{m\in{\mathbb Z}}|(K\varphi_{\epsilon_{m+1}}^{\epsilon_m}*(w_j\nu))(x)|^\rho\bigg)^{1/\rho}, \end{split} \end{equation*} where $w_j=\chi_{Q_j}\big(\sum_k\chi_{Q_k}\big)^{-1}$. Since $\rho>2$, ${\mathcal V}_\rho\circ{\mathcal T}_\varphi$ is sublinear, and since $\nu_b^j=w_j\nu-b_j\mu$, for $x\in2R_j\setminus2Q_j$ we have \begin{equation*} \begin{split} ({\mathcal V}_\rho\circ{\mathcal T}_\varphi)\nu_b^j(x)&\leq({\mathcal V}_\rho\circ{\mathcal T}_\varphi)(w_j\nu)(x)+({\mathcal V}_\rho\circ{\mathcal T}_\varphi)(b_j\mu)(x)\\ &\leq2\sum_{m\in{\mathbb Z}}\big|(K\varphi_{\epsilon_{m+1}}^{\epsilon_m}*(w_j\nu))(x)\big|+({\mathcal V}_\rho\circ{\mathcal T}_\varphi^\mu)b_j(x)\\ &\lesssim|\nu|(Q_j)|x-z_j|^{-n}+({\mathcal V}_\rho\circ{\mathcal T}_\varphi^\mu)b_j(x). \end{split} \end{equation*} Since ${\mathcal V}_\rho\circ{\mathcal T}_\varphi^\mu$ is bounded in $L^2(\mu)$, using the estimate above and Cauchy-Schwarz we get \begin{equation*} \begin{split} \sum_j\int_{2R_j\setminus 2Q_j}({\mathcal V}_\rho\circ{\mathcal T}_\varphi)\nu_b^j\,d\mu &\lesssim\sum_j\int_{2R_j\setminus2Q_j}\frac{|\nu|(Q_j)}{|x-z_j|^{n}}\,d\mu(x) +\sum_{j}\int_{2R_j\setminus 2Q_j}({\mathcal V}_\rho\circ{\mathcal T}_\varphi^\mu)b_j\,d\mu\\ &\lesssim\sum_j|\nu|(Q_j)\frac{\mu(2R_j)}{\ell(Q_j)^{n}} +\sum_{j}\|({\mathcal V}_\rho\circ{\mathcal T}_\varphi^\mu)b_j\|_{L^2(\mu)}\mu(2R_j)^{1/2}\\ &\lesssim\sum_j|\nu|(Q_j) +\sum_{j}\|b_j\|_{L^\infty(\mu)}\mu(R_j) \lesssim\sum_j|\nu|(Q_j)\lesssim\|\nu\|.
\end{split} \end{equation*} Together with (\ref{4cota mesuresL1debil equa5}) and (\ref{4cota mesuresL1debil equa1}), this proves (\ref{4cota mesuresL1debil 1}), and Theorem \ref{4unif rectif teo3} follows. \section{If $\mu$ is a uniformly $n$-rectifiable measure, then\\ ${\mathcal V}_\rho\circ{\mathcal T}_\varphi^\mu:\,L^p(\mu)\to L^p(\mu)$ is a bounded operator for $1<p<\infty$}\label{4sec acotacio unif rectif} The purpose of this section is to prove the following theorem and the subsequent corollary. \begin{teo}\label{4unif rectif teo} Let $\mu$ be an $n$-dimensional AD regular Borel measure in ${\mathbb R}^d$ and let $\rho>2$. Assume that there exist constants $C_0$ and $C_1$ such that, for each ball $B$ centered on ${\operatorname{supp}}\mu$, there is a set $F=F_B$ such that: \begin{itemize} \item[$(a)$]$\mu(F\cap B)\geq C_0\mu(B),$ \item[$(b)$] ${\mathcal V}_\rho\circ{\mathcal T}_\varphi$ is bounded from $M({\mathbb R}^d)$ to $L^{1,\infty}({\mathcal H}^n_F)$ with constant bounded by $C_1$. \end{itemize} Then ${\mathcal V}_\rho\circ{\mathcal T}_\varphi$ is bounded from $M({\mathbb R}^d)$ to $L^{1,\infty}(\mu)$, and ${\mathcal V}_\rho\circ{\mathcal T}_\varphi^{\mu}$ is a bounded operator in $L^p(\mu)$ for all $1<p<\infty$. \end{teo} \begin{coro}\label{unif rectif implies var smooth} If $\mu$ is an $n$-dimensional AD regular uniformly $n$-rectifiable measure, then ${\mathcal V}_\rho\circ{\mathcal T}_\varphi^{\mu}$ is a bounded operator in $L^p(\mu)$ for all $1<p<\infty$ and $\rho>2$. Moreover, the operator ${\mathcal V}_\rho\circ{\mathcal T}_\varphi$ is bounded from $M({\mathbb R}^d)$ to $L^{1,\infty}(\mu)$, so ${\mathcal V}_\rho\circ{\mathcal T}_\varphi^{\mu}$ is also of weak type $(1,1)$.
\end{coro} \begin{proof}[{\bf {\em Proof.}}] Recall from \cite[Definition 1.26]{DS2} that a Borel measure $\nu$ in ${\mathbb R}^d$ has $BPLG$ ({\em big pieces of Lipschitz graphs}) if $\nu$ is $n$-dimensional AD regular and if there exist constants $C_1>0$ and $\theta>0$ such that, for any $x\in {\operatorname{supp}}\nu$ and $0<r<{\operatorname{diam}}({\operatorname{supp}}\nu)$, there is (a rotation and translation of) an $n$-dimensional Lipschitz graph $\Gamma$ with constant less than $C_1$ such that $\nu(\Gamma\cap B(x,r))\geq\theta r^n$. Thus, if $\nu$ has $BPLG$, the assumption $(a)$ of Theorem \ref{4unif rectif teo} is satisfied for $\nu$ by taking $F=\Gamma$, while Theorem \ref{4unif rectif teo3} implies that the assumption $(b)$ holds with a uniform constant. Therefore, from Theorem \ref{4unif rectif teo} we deduce that, if $\nu$ has $BPLG$ and $\rho>2$, then ${\mathcal V}_\rho\circ{\mathcal T}_\varphi$ is bounded from $M({\mathbb R}^d)$ to $L^{1,\infty}(\nu)$. Similarly, a measure $\nu$ has $(BP)^2LG$ ({\em big pieces of big pieces of Lipschitz graphs}) if there exist constants $C_g$, $\theta$, and $0<\alpha\leq1$ so that, if $B$ is any ball centered on ${\operatorname{supp}}\nu$, then there is an $n$-dimensional AD regular set $F\subset{\mathbb R}^d$ (with constant bounded by $C_g$) such that $\nu(F\cap B)\geq\alpha\nu(B)$ and such that ${\mathcal H}^n_ F$ has $BPLG$ with uniform constants. So ${\mathcal V}_\rho\circ{\mathcal T}_\varphi$ is a bounded operator from $M({\mathbb R}^d)$ to $L^{1,\infty}({\mathcal H}^n_F)$, by the comments above. Hence, we can apply once again Theorem \ref{4unif rectif teo} to $\nu$ (now $(b)$ is satisfied for the big pieces $F$ of $\nu$), and we deduce that, for any measure $\nu$ which has $(BP)^2LG$, ${\mathcal V}_\rho\circ{\mathcal T}_\varphi$ is bounded from $M({\mathbb R}^d)$ to $L^{1,\infty}(\nu)$. 
Similar arguments yield that ${\mathcal V}_\rho\circ{\mathcal T}_\varphi^{\nu}$ is a bounded operator in $L^p(\nu)$ for all $1<p<\infty$. Finally, from \cite[page 22]{DS2} and the remark given in \cite[page 16]{DS2}, we know that if $\mu$ is $n$-dimensional AD regular, then being uniformly $n$-rectifiable is equivalent to having $(BP)^2LG$. Therefore, the corollary is proved by applying the comments above to $\nu=\mu$. \end{proof} Since the arguments for proving Theorem \ref{4unif rectif teo} are more or less standard in Calder\'on-Zygmund theory, for the sake of brevity we will only sketch its proof (see \cite[Chapter 2]{Tolsa-llibre} or \cite[Proposition 1.28 of Part I]{DS2} for a similar argument). \begin{proof}[{\bf{\em Sketch of the proof of} Theorem \ref{4unif rectif teo}}] The proof follows by the so-called {\em good $\lambda$ inequality} method. Fix $\rho>2$ and let $M^\mu$ denote the Hardy-Littlewood maximal operator $$M^\mu \nu(x):=\sup_{r>0}\frac{|\nu|(B(x,r))}{\mu(B(x,r))},\quad\text{ for }\nu\in M({\mathbb R}^d)\text{ and } x\in{\operatorname{supp}}\mu.$$ {\em The good $\lambda$ inequality}: one shows that there exists some absolute constant $\eta>0$ such that for all $\epsilon>0$ there exists $\delta:=\delta(\epsilon)>0$ such that \begin{equation}\label{4eqgli0} \begin{split} \mu\big(\big\{x\in{\mathbb R}^d\,:\,({\mathcal V}_\rho\circ{\mathcal T}_\varphi)\nu(x)&>(1+\epsilon)\lambda,\;M^\mu \nu(x)\leq\delta\lambda\big\}\big)\\ &\leq(1-\eta)\mu\big(\big\{x\in{\mathbb R}^d\,:\,({\mathcal V}_\rho\circ{\mathcal T}_\varphi)\nu(x)>\lambda\big\}\big) \end{split} \end{equation} for all $\lambda>0$ and $\nu\in M({\mathbb R}^d)$. It is easy to check that this implies that ${\mathcal V}_\rho\circ{\mathcal T}_\varphi$ is bounded from $M({\mathbb R}^d)$ to $L^{1,\infty}(\mu)$, and that ${\mathcal V}_\rho\circ{\mathcal T}_\varphi^{\mu}$ is bounded in $L^p(\mu)$ for all $1<p<\infty$, by standard arguments (recall that $M^\mu$ is bounded in these spaces).
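For instance, let us briefly recall the standard deduction of the $L^p(\mu)$ bound from (\ref{4eqgli0}), assuming a priori that the left hand side below is finite (which can be guaranteed by the usual truncation arguments). Writing $Tf:=({\mathcal V}_\rho\circ{\mathcal T}_\varphi)(f\mu)$ and applying (\ref{4eqgli0}) to $\nu=f\mu$,
\begin{equation*}
\begin{split}
\|Tf\|_{L^p(\mu)}^p&=(1+\epsilon)^p\,p\int_0^\infty\lambda^{p-1}\mu\big(\big\{x\,:\,Tf(x)>(1+\epsilon)\lambda\big\}\big)\,d\lambda\\
&\leq(1+\epsilon)^p(1-\eta)\|Tf\|_{L^p(\mu)}^p+(1+\epsilon)^p\delta^{-p}\|M^\mu(f\mu)\|_{L^p(\mu)}^p.
\end{split}
\end{equation*}
Taking $\epsilon>0$ so small that $(1+\epsilon)^p(1-\eta)<1$ and absorbing the first term into the left hand side, we obtain $\|Tf\|_{L^p(\mu)}\lesssim\|M^\mu(f\mu)\|_{L^p(\mu)}\lesssim\|f\|_{L^p(\mu)}$.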
The proof of (\ref{4eqgli0}) is quite standard. The interested reader may look at \cite[Theorem 5.2.1]{Mas-thesis} for the detailed proof, or at \cite[Chapter 2]{Tolsa-llibre} for similar arguments. The only point that we should mention is that, in order to pursue the good $\lambda$ inequality method, one needs the following estimate: let $\nu\in M({\mathbb R}^d)$, consider a ball $B\subset{\mathbb R}^d$ and take $x,z\in B$. Then, \begin{equation}\label{4eqgli01} \big|({\mathcal V}_\rho\circ{\mathcal T}_\varphi)(\chi_{{\mathbb R}^d\setminus 2B}\nu)(x)-({\mathcal V}_\rho\circ{\mathcal T}_\varphi)(\chi_{{\mathbb R}^d\setminus 2B}\nu)(z)\big|\lesssim M^\mu\nu(x). \end{equation} We finish the sketch of the proof of Theorem \ref{4unif rectif teo} by showing (\ref{4eqgli01}). Since $x,z\in B$ and ${\mathcal V}_\rho\circ{\mathcal T}_\varphi$ is sublinear and positive, by the mean value theorem, \begin{equation}\label{4eq unif rectif1} \begin{split} \big|({\mathcal V}_\rho\circ{\mathcal T}_\varphi)(&\chi_{{\mathbb R}^d\setminus 2B}\nu)(x)-({\mathcal V}_\rho\circ{\mathcal T}_\varphi)(\chi_{{\mathbb R}^d\setminus 2B}\nu)(z)\big|\\ &\leq\sup_{\epsilon_m}\bigg(\sum_{m\in{\mathbb Z}} |(K{\varphi_{\epsilon_{m+1}}^{\epsilon_m}}*(\chi_{{\mathbb R}^d\setminus 2B}\nu))(x)-(K{\varphi_{\epsilon_{m+1}}^{\epsilon_m}}*(\chi_{{\mathbb R}^d\setminus 2B}\nu))(z)|^\rho\bigg)^{1/\rho}\\ &\leq\sup_{\epsilon_m}\bigg(\sum_{m\in{\mathbb Z}} \bigg(\int_{B_m(x,z)}|\nabla(\varphi_{\epsilon_{m+1}}^{\epsilon_m}K)(u_{x,z}(y)-y)||x-z|\,d|\nu|(y)\bigg)^\rho\bigg)^{1/\rho}, \end{split} \end{equation} where $B_m(x,z):=({{\mathbb R}^d\setminus 2B})\cap({\operatorname{supp}}\varphi_{\epsilon_{m+1}}^{\epsilon_m}(x-\cdot)\cup{\operatorname{supp}}\varphi_{\epsilon_{m+1}}^{\epsilon_m}(z-\cdot))$ and $u_{x,z}(y)$ is some point lying on the segment joining $x$ and $z$.
For each $x$ and $z$, let $\epsilon_m\equiv\epsilon_m(x,z)$ be a sequence that realizes the supremum on the right hand side of (\ref{4eq unif rectif1}). Given $\epsilon_m>0$, let $j(\epsilon_m)$ denote the integer such that $\epsilon_m\in[2^{-j(\epsilon_m)-1},2^{-j(\epsilon_m)})$. For $j\in{\mathbb Z}$ set $I_j:=[2^{-j-1},2^{-j})$. As usual, we decompose ${\mathbb Z}={\mathcal S}\cup{\mathcal L}$, where \begin{equation*} \begin{split} &{\mathcal S}:=\bigcup_{j\in{\mathbb Z}}{\mathcal S}_j,\quad{\mathcal S}_j:=\{m\in{\mathbb Z}\,:\,\epsilon_m,\epsilon_{m+1}\in I_j\},\\ &{\mathcal L}:=\{m\in{\mathbb Z}\,:\,\epsilon_m\in I_i,\,\epsilon_{m+1}\in I_j\text{ for }i<j\}. \end{split} \end{equation*} Notice that if $2^{-j+2}<r(B)$, where $r(B)$ denotes the radius of $B$, then $B_m(x,z)=\emptyset$ for all $m\in{\mathcal S}_j$. Therefore, we can assume that $j\leq\log_2(4/r(B))$. If $m\in{\mathcal S}_j$, then $B_m(x,z)\subset B(x,2^{-j+3})$, and for $t\in{\operatorname{supp}}(\varphi_{\epsilon_{m+1}}^{\epsilon_m}K)$ we have that $|\nabla(\varphi_{\epsilon_{m+1}}^{\epsilon_m}K)(t)|\lesssim2^{j(n+2)}|\epsilon_m-\epsilon_{m+1}|$ (see (\ref{Short eq1}) and (\ref{Short eq2})). If $m\in{\mathcal L}$, we easily have $|\nabla(\varphi_{\epsilon_{m+1}}^{\epsilon_m}K)(t)|\lesssim|t|^{-n-1}$.
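Indeed, for $m\in{\mathcal L}$ one can argue as follows: on ${\operatorname{supp}}\nabla\varphi_{\epsilon_{m+1}}^{\epsilon_m}$ we have $|t|\approx\epsilon_m$ or $|t|\approx\epsilon_{m+1}$, so $|\nabla\varphi_{\epsilon_{m+1}}^{\epsilon_m}(t)|\lesssim|t|^{-1}$ (here one estimates each term of $\varphi_{\epsilon_{m+1}}^{\epsilon_m}=\varphi_{\epsilon_{m+1}}-\varphi_{\epsilon_m}$ separately, without using any cancellation). Hence, by the product rule and the size and smoothness conditions on the kernel, $|K(t)|\lesssim|t|^{-n}$ and $|\nabla K(t)|\lesssim|t|^{-n-1}$,
$$|\nabla(\varphi_{\epsilon_{m+1}}^{\epsilon_m}K)(t)|\leq|\nabla\varphi_{\epsilon_{m+1}}^{\epsilon_m}(t)||K(t)|+|\varphi_{\epsilon_{m+1}}^{\epsilon_m}(t)||\nabla K(t)|\lesssim|t|^{-1}|t|^{-n}+|t|^{-n-1}\approx|t|^{-n-1}.$$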
Therefore, using (\ref{4eq unif rectif1}), that $\rho>2$, that the sets $B_m(x,z)$ have bounded overlap for $m\in{\mathcal L}$, and that $|x-z|\lesssim r(B)$, we get \begin{equation*} \begin{split} \big|({\mathcal V}_\rho\circ{\mathcal T}_\varphi)(&\chi_{{\mathbb R}^d\setminus 2B}\nu)(x)-({\mathcal V}_\rho\circ{\mathcal T}_\varphi)(\chi_{{\mathbb R}^d\setminus 2B}\nu)(z)\big|\\ &\lesssim\sum_{j\leq\log_2(4/r(B))}\,\sum_{m\in{\mathcal S}_j} |x-z|2^{j(n+2)}|\epsilon_m-\epsilon_{m+1}|\int_{B(x,2^{-j+3})}\,d|\nu|(y)\\ &\quad+|x-z|\sum_{m\in{\mathcal L}}\int_{B_m(x,z)}|x-y|^{-n-1}\,d|\nu|(y)\\ &\lesssim\sum_{j\leq\log_2(4/r(B))} r(B)2^{j(n+1)}\int_{B(x,2^{-j+3})}\,d|\nu|(y) +r(B)\int_{{\mathbb R}^d\setminus 2B}\frac{d|\nu|(y)}{|x-y|^{n+1}}\\ &\lesssim\sum_{j\leq\log_2(4/r(B))} \frac{r(B)2^{j}}{\mu(B(x,2^{-j+3}))}\int_{B(x,2^{-j+3})}\,d|\nu|(y)\\ &\quad+r(B)\sum_{k\geq1}\int_{2^{k+2}r(B)\geq|x-y|\geq2^{k-1}r(B)}\frac{d|\nu|(y)}{|x-y|^{n+1}}\\ &\lesssim M^\mu \nu(x) +\sum_{k\geq1}\frac{2^{-k}}{\mu(B(x,2^{k+2}r(B)))}\int_{B(x,2^{k+2}r(B))}\,d|\nu|(y) \lesssim M^\mu\nu(x). \end{split} \end{equation*} \end{proof} \begin{remarko}{\em Notice that, to prove (\ref{4eqgli01}), it is a key fact that we are considering smooth truncations (given by $\varphi_{\mathbb R}$) in the definition of ${\mathcal T}_\varphi$. These computations are no longer valid if one replaces ${\mathcal T}_\varphi$ by ${\mathcal T}$.} \end{remarko} \section{If $\mu$ is a uniformly $n$-rectifiable measure, then\\ ${\mathcal V}_\rho\circ{\mathcal T}^\mu:\,L^2(\mu)\to L^2(\mu)$ is a bounded operator}\label{5s var no suau} This section is devoted to the proof of the following result. \begin{teo}\label{5teo var no suau acotada L2} Let $\rho>2$ and let $\mu$ be an $n$-dimensional AD regular Borel measure on ${\mathbb R}^d$. If $\mu$ is uniformly $n$-rectifiable, then ${\mathcal V}_\rho\circ{\mathcal T}^\mu$ is a bounded operator in $L^2(\mu)$.
\end{teo} \subsection{Short and long variation} Given $j\in{\mathbb Z}$, set $I_j:=[2^{-j-1},2^{-j})$. Then, using the triangle inequality, we can split the variation operator into the so-called short variation and long variation operators, i.e., $({\mathcal V}_\rho\circ{\mathcal T}^\mu)f(x)\leq ({\mathcal V}^{\mathcal S}_\rho\circ{\mathcal T}^\mu)f(x)+({\mathcal V}^{\mathcal L}_\rho\circ{\mathcal T}^\mu)f(x),$ where \begin{equation}\label{5 short and long variation} \begin{split} &({\mathcal V}^{\mathcal S}_\rho\circ{\mathcal T}^\mu)f(x):=\sup_{\{\epsilon_m\}}\bigg(\sum_{j\in{\mathbb Z}}\,\sum_{\epsilon_m,\epsilon_{m+1}\in I_j}|(K\chi_{\epsilon_{m+1}}^{\epsilon_m}*(f\mu))(x)|^{\rho}\bigg)^{1/\rho},\\ &({\mathcal V}^{\mathcal L}_\rho\circ{\mathcal T}^\mu)f(x):=\sup_{\{\epsilon_m\}}\bigg(\sum_{\begin{subarray}{c}m\in{\mathbb Z}:\,\epsilon_m\in I_j,\,\epsilon_{m+1}\in I_k\\ \text{ for some }j<k\end{subarray}}|(K\chi_{\epsilon_{m+1}}^{\epsilon_m}*(f\mu))(x)|^{\rho}\bigg)^{1/\rho}, \end{split} \end{equation} and, in both cases, the pointwise supremum is taken over all the sequences of positive numbers $\{\epsilon_m\}_{m\in{\mathbb Z}}$ decreasing to zero. To prove Theorem \ref{5teo var no suau acotada L2} we will show that both the short and long variation operators are bounded in $L^2(\mu)$. \subsection{$L^2(\mu)$ boundedness of ${\mathcal V}^{\mathcal L}_\rho\circ{\mathcal T}^\mu$}\label{5sslong} The $L^2(\mu)$-norm of the long variation operator ${\mathcal V}^{\mathcal L}_\rho\circ{\mathcal T}^\mu$ can be handled by comparing it with its smoothed version ${\mathcal V}_\rho\circ{\mathcal T}_\varphi^\mu$, using Corollary \ref{unif rectif implies var smooth}, and estimating the error terms by the short variation operator. 
\begin{lema}\label{lqlq} We have $\|({\mathcal V}^{{\mathcal L}}_\rho\circ{\mathcal T}^\mu)f\|_{L^2(\mu)} \lesssim\|({\mathcal V}^{\mathcal S}_\rho\circ{\mathcal T}^\mu)f\|_{L^2(\mu)}+\|f\|_{L^2(\mu)}.$ \end{lema} \begin{proof}[{\bf {\em Proof.}}] We decompose \begin{equation}\label{5 short and long variation1} \begin{split} \big(({\mathcal V}^{\mathcal L}_\rho&\circ{\mathcal T}^\mu)f(x)\big)^{\rho}=\sup_{\{\epsilon_m\}}\sum_{\begin{subarray}{c}m\in{\mathbb Z}:\,\epsilon_m\in I_j,\,\epsilon_{m+1}\in I_k\\ \text{ for some }j<k\end{subarray}}|(K\chi_{\epsilon_{m+1}}^{\epsilon_m}*(f\mu))(x)|^{\rho}\\ &\lesssim\sup_{\{\epsilon_m\}}\sum_{\begin{subarray}{c}m\in{\mathbb Z}:\\\epsilon_m\in I_j,\,\epsilon_{m+1}\in I_k\\ \text{ for some }j<k\end{subarray}}\Big(|(K(\chi_{\epsilon_{m+1}}^{\epsilon_m}-\varphi_{\epsilon_{m+1}}^{\epsilon_m})*(f\mu))(x)|^{\rho}+ |(K\varphi_{\epsilon_{m+1}}^{\epsilon_m}*(f\mu))(x)|^{\rho}\Big)\\ &\lesssim\sup_{\{\epsilon_m\}}\sum_{\begin{subarray}{c}m\in{\mathbb Z}:\,\epsilon_m\in I_j,\,\epsilon_{m+1}\in I_k\\ \text{ for some }j<k\end{subarray}}|(K(\chi_{\epsilon_{m+1}}^{\epsilon_m}-\varphi_{\epsilon_{m+1}}^{\epsilon_m})*(f\mu))(x)|^{\rho}+ \big(({\mathcal V}_\rho\circ{\mathcal T}_\varphi^\mu)f(x)\big)^\rho. \end{split} \end{equation} For simplicity, we denote by $\big(({\mathcal V}^{{\mathcal L}}_\rho\circ{\mathcal T}_{\chi-\varphi}^\mu)f(x)\big)^\rho$ the first term on the right hand side of (\ref{5 short and long variation1}). Notice that, given $\epsilon,\delta>0$, we have $\chi_\epsilon^\delta-\varphi_\epsilon^\delta=(\chi_\epsilon-\varphi_\epsilon) -(\chi_\delta-\varphi_\delta)$. Recall that, in the definition of $\varphi_{\mathbb R}$ in Definition \ref{4defi varphi}, we have taken $\chi_{[4,\infty)}\leq\varphi_{\mathbb R}\leq\chi_{[1/4,\infty)}$. 
Hence, given $t\geq0$, $$\chi_{{\mathbb R}}(t)-\varphi_{{\mathbb R}}(t)=\chi_{[1,\infty)}(t)-\int_{1/4}^4\varphi'_{\mathbb R}(s)\chi_{[s,\infty)}(t)\,ds= \int_{1/4}^4\varphi'_{\mathbb R}(s)\big(\chi_{[1,\infty)}(t)-\chi_{[s,\infty)}(t)\big)\,ds$$ (that is, $\chi_{{\mathbb R}}-\varphi_{{\mathbb R}}$ is a convex combination of $\chi_{[1,\infty)}-\chi_{[s,\infty)}$ for $1/4\leq s\leq 4$), and thus, by Fubini's theorem, \begin{equation*} \begin{split} \big(K&(\chi_\epsilon-\varphi_\epsilon)*(f\mu)\big)(x) =\int\big(\chi_{\mathbb R}(|x-y|^2/\epsilon^2)-\varphi_{\mathbb R}(|x-y|^2/\epsilon^2)\big)K(x-y)f(y)\,d\mu(y)\\ &=\int_{1/4}^4\varphi'_{\mathbb R}(s)\int\Big(\chi_{[1,\infty)}(|x-y|^2/\epsilon^2)-\chi_{[s,\infty)}(|x-y|^2/\epsilon^2)\Big)K(x-y)f(y)\,d\mu(y)\,ds\\ &=\int_{1/4}^4\varphi'_{\mathbb R}(s)\int\chi_{\epsilon}^{\epsilon\sqrt{s}}(x-y)K(x-y)f(y)\,d\mu(y)\,ds =\int_{1/4}^4\varphi'_{\mathbb R}(s)\big((K\chi_{\epsilon}^{\epsilon\sqrt{s}}*(f\mu))(x)\big)\,ds. \end{split} \end{equation*} Therefore, by the triangle inequality and Minkowski's integral inequality, we get \begin{equation*} \begin{split} \|({\mathcal V}^{{\mathcal L}}_\rho\circ{\mathcal T}_{\chi-\varphi}^\mu)f&\|_{L^2(\mu)} \leq2\bigg\|\sup_{\{\epsilon_m\in I_m:\,m\in{\mathbb Z}\}}\bigg(\sum_{m\in{\mathbb Z}} |(K(\chi_{\epsilon_{m}}-\varphi_{\epsilon_{m}})*(f\mu))(x)|^{\rho}\bigg)^{1/\rho}\bigg\|_{L^2(\mu)}\\ &\leq2\int_{1/4}^4\varphi'_{\mathbb R}(s)\bigg\|\sup_{\{\epsilon_m\in I_m:\,m\in{\mathbb Z}\}}\bigg(\sum_{m\in{\mathbb Z}}|(K\chi_{\epsilon_m}^{\epsilon_m\sqrt{s}}*(f\mu))(x)|^\rho\bigg)^{1/\rho}\bigg\|_{L^2(\mu)}\,ds. \end{split} \end{equation*} One can easily verify that $\sup_{\{\epsilon_m\in I_m:\,m\in{\mathbb Z}\}}\big(\sum_{m\in{\mathbb Z}}|(K\chi_{\epsilon_m}^{\epsilon_m\sqrt{s}}*(f\mu))(x)|^\rho\big)^{1/\rho}\lesssim({\mathcal V}^{\mathcal S}_\rho\circ{\mathcal T}^\mu)f(x)$ for all $s\in[1/4,4]$ with uniform bounds. 
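Let us briefly justify this. Since $\epsilon_m\in I_m$ and $\sqrt{s}\in[1/2,2]$, both $\epsilon_m$ and $\epsilon_m\sqrt{s}$ lie in $[2^{-m-2},2^{-m+1})$, so the interval with endpoints $\epsilon_m$ and $\epsilon_m\sqrt{s}$ meets at most three of the dyadic intervals $I_j$. Splitting the corresponding annulus at the dyadic points, we can write (up to a sign when $s<1$)
$$|(K\chi_{\epsilon_m}^{\epsilon_m\sqrt{s}}*(f\mu))(x)|\leq\sum_{i=1}^{3}|(K\chi_{a_i(m)}^{b_i(m)}*(f\mu))(x)|,$$
where, for each $i$ and $m$, $a_i(m)\leq b_i(m)$ belong to a common interval $I_j$ (some of the increments may be void). Since, for each fixed $i$, the numbers $\{a_i(m),b_i(m)\}_{m\in{\mathbb Z}}$ can be completed to a decreasing sequence in which each pair $(b_i(m),a_i(m))$ appears as consecutive terms lying in a common $I_j$, the triangle inequality in $\ell^\rho$ yields the desired bound with a constant independent of $s\in[1/4,4]$.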
Hence \begin{equation}\label{5 short and long variation1a} \begin{split} \|({\mathcal V}^{{\mathcal L}}_\rho\circ{\mathcal T}_{\chi-\varphi}^\mu)f\|_{L^2(\mu)} \lesssim\int_{1/4}^4\varphi'_{\mathbb R}(s)\|({\mathcal V}^{\mathcal S}_\rho\circ{\mathcal T}^\mu)f\|_{L^2(\mu)}\,ds\lesssim\|({\mathcal V}^{\mathcal S}_\rho\circ{\mathcal T}^\mu)f\|_{L^2(\mu)}. \end{split} \end{equation} Finally, using (\ref{5 short and long variation1}), (\ref{5 short and long variation1a}), and Corollary \ref{unif rectif implies var smooth}, \begin{equation*} \begin{split} \|({\mathcal V}^{{\mathcal L}}_\rho\circ{\mathcal T}^\mu)f\|_{L^2(\mu)} &\lesssim\|({\mathcal V}^{{\mathcal L}}_\rho\circ{\mathcal T}_{\chi-\varphi}^\mu)f\|_{L^2(\mu)} +\|({\mathcal V}_\rho\circ{\mathcal T}_{\varphi}^\mu)f\|_{L^2(\mu)}\\ &\lesssim\|({\mathcal V}^{\mathcal S}_\rho\circ{\mathcal T}^\mu)f\|_{L^2(\mu)}+\|f\|_{L^2(\mu)}. \end{split} \end{equation*} \end{proof} Thus, to prove Theorem \ref{5teo var no suau acotada L2}, it only remains to show the $L^2(\mu)$ boundedness of ${\mathcal V}^{\mathcal S}_\rho\circ{\mathcal T}^\mu$. \subsection{$L^2(\mu)$ boundedness of ${\mathcal V}^{\mathcal S}_\rho\circ{\mathcal T}^\mu$}\label{5ssshort} Given $f\in L^2(\mu)$ and $x\in{\operatorname{supp}}\mu$, let $\{\epsilon_m\}_{m\in{\mathbb Z}}$ be a decreasing sequence of positive numbers (depending on $x$) such that \begin{equation*} \big(({\mathcal V}^{\mathcal S}_2\circ{\mathcal T}^\mu)f(x)\big)^2\leq2\sum_{j\in{\mathbb Z}}\,\sum_{\epsilon_m,\epsilon_{m+1}\in I_j}|(K\chi_{\epsilon_{m+1}}^{\epsilon_m}*(f\mu))(x)|^{2}. \end{equation*} Given $D\in{\mathcal D}_j$ and $x\in D$, we set ${\mathcal S}_D(x):=\{m\in{\mathbb Z}:\,\epsilon_{m},\epsilon_{m+1}\in I_j\}$. 
Since $\rho\geq2$, we have \begin{equation*} \begin{split} \|({\mathcal V}^{\mathcal S}_\rho\circ{\mathcal T}^\mu)f\|_{L^2(\mu)}^2&\leq\|({\mathcal V}^{\mathcal S}_2\circ{\mathcal T}^\mu)f\|_{L^2(\mu)}^2 \lesssim\int\sum_{j\in{\mathbb Z}}\,\sum_{\epsilon_m,\epsilon_{m+1}\in I_j}|(K\chi_{\epsilon_{m+1}}^{\epsilon_m}*(f\mu))(x)|^{2}\,d\mu(x)\\ &=\sum_{D\in{\mathcal D}}\int_D\sum_{m\in{\mathcal S}_D(x)}|(K\chi_{\epsilon_{m+1}}^{\epsilon_m}*(f\mu))(x)|^{2}\,d\mu(x). \end{split} \end{equation*} Let $\eta$ and $\theta$ be two positive numbers that will be fixed below (see the proofs of Claims \ref{5 claim1} and \ref{5 claim2}). Consider a corona decomposition of $\mu$ with parameters $\eta$ and $\theta$ as in Subsection \ref{5ss corona decomposition}. Then, we can decompose ${\mathcal D}={\mathcal B}\cup(\bigcup_{S\in{\operatorname{Trs}}}S)$, so that \begin{equation}\label{5 var eq1} \begin{split} \|({\mathcal V}^{\mathcal S}_\rho\circ{\mathcal T}^\mu)f\|_{L^2(\mu)}^2 &\lesssim\sum_{D\in{\mathcal B}}\int_D\sum_{m\in{\mathcal S}_D(x)}|(K\chi_{\epsilon_{m+1}}^{\epsilon_m}*(f\mu))(x)|^{2}\,d\mu(x)\\ &\quad+\sum_{S\in{\operatorname{Trs}}}\,\sum_{D\in S}\int_D\sum_{m\in{\mathcal S}_D(x)}|(K\chi_{\epsilon_{m+1}}^{\epsilon_m}*(f\mu))(x)|^{2}\,d\mu(x). \end{split} \end{equation} Since the $\mu$-cubes in ${\mathcal B}$ satisfy a Carleson packing condition, we can use Carleson's embedding theorem to estimate the sum on the right hand side of (\ref{5 var eq1}) over the $\mu$-cubes in ${\mathcal B}$. 
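Let us recall the form of Carleson's embedding theorem that will be used: if a family ${\mathcal B}\subset{\mathcal D}$ satisfies the packing condition $\sum_{D\in{\mathcal B}:\,D\subset R}\mu(D)\lesssim\mu(R)$ for every $R\in{\mathcal D}$, then
$$\sum_{D\in{\mathcal B}}\bigg(\frac{1}{\mu(5D)}\int_{5D}|f|\,d\mu\bigg)^2\mu(D)\lesssim\|f\|^2_{L^2(\mu)}\quad\text{ for all }f\in L^2(\mu),$$
with a constant depending on the packing and AD regularity constants.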
More precisely, if we set $m_D^\mu f:=\mu(D)^{-1}\int_{D}f\,d\mu$ for $D\in{\mathcal D}$, we have \begin{equation}\label{5 var eq2} \begin{split} \sum_{D\in{\mathcal B}}\int_D&\sum_{m\in{\mathcal S}_D(x)}|(K\chi_{\epsilon_{m+1}}^{\epsilon_m}*(f\mu))(x)|^{2}\,d\mu(x)\\ &\leq\sum_{D\in{\mathcal B}}\int_D\sum_{m\in{\mathcal S}_D(x)}\bigg(\int_{\epsilon_{m+1}\leq|x-y|\leq\epsilon_m}|K(x-y)||f(y)|\,d\mu(y)\bigg)^{2}\,d\mu(x)\\ &\lesssim\sum_{D\in{\mathcal B}}\int_D\bigg(\frac{1}{\ell(D)^n}\int_{5D}|f|\,d\mu\bigg)^{2}\,d\mu \approx\sum_{D\in{\mathcal B}}\big(m_{5D}^\mu|f|\big)^{2}\mu(D)\lesssim\|f\|^2_{L^2(\mu)}. \end{split} \end{equation} We now estimate the second term on the right hand side of (\ref{5 var eq1}), that is, the sum over the $\mu$-cubes in $S$, for all $S\in{\operatorname{Trs}}$. To this end, we need to introduce some notation. Given $R\in{\mathcal D}_j$ for some $j\in{\mathbb Z}$, let $P(R)$ denote the $\mu$-cube in ${\mathcal D}_{j-1}$ which contains $R$ (the {\em parent} of $R$), and set \begin{equation}\label{5 pares fills veins} \begin{split} & {\operatorname{Ch}}( R):=\{Q\in{\mathcal D}_{j+1}:\,Q\subset R\},\\ &V(R):=\{Q\in{\mathcal D}_j:\, Q\cap B(y,\ell(R))\neq\emptyset\text{ for some }y\in R\} \end{split} \end{equation} (${\operatorname{Ch}}(R)$ are the {\em children} of $R$, and $V(R)$ stands for the {\em vicinity} of $R$). Notice that $P(R)$ is a $\mu$-cube but $ {\operatorname{Ch}}( R)$ and $V(R)$ are collections of $\mu$-cubes. It is not hard to show that the number of $\mu$-cubes in $ {\operatorname{Ch}}( R)$ and $V(R)$ is bounded by some constant depending only on $n$ and the AD regularity constant of $\mu$. If $R\in S$ for some $S\in{\operatorname{Trs}}$, we denote by ${\operatorname{Tr}}(R)$ the set of $\mu$-cubes $Q\in S$ such that $Q\subset R$ (the {\em tree} of $R$). Otherwise, i.e., if $R\in{\mathcal B}$, we set ${\operatorname{Tr}}(R):=\emptyset$.
Finally, if ${\operatorname{Tr}}(R)\neq\emptyset$, let ${\operatorname{Stp}}(R)$ denote the set of $\mu$-cubes $Q\in{\mathcal B}\cup({\mathcal G}\setminus {\operatorname{Tr}}(R))$ such that $Q\subset R$ and $P(Q)\in {\operatorname{Tr}}(R)$ (the {\em stopping} $\mu$-cubes relative to $R$), so actually $Q\subsetneq R$. On the other hand, if $R\in{\mathcal B}$, we set ${\operatorname{Stp}}(R):=\{R\}$. Fix $S\in{\operatorname{Trs}}$, $D\in S$, and $x\in D$. To deal with the second term on the right hand side of (\ref{5 var eq1}), we have to estimate the sum $\sum_{m\in{\mathcal S}_D(x)}|(K\chi_{\epsilon_{m+1}}^{\epsilon_m}*(f\mu))(x)|^2$. By the definition of ${\mathcal S}_D(x)$, we have \begin{equation}\label{5 haar decomposition2} \sum_{m\in{\mathcal S}_D(x)}|(K\chi_{\epsilon_{m+1}}^{\epsilon_m}*(f\mu))(x)|^2= \sum_{m\in{\mathcal S}_D(x)}|(K\chi_{\epsilon_{m+1}}^{\epsilon_m}*(\chi_{\widetilde D}f\mu))(x)|^2, \end{equation} where $\widetilde D:=\bigcup_{R\in V(D)}R$. Since this union of $\mu$-cubes is disjoint, we can decompose the function $\chi_{\widetilde D}f$ using a Haar basis adapted to ${\mathcal D}$ in the following manner: \begin{equation}\label{5 haar decomposition} \chi_{\widetilde D}f=\sum_{R\in V(D)}\bigg((m_R^\mu f)\chi_R+\sum_{Q\in {\operatorname{Tr}}(R)}\Delta_Q f +\sum_{Q\in {\operatorname{Stp}}(R)}\widetilde\Delta_Q f\bigg), \end{equation} where we have set \begin{equation*} \begin{split} \Delta_Q f:=\sum_{U\in {\operatorname{Ch}}( Q)}\chi_U(m_U^\mu f-m_Q^\mu f),\!\!\quad\text{and}\quad\!\! \widetilde\Delta_Q f:=\sum_{U\in {\operatorname{Ch}}( Q)}\chi_U(f-m_Q^\mu f)=\chi_Q(f-m_Q^\mu f). 
\end{split} \end{equation*} Using (\ref{5 haar decomposition}), we split the left hand side of (\ref{5 haar decomposition2}) as follows: \begin{equation}\label{eqmain} \begin{split} \sum_{m\in{\mathcal S}_D(x)}|(K\chi_{\epsilon_{m+1}}^{\epsilon_m}*(f\mu))(x)|^2 &\lesssim\sum_{m\in{\mathcal S}_D(x)}\bigg|\sum_{R\in V(D)} (K\chi_{\epsilon_{m+1}}^{\epsilon_m}*((m_R^\mu f)\chi_R\mu))(x)\bigg|^2\\ &+\sum_{m\in{\mathcal S}_D(x)}\bigg|\sum_{R\in V(D)}\sum_{Q\in {\operatorname{Tr}}(R)}(K\chi_{\epsilon_{m+1}}^{\epsilon_m}*(\Delta_Q f\mu))(x)\bigg|^2\\ &+\sum_{m\in{\mathcal S}_D(x)}\bigg|\sum_{R\in V(D)}\sum_{Q\in {\operatorname{Stp}}(R)}(K\chi_{\epsilon_{m+1}}^{\epsilon_m}*(\widetilde\Delta_Q f\mu))(x)\bigg|^2. \end{split} \end{equation} In the following subsections, we will estimate each part separately. \subsubsection{{\bf Estimate of $\sum_{m\in{\mathcal S}_D(x)}\big|\sum_{R\in V(D)}\sum_{Q\in {\operatorname{Tr}}(R)}(K\chi_{\epsilon_{m+1}}^{\epsilon_m}*(\Delta_Q f\mu))(x)\big|^2$ from (\ref{eqmain})}}\label{5 ss5321} \begin{lema}\label{5 var eq8} Under the notation above, we have \begin{equation*} \sum_{S\in{\operatorname{Trs}}}\,\sum_{D\in S}\int_D\sum_{m\in{\mathcal S}_D(x)}\bigg|\sum_{R\in V(D)}\sum_{Q\in {\operatorname{Tr}}(R)}(K\chi_{\epsilon_{m+1}}^{\epsilon_m}*(\Delta_Q f\mu))(x)\bigg|^2d\mu(x)\lesssim\|f\|^2_{L^2(\mu)}. \end{equation*} \end{lema} \begin{proof}[{\bf {\em Proof.}}] Let $C_0>0$ be a small constant to be fixed below. Given $m\in{\mathcal S}_D(x)$ set $A_m(x):=A(x,\epsilon_{m+1},\epsilon_m)$, and given $R\in V(D)$ let \begin{equation*} \begin{split} J_m^{1,R}:=\{Q\in {\operatorname{Tr}}(R):\,Q\cap A_m(x)\neq\emptyset,\,\ell(Q)> C_0(\epsilon_m-\epsilon_{m+1})\},\\ J_m^{2,R}:=\{Q\in {\operatorname{Tr}}(R):\,Q\cap A_m(x)\neq\emptyset,\,\ell(Q)\leq C_0(\epsilon_m-\epsilon_{m+1})\}. 
\end{split} \end{equation*} For $Q\in J_m^{1,R}$, we write $|(K\chi_{\epsilon_{m+1}}^{\epsilon_m}*(\Delta_Q f\mu))(x)| \lesssim\ell(D)^{-n}\|\chi_{A_m(x)}\Delta_Q f\|_{L^1(\mu)}$. The following claim will be proved in Subsection \ref{5ss proof of claims} below. \begin{claim}\label{5 claim1} The following estimate holds: $\sum_{Q\in J_m^{1,R}}\ell(Q)^{n-1/2}\lesssim\ell(D)^{n-1/2}$. \end{claim} Using that $V(D)$ has finitely many elements (depending only on $n$ and the AD regularity constant of $\mu$), Cauchy-Schwarz inequality, Claim \ref{5 claim1}, and the previous estimate, we obtain \begin{equation}\label{5 var eq3} \begin{split} \sum_{m\in{\mathcal S}_D(x)}\bigg|\sum_{R\in V(D)}&\,\sum_{Q\in J_m^{1,R}}(K\chi_{\epsilon_{m+1}}^{\epsilon_m}*(\Delta_Q f\mu))(x)\bigg|^2\\ &\lesssim\sum_{R\in V(D)}\,\sum_{m\in{\mathcal S}_D(x)}\bigg(\sum_{Q\in J_m^{1,R}} \ell(D)^{-n}\|\chi_{A_m(x)}\Delta_Q f\|_{L^1(\mu)}\bigg)^2\\ &\lesssim\sum_{R\in V(D)}\,\sum_{m\in{\mathcal S}_D(x)}\bigg(\sum_{Q\in J_m^{1,R}}\ell(Q)^{n-1/2}\bigg) \bigg(\sum_{Q\in J_m^{1,R}} \frac{\|\chi_{A_m(x)}\Delta_Q f\|^2_{L^1(\mu)}}{\ell(D)^{2n}\ell(Q)^{n-1/2}}\bigg)\\ &\lesssim\sum_{R\in V(D)}\,\sum_{m\in{\mathcal S}_D(x)} \,\sum_{Q\in {\operatorname{Tr}}(R)} \frac{\|\chi_{A_m(x)}\Delta_Q f\|^2_{L^1(\mu)}}{\ell(D)^{n+1/2}\ell(Q)^{n-1/2}}\\ &\lesssim\sum_{R\in V(D)}\, \,\sum_{Q\in {\operatorname{Tr}}(R)} \bigg(\frac{\ell(Q)}{\ell(D)}\bigg)^{1/2}\frac{\|\Delta_Q f\|^2_{L^1(\mu)}}{\ell(D)^{n}\ell(Q)^{n}}. \end{split} \end{equation} We deal now with the $\mu$-cubes $Q\in J_m^{2,R}$. Let $z_Q$ denote the center of $Q$. 
Since $\int\Delta_Q f\,d\mu=0$, we can decompose \begin{equation}\label{5 var eq4} \begin{split} (K\chi_{\epsilon_{m+1}}^{\epsilon_m}\!\!*\!(\Delta_Q f\mu))(x) &=\!\int\!\big( \chi_{A_m(x)}(y)K(x-y) \!-\!\chi_{A_m(x)}(z_Q)K(x-z_Q)\big)\Delta_Qf(y)\,d\mu(y)\\ &=\!\int\!\chi_{A_m(x)}(y)\Big(K(x-y)\!-\!K(x-z_Q)\Big)\Delta_Qf(y)\,d\mu(y)\\ &\quad+\int\Big(\chi_{A_m(x)}(y)-\chi_{A_m(x)}(z_Q)\Big)K(x-z_Q)\Delta_Qf(y)\,d\mu(y)\\ &=:T_m^{1,\mu}(\Delta_Qf)(x)+T_m^{2,\mu}(\Delta_Qf)(x). \end{split} \end{equation} For the first term on the right hand side of the last equality, we have the standard estimate (by assuming $C_0$ small enough, so any $Q\in J_m^{2,R}$ is far from $x$) $$|T_m^{1,\mu}(\Delta_Qf)(x)|\lesssim\int_{A_m(x)}\frac{|y-z_Q|}{|x-y|^{n+1}}\,|\Delta_Qf(y)|\,d\mu(y) \lesssim\frac{\ell(Q)}{\ell(D)^{n+1}}\|\chi_{A_m(x)}\Delta_Qf\|_{L^1(\mu)}.$$ From this estimate and Cauchy-Schwarz inequality, we obtain \begin{equation*} \begin{split} \sum_{m\in{\mathcal S}_D(x)}\bigg|\sum_{R\in V(D)}&\, \sum_{Q\in J_m^{2,R}}T_m^{1,\mu}(\Delta_Qf)(x)\bigg|^2\\ &\lesssim\sum_{R\in V(D)}\,\sum_{m\in{\mathcal S}_D(x)}\bigg(\sum_{Q\in J_m^{2,R}} \frac{\ell(Q)}{\ell(D)^{n+1}}\|\chi_{A_m(x)}\Delta_Qf\|_{L^1(\mu)}\bigg)^2\\ &\lesssim\sum_{R\in V(D)}\bigg(\sum_{Q\in {\operatorname{Tr}}(R)}\frac{\ell(Q)}{\ell(D)^{n+1}}\sum_{m\in{\mathcal S}_D(x)} \|\chi_{A_m(x)}\Delta_Qf\|_{L^1(\mu)}\bigg)^2\\ &\lesssim\sum_{R\in V(D)}\bigg(\sum_{Q\in {\operatorname{Tr}}(R)}\frac{\ell(Q)^{n+1}}{\ell(D)^{n+1}}\bigg)\bigg(\sum_{Q\in {\operatorname{Tr}}(R)}\frac{\|\Delta_Qf\|^2_{L^1(\mu)}} {\ell(Q)^{n-1}\ell(D)^{n+1}}\bigg). \end{split} \end{equation*} Since $\ell(R)=\ell(D)$ for all $R\in V(D)$, we have $\sum_{Q\in {\operatorname{Tr}}(R)}\big(\frac{\ell(Q)}{\ell(D)}\big)^{n+1}\leq\sum_{Q\in{\mathcal D}:\,Q\subset R}\big(\frac{\ell(Q)}{\ell(R)}\big)^{n+1}\lesssim1$. 
Thus, using that $t\lesssim\sqrt{t}$ for all $t\lesssim1$, we conclude \begin{equation}\label{5 var eq5} \begin{split} \sum_{m\in{\mathcal S}_D(x)}\bigg|&\sum_{R\in V(D)}\,\sum_{Q\in J_m^{2,R}}T_m^{1,\mu}(\Delta_Qf)(x)\bigg|^2\lesssim\sum_{R\in V(D)}\sum_{Q\in {\operatorname{Tr}}(R)}\bigg(\frac{\ell(Q)}{\ell(D)}\bigg)^{1/2}\frac{\|\Delta_Qf\|^2_{L^1(\mu)}}{\ell(Q)^{n}\ell(D)^{n}}. \end{split} \end{equation} We deal now with the second term on the right hand side of (\ref{5 var eq4}). Given $Q\in J_m^{2,R}$, since ${\operatorname{supp}}(\Delta_Qf)\subset Q$, if $Q\subset A_m(x)$ or $Q\subset(A_m(x))^c$ then we obviously have $\chi_{A_m(x)}(y)-\chi_{A_m(x)}(z_Q)=0$ for all $y\in{\operatorname{supp}}(\Delta_Qf)$. Therefore, to estimate the sum of $T_m^{2,\mu}(\Delta_Qf)(x)$ over all $Q\in J_m^{2,R}$, we can replace $J_m^{2,R}$ by $$J_m^{3,R}:=\{Q\in {\operatorname{Tr}}(R):\,Q\cap A_m(x)\neq\emptyset,\,Q\cap (A_m(x))^c\neq\emptyset,\,\ell(Q)\leq C_0(\epsilon_m-\epsilon_{m+1})\}.$$ For $m\in{\mathcal S}_D(x)$ and $Q\in J_m^{3,R}$, we will use the estimate $|T_m^{2,\mu}(\Delta_Qf)(x)|\lesssim \ell(D)^{-n}\|\Delta_Qf\|_{L^1(\mu)}.$ \begin{claim}\label{5 claim2} The following holds: $\sum_{Q\in J_m^{3,R}}\ell(Q)^{n-1/2}\lesssim\ell(D)^{n-1}(\epsilon_m-\epsilon_{m+1})^{1/2}$. 
\end{claim} Hence, using that $V(D)$ has finitely many elements, the Cauchy-Schwarz inequality, Claim \ref{5 claim2} (see Subsection \ref{5ss proof of claims}), and the previous estimate, we deduce \begin{equation*} \begin{split} \sum_{m\in{\mathcal S}_D(x)}\bigg|&\sum_{R\in V(D)}\,\sum_{Q\in J_m^{2,R}}T_m^{2,\mu}(\Delta_Qf)(x)\bigg|^2\lesssim\sum_{R\in V(D)}\,\sum_{m\in{\mathcal S}_D(x)}\bigg(\sum_{Q\in J_m^{3,R}} \frac{\|\Delta_Qf\|_{L^1(\mu)}}{\ell(D)^{n}}\bigg)^2\\ &\leq\sum_{R\in V(D)}\,\sum_{m\in{\mathcal S}_D(x)} \bigg(\sum_{Q\in J_m^{3,R}}\frac{\ell(Q)^{n-1/2}}{\ell(D)^{n-1/2}}\bigg) \bigg(\sum_{Q\in J_m^{3,R}} \frac{\ell(Q)^{1/2-n}}{\ell(D)^{n+1/2}}\,\|\Delta_Qf\|^2_{L^1(\mu)}\bigg)\\ &\lesssim\sum_{R\in V(D)}\,\sum_{m\in{\mathcal S}_D(x)}\bigg(\frac{\epsilon_m-\epsilon_{m+1}}{\ell(D)}\bigg)^{1/2} \sum_{Q\in J_m^{3,R}} \frac{\ell(Q)^{1/2-n}}{\ell(D)^{n+1/2}}\,\|\Delta_Qf\|^2_{L^1(\mu)}\\ &\leq\sum_{R\in V(D)}\,\sum_{Q\in {\operatorname{Tr}}(R)} \frac{\ell(Q)^{1/2-n}}{\ell(D)^{n+1/2}}\,\|\Delta_Qf\|^2_{L^1(\mu)}\sum_{\begin{subarray}{c}m\in{\mathcal S}_D(x):\,A_m(x)\cap Q\neq\emptyset,\\\ell(Q)\leq C_0(\epsilon_m-\epsilon_{m+1})\end{subarray}}\bigg(\frac{\epsilon_m-\epsilon_{m+1}}{\ell(D)}\bigg)^{1/2}. \end{split} \end{equation*} The sum over $m$ on the right hand side of the last inequality can be easily bounded by some constant depending on $C_0$, thus we finally obtain \begin{equation}\label{5 var eq6} \begin{split} \sum_{m\in{\mathcal S}_D(x)}\bigg|&\sum_{R\in V(D)}\,\sum_{Q\in J_m^{2,R}}T_m^{2,\mu}(\Delta_Qf)(x)\bigg|^2\lesssim\sum_{R\in V(D)}\sum_{Q\in {\operatorname{Tr}}(R)}\bigg(\frac{\ell(Q)}{\ell(D)}\bigg)^{1/2}\frac{\|\Delta_Qf\|^2_{L^1(\mu)}}{\ell(Q)^{n}\ell(D)^{n}}.
\end{split} \end{equation} Finally, combining (\ref{5 var eq3}), (\ref{5 var eq4}), (\ref{5 var eq5}), and (\ref{5 var eq6}), we conclude \begin{equation}\label{5 var eq7} \begin{split} \sum_{m\in{\mathcal S}_D(x)}\!\bigg|\!\sum_{R\in V(D)}\sum_{Q\in {\operatorname{Tr}}(R)}\!\!(K\chi_{\epsilon_{m+1}}^{\epsilon_m}\!\!*\!(\Delta_Q f\mu))(x)\bigg|^2 \!\lesssim\!\!\sum_{R\in V(D)}\sum_{Q\in {\operatorname{Tr}}(R)}\!\!\!\bigg(\frac{\ell(Q)}{\ell(D)}\bigg)^{1/2}\frac{\|\Delta_Qf\|^2_{L^1(\mu)}}{\ell(Q)^{n}\ell(D)^{n}}. \end{split} \end{equation} Since $\|\Delta_Qf\|_{L^1(\mu)}\lesssim\|\Delta_Qf\|_{L^2(\mu)}\ell(Q)^{n/2}$ by H\"{o}lder's inequality, since $V(D)$ has finitely many elements, and since $\ell(R)=\ell(D)$ for all $R\in V(D)$, we get \begin{equation*} \begin{split} \sum_{S\in{\operatorname{Trs}}}\,\sum_{D\in S}\int_D\sum_{m\in{\mathcal S}_D(x)}\bigg|&\sum_{R\in V(D)}\sum_{Q\in {\operatorname{Tr}}(R)}(K\chi_{\epsilon_{m+1}}^{\epsilon_m}*(\Delta_Q f\mu))(x)\bigg|^2d\mu(x)\\ &\lesssim\sum_{S\in{\operatorname{Trs}}}\,\sum_{D\in S}\,\sum_{R\in V(D)}\,\sum_{Q\in {\operatorname{Tr}}(R)}\bigg(\frac{\ell(Q)}{\ell(D)}\bigg)^{1/2}\|\Delta_Qf\|^2_{L^2(\mu)}\\ &\leq\sum_{S\in{\operatorname{Trs}}}\,\sum_{Q\in S}\,\sum_{R\in {\mathcal D}:\,R\supset Q}\,\sum_{D\in V(R)}\bigg(\frac{\ell(Q)}{\ell(R)}\bigg)^{1/2}\|\Delta_Qf\|^2_{L^2(\mu)}\\ &\lesssim\sum_{S\in{\operatorname{Trs}}}\,\sum_{Q\in S}\|\Delta_Qf\|^2_{L^2(\mu)} \leq\sum_{Q\in{\mathcal D}}\|\Delta_Qf\|^2_{L^2(\mu)}\leq\|f\|^2_{L^2(\mu)}. \end{split} \end{equation*} To complete the proof of Lemma \ref{5 var eq8}, it only remains to show Claims \ref{5 claim1} and \ref{5 claim2}. \end{proof} \subsubsection{{\bf Proof of Claims \ref{5 claim1} and \ref{5 claim2}}}\label{5ss proof of claims} First of all, we need an auxiliary result whose easy proof is left to the reader.
\begin{lema}\label{lema pendent petita3} Let $\Gamma:=\{x\in {\mathbb R}^d\,:\,x=(y,A(y)),\,y\in{\mathbb R}^n\}$ be the graph of a Lipschitz function $A:{\mathbb R}^n\to{\mathbb R}^{d-n}$ such that ${\operatorname{Lip}}(A)$ is small enough. Then, ${\mathcal H}^n_\Gamma(A^d(z,a,b))\lesssim(b-a)b^{n-1}$ for all $0<a\leq b$ and $z\in\Gamma$. \end{lema} \begin{remarko}\label{rem499}{\em Actually, to obtain the conclusion of the lemma, one only needs ${\operatorname{Lip}}(A)<1$ (see \cite[Lemma 4.1.9]{Mas-thesis}). Let us mention that this assumption is sharp in the sense that if ${\operatorname{Lip}}(A)\geq1$ then the lemma fails. However, we do not need this stronger version for our purposes.} \end{remarko} Claims \ref{5 claim1} and \ref{5 claim2} follow from the next lemma, which will be proved using Lemma \ref{lema pendent petita3}. \begin{lema}\label{5 lema claims} Let $C_0>0$ be some constant depending only on $n$, $d$, and the AD regularity constant of $\mu$, and consider $x\in D\in{\mathcal D}_j$ for some $j\in{\mathbb Z}$. Let $\epsilon\in[2^{-j-1},2^{-j}).$ Given $k\geq j$ and $R\in V(D)$, set $$\Lambda_k:=\{Q\in {\operatorname{Tr}}(R)\cap{\mathcal D}_k:\, Q\subset A(x,\epsilon-C_02^{-k},\epsilon+C_02^{-k})\}.$$ Then, $\mu\big(\bigcup_{Q\in\Lambda_k}Q\big)\lesssim2^{-k}\ell(D)^{n-1}\approx2^{-k-j(n-1)}$. \end{lema} \begin{proof}[{\bf {\em Proof.}}] First of all, we can assume $k\gg j$ (otherwise, the claim follows easily using the AD regularity of $\mu$), thus we may assume that ${\operatorname{dist}}(x,Q)\geq\frac{3}{4}\,\epsilon$. For simplicity, set $S\equiv {\operatorname{Tr}}(R)$. By the property $(f)$ of the corona decomposition of $\mu$, there exists a (rotation and translation of an) $n$-dimensional Lipschitz graph $\Gamma_S$ with ${\operatorname{Lip}}(\Gamma_S)\leq\eta$ such that ${\operatorname{dist}}(y,\Gamma_S)\leq\theta\,{\operatorname{diam}}(Q)$ whenever $y\in C_{cor}Q$ and $Q\in S$, for some given constant $C_{cor}\geq2$. 
Since $x\in D$ and $R\in V(D)$, we have $x\in C_{cor}Q$ assuming $C_{cor}$ big enough, and so ${\operatorname{dist}}(x,\Gamma_S)\leq\theta\,{\operatorname{diam}}(Q)$. Hence, if $\eta$ and $\theta$ are small enough, one can easily modify $\Gamma_S$ inside $B(x,\frac{1}{4}\,\epsilon)$ to obtain a Lipschitz graph $\Gamma_S^x$ such that $x\in\Gamma_S^x$, and moreover \begin{equation}\label{5lema anell} {\operatorname{Lip}}(\Gamma_S^x)\leq\eta'\text{ for some }\eta'\text{ small enough},\quad\text{and}\quad\Gamma_S^x \setminus B(x,\epsilon/4)=\Gamma_S\setminus B(x,\epsilon/4). \end{equation} Using that ${\operatorname{dist}}(x,Q)\geq\frac{3}{4}\,\epsilon$ for all $Q\in\Lambda_k$, that ${\operatorname{dist}}(z_Q,\Gamma_S)\leq\theta\,{\operatorname{diam}}(Q)$ for the centre $z_Q$ of $Q$, and the last part of (\ref{5lema anell}), we deduce that ${\operatorname{dist}}(z_Q,\Gamma_S^x)\leq\theta\,{\operatorname{diam}}(Q)$ for all $Q\in\Lambda_k$. So $B(z_Q,\theta\,{\operatorname{diam}}(Q))\cap\Gamma_S^x\neq\emptyset$, which in turn yields ${\mathcal H}^n\big({\Gamma_S^x}\cap B(z_Q,2\theta\,{\operatorname{diam}}(Q))\big)\gtrsim(\theta\,{\operatorname{diam}}(Q))^n$. 
Therefore, since $\{B(z_Q,2\theta\,{\operatorname{diam}}(Q))\}_{Q\in\Lambda_k}$ is a family with finite overlap bounded by some constant depending only on $n$, $\theta$, and the AD regularity constant of $\mu$, we have \begin{equation*} \begin{split} \mu\bigg(\bigcup_{Q\in\Lambda_k}Q\bigg)& \approx\sum_{Q\in\Lambda_k}\ell(Q)^n \lesssim\theta^{-n}\sum_{Q\in\Lambda_k}{\mathcal H}^n\big({\Gamma_S^x}\cap B(z_Q,2\theta\,{\operatorname{diam}}(Q))\big)\\ &\lesssim\theta^{-n}{\mathcal H}^n_{\Gamma_S^x}\bigg(\bigcup_{Q\in\Lambda_k}B(z_Q,2\theta\,{\operatorname{diam}}(Q))\bigg)\\ &\lesssim\theta^{-n}{\mathcal H}^n_{\Gamma_S^x}\big(A(x,\epsilon-C_02^{-k},\epsilon+C_02^{-k})\big) \lesssim\theta^{-n}2^{-k-j(n-1)}, \end{split} \end{equation*} where we used Lemma \ref{lema pendent petita3} and that $\epsilon\approx2^{-j}$ in the last inequality. The lemma is proved. \end{proof} \begin{proof}[{\bf{\em Proof of} Claim \ref{5 claim1}}] Recall that $J_m^{1,R}:=\{Q\in {\operatorname{Tr}}(R):\,Q\cap A_m(x)\neq\emptyset,\,\ell(Q)\geq C_0(\epsilon_m-\epsilon_{m+1})\},$ where $R\in V(D)$ and $D\in{\mathcal D}_j$. We have to check that $\sum_{Q\in J_m^{1,R}}\ell(Q)^{n-1/2}\lesssim\ell(D)^{n-1/2}$. We will split the sum into different scales and we will apply Lemma \ref{5 lema claims} at each scale. Given $i\in{\mathbb Z}$ such that $2^{-i}\geq C_0(\epsilon_m-\epsilon_{m+1})$, the number of $\mu$-cubes $Q\in{\mathcal D}_i$ such that $Q\subset R$ and $Q\cap A_m(x)\neq\emptyset$ is bounded by $C\ell(R)^{n-1}2^{i(n-1)}\approx2^{-j(n-1)+i(n-1)}$, since for all these $\mu$-cubes, $Q\subset A(x,\epsilon_{m+1}-C2^{-i},\epsilon_m+C2^{-i})\subset A(x,\epsilon_{m}-C2^{-i+1},\epsilon_m+C2^{-i+1})$ for some constant $C>0$ big enough, and then by Lemma \ref{5 lema claims}, $\mu\big(\bigcup_{Q\in J_m^{1,R}\cap{\mathcal D}_i}Q\big)\lesssim 2^{-i}\ell(D)^{n-1}$. 
Therefore, \begin{equation*} \begin{split} \sum_{Q\in J_m^{1,R}}\ell(Q)^{n-1/2}&=\sum_{i\in{\mathbb Z}:\,i\geq j}2^{i/2}\sum_{Q\in J_m^{1,R}\cap{\mathcal D}_i}\ell(Q)^{n}\lesssim\sum_{i\in{\mathbb Z}:\,i\geq j}2^{i/2}2^{-i}\ell(D)^{n-1}\\ &\approx2^{-j/2}\ell(D)^{n-1}=\ell(D)^{n-1/2}. \end{split} \end{equation*} \end{proof} \begin{proof}[{\bf{\em Proof of} Claim \ref{5 claim2}}] Recall that $J_m^{3,R}:=\{Q\in {\operatorname{Tr}}(R):\,Q\cap A_m(x)\neq\emptyset,\,Q\cap (A_m(x))^c\neq\emptyset,\,\ell(Q)\leq C_0(\epsilon_m-\epsilon_{m+1})\},$ where $R\in V(D)$ and $D\in{\mathcal D}_j$. We have to check that $$\sum_{Q\in J_m^{3,R}}\ell(Q)^{n-1/2}\lesssim\ell(D)^{n-1}(\epsilon_m-\epsilon_{m+1})^{1/2}.$$ As before, we split the sum into the different scales and apply Lemma \ref{5 lema claims} at each scale. Given $i\in{\mathbb Z}$ such that $2^{-i}\leq C_0(\epsilon_m-\epsilon_{m+1})$, since for any $Q\in J_m^{3,R}\cap{\mathcal D}_i$ we have $Q\subset A(x,\epsilon_{m+1}-C2^{-i},\epsilon_{m+1}+C2^{-i})\cup A(x,\epsilon_{m}-C2^{-i},\epsilon_{m}+C2^{-i})$ for some constant $C>0$ big enough, by Lemma \ref{5 lema claims} applied to both annuli we have $\mu\big(\bigcup_{Q\in J_m^{3,R}\cap{\mathcal D}_i}Q\big)\lesssim2^{-i}\ell(D)^{n-1}$. Therefore, \begin{equation*} \begin{split} \sum_{Q\in J_m^{3,R}}\ell(Q)^{n-1/2}&=\sum_{\begin{subarray}{c}i\in{\mathbb Z}:\,i\geq-\log_2( C_0(\epsilon_m-\epsilon_{m+1}))\end{subarray}}2^{i/2}\sum_{Q\in J_m^{3,R}\cap{\mathcal D}_i}\ell(Q)^n\\ &\lesssim\sum_{\begin{subarray}{c}i\in{\mathbb Z}:\,i\geq-\log_2( C_0(\epsilon_m-\epsilon_{m+1}))\end{subarray}}2^{-i/2}\ell(D)^{n-1}\approx(\epsilon_m-\epsilon_{m+1})^{1/2}\ell(D)^{n-1}.
\end{split} \end{equation*} \end{proof} \subsubsection{{\bf Estimate of $\sum_{m\in{\mathcal S}_D(x)}\big|\sum_{R\in V(D)}\sum_{Q\in {\operatorname{Stp}}(R)}(K\chi_{\epsilon_{m+1}}^{\epsilon_m}*(\widetilde\Delta_Q f\mu))(x)\big|^2$ from (\ref{eqmain})}}\label{5 ss5321b} \begin{lema}\label{5 var eq11} Under the notation above, we have \begin{equation*} \sum_{S\in{\operatorname{Trs}}}\,\sum_{D\in S}\int_D\sum_{m\in{\mathcal S}_D(x)}\bigg|\sum_{R\in V(D)}\sum_{Q\in {\operatorname{Stp}}(R)}(K\chi_{\epsilon_{m+1}}^{\epsilon_m}*(\widetilde\Delta_Q f\mu))(x)\bigg|^2d\mu(x)\lesssim\|f\|^2_{L^2(\mu)}. \end{equation*} \end{lema} \begin{proof}[{\bf{\em Proof}}] Given $R\in V(D)$, consider a $\mu$-cube $Q\in {\operatorname{Stp}}(R)$. If ${\operatorname{Tr}}(R)\neq\emptyset$, then $Q\in{\mathcal B}\cup({\mathcal G}\setminus {\operatorname{Tr}}(R))$, $Q\subset R$ and $P(Q)\in {\operatorname{Tr}}(R)$ (in particular, $Q\subsetneq R$). Take $S\in{\operatorname{Trs}}$ such that $R\in S$. By property $(f)$ of the corona decomposition (see Subsection \ref{5ss corona decomposition}), we have ${\operatorname{dist}}(y,\Gamma_S)\leq\theta{\operatorname{diam}}(P(Q))$ for all $y\in C_{cor}P(Q)$. Hence, ${\operatorname{dist}}(y,\Gamma_S)\leq C\theta{\operatorname{diam}}(Q)$ for all $y\in C_{cor}Q$. On the other hand, if ${\operatorname{Tr}}(R)=\emptyset$ we have set ${\operatorname{Stp}}(R)=\{R\}$. In this case, we have $R\in{\mathcal B}$. Take $S$ such that $D\in S$. Since $R\in V(D)$, we have $R\subset C_{cor}D$ if $C_{cor}$ is chosen big enough, and thus ${\operatorname{dist}}(y,\Gamma_S)\leq C\theta {\operatorname{diam}}(R)$ for all $y\in C'R$, where $C$ is as above and $C'$ depends on $C_{cor}$. Taking into account the comments above, one can prove the following claims using similar arguments to the ones in the proof of Claims \ref{5 claim1} and \ref{5 claim2}. \begin{claim}\label{5 claim3} Let $x\in D\in{\mathcal D}$, $R\in V(D)$, and $m\in{\mathcal S}_D(x)$. 
If we set $J_m^{1,R}:=\{Q\in {\operatorname{Stp}}(R):\,Q\cap A_m(x)\neq\emptyset,\,\ell(Q)\geq C_0(\epsilon_m-\epsilon_{m+1})\},$ then $\sum_{Q\in J_m^{1,R}}\ell(Q)^{n-1/2}\lesssim\ell(D)^{n-1/2}$. \end{claim} \begin{claim}\label{5 claim4} Let $x\in D\in{\mathcal D}$, $R\in V(D)$, and $m\in{\mathcal S}_D(x)$. If we set $J_m^{3,R}:=\{Q\in {\operatorname{Stp}}(R):\,Q\cap A_m(x)\neq\emptyset,\,Q\cap (A_m(x))^c\neq\emptyset,\,\ell(Q)\leq C_0(\epsilon_m-\epsilon_{m+1})\},$ then $\sum_{Q\in J_m^{3,R}}\ell(Q)^{n-1/2}\lesssim\ell(D)^{n-1}(\epsilon_m-\epsilon_{m+1})^{1/2}$. \end{claim} The only properties of $\Delta_Q f$ that we used to obtain (\ref{5 var eq7}) were that $\Delta_Q f$ is supported in $Q$ and that $\int\Delta_Q f\,d\mu=0$. The function $\widetilde\Delta_Qf$ is also supported in $Q$ and has vanishing integral. Thus, if we replace ${\operatorname{Tr}}(R)$ by ${\operatorname{Stp}}(R)$, Claims \ref{5 claim1} and \ref{5 claim2} by Claims \ref{5 claim3} and \ref{5 claim4}, and $\Delta_Q f$ by $\widetilde\Delta_Q f$, the same arguments that gave us (\ref{5 var eq7}) yield the following estimate: \begin{equation}\label{5 var eq9} \begin{split} \!\sum_{m\in{\mathcal S}_D(x)}\!\bigg|\!\sum_{R\in V(D)}\sum_{Q\in {\operatorname{Stp}}(R)}\!\!(K\chi_{\epsilon_{m+1}}^{\epsilon_m}\!\!*\!(\widetilde\Delta_Q f\mu))(x)\bigg|^2 \!\!\lesssim\!\!\!\sum_{R\in V(D)}\sum_{Q\in {\operatorname{Stp}}(R)}\!\!\frac{\ell(Q)^{1/2-n}}{\ell(D)^{1/2+n}}\,\|\widetilde\Delta_Qf\|^2_{L^1(\mu)}. \end{split} \end{equation} Below we will use that $\|\widetilde\Delta_Qf\|^2_{L^1(\mu)}\ell(Q)^{-n}=\big(\int_Q|f-m^\mu_Qf|\,d\mu\big)^2\ell(Q)^{-n}\lesssim\big(m^\mu_Q|f|\big)^2\mu(Q).$ Notice that, by the definition of ${\operatorname{Stp}}(R)$ and since the corona decomposition is coherent (property $(d)$), any $Q\in {\operatorname{Stp}}(R)$ is actually a maximal $\mu$-cube $Q_S$ of some $S\in{\operatorname{Trs}}$ or $Q\in{\mathcal B}$ (and in this case ${\operatorname{Tr}}(R)$ is empty). 
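For the reader's convenience, let us record why $\widetilde\Delta_Qf$ has vanishing integral, writing $\widetilde\Delta_Qf=\chi_Q(f-m^\mu_Qf)$, consistently with the identity $\|\widetilde\Delta_Qf\|_{L^1(\mu)}=\int_Q|f-m^\mu_Qf|\,d\mu$ used above: since $m^\mu_Qf=\mu(Q)^{-1}\int_Qf\,d\mu$,
\begin{equation*}
\int\widetilde\Delta_Qf\,d\mu=\int_Q(f-m^\mu_Qf)\,d\mu=\int_Qf\,d\mu-m^\mu_Qf\,\mu(Q)=0.
\end{equation*}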
Hence, if we integrate (\ref{5 var eq9}) in $D$, we sum over all $D\in S\in{\operatorname{Trs}}$, and we change the order of summation, we get \begin{equation*} \begin{split} \sum_{S\in{\operatorname{Trs}}}\,\sum_{D\in S}\int_D\sum_{m\in{\mathcal S}_D(x)}&\bigg|\sum_{R\in V(D)}\sum_{Q\in {\operatorname{Stp}}(R)}(K\chi_{\epsilon_{m+1}}^{\epsilon_m}*(\widetilde\Delta_Q f\mu))(x)\bigg|^2d\mu(x)\\ &\lesssim\sum_{S\in{\operatorname{Trs}}}\,\sum_{D\in S}\,\sum_{R\in V(D)}\,\sum_{Q\in {\operatorname{Stp}}(R)}\bigg(\frac{\ell(Q)}{\ell(D)}\bigg)^{1/2}\frac{\|\widetilde\Delta_Qf\|^2_{L^1(\mu)}}{\ell(Q)^{n}}\\ &\lesssim\sum_{D\in{\mathcal D}}\,\sum_{R\in V(D)}\,\sum_{S\in{\operatorname{Trs}}:\,Q_S\subset R}\bigg(\frac{\ell(Q_S)}{\ell(D)}\bigg)^{1/2}\big(m^\mu_{Q_S}|f|\big)^2\mu(Q_S)\\ &\quad+\sum_{D\in{\mathcal D}}\,\sum_{R\in V(D)}\,\sum_{Q\in{\mathcal B}:\,Q\subset R}\bigg(\frac{\ell(Q)}{\ell(D)}\bigg)^{1/2}\big(m^\mu_{Q}|f|\big)^2\mu(Q)\\ &=\sum_{S\in{\operatorname{Trs}}}\,\sum_{R\in{\mathcal D}:\,R\supset Q_S}\,\sum_{D\in V(R)}\bigg(\frac{\ell(Q_S)}{\ell(R)}\bigg)^{1/2}\big(m^\mu_{Q_S}|f|\big)^2\mu(Q_S)\\ &\quad+\sum_{Q\in{\mathcal B}}\,\sum_{R\in{\mathcal D}:\,R\supset Q}\,\sum_{D\in V(R)}\bigg(\frac{\ell(Q)}{\ell(R)}\bigg)^{1/2}\big(m^\mu_{Q}|f|\big)^2\mu(Q). 
\end{split} \end{equation*} Finally, using that $V(R)$ has finitely many elements, and that the $\mu$-cubes $Q_S$ with $S\in{\operatorname{Trs}}$ and the $\mu$-cubes $Q\in{\mathcal B}$ satisfy a Carleson packing condition (so we can apply Carleson's embedding theorem), we deduce \begin{equation*} \begin{split} \sum_{S\in{\operatorname{Trs}}}&\,\sum_{D\in S}\int_D\sum_{m\in{\mathcal S}_D(x)}\bigg|\sum_{R\in V(D)}\sum_{Q\in {\operatorname{Stp}}(R)}(K\chi_{\epsilon_{m+1}}^{\epsilon_m}*(\widetilde\Delta_Q f\mu))(x)\bigg|^2d\mu(x)\\ &\lesssim\sum_{S\in{\operatorname{Trs}}}\big(m^\mu_{Q_S}|f|\big)^2\mu(Q_S)\sum_{R\in{\mathcal D}:\,R\supset Q_S}\frac{\ell(Q_S)^{1/2}}{\ell(R)^{1/2}}+\sum_{Q\in{\mathcal B}}\big(m^\mu_{Q}|f|\big)^2\mu(Q)\sum_{R\in{\mathcal D}:\,R\supset Q}\frac{\ell(Q)^{1/2}}{\ell(R)^{1/2}}\\ &\lesssim\sum_{S\in{\operatorname{Trs}}}\big(m^\mu_{Q_S}|f|\big)^2\mu(Q_S) +\sum_{Q\in{\mathcal B}}\big(m^\mu_{Q}|f|\big)^2\mu(Q) \lesssim\|f\|^2_{L^2(\mu)}. \end{split} \end{equation*} \end{proof} \subsubsection{{\bf Estimate of $\sum_{m\in{\mathcal S}_D(x)}\big|\sum_{R\in V(D)} (K\chi_{\epsilon_{m+1}}^{\epsilon_m}*((m_R^\mu f)\chi_R\mu))(x)\big|^2$ from (\ref{eqmain})}} We will need the following auxiliary lemma, which we prove for completeness, although we believe it is already known. \begin{lema}\label{5 veins quasiortogonalitat} Given $D\in{\mathcal D}$ and $f\in L^2(\mu)$, set $a_D(f):=\sum_{R\in V(D)}|m_R^\mu f-m_D^\mu f|$. Then, there exists $C>0$ depending only on $n$ and the AD regularity constant of $\mu$ such that $$\sum_{D\in{\mathcal D}}(a_D(f))^2\mu(D)\leq C\|f\|^2_{L^2(\mu)}.$$ \end{lema} \begin{proof}[{\bf {\em Proof.}}] By subtracting a constant if necessary, we can assume that $f$ has mean zero. Consider the representation of $f$ with respect to the Haar basis associated to ${\mathcal D}$, that is $f=\sum_{Q\in{\mathcal D}}\Delta_Qf$.
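Let us also record the orthogonality underlying this expansion (a standard property of the martingale differences, stated here for the reader's convenience): for $Q\neq R$ one has $\int\Delta_Qf\,\Delta_Rf\,d\mu=0$, and hence, for $f$ with mean zero as above,
\begin{equation*}
\|f\|^2_{L^2(\mu)}=\sum_{Q\in{\mathcal D}}\|\Delta_Qf\|^2_{L^2(\mu)}.
\end{equation*}
This is the identity that allows us to sum the contributions of the different scales in what follows.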
For $m\in{\mathbb Z}$, we define the function $u_m=\sum_{Q\in{\mathcal D}_m}\Delta_Qf$, so $f=\sum_{m\in{\mathbb Z}}u_m$ and the equality holds in $L^2(\mu)$. Given $j\in{\mathbb Z}$, define the operator $$S_j(f):=\bigg(\sum_{D\in{\mathcal D}_j}(a_D(f))^2\chi_D\bigg)^{1/2}.$$ We will prove that there exists a sequence $\{\sigma(k)\}_{k\in{\mathbb Z}}$ such that \begin{equation}\label{5 vei quasiort} \sum_{k\in{\mathbb Z}}\sigma(k)\leq C<\infty\quad\text{and}\quad\|S_j(u_m)\|_{L^2(\mu)}\lesssim\sigma(|m-j|)\|u_m\|_{L^2(\mu)}. \end{equation} Assume for the moment that (\ref{5 vei quasiort}) holds. Then, since each $S_j$ is sublinear, by Cauchy-Schwarz inequality and the orthogonality of the $u_m$'s, \begin{equation*} \begin{split} \sum_{D\in{\mathcal D}}(a_D(f))^2\mu(D)&=\sum_{j\in{\mathbb Z}}\int\sum_{D\in{\mathcal D}_j}(a_D(f))^2\chi_D\,d\mu =\sum_{j\in{\mathbb Z}}\|S_j(f)\|^2_{L^2(\mu)}\\ &=\sum_{j\in{\mathbb Z}}\bigg\|S_j\bigg(\sum_{m\in{\mathbb Z}}u_m\bigg)\bigg\|^2_{L^2(\mu)} \leq\sum_{j\in{\mathbb Z}}\bigg(\sum_{m\in{\mathbb Z}}\|S_j(u_m)\|_{L^2(\mu)}\bigg)^2\\ &\leq\sum_{j\in{\mathbb Z}}\bigg(\sum_{m\in{\mathbb Z}}\sigma(|m-j|)\bigg)\bigg(\sum_{m\in{\mathbb Z}}\sigma(|m-j|)^{-1}\|S_j(u_m)\|^2_{L^2(\mu)}\bigg)\\ &\lesssim\sum_{j\in{\mathbb Z}}\sum_{m\in{\mathbb Z}}\sigma(|m-j|)\|u_m\|^2_{L^2(\mu)} =\sum_{m\in{\mathbb Z}}\|u_m\|^2_{L^2(\mu)}\sum_{j\in{\mathbb Z}}\sigma(|m-j|)\\ &\lesssim\sum_{m\in{\mathbb Z}}\|u_m\|^2_{L^2(\mu)}=\|f\|^2_{L^2(\mu)}, \end{split} \end{equation*} and the lemma follows. Let us verify (\ref{5 vei quasiort}) now. By definition, \begin{equation}\label{5 def Sj} \|S_j(u_m)\|^2_{L^2(\mu)} =\sum_{D\in{\mathcal D}_j}\bigg(\sum_{R\in V(D)}\bigg|\sum_{Q\in{\mathcal D}_m}\int\Delta_Qf \bigg(\frac{\chi_R}{\mu(R)}-\frac{\chi_D}{\mu(D)}\bigg)\,d\mu\,\bigg|\bigg)^2\mu(D). \end{equation} Assume first that $m\geq j$. If $D\in{\mathcal D}_j$, $R\in V(D)$, and $Q\in{\mathcal D}_m$, then either $Q\cap R=\emptyset$ or $Q\subset R$. 
In both cases, since $\Delta_Qf$ has mean zero and is supported in $Q$, we have $\int\Delta_Q f\,\chi_R\,d\mu=0$. Thus, the right hand side of (\ref{5 def Sj}) vanishes (obviously $D\in V(D)$), and (\ref{5 vei quasiort}) follows. Assume now that $m<j$. Set $\widetilde D:=\bigcup_{R\in V(D)}R$. Recall that $\Delta_Q f:=\sum_{U\in {\operatorname{Ch}}( Q)}\chi_U(m_U^\mu f-m_Q^\mu f)$, so $\Delta_Q f$ is constant in each $U\in {\operatorname{Ch}}( Q)$. Hence, if for some $U\in {\operatorname{Ch}}( Q)$ we have $\widetilde D\subset U$ or $\widetilde D\subset{\operatorname{supp}}\mu\setminus U$, then $(R\cup D)\subset U$ or $(R\cup D)\cap U=\emptyset$ for all $R\in V(D)$, and so $$\int\chi_U(m_U^\mu f-m_Q^\mu f)\bigg(\frac{\chi_R}{\mu(R)}-\frac{\chi_D}{\mu(D)}\bigg)\,d\mu =(m_U^\mu f-m_Q^\mu f)\int_U\bigg(\frac{\chi_R}{\mu(R)}-\frac{\chi_D}{\mu(D)}\bigg)\,d\mu=0$$ for all $R\in V(D)$. Therefore, if we set $m_{U,Q}^\mu f:=(m_U^\mu f-m_Q^\mu f)$, using that $V(D)$ has finitely many elements and that $\int|\mu(R)^{-1}\chi_R-\mu(D)^{-1}\chi_D|\,d\mu\leq2$ for all $R\in V(D)$, we deduce from (\ref{5 def Sj}) that \begin{equation}\label{5 def Sj1} \begin{split} \|S_j(u_m)\|^2_{L^2(\mu)} &=\sum_{D\in{\mathcal D}_j}\!\bigg(\!\sum_{R\in V(D)}\!\bigg|\!\sum_{Q\in{\mathcal D}_m}\!\int\!\!\sum_{\begin{subarray}{c}U\in {\operatorname{Ch}}( Q):\\\widetilde D\cap U\neq\emptyset,\\\widetilde D\cap U^c\neq\emptyset\end{subarray}}\!\!\chi_U\,m_{U,Q}^\mu f \bigg(\frac{\chi_R}{\mu(R)}-\frac{\chi_D}{\mu(D)}\bigg)\,d\mu\,\bigg|\bigg)^2\!\mu(D)\\ &\lesssim\sum_{D\in{\mathcal D}_j}\bigg(\sum_{Q\in{\mathcal D}_m}\, \sum_{\begin{subarray}{c}U\in {\operatorname{Ch}}( Q):\,\widetilde D\cap U\neq\emptyset,\\\widetilde D\cap U^c\neq\emptyset\end{subarray}} |m_{U,Q}^\mu f|\bigg)^2\mu(D)\\ &=\sum_{D\in{\mathcal D}_j}\bigg(\sum_{\begin{subarray}{c}U\in{\mathcal D}_{m+1}:\,\widetilde D\cap U\neq\emptyset,\\\widetilde D\cap U^c\neq\emptyset\end{subarray}}|m_{U,P(U)}^\mu f|\bigg)^2\mu(D). 
\end{split} \end{equation} It is not hard to show that, since $m<j$ and $D\in{\mathcal D}_j$, the number of $\mu$-cubes $U\in{\mathcal D}_{m+1}$ such that $\widetilde D\cap U\neq\emptyset$ and $\widetilde D\cap U^c\neq\emptyset$ is bounded by some constant depending only on $n$ and the AD regularity constant of $\mu$ (but not on the precise value of $m$). Hence, \begin{equation}\label{5 def Sj2} \begin{split} \sum_{D\in{\mathcal D}_j}\bigg(\sum_{\begin{subarray}{c}U\in{\mathcal D}_{m+1}:\,\widetilde D\cap U\neq\emptyset,\\\widetilde D\cap U^c\neq\emptyset\end{subarray}}|m_{U,P(U)}^\mu f|\bigg)^2\mu(D) &\lesssim\sum_{D\in{\mathcal D}_j}\,\sum_{\begin{subarray}{c}U\in{\mathcal D}_{m+1}:\,\widetilde D\cap U\neq\emptyset,\\\widetilde D\cap U^c\neq\emptyset\end{subarray}}|m_{U,P(U)}^\mu f|^2\mu(D)\\ &=\sum_{U\in{\mathcal D}_{m+1}}|m_{U,P(U)}^\mu f|^2 \,\mu\bigg(\bigcup_{\begin{subarray}{c}D\in{\mathcal D}_{j}:\,\widetilde D\cap U\neq\emptyset,\\\widetilde D\cap U^c\neq\emptyset\end{subarray}}D\bigg). \end{split} \end{equation} Fix $U\in{\mathcal D}_{m+1}$. Recall that $\widetilde D:=\bigcup_{R\in V(D)}R$, so ${\operatorname{diam}}(\widetilde D)\approx{\operatorname{diam}}(D)$. Thus, there exists a constant $\tau_0>0$ such that \begin{equation*} \begin{split} \bigcup_{\begin{subarray}{c}D\in{\mathcal D}_{j}:\,\widetilde D\cap U\neq\emptyset,\,\widetilde D\cap U^c\neq\emptyset \end{subarray}}D &\subset\{x\in U:\, {\operatorname{dist}}(x,{\operatorname{supp}}\mu\setminus U)\leq\tau_0\ell(D)\}\\ &\quad\cup\{x\in {\operatorname{supp}}\mu\setminus U:\, {\operatorname{dist}}(x,U)\leq\tau_0\ell(D)\}\\ &=\{x\in U:\, {\operatorname{dist}}(x,{\operatorname{supp}}\mu\setminus U)\leq\tau_02^{m-j+1}\ell(U)\}\\ &\quad\cup\{x\in {\operatorname{supp}}\mu\setminus U:\, {\operatorname{dist}}(x,U)\leq\tau_02^{m-j+1}\ell(U)\}. 
\end{split} \end{equation*} If $m\ll j$, then $\tau:=\tau_02^{m-j+1}<1$, so we can apply the {\em small boundaries condition} (\ref{small boundary condition}) of Subsection \ref{dyadic lattice} to obtain $\mu\big(\bigcup_{D\in{\mathcal D}_{j}:\,\widetilde D\cap U\neq\emptyset,\,\widetilde D\cap U^c\neq\emptyset}D\big)\leq C\tau^{1/C}2^{-mn}.$ If, on the other hand, $|m-j|\lesssim1$, then $\tau^{1/C}\approx1$, so $\mu\big(\bigcup_{D\in{\mathcal D}_{j}:\,\widetilde D\cap U\neq\emptyset,\,\widetilde D\cap U^c\neq\emptyset}D\big)\leq\mu(C_1U)\lesssim2^{-mn}\approx\tau^{1/C}2^{-mn}$, for some big constant $C_1>0$. Thus, in any case, $\mu\big(\bigcup_{D\in{\mathcal D}_{j}:\,\widetilde D\cap U\neq\emptyset,\,\widetilde D\cap U^c\neq\emptyset}D\big)\lesssim2^{(m-j)/C}\ell(U)^n,$ and combining this with (\ref{5 def Sj2}) and (\ref{5 def Sj1}) we conclude that, for $m<j$, \begin{equation*} \begin{split} \|S_j(u_m)\|^2_{L^2(\mu)}&\lesssim 2^{(m-j)/C}\sum_{U\in{\mathcal D}_{m+1}}|m_U^\mu f-m_{P(U)}^\mu f|^2\ell(U)^n\\ &\approx2^{(m-j)/C}\int\sum_{U\in{\mathcal D}_{m+1}}\chi_U|m_U^\mu f-m_{P(U)}^\mu f|^2\,d\mu=2^{-|m-j|/C}\|u_m\|^2_{L^2(\mu)}, \end{split} \end{equation*} which gives (\ref{5 vei quasiort}) with $\sigma(k)=2^{-\frac{|k|}{2C}}$ and finishes the proof of the lemma. \end{proof} \begin{lema}\label{5 var eq8a} Under the notation above, we have \begin{equation*} \sum_{S\in{\operatorname{Trs}}}\sum_{D\in S}\int_D\sum_{m\in{\mathcal S}_D(x)}\bigg|\sum_{R\in V(D)} (K\chi_{\epsilon_{m+1}}^{\epsilon_m}*((m_R^\mu f)\chi_R\mu))(x)\bigg|^2\,d\mu(x)\lesssim\|f\|_{L^2(\mu)}^2. \end{equation*} \end{lema} \begin{proof}[{\bf{\em Proof}}] Recall that, given $D\in{\mathcal D}$, we have set $\widetilde D:=\bigcup_{R\in V(D)}R$.
For $x\in D$, we have \begin{equation}\label{5 var quasiort eq1} \begin{split} \sum_{m\in{\mathcal S}_D(x)}\!\bigg|\!\sum_{R\in V(D)} (K\chi_{\epsilon_{m+1}}^{\epsilon_m}*&((m_R^\mu f)\chi_R\mu))(x)\bigg|^2\lesssim\!\sum_{m\in{\mathcal S}_D(x)}\!\big| (K\chi_{\epsilon_{m+1}}^{\epsilon_m}*((m_D^\mu f)\chi_{\widetilde D}\mu))(x)\big|^2\\ &\quad+\sum_{m\in{\mathcal S}_D(x)}\bigg|\sum_{R\in V(D)} (K\chi_{\epsilon_{m+1}}^{\epsilon_m}*((m_R^\mu f-m_D^\mu f)\chi_R\mu))(x)\bigg|^2. \end{split} \end{equation} We are going to estimate the two terms on the right hand side of (\ref{5 var quasiort eq1}) separately. For the second one, recall also that, given $m\in{\mathcal S}_D(x)$, we have set $A_m(x):=A(x,\epsilon_{m+1},\epsilon_m)$. We write \begin{equation*} \begin{split} |(K\chi_{\epsilon_{m+1}}^{\epsilon_m}*((m_R^\mu f-m_D^\mu f)\chi_R\mu))(x)| &\leq|m_R^\mu f-m_D^\mu f|\int_{A_m(x)}|K(x-y)|\chi_R(y)\,d\mu(y)\\ &\lesssim|m_R^\mu f-m_D^\mu f|\,\mu(A_m(x)\cap R)\ell(D)^{-n}. \end{split} \end{equation*} Therefore, interchanging the order of summation, \begin{equation*} \begin{split} \sum_{m\in{\mathcal S}_D(x)}&\bigg|\sum_{R\in V(D)} (K\chi_{\epsilon_{m+1}}^{\epsilon_m}*((m_R^\mu f-m_D^\mu f)\chi_R\mu))(x)\bigg|^2\\ &\lesssim\bigg(\sum_{m\in{\mathcal S}_D(x)}\sum_{R\in V(D)}|m_R^\mu f-m_D^\mu f|\,\mu(A_m(x)\cap R)\ell(D)^{-n}\bigg)^2\\ &\leq\bigg(\sum_{R\in V(D)}|m_R^\mu f-m_D^\mu f|\,\frac{\mu(R)}{\ell(D)^{n}}\bigg)^2 \approx\bigg(\sum_{R\in V(D)}|m_R^\mu f-m_D^\mu f|\bigg)^2=(a_D(f))^2, \end{split} \end{equation*} where $a_D(f)$ are the coefficients introduced in Lemma \ref{5 veins quasiortogonalitat}. 
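For clarity, the passage from the sum over $m$ to $\mu(R)$ in the last display uses the disjointness of the annuli: since $\epsilon_{m+1}<\epsilon_m$, as the notation $A_m(x)=A(x,\epsilon_{m+1},\epsilon_m)$ suggests, the annuli $A_m(x)$, $m\in{\mathcal S}_D(x)$, are pairwise disjoint, whence
\begin{equation*}
\sum_{m\in{\mathcal S}_D(x)}\mu(A_m(x)\cap R)=\mu\bigg(\bigcup_{m\in{\mathcal S}_D(x)}A_m(x)\cap R\bigg)\leq\mu(R).
\end{equation*}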
If we integrate over $D$ and sum over all $D\in S$ and $S\in{\operatorname{Trs}}$, we can apply Lemma \ref{5 veins quasiortogonalitat}, and we finally obtain \begin{equation}\label{5 var quasiort eq2} \begin{split} \sum_{S\in{\operatorname{Trs}}}\,\sum_{D\in S}\int_D\sum_{m\in{\mathcal S}_D(x)}\bigg|\sum_{R\in V(D)} (K\chi_{\epsilon_{m+1}}^{\epsilon_m}*((m_R^\mu f&-m_D^\mu f)\chi_R\mu))(x)\bigg|^2d\mu(x)\\ &\lesssim\sum_{D\in{\mathcal D}}(a_D(f))^2\mu(D)\lesssim\|f\|^2_{L^2(\mu)}. \end{split} \end{equation} Let us now estimate the first term on the right hand side of (\ref{5 var quasiort eq1}). Let $L_D$ be a minimizing $n$-plane for $\alpha_{\mu}(D)$ and let $L_D^x$ be the $n$-plane parallel to $L_D$ which contains $x$. Let $p_0^x$ denote the orthogonal projection of ${\mathbb R}^d$ onto $L_D^x$. Let $g_1,g_2:{\mathbb R}\to[0,1]$ be such that ${\operatorname{supp}} g_1\subset(-2\varepsilon\ell(D),2\varepsilon\ell(D))$, ${\operatorname{supp}} g_2\subset(-\varepsilon\ell(D),\varepsilon\ell(D))^c$, and $g_1+g_2=1$, where $\varepsilon>0$ is some fixed constant small enough. For $z\in{\mathbb R}^d$, consider the projection onto $L_D^x$ given by \begin{equation}\label{5projeccio angular} p^x(z):=\bigg(x+(p^x_0(z)-x)\frac{|z-x|}{|p^x_0(z)-x|}\bigg)g_2(|p^x_0(z)-x|)+p^x_0(z)g_1(|p^x_0(z)-x|). \end{equation} Since ${\operatorname{supp}} g_2$ does not contain the origin, $p^x$ is well defined. Moreover, if $z\in{\mathbb R}^d$ is such that $g_2(|p^x_0(z)-x|)=1$, then $|z-x|=|p^x(z)-x|$. Let $C_*>0$ be a small constant which will be fixed below. Assume that $\alpha_\mu(10D)\geq C_*$.
Then, we can easily estimate \begin{equation}\label{5teorema L2 no suau 6'} \begin{split} \sum_{m\in{\mathcal S}_D(x)}\big|(K\chi_{\epsilon_{m+1}}^{\epsilon_m}*&((m_D^\mu f)\chi_{\widetilde D}\mu))(x)\big|^2 =|m_D^\mu f|^2\sum_{m\in{\mathcal S}_{D}(x)}\bigg|\int_{A_m(x)\cap\widetilde D}K(x-y)\,d\mu(y)\bigg|^2\\ &\lesssim|m_D^\mu f|^2\bigg(\sum_{m\in{\mathcal S}_{D}(x)}\int_{A_m(x)\cap\widetilde D}|K(x-y)|\,d\mu(y)\bigg)^2\\ &\lesssim|m_D^\mu f|^2\bigg(\int_{\widetilde D}\ell(D)^{-n}\,d\mu(y)\bigg)^2 \lesssim|m_D^\mu f|^2\lesssim|m_D^\mu f|^2\alpha_\mu(10D)^2. \end{split} \end{equation} From now on, we assume that $\alpha_\mu(10D)<C_*$. By assuming $C_*$ small enough, it is not difficult to show that the distance between $\widetilde D$ and $L_D^x$ is smaller than $\ell(D)/1000$. Moreover, $p^x$ restricted to $\{y\in A_m(x):\, {\operatorname{dist}}(y,L_D^x)\leq\ell(D)/1000\}$ is a Lipschitz function with Lipschitz constant depending only on $n$, $d$, and the AD regularity constant of $\mu$. Furthermore, by taking $\varepsilon$ small enough, we have \begin{equation}\label{mimimim} p^x(z)=x+(p^x_0(z)-x)\frac{|z-x|}{|p^x_0(z)-x|} \end{equation} for all $z\in\{y\in \widetilde D\cap A_m(x):\, {\operatorname{dist}}(y,L_D^x)\leq\ell(D)/1000\}\subset{\operatorname{supp}}\mu$. Recall that $D\in S$ for some $S\in{\operatorname{Trs}}$. Let $Q_S$ be the maximal $\mu$-cube of $S$, and set $\nu_x:=p^x_\sharp(\chi_{40Q_S}\mu)$. Then, since ${\operatorname{supp}}\mu\cap A_m(x)\subset\widetilde D$ by the construction of $\widetilde D$, \begin{equation}\label{5teorema L2 no suau 6} \begin{split} (K\chi_{\epsilon_{m+1}}^{\epsilon_m}*((&m_D^\mu f)\chi_{\widetilde D}\mu))(x) =(m_D^\mu f)\int_{A_m(x)}K(x-y)\,d\mu(y)\\ &=(m_D^\mu f)\int_{A_m(x)}K(x-y)\,d(\mu-\nu_x)(y) +(m_D^\mu f)\int_{A_m(x)}K(x-y)\,d\nu_x(y)\\ &=:U1_m(x)+U2_m(x).
\end{split} \end{equation} \begin{claim}\label{5teorema L2 no suau 18} Under the notation above, we have \begin{equation*} \sum_{m\in{\mathcal S}_D(x)}|U1_m(x)|^2 \lesssim|m_D^\mu f|^2\bigg(\beta_{1,\mu}(D)^2+\alpha_\mu(D)^2+\bigg(\frac{{\operatorname{dist}}(x,L_D)}{\ell(D)}\bigg)^2\bigg). \end{equation*} \end{claim} \begin{proof}[{\bf{\em Proof of }Claim \ref{5teorema L2 no suau 18}}] By (\ref{mimimim}), $y\in A_m(x)$ if and only if $p^x(y)\in A_m(x)$ in the integral defining $U1_m(x)$. Since $|y-p^x(y)|\lesssim {\operatorname{dist}}(y,L_D^x)\leq{\operatorname{dist}}(y,L_D)+{\operatorname{dist}}(x,L_D)$ for all $y\in{\operatorname{supp}}\mu\cap A_m(x)$, \begin{equation*} \begin{split} |U1_m(x)|&\leq|m_D^\mu f|\int_{A_m(x)}|K(x-y)-K(x-p^x(y))|\,d\mu(y)\\ &\lesssim\frac{|m_D^\mu f|}{\ell(D)^{n+1}}\int_{A_m(x)}|y-p^x(y)|\,d\mu(y)\\ &\lesssim\frac{|m_D^\mu f|}{\ell(D)^{n+1}}\int_{A_m(x)}({\operatorname{dist}}(y,L_D)+{\operatorname{dist}}(x,L_D))\,d\mu(y). \end{split} \end{equation*} If $L_D^1$ denotes a minimizing $n$-plane for $\beta_{1,\mu}(D)$, then ${\operatorname{dist}}_{\mathcal H}(L_D\cap B_D,L_D^1\cap B_D)\lesssim\alpha_\mu(D)\ell(D)$, so ${\operatorname{dist}}(y,L_D)\lesssim{\operatorname{dist}}(y,L^1_D)+\alpha_\mu(D)\ell(D)$ for $y\in CD$ (see \cite{To}). Therefore, \begin{equation*} \begin{split} \sum_{m\in{\mathcal S}_D(x)}|U1_m(x)|^2 &\lesssim\bigg(\frac{|m_D^\mu f|}{\ell(D)^{n+1}}\sum_{m\in{\mathcal S}_D(x)}\int_{A_m(x)} ({\operatorname{dist}}(y,L_D)+{\operatorname{dist}}(x,L_D))\,d\mu(y)\bigg)^2\\ &\lesssim|m_D^\mu f|^2\bigg(\ell(D)^{-n-1}\int_{CD}({\operatorname{dist}}(y,L_D)+{\operatorname{dist}}(x,L_D))\,d\mu(y)\bigg)^2\\ &\lesssim|m_D^\mu f|^2\bigg(\beta_{1,\mu}(D)^2+\alpha_\mu(D)^2+\bigg(\frac{{\operatorname{dist}}(x,L_D)}{\ell(D)}\bigg)^2\bigg). \end{split} \end{equation*} \end{proof} Let us now consider $U2_m(x)$.
We can assume that $\nu_x$ is absolutely continuous with respect to ${\mathcal H}^n_{L_D^x}$ (for example, by convolving it with an approximation of the identity and making a limiting argument). Let $h_x$ be the corresponding density, so $\nu_x=h_x{\mathcal H}^n_{L_D^x}$. We may also assume that $h_x\in L^2({\mathcal H}^n_{L_D^x})$. So, \begin{equation* \begin{split} U2_m(x)&=(m_D^\mu f)\int_{A_m(x)}K(x-y)\,d\nu_x(y)=(m_D^\mu f)\int_{A_m(x)}K(x-y)h_x(y)\,d{\mathcal H}^n_{L_D^x}(y). \end{split} \end{equation*} At this point, we need to introduce a wavelet basis. \begin{defi}\label{definicio wavelet} Let ${\mathcal D}^n$ denote the standard dyadic lattice of ${\mathbb R}^n$. Let $\{\psi_Q^k\}_{Q\in{\mathcal D}^n,\,k=1,\ldots,2^n-1}$ be an orthonormal basis of ${\mathcal C}^1$ wavelets on ${\mathbb R}^n$ in the following manner (see \cite[Part I]{David-LNM}): \begin{enumerate} \item[$(a)$] $\psi_Q^k:{\mathbb R}^n\to{\mathbb R}$ is a ${\mathcal C}^1$ function for all $Q\in{\mathcal D}^n$ and $k=1,\ldots,2^n-1$. \item[$(b)$] There exists $C>1$ and $\psi_0:[0,C]^n\to{\mathbb R}$ with $\|\psi_0\|_2=1$, $\|\psi_0\|_\infty\lesssim1$, and such that, for any $Q\in{\mathcal D}^n$ and $k=1,\ldots,2^n-1$, there exists $l\in{\mathbb Z}^n$ such that $\psi_Q^k(y)=\psi_0(y/\ell(Q)-l)\ell(Q)^{-n/2}$ for all $y\in{\mathbb R}^n$. \item[$(c)$] $\|\psi_Q^k\|_2=1$, $\int\psi_Q^k\,d{\mathcal L}^n=0$ and $\int\psi_Q^k\psi_R^l\,d{\mathcal L}^n=0$, for all $Q,R\in{\mathcal D}^n$ and $k,l=1,\ldots,2^n-1$ such that $(Q,k)\neq(R,l)$, where ${\mathcal L}^n$ denotes the Lebesgue measure in ${\mathbb R}^n$. \item[$(d)$] ${\operatorname{supp}}\psi_Q^k\subset C_wQ$ for all $Q\in{\mathcal D}^n$ and $k=1,\ldots,2^n-1$, where $C_w>1$ is some fixed constant (which depends on $n$). In particular, for any $j\in{\mathbb Z}$ the supports of the functions in $\bigcup_{Q\in{\mathcal D}^n:\, \ell(Q)=2^{-j}}\{\psi_Q^k\}_{k=1,\ldots,2^n-1}$ have finite overlap. 
\item[$(e)$] $\|\psi_Q^k\|_\infty\lesssim\ell(Q)^{-n/2}$ and $\|\nabla\psi_Q^k\|_\infty\lesssim\ell(Q)^{-n/2-1}$ for all $Q\in{\mathcal D}^n$, $k=1,\ldots,2^n-1$. \item[$(f)$] If $h\in L^2({\mathcal L}^n)$, then $h=\sum_{Q\in{\mathcal D}^n,\,k=1,\ldots,2^n-1}\Delta_Q^k h$, where $\Delta_Q^k h:=\big(\int h\psi_Q^k\,d{\mathcal L}^n\big)\psi_Q^k.$ \end{enumerate} \end{defi} In order to reduce the notation, we may think that a cube of ${\mathcal D}^n$ is not only a subset of ${\mathbb R}^n$, but a pair $(Q,k)$, where $Q$ is a subset of ${\mathbb R}^n$ and $k=1,\ldots,2^n-1$. In particular, there exist $2^n-1$ cubes in ${\mathcal D}^n$ such that the subsets that they represent in ${\mathbb R}^n$ coincide. We make this abuse of notation to avoid using the superscript $k$ in the previous definition. Then, we can rewrite the wavelet basis as $\{\psi_Q\}_{Q\in{\mathcal D}^n}$, with the evident adjustments of the properties $(a),\ldots,(f)$ in Definition \ref{definicio wavelet}. Let ${\mathcal D}^{n,0}_x$ be a fixed dyadic lattice of the $n$-plane $L_{D}^x$, and let $\{\psi_Q\}_{Q\in{\mathcal D}^{n,0}_x}$ be a wavelet basis like the one introduced in Definition \ref{definicio wavelet} but defined on $L_{D}^x$. Denote by $E_D^x$ the $n$-dimensional vector space which defines $L_{D}^x$, and let $\{Q^0_k\}_{k\in{\mathbb Z}}$ be a fixed sequence of nested dyadic cubes in $E_D^x$ having the origin as a common vertex and such that $\ell(Q^0_k)=2^{-k}$ for all $k\in{\mathbb Z}$. Given $s\in E_D^x$, set ${\mathcal D}^{n,s}_x:=\{s+Q:\,Q\in{\mathcal D}^{n,0}_x\}$ (notice that, for any $k\in{\mathbb Z}$, the family $\{Q\in{\mathcal D}^{n,s}_x:\, \ell(Q)=2^{-k}\}$ is periodic in the parameter $s$). For any $Q\in{\mathcal D}^{n,0}_x$ and $y\in L_{D}^x$, if $Q'=s+Q\in {\mathcal D}^{n,s}_x$, we define $\psi_{Q'}(y)\equiv\psi_{s+Q}(y):=\psi_{Q}(y-s)$. Then $\{\psi_{Q'}\}_{Q'\in{\mathcal D}^{n,s}_x}$ is also a wavelet basis defined on $L_{D}^x$.
Consider the decomposition of $h_x$ with respect to this basis, \begin{equation}\label{mklnko} h_x=\sum_{Q\in{\mathcal D}^{n,s}_x}\Delta^\psi_{Q}h_x=\sum_{Q\in{\mathcal D}^{n,0}_x}\Delta^\psi_{Q,s}h_x, \end{equation} where $\Delta^\psi_{Q,s}h_x(z):=\big(\int h_x(y)\psi_Q(y-s)\,d{\mathcal H}^n_{L_D^x}(y)\big)\,\psi_Q(z-s)$ (recall that, for any $Q\in{\mathcal D}^{n,s}_x$, $\int\psi_{Q}\,d{\mathcal H}^n_{L_D^x}=0$). We set $J(Q_S):=-\log_2(\ell(Q_S))$, and given $Q\in{\mathcal D}^{n,s}_x$, we set $J(Q):=-\log_2(\ell(Q))$ and $J'(Q):=\max\{J(Q_S),J(Q)\}$. Given $\Omega\subset E_D^x$, denote by $m_{s\in\Omega} g$ the average of a function $g:E_D^x\to{\mathbb R}$ over $s\in\Omega$ with respect to ${\mathcal H}^n_{E_D^x}$. Then, by the periodicity of $\{\psi_Q\}_{Q\in{\mathcal D}^{n,s}_x}$ in the parameter $s$ (recall Definition \ref{definicio wavelet}$(b)$) and (\ref{mklnko}), we can write \begin{equation*} h_x=m_{s\in Q^0_{J(Q_S)}} (h_x)=\sum_{Q\in{\mathcal D}^{n,0}_x}m_{s\in Q^0_{J(Q_S)}}(\Delta^\psi_{Q,s}h_x) =\sum_{Q\in{\mathcal D}^{n,0}_x}m_{s\in Q^0_{J'(Q)}}(\Delta^\psi_{Q,s}h_x). \end{equation*} Set $J:=\{Q\in{\mathcal D}_x^{n,0}\,:\,{\operatorname{supp}}\psi_{Q}(\cdot-s)\cap{\operatorname{supp}}\chi_{2^{-j-1}}^{\,2^{-j}}(x-\cdot)\neq\emptyset\text{ for some }s\in Q^0_{J'(Q)}\}$. Then, \begin{equation}\label{chili} U2_m(x)=(m_D^\mu f)\int_{A_m(x)}K(x-y)\sum_{Q\in J}m_{s\in Q^0_{J'(Q)}}(\Delta^\psi_{Q,s}h_x(y))\,d{\mathcal H}^n_{L_D^x}(y). \end{equation} Recall that $D\in{\mathcal D}_j$ and $m\in{\mathcal S}_D(x)$. Since $x\in D$ and $\ell(D)=2^{-j}$, if $Q\in J$, then $D\subset B(x,C_a\ell(Q))$ or $Q\subset B(x,C_a\ell(D))$ for some constant $C_a>0$ big enough. In particular, if $\ell(Q)\gtrsim\ell(D)$ then $D\subset B(z_Q,C_a\ell(Q))$, and if $\ell(Q)\leq C\ell(D)$ with $C>0$ small enough then $Q\subset B(z_D,C_a\ell(D))$, where $z_Q$ denotes the center of $Q\subset L_D^x$ and $z_D$ denotes the center of $D\in{\mathcal D}$.
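Let us briefly sketch why the termwise replacement of $Q^0_{J(Q_S)}$ by $Q^0_{J'(Q)}$ in the averages above is legitimate; the only ingredient is the periodicity from Definition \ref{definicio wavelet}$(b)$. Identify $E_D^x$ with ${\mathbb R}^n$, and fix $k\in{\mathbb Z}$ and $v\in2^{-k}{\mathbb Z}^n$. For every $Q\in{\mathcal D}^{n,0}_x$ with $\ell(Q)=2^{-k}$ we have $\psi_Q(y-s-v)=\psi_{Q+v}(y-s)$, where $Q+v\in{\mathcal D}^{n,0}_x$ is again a cube of side length $2^{-k}$, and hence \begin{equation*} \sum_{Q\in{\mathcal D}^{n,0}_x:\,\ell(Q)=2^{-k}}\Delta^\psi_{Q,s+v}h_x=\sum_{Q\in{\mathcal D}^{n,0}_x:\,\ell(Q)=2^{-k}}\Delta^\psi_{Q,s}h_x, \end{equation*} that is, the scale-$k$ part of the decomposition is $2^{-k}{\mathbb Z}^n$-periodic in $s$. If $2^{-k}\leq\ell(Q_S)$, then $J'(Q)=k$ for the cubes of this scale and $Q^0_{J(Q_S)}$ is tiled by translated copies of the period cube $Q^0_{J'(Q)}$, so averaging the scale-$k$ part over $s\in Q^0_{J(Q_S)}$ gives the same result as averaging over $s\in Q^0_{J'(Q)}$; if $2^{-k}>\ell(Q_S)$, then $J'(Q)=J(Q_S)$ and the two averages coincide trivially.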
We define \begin{equation*} \begin{split} J_1:&=\{Q\in J:\,\ell(Q)\leq C\ell(D)\}\subset\{Q\in{\mathcal D}_x^{n,0}:\,Q\subset B(z_D,C_a\ell(D))\},\text{ and}\\ J_2:&=J\setminus J_1\subset\{Q\in{\mathcal D}_x^{n,0}:\,D\subset B(z_Q,C_a\ell(Q))\}. \end{split} \end{equation*} Then, using (\ref{chili}), that ${\operatorname{supp}}\chi_{\epsilon_{m+1}}^{\,\epsilon_m}(x-\cdot)\subset{\operatorname{supp}}\chi_{2^{-j-1}}^{\,2^{-j}}(x-\cdot)$ for all $m\in{\mathcal S}_D(x)$, that $\int_{A_m(x)}K(x-y)\,d{\mathcal H}^n_{L_D^x}(y)=0$ by antisymmetry, and that $J'(Q)=J(Q)$ for all $Q\in J_1$ (because $D\subset Q_S$), if $x'$ denotes some fixed point in $A(x,2^{-j-1},2^{-j})\cap L_D^x$, we have \begin{equation}\label{5teorema L2 no suau 16} \begin{split} U2_m&(x)=(m_D^\mu f)\int_{A_m(x)}K(x-y)\sum_{Q\in J_1}m_{s\in Q^0_{J(Q)}}(\Delta^\psi_{Q,s}h_x(y))\,d{\mathcal H}^n_{L_D^x}(y)\\ &\quad+(m_D^\mu f)\int_{A_m(x)}K(x-y)\sum_{Q\in J_2}m_{s\in Q^0_{J'(Q)}}\big(\Delta^\psi_{Q,s}h_x(y) -\Delta^\psi_{Q,s}h_x(x')\big)\,d{\mathcal H}^n_{L_D^x}(y)\\ &=:U3_m(x)+U4_m(x). \end{split} \end{equation} \begin{claim}\label{5teorema L2 no suau 20} Under the notation above, we have \begin{equation*} \sum_{m\in{\mathcal S}_D(x)}|U4_m(x)|^2 \lesssim|m_D^\mu f|^2\sum_{Q\in J_2}\bigg(\frac{\ell(D)}{\ell(Q)}\bigg)^{1/2}\ell(Q)^{-n}\big(m_{s\in Q^0_{J'(Q)}}\|\Delta^\psi_{Q,s}h_x\|_2\big)^2. \end{equation*} \end{claim} \begin{proof}[{\bf{\em Proof of }Claim \ref{5teorema L2 no suau 20}}] By property $(e)$ of the wavelet basis in Definition \ref{definicio wavelet}, we have $|\Delta^\psi_{Q,s}h_x(y)-\Delta^\psi_{Q,s}h_x(x')|\leq\|\nabla(\Delta^\psi_{Q,s}h_x)\|_\infty|x'-y|\lesssim\|\Delta^\psi_{Q,s}h_x\|_2|x'-y|\ell(Q)^{-n/2-1}$. Moreover, if $y\in A_m(x)$, then $|x'-y|\lesssim\ell(D)$. 
Therefore, \begin{equation*} \begin{split} |U4_m(x)|&\leq\sum_{Q\in J_2}|m_D^\mu f|\int_{A_m(x)}|K(x-y)|m_{s\in Q^0_{J'(Q)}}\big(\big|\Delta^\psi_{Q,s}h_x(y)-\Delta^\psi_{Q,s}h_x(x')\big|\big)\,d{\mathcal H}^n_{L_D^x}(y)\\ &\lesssim\sum_{Q\in J_2}|m_D^\mu f|m_{s\in Q^0_{J'(Q)}}\big(\|\Delta^\psi_{Q,s}h_x\|_2\big)\ell(D)^{1-n}\ell(Q)^{-n/2-1}{\mathcal H}^n_{L_D^x}({A_m(x)}), \end{split} \end{equation*} and then, by Cauchy-Schwarz inequality and since $J_2\subset\{Q\in{\mathcal D}_x^{n,0}:\,D\subset B(z_Q,C_a\ell(Q))\}$ (in particular, $\ell(D)/\ell(Q)\lesssim(\ell(D)/\ell(Q))^{1/2}$), \begin{equation*} \begin{split} \sum_{m\in{\mathcal S}_D(x)}|U4_m(x)|^2 &\lesssim\bigg(\sum_{m\in{\mathcal S}_D(x)}\sum_{Q\in J_2}|m_D^\mu f|\, m_{s\in Q^0_{J'(Q)}}\big(\|\Delta^\psi_{Q,s}h_x\|_2\big)\ell(D)^{1-n}\ell(Q)^{-n/2-1}\,{\mathcal H}^n_{L_D^x}({A_m(x)})\bigg)^2\\ &\lesssim\bigg(\sum_{Q\in J_2}|m_D^\mu f|\,m_{s\in Q^0_{J'(Q)}}\big(\|\Delta^\psi_{Q,s}h_x\|_2\big)\ell(D)\,\ell(Q)^{-n/2-1}\bigg)^2\\ &\leq\bigg(\sum_{Q\in J_2}\frac{\ell(D)}{\ell(Q)}\bigg)\bigg(\sum_{Q\in J_2}|m_D^\mu f|^2\big(m_{s\in Q^0_{J'(Q)}}\|\Delta^\psi_{Q,s}h_x\|_2\big)^2\frac{\ell(D)}{\ell(Q)^{n+1}}\bigg)\\ &\lesssim|m_D^\mu f|^2\sum_{Q\in J_2}\bigg(\frac{\ell(D)}{\ell(Q)}\bigg)^{1/2}\ell(Q)^{-n}\big(m_{s\in Q^0_{J'(Q)}}\|\Delta^\psi_{Q,s}h_x\|_2\big)^2. \end{split} \end{equation*} \end{proof} We are going to estimate $U3_m(x)$ with techniques very similar to the ones used in Subsections \ref{5 ss5321} and \ref{5 ss5321b}. First of all, let $b_*>0$ be a small constant which will be fixed later on, and consider the family ${\mathcal P}:=\{Q\in{\mathcal D}^{n,0}_x:\,\ell(Q)\leq \ell(D)\}$.
Let ${\operatorname{Stp}}$ denote the set of cubes $Q\in {\mathcal P}$ such that there exists $R_Q\in{\mathcal D}$ with $\ell(R_Q)=\ell(Q)$, $10R_Q\cap(p^x)^{-1}({\operatorname{supp}}\psi_Q)\neq\emptyset$, and \begin{equation}\label{reduced lema1} \sum_{R\in{\mathcal D}:\,R_Q\subset R,\,\ell(R)\leq\ell(D)}\alpha_{\mu}(10R)\geq b_*\quad\text{but}\quad\sum_{R\in{\mathcal D}:\,P(R_Q)\subset R,\,\ell(R)\leq\ell(D)}\alpha_{\mu}(10R)<b_*. \end{equation} Observe that if $Q$ and $Q'$ are different and belong to ${\operatorname{Stp}}$, then $Q\cap Q'=\emptyset$. Notice also that $D\not\in{\operatorname{Stp}}$ because we assumed $\alpha_\mu(10D)<C_*$. Finally, denote by ${\operatorname{Tr}}$ the set of cubes $Q\in{\mathcal P}\setminus{\operatorname{Stp}}$ such that $R\not\in{\operatorname{Stp}}$ for all $R\in{\mathcal P}$ with $R\supset Q$. Then ${\mathcal P}={\operatorname{Tr}}\cup\bigcup_{Q\in{\operatorname{Stp}}}\{R\in{\mathcal P}:\,R\subset Q\}$. By taking $C_*$ small enough we can assume that, if $R\in J_1\cap{\mathcal P}$ and $R\subset Q$ for some $Q\in{\operatorname{Stp}}$, then $Q\in J_1$. So we write \begin{equation*} \begin{split} \sum_{Q\in J_1}m_{s\in Q^0_{J(Q)}}(&\Delta^\psi_{Q,s}h_x)\\ &=\sum_{Q\in J_1\cap{\operatorname{Tr}}}m_{s\in Q^0_{J(Q)}}(\Delta^\psi_{Q,s}h_x) +\sum_{Q\in J_1\cap{\operatorname{Stp}}}\,\sum_{R\in J_1\cap{\mathcal P}:\,R\subset Q}m_{s\in Q^0_{J(Q)}}(\Delta^\psi_{R,s}h_x) \end{split} \end{equation*} Set $\widetilde\Delta^\psi_{Q,s}h_x:=\sum_{R\in{\mathcal P}:\,R\subset Q}\Delta^\psi_{R,s}h_x$. Then, using the definition of $J_1$ and $J$, we can split \begin{equation}\label{ernwvijf} \begin{split} U3_m(x)&=(m_D^\mu f)\int_{A_m(x)}K(x-y)\sum_{Q\in J_1\cap{\operatorname{Tr}}}m_{s\in Q^0_{J(Q)}}(\Delta^\psi_{Q,s}h_x(y))\,d{\mathcal H}^n_{L_D^x}(y)\\ &\quad+(m_D^\mu f)\int_{A_m(x)}K(x-y)\sum_{Q\in J_1\cap{\operatorname{Stp}}}m_{s\in Q^0_{J(Q)}}(\widetilde\Delta^\psi_{Q,s}h_x(y))\,d{\mathcal H}^n_{L_D^x}(y)\\ &=:U3_m^{a}(x)+U3_m^{b}(x). 
\end{split} \end{equation} \begin{claim}\label{5teorema L2 no suau 19} Under the notation above, we have \begin{equation*} \sum_{m\in{\mathcal S}_D(x)}|U3^{a}_m(x)|^2\lesssim|m_D^\mu f|^2\sum_{Q\in J_1\cap{\operatorname{Tr}}}\bigg(\frac{\ell(Q)}{\ell(D)}\bigg)^{1/2}\|m_{s\in Q^0_{J(Q)}}(\Delta^\psi_{Q,s}h_x)\|_2^2\,\ell(D)^{-n}. \end{equation*} For simplicity of notation, we have set $\|\cdot\|_p:=\|\cdot\|_{L^p({\mathcal H}^n_{L_D^x})}$. \end{claim} \begin{proof}[{\bf{\em Proof of }Claim \ref{5teorema L2 no suau 19}}] Notice that ${\mathcal H}^n_{L_D^x}(A_m(x)) \lesssim(\epsilon_m-\epsilon_{m+1})\ell(D)^{n-1}$. Moreover, the function $m_{s\in Q^0_{J(Q)}}(\Delta^\psi_{Q,s}h_x)$ is supported in $CQ$ and has vanishing integral, because the same holds for each $\Delta^\psi_{Q,s}h_x$ with $s\in Q^0_{J(Q)}$. Hence, the sum $\sum_{m\in{\mathcal S}_D(x)}|U3^{a}_m(x)|^2$ can be estimated using arguments very similar to the ones in Subsection \ref{5 ss5321} (see (\ref{5 var eq7})), and the analogues of Lemma \ref{lema pendent petita3} and Claims \ref{5 claim1} and \ref{5 claim2} for ${\mathcal H}^n_{L_D^x}$ follow easily. One obtains the expected estimate. \end{proof} \begin{claim}\label{5teorema L2 no suau 19b} Under the notation above, we have \begin{equation*} \sum_{m\in{\mathcal S}_D(x)}|U3^{b}_m(x)|^2\lesssim|m_D^\mu f|^2\sum_{Q\in J_1\cap{\operatorname{Stp}}}\bigg(\frac{\ell(Q)}{\ell(D)}\bigg)^{1/2}\frac{\|m_{s\in Q^0_{J(Q)}}(\widetilde\Delta^\psi_{Q,s}h_x)\|_1^2}{\ell(D)^{n}\ell(Q)^{n}}. 
\end{equation*} \end{claim} \begin{proof}[{\bf{\em Proof of }Claim \ref{5teorema L2 no suau 19b}}] Since $m_{s\in Q^0_{J(Q)}}(\widetilde\Delta^\psi_{Q,s}h_x)$ has vanishing integral and it is supported in a neighbourhood of $Q$, the term $U3^{b}_m(x)$ can be estimated in the same manner (but now we do not use the estimate $\|m_{s\in Q^0_{J(Q)}}(\widetilde\Delta^\psi_{Q,s}h_x)\|_1^2\lesssim\ell(Q)^{n}\|m_{s\in Q^0_{J(Q)}}(\widetilde\Delta^\psi_{Q,s}h_x)\|_2^2$), and one obtains the expected estimate (compare with (\ref{5 var eq9})). \end{proof} Recall that we have fixed $x\in D\in S\in{\operatorname{Trs}}$, and we denote by $Q_S$ the maximal $\mu$-cube in $S$ from the corona decomposition, so $D\subset Q_S$. The following lemma, whose proof is given in Subsection \ref{oooo}, yields the suitable estimates for $m_{s\in Q^0_{J'(Q)}}(\Delta^\psi_{Q,s}h_x)$ and $m_{s\in Q^0_{J'(Q)}}(\widetilde\Delta^\psi_{Q,s}h_x)$. \begin{lema}\label{5lema L2} Assume that $\alpha_\mu(D)< C_*$, for some constant $C_*>0$ small enough. 
Given $Q\in{\mathcal D}_x^{n,0}$, there exist constants $C_1,C_2>1$, depending on $C_*$ and $b_*$ (see (\ref{reduced lema1})), such that: \begin{enumerate} \item[$(a)$] if $Q\in J_2$ and $\ell(Q)>\ell(Q_S)$, then $m_{s\in Q^0_{J'(Q)}}(\|\Delta^\psi_{Q,s}h_x\|_2)\lesssim\ell(Q_S)^n\ell(Q)^{-n/2}$, \item[$(b)$] if $Q\in J_2$ and $\ell(Q)\leq\ell(Q_S)$, then $$m_{s\in Q^0_{J'(Q)}}\big(\|\Delta^\psi_{Q,s}h_x\|_2\big)\lesssim\bigg(\sum_{R\in{\mathcal D}:\,D\subset R\subset B(z_Q,C_1\ell(Q))}\alpha_{\mu}(C_1R)+\frac{{\operatorname{dist}}(x,L_D)}{\ell(D)}\bigg)\ell(Q)^{n/2},$$ \item[$(c)$] if $Q\in J_1\cap{\operatorname{Tr}}$, then there exists $Q_0\equiv Q_0(x,Q)\in{\mathcal D}$ depending on $x$ and $Q\in{\mathcal D}_x^{n,0}$ such that $Q_0\subset C_2D$, $\ell(Q_0)\approx\ell(Q)$, $Q_0\cap(p^x)^{-1}({\operatorname{supp}}\psi_Q)\neq\emptyset$ and $$\|m_{s\in Q^0_{J(Q)}}(\Delta^\psi_{Q,s}h_x)\|_2\lesssim\bigg(\sum_{R\in{\mathcal D}:\,Q_0\subset R\subset C_2D}\alpha_{\mu}(C_2R)+\frac{{\operatorname{dist}}(x,L_D)}{\ell(D)}\bigg)\ell(Q)^{n/2},\quad\text{and}$$ \item[$(d)$] if $Q\in J_1\cap{\operatorname{Stp}}$, then $\|m_{s\in Q^0_{J(Q)}}(\widetilde\Delta^\psi_{Q,s}h_x)\|_1\lesssim\ell(Q)^n.$ \end{enumerate} \end{lema} We are ready to put all the estimates together to bound the first term on the right hand side of (\ref{5 var quasiort eq1}). From (\ref{5teorema L2 no suau 6'}), (\ref{5teorema L2 no suau 6}), (\ref{5teorema L2 no suau 16}), and (\ref{ernwvijf}) we have \begin{equation}\label{5estimate ss8.2} \begin{split} \sum_{m\in{\mathcal S}_D(x)}&\big| (K\chi_{\epsilon_{m+1}}^{\epsilon_m}*((m_D^\mu f)\chi_{\widetilde D}\mu))(x)\big|^2 \lesssim|m_D^\mu f|^2\alpha_\mu(10D)^2\\ &\quad+\sum_{m\in{\mathcal S}_D(x)}(|U1_m(x)|^2+|U3^a_m(x)|^2+|U3^b_m(x)|^2+|U4_m(x)|^2). \end{split} \end{equation} Let us deal with $U1_m(x)$ (the term $|m_D^\mu f|^2\alpha_\mu(10D)^2$ above is handled in the same manner).
If $L_D^1$ and $L_D^2$ denote minimizing $n$-planes for $\beta_{1,\mu}(D)$ and $\beta_{2,\mu}(D)$, respectively, one can show that ${\operatorname{dist}}_{\mathcal H}(L_D\cap B_D,L_D^1\cap B_D)\lesssim\alpha_\mu(D)\ell(D)$ and ${\operatorname{dist}}_{\mathcal H}(L_D^1\cap B_D,L_D^2\cap B_D)\lesssim\beta_{2,\mu}(D)\ell(D)$, so we have ${\operatorname{dist}}(x,L_D)\lesssim{\operatorname{dist}}(x,L^2_D)+\beta_{2,\mu}(D)\ell(D)+\alpha_\mu(D)\ell(D)$ for $x\in D$. Then, by Claim \ref{5teorema L2 no suau 18} and Carleson's embedding theorem, \begin{equation}\label{est u1} \begin{split} \sum_{S\in{\operatorname{Trs}}}\sum_{D\in S}&\int_D\sum_{m\in{\mathcal S}_D(x)}|U1_m|^2\,d\mu\\ &\lesssim\sum_{D\in{\mathcal D}}\int_D |m_D^\mu f|^2\bigg(\beta_{1,\mu}(D)^2+\alpha_\mu(D)^2+\bigg(\frac{{\operatorname{dist}}(x,L_D)}{\ell(D)}\bigg)^2\bigg)\,d\mu(x)\\ &\lesssim\sum_{D\in{\mathcal D}} |m_D^\mu f|^2\ell(D)^n\big(\beta_{1,\mu}(D)^2+\alpha_\mu(D)^2+\beta_{2,\mu}(D)^2\big)\lesssim\|f\|_{L^2(\mu)}^2. \end{split} \end{equation} For the case of $U3^a_m (x)$, by Claim \ref{5teorema L2 no suau 19} and Lemma \ref{5lema L2}$(c)$ applied to the cubes in $J_1\cap{\operatorname{Tr}}$, we have \begin{equation*} \begin{split} \sum_{S\in{\operatorname{Trs}}}\sum_{D\in S}&\int_D\sum_{m\in{\mathcal S}_D(x)}|U3_m^a|^2\,d\mu\\ &\lesssim\sum_{D\in{\mathcal D}}|m_D^\mu f|^2\int_D\sum_{Q\in J_1\cap{\operatorname{Tr}}}\bigg(\frac{\ell(Q)}{\ell(D)}\bigg)^{1/2}\frac{\|m_{s\in Q^0_{J(Q)}}(\Delta^\psi_{Q,s}h_x)\|^2_2}{\ell(D)^{n}}\,d\mu(x)\\ &\lesssim\sum_{D\in{\mathcal D}}|m_D^\mu f|^2\int_D\sum_{Q\in J_1\cap{\operatorname{Tr}}}\bigg(\frac{\ell(Q)}{\ell(D)}\bigg)^{n+1/2}\bigg(\sum_{\begin{subarray}{c}R\in{\mathcal D}:\\Q_0(x,Q)\subset R\subset C_2D\end{subarray}}\alpha_{\mu}(C_2R)\bigg)^2\,d\mu(x)\\ &\quad+\sum_{D\in{\mathcal D}}|m_D^\mu f|^2\int_D\sum_{Q\in
J_1\cap{\operatorname{Tr}}}\bigg(\frac{\ell(Q)}{\ell(D)}\bigg)^{n+1/2}\bigg(\frac{{\operatorname{dist}}(x,L_D)}{\ell(D)}\bigg)^2\,d\mu(x)=:S_1+S_2. \end{split} \end{equation*} Recall that $J_1\subset\{Q\in{\mathcal D}_x^{n,0}:\,Q\subset B(z_D,C_a\ell(D))\}$. Then $\sum_{Q\in J_1}(\ell(Q)/\ell(D))^{n+1/2}\lesssim1$, and since ${\operatorname{dist}}(x,L_D)\lesssim{\operatorname{dist}}(x,L^2_D)+\beta_{2,\mu}(D)\ell(D)+\alpha_\mu(D)\ell(D)$ for $x\in D$, then $S_2\lesssim\sum_{D\in{\mathcal D}}|m_D^\mu f|^2(\beta_{2,\mu}(D)^2+\alpha_\mu(D)^2)\ell(D)^n,$ and hence $S_2\leq C\|f\|_{L^2(\mu)}^2$, by Carleson's embedding theorem. For $S_1$, since $\ell(Q)\approx\ell(Q_0(x,Q))$ (recall the definition of $Q_0\equiv Q_0(x,Q)$ in Lemma \ref{5lema L2}$(c)$), $Q_0(x,Q)\subset C_2D$, and every $Q_0\in{\mathcal D}$ intersects $(p^x)^{-1}({\operatorname{supp}}\psi_Q)$ for finitely many cubes $Q\in{\mathcal D}_x^{n,0}$ (with a bound for the number of such cubes $Q$ independent of $x$ and $Q_0$), we have \begin{equation*} \begin{split} \sum_{Q\in J_1\cap{\operatorname{Tr}}}&\bigg(\frac{\ell(Q)}{\ell(D)}\bigg)^{n+1/2} \bigg(\sum_{R\in{\mathcal D}:\,Q_0(x,Q)\subset R\subset C_2D}\alpha_{\mu}(C_2R)\bigg)^2\\ &=\sum_{P\in{\mathcal D}\,:\,P\subset C_2D}\,\sum_{\begin{subarray}{c}Q\in{\mathcal D}_x^{n,0}:\,Q\subset B(z_D,C_a\ell(D)),\\Q_0(x,Q)=P\end{subarray}}\bigg(\frac{\ell(Q)}{\ell(D)}\bigg)^{n+1/2} \bigg(\sum_{R\in{\mathcal D}:\,P\subset R\subset C_2D}\alpha_{\mu}(C_2R)\bigg)^2\\ &\lesssim\sum_{P\in{\mathcal D}\,:\,P\subset C_2D}\bigg(\frac{\ell(P)}{\ell(D)}\bigg)^{n+1/2} \bigg(\sum_{R\in{\mathcal D}:\,P\subset R\subset C_2D}\alpha_{\mu}(C_2R)\bigg)^2. 
\end{split} \end{equation*} By Cauchy-Schwarz inequality, \begin{equation}\label{fnjid} \begin{split} \sum_{P\in{\mathcal D}\,:\,P\subset C_2D}&\bigg(\frac{\ell(P)}{\ell(D)}\bigg)^{n+1/2} \bigg(\sum_{R\in{\mathcal D}:\,P\subset R\subset C_2D}\alpha_{\mu}(C_2R)\bigg)^2\\ &\lesssim\sum_{P\in{\mathcal D}\,:\,P\subset C_2D}\bigg(\frac{\ell(P)}{\ell(D)}\bigg)^{n+1/2} \log_2\bigg(\frac{\ell(D)}{\ell(P)}\bigg)\sum_{R\in{\mathcal D}:\,P\subset R\subset C_2D}\alpha_{\mu}(C_2R)^2\\ &\lesssim\sum_{R\in{\mathcal D}:\,R\subset C_2D}\alpha_{\mu}(C_2R)^2\sum_{P\in{\mathcal D}\,:\,P\subset R}\bigg(\frac{\ell(P)}{\ell(D)}\bigg)^{n+1/4}\\ &\lesssim\sum_{R\in{\mathcal D}:\,R\subset C_2D}\alpha_{\mu}(C_2R)^2\bigg(\frac{\ell(R)}{\ell(D)}\bigg)^{n+1/4}=:\lambda_1(D)^2. \end{split} \end{equation} By standard arguments one can easily show that these $\lambda_1$ coefficients satisfy a Carleson packing condition, so by (\ref{fnjid}) and Carleson's embedding theorem we obtain $S_1\lesssim\sum_{D\in{\mathcal D}}|m_D^\mu f|^2\ell(D)^n\lambda_1(D)^2\lesssim\|f\|_{L^2(\mu)}^2,$ which combined with $S_2\lesssim\|f\|_{L^2(\mu)}^2$ yields \begin{equation}\label{est u3a} \sum_{S\in{\operatorname{Trs}}}\sum_{D\in S}\int_D\sum_{m\in{\mathcal S}_D(x)}|U3^a_m|^2\,d\mu\lesssim\|f\|_{L^2(\mu)}^2. \end{equation} Let us deal now with $U3^b_m$. 
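Parenthetically, let us sketch the Carleson packing condition for the $\lambda_1$ coefficients used in the bound for $S_1$. Since the $\alpha_\mu$'s satisfy a Carleson packing condition, which with the normalization used throughout we may write as $\sum_{R\in{\mathcal D}:\,R\subset Q_0}\alpha_{\mu}(C_2R)^2\ell(R)^n\lesssim\ell(Q_0)^n$ for all $Q_0\in{\mathcal D}$, interchanging the order of summation in (\ref{fnjid}) yields \begin{equation*} \sum_{D\in{\mathcal D}:\,D\subset Q_0}\lambda_1(D)^2\ell(D)^n =\sum_{D\in{\mathcal D}:\,D\subset Q_0}\,\sum_{R\in{\mathcal D}:\,R\subset C_2D}\alpha_{\mu}(C_2R)^2\,\ell(R)^{n+1/4}\ell(D)^{-1/4} \lesssim\sum_{R\in{\mathcal D}:\,R\subset CQ_0}\alpha_{\mu}(C_2R)^2\ell(R)^n\lesssim\ell(Q_0)^n, \end{equation*} where we used that, for fixed $R$, at each scale there are boundedly many $\mu$-cubes $D$ with $R\subset C_2D$, so that $\sum_{D\in{\mathcal D}:\,R\subset C_2D}\ell(D)^{-1/4}\lesssim\ell(R)^{-1/4}$.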
By Claim \ref{5teorema L2 no suau 19b} and Lemma \ref{5lema L2}$(d)$ applied to the cubes in $J_1\cap{\operatorname{Stp}}$, we have \begin{equation*} \begin{split} \sum_{S\in{\operatorname{Trs}}}\sum_{D\in S}&\int_D\sum_{m\in{\mathcal S}_D(x)}|U3_m^b|^2\,d\mu\\ &\lesssim\sum_{D\in{\mathcal D}}|m_D^\mu f|^2\int_D\sum_{Q\in J_1\cap{\operatorname{Stp}}}\bigg(\frac{\ell(Q)}{\ell(D)}\bigg)^{1/2}\frac{\|m_{s\in Q^0_{J(Q)}}(\widetilde\Delta^\psi_{Q,s}h_x)\|^2_1}{\ell(D)^{n}\ell(Q)^{n}}\,d\mu(x)\\ &\lesssim\sum_{D\in{\mathcal D}}|m_D^\mu f|^2\int_D\sum_{Q\in J_1\cap{\operatorname{Stp}}}\bigg(\frac{\ell(Q)}{\ell(D)}\bigg)^{n+1/2}\,d\mu.\\ \end{split} \end{equation*} Given $D\in{\mathcal D}$, consider the family $\Lambda_D:=\{R\in{\mathcal D}:\, R=R_Q \text{ for some }x\in D \text{ and some }Q\in J_1\cap{\operatorname{Stp}}\}$ (see the definition of $R_Q$ in (\ref{reduced lema1})). Observe that every $R\in{\mathcal D}$ intersects $(p^x)^{-1}(Q\cap L_D^x)$ for finitely many cubes $Q\in{\mathcal D}_x^{n,0}$ such that $\ell(Q)=\ell(R)$. Thus, similarly to what we did for $Q\in J_1\cap{\operatorname{Tr}}$ in the case of $U3^a_m$, we have \begin{equation*} \begin{split} \sum_{D\in{\mathcal D}}|m_D^\mu f|^2\int_D\sum_{Q\in J_1\cap{\operatorname{Stp}}}\bigg(\frac{\ell(Q)}{\ell(D)}\bigg)^{n+1/2}\,d\mu \lesssim\sum_{D\in{\mathcal D}}|m_D^\mu f|^2\int_D\sum_{R\in\Lambda_D}\bigg(\frac{\ell(R)}{\ell(D)}\bigg)^{n+1/2}\,d\mu\\ \lesssim\sum_{D\in{\mathcal D}}|m_D^\mu f|^2\sum_{R\in\Lambda_D}\bigg(\frac{\ell(R)}{\ell(D)}\bigg)^{n+1/2}\mu(D)=\sum_{D\in{\mathcal D}}|m_D^\mu f|^2\lambda_2(D)^2\mu(D), \end{split} \end{equation*} where we have set $\lambda_2(D)^2:=\sum_{R\in\Lambda_D}(\ell(R)/\ell(D))^{n+1/2}$. Since the $\alpha_\mu$'s satisfy a Carleson packing condition, it is not hard to show that the same holds for the $\lambda_2$'s.
Indeed, since for any $R\in\Lambda_D$ we have $\sum_{R'\in{\mathcal D}:\,R\subset R',\,\ell(R')\leq\ell(D)}\alpha_{\mu}(10R')\geq b_*$ by (\ref{reduced lema1}), $$\lambda_2(D)^2\leq b_*^{-2}\sum_{R\in\Lambda_D}\bigg(\frac{\ell(R)}{\ell(D)}\bigg)^{n+1/2}\bigg(\sum_{R'\in{\mathcal D}:\,R\subset R',\,\ell(R')\leq\ell(D)}\alpha_{\mu}(10R')\bigg)^2,$$ and we can proceed as in (\ref{fnjid}). Hence, putting these estimates together and using Carleson's embedding theorem for the $\lambda_2$'s, we obtain \begin{equation}\label{est u3b} \sum_{S\in{\operatorname{Trs}}}\sum_{D\in S}\int_D\sum_{m\in{\mathcal S}_D(x)}|U3_m^b|^2\,d\mu \lesssim\|f\|_{L^2(\mu)}^2. \end{equation} We deal now with $U4_m(x)$. By Claim \ref{5teorema L2 no suau 20} and Lemma \ref{5lema L2}$(a)$ and $(b)$ applied to the cubes in $J_2$, \begin{equation}\label{qwqe} \begin{split} &\sum_{S\in{\operatorname{Trs}}}\sum_{D\in S}\int_D\sum_{m\in{\mathcal S}_D(x)}|U4_m|^2\,d\mu\\ &\quad\lesssim\sum_{S\in{\operatorname{Trs}}}\sum_{D\in S}|m_D^\mu f|^2\int_D\sum_{Q\in J_2}\bigg(\frac{\ell(D)}{\ell(Q)}\bigg)^{1/2}\,\frac{m_{s\in Q^0_{J'(Q)}}\big(\|\Delta^\psi_{Q,s}h_x\|_2\big)^2}{\ell(Q)^{n}}\,d\mu\\ &\quad\lesssim\sum_{S\in{\operatorname{Trs}}}\sum_{D\in S}|m_D^\mu f|^2\int_D\sum_{Q\in J_2:\, \ell(Q)\leq\ell(Q_S)}\bigg(\frac{\ell(D)}{\ell(Q)}\bigg)^{1/2}\\ &\quad\qquad\qquad\qquad\qquad\quad\quad\bigg[\bigg(\sum_{R\in{\mathcal D}:\,D\subset R\subset B(z_Q,C_1\ell(Q))}\alpha_{\mu}(C_1R)\bigg)^2+\bigg(\frac{{\operatorname{dist}}(x,L_D)}{\ell(D)}\bigg)^2\bigg]\,d\mu\\ &\quad\quad+\sum_{S\in{\operatorname{Trs}}}\sum_{D\in S}|m_D^\mu f|^2\int_D\sum_{Q\in J_2:\,\ell(Q)>\ell(Q_S)}\bigg(\frac{\ell(D)}{\ell(Q)}\bigg)^{1/2}\frac{\ell(Q_S)^{2n}}{\ell(Q)^{2n}}\,d\mu=:S_3+S_4.
\end{split} \end{equation} Regarding $S_3$, since ${\operatorname{dist}}(x,L_D)\lesssim{\operatorname{dist}}(x,L^2_D)+\beta_{2,\mu}(D)\ell(D)+\alpha_\mu(D)\ell(D)$ for $x\in D$ and $\sum_{Q\in J_2}(\ell(D)/\ell(Q))^{1/2}\lesssim1$, the second term in the definition of $S_3$ is bounded by $\sum_{D\in{\mathcal D}}|m_D^\mu f|^2(\beta_{2,\mu}(D)^2+\alpha_\mu(D)^2)\ell(D)^n,$ and hence by $C\|f\|_{L^2(\mu)}^2$, by Carleson's embedding theorem. For the first term in $S_3$, by Cauchy-Schwarz inequality, \begin{equation*} \begin{split} &\sum_{S\in{\operatorname{Trs}}}\sum_{D\in S}|m_D^\mu f|^2\int_D\sum_{Q\in J_2:\, \ell(Q)\leq\ell(Q_S)}\bigg(\frac{\ell(D)}{\ell(Q)}\bigg)^{1/2}\bigg(\sum_{\begin{subarray}{c}R\in{\mathcal D}:\\D\subset R\subset B(z_Q,C_1\ell(Q))\end{subarray}}\alpha_{\mu}(C_1R)\bigg)^2\,d\mu\\ &\quad\lesssim\sum_{S\in{\operatorname{Trs}}}\sum_{D\in S}|m_D^\mu f|^2\int_D\sum_{\begin{subarray}{c}Q\in J_2:\\ \ell(Q)\leq\ell(Q_S)\end{subarray}}\bigg(\frac{\ell(D)}{\ell(Q)}\bigg)^{1/2}\log_2\bigg(\frac{\ell(Q)}{\ell(D)}\bigg)\!\!\sum_{\begin{subarray}{c}R\in {\mathcal D}:\\D\subset R\subset B(z_Q,C_1\ell(Q))\end{subarray}}\alpha_{\mu}(C_1R)^2\,d\mu\\ &\quad\lesssim\sum_{D\in{\mathcal D}}|m_D^\mu f|^2\int_D\sum_{\begin{subarray}{c}R\in{\mathcal D}:\\D\subset R\end{subarray}}\alpha_{\mu}(C_1R)^2\sum_{\begin{subarray}{c}Q\in{\mathcal D}_x^{n,0}:\\ R\subset B(z_Q,C_1\ell(Q))\end{subarray}}\bigg(\frac{\ell(D)}{\ell(Q)}\bigg)^{1/4}\,d\mu. \end{split} \end{equation*} Notice that $\sum_{Q\in{\mathcal D}_x^{n,0}:\,R\subset B(z_Q,C_1\ell(Q))}\big(\ell(D)/\ell(Q)\big)^{1/4}\lesssim\big(\ell(D)/\ell(R)\big)^{1/4}$, thus the right hand side of the preceding inequality is bounded above by \begin{equation}\label{bncdenacl} \sum_{D\in{\mathcal D}}|m_D^\mu f|^2\ell(D)^n\sum_{R\in {\mathcal D}:\,D\subset R}\alpha_{\mu}(C_1R)^2\bigg(\frac{\ell(D)}{\ell(R)}\bigg)^{1/4}=:\sum_{D\in{\mathcal D}}|m_D^\mu f|^2\ell(D)^n\lambda_3(D)^2.
\end{equation} By standard arguments one can show that the $\lambda_3$'s satisfy a Carleson packing condition, so by Carleson's embedding theorem again, the last term in (\ref{bncdenacl}) is bounded by $C\|f\|_{L^2(\mu)}^2$. Thus we obtain $S_3\lesssim\|f\|_{L^2(\mu)}^2$. The estimate of $S_4$ from (\ref{qwqe}) is easier: \begin{equation*} \begin{split} S_4\lesssim\sum_{S\in{\operatorname{Trs}}}\sum_{D\in S}|m_D^\mu f|^2\int_D\sum_{\begin{subarray}{c}Q\in{\mathcal D}_x^{n,0}:\,\ell(Q)>\ell(Q_S),\\D\subset B(z_Q,C_1\ell(Q))\end{subarray}}\frac{\ell(D)^{1/2}\ell(Q_S)^{2n}}{\ell(Q)^{2n+1/2}}\,d\mu(x). \end{split} \end{equation*} As before, $\sum_{Q\in{\mathcal D}_x^{n,0}:\,\ell(Q)>\ell(Q_S),\, D\subset B(z_Q,C_1\ell(Q))}\ell(Q)^{-2n-1/2}\lesssim\ell(Q_S)^{-2n-1/2}$, thus \begin{equation*} \begin{split} S_4&\lesssim\sum_{S\in{\operatorname{Trs}}}\sum_{D\in S}|m_D^\mu f|^2\ell(D)^{n}\bigg(\frac{\ell(D)}{\ell(Q_S)}\bigg)^{1/2}\lesssim\sum_{D\in{\mathcal D}}|m_D^\mu f|^2\ell(D)^{n}\sum_{S\in{\operatorname{Trs}}:\,S\ni D}\bigg(\frac{\ell(D)}{\ell(Q_S)}\bigg)^{1/2}\\ &=:\sum_{D\in{\mathcal D}}|m_D^\mu f|^2\ell(D)^{n}\lambda_4(D)^2. \end{split} \end{equation*} Similarly to the case of the $\lambda_3$ coefficients, one can show that the $\lambda_4$'s also satisfy a Carleson packing condition, thus $S_4\lesssim\|f\|_{L^2(\mu)}^2$ by Carleson's embedding theorem. Actually, if one defines $\widehat\alpha_\mu(Q)=1$ if $Q=Q_S$ for some $S\in{\operatorname{Trs}}$ and $\widehat\alpha_\mu(Q)=0$ otherwise, using the packing condition for the $\mu$-cubes $Q_S$ with $S\in{\operatorname{Trs}}$, one can easily verify that the $\widehat\alpha_\mu$'s satisfy a Carleson packing condition. 
Then, $$\lambda_4(D)^2=\sum_{S\in{\operatorname{Trs}}:\,D\subset Q_S}\bigg(\frac{\ell(D)}{\ell(Q_S)}\bigg)^{1/2}\widehat\alpha_\mu(Q_S)^2=\sum_{Q\in{\mathcal D}:\,D\subset Q}\bigg(\frac{\ell(D)}{\ell(Q)}\bigg)^{1/2}\widehat\alpha_\mu(Q)^2,$$ and we can argue as in the case of the $\lambda_3$'s in (\ref{bncdenacl}). By the estimates of $S_3$ and $S_4$, we obtain \begin{equation}\label{est u4} \sum_{S\in{\operatorname{Trs}}}\sum_{D\in S}\int_D\sum_{m\in{\mathcal S}_D(x)}|U4_m|^2\,d\mu\lesssim\|f\|_{L^2(\mu)}^2. \end{equation} Finally, plugging (\ref{est u1}), (\ref{est u3a}), (\ref{est u3b}), and (\ref{est u4}) in (\ref{5estimate ss8.2}), and combining the result with (\ref{5 var quasiort eq1}) and (\ref{5 var quasiort eq2}), we conclude that \begin{equation*} \begin{split} \sum_{S\in{\operatorname{Trs}}}\sum_{D\in S}\int_D\sum_{m\in{\mathcal S}_D(x)}\bigg|\sum_{R\in V(D)} (K\chi_{\epsilon_{m+1}}^{\epsilon_m}*((m_R^\mu f)\chi_R\mu))(x)\bigg|^2\,d\mu(x)\lesssim\|f\|_{L^2(\mu)}^2, \end{split} \end{equation*} and Lemma \ref{5 var eq8a} is finally proved, except for Lemma \ref{5lema L2}. \end{proof} \subsubsection{{\bf Proof of Lemma \ref{5lema L2}}}\label{oooo} \begin{proof}[\bf{\em Proof of} Lemma \ref{5lema L2}$(a)$] By Definition \ref{definicio wavelet}$(e)$, for any $s\in Q^0_{J'(Q)}$ we have \begin{equation*} \begin{split} \|\Delta^\psi_{Q,s}h_x\|_\infty&\lesssim|\langle h_x,\psi_{s+Q}\rangle|\ell(Q)^{-n/2}\lesssim\ell(Q)^{-n}\int h_x\,d{\mathcal H}^n_{L_D^x}=\ell(Q)^{-n}\int\,d\nu_x\\ &=\ell(Q)^{-n}\int\,d(p^x_\sharp(\chi_{40Q_S}\mu))=\ell(Q)^{-n}\int_{40Q_S}\,d\mu \lesssim\frac{\ell(Q_S)^n}{\ell(Q)^n}. \end{split} \end{equation*} Hence, $\|\Delta^\psi_{Q,s}h_x\|_2\leq\|\Delta^\psi_{Q,s}h_x\|_\infty{\mathcal L}^n({{\operatorname{supp}}\psi_{s+Q}})^{1/2}\lesssim\ell(Q_S)^n\ell(Q)^{-n/2}$ for all $s\in Q^0_{J'(Q)}$, and Lemma \ref{5lema L2}$(a)$ follows by taking the average over $s\in Q^0_{J'(Q)}$. 
\end{proof} \begin{proof}[\bf{\em Proof of} Lemma \ref{5lema L2}$(b)$] Since $D\subset B(z_Q,C_a\ell(Q))$, $D\in S$, and $\ell(D)\lesssim\ell(Q)\leq\ell(Q_S)$, by taking $C_{cor}$ big enough (see property $(f)$ in Subsection \ref{5ss corona decomposition}), we can assume that $\mu$ is well approximated by $\Gamma_S$ in a neighborhood of $Q$. We are going to show that, for each $s\in Q^0_{J'(Q)}$, \begin{equation}\label{kikiki} \|\Delta^\psi_{Q,s}h_x\|_2\lesssim\bigg(\sum_{R\in{\mathcal D}:\,D\subset R\subset B(z_Q,C_1\ell(Q))}\alpha_{\mu}(C_1R)+\frac{{\operatorname{dist}}(x,L_D)}{\ell(D)}\bigg)\ell(Q)^{n/2}, \end{equation} and Lemma \ref{5lema L2}$(b)$ will follow by taking the average over $s\in Q^0_{J'(Q)}$. Fix $Q\in J_2$, so $D\subset B(z_Q,C_a\ell(Q))$ with $\ell(Q)\leq\ell(Q_S)$, and $s\in Q^0_{J'(Q)}$. Take $Q'\in{\mathcal D}$ such that $\ell(Q)=\ell(Q')$ and $Q\subset B(z_{Q'},3\ell(Q))$. Recall that ${\operatorname{supp}}\psi_{s+Q}\subset CQ$ and $|\nabla\psi_{s+Q}|\lesssim\ell(Q)^{-n/2-1}$. Let $\phi_{s+Q'}$ be an extension of $\psi_{s+Q}$, i.e., let $\phi_{s+Q'}:{\mathbb R}^d\to{\mathbb R}$ be such that ${\operatorname{supp}}\phi_{s+Q'}\subset B_{Q'}\subset{\mathbb R}^d$, $|\nabla\phi_{s+Q'}|\lesssim\ell(Q')^{-n/2-1}$ and $\phi_{s+Q'}=\psi_{s+Q}$ in $L_D^x$. Let $L_{Q'}$ be a minimizing $n$-plane for $\alpha_{\mu}(C_1Q')$, where $C_1>1$ is some big constant to be fixed below, and let $L_{Q'}^x$ be the $n$-plane parallel to $L_{Q'}$ which contains $x$. Let $\sigma_{Q'}:=c_{Q'}{\mathcal H}^n_{L_{Q'}}$ be a minimizing measure for $\alpha_{\mu}(C_1Q')$ and define $\sigma^x_{Q'}:=c_{Q'}{\mathcal H}^n_{L_{Q'}^x}$. Finally, set $\sigma:=c_{Q'}{\mathcal H}^n_{L_D^x}$. Since $\psi_{s+Q}$ has vanishing integral in $L_D^x$, we also have $\int \phi_{s+Q'}\,d{\mathcal H}^n_{L_D^x}=0$. 
Hence, \begin{equation}\label{pipipi} \begin{split} \|\Delta_{Q,s}^\psi h_x\|_2&=\|\langle h_x,\psi_{s+Q}\rangle\psi_{s+Q}\|_2=|\langle h_x,\psi_{s+Q}\rangle| =\Big|\int_{L_D^x}\phi_{s+Q'}(y)\,d\nu_x(y)\Big|\\&=\Big|\int\phi_{s+Q'}(y)\,d(\nu_x-\sigma)(y)\Big| \lesssim\ell(Q)^{-n/2-1}{\operatorname{dist}}_{B_{Q'}}(\nu_x,\sigma). \end{split} \end{equation} We can assume that \begin{equation}\label{ririri} \sum_{R\in{\mathcal D}:\,D\subset R\subset B(z_Q,C_1\ell(Q))}\alpha_{\mu}(C_1R)\leq b_*, \end{equation} otherwise Lemma \ref{5lema L2}$(b)$ follows easily. By assuming (\ref{ririri}) one can show that the angle between $L_D^x$ and $L_{Q'}^x$ is small. By the triangle inequality, we have \begin{equation}\label{eq4} \begin{split} {\operatorname{dist}}_{B_{Q'}}(\nu_x,\sigma) \leq{\operatorname{dist}}_{B_{Q'}}(\nu_x,p^x_\sharp\sigma_{Q'}^x) +{\operatorname{dist}}_{B_{Q'}}(p^x_\sharp\sigma_{Q'}^x,\sigma). \end{split} \end{equation} To deal with the first term on the right hand side of (\ref{eq4}), let $h$ be a Lipschitz function such that ${\operatorname{supp}} h\subset B_{Q'}$ and ${\operatorname{Lip}}(h)\leq1$. Then, using that ${\operatorname{supp}}\mu$ is well approximated in $CQ'$ by a Lipschitz graph $\Gamma_S$ with small slope, the function $h\circ p^x$ restricted to ${\operatorname{supp}}\mu\cup L_{Q'}^x$ can be extended to a Lipschitz function supported in $B_{C_1Q'}$ (if $C_1$ is big enough) with ${\operatorname{Lip}}(h\circ p^x)$ bounded by a constant which only depends on $n$, $d$, and ${\operatorname{Lip}}(\Gamma_S)$. Therefore, \begin{equation}\label{eq7} \begin{split} \Big|&\int_{B_{Q'}} h\,d(\nu_x-p^x_\sharp\sigma_{Q'}^x)\Big|=\Big|\int_{B_{C_1{Q'}}}h\circ p^x\,d(\mu-\sigma_{Q'}^x)\Big| \lesssim{\operatorname{dist}}_{B_{C_1Q'}}(\mu,\sigma_{Q'}^x)\\ &\leq{\operatorname{dist}}_{B_{C_1Q'}}(\mu,\sigma_{Q'})+{\operatorname{dist}}_{B_{C_1Q'}}(\sigma_{Q'},\sigma_{Q'}^x) \lesssim\alpha_{\mu}(C_1Q')\ell(Q)^{n+1}+{\operatorname{dist}}(x,L_{Q'})\ell(Q)^n.
\end{split} \end{equation} Since $x\in D$ and $D\subset C_1Q'$ (if $C_1>C_b$), by \cite[Remark 5.3]{To} we have \begin{equation}\label{eq9} \begin{split} {\operatorname{dist}}(x,L_{Q'})\lesssim\sum_{R\in{\mathcal D}:\,D\subset R\subset C_1Q'}\alpha_{\mu}(R)\ell(R)+{\operatorname{dist}}(x,L_D). \end{split} \end{equation} Taking the supremum over all possible Lipschitz functions $h$ in (\ref{eq7}) and using that $\ell(D)\leq\ell(R)\lesssim\ell(Q)$ in the sum above, we get \begin{equation}\label{eq8} \begin{split} {\operatorname{dist}}_{B_{Q'}}(\nu_x,p^x_\sharp\sigma_{Q'}^x)\lesssim\sum_{R\in{\mathcal D}:\,D\subset R\subset C_1Q'}\alpha_{\mu}(C_1R)\ell(Q)^{n+1}+\frac{{\operatorname{dist}}(x,L_D)}{\ell(D)}\,\ell(Q)^{n+1}. \end{split} \end{equation} To estimate the second term on the right hand side of (\ref{eq4}), notice that $p^x_\sharp\sigma=\sigma$ because $p^x|_{L_D^x}=\operatorname{Id}$. Hence, as in (\ref{eq7}), \begin{equation*} \begin{split} {\operatorname{dist}}_{B_{Q'}}(p^x_\sharp\sigma_{Q'}^x,\sigma)&={\operatorname{dist}}_{B_{Q'}}(p^x_\sharp\sigma_{Q'}^x,p^x_\sharp\sigma) \lesssim{\operatorname{dist}}_{B_{C_1{Q'}}}(\sigma_{Q'}^x,\sigma)\\ &\leq{\operatorname{dist}}_{B_{C_1Q'}}(\sigma_{Q'}^x,\sigma_{Q'})+{\operatorname{dist}}_{B_{C_1Q'}}(\sigma_{Q'},\sigma)\\ &\lesssim{\operatorname{dist}}_{B_{C_1Q'}}({\mathcal H}^n_{L_{Q'}^x},{\mathcal H}^n_{L_{Q'}})+{\operatorname{dist}}_{B_{C_1Q'}}({\mathcal H}^n_{L_{Q'}},{\mathcal H}^n_{L_D}) +{\operatorname{dist}}_{B_{C_1Q'}}({\mathcal H}^n_{L_D},{\mathcal H}^n_{L_D^x})\\ &\lesssim{\operatorname{dist}}(x,L_{Q'})\ell(Q)^{n}+{\operatorname{dist}}_{B_{C_1Q'}}({\mathcal H}^n_{L_{Q'}},{\mathcal H}^n_{L_D})+{\operatorname{dist}}(x,L_D)\ell(Q)^{n}. 
\end{split} \end{equation*} The term ${\operatorname{dist}}_{B_{C_1Q'}}({\mathcal H}^n_{L_{Q'}},{\mathcal H}^n_{L_D})$ can be estimated using the intermediate $\mu$-cubes between $D$ and $C_1Q'$ (similarly to (\ref{eq8})), and we obtain $${\operatorname{dist}}_{B_{C_1Q}}({\mathcal H}^n_{L_Q},{\mathcal H}^n_{L_D})\lesssim\sum_{R\in{\mathcal D}:\,D\subset R\subset C_1Q}\alpha_{\mu}(C_1R)\ell(Q)^{n+1}.$$ Thus, by (\ref{eq9}) and since $\ell(D)\lesssim\ell(Q)$, \begin{equation*} \begin{split} {\operatorname{dist}}_{B_{Q'}}(p^x_\sharp\sigma_{Q'}^x,\sigma)\lesssim \sum_{R\in{\mathcal D}:\,D\subset R\subset C_1Q'}\alpha_{\mu}(C_1R)\ell(Q)^{n+1}+\frac{{\operatorname{dist}}(x,L_D)}{\ell(D)}\,\ell(Q)^{n+1}. \end{split} \end{equation*} Then, (\ref{kikiki}) follows by plugging this last inequality and (\ref{eq8}) in (\ref{eq4}) combined with (\ref{pipipi}), and recalling that $\ell(Q)\approx\ell(Q')$. Thus we are done with Lemma \ref{5lema L2}$(b)$. \end{proof} \begin{proof}[\bf{\em Proof of} Lemma \ref{5lema L2}$(c)$] Given $Q\in J_1\cap{\operatorname{Tr}}$, using (\ref{reduced lema1}) we have \begin{equation*} \sum_{R'\in{\mathcal D}:\,R\subset R',\,\ell(R')\leq\ell(D)}\alpha_{\mu}(10R')<b_* \end{equation*} for all $R\in{\mathcal D}$ with $\ell(R)=\ell(Q)$ and such that $R\cap(p^x)^{-1}({\operatorname{supp}}\psi_{s+Q})\neq\emptyset$ for all $s\in Q^0_{J(Q)}$. By assuming $b_*$ small enough, we are going to show that for some $Q_0(x,Q)\in{\mathcal D}$ as in the statement $(c)$ and all $s\in Q^0_{J(Q)}$ we have \begin{equation}\label{lilili} \|\Delta^\psi_{Q,s}h_x\|_2\lesssim\bigg(\sum_{R\in{\mathcal D}:\,Q_0\subset R\subset C_2D}\alpha_{\mu}(C_2R)+\frac{{\operatorname{dist}}(x,L_D)}{\ell(D)}\bigg)\ell(Q)^{n/2} \end{equation} As before, Lemma \ref{5lema L2}$(c)$ will follow by averaging over $s\in Q^0_{J(Q)}$, and noting that $\|m_{s\in Q^0_{J(Q)}}(\Delta^\psi_{Q,s}h_x)\|_2\leq m_{s\in Q^0_{J(Q)}}\|\Delta^\psi_{Q,s}h_x\|_2$ by Minkowski's integral inequality. 
Take $Q\in J_1\cap{\operatorname{Tr}}$. Let $C_2$ be some big constant which will be fixed later on, and let $Q_0\in{\mathcal D}$ be a minimal $\mu$-cube such that $C_2Q_0$ contains ${\operatorname{supp}}\mu\cap(p^x)^{-1}({\operatorname{supp}}\psi_{s+Q}\cap L_D^x)$ for all $s\in Q^0_{J(Q)}$. We can assume that $Q_0\subset C_2D$ if $C_2$ is big enough and, by (\ref{reduced lema1}), we may also suppose that $\sum_{R\in{\mathcal D}:\,Q_0\subset R\subset C_2D}\alpha_{\mu}(C_2R)$ is small enough. Hence, if $L_{Q_0}$ is a minimizing $n$-plane for $\beta_{\infty,\mu}(C_2Q_0)$, the angle between $L_{Q_0}$ and $L_D^x$ is also small enough, since it is bounded by $\sum_{R\in{\mathcal D}:\,Q_0\subset R\subset C_2D}\alpha_{\mu}(C_2R)$ (see \cite[Lemma 5.2]{To} for a related argument). It is then not hard to show that \begin{equation}\label{L2 claim} {\operatorname{diam}}(\Gamma\cap(p^x)^{-1}(Q\cap L_D^x))\lesssim\ell(Q). \end{equation} Let $L_{Q_0}$ and $\sigma_{Q_0}:=c_{Q_0}{\mathcal H}^n_{L_{Q_0}}$ be a minimizing $n$-plane and measure for $\alpha_{\mu}(C_2Q_0)$, respectively. Fix $z_{Q_0}\in L_{Q_0}\cap B_{C_2Q_0}$ and let $L_r$ be an $n$-plane parallel to $L_{D}^x$ which contains $z_{Q_0}$. Finally, define the measures $\sigma_{r}:=c_{Q_0}{\mathcal H}^n_{L_{r}}$ and $\sigma':=c_{Q_0}{\mathcal H}^n_{L_D^x}$. Since $\sigma'$ is a multiple of ${\mathcal H}^n_{L_D^x}$, similarly to (\ref{pipipi}) and using the triangle inequality, \begin{equation}\label{eq6} \begin{split} \|\Delta^\psi_{Q,s}h_x\|_2\ell(Q)^{n/2+1}&\lesssim{\operatorname{dist}}_{B_Q}(\nu_x,\sigma')\\ &\leq{\operatorname{dist}}_{B_Q}(\nu_x,p^x_\sharp\sigma_{Q_0}) +{\operatorname{dist}}_{B_Q}(p^x_\sharp\sigma_{Q_0},p^x_\sharp\sigma_r) +{\operatorname{dist}}_{B_Q}(p^x_\sharp\sigma_r,\sigma'), \end{split} \end{equation} where we have set $B_Q:=B(z_Q,3\ell(Q))\subset{\mathbb R}^d$ (for these computations, we may also assume that $\ell(Q)$ is small enough in comparison with $\ell(D)$). 
Arguing as in (\ref{eq7}), if $C_2$ is big enough, we have \begin{equation}\label{eq10} \begin{split} {\operatorname{dist}}_{B_Q}(\nu_x,p^x_\sharp\sigma_{Q_0})={\operatorname{dist}}_{B_Q}(p^x_\sharp\mu,p^x_\sharp\sigma_{Q_0})\lesssim\alpha_\mu(C_2Q_0)\ell(Q)^{n+1}, \end{split} \end{equation} and \begin{equation*} \begin{split} {\operatorname{dist}}_{B_Q}(p^x_\sharp\sigma_{Q_0},p^x_\sharp\sigma_r)\lesssim{\operatorname{dist}}_{B_{C_2Q_0}}(\sigma_{Q_0},\sigma_r)\lesssim{\operatorname{dist}}_{\mathcal H}(L_{Q_0}\cap B_{C_2Q_0},L_r\cap B_{C_2Q_0})\ell(Q)^n. \end{split} \end{equation*} Let $\gamma$ be the angle between $L_r$ and $L_{Q_0}$ (which is the same as the one between $L_D$ and $L_{Q_0}$). Since $z_{Q_0}\in L_{Q_0}\cap L_r\cap B_{C_2Q_0}$, we have ${\operatorname{dist}}_{\mathcal H}(L_{Q_0}\cap B_{C_2Q_0},L_r\cap B_{C_2Q_0})\lesssim\sin(\gamma)\ell(Q)$, and it is not difficult to show that $\sin(\gamma)\lesssim\sum_{R\in{\mathcal D}:\,Q_0\subset R\subset C_2D}\alpha_{\mu}(C_2R)$. Thus, \begin{equation}\label{eq11} \begin{split} {\operatorname{dist}}_{B_Q}(p^x_\sharp\sigma_{Q_0},p^x_\sharp\sigma_r)\lesssim \sum_{R\in{\mathcal D}:\,Q_0\subset R\subset C_2D}\alpha_{\mu}(C_2R)\ell(Q)^{n+1}. \end{split} \end{equation} Let us estimate the last term on the right hand side of (\ref{eq6}). Since $c_{Q_0}\lesssim1$, we have ${\operatorname{dist}}_{B_Q}(p^x_\sharp\sigma_r,\sigma')\lesssim{\operatorname{dist}}_{B_Q}(p^x_\sharp{\mathcal H}^n_{L_r},{\mathcal H}^n_{L_D^x})$. Let $h$ be a 1-Lipschitz function supported in $B_Q$. Set $d:={\operatorname{dist}}(z_{Q_0},L_D^x)$. Since $Q\in J_1\subset J$ and $\ell(Q)\leq C\ell(D)$, if $C$ is small enough then ${\operatorname{dist}}(x,B_Q)\gtrsim\ell(D)$. Without loss of generality, we may assume that $x=0$ and that $L_D^x={\mathbb R}^n\times\{0\}^{d-n}$, so $L_r=z_{Q_0}+{\mathbb R}^n\times\{0\}^{d-n}$. 
Thus, if we set $z_{Q_0}':=(z_{Q_0}^{n+1},\ldots,z_{Q_0}^d)$, we have that $d=|z_{Q_0}'|$ and $p^x$ restricted to $L_r\cap B_Q$ can be written in the following manner: $p^x:y=(y^1,\ldots,y^n,z_{Q_0}')\mapsto(F(y^1,\ldots,y^n),0)$, where $F:{\mathbb R}^n\setminus\{0\}^n\to{\mathbb R}^n$ is defined by $$F(y)=y\frac{\sqrt{|y|^2+d^2}}{|y|}=y\sqrt{1+\frac{d^2}{|y|^2}}.$$ Therefore, $\int h\,d(p^x_\sharp{\mathcal H}^n_{L_r})=\int h\circ p^x\,d{\mathcal H}^n_{L_r}=\int_{{\mathbb R}^n}(h\circ p^x)(y,z_{Q_0}')\,dy=\int_{{\mathbb R}^n}h(F(y),0)\,dy$, and we also have $\int h\,d{\mathcal H}^n_{L_D^x}=\int_{{\mathbb R}^n}h((y,0))\,dy=\int_{{\mathbb R}^n}h(F(y),0)J(F)(y)\,dy$ by a change of variables, where $J(F)$ denotes the Jacobian of $F$. Hence \begin{equation}\label{lmlml} \left|\int h\,d(p^x_\sharp{\mathcal H}^n_{L_r}-{\mathcal H}^n_{L_D^x})\right|\lesssim\int_{{\mathbb R}^n}|h(F(y),0)||1-J(F)(y)|\,dy. \end{equation} Notice that, because of the assumptions on ${\operatorname{supp}} h(F(\cdot),0)$ and since $z_{Q_0}\in B_{C_2Q_0}$ and $Q_0\subset C_2D$, we have $d\lesssim|y|$ for all $y\in{\operatorname{supp}} h(F(\cdot),0)$. If $F_i$ denotes the $i$'th coordinate of $F$, it is straightforward to check that $\partial_{y^j}F_i(y)=-d^2y^iy^j|y|^{-3}(|y|^2+d^2)^{-1/2}$ if $i\neq j$ and $\partial_{y^i}F_i(y)=(1+d^2/|y|^2)^{1/2}-d^2(y^i)^2|y|^{-3}(|y|^2+d^2)^{-1/2}$. Thus, we easily obtain \begin{equation}\label{lolo} |1-J(F)(y)|\lesssim d/|y|\lesssim d/\ell(D) \end{equation} for all $y\in{\operatorname{supp}} h(F(\cdot),0)$. 
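As a complement to the coordinatewise derivatives above, one can check that the Jacobian behind (\ref{lolo}) also admits a closed form: writing $r:=|y|$ and $g(r):=(1+d^2/r^2)^{1/2}$, the differential has the rank-one perturbation structure $DF(y)=g(r)\operatorname{Id}+g'(r)r^{-1}\,yy^T$, whence

```latex
\begin{equation*}
J(F)(y)=\det DF(y)=g(r)^{n-1}\bigl(g(r)+g'(r)\,r\bigr)
=g(r)^{n-2}=\Bigl(1+\frac{d^2}{|y|^2}\Bigr)^{(n-2)/2},
\end{equation*}
```

so that $|1-J(F)(y)|\lesssim d^2/|y|^2\leq d/|y|$ whenever $d\leq|y|$, in agreement with (\ref{lolo}).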
Since ${\operatorname{diam}}({\operatorname{supp}} h(F(\cdot),0))\lesssim\ell(Q)$ and $h((F(\cdot),0))$ is Lipschitz, using (\ref{lolo}) and taking the supremum in (\ref{lmlml}) over all such functions $h$, we have ${\operatorname{dist}}_{B_Q}(p^x_\sharp{\mathcal H}^n_{L_r},{\mathcal H}^n_{L_D^x})\lesssim \ell(Q)^{n+1}d/\ell(D).$ Moreover, by \cite[Remark 5.3]{To} and since $z_{Q_0}\in L_{Q_0}$, $$d\lesssim{\operatorname{dist}}(z_{Q_0},L_D)+{\operatorname{dist}}(L_D,L_D^x) \lesssim\sum_{R\in{\mathcal D}:\,Q_0\subset R\subset C_2D}\alpha_\mu(C_2R)\ell(R)+{\operatorname{dist}}(x,L_D),$$ and thus \begin{equation}\label{eq12} \begin{split} {\operatorname{dist}}_{B_Q}(p^x_\sharp{\mathcal H}^n_{L_r},{\mathcal H}^n_{L_D^x})\lesssim \sum_{R\in{\mathcal D}:\,Q_0\subset R\subset C_2D}\alpha_\mu(C_2R)\ell(Q)^{n+1}+\frac{{\operatorname{dist}}(x,L_D)}{\ell(D)}\,\ell(Q)^{n+1}. \end{split} \end{equation} Finally, (\ref{lilili}) follows by applying (\ref{eq10}), (\ref{eq11}), and (\ref{eq12}) to (\ref{eq6}), which yields Lemma \ref{5lema L2}$(c)$. \end{proof} \begin{proof}[\bf{\em Proof of} Lemma \ref{5lema L2}$(d)$] This is the key point where taking averages of dyadic lattices with respect to the parameter $s$ is necessary. Given $Q\in J_1\cap{\operatorname{Stp}}$, we have to show that $\|m_{s\in Q^0_{J(Q)}}(\widetilde\Delta^\psi_{Q,s}h_x)\|_1\lesssim\ell(Q)^n.$ Unlike in $(a),\ldots,(c)$, the estimate in $(d)$ does not in general hold for a particular choice of $s$ but, as we will see, it does hold on average. 
Recall that, for a fixed $s\in Q^0_{J(Q)}$, \begin{equation*} \begin{split} \widetilde\Delta^\psi_{Q,s}h_x&=\sum_{R\in{\mathcal P}:\,R\subset Q}\Delta^\psi_{R,s}h_x\\ &=\sum_{\begin{subarray}{c}R\in{\mathcal P}:\,{\operatorname{supp}}\psi_{R}\cap Q\neq\emptyset\\ \ell(R)\leq\ell(Q)\end{subarray}}\chi_{s+Q}\,\Delta^\psi_{R,s}h_x -\sum_{\begin{subarray}{c}R\in{\mathcal P}:\,{\operatorname{supp}}\psi_{R}\cap Q\neq\emptyset\\ \ell(R)\leq\ell(Q),\,R\not\subset Q\end{subarray}}\chi_{s+Q}\,\Delta^\psi_{R,s}h_x\\ &\quad+\sum_{\begin{subarray}{c}R\in{\mathcal P}:\\ R\subset Q\end{subarray}}\chi_{(s+Q)^c}\,\Delta^\psi_{R,s}h_x =:I_s+II_s+III_s. \end{split} \end{equation*} We are going to estimate $I_s$, $II_s$, and $III_s$ separately. For the case of $I_s$, we have \begin{equation*} \begin{split} \chi_{s+Q}\, h_x=\chi_{s+Q}\sum_{R\in{\mathcal D}_x^{n,0}:\, \ell(R)>\ell(Q)}\Delta^\psi_{R,s}h_x+ \chi_{s+Q}\sum_{R\in{\mathcal D}_x^{n,0}:\, \ell(R)\leq\ell(Q)}\Delta^\psi_{R,s}h_x=\chi_{s+Q}\,I'_s+I_s,\\ \end{split} \end{equation*} where we have set $I'_s:=\sum_{R\in{\mathcal D}_x^{n,0}:\, \ell(R)>\ell(Q)}\Delta^\psi_{R,s}h_x$. On the one hand, since $Q\in J_1\cap{\operatorname{Stp}}$, (\ref{reduced lema1}) holds. Thus, using that $\sum_{R\in{\mathcal D}:\,P(R_Q)\subset R,\,\ell(R)\leq\ell(D)}\alpha_{\mu}(10R)<b_*$, one can show that \begin{equation}\label{njniop} \|\chi_{s+Q}\,h_x\|_1\lesssim\ell(Q)^n \end{equation} (see above (\ref{L2 claim}) for a related argument). On the other hand, since $\|\chi_{s+Q}\,h_x\|_1\lesssim\ell(Q)^n$, it is known that $\|\chi_{s+Q} I'_s\|_1\lesssim\ell(Q)^n$ (see \cite[Part I]{David-LNM}; in particular, see the last sum in equation (46) of Part I). Combining these estimates, we conclude that $\|I_s\|_1\lesssim\ell(Q)^n$. Let us now deal with $II_s$. 
First of all, split $II_s$ into different scales, that is, $$\sum_{\begin{subarray}{c}R\in{\mathcal P}:\,{\operatorname{supp}}\psi_{R}\cap Q\neq\emptyset\\ \ell(R)\leq\ell(Q),\,R\not\subset Q\end{subarray}}\chi_{s+Q}\,\Delta^\psi_{R,s}h_x= \sum_{k\geq J(Q)}\,\sum_{\begin{subarray}{c}R\in{\mathcal P}:\,{\operatorname{supp}}\psi_{R}\cap Q\neq\emptyset\\ \ell(R)=2^{-k},\,R\not\subset Q\end{subarray}}\chi_{s+Q}\,\Delta^\psi_{R,s}h_x.$$ Observe that if $k\geq J(Q)$, ${\operatorname{supp}}\psi_{R}\cap Q\neq\emptyset$, $\ell(R)=2^{-k}$, and $R\not\subset Q$, then $s+R\subset U_{C2^{-k}}(s+\partial Q)$, where $C>1$ is some fixed constant and $U_{C2^{-k}}(s+\partial Q):=\{z\in L_D^x:\, {\operatorname{dist}}(z,s+\partial Q)<C2^{-k}\}$. Hence, using Definition \ref{definicio wavelet}$(e)$ and the definition of $h_x$, we get \begin{equation*} \begin{split} \|II_s\|_1\leq\sum_{k\geq J(Q)}\,\sum_{\begin{subarray}{c}R\in{\mathcal P}:\,{\operatorname{supp}}\psi_{R}\cap Q\neq\emptyset\\ \ell(R)=2^{-k},\,R\not\subset Q\end{subarray}}\|\Delta^\psi_{R,s}h_x\|_1 \lesssim\sum_{k\geq J(Q)}\nu_x(U_{C2^{-k}}(s+\partial Q)). \end{split} \end{equation*} The term $III_s$ can be dealt with using very similar techniques, and one obtains the same estimate. Therefore, \begin{equation}\label{reduced aqpl} \begin{split} \|m_{s\in Q^0_{J(Q)}}(\widetilde\Delta^\psi_{Q,s}h_x)\|_1&=\|m_{s\in Q^0_{J(Q)}}(I_s+II_s+III_s)\|_1 \leq m_{s\in Q^0_{J(Q)}}\|I_s+II_s+III_s\|_1\\ &\lesssim\ell(Q)^n+m_{s\in Q^0_{J(Q)}}\bigg(\sum_{k\geq J(Q)}\nu_x(U_{C2^{-k}}(s+\partial Q))\bigg). \end{split} \end{equation} Using Fubini's theorem, it is not difficult to show that $$m_{s\in Q^0_{J(Q)}}\nu_x(U_{C2^{-k}}(s+\partial Q))\lesssim2^{-k}\ell(Q)^{-1}\nu_x(CQ)$$ for all $k\geq J(Q)$ (see, for example, \cite[Lemma 7.5]{Tolsa3} for a related argument). 
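The gain from averaging over translations can already be seen in a toy one-dimensional situation: for a point mass $\nu$, an individual translate $s$ may place the boundary neighborhood on top of all of the mass, yet the set of such bad translations has measure $\lesssim2^{-k}$, so the average is small. A hedged numerical sketch of this Fubini argument (the cube $Q=[0,1]$, the point $z$, and the radius $r$ are illustrative choices, not objects from the proof):

```python
import numpy as np

# 1D model: Q = [0,1], nu = unit point mass at z, boundary dQ = {0, 1}.
# For a translation s, nu(U_r(s + dQ)) = 1 iff dist(z, {s, s+1}) < r.
z, r = 0.37, 0.01
s_grid = np.linspace(0.0, 1.0, 200001)            # s ranges over a unit cell
hit = (np.abs(z - s_grid) < r) | (np.abs(z - (s_grid + 1.0)) < r)
avg = hit.mean()                                   # average over s of nu(U_r(s+dQ))

# An individual translation can be bad: for s = z the neighborhood
# captures the whole mass ...
assert hit[np.argmin(np.abs(s_grid - z))]
# ... but on average the captured mass is of size ~ (r / l(Q)) * nu(CQ):
assert 0.0 < avg <= 4 * r
```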
Since $Q\in{\operatorname{Stp}}$, (\ref{reduced lema1}) holds and, as in (\ref{njniop}), we have $\nu_x(CQ)\lesssim\ell(Q)^n$; thus $$m_{s\in Q^0_{J(Q)}}\bigg(\sum_{k\geq J(Q)}\nu_x(U_{C2^{-k}}(s+\partial Q))\bigg)\lesssim\ell(Q)^n.$$ If we combine this last estimate with (\ref{reduced aqpl}), we are done. \end{proof} \subsubsection{{\bf Final estimates}} From Lemmas \ref{5 var eq8}, \ref{5 var eq11}, and \ref{5 var eq8a}, we obtain the following: \begin{equation*} \begin{split} \sum_{S\in{\operatorname{Trs}}}\,\sum_{D\in S}&\int_D\sum_{m\in{\mathcal S}_D(x)}\bigg|\sum_{R\in V(D)}\sum_{Q\in {\operatorname{Tr}}(R)}(K\chi_{\epsilon_{m+1}}^{\epsilon_m}*(\Delta_Q f)\mu)(x)\bigg|^2\,d\mu(x)\\ +\sum_{S\in{\operatorname{Trs}}}&\,\sum_{D\in S}\int_D\sum_{m\in{\mathcal S}_D(x)}\bigg|\sum_{R\in V(D)}\sum_{Q\in {\operatorname{Stp}}(R)}(K\chi_{\epsilon_{m+1}}^{\epsilon_m}*(\widetilde\Delta_Q f)\mu)(x)\bigg|^2\,d\mu(x)\\ &+\sum_{S\in{\operatorname{Trs}}}\sum_{D\in S}\int_D\sum_{m\in{\mathcal S}_D(x)}\bigg|\sum_{R\in V(D)} (K\chi_{\epsilon_{m+1}}^{\epsilon_m}*(m_R^\mu f)\chi_R\mu)(x)\bigg|^2\,d\mu(x)\lesssim\|f\|_{L^2(\mu)}^2. \end{split} \end{equation*} Combining this estimate with (\ref{eqmain}), we deduce \begin{equation*} \begin{split} \sum_{S\in{\operatorname{Trs}}}&\,\sum_{D\in S}\int_D\sum_{m\in{\mathcal S}_D(x)}\big|(K\chi_{\epsilon_{m+1}}^{\epsilon_m}*(f\mu))(x)\big|^2\,d\mu(x)\lesssim\|f\|_{L^2(\mu)}^2. \end{split} \end{equation*} Finally, using (\ref{5 var eq1}) and (\ref{5 var eq2}), we conclude that \begin{equation*} \begin{split} \|({\mathcal V}^{\mathcal S}_\rho\circ{\mathcal T}^\mu)f\|_{L^2(\mu)}^2\lesssim\sum_{D\in{\mathcal D}}\int_D\sum_{m\in{\mathcal S}_D(x)}\big|(K\chi_{\epsilon_{m+1}}^{\epsilon_m}*(f\mu))(x)\big|^2\,d\mu(x)\lesssim\|f\|_{L^2(\mu)}^2. \end{split} \end{equation*} This finishes the proof of Theorem \ref{5teo var no suau acotada L2}. 
\section{If ${\mathcal V}_\rho\circ{\mathcal R}^\mu:\,L^2(\mu)\to L^2(\mu)$ is a bounded operator,\\then $\mu$ is a uniformly $n$-rectifiable measure}\label{4sec acotacio implica rectif} Let $C_\mu>0$ be the AD regularity constant of an AD regular measure $\mu$, that is, $C_\mu^{-1}r^n\leq\mu(B(x,r))\leq C_\mu r^{n}$ for all $x\in{\operatorname{supp}}\mu$ and $0<r<{\operatorname{diam}}({\operatorname{supp}}\mu)$. For simplicity of notation, we may assume that ${\operatorname{diam}}({\operatorname{supp}}\mu)=\infty$ (the general case follows with minor modifications in our arguments). As before, we denote by ${\mathcal D}$ the dyadic lattice of $\mu$-cubes introduced in Subsection \ref{dyadic lattice}. In this section, we set $K(x)=x|x|^{-n-1}$ for $x\neq0$. Recall that, given $\epsilon>0$, a Borel measure $\mu$, and $f\in L^1(\mu)$, we have set ${\mathcal R}^\mu f:=\{R_\epsilon^\mu f\}_{\epsilon>0}$, where \begin{equation*} R_\epsilon^\mu f(x) = \int_{|x-y|>\epsilon} K(x-y)f(y)\,d\mu(y). \end{equation*} In order to prove the main theorem of this section, namely Theorem \ref{4rectif teorema}, we first need to introduce some notation and state some preliminary results. \begin{defi}[Special truncation of the Riesz transform]\label{c djlcnA} For $\epsilon>0$, let $\varphi_\epsilon$ be as in Definition \ref{4defi varphi}. Given $m\in{\mathbb Z}$ and a Borel measure $\mu$ in ${\mathbb R}^d$, we set \begin{equation*} S_m\mu(x) := \int \big(\varphi_{2^{-m-1}}(x-y)-\varphi_{2^{-m}}(x-y)\big)K(x-y)\,d\mu(y). \end{equation*} \end{defi} \begin{lema}[Lemma 5.8 of \cite{DS1}] \label{4lemli} Given $Q\in{\mathcal D}$, there exist $n+1$ points $x_0,\ldots,x_n$ in $Q$ (and thus in ${\operatorname{supp}}\mu$) such that ${\operatorname{dist}}(x_j,L_{j-1})\geq C\ell(Q)$, where $L_k$ denotes the $k$-plane passing through $x_0,\ldots,x_k$, and where $C$ depends only on $n$ and $C_\mu$. 
\end{lema} \begin{lema}[Lemma 7.4 and Remark 7.5 of \cite{To}]\label{4lemcas1} Let $Q\in{\mathcal D}$ and $x_0,\ldots,x_n\in Q$ be as in Lemma \ref{4lemli}. Denote $r={\operatorname{diam}}(Q)$, and let $m,p\in{\mathbb Z}$ be such that $t\geq s>4r$ for $t=2^{-p}$ and $s=2^{-m}$. Suppose that $A(x_0,2^{-m-1/2},2^{-m+1/2})\cap{\operatorname{supp}}\mu\neq\emptyset$. Then any point $x_{n+1}\in 3Q$ satisfies \begin{equation}\label{4eqrem} {\operatorname{dist}}(x_{n+1},L_0)\lesssim s\sum_{j=1}^{n+1}\sum_{k=p}^m|S_k\mu(x_j)-S_k\mu(x_0)| +\frac{r^2}{s}+\frac{rs}{t}, \end{equation} where $L_0$ is the $n$-plane passing through $x_0,\ldots,x_n$. \end{lema} The following proposition is a direct consequence of the techniques used in the last section of \cite{To}. We give the proof for completeness. \begin{propo}\label{4rectif propo} Given $\epsilon_0>0$, there exist $\delta_0>0$ and $m_0,k_0\in{\mathbb N}$ depending on $\epsilon_0$, $n$, and $C_\mu$ such that, for all $i\in{\mathbb Z}$ and all $Q\in{\mathcal D}_i$ with $\beta_{1,\mu}(Q)>\epsilon_0$, there exist $k\in{\mathbb Z}$ with $|k|\leq k_0$ and $P\in{\mathcal D}_{i+k+m_0}$ such that $P\subset 4Q$ and $|S_{i+k}\mu(x)|\geq\delta_0\text{ for all }x\in P$. \end{propo} \begin{proof}[{\bf {\em Proof.}}] Fix $\epsilon_0>0$. Let $Q\in{\mathcal D}_i$ be such that $\beta_{1,\mu}(Q)>\epsilon_0$. Take points $x_0,\ldots,x_n$ in $Q$ as in Lemma \ref{4lemli}, denote $r={\operatorname{diam}}(Q)$, and let $m\in{\mathbb Z}$, to be fixed below, be such that $4r<2^{-m}=:s$ and $A(x_0,2^{-m-1/2},2^{-m+1/2})\cap{\operatorname{supp}}\mu\neq\emptyset$ (we assume ${\operatorname{diam}}({\operatorname{supp}}\mu)=\infty$). By Lemma \ref{4lemcas1}, for $t:=2^{-p}\geq s$ to be fixed below and all $x_{n+1}\in3Q$, \begin{equation*} \begin{split} {\operatorname{dist}}(x_{n+1},L_0)&\lesssim s\sum_{j=1}^{n+1}\sum_{k=p}^m|S_k\mu(x_j)-S_k\mu(x_0)| +\frac{r^2}{s}+\frac{rs}{t}\\ &\lesssim s\sum_{k=p}^m\sum_{j=0}^{n+1}|S_k\mu(x_j)| +\frac{r^2}{s}+\frac{rs}{t}. 
\end{split} \end{equation*} Then, by integrating on $x_{n+1}\in3Q$, for some constant $C_1>0$ depending only on $n$ and $C_\mu$, \begin{equation*} \begin{split} \epsilon_0&<\beta_{1,\mu}(Q)\leq\frac{1}{\ell(Q)^n}\int_{3Q}\frac{{\operatorname{dist}}(x_{n+1},L_0)}{\ell(Q)}\,d\mu(x_{n+1})\\ &\leq C_1\bigg(\frac{s}{r}\sum_{k=p}^m\bigg(\frac{1}{\ell(Q)^n}\int_{3Q}|S_k\mu(x_{n+1})|\,d\mu(x_{n+1})+\sum_{j=0}^{n}|S_k\mu(x_j)|\bigg)+\frac{r}{s}+\frac{s}{t}\bigg). \end{split} \end{equation*} Thus, \begin{equation*} \begin{split} \frac{r}{s}\bigg(\frac{\epsilon_0}{C_1}-\frac{r}{s}-\frac{s}{t}\bigg) \leq\sum_{k=p}^m\bigg(\int_{3Q}\frac{|S_k\mu(x_{n+1})|}{\ell(Q)^n}\,d\mu(x_{n+1})+\sum_{j=0}^{n}|S_k\mu(x_j)|\bigg). \end{split} \end{equation*} We can easily choose $s$ and $t$ big enough (depending on $r$, $\epsilon_0$, and $C_1$) such that, for some constant $\epsilon_1>0$ depending only on $\epsilon_0$, $n$ and $C_\mu$, \begin{equation}\label{4rectif eq8} \begin{split} 0<\epsilon_1\leq\sum_{k=p}^m\bigg(\int_{3Q}\frac{|S_k\mu(x_{n+1})|}{\ell(Q)^n}\,d\mu(x_{n+1})+\sum_{j=0}^{n}|S_k\mu(x_j)|\bigg). \end{split} \end{equation} Notice that, since $t=2^{-p}$ and $s=2^{-m}$ were chosen depending on $r\approx2^{-i}$, the sum on the right hand side of (\ref{4rectif eq8}) has a finite number of terms which depends only on $\epsilon_0$, $n$ and $C_\mu$. Therefore, there exist $k_0\in{\mathbb N}$ and $C_2>0$ depending only on $\epsilon_0$, $n$ and $C_\mu$ such that, for some negative integer $k$ with $|k|\leq k_0$ and some $j=0,\ldots,n$, \begin{equation*} \begin{split} \epsilon_1\leq C_2\bigg(\frac{1}{\ell(Q)^n}\int_{3Q}|S_{i+k}\mu|\,d\mu+|S_{i+k}\mu(x_{j})|\bigg), \end{split} \end{equation*} which implies that there exist $C_3$ (depending on $C_2$) and $z\in 3Q$ such that $\epsilon_1\leq C_3|S_{i+k}\mu(z)|$. 
Given $x\in{\operatorname{supp}}\mu$, if $|x-z|\leq2^{-i-k}$, then \begin{equation*} \begin{split} |S_{i+k}\mu(x)-S_{i+k}\mu(z)|&\leq \int_{|y-z|\lesssim2^{-i-k}}\|\nabla(\varphi_{i+k}K)\|_\infty|x-z|\,d\mu(y)\\ &\lesssim2^{(i+k)(n+1)}|x-z|\int_{|y-z| \lesssim2^{-i-k}}\,d\mu(y)\lesssim2^{i+k}|x-z|. \end{split} \end{equation*} Hence, if $|x-z|\leq C_42^{-i-k}$ with $C_4>0$ small enough, we have $C_3|S_{i+k}\mu(x)-S_{i+k}\mu(z)|\leq\epsilon_1/2$, so $\epsilon_1/2\leq C_3|S_{i+k}\mu(x)|$. Therefore, there exist $m_0\in{\mathbb N}$ depending on $C_4$ (and thus on $\epsilon_0$, $n$, and $C_\mu$) and $P\in{\mathcal D}_{i+k+m_0}$ such that $\epsilon_1/2\leq C_3|S_{i+k}\mu(x)|$ for all $x\in P$. We can also assume that $P\subset4Q$ by taking $C_4$ small enough, and since $|k|\leq k_0$ we have $\ell(P)\approx\ell(Q)$. The proposition follows by setting $\delta_0:=\epsilon_1/(2C_3)>0$. \end{proof} \begin{defi} Given $\epsilon_0>0$, let $\delta_0,m_0>0$ be as in Proposition \ref{4rectif propo}. Set \begin{equation*} \begin{split} {\mathcal B}&:=\{Q\in{\mathcal D}\,:\,\beta_{1,\mu}(Q)>\epsilon_0\},\quad \widetilde{\mathcal B}:=\bigcup_{k\in{\mathbb Z}}\{Q\in{\mathcal D}_{k+m_0}\,:\,|S_k\mu(x)|\geq\delta_0\text{ for all }x\in Q\}. \end{split} \end{equation*} Given $P,R\in{\mathcal D}$ with $P\subset R$, we set $F_P^R=\sum_{Q\in\widetilde{\mathcal B}:\,P\subset Q\subset R}\chi_Q$ and $F^R=\sum_{Q\in\widetilde{\mathcal B}:\,Q\subset R}\chi_Q$. \end{defi} \begin{lema}\label{4rectif lema} Let $\rho>0$. Assume that there exists $C_0>0$ such that, for all $R\in{\mathcal D}$, \begin{equation}\label{4rectif eq1} \int_R \big(F^R\big)^{2/\rho}\,d\mu\leq C_0\mu(R). \end{equation} Then, there exists $C>0$ such that $\sum_{Q\in\widetilde{\mathcal B}:\,Q\subset R}\mu(Q)\leq C\mu(R)$ for all $R\in{\mathcal D}$. \end{lema} \begin{proof}[{\bf {\em Proof.}}] Let $M>1$ be big enough (it will be fixed below). 
For $R\in{\mathcal D}$, set \begin{equation*} \begin{split} {\operatorname{Tree}}(R)&:=\big\{Q\in\widetilde{\mathcal B}\,:\,Q\subset R,\,\chi_Q F_Q^R\leq M\chi_Q\big\},\\ {\operatorname{Top}}_0(R)&:=\big\{P\in\widetilde{\mathcal B}\,:\,P\subset R,\,\chi_P F_P^R>M\chi_P,\text{ and }\chi_Q F_{Q}^R\leq M\chi_{Q}\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \text{ for all }\,Q\in\widetilde{\mathcal B}\text{ such that }P\subsetneq Q\subset R\big\}. \end{split} \end{equation*} For $m\geq1$, set ${\operatorname{Top}}_m(R):=\bigcup_{P\in{\operatorname{Top}}_{m-1}(R)}{\operatorname{Top}}_0(P)$, and ${\operatorname{Top}}(R):=\bigcup_{m\geq0}{\operatorname{Top}}_m(R)$. Notice that if $R\in\widetilde{\mathcal B}$ then $R\in{\operatorname{Tree}}(R)$, because $M>1$. Notice also that \begin{equation}\label{4rectif eq2} \{Q\in\widetilde{\mathcal B}\,:\,Q\subset R\}={\operatorname{Tree}}(R)\cup\Big(\textstyle{\bigcup_{P\in{\operatorname{Top}}(R)}}{\operatorname{Tree}}(P)\Big), \end{equation} and the union is disjoint. Fix $R\in{\mathcal D}$. Then, by (\ref{4rectif eq2}), \begin{equation}\label{4rectif eq3} \begin{split} \sum_{Q\in\widetilde{\mathcal B}:\,Q\subset R}\mu(Q)&=\sum_{Q\in{\operatorname{Tree}}(R)}\mu(Q)+\sum_{P\in{\operatorname{Top}}(R)}\,\sum_{Q\in{\operatorname{Tree}}(P)}\mu(Q)\\ &=\int_R\sum_{Q\in{\operatorname{Tree}}(R)}\chi_Q\,d\mu+\int_R\sum_{P\in{\operatorname{Top}}(R)}\,\sum_{Q\in{\operatorname{Tree}}(P)}\chi_Q\,d\mu. \end{split} \end{equation} Given $x\in R$ and $P\in{\mathcal D}$ such that $P\subset R$, by the definition of ${\operatorname{Tree}}(P)$, we have $$\sum_{Q\in{\operatorname{Tree}}(P)}\chi_Q(x)\leq M\chi_P(x).$$ Therefore, by (\ref{4rectif eq3}), \begin{equation}\label{4rectif eq4} \begin{split} \sum_{Q\in\widetilde{\mathcal B}:\,Q\subset R}\mu(Q)&\leq M\mu(R)+\int_R\sum_{P\in{\operatorname{Top}}(R)}M\chi_P\,d\mu =M\bigg(\mu(R)+\sum_{m\geq0}\sum_{P\in{\operatorname{Top}}_m(R)}\mu(P)\bigg). 
\end{split} \end{equation} We are going to prove that, if $M$ is big enough, \begin{equation}\label{4rectif eq5} \sum_{P\in{\operatorname{Top}}_m(R)}\mu(P)\leq2^{-m}\mu(R) \end{equation} for all $m\geq0$, and then, by (\ref{4rectif eq4}), we will finally obtain $$\sum_{Q\in\widetilde{\mathcal B}:\,Q\subset R}\mu(Q)\leq M\mu(R)+M\sum_{m\geq0}2^{-m}\mu(R)\leq3M\mu(R),$$ and the lemma will be proven. Notice that, if $P,P'\in{\operatorname{Top}}_0(R)$ are different, then $P\cap P'=\emptyset$ because of the last condition in the definition of ${\operatorname{Top}}_0(R)$. So, to verify (\ref{4rectif eq5}), it is enough to show that, for all $m\geq0$, \begin{equation}\label{4rectif eq5bis} \begin{split} \sum_{P\in{\operatorname{Top}}_{m+1}(R)}\mu(P)<\frac{1}{2}\sum_{P\in{\operatorname{Top}}_{m}(R)}\mu(P). \end{split} \end{equation} We have \begin{equation}\label{plki} \sum_{P\in{\operatorname{Top}}_{m+1}(R)}\mu(P)=\sum_{P\in{\operatorname{Top}}_{m}(R)}\,\sum_{Q\in{\operatorname{Top}}_{0}(P)}\mu(Q) \end{equation} and $\sum_{Q\in{\operatorname{Top}}_0(P)}\chi_Q=\chi_{U}$, where $U:=\bigcup_{Q\in{\operatorname{Top}}_0(P)}Q\subset P$. If $x\in U$, there exists $Q\in{\operatorname{Top}}_0(P)$ such that $x\in Q$, so $1=\chi_Q(x)<{M^{-2/\rho}}\big(F^P_Q(x)\big)^{2/\rho}\leq{M^{-2/\rho}}\big(F^P(x)\big)^{2/\rho}$, and then using (\ref{4rectif eq1}) we have \begin{equation*} \sum_{Q\in{\operatorname{Top}}_0(P)}\mu(Q)=\int_P\sum_{Q\in{\operatorname{Top}}_0(P)}\chi_Q\,d\mu =\int_U1\,d\mu<M^{-2/\rho}\int_P\big(F^P\big)^{2/\rho}\,d\mu\leq \frac{C_0}{M^{2/\rho}}\,\mu(P), \end{equation*} which, in combination with (\ref{plki}), yields (\ref{4rectif eq5bis}) by taking $M>(2C_0)^{\rho/2}$. \end{proof} \begin{lema}\label{4rectif lema 2} Assume that, for some $C_1>0$, $\sum_{Q\in\widetilde{\mathcal B}:\,Q\subset R}\mu(Q)\leq C_1\mu(R)$ for all $R\in{\mathcal D}$. Then there exists $C_2>0$ such that $\sum_{Q\in{\mathcal B}:\,Q\subset R}\mu(Q)\leq C_2\mu(R)$ for all $R\in{\mathcal D}$. 
\end{lema} \begin{proof}[{\bf {\em Proof.}}] Given $Q\in{\mathcal B}$, by Proposition \ref{4rectif propo}, there exists $P_Q\in{\mathcal D}_{k+m_0}$ for some $k\in{\mathbb Z}$ such that $P_Q\subset 4Q$, $\mu(P_Q)\geq C_0\mu(Q)$, and $|S_k\mu(x)|\geq\delta_0\text{ for all }x\in P_Q,$ where $C_0>0$ is some small constant. Thus, in particular, $P_Q\in\widetilde{\mathcal B}$ for all $Q\in{\mathcal B}$. Since $P_Q\subset 4Q$ and $\mu(P_Q)\geq C_0\mu(Q)$ for all $Q\in{\mathcal B}$, given $P\in\widetilde{\mathcal B}$ there are finitely many $\mu$-cubes $Q\in{\mathcal B}$ such that $P_Q=P$, and the number of such $\mu$-cubes is bounded above by a constant depending only on $n$, $C_0$, and $C_\mu$. Hence, since $4R$ is contained in the union of a bounded number of $\mu$-cubes with side length $\ell(R)$, \begin{equation*} \sum_{Q\in{\mathcal B}:\,Q\subset R}\mu(Q)\leq C_0^{-1}\sum_{Q\in{\mathcal B}:\,Q\subset R}\mu(P_Q) \lesssim\sum_{P\in\widetilde{\mathcal B}:\,P\subset 4R}\mu(P)\leq C_1\mu(R) \end{equation*} for all $R\in{\mathcal D}$, as desired. \end{proof} \begin{teo}\label{4rectif teorema} Let $\rho>0$. Given an $n$-dimensional AD regular measure $\mu$, if ${\mathcal V}_\rho\circ{\mathcal R}^\mu$ is a bounded operator in $L^2(\mu)$, then $\mu$ is uniformly $n$-rectifiable. \end{teo} \begin{proof}[{\bf {\em Proof.}}] It is easy to see that, if ${\mathcal V}_\rho\circ{\mathcal R}^\mu$ is a bounded operator in $L^2(\mu)$, then $R_*^\mu$ is also bounded in $L^2(\mu)$. By Theorem 1.2 in \cite[Part III, Chapter 1]{DS2}, in order to show that $\mu$ is uniformly $n$-rectifiable, it is enough to show that $\mu$ satisfies the Weak Geometric Lemma, i.e., that for any $\epsilon_0>0$, the set ${\mathcal B}$ is a Carleson set. In other words, it suffices to show that there exists a constant $C>0$ depending on $\epsilon_0$ such that $\sum_{Q\in{\mathcal B}:\,Q\subset R}\mu(Q)\leq C\mu(R)$ for all $R\in{\mathcal D}$. 
By Lemma \ref{4rectif lema 2} and Lemma \ref{4rectif lema}, this holds if, for some $\rho>0$, there exists $C>0$ depending on $\epsilon_0$ such that, for all $R\in{\mathcal D}$, \begin{equation}\label{4rectif eq6} \int_R \big(F^R\big)^{2/\rho}\,d\mu\leq C\mu(R). \end{equation} Notice that, for $m\in{\mathbb Z}$ and $f\in L^1(\mu)$, $S_m(f\mu)=T_{\varphi_{2^{-m-1}}}^\mu f-T_{\varphi_{2^{-m}}}^\mu f$, where $S_m$ is introduced in Definition \ref{c djlcnA} and $T_{\varphi_{\epsilon}}^\mu$ is as in Definition \ref{4defi varphi} (remember that now $K$ denotes the Riesz kernel), thus \begin{equation}\label{4rectif eqvar} \begin{split} \sum_{k\in{\mathbb Z}}|S_k(f\mu)(x)|^{\rho} \leq\big(({\mathcal V}_{\rho}\circ{\mathcal T}_\varphi^\mu)f(x)\big)^{\rho}. \end{split} \end{equation} We may assume that $\rho\geq1$, since $({\mathcal V}_{\widetilde\rho}\circ{\mathcal R}^\mu)f(x)\leq({\mathcal V}_{\rho}\circ{\mathcal R}^\mu)f(x)$ for $\widetilde\rho\geq\rho$, and then the $L^2(\mu)$ boundedness of ${\mathcal V}_{\rho}\circ{\mathcal R}^\mu$ for some $\rho>0$ implies the $L^2(\mu)$ boundedness of ${\mathcal V}_{\widetilde\rho}\circ{\mathcal R}^\mu$ for all $\widetilde\rho\geq\rho$. Since $\varphi_{\mathbb R}\big(2^{2m}t^2\bigr)$ is a convex combination of the functions $\chi_{\{s\in{\mathbb R}\,:\, s>\epsilon\}}(t)$ for $\epsilon>0$, using that $\rho\geq1$ and Minkowski's integral inequality, it is not hard to show that the $L^2(\mu)$ boundedness of ${\mathcal V}_{\rho}\circ{\mathcal R}^\mu$ implies the $L^2(\mu)$ boundedness of ${\mathcal V}_{\rho}\circ{\mathcal T}_\varphi^\mu$ (see Subsection \ref{5sslong}, or \cite[Lemma 2.4]{CJRW-Hilbert}, for a similar argument). 
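The reduction to $\rho\geq1$ above rests on the elementary monotonicity of $\ell^p$ norms in $p$, applied to the increments defining the variation. A quick numerical check (the sample increments below are an arbitrary stand-in for the differences of truncated transforms):

```python
import numpy as np

def lp_norm(a, p):
    """Discrete l^p norm of a sequence a."""
    return np.sum(np.abs(a) ** p) ** (1.0 / p)

rng = np.random.default_rng(1)
a = rng.normal(size=30)           # model increments |R_{e_{m+1}}f - R_{e_m}f|

rho, rho_tilde = 1.5, 3.0         # rho_tilde >= rho
# l^p norms are non-increasing in p, which gives V_{rho_tilde} <= V_rho.
assert lp_norm(a, rho_tilde) <= lp_norm(a, rho) + 1e-12
```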
Therefore, for any $M>0$, we have \begin{equation}\label{4rectif eqvar1} \|({\mathcal V}_{\rho}\circ{\mathcal T}_\varphi^\mu)\chi_{MR}\|^2_{L^2(\mu)}\leq C\mu(MR)\leq C\mu(R)\text{ for all $R\in{\mathcal D}$.} \end{equation} Fix $\epsilon_0>0$, let $\delta_0,m_0>0$ be as in Proposition \ref{4rectif propo}, and let $R\in{\mathcal D}$. Given $x\in R$ and $k\in{\mathbb Z}$, for any $Q\in{\mathcal D}_{k+m_0}\cap\widetilde{\mathcal B}$ such that $x\in Q\subset R$ we have $|S_k\mu(x)|\geq\delta_0$. Notice that, since $Q\in{\mathcal D}_{k+m_0}$ and $Q\subset R$, there exists $M>1$ depending only on $n$ and $m_0$ such that $\delta_0\leq|S_k\mu(x)|=|S_k(\chi_{MR}\mu)(x)|$. Therefore, using (\ref{4rectif eqvar}) and that for each $k\in{\mathbb Z}$ there is at most one $\mu$-cube $Q\in{\mathcal D}_{k+m_0}$ such that $x\in Q\subset R$, \begin{equation}\label{eq osc} \begin{split} F^R(x)&=\sum_{k\in{\mathbb Z}}\,\sum_{Q\in{\mathcal D}_{k+m_0}\cap\widetilde{\mathcal B}\,:\,x\in Q\subset R\,}\chi_Q(x) \leq\sum_{k\in{\mathbb Z}}\,\sum_{Q\in{\mathcal D}_{k+m_0}\cap\widetilde{\mathcal B}\,:\,x\in Q\subset R\,}\delta_0^{-\rho}|S_k(\chi_{MR}\mu)(x)|^{\rho}\\ &\leq\delta_0^{-\rho}\sum_{k\in{\mathbb Z}}|S_k(\chi_{MR}\mu)(x)|^{\rho} \leq\delta_0^{-\rho}\big(({\mathcal V}_{\rho}\circ{\mathcal T}_\varphi^\mu)\chi_{MR}(x)\big)^{\rho} \end{split} \end{equation} and then, by (\ref{4rectif eqvar1}), \begin{equation*} \begin{split} \int_R\big(F^R)^{2/\rho}\,d\mu& \leq\delta_0^{-2}\int_R\big(({\mathcal V}_{\rho}\circ{\mathcal T}_\varphi^\mu)\chi_{MR}\big)^2\,d\mu \leq\delta_0^{-2}\|({\mathcal V}_{\rho}\circ{\mathcal T}_\varphi^\mu)\chi_{MR}\|^2_{L^2(\mu)} \leq C\mu(R) \end{split} \end{equation*} for all $R\in{\mathcal D}$. This yields (\ref{4rectif eq6}), and the theorem follows. \end{proof} \begin{remarko}\label{4remark oscil} {\em Let $\{r_m\}_{m\in{\mathbb Z}}\subset(0,\infty)$ be a fixed decreasing sequence defining ${\mathcal O}$. 
If there exists $C>0$ such that $C^{-1}r_m\leq r_m-r_{m+1}\leq Cr_m$ for all $m\in{\mathbb Z}$, then the last inequality in (\ref{eq osc}) still holds if we replace ${\mathcal V}_{\rho}$ by ${\mathcal O}$ (taking $\rho=2$ from the beginning). Hence, Theorem \ref{4rectif teorema} still holds with ${\mathcal V}_\rho$ replaced by ${\mathcal O}$ for this particular sequence $\{r_m\}_{m\in{\mathbb Z}}$. However, we do not know whether it holds for an arbitrary sequence $\{r_m\}_{m\in{\mathbb Z}}\subset(0,\infty)$. } \end{remarko}
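For example, the dyadic sequence $r_m=2^{-m}$ satisfies the hypothesis of the remark with $C=2$, since $r_m-r_{m+1}=r_m/2$; a one-line verification (the range of indices checked is of course arbitrary):

```python
# The lacunary sequence r_m = 2^{-m} satisfies C^{-1} r_m <= r_m - r_{m+1} <= C r_m
# with C = 2, because r_m - r_{m+1} = r_m / 2 exactly.
for k in range(-20, 20):
    r_m, r_next = 2.0 ** (-k), 2.0 ** (-(k + 1))
    diff = r_m - r_next
    assert 0.5 * r_m <= diff <= 2.0 * r_m
```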
\section{Introduction} Arbitrage Pricing Theory (APT) was conceived by \cite{ross} in order to derive the conclusions of Capital Asset Pricing Model (see \cite{lintner,sharpe}) from alternative assumptions. These remarkable conclusions had a huge bearing on empirical work but they somehow overshadowed the highly inventive model suggested in \cite{ross}. Mathematical finance subsequently took up the idea of a market with countably many assets and the theory of large financial markets was founded in \cite{kk1} and further developed in e.g.\ \cite{kk2,iw1,iw2,klein,josef}, just to mention a few. For the sake of generality, continuous trading was assumed in the overwhelming majority of related papers which, again, eclipsed the original setting of \cite{ross}. While the arbitrage theory of the large financial markets has been worked out in \cite{kk1,kk2} satisfactorily in continuous time, other crucial topics -- such as utility maximization or superreplication -- brought about only dubious conclusions and unsettled questions. Portfolios in finitely many assets were considered in the above references and a natural definition for strategies involving possibly all the assets was missing. Generalized portfolios were introduced (see \cite{paolo,josef,oleksii}) as suitable limits of portfolios with finitely many assets. They lacked, however, a clear economic interpretation. In the APT (and, for the moment, only in that model) \cite{jmaa} introduces a straightforward concept of portfolios in infinitely many assets which we will use in the present paper. In \cite{CR18} it is proved that assuming absence of arbitrage in all of the small markets and under integrability conditions, the no arbitrage condition stated with infinitely many assets also holds true. In the same paper, the authors obtain a dual representation of the superreplication cost of a contingent claim. 
In this paper, we investigate the existence of optimizers for utility functions on the whole real line (the positive real axis case was treated in \cite{CR18}) and we relax some rather stringent conditions imposed in \cite{jmaa,ijtaf}. From both a theoretical and a computational viewpoint it is crucial to clarify the relationship between the optimal investments in the finite markets and the one in the whole market. In our setup, it is expected that the value functions in finite markets perform asymptotically as well as the value function in the large market. Considering utility indifference prices, these should also converge as the number of assets increases. While these facts are intuitive, no formal justification has been provided so far. We prove these facts in Theorem \ref{zut} and Corollary \ref{prixut} below. We also prove that certain convex combinations of the optimal portfolios in finite markets perform asymptotically as well as the overall optimizer. Asymptotic results for superhedging and mean-variance hedging have been obtained in \cite{baran,campi}. In the utility maximization context the first such result is Theorem 5.3 of \cite{jmaa}, where it was shown that there exists a sequence of strategies in finite markets whose values converge to the optimal value. That paper, however, assumed that asset price changes may take arbitrarily large negative and positive values, which is a rather strong requirement. Under the more relaxed conditions of the present work we also show the existence of such a sequence; moreover, it can be chosen to consist of averages of finite-market optimizers, see Theorem \ref{matural} below. Section \ref{lmm} presents the model and recalls some useful results from \cite{CR18}. Section \ref{secut} contains the main contributions: the existence of optimizers for utility maximization and the asymptotics from small markets to big markets. \section{The large market model} \label{lmm} Let $({\Omega}, \Fc, P)$ be a probability space.
We consider a two-stage Arbitrage Pricing Model. For any $i \geq 1$, let the return on asset $i$ be given by \begin{eqnarray*} R_i &=& \bar{\beta}_i(\varepsilon_i-b_i),\quad 1\leq i\leq m;\\ R_i &=& \sum_{j=1}^m\beta_i^j(\varepsilon_j-b_j)+\bar{\beta}_i(\varepsilon_i- b_i),\quad i>m, \end{eqnarray*} where the $(\varepsilon_i)_{i \geq 1}$ are random variables and $(\bar{\beta}_i)_{i \geq 1}, (b_i)_{i \geq 1},(\beta_i^j)_{i >m, 1 \leq j \leq m}$ are constants. We refer to \cite{kk1,def, ijtaf} for further discussions on the model. \begin{assumption} \label{un} The $(\e_i)_{i \geq 1}$ are square-integrable, independent random variables satisfying \begin{eqnarray*} E(\varepsilon_i)=0,\quad E\left(\varepsilon_i^2\right)=1,\quad i\geq 1. \end{eqnarray*} \end{assumption} We consider strategies using potentially infinitely many assets and belonging to $$\ell_2:=\left\{(h_i)_{i\geq 1}: \, h_{i}\in\mathbb{R},\, i\geq 1,\, \sum_{i=1}^{\infty}h_i^2<\infty\right\},$$ which is a Hilbert space with the norm $||h||_{\ell_2}:=\sqrt{\sum_{i=1}^{\infty}h_i^2}$. \\ Let $L^2(\Omega, \Fc,P):=\{X:\Omega \to \R, \, E|X|^2< \infty\}$ (denoted by $L^2(P)$ from now on), which is again a Hilbert space with the norm $||X||_{L^2}:=\sqrt{E(|X|^2)}$. For $h \in \ell_2,$ let $\Phi(h):=\sum_{i=1}^{\infty}h_i\e_i,$ where the infinite sum in $\Phi(h)$ has to be understood as the limit in $L^2(P)$ of the partial sums $(\sum_{i=1}^{n}h_i\e_i)_{n \geq 1}$. Then $\Phi$ is an isometry from $\ell_2$ to $L^2(P).$ \begin{assumption}\label{b} We have $\|b\|_{\ell_2}<\infty$. \end{assumption} Under Assumption \ref{b}, we have (see (5) in \cite{CR18}) that \begin{equation} \label{isol} E\left(\left(\sum_{i=1}^{\infty}h_i(\e_i-b_i)\right)^2\right) \leq (1+\|b\|_{\ell_2}^2) \| h\|^2_{\ell_2}<\infty, \end{equation} and we may consider again the infinite sum $\langle h, \e-b\rangle:=\sum_{i=1}^{\infty}h_i(\e_i -b_i)$.
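For the reader's convenience we record a short derivation of the bound \eqref{isol} (added here; it uses only Assumption \ref{un} and the Cauchy--Schwarz inequality, consistently with (5) of \cite{CR18}):

```latex
% Derivation of (isol). Since the \e_i are independent with
% E(\e_i)=0 and E(\e_i^2)=1, the map \Phi is an isometry:
\[
  E\left(\Big(\sum_{i=1}^{\infty}h_i\e_i\Big)^{2}\right)
  =\sum_{i=1}^{\infty}h_i^2=\|h\|_{\ell_2}^2.
\]
% Expanding the square and using E(\sum_i h_i \e_i)=0,
\[
  E\left(\langle h,\e-b\rangle^2\right)
  =\|h\|_{\ell_2}^2+\langle h,b\rangle^2
  \leq\big(1+\|b\|_{\ell_2}^2\big)\,\|h\|_{\ell_2}^2,
\]
% where the last step is Cauchy--Schwarz:
% \langle h,b\rangle^2 \leq \|h\|_{\ell_2}^2 \|b\|_{\ell_2}^2.
```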
Note that $$E(|\langle h, \e-b\rangle |) \leq\sqrt{E\left(\langle h, \e-b\rangle^2 \right)}\leq \sqrt{1+\|b\|_{\ell_2}^2} \| h\|_{\ell_2}.$$ The (self-financed) value at time $1$ that can be attained starting from $x$ and using a strategy $h$ in $\ell_2$ with infinitely many assets is given by $$ V^{x,h}:= x+\langle h, \e-b\rangle.{} $$ \begin{assumption} \label{AOAfini} For all $i \geq 1$, $$P(\e_i >b_i)>0 \mbox{ and } P(\e_i <b_i)>0.$$ \end{assumption} Fix $N\geq 1$. Using Lemma 3.3 in \cite{CR18}, under Assumptions \ref{un} and \ref{AOAfini}, there exists some ${\alpha}_{N}\in (0,1)$ such that for every $(h_1,\ldots, h_N) \in \R^N$ satisfying $\sum_{i=1}^N h_i^2=1$ we have \begin{eqnarray} \label{aoapetitNalpha} P\left(\sum_{i=1}^N h_i(\e_i - b_i)<-{\alpha}_{N}\right)>{\alpha}_{N}. \end{eqnarray} This is the so-called quantitative no-arbitrage condition on any ``small market'' with $N$ random sources, and it is well known that this condition is equivalent to the existence of an equivalent martingale measure for the finite market with assets $R_{1},\ldots,R_{N}$ (see \cite{dmw} and \cite{follmer-schied}). However, we need the existence of martingale measures for the whole market and even sufficient integrability of the martingale density. We say that EMM2 holds true if \begin{eqnarray} \label{mmset} \Mc_2:=\left\{Q \sim P, \, \frac{dQ}{dP} \in L^2(P), \, E_{Q} (\e_i)=b_i, \, \forall i\geq 1\right\}\neq \emptyset. \end{eqnarray} Unfortunately, Assumptions \ref{un}, \ref{b} and \ref{AOAfini} are not known to be sufficient for ensuring that EMM2 holds true (see Proposition 4 of \cite{def}). Hence we also need the following technical condition. \begin{assumption}\label{trois} We have that \begin{equation}\label{harom} \sup_{i\geq 1} E\left[|\varepsilon_i|^3\right]<\infty.
\end{equation} \end{assumption} \begin{lemma} Under Assumptions \ref{un}, \ref{AOAfini} and \ref{trois}, \begin{equation} \label{eqaaa} \mbox{ Assumption} \, \ref{b} \Longleftrightarrow \mbox{ EMM2.} \end{equation} \end{lemma} \begin{proof} This is Corollary 1 of \cite{def}. \end{proof} Lemma \ref{aoaquant} below asserts that the quantitative no arbitrage condition, mentioned above, holds true in the large market, too. \begin{lemma} \label{aoaquant} Assume that Assumptions \ref{un}, \ref{b}, \ref{AOAfini} and \ref{trois} hold true. Then there exists some $\a>0$, such that for all $h \in \ell_2$ satisfying $\|h\|_{\ell_2}=1$ $$P(\langle h,\e\rangle <-\a) > \a.$$ \end{lemma} \begin{proof} This is Proposition 3.11 in \cite{CR18}. \end{proof} \begin{remark} \label{nulla} {\rm If $Q\in\mathcal{M}_2$ is such that $dQ/dP\in L^2$ and if Assumption \ref{b} holds true then $E_Q\left(V^{0,h}\right)=0$ for all $h \in \ell_2$, see Remark 3.4 of \cite{CR18}.} \end{remark} Lemma \ref{miki} below will be used in the proofs of Theorems \ref{csonti} and \ref{zut} in order to show uniform integrability. \begin{lemma} \label{miki} Assume that $\sup_{i \geq 1} E|\varepsilon_i|^{\gamma}<\infty$ for some $\gamma\geq 2$. Then there is a constant $C_{\gamma}$ such that, for all $h\in\ell_2$ $$ E|\langle h,\varepsilon-b\rangle|^{\gamma}\leq C_{\gamma}\Vert h\Vert_{\ell_2}^{\gamma}(1+\Vert b\Vert_{\ell_2}^{\gamma}). $$ \end{lemma} \begin{proof} This is Lemma 3.7 in \cite{CR18}. \end{proof} We note an important consequence: under Assumption \ref{trois}, for any $c>0$, $\{|V^{x,h}|, \,h \in \ell_2, \, \|h\|_{\ell_2}\leq c \}$ are uniformly integrable. \section{Utility maximisation} \label{secut} It is standard (see \cite{von}) to model economic agents' preferences by concave increasing utility functions $U$. 
So suppose that $U:\mathbb{R}\to\mathbb{R}$ is a concave strictly increasing differentiable function and that for some $x_0 \in \R$ \begin{eqnarray} \label{norma} U(x_0)=0 \mbox{ and } U'(x_0)=1. \end{eqnarray} For a claim $G \in L^0$ and $x \in \mathbb{R} $, we define $$\mathcal{A}(U,G,x):=\left\{ h \in \ell_2,\;E U^{-}(V^{x,h}-G)<+\infty \right\}.$$ Define the supremum of expected utility at the terminal date when delivering a contingent claim $G$, starting from initial wealth $x \in \R$, by \begin{eqnarray} \label{gnon} u(G,x):=\sup_{h \in\mathcal{A}(U,G,x)}EU(V^{x,h}-G). \end{eqnarray} The following assumptions will be needed in Theorems \ref{csonti} and \ref{zut}. \begin{assumption} \label{queuneg1} There exist constants $C_1 \in (0,\infty)$, $C_2 \in \mathbb{R}_{+}$ and $\beta>1$ such that for all $x \leq x_0$ $$|U(x)|\geq C_1|x|^{\beta} -C_2.$$ \end{assumption} \begin{assumption} \label{queuneg2} There exist constants $C_3 \in (0,\infty)$, $C_4 \in\mathbb{R}_{+}$ and $\gamma\geq \beta \wedge 2$ such that for all $x \in \R$ $$U^-(x)\leq C_3|x|^{\gamma} +C_4$$ and \begin{equation}\label{harom2} \sup_{i\geq 1} E\left[|\varepsilon_i|^{\gamma}\right]<\infty. \end{equation} \end{assumption} \begin{assumption} \label{intG} We have $G \geq 0$ a.s.\ and it satisfies $|E(U(x-G))|<+\infty$, for all $x \in \R$. \end{assumption} Assumption \ref{intG} is satisfied, in particular, when $G$ is nonnegative, measurable and bounded. \begin{remark} Let $U$ be concave, strictly increasing and differentiable, satisfying Assumptions \ref{queuneg1}, \ref{queuneg2} and \ref{intG}. Then \eqref{norma} actually imposes no restriction on $U$. Indeed, as $U$ cannot be constant, there exists $x_0 \in \R$ such that $U'(x_0)>0$. Define $$V(x)=\frac{U(x)}{U'(x_0)}-\frac{U(x_0)}{U'(x_0)},$$ which obviously satisfies \eqref{norma}.
Moreover, \begin{eqnarray*} |V(x)| & \geq & \frac{C_1}{U'(x_0)} |x|^{\beta} -\frac{C_2}{U'(x_0)} -\frac{|U(x_0)|}{U'(x_0)}, \; \; x \leq x_0\\ V^-(x)& \leq & \frac{C_3}{U'(x_0)} |x|^{\gamma} +\frac{C_4}{U'(x_0)} +\frac{U^+(x_0)}{U'(x_0)}, \; \; x \in \R \\ |E(V(-G))| & \leq & \frac{|E(U(-G))|}{U'(x_0)} +\frac{|U(x_0)|}{U'(x_0)}<\infty. \end{eqnarray*} So Assumptions \ref{queuneg1}, \ref{queuneg2} and \ref{intG} hold true for $V$. One may apply Theorems \ref{csonti}, \ref{zut} and Corollary \ref{prixut} below to $V$ and then these same results can be deduced for $U$, too. \end{remark} The following lemmata will be used in the proofs of Theorems \ref{csonti} and \ref{zut}. \begin{lemma} \label{toutva} Let Assumption \ref{b} hold true and assume $G \geq 0$ a.s. Then for all $y \in \R$ and $h \in \ell_2$ \begin{eqnarray} \label{ilfaitbeau} U^+(y+\langle h, \e-b\rangle-G) & \leq & |x_0|+ |y+\langle h, \e-b\rangle|. \end{eqnarray} \end{lemma} \begin{proof} As $U$ is increasing, concave and differentiable, recalling \eqref{norma}, we get for all $y \in \R$, \begin{eqnarray*} U(y) & \leq & U(\max(x_0,y))\leq U(x_0)+ \max(y-x_0,0) U\rq{}(x_0)\\ & \leq & \max(y-x_0,0)\leq |y-x_0|\leq |y| + |x_0|. \end{eqnarray*} Let $h \in \ell_2$, we get that \begin{eqnarray} \nonumber & & U^+(y+\langle h, \e-b\rangle-G) \leq U^+(y+\langle h, \e-b\rangle) \\ \nonumber & \leq & U^+(y+\langle h, \e-b\rangle) 1_{y+\langle h, \e-b\rangle \geq x_0} + U^+(x_0) 1_{y+\langle h, \e-b\rangle <x_0} \\ \nonumber & = & U(y+\langle h, \e-b\rangle) 1_{y+\langle h, \e-b\rangle \geq x_0} \leq |x_0|+|y+\langle h, \e-b\rangle|. \end{eqnarray} \end{proof} Lemma \ref{hborne} asserts that an optimal solution for \eqref{gnon} must be bounded. \begin{lemma} \label{hborne} Assume that Assumptions \ref{un}, \ref{b}, \ref{AOAfini}, \ref{trois}, \ref{queuneg1} and \ref{intG} hold true. 
Let $x \in \R.$ There exists some constant $M_{x,G} >0$ such that if $h \in \ell_2$ satisfies $$\|h\|_{\ell_2}> M_{x,G}$$ then the $0$ strategy performs better than $h$, that is, $$ EU(x-G) > EU(x+\langle h,\e-b\rangle-G). $$ \end{lemma} \begin{proof} Let $x \in \R$ and $h \in \ell_2$. Recall $\a>0$ from Lemma \ref{aoaquant}. As $b \in \ell_2$, there exists some $n_{\a}\geq 1$ such that $\left(\sum_{i \geq n_{\a}+1} b_i^2\right)^{1/2} \leq \alpha/2$. Let \begin{eqnarray*} \underline h:=(h_1,\ldots,h_{n_{\a}},0,\ldots) &\mbox{ and }& \underline b:=(b_1,\ldots,b_{n_{\a}},0,\ldots)\\ \overline h:=(0,\ldots,0,h_{n_{\a}+1},\ldots) & \mbox{ and }& \overline b:=(0,\ldots,0,b_{n_{\a}+1},\ldots). \end{eqnarray*} From the no-arbitrage condition in the market with ${n_{\a}}$ assets (see \eqref{aoapetitNalpha}) there exists $\a_{n_{\a}}$ such that $P(A)>\a_{n_{\a}},$ where $A:=\left\{\sum_{i=1}^{n_{\a}} h_i(\e_i-b_i)<-\a_{n_{\a}}\|\underline h\|_{\ell_2}\right\}$. Let $B:=\left\{\sum_{i \geq n_{\a}+1} h_i\e_i\leq -\a \|\overline h\|_{\ell_2} \right\}$; then $P(B)>\a$ (recall Lemma \ref{aoaquant}). As the $(\e_i)_{i\geq 1}$ are independent, we get that $P(A\cap B)=P(A)P(B)>\a_{n_{\a}} \a$. On $A\cap B$, \begin{eqnarray*} \langle h,\e-b\rangle & = & \langle\underline h,\e-b\rangle + \langle\overline h,\e-b\rangle \leq -\a_{n_{\a}}\|\underline h\|_{\ell_2} -\a \|\overline h\|_{\ell_2} - \langle\overline h,\overline b\rangle \\ & \leq & -\a_{n_{\a}}\|\underline h\|_{\ell_2} -\a \|\overline h\|_{\ell_2} + \|\overline b\|_{\ell_2}\|\overline h\|_{\ell_2}\\ & \leq & -\a_{n_{\a}}\|\underline h\|_{\ell_2} -\a \|\overline h\|_{\ell_2} + \a/2\|\overline h\|_{\ell_2} \leq -\overline{\a} (\|\underline h\|_{\ell_2}+\|\overline h\|_{\ell_2}), \end{eqnarray*} where $\overline{\a}=\inf(\a_{n_{\a}},\a/2)$. Thus $P(\langle h,\e-b\rangle <-\overline{\a} (\|\underline h\|_{\ell_2}+\|\overline h\|_{\ell_2}))>\a_{n_{\a}} \a$.
Assume that $\|\underline h\|_{\ell_2}+\|\overline h\|_{\ell_2} \geq \max \left( \frac{x-x_0}{\overline{\a}},\frac{|x|}{\overline{\a}}\right)$. Then applying Lemma \ref{toutva} and Assumption \ref{queuneg1}, we get that \begin{eqnarray*} EU(V^{x,h}-G) & \leq & EU(x+\langle h,\e-b\rangle) \\ & \leq & E\left(U(x+\langle h,\e-b\rangle)1_{\langle h,\e-b\rangle <-\overline{\a} (\|\underline h\|_{\ell_2}+\|\overline h\|_{\ell_2})}\right) +\\ & & E\left(U^+(x+\langle h,\e-b\rangle)1_{\langle h,\e-b\rangle \geq -\overline{\a} (\|\underline h\|_{\ell_2}+\|\overline h\|_{\ell_2})}\right)\\ & \leq & U(x-\overline{\a} (\|\underline h\|_{\ell_2}+\|\overline h\|_{\ell_2})) \a_{n_{\a}} \a + |x_0|+ E|x+\langle\underline h,\e-b\rangle + \langle\overline h,\e-b\rangle|\\ & \leq & U(x-\overline{\a} (\|\underline h\|_{\ell_2}+\|\overline h\|_{\ell_2})) \a_{n_{\a}} \a + |x_0|+|x|+\|\underline h\|_{\ell_2}\sqrt{1+ \|\underline b\|^2_{\ell_2}}\\ & & +\|\overline h\|_{\ell_2}\sqrt{1+ \|\overline b\|^2_{\ell_2}} \\ & \leq & \left(-C_1 \left|\overline{\a} (\|\underline h\|_{\ell_2}+\|\overline h\|_{\ell_2})-x\right|^{\beta}+ C_2 \right)\a_{n_{\a}} \a + |x_0|+|x|\\ & &+(\|\underline h\|_{\ell_2} +\|\overline h\|_{\ell_2})\sqrt{1+ \|b\|^2_{\ell_2}}\\ & \leq & \left(-C_1 \overline{\a} ^{\beta} (\|\underline h\|_{\ell_2}+\|\overline h\|_{\ell_2})^{\beta}+ C_2 \right)\a_{n_{\a}} \a + |x_0|+|x|\\ & &+(\|\underline h\|_{\ell_2}+\|\overline h\|_{\ell_2})\sqrt{1+ \|b\|^2_{\ell_2}}, \end{eqnarray*} because $U(x-\overline{\a} (\|\underline h\|_{\ell_2}+\|\overline h\|_{\ell_2})) \leq U(x_0) =0$ and \begin{eqnarray*}\left|\overline{\a} (\|\underline h\|_{\ell_2}+\|\overline h\|_{\ell_2})-x\right|^{\beta} &\geq& \left|\overline{\a} (\|\underline h\|_{\ell_2}+\|\overline h\|_{\ell_2})-|x|\right|^{\beta} =\left(\overline{\a} (\|\underline h\|_{\ell_2}+\|\overline h\|_{\ell_2})-|x|\right)^{\beta} \\ &\geq &\overline{\a} ^{\beta} (\|\underline h\|_{\ell_2}+\|\overline h\|_{\ell_2})^{\beta}.
\end{eqnarray*} Assume that \begin{eqnarray*} (\|\underline h\|_{\ell_2}+\|\overline h\|_{\ell_2})\sqrt{1+ \|b\|^2_{\ell_2}} -\frac{C_1}2 \a_{n_{\a}} \a\overline{\a}^{\beta}(\|\underline h\|_{\ell_2}+\|\overline h\|_{\ell_2})^{\beta} < 0\\ -\frac{C_1}2 \overline{\a}^{\beta}\a_{n_{\a}} \a (\|\underline h\|_{\ell_2}+\|\overline h\|_{\ell_2})^{\beta} + |x_0| + |x| + C_2\a_{n_{\a}} \a < -|EU(x-G)| \leq EU(x-G), \end{eqnarray*} which is true if $\|\underline h\|_{\ell_2}+\|\overline h\|_{\ell_2}> \overline M_{x,G},$ where \begin{eqnarray*} \overline M_{x,G} &:= & \max \left(\left(2\frac{|x_0| +|x| + C_2\a_{n_{\a}} \a +|E(U(x-G))|}{{C_1} \a_{n_{\a}} \a \overline{\a}^{\beta}}\right)^{\frac1{\b}}, \left(2\frac{\sqrt{1+ \|b\|^2_{\ell_2}}}{{C_1} \a_{n_{\a}} \a \overline{\a}^{\beta}}\right)^{\frac1{\b-1}}\right). \end{eqnarray*} Then, setting $M_{x,G}:=\max \left(\frac{x-x_0}{\overline \a}, \frac{|x|}{\overline \a},\overline M_{x,G}\right),$ if $\|\underline h\|_{\ell_2}+\|\overline h\|_{\ell_2} >{M_{x,G}},$ \begin{eqnarray}\label{lajta} EU(V^{x,h}-G) & < & EU(x-G) \end{eqnarray} so the strategy $0$ performs better than $h$. It follows that $\| h\|_{\ell_2} >M_{x,G}$ implies \eqref{lajta} since $$\| h\|_{\ell_2}=\left(\|\underline h\|^2_{\ell_2} + \|\overline h\|^2_{\ell_2}\right)^{\frac12} \leq \|\underline h\|_{\ell_2} + \|\overline h\|_{\ell_2}.$$ \end{proof} Now we present our first main result. We establish the existence of an optimizer for the utility maximization problem. In \cite{ijtaf} this was shown assuming uniformly bounded exponential moments for the $\varepsilon_{i}$. In \cite{jmaa} the moment condition was weak but it was assumed that all the $\varepsilon_{i}$ take arbitrarily large negative and positive values. Here we do not need the latter assumption and merely assume \eqref{harom} and \eqref{harom2}. \begin{theorem}\label{csonti} Assume that Assumptions \ref{un}, \ref{b}, \ref{AOAfini}, \ref{trois}, \ref{queuneg1}, \ref{queuneg2} and \ref{intG} hold true. 
Let $x\in \R$. There exists $h^*\in\mathcal{A}(U,G,x)$ such that $$ u(G,x)=EU(V^{x,h^*}-G). $$ \end{theorem} \begin{proof} Let $x \in \R$ and let $h_n\in\mathcal{A}(U,G,x)$ be a sequence such that $$ EU(V^{x,h_n}-G)\uparrow u(G,x),\ n\to\infty. $$ If $\|h_n\|_{\ell_2}>M_{x,G},$ then using Lemma \ref{hborne}, we can replace $h_n$ by $0$ and still have a maximising sequence. So one can assume that $\sup_{n\in\mathbb{N}} \|h_n\|_{\ell_2}\leq M_{x,G}<\infty.$ Hence, as $\ell_2$ has the Banach-Saks property, there exists a subsequence $(n_k)_{k\geq 1}$ and some $h^*\in \ell_2$ such that for $\widetilde{h}_n:=\frac{1}{n}\sum_{k=1}^n h_{n_k}$ $$\|\widetilde{h}_n-h^*\|_{\ell_2}\to 0, \,n\to\infty.$$ Using \eqref{isol}, we get that \begin{eqnarray*} E\langle \widetilde{h}_n-h^*,\varepsilon-b\rangle^2 & \leq & \| \widetilde{h}_n-h^*\|^2_{\ell_2}(1+\|b\|^2_{\ell_2})\to 0, \end{eqnarray*} when $n\to\infty$. In particular, $\langle \widetilde{h}_n-h^*,\varepsilon-b\rangle\to 0$, $n\to\infty$ in probability. Hence also $U(V^{x,\widetilde{h}_n}-G)\to U(V^{x,h^*}-G)$ in probability by continuity of $U$. We claim that the family $U^+(V^{x,\widetilde{h}_n}-G)$, $n\in\mathbb{N}$ is uniformly integrable. Indeed, from \eqref{ilfaitbeau} $$ U^+(V^{x,\widetilde{h}_n}-G)\leq |x_0|+ |V^{x,\widetilde{h}_n}|. $$ We know that $\sup_{n\in\mathbb{N}} \|\widetilde{h}_n\|_{\ell_2}\leq M_{x,G}<\infty$. Hence from Assumption \ref{trois} (see Lemma \ref{miki}), we get that $\{U^+(V^{x,\widetilde{h}_n}-G), \, n\in\mathbb{N}\}$ is uniformly integrable.
Fatou's lemma used for $-U^-$ implies that $$E\left(-U^-(V^{x,h^*}-G)\right)\geq \limsup_{n\to\infty}E\left(-U^-(V^{x,\widetilde{h}_n}-G)\right),$$ and uniform integrability guarantees that $$\lim_{n\to\infty}E\left(U^+(V^{x,\widetilde{h}_n}-G)\right)= E\left(U^+(V^{x,{h}^*}-G)\right).$$ Thus, by concavity of $U$ $$ EU(V^{x,h^*}-G)\geq \limsup_{n\to\infty}EU(V^{x,\widetilde{h}_n}-G)\geq \lim_{n\to\infty}EU(V^{x,{h}_n}-G)=u(G,x), $$ and the proof will be finished as soon as we show $h^* \in \mathcal{A}(U,G,x)$. From Assumption \ref{queuneg2} and Lemma \ref{miki}, \begin{eqnarray} \nonumber EU^-(V^{x,\widetilde{h}_n}-G) &\leq & C_3E|V^{x,\widetilde{h}_n}-G|^{\gamma} +C_4 \\ \nonumber &\leq& C_3\left(2^{\gamma-1}(|x|^{\gamma}+ E|\langle\widetilde{h}_n, \e-b\rangle|^{\gamma})\right) +C_4\\ \label{admiss} &\leq & C_3\left(2^{\gamma-1}\left(|x|^{\gamma}+C_{\gamma} M_{x,G}^{\gamma} \left(1+\|b \|_{\ell_2}^{\gamma}\right)\right)\right) +C_4. \end{eqnarray} Fatou's lemma used for $U^-$ implies that \begin{eqnarray*} E\left(U^-(V^{x,h^*}-G)\right) & \leq & \liminf_{n\to\infty}E\left(U^-(V^{x,\widetilde{h}_n}-G)\right) \\& \leq & C_3\left(2^{\gamma-1}\left(|x|^{\gamma}+C_{\gamma} M_{x,G}^{\gamma} \left(1+\|b \|_{\ell_2}^{\gamma}\right)\right)\right) +C_4. \end{eqnarray*} \end{proof} We consider now the problem of optimization in the small market $n$ with only the random sources $(\e_i)_{1 \leq i \leq n}.$ Let $$\mathcal{A}_n(U,G,x):=\left\{ h \in \ell_2,\; h_i=0, \, \forall i \geq n+1, \, E U^{-}(V^{x,h}-G)<+\infty \right\}.$$ Note that $\mathcal{A}_n(U,G,x) \subset \mathcal{A}_{n+1}(U,G,x) \subset \ldots \subset \mathcal{A}(U,G,x).$ We set for $n\in\mathbb{N}$ \begin{eqnarray} \label{maxupetit} u_n(G,x):=\sup_{h\in\mathcal{A}_n(U,G,x)}EU(V^{x,h}-G). \end{eqnarray} Now we arrive at the principal message of our paper: optimization problems in the small markets behave consistently with those on the big market, in a natural way.
\begin{theorem}\label{matural} \label{zut} Assume that Assumptions \ref{un}, \ref{b}, \ref{AOAfini}, \ref{trois}, \ref{queuneg1}, \ref{queuneg2} and \ref{intG} hold true. Then for each $x\in \R$, we have $u_n(G,x)\uparrow u(G,x)$, $n\to\infty$. \\ Let $h_n^*$ be an optimal solution for \eqref{maxupetit}\footnote{which exists by the argument of Theorem \ref{csonti}.}. Then there exists a subsequence $(n_k)_{k\geq 1}$ and some $\widehat h\in \ell_2,$ optimal solution of \eqref{gnon}, such that for $\widehat{h}_n:=\frac{1}{n}\sum_{k=1}^n h^*_{n_k}$, $\|\widehat{h}_n-\widehat h\|_{\ell_2}\to 0, \,n\to\infty$. \end{theorem} \begin{proof} The sequence $u_n(G,x)$, $n\in\mathbb{N}$ is clearly non-decreasing and it is bounded from above by $u(G,x)$. Let $\bar{h}_n:=(\widetilde{h}_1,\ldots,\widetilde{h}_n,0,\ldots)$, $n\in\mathbb{N}$, where $\widetilde{h}$ is the optimizer constructed in Theorem \ref{csonti}. Using \eqref{isol} and $\widetilde{h} \in \ell_2$, we have $$ E\langle \bar{h}_n-\widetilde{h},\varepsilon-b\rangle^2\to 0,\ n\to\infty, $$ hence also $\langle \bar{h}_n,\varepsilon-b\rangle\to \langle \widetilde{h},\varepsilon-b\rangle$, $n \to \infty$ in probability. Fatou's lemma for $U^+$ shows that $$ EU^+(V^{x,\widetilde{h}}-G)\leq \liminf_{n\to\infty}EU^+(V^{x,\bar{h}_n}-G). $$ Now we show that the family $U^-(V^{x,\bar{h}_n}-G)$, $n\in\mathbb{N}$ is uniformly integrable. As in \eqref{admiss} we get that \begin{eqnarray*} EU^-(V^{x,\bar{h}_n}-G) &\leq & C_3\left(2^{\gamma-1}\left(|x|^{\gamma}+C_{\gamma} M_{x,G}^{\gamma} \left(1+\|b \|_{\ell_2}^{\gamma}\right)\right)\right) +C_4, \end{eqnarray*} since $\widetilde{h}$ is optimal and thus $\|\bar{h}_n\|_{\ell_2}\leq\|\widetilde{h}\|_{\ell_2} \leq M_{x,G}$ (see Lemma \ref{hborne}). We also obtain that $\bar{h}_n \in \mathcal{A}_n(U,G,x)$. Uniform integrability implies that $$ EU^-(V^{x,\widetilde{h}}-G)= \lim_{n\to\infty}EU^-(V^{x,\bar{h}_n}-G).
$$ It follows that $$ u(G,x)=EU(V^{x,\widetilde{h}}-G)\leq \liminf_{n\to\infty}EU(V^{x,\bar{h}_n}-G)\leq \lim_{n\to\infty}u_n(G,x) \leq u(G,x). $$ Let $h_n^*\in\mathcal{A}_n(U,G,x)$ be an optimal solution for \eqref{maxupetit}. As in the proof of Lemma \ref{hborne}, $\|{h}^*_n\|_{\ell_2} \leq M_{x,G}$. We proceed as in the proof of Theorem \ref{csonti}. By the Banach-Saks property, there exists a subsequence $(n_k)_{k\geq 1}$ such that for $\widehat{h}_n:=\frac{1}{n}\sum_{k=1}^n h^*_{n_k},$ $\|\widehat{h}_n-\widehat h\|_{\ell_2}\to 0, \,n\to\infty$ for some $\widehat h \in\ell_2$. The arguments of the proof of Theorem \ref{csonti} apply verbatim and show that $\widehat{h}$ is an optimizer for the utility maximization problem \eqref{gnon} in the large market. \end{proof} \begin{remark} If $U$ is strictly concave then the optimizer is unique and hence $h^{*}$ of Theorem \ref{csonti} equals $\widehat{h}$ of Theorem \ref{matural}. \end{remark} The corollary below addresses the problem of convergence of the reservation prices $p_{n}$ and $p$. The latter were introduced in \cite{hodges-neuberger}. \begin{corollary} \label{prixut} Assume that Assumptions \ref{un}, \ref{b}, \ref{AOAfini}, \ref{trois}, \ref{queuneg1}, \ref{queuneg2} and \ref{intG} hold true. The reservation price of $G$ in the market with the random sources $(\e_i)_{1 \leq i \leq n}$ (resp. with $(\e_i)_{ i \geq 1}$) is defined as a solution of \begin{eqnarray*} u_n(G,x+p_n) &= & u_n(0,x),\\ u(G,x+p) & = & u(0,x). \end{eqnarray*} These quantities are well-defined and we have $p_n\to p$, $n\to\infty$. \end{corollary} \begin{proof} We justify the definition of $p$, the case of $p_{n}$ being completely analogous. We show that the set $\{u(G,x), \,x\in\mathbb{R}\}$ is the same as $\{u(0,x), \,x\in\mathbb{R}\}$. We claim that $u(G,x)$, $u(0,x)$ are finite for all $x$. Indeed, Assumption \ref{intG}, Lemmata \ref{toutva} and \ref{hborne} imply that $-\infty<u(G,x)\leq u(0,x)<\infty$.
As $u$ is monotone and concave (hence continuous on its effective domain), it suffices to show that \begin{equation}\label{shovel} u(G,-\infty)=u(0,-\infty)=-\infty,\ u(G,\infty)=u(0,\infty)=U(\infty) \end{equation} and that $u(G,x),u(0,x)<U(\infty)$ for all $x$ because in this case $\{u(G,x), \,x\in\mathbb{R}\}=\{u(0,x), \,x\in\mathbb{R}\}=(-\infty,U(\infty))$. We first concentrate on the latter claim. If $U(\infty)=\infty$ then this is obvious. Otherwise denote by $h'$, $h''$ the strategies attaining $u(0,x)$, $u(G,x)$, respectively. Then, by the strictly increasing property of $U$, we have \begin{equation}\label{kelmajd} u(0,x)=EU(x+\langle h',\varepsilon-b\rangle)<EU(\infty)=U(\infty) \end{equation} and \begin{equation*}\label{kelmajd2} u(G,x)=EU(x+\langle h'',\varepsilon-b\rangle-G)<EU(\infty)=U(\infty). \end{equation*} Now we turn to showing \eqref{shovel}. It is clear that $u(G,\infty),u(0,\infty)\leq U(\infty)$ and \begin{equation}\label{romania} u(0,\infty)=\lim_{x\to\infty}u(0,x)\geq \lim_{x\to\infty}U(x)=U(\infty).{} \end{equation} Assumption \ref{intG} and Fatou's lemma also imply that $$ u(G,\infty) \geq \liminf_{x\to\infty}u(G,x)\geq\liminf_{x\to\infty}EU(x-G)\geq U(\infty). $$ Since $u(G,x)\leq u(0,x)$, it is enough to establish $\lim_{x\to -\infty}u(0,x)=-\infty$. By concavity, this is clearly the case if $u(0,\cdot)$ is not the constant function. But if $u(0,\cdot)=c$ then we would necessarily have $c\geq U(\infty)$ by \eqref{romania}, which contradicts \eqref{kelmajd}. We now turn to proving convergence. Arguing by contradiction, let us assume that, along a subsequence (which we continue to denote by $n$), one has $p_n\to \underline{p}$ for some $\underline{p}<p$ (the case of a limit $\overline{p}>p$ is analogous). It follows that there is $N$ such that, for $n\geq N$, $p_n<(p+\underline{p})/2<p$.
Using Theorem \ref{csonti}, let $h^{\dagger}\in \mathcal{A}(U,G,x+(p+\underline{p})/2) \subset \mathcal{A}(U,G,x+p)$ satisfy $$ u(G,x+(p+\underline{p})/2)=EU(x+(p+\underline{p})/2+\langle h^{\dagger},\varepsilon-b\rangle -G). $$ Then, the definition of the reservation prices and Theorem \ref{zut} imply that \begin{eqnarray*} & & \limsup_{n\to\infty}u_n(G,x+p_n)\leq \limsup_{n\to\infty}u_n(G,x+(p+\underline{p})/2)\\ & =& u(G,x+(p+\underline{p})/2)= EU(x+(p+\underline{p})/2+\langle h^{\dagger},\varepsilon-b\rangle-G)\\ &<& EU(x+p+\langle h^{\dagger},\varepsilon-b\rangle-G)\leq u(G,x+p)\\ &=& u(0,x)=\lim_{n\to\infty}u_n(0,x) = \lim_{n\to\infty}u_n(G,x+p_n), \end{eqnarray*} a contradiction. \end{proof} \medskip \noindent\textbf{Acknowledgments} \smallskip \noindent M.R. was supported by the National Research, Development and Innovation Office, Hungary [Grant KH 126505] and by the ``Lend\"ulet'' programme of the Hungarian Academy of Sciences [Grant LP 2015-6].
\section{Introduction} The reliance of natural language understanding models on the information in pre-trained word embeddings prevents these models from being applied reliably to rare words or technical vocabulary. To overcome this vulnerability, a model must be able to compensate for a poorly modeled word embedding with background knowledge to complete the required task. For example, a natural language inference (NLI) model based on pre-2020 word embeddings may not be able to deduce from ``Jack has COVID'' that ``Jack is sick.'' By providing the definition, ``COVID is a respiratory disease,'' we want to assist this classification. We describe a general procedure for enhancing a classification model, such as an NLI or sentiment classification model, to perform the same task on sequences including poorly modeled words using definitions of those words. From the training set $\mathcal{T}$ of the original model, we construct an augmented training set $\mathcal{T}^\prime$ for a model that may accept the same token sequence optionally concatenated with a word definition. In the case of NLI, where there are two token sequences, the definition is concatenated to the premise sequence. Because $\mathcal{T}^\prime$ has the same form as $\mathcal{T}$, a model accepting the augmented information may be trained in the same way as the original model. Because there are not enough truly untrained words like ``COVID'' in natural examples, we probe performance by scrambling real words so that their word embeddings become useless, and supplying definitions. Our method recovers most of the performance lost by scrambling. Moreover, the proposed technique removes biases present in more {\em ad hoc} solutions like adding definitions to examples without special training. \section{Related Work} We focus on NLI because it depends more deeply on word meaning than sentiment or topic classification tasks.
\citet{chen-etal-2018-neural-natural} pioneered the addition of background information to an NLI model's classification on a per-example basis, augmenting a sequence of token embeddings with features encoding WordNet relations between pairs of words, to achieve a 0.6\% improvement on the SNLI \citep{bowman-etal-2015-large} task. Besides this explicit reasoning approach, implicit reasoning over background knowledge can be achieved if one updates the base model itself with background information. \citet{lauscher-etal-2020-common} follows this approach to add information from ConceptNet \citep{conceptnet} and the Open Mind Common Sense corpus \citep{omcs} through a fine-tuned adapter added to a pretrained language model, achieving better performance on subsets of NLI examples that are known to require world knowledge. \citet{leapthought} explore the interplay between explicitly added knowledge and implicitly stored knowledge on artificially constructed NLI problems that require counting or relations from a taxonomy. In the above works, explicit background information comes from a taxonomy or knowledge base. Only a few studies have worked with definition text directly, and not in the context of NLI. \citet{tissier-etal-2017-dict2vec} used definitions to create embeddings for better performance on word similarity tasks, compared to word2vec \citep{mikolov2013} and fastText \citep{bojanowski2017} while maintaining performance on text classification. Recently, \citet{debiasing} used definitions to remove biases from pretrained word embeddings while maintaining coreference resolution accuracy. In contrast, our work reasons with natural language definitions without forming a new embedding. \section{Methods} \subsection{Critical words} The enhanced training set $\mathcal{T}^\prime$ will be built by providing definitions for words in existing examples, while obfuscating the existing embeddings of those words. 
If a random word of the original text is obfuscated, the classification still may be determined or strongly biased by the remaining words. To ensure the definitions matter, we select carefully. To explain which words of a text are important for classification, \citet{kim-etal-2020-interpretation} introduced the idea of input marginalization. Given a sequence of tokens $\x$, such that $\x_{-i}$ represents the sequence without the $i$th token $x_i$, they marginalize the probability of predicting a class $y_c$ over possible replacement words $\tilde{x_i}$ in the vocabulary $\mathcal{V}$ as \begin{equation} p(y_c | \x_{-i}) = \sum_{\tilde{x_i} \in \mathcal{V}} p(y_c | \tilde{x_i}, \x_{-i}) p(\tilde{x_i} | \x_{-i}) \end{equation} and then compare $p(y_c | \x_{-i})$ to $p(y_c | \x)$ to quantify the importance of $x_i$. The probabilities $p(\tilde{x_i} | \x_{-i})$ are computed by a language model. We simplify by comparing only the predicted classifications rather than the probabilities. Like \citet{kim-etal-2020-interpretation}, we truncate the computation of $p(y_c | \tilde{x_i}, \x_{-i})$ to words such that $p(\tilde{x_i} | \x_{-i})$ exceeds a threshold, here .05. Ultimately we mark a word $x_i$ as a {\em critical word} if there exists a replacement $\tilde{x_i}$ such that \begin{equation} \argmax_y p(y | \tilde{x_i}, \x_{-i}) \neq \argmax_y p(y | \x) \end{equation} and \begin{equation} p(\tilde{x_i} | \x_{-i}) > .05 \ldotp \end{equation} Additionally we require that the word not appear more than once in the example, because the meaning of repeated words usually impacts the classification less than the fact that they all match. Table~\ref{tab:critical} shows an example. \begin{table}[htb] \begin{tabular}{p{.75in}p{2in}} \hline Premise & A young man sits, looking out of a {\em train} [side $\rightarrow$ Neutral, small $\rightarrow$ Neutral] window. \\ Hypothesis & The man is in his room.
\\ Label & Contradiction \\ \hline \end{tabular} \caption{An SNLI example, with critical words shown in italics and replacements shown in brackets.} \label{tab:critical} \end{table} A technicality remains because our classification models use subwords as tokens, whereas we consider replacements of whole words returned by \verb+pattern.en+. We remove all subwords of $x_i$ when forming $\x_{-i}$, but we consider only replacements $\tilde{x_i}$ that are a single subword long. \subsection{Definitions} We use Wiktionary as a source of definitions. The code of \citet{tissier-etal-2017-dict2vec} downloaded definitions from four commercial online dictionaries, but these are no longer freely available online as of January 2021. When possible, we look for a definition in the Simple English Wiktionary, because these definitions refer to more common usages of words and are written using simpler language. If one is not found, we consult the regular English Wiktionary.\footnote{We use the 2018-02-01 dumps.} To define a word, first we find its part of speech in the original context and lemmatize the word using the \verb+pattern.en+ library \cite{pattern-en}. Then we look for a section labeled ``English'' in the retrieved Wiktionary article, and for a subsection for the part of speech we identified. We extract the first numbered definition in this subsection. There is no guarantee that this sense of the word matches the sense used in the text, but since the word embedding for any other word would be determined only by its spelling, we expect good performance even if a different sense of the word is chosen. In practice, we find that this method usually gives us short, simple definitions that match the usage in the original text. When defining a word, we always write its definition as ``{\em word} means: {\em definition}.'' This common format ensures that the definitions and the word being defined can be recognized easily by the classifier. 
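The critical-word criterion above can be sketched as follows; \verb+p_lm+ and \verb+p_clf+ are hypothetical interfaces standing in for the BERT language model and the NLI classifier, and 0.05 is the truncation threshold used in the paper.

```python
def critical_words(tokens, p_lm, p_clf, threshold=0.05):
    """Return indices of critical words.

    p_lm(i, tokens) -> {replacement word: LM probability} and
    p_clf(tokens) -> {class: probability} are assumed callables
    standing in for BERT and the NLI classifier.
    """
    base = p_clf(tokens)
    base_label = max(base, key=base.get)
    critical = []
    for i, word in enumerate(tokens):
        if tokens.count(word) > 1:      # skip repeated words
            continue
        for repl, p in p_lm(i, tokens).items():
            if p <= threshold:          # truncate unlikely replacements
                continue
            probs = p_clf(tokens[:i] + [repl] + tokens[i + 1:])
            if max(probs, key=probs.get) != base_label:
                critical.append(i)      # some replacement flips the label
                break
    return critical
```

For the example in Table~\ref{tab:critical}, a replacement such as ``side'' with language-model probability above the threshold flips the predicted label, marking ``train'' as critical.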
\subsection{Enhancing a model} \subsubsection{Without scrambling} Consider an example $(\x, y_c) \in \mathcal{T}$. If the example has a critical word $x_i \in \x$ that appears only once in the example, and $\tilde{x_i}$ is the most likely replacement word that changes the classification, we let $\x^\prime$ denote the sequence where $x_i$ is replaced by $\tilde{x_i}$, and let $y^\prime_c = \argmax_y p(y | \x^\prime)$. If definitions $\h_i$ and $\h^\prime_i$ for $x_i$ and $\tilde{x_i}$ are found by the method described above, we add $(\x, \h_i, y_c)$ and $(\x^\prime, \h^\prime_i, y^\prime_c)$ to the enhanced training set $\mathcal{T}^\prime$. \subsubsection{With scrambling} Scrambling a word prevents the model from relying on a useful word embedding. In this protocol, we generate random strings of letters, of random length between four and twelve letters, to substitute for $x_i$ and $\tilde{x_i}$, while still using the definitions of the original words. If the original words appear in their own definitions, those occurrences are also replaced by the same strings. Unfortunately, the random strings lose any morphological features of the original words. Table~\ref{tab:background} shows an NLI example and the corresponding examples generated for the enhanced training set. \begin{table}[htb] \begin{tabular}{p{.75in}p{2in}} \hline Original & A blond man is drinking from a public fountain. / The man is drinking water. / Entailment \\ Scrambled word & a blond man is drinking from a public yfcqudqqg. yfcqudqqg means: a natural source of water; a spring. / the man is drinking water. / Entailment \\ Scrambled alternate & a blond man is drinking from a public lxuehdeig. lxuehdeig means: lxuehdeig is a transparent solid and is usually clear. windows and eyeglasses are made from it, as well as drinking glasses. / the man is drinking water.
/ Neutral \\ \hline \end{tabular} \caption{Adding background information to examples from SNLI} \label{tab:background} \end{table} \section{Experiments} \subsection{Setup} We consider the SNLI task \citep{bowman-etal-2015-large}. We fine-tune an XLNet model \citep{xlnet}, because it achieves near state-of-the-art performance on SNLI and outperforms Roberta \citep{roberta} and BERT \citep{bert} on later rounds of adversarial annotation for ANLI \citep{nie-etal-2020-adversarial}. Due to computing constraints we use the base, cased model. Training is run for three epochs distributed across 4 GPUs, with a batch size of 10 on each, a learning rate of $5 \times 10^{-5}$, 120 warmup steps, a single gradient accumulation step, and a maximum sequence length of 384. For the language model probabilities $p(\tilde{x_i} | \x_{-i})$, pretrained BERT (base, uncased) is used rather than XLNet because the XLNet probabilities have been observed to be very noisy on short sequences.\footnote{https://github.com/huggingface/transformers/issues/4343} One test set $SNLI_{crit}^{full}$ is constructed in the same way as the augmented training set, but our main test set $SNLI_{crit}^{true}$ is additionally constrained to use only examples of the form $(\x, \h_i, y_c)$ where $y_c$ is the original label, because labels for the examples $(\x^\prime, \h^\prime_i, y^\prime_c)$ might be incorrect. Not every SNLI example has a critical word, and we do not always find a definition with the right part of speech in Wiktionary. Our training and test sets have 272,492 and 2,457 examples ({\em vs}. 549,367 and 9,824 in SNLI). All of our derived datasets are available for download.\footnote{https://figshare.com/s/edd5dc26b78817098b72} \subsection{Results} Table~\ref{tab:results} compares various training protocols.
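The scrambling protocol from the previous section (a random replacement string of four to twelve lowercase letters, substituted into both the example and its definition) can be sketched as below; the function is illustrative, not the authors' released code.

```python
import random
import string

def scramble(text, definition, word, rng=random):
    """Substitute a random 4-12 letter string for `word` in both the
    example text and its definition, so the model cannot fall back on
    the original word embedding and must read the definition."""
    fake = "".join(rng.choice(string.ascii_lowercase)
                   for _ in range(rng.randint(4, 12)))
    return text.replace(word, fake), definition.replace(word, fake), fake
```

As noted above, the random strings discard any morphological features of the original word.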
\begin{table}[htb] \begin{tabular}{p{2in}p{.75in}} \hline {\bf Protocol} & $SNLI_{crit}^{true}$ \\ \hline Original & 85.1\% \\ No scrambling, no defs & 84.6\% \\ No scrambling, defs & 85.2\% \\ Scrambling, no defs & 36.9\% \\ Scrambling, defs & 81.2\% \\ Scrambling, subs & 84.7\% \\ Train on normal SNLI, test on scrambled no defs & 54.1\% \\ Train on normal SNLI, test on scrambled defs & 63.8\% \\ Train on unscrambled defs, test on scrambled defs & 51.4\% \\ \hline \end{tabular} \caption{Comparing enhancement protocols} \label{tab:results} \end{table} {\bf Our task cannot be solved well without reading definitions.} When words are scrambled but no definitions are provided, an SNLI model without special training achieves 54.1\% on $SNLI_{crit}^{true}$. If trained on $\mathcal{T}^\prime$ with scrambled words but no definitions, a model achieves 36.9\%, which is even lower, reflecting that the training set is constructed to prevent a model from utilizing the contextual bias. {\bf With definitions and scrambled words, performance is slightly below that of using the original words.} Our method using definitions applied to the scrambled words yields 81.2\%, compared to 84.6\% if words are left unscrambled but no definitions are provided. Most of the accuracy lost by obfuscating the words is recovered, but evidently there is slightly more information accessible in the original word embeddings. {\bf If alternatives to the critical words are not included, the classifier learns biases that do not depend on the definition.} We explore restricting the training set to verified examples $\mathcal{T}^\prime_{true} \subset \mathcal{T}^\prime$ in the same way as the $SNLI_{crit}^{true}$, still scrambling the critical or replaced words in the training and testing sets. Using this subset, a model that is not given the definitions can be trained to achieve 69.9\% performance on $SNLI_{crit}^{true}$, showing a heavy contextual bias. 
A model trained on this subset that uses the definitions achieves marginally higher performance (82.3\%) than the one trained on all of $\mathcal{T}^\prime$. On the other hand, testing on $SNLI_{crit}^{full}$ yields only 72.3\% compared to 80.3\% using the full $\mathcal{T}^\prime$, showing that the classifier is less sensitive to the definition. {\bf Noisy labels from replacements do not hurt accuracy much.} The only difference between the ``original'' training protocol and ``no scrambling, no defs'' is that the original protocol trains on $\mathcal{T}$ and does not include examples with replaced words and unverified labels. Training including the replacements reduces accuracy by 0.5\% on $SNLI_{crit}^{true}$, which includes only verified labels. For comparison, training and testing on all of SNLI with the original protocol achieves 90.4\%, so a much larger effect on accuracy must be due to selecting harder examples for $SNLI_{crit}^{true}$. {\bf Definitions are not well utilized without special training.} The original SNLI model, if provided definitions of scrambled words at test time as part of the premise, achieves only 63.8\%, compared to 81.2\% for our specially trained model. {\bf If the defined words are not scrambled, the classifier uses the original embedding and ignores the definitions.} Training with definitions but no scrambling achieves 85.2\% accuracy, but the trained model is unable to use the definitions when words are scrambled: it achieves 51.4\% on that test set. {\bf We have not discovered a way to combine the benefit of the definitions with the knowledge in the original word embedding.} To force the model to use both techniques, we prepare a version of the training set which is half scrambled and half unscrambled. This model achieves 83.5\% on the unscrambled test set, below the result if no definitions are provided.
{\bf Definitions are not simply being memorized.} We selected the subset $SNLI_{crit}^{new}$ of $SNLI_{crit}^{true}$ consisting of the 44 examples in which the defined word was not defined in a training example. The definition scrambled model achieves 68.2\% on this set, well above 45.5\% for the original SNLI model reading the scrambled words and definitions but without special training. Remembering a definition from training is thus an advantage (reflected in the higher 81.2\% accuracy on $SNLI_{crit}^{true}$), but not the whole capability. {\bf Definition reasoning is harder than simple substitutions.} When definitions are given as one-word substitutions, in the form ``{\em scrambled} means: {\em original}'' instead of ``{\em scrambled} means: {\em definition}'', the model achieves 84.7\% on $SNLI_{crit}^{true}$ compared to 81.2\% using the definition text. Of course this is not a possibility for rare words that are not synonyms of a word that has been well trained, but it suggests that the kind of multi-hop reasoning in which words just have to be matched in sequence is easier than understanding a text definition. \subsection{A hard subset of SNLI} By construction of the SentencePiece dictionary \citep{kudo-richardson-2018-sentencepiece}, only the most frequent words in the training data of the XLNet language model are represented as single tokens. Other words are tokenized by multiple subwords. Sometimes the subwords reflect a morphological change to a well-modeled word, such as a change in tense or plurality. The language model probably understands these changes well and the subwords give important hints. The lemma form of a word strips many morphological features, so when the lemma form of a word has multiple subwords, the basic concept may be less frequently encountered in training. We hypothesize that such words are less well understood by the language model. 
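The criterion just described — a word whose lemma form spans multiple subword tokens — can be sketched as follows; \verb+lemmatize+ and \verb+tokenize+ are assumed callables standing in for \verb+pattern.en+ and the XLNet SentencePiece tokenizer.

```python
def is_hard_word(word, lemmatize, tokenize):
    """Flag words whose lemma form spans multiple subword tokens.

    lemmatize and tokenize are hypothetical stand-ins; with Hugging
    Face transformers, tokenize might be provided by
    AutoTokenizer.from_pretrained("xlnet-base-cased").tokenize.
    """
    return len(tokenize(lemmatize(word))) > 1

# Toy stand-ins: frequent words are single tokens, others split up.
VOCAB = {"train", "window", "sit"}
toy_tokenize = lambda w: [w] if w in VOCAB else list(w)
toy_lemmatize = lambda w: {"sits": "sit", "trains": "train"}.get(w, w)
```

With these toy stand-ins, ``sits'' lemmatizes to a single token while an out-of-vocabulary word is split and flagged as hard.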
To test this hypothesis, we construct a subset $SNLI_{multi}^{true}$ of the test set, consisting of examples where a critical word exists whose lemma form spans multiple subwords, and for which an appropriate definition can be found in Wiktionary. This set consists of 332 test examples. The critical word used may be different from the one chosen for $SNLI_{crit}^{true}$. This subset is indeed harder: the XLNet model trained on all of SNLI attains only 77.7\% on this subset using no definitions, compared to 90.4\% on the original test set. In Table~\ref{tab:hard} we apply various models constructed in the previous subsection to this hard test set. Ideally, a model leveraging definitions could compensate for these weaker word embeddings, but the method here does not do so. \begin{table} \begin{centering} \begin{tabular}{p{2in}p{.75in}} \hline {\bf Protocol} & $SNLI_{multi}^{true}$ \\ \hline Normal SNLI on unscrambled & 77.7\% \\ Defs \& unscrambled on defs \& unscrambled & 77.1\% \\ Defs \& some scrambling on defs \& unscrambled & 73.8\% \\ Defs \& scrambled on defs \& scrambled & 69.9\% \\ Defs \& scrambled on defs \& unscrambled & 62.7\% \\ \hline \end{tabular} \caption{Performance on the hard SNLI subset} \label{tab:hard} \end{centering} \end{table} \section{Conclusion} This work shows how a model's training may be enhanced to support reasoning with definitions in natural text, to handle cases where word embeddings are not useful. Our method forces the definitions to be considered and avoids the application of biases independent of the definition. Using the approach, entailment examples like ``Jack has COVID / Jack is sick'' that are misclassified by an XLNet trained on normal SNLI are correctly recognized as entailment when a definition ``COVID is a respiratory disease'' is added. Methods that can leverage definitions without losing the advantage of partially useful word embeddings are still needed. 
In an application, it will also be necessary to select the words that would benefit from definitions, and to make a model that can accept multiple definitions. \bibliographystyle{acl_natbib}
\section*{Appendix} \vspace{-0.1cm} \subsection{Proof of Lemma~\robustrefLemComplexity} \label{robustrefLemComplexity} \vspace{-0.1cm} We first prove \eqref{eq:dim-norm}, which uses an amalgamation of dimension-based and norm-based analysis. For the output layer, we use the following norm-based analysis \begin{align} &\mathbb{E}\, \sup_{f \in \mathcal{F}_V} \biggl|\frac1n \sum_{i = 1}^n \xi_i f({\bm{z}}_i)\biggr| = \mathbb{E}\, \sup_{f \in \mathcal{F}_V}|\langle a, \frac1n \sum_{i = 1}^n \xi_i \sigma({\bm{W}}^{\top}{\bm{z}}_i + {\bm{b}}) \rangle| \label{new1} \\ &\le \sup \|a\|_1 \mathbb{E}\, \sup_{f \in \mathcal{F}_V}\biggl\|\frac1n \sum_{i = 1}^n \xi_i \sigma({\bm{W}}^{\top}{\bm{z}}_i + {\bm{b}}) \biggr\|_{\infty} \le V\mathbb{E}\, \sup_{f \in \mathcal{F}_V} \max_j \biggl|\frac1n \sum_{i = 1}^n \xi_i \sigma(w_j^{\top}{\bm{z}}_i + b_j)\biggr| \nonumber \\ &\le V\mathbb{E}\, \sup_{w \in \mathbb{R}^d} \biggl|\frac1n \sum_{i = 1}^n \xi_i \sigma(w^{\top}{\bm{z}}_i + b)\biggr|. \nonumber \end{align} For notational convenience, we define $w_0 = 0, b_0 = 0$, and $a_0 = \sigma(0)^{-1}a_0 \sigma(w_0^{\top}{\bm{z}} + b_0)$ so that $a_0$ can be treated in a similar manner as other $a_i$'s. Without loss of generality, we do not separately consider $a_0$ in the following proofs. Next, we prove that \begin{align} \mathbb{E}\, \sup_{w \in \mathbb{R}^d} \biggl|\frac1n \sum_{i = 1}^n \xi_i \sigma(w^{\top}{\bm{z}}_i + b)\biggr| \lesssim \sqrt{\frac{d\log n}{n}}, \label{eq_j2} \end{align} and thus conclude the proof. The proof will be based on an $\varepsilon$-net argument together with the union bound. For any $\varepsilon$, let $W_{\varepsilon} \subset \mathbb{R}^d$ denote the subset \begin{equation*} W_{\varepsilon} = \biggl\{w = \frac{\varepsilon}{2d}(i_1, i_2, \ldots, i_d) : i_j \in \mathbb{Z}, \|w\|_1 \le \eta_n \biggr\}. 
\end{equation*} Then, for any $w, b$, there exists some element $\hat{w} \in W_{\varepsilon}$ such that \begin{align*} \sup_{{\bm{z}} \in \mathbb{X}} |\sigma(w^{\top}{\bm{z}} + b) - \sigma(\hat{w}^{\top}{\bm{z}} + \hat{b})| &\le \sup_{{\bm{z}}} |(w^{\top}{\bm{z}} + b) - (\hat{w}^{\top}{\bm{z}} + \hat{b})| \le \sup_{{\bm{z}}} |(w - \hat{w})^{\top}{\bm{z}}| + |b - \hat{b}| \\ &\le \|w - \hat{w}\|_1 \sup_{{\bm{z}}}\|{\bm{z}}\|_{\infty} + |b - \hat{b}| \le \varepsilon, \end{align*} where $\hat{b} = (\varepsilon/2d) \, \lfloor(2db/\varepsilon)\rfloor$ and $\lfloor\cdot \rfloor$ is the floor function. By Bernstein's Inequality, for any $w, b$, \begin{equation*} {\mathbb{P}}\biggl(|\frac1n \sum_{i = 1}^n \xi_i \sigma(w^{\top}{\bm{z}}_i + b)| > t\biggr) \le 2\exp\biggl\{-\frac{nt^2}{2(1+t/3)}\biggr\}. \end{equation*} By taking the union bound over $W_{\varepsilon}$ and using the fact that $\log \textrm{card}(W_{\varepsilon}) \lesssim d\log(nd/\varepsilon)$, we obtain \begin{equation*} \sup_{w \in \mathbb{R}^d} \biggl|\frac1n \sum_{i = 1}^n \xi_i \sigma(w^{\top}{\bm{z}}_i + b)\biggr| \lesssim \varepsilon + \sqrt{\frac{d}{n}\log \frac{nd}{\varepsilon}\log \frac{1}{\delta}}, \end{equation*} with probability at least $1 - \delta$. Then the desired result is obtained by taking $\varepsilon \sim \sqrt{(d\log n)/n}$. \vspace{-0.1cm} \subsection{Proof of Theorem~\robustrefThmLout} \label{robustrefThmLout} \vspace{-0.1cm} The proof is based on the following contraction lemma used in~\citep{neyshabur2015norm}. \begin{lemma}[Contraction Lemma] \label{lem:contract} Suppose that $g$ is $L$-Lipschitz and $g(0) = 0$. Then for any function class $\mathcal{F}$ mapping from $\mathbb{X}$ to $\mathbb{R}$ and any set $\{x_1, x_2, \ldots, x_n\}$, we have \begin{equation} \mathbb{E}\, \sup_{f \in \mathcal{F}}\biggl|\frac1n \sum_{i = 1}^n \xi_i g(f(x_i))\biggr| \le 2L \mathbb{E}\, \sup_{f \in \mathcal{F}}\biggl|\frac1n \sum_{i = 1}^n \xi_i f(x_i)\biggr|.
\end{equation} \end{lemma} With the above lemma, we have the following result. \begin{lemma} \label{lem:general_risk} The constrained $L_1$ estimator $\hat{f}_n$ over $\mathcal{F}$ satisfies \begin{equation} \mathcal{R}(\hat{f}_n) \le \min_{f \in \mathcal{F}} \mathbb{E}\, |f(x) - f_*(x)| + 2\mathbb{E}\, \sup_{f \in \mathcal{F}}|\frac1n \sum_{i = 1}^n \xi_i f({\bm{z}}_i)| + 2\sqrt{\frac{\mathbb{E}\, y^2}{n}}. \end{equation} \end{lemma} \begin{proof} Define the empirical risk as: \begin{equation} \mathcal{R}_n(f) = \mathbb{E}\, \biggl( \frac1n \sum_{i = 1}^n |f_*(x_i) + \varepsilon_i - f(x_i)| \biggr) - \mathbb{E}\, |\varepsilon|. \end{equation} Since $\hat{f}_n$ minimizes $n^{-1} \sum_{i = 1}^n |f_*(x_i) + \varepsilon_i - f(x_i)|$ in $\mathcal{F}$, we have \begin{equation} \mathcal{R}(\hat{f}_n) \le \mathcal{R}(\hat{f}_n) - \{\mathcal{R}_n(\hat{f}_n) - \mathcal{R}_n(f_0)\} = \{\mathcal{R}(\hat{f}_n) - \mathcal{R}_n(\hat{f}_n)\} + \mathcal{R}_n(f_0), \label{new11} \end{equation} where $f_0 = \argmin_{f \in \mathcal{F}} \mathcal{R}(f)$. We also have \begin{align} \mathcal{R}_n(f_0) = \mathcal{R}(f_0) = \min_{f \in \mathcal{F}} \mathbb{E}\,(|f_*(x) + \varepsilon - f(x)| - |\varepsilon|) \le \min_{f \in \mathcal{F}} \mathbb{E}\, |f(x) - f_*(x)|. \label{eq4} \end{align} In the following, we will analyze the term $\mathcal{R}(\hat{f}_n) - \mathcal{R}_n(\hat{f}_n)$ in (\ref{new11}). Let ${\bm{z}}_i$'s denote independent and identically distributed copies of $x_i$'s.
\begin{align*} \mathcal{R}(\hat{f}_n) - \mathcal{R}_n(\hat{f}_n) =&~\mathbb{E}\, \frac1n \sum_{i = 1}^n \biggl\{|\hat{f}_n({\bm{z}}_i) - f_*({\bm{z}}_i) - \varepsilon_i| - |\hat{f}_n(x_i) - f_*(x_i) - \varepsilon_i|\biggr\} \\ \le&~\mathbb{E}\, \sup_{f \in \mathcal{F}}\frac1n \sum_{i = 1}^n \biggl\{|f({\bm{z}}_i) - f_*({\bm{z}}_i) - \varepsilon_i| - |f(x_i) - f_*(x_i) - \varepsilon_i|\biggr\} \\ \le&~2\mathbb{E}\, \sup_{f \in \mathcal{F}}\frac1n \sum_{i = 1}^n \xi_i |f({\bm{z}}_i) - f_*({\bm{z}}_i) - \varepsilon_i|, \end{align*} where $\xi_1, \ldots, \xi_n$ are independent and identically distributed symmetric Bernoulli random variables that are independent of the ${\bm{z}}_i$'s. According to Lemma~\ref{lem:contract}, since $g(x) = |x|$ is $1$-Lipschitz and $g(0) = 0$, we have \begin{align*} \mathbb{E}\, \sup_{f \in \mathcal{F}}\frac1n \sum_{i = 1}^n \xi_i |f({\bm{z}}_i) - f_*({\bm{z}}_i) - \varepsilon_i| \le&~2\mathbb{E}\, \sup_{f \in \mathcal{F}}|\frac1n \sum_{i = 1}^n \xi_i (f({\bm{z}}_i) - f_*({\bm{z}}_i) - \varepsilon_i)| \\ \le&~2\mathbb{E}\, \sup_{f \in \mathcal{F}}\biggl|\frac1n \sum_{i = 1}^n \xi_i f({\bm{z}}_i)\biggr| + 2\sqrt{\frac{\mathbb{E}\, y^2}{n}}. \end{align*} Combining this and (\ref{eq4}), we conclude the proof of Lemma~\ref{lem:general_risk}. \end{proof} \textbf{Proof of Theorem \ref{thm:l1out}}. The proof of (\ref{eq:l1risk}) is a direct consequence of Lemma \ref{lem:complexity}, Lemma \ref{lem:general_risk}, Theorem~\ref{thm:approximation} and the fact that the first moment is no more than the second moment. The proof of (\ref{eq5}) follows from the fact that $\delta(\eta)\rightarrow 0$ as $\eta \rightarrow \infty$. \vspace{-0.1cm} \subsection{Proof of Theorem~\robustrefThmMinimax} \label{robustrefThmMinimax} \vspace{-0.1cm} Define a subclass of $\mathcal{F}_V$ by \begin{align*} \mathcal{F}_0 = \biggl\{f : \mathbb{R}^d \to \mathbb{R} {\Big |} f(x) = V\sigma(w^{\top}x), \|w\|_2 = 1\biggr\}.
\end{align*} In the following, we will prove the minimax bound for $\mathcal{F}_V$ by analyzing $\mathcal{F}_0$. Notice that \begin{align*} \mathbb{E}\, |\sigma(w_1^{\top}x) - \sigma(w_2^{\top}x)| &\ge \mathbb{E}\, \inf_{u} \sigma'(u) \cdot |w_1^{\top}x - w_2^{\top}x| \cdot \mathbb{I}(w_1^{\top}x, w_2^{\top}x \in \mathcal{S}) \gtrsim \|w_1 - w_2\|_2. \end{align*} Let $M_1(\varepsilon)$ denote the packing $\varepsilon$-entropy of $\mathcal{F}_0$ with the $L_1$ distance; then $M_1(\varepsilon)$ is greater than the packing $\varepsilon$-entropy of $\mathbb{B}_1^d$ with the $L_2$ distance, which means $ M_1(\varepsilon) \gtrsim d. $ Let $V_k(\varepsilon)$ denote the covering $\varepsilon$-entropy of $\mathcal{F}_0$ with the square root Kullback-Leibler divergence, then according to its relation with the $L_2$ distance shown in~\citep{yang1999information}, we have \begin{align} V_k(\varepsilon) \le M_2(\sqrt{2}\varepsilon) \lesssim d\log \frac{1}{\varepsilon}, \nonumber \end{align} where $M_2(\varepsilon)$ denotes the packing $\varepsilon$-entropy of $\mathcal{F}_V$ with the $L_2$ distance. The second inequality is proved in a similar way to the proof of Lemma~\ref{lem:complexity}, which is omitted here for brevity. Hence, according to~\citep[Theorem 1]{yang1999information}, \begin{align*} \inf_{\hat{f}_n}\sup_{f \in \mathcal{F}_V} \mathcal{R}(\hat{f}_n(x)) \ge \inf_{\hat{f}_n}\sup_{f \in \mathcal{F}_0} \mathcal{R}(\hat{f}_n(x)) \gtrsim V\sqrt{\frac{d}{n}}. \end{align*} This concludes the proof. \vspace{-0.1cm} \subsection{Proof of Proposition~\robustrefPropSparse} \label{robustrefPropSparse} \vspace{-0.1cm} To prove the proposition, it is sufficient to verify the following Rademacher complexity bound \begin{align*} \mathbb{E}\, \sup \biggl|\frac1n \sum_{i = 1}^n \xi_i \sigma(w^{\top}{\bm{z}}_i + b)\biggr| \lesssim \sqrt{\frac{k\log d\log n}{n}}, \end{align*} which can be derived easily by adjusting the proof in Lemma \ref{lem:complexity}.
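Concretely, assuming the supremum is restricted to $k$-sparse weight vectors, the adjustment replaces the grid $W_{\varepsilon}$ by a union of grids over the $\binom{d}{k}$ candidate supports, so that (a sketch, with constants suppressed)

```latex
\log \operatorname{card}(W_{\varepsilon,k})
  \lesssim \log \binom{d}{k} + k \log \frac{nd}{\varepsilon}
  \lesssim k \log d + k \log \frac{n}{\varepsilon},
```

and the same Bernstein-plus-union-bound argument, with $\varepsilon \sim \sqrt{(k\log d\log n)/n}$, gives a bound of order $\sqrt{(k\log d\log n)/n}$.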
Then the result follows with a similar analysis as in Theorem \ref{thm:l1out}. \vspace{-0.1cm} \subsection{Proof of Theorem~\robustrefThmSparset} \label{robustrefThmSparset} \vspace{-0.1cm} It can be verified from the identity (\ref{new1}) that \begin{equation} \mathbb{E}\, \sup_{f \in \mathcal{F}_V} \biggl|\frac1n \sum_{i = 1}^n \xi_i f(x_i)\biggr| \le \sum_{j = 0}^r \mathbb{E}\, \sup_{f \in \mathcal{F}_V}|a_j| \biggl|\frac1n \sum_{i = 1}^n \xi_i \sigma(w_j^{\top}x_i + b_j)\biggr|. \label{new2} \end{equation} Then according to Lemma~\ref{lem:contract}, we have \begin{equation} \mathbb{E}\, \sup_{f \in \mathcal{F}_V} \biggl|\frac1n \sum_{i = 1}^n \xi_i \sigma(w_j^{\top}x_i + b_j)\biggr| \lesssim \sqrt{\frac{\log n}{n}}(\|w_j\|_{\mathbb{X}} + |b_j|). \label{new3} \end{equation} Combining (\ref{new2}) and (\ref{new3}), we obtain the following lemma that may be interesting in its own right. \begin{lemma}\label{lem:small} We have \begin{align} \mathbb{E}\, \sup_{f \in \mathcal{F}_V}\biggl|\frac1n \sum_{i = 1}^n \xi_i f(x_i)\biggr| &\lesssim \sqrt{\frac{\log n}{n}}\sum_{j = 0}^r|a_j|(\|w_j\|_{\mathbb{X}} + |b_j|) \lesssim V\sqrt{\frac{\log n}{n}}\max_{j} \|w_j\|_{\mathbb{X}}.\nonumber \end{align} \end{lemma} Since $\|w\|_{\mathbb{X}} \lesssim \|w\|_1$ and $\{w : \|w\|_1 \lesssim \eta\} \subset \{w : \|w\|_{\mathbb{X}} \lesssim \eta\}$, the $\|\cdot\|_{\mathbb{X}}$ can be replaced with $\|\cdot\|_1$ in the bounds in Lemmas \ref{lem:small} and \ref{lem:general_risk}. Then, with a similar argument as in the proof of Theorem~\ref{thm:l1out}, we conclude the proof of Theorem~\ref{thm_sparse}. \subsubsection*{Acknowledgments} \bibliographystyle{abbrvnat} \section{Introduction} \vspace{-0.1cm} Neural networks have been successfully applied in modeling nonlinear regression functions in various domains of applications. A critical evaluation metric for a predictive learning model is its statistical risk bound.
For example, the $L_1$ or $L_2$ risks of typical parametric models such as linear regressions are of the order $(d/n)^{1/2}$ for small $d$~\citep{seber2012linear}, where $d$ and $n$ denote respectively the input dimension and number of observations. Obtaining the risk bound for a nonparametric regression model such as neural networks is highly nontrivial. It involves an approximation error (or bias) term as well as a generalization error (or variance) term. The standard analysis of generalization error bounds may not be sufficient to describe the overall predictive performance of a model class unless the data is assumed to be generated from it. For the model class of two-layer feedforward networks and a rather general data-generating process, \citet{barron1993universal,barron1994approximation} proved an approximation error bound of $O(r^{-1/2})$ where $r$ denotes the number of neurons. The author further developed a statistical risk error bound of $O((d/n)^{1/4})$, which is, to the authors' knowledge, the tightest statistical risk bound for the class of two-layer neural networks (for $d<n$). This risk bound is based on an optimal bias-variance tradeoff involving a deliberate choice of $r$. Note that the risk converges at a rate much slower than the classical parametric rate. We will tackle the same problem from a different perspective, and obtain a much tighter risk bound. A practical challenge closely related to statistical risks is to select the most appropriate neural network architecture for a particular data domain~\citep{DingOverview}. For two-layer neural networks, this is equivalent to selecting the number of hidden neurons $r$. While a small $r$ tends to underfit, researchers have observed that the network does not overfit even for moderately large $r$. Nevertheless, recent research has also shown that an overly large $r$ (e.g., when $r>n$) does cause overfitting with high probability~\citep{zhang2016understanding}.
It can be shown under some non-degeneracy conditions that a two-layer neural network with more than $n$ hidden neurons can perfectly fit $n$ arbitrary data points, even in the presence of noise, which inevitably leads to overfitting. A theoretical choice of $r$ suggested by the asymptotic analysis in \citep{barron1994approximation} is of the order $(n/d)^{1/2}$, and a practical choice of $r$ is often from cross-validation with an appropriate splitting ratio~\citep{DingOverview}. An alternative perspective that we advocate is to learn from a single neural network with sufficiently many neurons and an appropriate $L_1$ regularization on the neuron coefficients, instead of performing a selection from multiple candidate neural models. A potential benefit of this approach is easier hardware implementation and computation since we do not need to implement multiple models separately. Perhaps more importantly, this perspective of training enables much tighter risk bounds, as we will demonstrate. In this work, we focus on the model class of two-layer feedforward neural networks. Our main contributions are summarized below. First, we prove that $L_1$ regularization on the coefficients of the \emph{output} layer can produce a risk bound $O((d/n)^{1/2})$ (up to a logarithmic factor) under the $L_1$ training loss, which approaches the minimax optimal rate. Such a rate has not been established under the $L_2$ training loss so far. The result indicates a potential benefit of using $L_1$ regularization for training a neural network, instead of selecting the number of neurons. Additionally, a key ingredient of our result is a unique amalgamation of dimension-based and norm-based risk analysis, which may be interesting in its own right. The technique leads to an interesting observation that an excessively large $r$ can reduce approximation error while not increasing generalization error under $L_1$ regularizations.
This implies that an explicit regularization can eliminate overfitting even when the specified number of neurons is enormous. Moreover, we prove that the $L_1$ regularization on the \emph{input} layer can induce sparsity by producing a risk bound that does not involve $d$, where $d$ may be much larger than the true number of significant variables. \textbf{Related work on neural network analysis}. Despite the practical success of neural networks, a systematic understanding of their theoretical limit remains an ongoing challenge and has motivated research from various perspectives. \citet{cybenko1989approximations} showed that any continuous function could be approximated arbitrarily well by a two-layer perceptron with sigmoid activation functions. \citet{barron1993universal,barron1994approximation} established an approximation error bound for using two-layer neural networks to fit arbitrary smooth functions, together with the corresponding statistical risk bounds. A dimension-free Rademacher complexity for deep ReLU neural networks was recently developed~\citep{golowich2017size,barron2019complexity}. Based on a \emph{contraction lemma}, a series of norm-based complexities and their corresponding generalization errors are developed~\citep[and the references therein]{neyshabur2015norm}. Another perspective is to assume that the data are generated by a neural network and convert its parameter estimation into a tensor decomposition problem through the score function of the known or estimated input distribution~\citep{anandkumar2014tensor,janzamin2015beating,ge2017learning,mondelli2018connection}. Also, tight error bounds have been established recently by assuming that neural networks of parsimonious structures generate the data. In this direction, \citet{schmidt2017nonparametric} proved that specific deep neural networks with few non-zero network parameters can achieve minimax rates of convergence.
\citet{bauer2019deep} developed an error bound that is free from the input dimension, by assuming a generalized hierarchical interaction model. \textbf{Related work on $L_1$ regularization}. The use of $L_1$ regularization has been widely studied in linear regression problems~\citep[Chapter 3]{hastie2009elements}. The use of $L_1$ regularization for training neural networks has been recently advocated in deep learning practice. A prominent use of $L_1$ regularization is to empirically sparsify weight coefficients and thus compress a network that requires intensive memory usage~\citep{cheng2017survey}. The extension of $L_1$ regularization to group-$L_1$ regularization~\citep{yuan2006model} has also been extensively used in learning various neural networks~\citep{han2015learning,zhao2015heterogeneous,wen2016learning,scardapane2017group}. Despite the above practice, the efficacy of $L_1$ regularization in neural networks deserves more theoretical study. In the context of two-layer neural networks, we will show that the $L_1$ regularizations in the output and input layers play two different roles: the former reduces the generalization error caused by excessive neurons, while the latter sparsifies input signals in the presence of substantial redundancy. Unlike previous theoretical work, we consider the $L_1$ loss, which ranks among the most popular loss functions in, e.g., learning from ordinal data~\citep{pedregosa2017consistency} or imaging data~\citep{zhao2016loss}, and for which the statistical risk has not been studied previously. In practice, the use of the $L_1$ loss for training has been implemented in prevalent computational frameworks such as Tensorflow~\citep{abadi2016tensorflow}, Pytorch~\citep{ketkar2017introduction}, and Keras~\citep{gulli2017deep}.
\vspace{-0.1cm} \section{Problem Formulation} \label{sec_background} \vspace{-0.1cm} \vspace{-0.1cm} \subsection{Model assumption and evaluation} \vspace{-0.1cm} Suppose we have $n$ labeled observations $\{(x_i, y_i)\}_{i=1,\ldots,n}$, where the $y_i$'s are continuously-valued responses or labels. We assume that the underlying data-generating model is $ y_i = f_*(x_i) + \varepsilon_i $ for some unknown function $f_*(\cdot)$, where the $x_i$'s $ \in \mathbb{X} \subset \mathbb{R}^d$ are independent and identically distributed, and the $\varepsilon_i$'s are independent and identically distributed noise variables, symmetric about zero, satisfying \begin{align} \mathbb{E}\, (\varepsilon_i^2 \mid x_i ) \le \tau^2. \label{eq3} \end{align} Here, $\mathbb{X}$ is a bounded set that contains zero, for example $\{x : \|x\|_{\infty} \le M\}$ for some constant $M$. Our goal is to learn a regression model $\hat{f}_n: x \mapsto \hat{f}_n(x)$ for prediction. The estimate $\hat{f}_n$ is obtained from the following form of neural networks \begin{equation} \sum_{j = 1}^r a_j\sigma(w_j^{\top}x + b_j) + a_0, \label{model} \end{equation} where $a_0, a_j,b_j \in \mathbb{R}, w_j\in \mathbb{R}^d$, $j=1,\ldots,r$, are parameters to estimate. We let $a=[a_0,a_1,\ldots,a_r]^{ \mathrm{\scriptscriptstyle T} }$ denote the output layer coefficients. An illustration is given in Figure~\ref{fig_diagram}. The estimation is typically accomplished by minimizing the empirical risk $ n^{-1} \sum_{i = 1}^n \ell(y_i, f(x_i)) $, for some loss function $\ell(\cdot)$, plus a regularization term. We first consider the $L_1$ regularization at the output layer. In particular, we search for such an $f$ by empirical risk minimization over the function class \begin{align} \mathcal{F}_V = \biggl\{f : \mathbb{R}^d \to \mathbb{R} {\Big |} f(x) = \sum_{j = 1}^r a_j\sigma(w_j^{\top}x + b_j) + a_0, \|a\|_1 \le V\biggr\} \label{Fv1} \end{align} where $V$ is a constant.
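To make the estimation procedure concrete, the following sketch performs empirical risk minimization under the $L_1$ training loss over a penalized surrogate of $\mathcal{F}_V$ in (\ref{Fv1}): the constraint $\|a\|_1 \le V$ is replaced by an $L_1$ penalty, and the hidden layer is frozen at random values so that the problem in $a$ is convex. All sizes, targets, and step sizes are hypothetical illustrations rather than choices prescribed by our theory.

```python
import numpy as np

# Sketch: ERM with the L_1 training loss over the model (\ref{model}), with an
# L_1 penalty lam_reg * ||a||_1 standing in for the constraint ||a||_1 <= V of
# the class F_V. The hidden weights are frozen at random values here, a
# simplification for illustration only.
rng = np.random.default_rng(0)
n, d, r = 200, 2, 100
X = rng.uniform(-1.0, 1.0, size=(n, d))
y = np.sin(3.0 * X[:, 0]) + 0.1 * rng.standard_normal(n)  # f_* plus noise

W = 3.0 * rng.standard_normal((r, d))
b = 3.0 * rng.standard_normal(r)
H = 1.0 / (1.0 + np.exp(-(X @ W.T + b)))   # sigmoid activations, n x r
Phi = np.hstack([np.ones((n, 1)), H])      # constant column carries a_0

a = np.zeros(r + 1)
lam_reg, best = 1e-3, np.inf
for t in range(5000):                      # subgradient descent on the L_1 loss
    resid = Phi @ a - y
    best = min(best, np.abs(resid).mean())
    g = np.sign(resid)
    a -= 0.5 / np.sqrt(t + 1.0) * (Phi.T @ g / n + lam_reg * np.sign(a))

train_l1 = min(best, np.abs(Phi @ a - y).mean())   # best empirical L_1 risk
```

The learned $\|a\|_1$ plays the role of $V$; under the constraint formulation one would instead project $a$ onto the $L_1$ ball of radius $V$ after each step.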
\begin{figure}[tb] \begin{center} \centerline{\includegraphics[width=0.5\columnwidth]{fig_diagram}} \vskip -0.1in \caption{A graph showing the two-layer neural network model considered in (\ref{model}).} \label{fig_diagram} \end{center} \vskip -0.3in \end{figure} The following statistical risk measures the predictive performance of a learned model $f$: \begin{align*} \mathcal{R}(f) \overset{\Delta}{=}\mathbb{E}\, \ell(y, f(x)) - \mathbb{E}\, \ell(y, f_*(x)). \end{align*} The loss function $\ell(\cdot)$ is pre-determined by data analysts, usually the $L_1$ loss defined by $\ell(y, \tilde{y}) = |y - \tilde{y}|$ or the $L_2$ loss defined by $\ell_2(y, \tilde{y}) = (y - \tilde{y})^2$. Under the $L_1$ loss, the risk is $ \mathcal{R}(f) = \mathbb{E}\, |f_*(x) + \varepsilon - f(x)| - \mathbb{E}\, |\varepsilon|, $ which is nonnegative for symmetric random variables $\varepsilon$. It is typical to use the same loss function for both training and evaluation. \vspace{-0.1cm} \subsection{Notation} \vspace{-0.1cm} Throughout the paper, we use $n,d,k,r$ to denote the number of observations, the number of input variables or input dimension, the number of significant input variables or sparsity level, and the number of neurons (or hidden dimension), respectively. We write $a_n \gtrsim b_n$, $b_n \lesssim a_n$, or $b_n=O(a_n)$, if $|b_n / a_n| < c$ for some constant $c$ for all sufficiently large $n$. We write $a_n \asymp b_n$ if $a_n \gtrsim b_n$ as well as $a_n \lesssim b_n$. Let $\mathcal{N}(\bm \mu, V)$ denote the Gaussian distribution with mean $\bm \mu$ and covariance $V$. Let $\|\cdot\|_1$ and $\|\cdot\|_2$ denote the common $L_1$ and $L_2$ vector norms, respectively. Let $\mathbb{X}$ denote the essential support of $X$. For any vector ${\bm{z}} \in \mathbb{R}^d$, we define $\|{\bm{z}}\|_{\mathbb{X}} \overset{\Delta}{=} \sup_{x \in \mathbb{X}} |x^{\top}{\bm{z}}|$, which may or may not be finite.
If $\mathbb{X} = \{x : \|x\|_{\infty} \le M\}$, then $\|{\bm{z}}\|_{\mathbb{X}}$ is equivalent to $M \|{\bm{z}}\|_1$. Throughout the paper, $\hat{f}_n$ denotes the estimated regression function with $n$ being the number of observations. \vspace{-0.1cm} \subsection{Assumptions and classical results} \vspace{-0.1cm} We introduce some technical assumptions necessary for our analysis, and review state-of-the-art statistical risk bounds built through dimension-based complexity analysis. \begin{assumption} \label{ass_activation} The activation function $\sigma(\cdot)$ is a bounded function on the real line satisfying $\sigma(x) \to 1$ as $x \to \infty$ and $\sigma(x) \to 0$ as $x \to -\infty$, and it is $L$-Lipschitz for some constant $L$. \end{assumption} \begin{assumption} \label{ass_regular} The regularization constant $V$ is larger than $2C + f_*(0)$, where $C$ is any constant such that the Fourier transform of $f_*$, denoted by $F$, satisfies \begin{align} \int_{\mathbb{R}^d} \|\omega\|_{\mathbb{X}} F(d\omega) \le C.\label{eq:smooth} \end{align} \end{assumption} \begin{assumption} \label{ass_inner} $\sigma(x)$ approaches its limits at least polynomially fast, meaning that $|\sigma(x)-{\bf 1}\{x > 0\}|<\varepsilon$ for all $|x|>x_{\varepsilon}$, where $x_{\varepsilon}$ is a polynomial in $1/\varepsilon$. Also, the value of $\eta \overset{\Delta}{=}\sup_{j} \|w_j\|_{\mathbb{X}}$ scales with $n$ polynomially, meaning that $\log \eta = O(\log n)$ as $n \rightarrow \infty$. \end{assumption} \begin{assumption} \label{ass_activation2} There exists a constant $c > 0$ and a bounded subset $\mathcal{S} \subset \mathbb{R}$ such that $\mathbb{P}(X \in \mathcal{S}) > c$ and $\inf_{x \in \mathcal{S}} \sigma'(x) > c$ for $X \sim \mathcal{N}(0, 1)$. \end{assumption} We explain each assumption below. The above notation for $C,V$ follows that in \citep{barron1993universal,barron1994approximation}. Assumption~\ref{ass_activation} specifies the class of activation functions we consider.
A specific case is the popular activation function $\sigma(x)=1/\{1+\exp(-x)\}$. Assumption~\ref{ass_regular}, first introduced in~\citep{barron1993universal}, specifies the smoothness condition for $f_*$ to ensure the approximation property of neural networks (see Theorem~\ref{thm:approximation}). In Assumption~\ref{ass_inner}, the condition for $w$ is for technical convenience. It could also be replaced with the following alternative condition: there exists a constant $c > 0$ such that the distribution of $x$ satisfies $$\sup_{w: \|w\|_2=1} {\mathbb{P}}\bigl(\log (|w^{\top}x|) < c\log \varepsilon\bigr) < \varepsilon $$ for any $\varepsilon \in (0,1)$. Simply speaking, the input data $x$ is not too small with high probability. This condition is rather mild. For example, it holds when each component of $x$ has a bounded density function. This alternative condition ensures that for some small constant $\varepsilon > 0$ and any $w \in \mathbb{R}^d$, there exists a surrogate of $w$, $\hat{w} \in \mathbb{R}^d$ with $\log \|\hat{w}\|_2 = O(-\log\varepsilon)$, such that $$ {\mathbb{P}}(|\sigma(w^{\top}x) - \sigma(\hat{w}^{\top}x)| > \varepsilon) < \varepsilon. $$ This surrogate can then replace the assumption on $w$ in Assumption~\ref{ass_inner} throughout the proofs in the appendix. Assumption~\ref{ass_activation2} means that $\sigma(\cdot)$ is not a nearly-constant function. This condition is only used to establish the minimax lower bound in Theorem~\ref{thm:minimax}. \begin{theorem} [Approximation error bound~\citep{barron1993universal}] \label{thm:approximation} Suppose that Assumptions~\ref{ass_activation}, \ref{ass_regular}, \ref{ass_inner} hold.
We have \begin{align} \inf_{f \in \mathcal{F}_V}\biggl\{\int_{\mathbb{X}} (f(x) - f_*(x))^2 \mu(dx)\biggr\}^{1/2} \le 2C \, \biggl(\frac{1}{\sqrt{r}} + \delta_{\eta}\biggr), \nonumber \end{align} where $\mu$ denotes a probability measure on $\mathbb{X}$, \begin{align} \delta_{\eta} = \inf_{0 < \varepsilon < 1/2} \biggl\{ 2\varepsilon + \sup_{|x| > \varepsilon} \bigl|\sigma(\eta x) - {\bf 1}\{x > 0\}\bigr| \biggr\},\label{eq2} \end{align} $\eta$ is defined in Assumption~\ref{ass_inner}, and $C$ is defined in (\ref{eq:smooth}). \end{theorem} \begin{theorem} [Statistical risk bound~\citep{barron1994approximation}]\label{thm_Barron} Suppose that Assumptions~\ref{ass_activation}, \ref{ass_regular}, \ref{ass_inner} hold. Then the $L_2$ estimator $\hat{f}_n$ in $\mathcal{F}_V$ satisfies $\mathbb{E}\,\{\hat{f}_n(x) - f_*(x)\}^2 \lesssim V^2/r + (rd\log n)/n.$ In particular, if we choose $r \asymp V\sqrt{n/(d\log n)}$, then $\mathbb{E}\,\{\hat{f}_n(x) - f_*(x)\}^2 \lesssim V\sqrt{(d\log n)/n}.$ \end{theorem} It is known that a typical parametric rate under the $L_2$ loss is of the order $O(d/n)$, much faster than the above result. This gap is mainly due to excessive model complexity in bounding generalization errors. We will show in Section~\ref{sec_main} that the gap in the rate of convergence can be filled when using the $L_1$ loss. Our technique will be based on the machinery of Rademacher complexity, and we bound this complexity through a joint analysis of the norm of the coefficients (`norm-based') as well as the dimension of the parameters (`dimension-based'). \vspace{-0.1cm} \subsection{Model complexity and generalization error} \vspace{-0.1cm} The statistical risk consists of two parts. The first part is an approximation error term, non-increasing in the number of neurons $r$, and the second part describes generalization errors. The key issue for risk analysis is to bound the second term using a suitable model complexity and then trade it off against the first term.
We will develop our theory based on the following measure of complexity. Let $\mathcal{F}$ denote a class of functions each mapping from $\mathbb{X}$ to $\mathbb{R}$, and $x_1, x_2, \ldots, x_n \in \mathbb{X}$. Following a similar terminology as in~\citep{neyshabur2015norm}, the {Rademacher complexity}, or simply `complexity', of a function class $\mathcal{F}$ is defined by $ \mathbb{E}\, \sup_{f \in \mathcal{F}}|n^{-1} \sum_{i = 1}^n \xi_i f(x_i)|, $ where $\xi_i, i = 1, 2, \ldots, n$ are independent symmetric Bernoulli random variables. \begin{lemma}[Rademacher complexity of $\mathcal{F}_V$] \label{lem:complexity} Suppose that Assumptions~\ref{ass_activation}, \ref{ass_inner} hold. Then for the Rademacher complexity of $\mathcal{F}_V$, we have \begin{align} \mathbb{E}\, \sup_{f \in \mathcal{F}_V}\biggl|\frac1n \sum_{i = 1}^n \xi_i f(x_i)\biggr| \lesssim \frac{V\sqrt{d\log n}}{\sqrt{n}} \label{eq:dim-norm}. \end{align} \end{lemma} The proof is included in Appendix~\ref{robustrefLemComplexity}. The bound in \eqref{eq:dim-norm} is derived from an amalgamation of dimension-based and norm-based analysis elaborated in the appendix. It is somewhat surprising that the bound does not explicitly involve the approximation error part (that depends on $r$ and $\eta$). This Rademacher complexity bound enables us to derive tight statistical risk bounds in the following section. \vspace{-0.1cm} \section{Main Results} \label{sec_main} \vspace{-0.1cm} \subsection{Statistical risk bound for the $L_1$ regularized networks in (\ref{Fv1})} \begin{theorem} [Statistical risk bound] \label{thm:l1out} Suppose that Assumptions~\ref{ass_activation}, \ref{ass_regular}, \ref{ass_inner} hold. 
Then the constrained $L_1$ estimator $\hat{f}_n$ over $\mathcal{F}_V$ satisfies \begin{align} \mathcal{R}(\hat{f}_n) \lesssim \biggl(\frac{1}{\sqrt{r}} + \delta_{\eta}\biggr)C + \frac{V\sqrt{d\log n}+\tau}{\sqrt{n}}, \label{eq:l1risk} \end{align} where $\delta_{\eta}$ is defined in (\ref{eq2}), and $\tau$ was introduced in (\ref{eq3}). Moreover, choosing the parameters $r, \eta$ large enough, we have \begin{align} \mathcal{R}(\hat{f}_n) \lesssim \frac{V\sqrt{d\log n}+\tau}{\sqrt{n}}.\label{eq5} \end{align} \end{theorem} The proof is in Appendix~\ref{robustrefThmLout}. We briefly explain our main idea in deriving the risk bound (\ref{eq:l1risk}). A standard statistical risk bound contains two parts, which correspond to the approximation error and the generalization error, respectively. The approximation error part in (\ref{eq:l1risk}) is the first term, which involves the hidden dimension $r$ and the norm of the input coefficients through $\eta$. This observation motivates us to use the norm of the output-layer coefficients through $V$ and the input dimension $d$ to derive a generalization error bound. In this way, the generalization error term does not involve $r$, which is already used in bounding the approximation error, and thus a bias-variance tradeoff through $r$ is avoided. This idea leads to the generalization error part in (\ref{eq:l1risk}), which is the second term involving $V$ and $d$. Its proof combines the machinery of both dimension-based and norm-based complexity analysis. From our analysis, the error bound in Theorem~\ref{thm:l1out} is a consequence of the $L_1$ loss function and the employed $L_1$ regularization. In comparison with the previous result of Theorem~\ref{thm_Barron}, the bound obtained in Theorem~\ref{thm:l1out} is tight and approaches the parametric rate $\sqrt{d/n}$ in the $d<n$ regime. Though we only prove this for the $L_1$ loss in this work, we conjecture that the same rate is achieved using the $L_2$ loss.
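The norm-based half of this machinery can be illustrated numerically. For a \emph{fixed} dictionary of $r$ neurons (so the supremum over $w_j,b_j$ in $\mathcal{F}_V$ is not taken), the supremum over $\{\|a\|_1 \le V\}$ inside the Rademacher complexity is a supremum of a linear functional over an $L_1$ ball, hence attained at a vertex; it therefore reduces to $V$ times the largest empirical correlation between the Rademacher signs and a single feature. The Monte Carlo sketch below uses hypothetical sizes.

```python
import numpy as np

# Monte Carlo sketch of the Rademacher complexity of F_V for a fixed random
# dictionary of r sigmoid neurons. The sup over the L_1 ball ||a||_1 <= V of
# |n^{-1} sum_i xi_i f(x_i)| equals V * max_j |n^{-1} sum_i xi_i phi_j(x_i)|.
rng = np.random.default_rng(1)
n, d, r, V = 400, 3, 30, 2.0
X = rng.uniform(-1.0, 1.0, size=(n, d))
W = rng.standard_normal((r, d))
b = rng.standard_normal(r)
Phi = np.hstack([np.ones((n, 1)),                       # the a_0 coordinate
                 1.0 / (1.0 + np.exp(-(X @ W.T + b)))])

m = 2000
draws = np.empty(m)
for t in range(m):
    xi = rng.choice([-1.0, 1.0], size=n)                # Rademacher signs
    draws[t] = V * np.max(np.abs(Phi.T @ xi) / n)
rad_hat = draws.mean()
```

The estimate is of the order $V\sqrt{\log r / n}$ by a sub-Gaussian maximal inequality; the bound in (\ref{eq:dim-norm}) additionally accounts for the supremum over the hidden-layer parameters.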
In the following, we further show that the above risk bound is minimax optimal. The minimax optimality indicates that deep neural networks with more than two layers will not perform much better than shallow neural networks when the underlying regression function belongs to $\mathcal{F}_V$. \begin{theorem} [Minimax risk bound] \label{thm:minimax} Suppose that Assumptions~\ref{ass_activation} and \ref{ass_activation2} hold, and $x_1, x_2, \ldots, x_n \overset{iid}{\sim} \mathcal{N}(0, {\bm{I}}_d)$, then $ \inf_{\hat{f}_n}\sup_{f \in \mathcal{F}_V} \mathcal{R}(\hat{f}_n(x)) \gtrsim V\sqrt{d/n}. $ \end{theorem} Here $\mathcal{F}_V$ is the same class as defined in (\ref{Fv1}). All the smooth functions $f_*(\cdot)$ that satisfy $V>2C + f_*(0)$ and (\ref{eq:smooth}) belong to $\mathcal{F}_V$ according to Theorem \ref{thm:approximation}. The proof is included in Appendix~\ref{robustrefThmMinimax}. \vspace{-0.1cm} \subsection{Adaptiveness to the input sparsity} \label{sec_discuss} \vspace{-0.1cm} It is common to input a large-dimensional signal to a neural network, while only a few components are genuinely significant for prediction. For example, in environmental science, high-dimensional weather signals are input for prediction while only a few are physically relevant \citep{xingjian2015convolutional}. In image processing, the image label is relevant to only a few pixels~\citep{han2015learning}. In natural language processing, a large number of redundant sentences sourced from Wikipedia articles are input for language prediction~\citep{DingRRNN}. This practice motivates our next results, which provide tight risk bounds for neural networks whose input signals are highly sparse. \begin{assumption} \label{ass_sparsity} There exists a positive integer $k\leq d$ and an index set $S \subset \{1,\ldots,d\}$ with $\textrm{card}(S) = k$, such that $f_*(x) = g_*(x_S)$ for some function $g_*(\cdot)$ with probability one. \end{assumption} The subset $S$ is generally unknown to data analysts.
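As a toy illustration of this setting (with hypothetical sizes, rates, and targets, not the estimators analyzed below), one can train the network in (\ref{model}) with the $L_1$ loss while a proximal soft-thresholding step imposes an $L_1$ penalty on the input-layer weights; the input weight mass then tends to concentrate on the significant coordinate, even though $S$ is not known in advance.

```python
import numpy as np

# Toy sketch of the sparse-input setting: the target depends only on input
# coordinate 0, i.e., k = 1 and S = {0} out of d = 10 inputs. Both layers are
# trained with the L_1 loss; a proximal soft-thresholding step imposes an L_1
# penalty on the input-layer weights W. All constants are illustrative.
rng = np.random.default_rng(0)
n, d, r = 500, 10, 10
X = rng.uniform(-1.0, 1.0, size=(n, d))
y = np.tanh(2.0 * X[:, 0])               # f_*(x) = g_*(x_S)

sig = lambda z: 1.0 / (1.0 + np.exp(-z))
W = 0.5 * rng.standard_normal((r, d))
b = 0.5 * rng.standard_normal(r)
a = 0.1 * rng.standard_normal(r)
a0 = 0.0
lam_in, lr = 5e-3, 0.1

for _ in range(4000):
    H = sig(X @ W.T + b)
    g = np.sign(H @ a + a0 - y)          # subgradient of the L_1 loss
    dH = (g[:, None] * a[None, :]) * H * (1.0 - H) / n
    a -= lr * (H.T @ g / n)
    a0 -= lr * g.mean()
    W -= lr * (dH.T @ X)
    b -= lr * dH.sum(axis=0)
    W = np.sign(W) * np.maximum(np.abs(W) - lr * lam_in, 0.0)   # prox step

col_norm = np.abs(W).sum(axis=0)         # weight mass per input coordinate
```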
Nevertheless, if we know $k$, called the sparsity level, the risk bound can be further improved by a suitable regularization on the input coefficients. We have the following result, where $d$ is replaced with $k$ in the risk bound of Theorem~\ref{thm:l1out}. \begin{proposition}\label{prop_sparse} Suppose that Assumptions~\ref{ass_activation}, \ref{ass_regular}, \ref{ass_inner}, \ref{ass_sparsity} hold. Suppose that $\hat{f}_n$ is the $L_1$ estimator over the following function class $$ \biggl\{f : \mathbb{R}^d \to \mathbb{R} {\Big |} f(x) = \sum_{j = 1}^r a_j\sigma(w_j^{\top}x + b_j) + a_0, \|a\|_1 \le V, \sup_j \|w_j\|_0 \le k \biggr\}. $$ Then $\mathcal{R}(\hat{f}_n) \lesssim \sqrt{\{k\log(dn)\}/n}$. \end{proposition} The proof is included in Appendix~\ref{robustrefPropSparse}. The above statistical risk bound is also minimax optimal, by an argument similar to that of Theorem~\ref{thm:minimax}. From a practical point of view, the above $L_0$ constraint is usually difficult to implement, especially for a large input dimension $d$. Alternatively, one may impose an $L_1$ constraint instead of an $L_0$ constraint on the input coefficients. Our next result is concerned with the risk bound when the model is learned with a joint regularization on the output and input layers. For technical convenience, we will assume that $\mathbb{X}$ is a bounded set. \begin{theorem}\label{thm_sparse} Consider the following function class of two-layer neural networks \begin{align} \mathcal{F}_{V, \eta} = \biggl\{f : \mathbb{R}^d \to \mathbb{R} {\Big |} f(x) = \sum_{j = 1}^r a_j\sigma(w_j^{\top}x + b_j) + a_0, \|a\|_1 \le V, \sup_{1\leq j\leq r} (\|w_j\|_1 + |b_j|) \le \eta \biggr\}.\nonumber \end{align} Suppose that $V \gtrsim C$, where $C$ is defined in \eqref{eq:smooth}.
Then the constrained $L_1$ estimator $\hat{f}_n$ over $\mathcal{F}_{V, \eta}$ satisfies \begin{align} \mathcal{R}(\hat{f}_n) \lesssim C \, \biggl(\frac{1}{\sqrt{r}} + \delta_{\eta} \biggr) + \frac{V\eta+\tau}{\sqrt{n}},\nonumber \end{align} where $\delta_{\eta}$ is defined in (\ref{eq2}). In particular, choosing $r$ large enough, we have $$ \mathcal{R}(\hat{f}_n) \lesssim C \delta_{\eta} + \frac{V\eta+\tau}{\sqrt{n}} $$ which does not involve the input dimension $d$ and the number of hidden neurons $r$. Moreover, suppose that $\sigma(x) = 1/(1 + e^{-x}), \quad \eta \asymp \biggl(n\log^2n\biggr)^{1/3}, $ then $\mathcal{R}(\hat{f}_n) \lesssim V \bigl\{(\log n)/n\bigr\}^{1/3}.$ \end{theorem} The proof is included in Appendix~\ref{robustrefThmSparset}. In the above result, the risk bound is of the order $O(n^{-1/3})$, which is slower than the $O(n^{-1/2})$ in the previous Theorem~\ref{thm:l1out} and Proposition~\ref{prop_sparse}, ignoring $d$ and logarithmic factors of $n$. However, for a large input dimension $d$, possibly much larger than $n$, the bound can be much tighter than the previous bounds since it is dimension-free. \section{Conclusion and Further Remarks} \label{sec_conclusion} We studied the tradeoff between model complexity and statistical risk in two-layer neural networks from the explicit regularization perspective. We end our paper with two future problems. First, in Theorem~\ref{thm_sparse}, for a small $d$, the order of $n^{-1/3}$ seems to be an artifact resulting from our technical arguments. We conjecture that in the small-$d$ regime, this risk bound could be improved to $O(n^{-1/2})$ by certain adaptive regularizations. Second, it would be interesting to emulate the current approach to yield similarly tight risk bounds for deep feedforward neural networks. \section*{Acknowledgement} The authors thank Yuhong Yang from the University of Minnesota for his comments in improving the paper.
\section{Introduction} The contact process is a well-studied model of the spread of an infection, in which an undirected graph $G=(V,E)$ determines a collection of sites $V$ and edges $E$ which we can think of as individuals and as links between individuals along which the infection can be transmitted. Each site is either healthy or infectious; infectious sites recover at a certain fixed rate, which is usually normalized to $1$, and transmit the infection to each of their neighbours at rate $\lambda$. The contact process has been studied in a variety of different settings, including lattices \cite{speed,crit,ips,sis} (to cite just a few), infinite trees \cite{trees}, power law graphs \cite{plg,mvy} and complete graphs \cite{comp}. In each case, there is a critical value $\lambda_c$ below which the infection quickly vanishes from the graph, and above which the infection has a positive probability of surviving either for all time (if the graph is infinite), or for an amount of time that grows quickly (either exponentially or at least faster than polynomially) with the size of the graph; in the power law case $\lambda_c=0$ so long-time survival is possible whenever $\lambda>0$. In a social context, $G$ might describe a contact network in which an edge connects sites $x$ and $y$ if and only if the corresponding individuals have sufficiently frequent interactions that infection can be spread from one to the other. In the contact process, the contact network is fixed, that is, a given pair of individuals is either connected or not connected for all time. However, we can easily imagine a scenario in which connections form and break up dynamically, which we can model by having edges open and close according to certain rules; here, we use the convention of percolation theory, in which ``open'' means there is a connection across the edge; note this is the opposite of the convention for electric circuits. 
In this case, the edges $E$ represent \emph{possible} connections and we have a process $E_t\subseteq E$ that describes the set of open edges as a function of time. This type of process we will call a \emph{social contact process}, since it involves some form of social dynamics. In the simplest case, edges open and close independently at some fixed rates $r_+$ and $r_-$. In this case, the distribution of open edges at a given time converges to the product measure on $\{0,1\}^E$ with density $r_+/(r_-+r_+)$. Estimates on the survival region can then be obtained using the results of \cite{broman} and following the pattern of~\cite{remenik}. On the other hand, edge dynamics could depend on the state of the infection; for example, site $x$ might be less likely to connect with site $y$ if $y$ is infected. If we then relax the tendency to avoid infected sites, then for a given value of $\lambda$ we might ask at what point the infection starts to spread, if it does. Here, we consider edges opening and closing independently as described above, but with the added restriction of \emph{monogamy}, that is, if two sites are connected (i.e., linked by an edge) then, so long as they remain connected, they cannot connect to other sites. In this model, we think of connected pairs as partners, so we call it the \emph{partner model}. For simplicity, we study the model on the sequence of complete graphs $K_N$ on $N$ vertices, where $N$ will tend to $\infty$; this is a reasonable model for, say, the spread of a sexually transmitted infection through a population of monogamous homosexual individuals in a big city. We rescale the partner formation rate per edge to $r_+/N$ to ensure that a given individual in a pool consisting entirely of singles finds a partner at total rate approximately $r_+$.
For future reference, we use interchangeably both the words healthy and susceptible, and the words unpartnered and single, to describe respectively an individual that is not infectious, or an individual that does not have a partner. Even in this simple model, as described below, there is a phase transition between extinction and spread of the infection. \section{Statement of main results}\label{secmain} In order to analyze the partner model, we should first ensure that it is well defined, so following \cite{gc} we give a graphical construction which makes it easy to visualize its evolution in time and space. We write the model as $(V_t,E_t)$ where $V_t \subseteq V$ is the set of infectious sites at time $t$ and $E_t \subseteq E$ is the set of open edges at time $t$. In general, we assume $\min(r_+,r_-,\lambda) >0$ since if any of the parameters is equal to zero the dynamics are trivial. The complete graph $K_N=(V,E)$ has sites $V=\{1,\ldots,N\}$ and edges $E=\{ \{x,y\} : x,y \in\{1,\ldots,N\}, x \neq y\}$. On the spacetime set $K_N\times[0,\infty)$, place independent Poisson point processes (p.p.p.s) along the fibers $\{\cdot\}\times[0,\infty)$ as follows: \begin{itemize} \item for recovery, at each site with intensity $1$ and label $\times$, \item for transmission, along each edge $xy \in E$ with intensity $\lambda$ and label $\leftrightarrow$, \item for partnership formation, along each edge with intensity $r_+/N$ and label $\uparrow$, \item for partnership breakup, along each edge with intensity $r_-$ and label $\downarrow$. \end{itemize} These define the probability space $\Omega$, whose realizations $\omega \in\Omega$ consist of collections of labelled points on $K_N\times [0,\infty)$. Since the graph is finite, the total intensity of p.p.p.s is finite, thus with probability 1 events are well ordered in time. Fixing an admissible initial configuration $(V_0,E_0)$, that is, such that no two edges $xy$ and $yz$ are both open, we determine $(V_t,E_t)$ as follows.
For a well-ordered realization with event times $t_1<t_2<t_3<\cdots,$ suppose $(V_{t_i},E_{t_i})$ is known. If the event at time $t_{i+1}$ is: \begin{itemize} \item an $\times$ at site $x$ and $x \in V_{t_i}$ then $V_{t_{i+1}} = V_{t_i}\setminus\{x\}$, \item a $\leftrightarrow$ along edge $xy$, $xy \in E_{t_i}$, $x \in V_{t_i}$ and $y \notin V_{t_i}$ then $V_{t_{i+1}} = V_{t_i}\cup\{y\}$, \item a $\uparrow$ along edge $xy$ and $xz,zy \notin E_{t_i}$ for all $z$ then $E_{t_{i+1}} = E_{t_i} \cup\{xy\}$, \item a $\downarrow$ along edge $xy$ and $xy \in E_{t_i}$ then $E_{t_{i+1}} = E_{t_i} \setminus\{xy\}$. \end{itemize} Otherwise the configuration is unchanged. This gives $(V_t,E_t)$ at times $t_0:=0,t_1,t_2,\ldots;$ for $t \in(t_i,t_{i+1})$ set $V_t = V_{t_i}$ and $E_t = E_{t_i}$. For the partner model, we are mostly concerned not with the exact values of $V_t$ and $E_t$ but with the total number of susceptible and infectious singles $S_t$ and $I_t$ and the total number of partnered pairs $\mathit{SS}_t,\mathit{SI}_t,\mathit{II}_t$ of the three possible types; as shown in Section~\ref{secmf}, for each $N$, $(S_t,I_t,\mathit{SS}_t, \mathit{SI}_t,\mathit{II}_t)$ is a continuous time Markov chain. In general, it will be more convenient to work with the rescaled quantities $s_t=S_t/N$, $i_t=I_t/N$, $ss_t=\mathit{SS}_t/N$, $si_t=\mathit{SI}_t/N$ and $ii_t=\mathit{II}_t/N$. \begin{figure} \includegraphics{1117f01.eps} \caption{Markov chain used to compute $R_0$, with transition rates indicated; infectious sites are shaded.} \label{figri} \end{figure} Starting from any configuration, as shown in Section~\ref{secstoch}, after a short time the proportion of singles $y_t:=s_t+i_t$ approaches and remains close to a certain fixed value $y^* \in(0,1)$. The computation of $y^*$ is given in Section~\ref{secedge}: setting $\alpha = r_+/r_-$, we find that \begin{equation} \label{ystar} y^*=1/(2\alpha)[-1 + \sqrt{1 + 4\alpha}]. 
\end{equation} To determine the conditions under which the infection can spread, we use a heuristic argument. Once we know the correct values, we can then worry about proving they are correct. Suppose we start with $V_0 = \{x\} $ for some $x \in V$ with $x$ single and $y_0 \approx y^*$, and keep track of $x$ until the first moment when $x$ either: \begin{itemize} \item recovers without finding a partner, or \item if it finds a partner before recovering, breaks up from that partnership. \end{itemize} This leads to the continuous time Markov chain shown in Figure~\ref{figri}. Each of $A,B,\ldots,G$ represents a state for the chain, and arrows show possible transitions, with the arrow labelled by the transition rate. Shaded circles represent infectious individuals and unshaded circles, healthy individuals. A pair of circles connected by a line represents a partnered pair. Starting from $A$, a single infectious site either recovers (goes to $D$) at rate $1$, or finds a healthy partner at rate $r_+y^*$. Infection takes place at rate $\lambda$. If only one individual in a partnership is infectious (state $B$), then it recovers at rate 1 (state $E$), and we do not need to worry about them any more, since neither is infectious. If both are infectious (state $C$), recovery of one or the other occurs at rate $2$. While in a partnership, breakup occurs at rate $r_-$. Define the \emph{basic reproduction number} \begin{equation} \label{eqri} R_0 = \mathbb{P}(A\rightarrow F) + 2\mathbb{P}(A \rightarrow G) \end{equation} which is the expected number of infectious singles upon absorption of the above Markov chain, starting from state $A$. As intuition suggests, and Theorem~\ref{thm1} confirms, the infection can spread if $R_0>1$, and cannot spread if $R_0 \leq1$. If the dynamics is in equilibrium, that is, $(s_t,i_t,ss_t,si_t,ii_t)$ hovers around a~fixed value $(s^*,i^*,ss^*,si^*,ii^*)$, then in particular the proportion of infectious singles is roughly constant. 
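The absorption probabilities in \eqref{eqri} can be computed by first-step analysis of the chain in Figure~\ref{figri}: conditioning on the first jump gives a small linear system for the probabilities of ending in $F$ or $G$ from states $B$ and $C$. The sketch below solves this system numerically for hypothetical parameter values; consistent with Lemma~\ref{r0up}, the computed $R_0$ increases with $\lambda$.

```python
import numpy as np

# First-step analysis for R_0: solve the linear equations for the absorption
# probabilities of the chain in Figure 1, using the transition rates described
# in the text. Parameter values are hypothetical.
def R0(lam, rp, rm):
    alpha = rp / rm
    ystar = (-1.0 + np.sqrt(1.0 + 4.0 * alpha)) / (2.0 * alpha)
    pBC = lam / (lam + 1.0 + rm)   # B -> C (infection before recovery/breakup)
    pBF = rm / (lam + 1.0 + rm)    # B breaks up first: one infectious single
    pCB = 2.0 / (2.0 + rm)         # C -> B (one partner recovers)
    pCG = rm / (2.0 + rm)          # C breaks up first: two infectious singles
    # unknowns: bF, bG, cF, cG = absorption probabilities from B and C
    M = np.array([[1.0, 0.0, -pBC, 0.0],    # bF = pBF + pBC * cF
                  [0.0, 1.0, 0.0, -pBC],    # bG =       pBC * cG
                  [-pCB, 0.0, 1.0, 0.0],    # cF =       pCB * bF
                  [0.0, -pCB, 0.0, 1.0]])   # cG = pCG + pCB * bG
    v = np.array([pBF, 0.0, 0.0, pCG])
    bF, bG, cF, cG = np.linalg.solve(M, v)
    pAB = rp * ystar / (1.0 + rp * ystar)   # A pairs up before recovering
    return pAB * (bF + 2.0 * bG)

r0_low, r0_high = R0(1.0, 5.0, 1.0), R0(10.0, 5.0, 1.0)
```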
To compute this proportion, we again use a heuristic argument. Three events affect infectious singles: \begin{itemize} \item $I\rightarrow S$, which occurs at rate $I_t = i_tN$, \item $I+I\rightarrow \mathit{II}$, which occurs at rate $(r_+/N){I_t \choose 2} \approx r_+(i_t^2/2)N$, and \item $S+I\rightarrow \mathit{SI}$, which occurs at rate $(r_+/N)I_tS_t = r_+i_ts_tN$. \end{itemize} If a partnership is formed, then using these rates and Figure~\ref{figri}, we can compute the expected number of infectious singles upon breakup. Fixing $i_t=i$ for some $i \in[0,y^*]$ and $s_t+i_t=y^*$, define the normalizing constant $z = 1 + r_+i/2 + r_+(y^*-i) = 1 + r_+(y^*-i/2)$ and the probabilities $p_S = 1/z$, $p_{\mathit{II}} = r_+i/(2z)$ and $p_{\mathit{SI}} = r_+(y^*-i)/z$, and let \begin{equation} \label{eqdi} \Delta(i) = p_S\Delta_S + p_{\mathit{II}}\Delta_{\mathit{II}} + p_{\mathit{SI}}\Delta_{\mathit{SI}}, \end{equation} where $\Delta_S=-1$, $\Delta_{\mathit{II}} = -2 + \mathbb{P}(C\rightarrow F) + 2\mathbb{P}(C\rightarrow G)$ and $\Delta_{\mathit{SI}} = -1 + \mathbb{P}(B\rightarrow F) + 2\mathbb{P}(B\rightarrow G)$. The function $\Delta(i)$ tracks the expected change in the number of infectious singles, per event affecting one or more infectious singles. Thus, for an equilibrium solution we should have $\Delta(i^*)=0$. As shown in Lemma~\ref{deltalemma}, to have a solution with $i^*>0$, we need $R_0>1$. As shown in Lemma~\ref{r0up}, for fixed $r_+,r_-$, $R_0$ is continuous and increasing in $\lambda$. Defining \begin{equation} \label{eqlc} \lambda_c = \sup\{\lambda\geq0:R_0 \leq1 \} \end{equation} with $\sup\mathbb{R}_+ :=\infty$, it follows that if $\lambda_c=\infty$ then $R_0<1$ for all $\lambda$, and if $\lambda_c<\infty$ then $R_0<1$ if $\lambda<\lambda_c$, $R_0=1$ if $\lambda=\lambda_c$ and $R_0>1$ if $\lambda>\lambda_c$.
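These heuristic quantities are straightforward to evaluate numerically. In the sketch below (hypothetical parameter values), the absorption probabilities are closed forms obtained by a routine first-step analysis of the chain in Figure~\ref{figri}; the value $\lambda_c$ displayed in the upcoming Theorem~\ref{thm2} then satisfies $R_0(\lambda_c)=1$, and for $\lambda>\lambda_c$ bisection locates the root $i^* \in (0,y^*)$ of $\Delta(i^*)=0$.

```python
import numpy as np

# Consistency check with hypothetical parameters rp, rm. The absorption
# probabilities bF = P(B->F), bG = P(B->G), cF = P(C->F), cG = P(C->G) are
# closed forms from a routine first-step analysis of Figure 1 (not displayed
# in the text).
rp, rm = 5.0, 1.0
alpha = rp / rm
ystar = (-1.0 + np.sqrt(1.0 + 4.0 * alpha)) / (2.0 * alpha)
assert rp * ystar > 1.0                    # so that lambda_c < infinity

def absorb(lam):
    D1, D2 = lam + 1.0 + rm, 2.0 + rm      # total exit rates from B and C
    denom = D1 * D2 - 2.0 * lam
    bF, bG = rm * D2 / denom, lam * rm / denom
    cF, cG = 2.0 * bF / D2, rm / D2 + 2.0 * bG / D2
    return bF, bG, cF, cG

def R0(lam):
    bF, bG, _, _ = absorb(lam)
    return rp * ystar / (1.0 + rp * ystar) * (bF + 2.0 * bG)

u = rp * ystar - 1.0
lam_c = 4.0 / (rm * u) + 2.0 / rm + 4.0 / u + 1.0 + rm / u   # Theorem thm2

def Delta(i, lam):
    bF, bG, cF, cG = absorb(lam)
    dS, dII, dSI = -1.0, -2.0 + cF + 2.0 * cG, -1.0 + bF + 2.0 * bG
    z = 1.0 + rp * (ystar - i / 2.0)
    return (dS + (rp * i / 2.0) * dII + rp * (ystar - i) * dSI) / z

lam = 2.0 * lam_c                          # supercritical, so R0(lam) > 1
lo, hi = 0.0, ystar                        # Delta(0) > 0 > Delta(ystar)
for _ in range(80):                        # bisect for the root i*
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if Delta(mid, lam) > 0.0 else (lo, mid)
istar = 0.5 * (lo + hi)
```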
In models exhibiting a phase transition, one often seeks a \emph{critical exponent} $\gamma$ such that for an observable $F(\lambda)$ it holds that $F(\lambda) \sim C(\lambda-\lambda_c)^{\gamma}$. As we see in the statement of the upcoming Theorem~\ref{thm2}, here the critical exponent for $i^*$ is equal to $1$. The following two theorems are the main results of this paper. The first result tells us where and when we should expect a phase transition to occur. In particular, it gives a formula for $\lambda_c$ and describes the behaviour of $i^*$ near $\lambda_c$. \begin{theorem}\label{thm2} Let $y^*,R_0,\Delta(i)$ and $\lambda_c$ be as in \eqref{ystar}, \eqref {eqri}, \eqref{eqdi} and \eqref{eqlc} and let $r_+,r_-$ be fixed. Then $\lambda_c<\infty\Leftrightarrow r_+y^*>1 \Leftrightarrow r_+ > 1+1/r_-$ and in this case \begin{eqnarray*} && \lambda_c = \frac{2}{r_-}\frac{2}{(r_+y^*-1)} + \frac{2}{r_-} + \frac {4}{r_+y^*-1} + 1 + \frac{r_-}{r_+y^*-1}. \end{eqnarray*} If $R_0=R_0(\lambda)>1$, there is a unique solution $i^*(\lambda)\in (0,y^*)$ to the equation $\Delta(i^*)=0$ and $i^*(\lambda) \sim C(\lambda-\lambda_c)$ as $\lambda\downarrow\lambda_c$, for some constant $C>0$. \end{theorem} The second result shows that our heuristics are correct. More precisely, $R_0>1$ is a necessary and sufficient condition for spread and long-time survival of the infection. Moreover, when $R_0>1$ there is a unique and globally stable endemic equilibrium with $i^*>0$ given by $\Delta(i^*)=0$. \begin{theorem}\label{thm1} Fix $\lambda,r_+,r_-$ and let $y^*, R_0$ and $\Delta(i)$ be as defined in \eqref{ystar}, \eqref{eqri} and \eqref{eqdi}. \begin{itemize} \item If $R_0\leq1$, for each $\varepsilon>0$ there are constants $C,T,\gamma>0$ so that, from any initial configuration, with probability $\geq1-Ce^{-\gamma N}$, $|V_T|\leq\varepsilon N$. 
\item If $R_0<1$ there are constants $C,T,\gamma>0$ so that, from any initial configuration, with probability tending to $1$ as $N\rightarrow \infty$ all sites are healthy by time $T+C \log N$. \item If $R_0>1$, there is a unique vector $(s^*,i^*,ss^*,si^*,ii^*)$, satisfying $i^*>0$, $s^*+i^*=y^*$ and $\Delta(i^*)=0$, such that \begin{itemize} \item for\vspace*{1pt} each $\varepsilon>0$, there are constants $C,T,\gamma>0$ so that, from any initial configuration with $|V_0| \geq\varepsilon N$, with\vspace*{1pt} probability $\geq1-Ce^{-\gamma N}$, $|(s_t,i_t,ss_t,\break si_t,ii_t)-(s^*,i^*,ss^*,si^*,ii^*)|\leq\varepsilon$ for $T\leq t \leq e^{\gamma N}$, and \item there are constants $\delta,p,C,T>0$ so that, from any initial configuration with $|V_0|>0$, with probability $\geq p$, $|V_{T+C\log N}|\geq\delta N$. \end{itemize} \end{itemize} \end{theorem} To obtain the value of the endemic equilibrium and the behaviour when $|V_0| \geq\varepsilon N$, which we call the \emph{macroscopic} regime, we use the \emph{mean-field equations} (MFE) introduced in Section~\ref{secmf}, which are a set of differential equations that give a good approximation to the evolution of $(s_t,i_t,ss_t,si_t,ii_t)$ when $N$ is large. To describe the behaviour when $1\leq|V_0|\leq\varepsilon N$ for small $\varepsilon>0$, which we call the \emph{microscopic} regime, we use comparison to a branching process; if $R_0<1$ we bound above and if $R_0>1$ we bound below. The paper is laid out as follows. Sections~\ref{secedge} and \ref{secsurv} contain the heuristic calculations that allow us to determine $y^*,R_0,\lambda_c,\Delta(i)$ and prove Theorem~\ref{thm2}. In Section~\ref{secedge}, we give an informal description of the edge dynamics and compute~$y^*$. In Section~\ref{secsurv}, we analyze $R_0,\lambda _c,\Delta(i)$ and prove Theorem~\ref{thm2}, in two parts: Propositions~\ref{thm2.1} and~\ref{thm2.2}. In Section~\ref{secmf}, we introduce the mean-field equations and characterize their dynamics. 
In Sections~\ref{secstoch}, \ref{secmacro} and \ref{secmicro}, we consider the stochastic process and prove Theorem~\ref{thm1}. In Section~\ref{secstoch}, we develop the tools needed to relate the stochastic model to the mean-field equations. In Section~\ref{secmacro}, we prove the macroscopic part of Theorem~\ref{thm1}, and in Section~\ref{secmicro} we prove the microscopic part. \section{Proportion of singles}\label{secedge} Starting from the total number of singles $Y_t=S_t+I_t$, the transitions are: \begin{itemize} \item $Y \rightarrow Y-2$ at rate $(r_+/N)Y(Y-1)/2$, \item $Y \rightarrow Y+2$ at rate $(N-Y)r_-/2$, \end{itemize} which for $y_t := Y_t/N$ gives: \begin{itemize} \item $y\rightarrow y-2/N$ at rate $[r_+y(y-1/N)/2]N = (r_+y^2/2)N - r_+y/2$, \item $y\rightarrow y+2/N$ at rate $[(1-y)r_-/2]N$. \end{itemize} Combining these transitions gives \begin{eqnarray*} && \frac{d}{dt}\mathbb{E}(y_t \vert y_t=y) = -r_+y^2 + r_-(1-y) + \frac{r_+y}{N}. \end{eqnarray*} In Lemma~\ref{ydyn}, we make a rigorous statement about the behaviour of $y_t$. For now, though, some heuristics are helpful. Letting $y=Y/N$ and $\Delta y$ denote the increment in $y$ over a time step of size $1/N$, we find $\mathbb{E}\Delta y = O(1/N)$ while $\mathbb{E}(\Delta y)^2 = O(1/N^2)$, which means $\var(\Delta y)=O(1/N^2)$. This suggests that as $N \rightarrow\infty$ we should expect the sample paths of $y$ to approach solutions to the differential equation \begin{equation} \label{eqe} y' = -r_+y^2 + r_-(1-y). \end{equation} Notice the right-hand side is positive at $y=0$, negative at $y=1$ and strictly decreases with $y$, so there is a unique and globally stable equilibrium for $y \in[0,1]$, that lies in $(0,1)$. Setting\vspace*{1pt} $y'=0$ and letting $\alpha= r_+/r_-$ gives the equation $\alpha y^2+y-1=0$ which has the unique solution $y^*=1/(2\alpha)[-1 + \sqrt{1 + 4\alpha}]$ in $[0,1]$. 
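This heuristic is easy to test by simulating the two transitions directly. In the sketch below (parameters, seed and averaging window are illustrative choices of ours), the time-averaged fraction of singles settles near $y^*$:

```python
# Gillespie simulation of the pair formation/breakup dynamics alone
# (no infection).
import random

def simulate_singles(N=2000, rp=5.0, rm=1.0, T=50.0, seed=12345):
    """Return the time-averaged fraction of singles over [T/2, T]."""
    random.seed(seed)
    Y, t, area, t0 = N, 0.0, 0.0, T / 2          # start with everyone single
    while t < T:
        rate_down = (rp / N) * Y * (Y - 1) / 2   # pair formation: Y -> Y - 2
        rate_up = (N - Y) * rm / 2               # breakup: Y -> Y + 2
        total = rate_down + rate_up
        dt = random.expovariate(total)
        if t > t0:
            area += (Y / N) * min(dt, T - t)
        t += dt
        if random.random() * total < rate_down:
            Y -= 2
        else:
            Y += 2
    return area / (T - t0)
```

With $r_+=5$, $r_-=1$ we have $\alpha=5$ and $y^*=(\sqrt{21}-1)/10\approx0.358$, and the simulated average stays within the expected $O(1/\sqrt{N})$ fluctuations of this value.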
Notice that $y^* \sim1 - \alpha$ as $\alpha\rightarrow0^+$ and $y^* \sim1/\sqrt{\alpha}$ as $\alpha\rightarrow\infty$. \section{Survival analysis}\label{secsurv} In this section, we analyze $R_0$, $\lambda_c$ and $\Delta(i)$, which are defined in Section~\ref{secmain}. We begin with $R_0$, defined in \eqref{eqri}. Define the recruitment probability $p_r = r_+y^*/(1+r_+y^*) = \mathbb{P}(A\rightarrow E\cup F \cup G)$, which is the probability of finding a partner before recovering and depends only on $r_+,r_-$. Define $a = 1+\lambda+r_-$ and $b = 2+r_-$, which are the rates at which the Markov chain of Figure~\ref{figri} jumps away from states $B$ and $C$, respectively. Also, let \begin{eqnarray*} && \sigma= \sum_{k=0}^{\infty} \biggl( \frac{\lambda}{a}\frac{2}{b} \biggr)^k = \frac{ab}{ab-2\lambda}. \end{eqnarray*} It is easy to check that $ab>2\lambda$. Notice that any path from $A$ to $E\cup F\cup G$ must first jump to $B$ and then go around the $B,C$ loop some number of times before being absorbed at $E$, $F$ or $G$; $\sigma$ accounts for this looping. Summing probabilities over all possible paths, we find \begin{eqnarray*} && \mathbb{P}(A \rightarrow F) = p_r\sigma\frac{r_-}{a}\quad \mbox{and} \quad\mathbb{P}(A\rightarrow G) = p_r\sigma \frac{\lambda}{a}\frac{r_-}{b}, \end{eqnarray*} so we obtain the explicit expression \begin{eqnarray*} && R_0 = p_r\sigma r_-(1 + 2\lambda/b)/a, \end{eqnarray*} which after re-substituting and a bit of algebra gives \begin{equation} \label{r0eq0} R_0 = p_rr_-\frac{b+2\lambda}{ab-2\lambda} = p_rr_-\frac{2+r_-+2\lambda}{2+3r_-+\lambda r_-+r_-^2}. \end{equation} \begin{lemma}\label{r0up} Fixing $r_+$ and $r_-$, $R_0$ is continuous and increasing with respect to $\lambda$. \end{lemma} \begin{pf} Continuity is obvious from the formula above. We write $R_0(\lambda)$ and compute the derivative $R_0'(\lambda)$, noting that $p_r$ is fixed.
Letting $c_1=2+r_-$, $c_2=2$, $c_3=2+3r_-+r_-^2$ and $c_4=r_-$, $R_0(\lambda)=p_rr_-(c_1+c_2\lambda)/(c_3+c_4\lambda)$ so $R_0'(\lambda ) = p_rr_-(c_2c_3-c_1c_4)/(c_3+c_4\lambda)^2$ and $c_2c_3-c_1c_4 = 4+4r_-+r_-^2 > 0$ so $R_0'(\lambda)>0$. \end{pf} From this, it follows that for fixed $r_+,r_-$, if $R_0(\lambda)=1$ has a solution then it is unique and is equal to $\lambda_c$. So, setting $R_0=1$ gives \begin{equation} \label{r0eq1} p_rr_-(2+r_-+2\lambda_c) = 2+3r_-+ \lambda_c r_-+r_-^2. \end{equation} To get a handle on this equation, we first examine the limit of large $r_+$, that is, quick formation of partnerships. As noted in Section~\ref{secedge}, $y^* \sim1/\sqrt{\alpha} = \sqrt{r_-}/\sqrt{r_+}$ as $\alpha =r_+/r_-\rightarrow\infty$, so for fixed $r_-$, $r_+y^* \sim\sqrt {r_-r_+}\rightarrow\infty$, and so $p_r\rightarrow1$, as $r_+\rightarrow\infty$. Setting $p_r=1$ in the equation above, after cancelling like terms and dividing both sides by $r_-$ gives \begin{eqnarray*} && \lambda_c = 1 + 2/r_- \end{eqnarray*} for fixed $r_-$, when $r_+=\infty$. For the contact process on a large complete graph $\lambda_c=1$, so here the only difference is the term $2/r_-$, which makes it harder for the infection to spread when partnerships last a long time. Accounting for $p_r$, we still get a fairly nice expression. From \eqref {r0eq1}, putting all terms involving $\lambda_c$ on the left and all other terms on the right gives \begin{eqnarray*} && \lambda_c r_-(2p_r-1) = 2 + (3-2p_r)r_- + r_-^2(1-p_r). \end{eqnarray*} Letting $\beta= 2p_r-1$ then substituting for $\beta$ and dividing by $r_-$ gives \begin{equation} \label{r0eq2} \lambda_c\beta= 2/r_- + (2-\beta) + (1/2)r_-(1- \beta). \end{equation} This equation suggests that we view $\lambda\beta$ as a sort of force of infection, which makes sense as $\lambda$ is the transmission rate and $\beta=2p_r-1$ measures the chance of finding a partner before recovering. 
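The closed form for $\lambda_c$ stated in Theorem~\ref{thm2} can be checked numerically against \eqref{r0eq0}: plugging it into $R_0$ should return exactly $1$, and for fixed $r_-$ it should approach $1+2/r_-$ as $r_+\rightarrow\infty$. A sketch (our notation, illustrative parameters):

```python
import math

def ystar(rp, rm):
    alpha = rp / rm
    return (math.sqrt(1 + 4 * alpha) - 1) / (2 * alpha)

def R0(lam, rp, rm):
    """Explicit expression for R_0 in terms of lam, r_+ and r_-."""
    ys = ystar(rp, rm)
    pr = rp * ys / (1 + rp * ys)
    return pr * rm * (2 + rm + 2 * lam) / (2 + 3 * rm + lam * rm + rm ** 2)

def lambda_c(rp, rm):
    """Closed form for lambda_c from the main theorem; requires r_+ y* > 1."""
    k = rp * ystar(rp, rm) - 1
    return (2 / rm) * (2 / k) + 2 / rm + 4 / k + 1 + rm / k
```

For $r_+=5$, $r_-=1$ this gives $\lambda_c\approx14.37$, with $R_0<1$ just below and $R_0>1$ just above, as Lemma~\ref{r0up} requires.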
Although $\beta$ depends on $r_-$, $-1\leq\beta\leq1$ regardless, so we see from \eqref{r0eq2} that for fixed $\lambda$, if $r_-$ is either too small or too large, the infection cannot spread. The reason for this can be understood as follows: if $r_-$ is too small, partners tend both to recover before breaking up and transmitting the infection to anyone else, whereas if $r_-$ is too large, partnerships do not last long enough for transmission to occur. Using \eqref{r0eq2}, we can now prove the first assertion of Theorem~\ref{thm2}. \begin{proposition}\label{thm2.1} For fixed $r_+,r_-$ and $\lambda_c$ given by \eqref{eqlc}, $\lambda _c<\infty$ if and only if $r_+y^*>1$, if and only if $r_+>1+1/r_-$ and in this case \begin{eqnarray*} && \lambda_c = \frac{2}{r_-}\frac{2}{(r_+y^*-1)} + \frac{2}{r_-} + \frac {4}{r_+y^*-1} + 1 + \frac{r_-}{r_+y^*-1}. \end{eqnarray*} \end{proposition} \begin{pf} It is easy to check, using the formula $y^* = (r_-/(2r_+))(-1+(1+4r_+/r_-)^{1/2})$, that $r_+y^*>1$ if and only if $r_+>1+1/r_-$. Since $\beta\in[-1,1]$, the right-hand side of \eqref {r0eq2} is positive, so to have a solution it is necessary that $\beta >0$; dividing by $\beta$ on both sides shows that it is also sufficient. Then observe that $\beta>0$ if and only if $r_+y^*>1$. To get the formula for $\lambda_c$, divide by $\beta$ in \eqref{r0eq2} and observe that $\beta^{-1}-1=2/(r_+y^*-1)$. \end{pf} \begin{figure} \includegraphics{1117f02.eps} \caption{Level curves of $\lambda_c$ depicted in the $r_+,r_-$ plane. Starting from the top curve and going down, $\lambda _c=3,5,8,13,21,34,\infty$.} \label{figlc} \end{figure} Figure~\ref{figlc} shows level curves of $\lambda_c$ in the $r_+,r_-$ plane. Using the formula for $\lambda_c$, we can see how it scales in various limits of $r_+,r_-$ and $\alpha$. First, we see what happens when we speed up and slow down the partnership dynamics. 
Let $\alpha$ be fixed (and by extension, $y^*$) and let $r_-^*$ denote the unique value of $r_-$ such that $r_+y^*=1$. We find that: \begin{itemize} \item $\lambda_c \downarrow1 + 1/(\alpha y^*)$ as $r_+ \uparrow\infty$ (fast partner dynamics), \item $\lambda_c (r_+y^*-1)\downarrow4/r_-^* + 4 + r_-^*$ as $r_+y^* \downarrow1$ (slow partner dynamics). \end{itemize} In particular, in the limit of fast partner dynamics $\lambda_c$ approaches its value for the contact process on a complete graph, plus a correction for the proportion of available singles. In the slow limit, that is, as the recruitment probability approaches $1/2$, $\lambda_c$ diverges like $1/(r_+y^*-1)$, with a proportionality constant that itself diverges as $r_-^*$ approaches either $0$ or $\infty$. Now we fix $r_+>1$ and vary $r_-$. Note that $y^* \downarrow0$ as $r_- \downarrow0$: \begin{itemize} \item as $r_- \uparrow\infty$, $y^* \uparrow1$, $\alpha\downarrow0$ and $\lambda_c/r_- \downarrow1/(r_+-1)$, and \item as $r_+y^* \downarrow1$, $\lambda_c (r_+y^*-1) \downarrow 4/r_-^* + 4 + r_-^*$. \end{itemize} Here, in both limits $\lambda_c$ diverges, in the first case like $r_-$ and in the second case like $1/(r_+y^*-1)$. Finally, we fix $r_-$ and vary $r_+$, and we find that: \begin{itemize} \item as $r_+ \uparrow\infty$, $y^* \sim1/\sqrt{\alpha} = \sqrt{r_-/r_+}$ and $\lambda_c \rightarrow1 + 2/r_-$, and \item as $r_+y^* \downarrow 1$, $\lambda_c (r_+y^*-1) \downarrow4/r_- + 4 + r_-$. \end{itemize} The first limit agrees with the previous large $r_+$ approximation, and the second limit shows that when $r_+y^*$ is close to 1, $\lambda (r_+y^*-1)/2 \approx\lambda(r_+y^*-1)/(r_+y^*+1) = \lambda\beta$ behaves like the force of infection, and we require again that $r_-$ be neither too small nor too large in order for the infection to be able to spread. We now examine $\Delta(i)$, defined in \eqref{eqdi}.
\begin{lemma}\label{deltalemma} $\Delta(0)=R_0-1$, and: \begin{itemize} \item if $R_0<1$ the equation $\Delta(i)=0$ has no solution $i \in[0,y^*]$, \item if $R_0=1$ the equation $\Delta(i)=0$ has the unique solution $i=0$ and \item if $R_0>1$ the equation $\Delta(i^*)=0$ has a unique solution $i^* \in(0,y^*)$. \end{itemize} \end{lemma} \begin{pf} Letting $z = 1+r_+(y^*-i/2)$ we recall the definition: \begin{equation} \label{deq1} \Delta(i) = p_S\Delta_S + p_{\mathit{II}}\Delta_{\mathit{II}} + p_{\mathit{SI}}\Delta_{\mathit{SI}} \end{equation} with $p_S = 1/z$, $p_{\mathit{SI}} = r_+(y^*-i)/z$, $p_{\mathit{II}} = r_+i/(2z)$, $\Delta _S=-1$, $\Delta_{\mathit{II}} = -2+\mathbb{P}(C\rightarrow F)+2\mathbb {P}(C\rightarrow G)$ and $\Delta_{\mathit{SI}} = -1+\mathbb{P}(B\rightarrow F)+2\mathbb{P}(B\rightarrow G)$, where probabilities are with respect to the Markov chain in Figure~\ref{figri}. First, we show $\Delta(0)=R_0-1$. If $i=0$ then $p_S = 1/(1+r_+y^*) = \mathbb{P}(A\rightarrow D)$, $p_{\mathit{II}}=0$ and $p_{\mathit{SI}} = r_+y^*/(1+r_+y^*) = \mathbb{P}(A\rightarrow B)$ so \begin{eqnarray*} \Delta(0) &=& -\mathbb{P}(A\rightarrow D) + \mathbb{P}(A\rightarrow B) \bigl(-1 + \mathbb{P}(B\rightarrow F) + 2\mathbb{P}(B\rightarrow G)\bigr) \\ &=& -\mathbb{P}(A\rightarrow D\cup B) + \mathbb{P}(A\rightarrow F)+2\mathbb{P}(A \rightarrow G) \\ &=& -1 + R_0. \end{eqnarray*} It is easy to check that $\Delta_{\mathit{II}}\leq0$, so if $\Delta_{\mathit{SI}}\leq0$ then $\Delta(i)<0$ for $i \in[0,y^*]$, since $p_S>0$ and $\Delta_S<0$, and the other terms are $\leq0$. Since $\partial_i z = -r_+/2$, we find \begin{eqnarray*} && \partial_i p_S = r_+/2z^2 >0\quad \mbox{and}\quad\partial_i p_{\mathit{II}} = r_+/(2z) + r_+^2i/\bigl(4z^2\bigr) >0 \end{eqnarray*} and since $p_{\mathit{SI}}=1-(p_S+p_{\mathit{II}})$, $\partial_i p_{\mathit{SI}}=-\partial_ip_S -\partial_i p_{\mathit{II}}<0$. 
If $\Delta_{\mathit{SI}}>0$ it follows that $\partial_i \Delta(i)<0$, so if $R_0<1$ but $\Delta_{\mathit{SI}}>0$ then $\Delta(i) \leq \Delta(0)<0$ for $i \in[0,y^*]$. If $R_0 \geq1$, then since $0 \leq \Delta(0) = p_S\Delta_S + p_{\mathit{SI}}\Delta_{\mathit{SI}}$ and $\Delta_S<0$, it follows that $\Delta_{\mathit{SI}}>0$ and so $\partial_i \Delta(i)<0$. If $R_0=1$, then since $\Delta(0)=0$ it follows that $i=0$ is the only solution in $[0,y^*]$ to the equation $\Delta(i)=0$. If $i=y^*$ then $p_{\mathit{SI}}=0$, so $\Delta(y^*)<0$, and clearly $\Delta(i)$ is continuous on $[0,y^*]$. Therefore, if $R_0>1$ then since $\Delta(0)>0$, by the intermediate value theorem the equation $\Delta(i^*)=0$ has a solution $i^* \in(0,y^*)$, and since $\partial_i \Delta(i)<0$ the solution is unique. \end{pf} Write $\Delta(i)$ as $\Delta(\lambda,i)$ to emphasize the $\lambda$ dependence. By Lemma~\ref{deltalemma} and since $R_0=1\Leftrightarrow \lambda=\lambda_c$ and $R_0>1\Leftrightarrow\lambda>\lambda_c$, for fixed $r_+,r_-$ such that $r_+y^*>1$, we have a function $i^*(\lambda)$ defined for $\lambda\geq\lambda_c$ satisfying $\Delta(\lambda,i^*(\lambda))=0$ such that $i^*(\lambda_c)=0$ and $i^*(\lambda)>0$ for $\lambda>\lambda_c$. Next, we see how $i^*$ behaves for $\lambda>\lambda_c$ near~$\lambda_c$. As usual, $C^1$ means continuously differentiable.
In view of \eqref{deq1}, this means that $\Delta(\lambda,i)$ is $C^1$ in a neighbourhood of $(\lambda_c,0)$. If $\lambda\geq\lambda_c$ then $R_0\geq1$, so as shown in the proof of Lemma~\ref{deltalemma}, $\partial_i \Delta(\lambda,i)<0$ and in particular, $\partial_i \Delta(\lambda_c,0)\neq0$. Applying the implicit function theorem, there is a unique $C^1$ function $i^*(\lambda)$ defined in a neighbourhood of $\lambda_c$ (and thus coinciding with the previous definition of $i^*(\lambda)$ when $\lambda\geq\lambda_c$) satisfying $\Delta(\lambda,i^*(\lambda))=0$, and noting that $i^*(\lambda_c)=0$, \begin{eqnarray*} && i^*(\lambda) \sim-(\lambda-\lambda_c)\frac{\partial_{\lambda}\Delta(\lambda_c,0)}{\partial_i \Delta(\lambda_c,0)} \end{eqnarray*} as $\lambda\downarrow\lambda_c$. A straightforward Markov chain coupling argument shows that $\partial_{\lambda}\Delta_{\mathit{SI}},\partial_{\lambda}\Delta_{\mathit{II}}>0$, which implies $\partial_{\lambda}\Delta(\lambda,i)>0$. Since $\partial_i\Delta(\lambda,i)<0$, the result follows. \end{pf} \section{Mean-field equations}\label{secmf} The set of differential equations defined below is indispensable to our analysis of the partner model, as it provides an approximation to the evolution of the model that improves as $N$ increases. First, we write down the transitions for the variables introduced in Section~\ref{secmain} that track the total number of singles and pairs of various types; there are ten such transitions.
The existence of well-defined transitions shows that $(S_t,I_t,\mathit{SS}_t,\mathit{SI}_t,\mathit{II}_t)$ is a continuous time Markov chain: \begin{itemize} \item$I\rightarrow I-1$ and $S\rightarrow S+1$ at rate $I$, \item$S \rightarrow S-2$ and $\mathit{SS} \rightarrow \mathit{SS}+1$ at rate $(r_+/N)S(S-1)/2$, \item$S\rightarrow S-1$, $I \rightarrow I-1$ and $\mathit{SI} \rightarrow \mathit{SI}+1$ at rate $(r_+/N)\cdot S \cdot I$, \item$I \rightarrow I-2$ and $\mathit{II} \rightarrow \mathit{II}+1$ at rate $(r_+/N)I(I-1)/2$, \item$\mathit{SI} \rightarrow \mathit{SI}-1$ and $\mathit{SS} \rightarrow \mathit{SS}+1$ at rate $\mathit{SI}$, \item$\mathit{II} \rightarrow \mathit{II}-1$ and $\mathit{SI} \rightarrow \mathit{SI}+1$ at rate $2\mathit{II}$, \item$\mathit{SI} \rightarrow \mathit{SI}-1$ and $\mathit{II} \rightarrow \mathit{II}+1$ at rate $\lambda \mathit{SI}$, \item$\mathit{SS} \rightarrow \mathit{SS}-1$ and $S \rightarrow S+2$ at rate $r_-\mathit{SS}$, \item$\mathit{SI} \rightarrow \mathit{SI}-1$, $S \rightarrow S+1$ and $I\rightarrow I+1$ at rate $r_-\mathit{SI}$, and \item$\mathit{II} \rightarrow \mathit{II}-1$ and $I \rightarrow I+2$ at rate $r_-\mathit{II}$. \end{itemize} Focusing now on the rescaled quantities $(s_t,i_t,ss_t,si_t,ii_t)=(S_t,I_t,\mathit{SS}_t,\mathit{SI}_t,\break \mathit{II}_t)/N$ and noting the relation $s_t+i_t+2(ss_t+si_t+ii_t)=1$, we shall ignore $ss_t$ since it plays no role in the calculations that follow. Also, it will be convenient to use $y_t:=s_t+i_t$ instead of $s_t$. 
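The ten transitions above translate directly into a Gillespie simulation of the chain. The sketch below (parameters, seed and averaging window are illustrative choices of ours) starts from all-infectious singles and records the time-averaged fraction of infectious singles, which for supercritical parameters settles near a positive endemic value:

```python
# Gillespie simulation of the partner model using the ten transitions.
import random

def simulate(N=500, lam=30.0, rp=20.0, rm=1.0, T=30.0, seed=7):
    """Time-averaged fraction of infectious singles over [T/2, T],
    starting from all-infectious singles."""
    random.seed(seed)
    S, I, SS, SI, II = 0, N, 0, 0, 0
    t, area, t0 = 0.0, 0.0, T / 2
    while t < T:
        rates = [
            I,                            # I -> S (recovery of a single)
            (rp / N) * S * (S - 1) / 2,   # S + S -> SS
            (rp / N) * S * I,             # S + I -> SI
            (rp / N) * I * (I - 1) / 2,   # I + I -> II
            SI,                           # SI -> SS (recovery)
            2 * II,                       # II -> SI (recovery)
            lam * SI,                     # SI -> II (infection)
            rm * SS,                      # SS -> S + S (breakup)
            rm * SI,                      # SI -> S + I (breakup)
            rm * II,                      # II -> I + I (breakup)
        ]
        total = sum(rates)
        dt = random.expovariate(total)
        if t > t0:
            area += (I / N) * min(dt, T - t)
        t += dt
        u, k = random.random() * total, 0
        while k < 9 and u > rates[k]:     # pick the transition
            u -= rates[k]
            k += 1
        if k == 0:   S, I = S + 1, I - 1
        elif k == 1: S, SS = S - 2, SS + 1
        elif k == 2: S, I, SI = S - 1, I - 1, SI + 1
        elif k == 3: I, II = I - 2, II + 1
        elif k == 4: SI, SS = SI - 1, SS + 1
        elif k == 5: II, SI = II - 1, SI + 1
        elif k == 6: SI, II = SI - 1, II + 1
        elif k == 7: SS, S = SS - 1, S + 2
        elif k == 8: SI, S, I = SI - 1, S + 1, I + 1
        else:        II, I = II - 1, I + 2
    return area / (T - t0)
```

With $r_+=20$, $r_-=1$ and $\lambda=30$ (supercritical), the heuristic root of $\Delta(i^*)=0$ is $i^*=0.12$, and the simulated average hovers near it; with $\lambda=1$ (subcritical), the infection dies out.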
Doing so, the above transitions become: \begin{itemize} \item$i\rightarrow i-1/N$ at rate $iN$, \item$y \rightarrow y-2/N$ at rate $[r_+(y-i)(y-i-1/N)/2]N = [r_+(y-i)^2/2]N - r_+(y-i)/2$, \item$y \rightarrow y-2/N$, $i \rightarrow i-1/N$ and $si \rightarrow si+1/N$ at rate $r_+(y-i)iN$, \item$y \rightarrow y-2/N$, $i \rightarrow i-2/N$ and $ii \rightarrow ii+1/N$ at rate $[r_+i(i-1/N)/2]N = (r_+i^2/2)N - r_+i/2$, \item$si \rightarrow si-1/N$ at rate $siN$, \item$ii \rightarrow ii-1/N$ and $si \rightarrow si+1/N$ at rate $2iiN$, \item$si \rightarrow si-1/N$ and $ii \rightarrow ii+1/N$ at rate $\lambda siN$, \item$y\rightarrow y+2/N$ at rate $[r_-((1-y)/2-(si+ii))]N$, \item$si \rightarrow si-1/N$, $y \rightarrow y+2/N$ and $i\rightarrow i+1/N$ at rate $r_-siN$, and \item$ii \rightarrow ii-1/N$, $y\rightarrow y+2/N$ and $i \rightarrow i+2/N$ at rate $r_-iiN$. \end{itemize} As we did for $y_t$ in Section~\ref{secedge}, we derive some differential equations that approximate the evolution of $(y_t,i_t,si_t,ii_t)$; since we already have an equation for $y_t$ we focus on $i_t,si_t,ii_t$. We have \begin{eqnarray*} \frac{d}{dt}\mathbb{E}(i_t \vert i_t=i) &=& -\bigl(1+r_+(y-i)+2r_+(i-1/N)/2\bigr)i + r_-(si+2ii), \\ \frac{d}{dt}\mathbb{E}(si_t \vert si_t=si) &=& r_+(y-i)i+2ii-(1+\lambda +r_-)si, \\ \frac{d}{dt}\mathbb{E}(ii_t \vert ii_t=ii) &=& r_+i(i-1/N)/2 +\lambda si - (2+r_-)ii \end{eqnarray*} and as before, in a time step of size $1/N$ the increment in each variable has expected value $O(1/N)$ while its square has expected value $O(1/N^2)$. 
Adding in the $y'$ equation \eqref{eqe}, this suggests again that in the limit as $N\rightarrow\infty$ we should expect the sample paths of $(y_t,i_t,si_t,ii_t)$ to approach solutions to the \emph{mean-field equations} \begin{eqnarray} y' &=& -r_+y^2 + r_-(1-y), \nonumber \\ i' &=& -(1+r_+y)i + r_-(si+2ii), \nonumber \\[-8pt] \label{mfeq} \\[-8pt] \nonumber si' &=& r_+(y-i)i -(1+\lambda+r_-)si + 2ii, \\ \nonumber ii' &=& r_+i^2/2 + \lambda si -(2+r_-)ii. \end{eqnarray} It is sometimes convenient to replace $si$ with $ip := si+ii$, where the $ip$ stands for ``infected partnership''. Since $si=ip-ii$, both forms lead to the same solutions. After the change of variables, we have \begin{eqnarray} y' &=& -r_+y^2 + r_-(1-y), \nonumber \\ i' &=& -(1+r_+y)i + r_-(ip+ii), \nonumber \\[-8pt] \label{mfeq2} \\[-8pt] \nonumber ip' &=& r_+(y-i/2)i-(1+r_-)ip+ii, \\ \nonumber ii' &=& r_+i^2/2 + \lambda ip -(2+r_-+ \lambda)ii. \end{eqnarray} We will often use the shorthand $u'=F(u)$ for the MFE \eqref{mfeq} or \eqref{mfeq2}, where $u\in\mathbb{R}^4$. In both cases the MFE have the form $y'=f(y),u'=G(y,u)$, where \mbox{$u\in\mathbb{R}^3$}, that is, the $y$ dynamics does not depend on the other 3 variables, but it does influence them; systems of this form are often referred to as \emph {skew product}. The next three results have natural analogues for the stochastic model, and in fact the analogue of Lemma~\ref{mfmt} shows up in Section~\ref{secmacro} as Lemma~\ref{pmmt}. First, we show the domain of interest is an invariant set. \begin{lemma}\label{mfatt} The following set is invariant for the MFE: \begin{eqnarray*} &&\Lambda:= \bigl\{(y,i,ip,ii) \in\mathbb{R}^4_+:i\leq y \leq1,ii \leq ip \leq(1-y)/2\bigr\}. \end{eqnarray*} \end{lemma} \begin{pf} We examine the boundary and use the form \eqref{mfeq2} of the MFE. If $y=0$ then $y'>0$ and if $y=1$ then $y'<0$, so $[0,1]$ is invariant for $y$. Let $u=(i,ip,ii)$. If $u=(0,0,0)$, then $u'=(0,0,0)$, so $(0,0,0)$ is invariant for $u$. 
If $u\neq(0,0,0)$ and $u_j=0$ for coordinate $j$,\vspace*{-1pt} then $u_j'>0$ (note for $ip'$ that since $i\leq y$, if $i>0$ then $y-i/2>0$). If $i=y \neq0$, then since $ip+ii\leq(1-y)$, $i' \leq-y - r_+y^2 + r_-(1-y) =-y+y'< y'$. If $i=y=0$ then $i' \leq-y+y'=y'$ and since $y'>0$, $i'' \leq-y' + y'' < y''$. For the remainder, we may assume $i<y$. If $ii=ip \neq0$, then $ii' = r_+i^2/2 -(2+r_-)ip \leq r_+(y-i/2)i -(2+r_-)ip < ip'$ while if $ii=ip=0$ then we may assume $i>0$ in which case $ii' = r_+i^2/2 < r_+(y-i/2)i = ip'$. \end{pf} Written in the form \eqref{mfeq2}, the MFE have a useful monotonicity property which is described in the following lemma. \begin{lemma}\label{mfmt} Let $(y(t),u(t))$ and $(y(t),v(t))$ be solutions to the MFE written in $(y,i,ip,ii)$ coordinates, and say that $u\leq v \Leftrightarrow u_j\leq v_j \ \forall j \in\{1,2,3\}$. If $u(0)\leq v(0)$, then $u(t)\leq v(t)$ for $t>0$. \end{lemma} \begin{pf} Since trajectories are continuous it suffices to check that if $u\leq v$, $u\neq v$ and $u_j=v_j$ then $u_j' < v_j'$. Referring to \eqref {mfeq2}, $i'$ increases with $ip$ and $ii$, $ip'$ increases with $i$ and $ii$ [note $\partial_i(y-i/2)i = y-i$ and $i\leq y$] and $ii'$ increases with $i$ and $ip$. \end{pf} For what follows, we set $y=y^*$ in which case the MFE are three-dimensional. Since $\Lambda$ is invariant, \begin{eqnarray*} && \Lambda^* := \bigl\{(y,u) \in\Lambda:y=y^*\bigr\} \end{eqnarray*} is also invariant. Since $\Lambda^* \cong\{(i,ip,ii)\in\mathbb {R}^3_+:i\leq y^*, ii\leq ip \leq(1-y^*)/2\}$ is three-dimensional, elements of $\Lambda^*$ are usually written as a three-vector in either $(i,si,ii)$ or $(i,ip,ii)$ coordinates. \begin{lemma}\label{mfthmsuff} Say that $u=(i,ip,ii)$ is \emph{increasing} if $u_j'>0$ in each coordinate. 
For the MFE with $y=y^*$ and any solution $u(t)$: \begin{itemize} \item if $(0,0,0)$ is the only equilibrium then $u(t)\rightarrow (0,0,0)$ as $t\rightarrow\infty$, and \item if there is a unique equilibrium $u^*\neq(0,0,0)$ and a sequence of nonzero increasing states tending to $(0,0,0)$, then for $u(0)\neq (0,0,0)$, $u(t)\rightarrow u^*$ as $t\rightarrow\infty$. \end{itemize} \end{lemma} \begin{pf} Defining $\overline{u}:=(y^*,(1-y^*)/2,(1-y^*)/2)$, $\overline{u}\geq v$ for all $v\in\Lambda^*$, so letting $\overline{u}(t)$ be the solution to the MFE with $\overline{u}(0)=\overline{u}$, for $s\geq0$, $\overline{u}(0)\geq\overline{u}(s)$. Since $y=y^*$, by monotonicity (Lemma~\ref{mfmt}) $\overline{u}(t)\geq\overline{u}(t+s)$ for $t>0$, so $\overline{u}(t)$ is nonincreasing in $t$. Since $\Lambda^*$ is compact, $\lim_{t\rightarrow\infty}\overline{u}(t)$ exists and by continuity of the MFE is an equilibrium. If $(0,0,0)$ is the only equilibrium, then since $\overline{u}(t)\geq(0,0,0)$, $\overline {u}(t)\rightarrow(0,0,0)$ as $t\rightarrow\infty$, so for any solution $v(t)$, since $\overline{u}(0)\geq v(0)$, $\overline{u}(t)\geq v(t)$ for $t>0$, and since $v(t)\geq(0,0,0)$, $v(t)\rightarrow(0,0,0)$. If $u(0)$ is increasing, then $u(0)\neq(0,0,0)$ and by continuity of the MFE there is $\varepsilon>0$ so that $u(s) \geq u(0)$ for $0 \leq s \leq\varepsilon$. By monotonicity $u(t+s) \geq u(t)$ for $0 \leq s \leq \varepsilon$ and if $(k-1)\varepsilon\leq s \leq k\varepsilon$, by iterating at most $k$ times $u(t+s)\geq u(t)$, so $u(t)$ is increasing for all time. As in the previous case, $\lim_{t\rightarrow\infty}u(t)$ exists and is an equilibrium which in this case is not $(0,0,0)$. 
If there is a unique equilibrium $u^* \neq(0,0,0)$, and if for any nonzero solution $v(t)$ there is $T>0$ so that $v(T)\geq u$ for some increasing $u$, then setting $u(T)=u$, since $\overline{u}(t)\geq v(t) \geq u(t)$ for $t \geq T$ and $\lim_{t\rightarrow\infty}\overline{u}(t)=\lim_{t\rightarrow\infty}u(t) = u^*$ it follows that $\lim_{t\rightarrow\infty}v(t) = u^*$. If $v(0)\neq(0,0,0)$, then for $t>0$, $v_j(t)>0$ in each coordinate $j$; this follows from the fact that for $j=1,2,3$, $v_j' \geq-Cv_j$ for some $C$, and if $v_j=0$ but $v_k>0$ for some $k\neq j$ then $v_j'>0$. Thus, fixing $T>0$, if $v(0)\neq(0,0,0)$ then since $\varepsilon:=\min_j v_j(T)>0$, if there is a sequence of increasing states tending to $(0,0,0)$ there is an increasing state $u$ with $\max_j u_j \leq\varepsilon$, and thus $v(T)\geq u$, as desired. \end{pf} As the next result shows, on $\Lambda^*$ the MFE have a simple dynamics with a bifurcation at $R_0=1$. Since we refer back to quantities from Section~\ref{secsurv}, in this proof we mostly use $(i,si,ii)$ coordinates. \begin{theorem}\label{mfthm} For the MFE: \begin{itemize} \item if $R_0 \leq1$ there is the unique equilibrium $(0,0,0)$ which is attracting on $\Lambda^*$ and \item if $R_0>1$ there is a unique positive equilibrium $(i^*,si^*,ii^*)$ satisfying $\Delta(i^*)=0$ which is attracting on $\Lambda^*\setminus\{(0,0,0)\}$. \end{itemize} \end{theorem} \begin{pf} By Lemma~\ref{mfthmsuff} it is enough to show that if $R_0\leq1$ then $(0,0,0)$ is the only equilibrium, and that if $R_0>1$ there is a unique equilibrium $(i^*,si^*,ii^*)\neq(0,0,0)$ satisfying $\Delta(i^*)=0$, and a sequence of increasing states converging to $(0,0,0)$.
Treating $si,ii$ as a separate system with input function $i$, we have the nonhomogeneous linear system \begin{eqnarray*} && \pmatrix{si' \cr ii'} = \pmatrix{-a & 2 \cr \lambda& -b} \pmatrix{si \cr ii } + r_+i \pmatrix{\bigl(y^*-i\bigr) \cr i/2 } \end{eqnarray*} or, in matrix form, $v' = Kv + Li$, with $v = (si,ii)^{\top}$, $K$ the $2\times2$ coefficient matrix in the display above and $L = r_+((y^*-i),i/2)^{\top}$, whose solution is given by \begin{equation} \label{subsyssol} v(t) = \Phi(t)v(0) + \int_0^t \Phi(t-s)L(s)i(s)\,ds, \end{equation} where $\Phi(t) = \exp(Kt)$ is the solution of the associated homogeneous system---note that $\Phi(t)$ is the restriction of the transition semigroup for the continuous-time Markov chain from Figure~\ref{figri} to the states $B$ and $C$. Substituting the solution for the $si,ii$ system into the equation for $i$, we have \begin{equation} \label{ieq} i'(t) = -\bigl(1+r_+y^*\bigr)i(t) + r_-(1,2) \biggl[ \Phi(t)v(0) + \int_0^t \Phi(t-s)L(s)i(s)\,ds \biggr], \end{equation} where $(1,2)$ is a row vector that multiplies the column vector in the square brackets. This equation depends only on $i$, the initial values $v(0) = (si(0),ii(0))^{\top}$ and the solution matrix $\Phi(t)$. Linearizing \eqref{ieq} around $(i,si,ii)=(0,0,0)$ and using the ansatz $i(t)=\exp(\mu t)$, we obtain \begin{eqnarray*} &&\mu e^{\mu t} = -\bigl(1+r_+y^*\bigr)e^{\mu t} + r_-(1,2) \biggl[\Phi(t)v_0 + \biggl(\int_0^t \Phi(t-s)e^{\mu s}\,ds \biggr)L_0 \biggr], \end{eqnarray*} where $L_0 = r_+(y^*,0)^{\top}$, and using $\Phi(t)=\exp(Kt)$ the integral above is \begin{eqnarray*} && e^{Kt}\int_0^t e^{(\mu I-K)s} \,ds = e^{Kt}(\mu I-K)^{-1}\bigl(e^{(\mu I-K)t} - I\bigr) = (\mu I-K)^{-1}\bigl(e^{\mu t} - e^{Kt}\bigr), \end{eqnarray*} where $I$ is the identity matrix.
Letting $t \rightarrow\infty$ and noting $\Phi(t) = e^{Kt} \rightarrow0$ since $K$ is a stable matrix, we obtain the eigenvalue equation \begin{eqnarray*} &&\mu= -\bigl(1+r_+y^*\bigr) + r_-(1,2) (\mu I-K)^{-1}L_0 \end{eqnarray*} which, expanding, is \begin{eqnarray*} \label{mueq} && \mu= -\bigl(1+r_+y^*\bigr) + r_-\frac{\mu+ b + 2\lambda}{(\mu+b)(\mu+a)-2\lambda}r_+y^* \end{eqnarray*} and setting $\mu=0$ gives the equation \begin{eqnarray*} && 1 = \frac{r_+y^*}{1+r_+y^*}\frac{r_-}{ab-2\lambda}(b+2\lambda) \end{eqnarray*} which, comparing to \eqref{r0eq0}, is exactly $R_0=1$. Recalling that $ab-2\lambda>0$, \begin{eqnarray*} &&\frac{d}{d\mu} \biggl( \frac{\mu+b+2\lambda}{(\mu+b)(\mu+a)-2\lambda} \biggr)\\ &&\qquad= \frac{(\mu+b)(\mu+a)-2\lambda- (\mu+b+2\lambda)(2\mu+ b + a)}{[(\mu+b)(\mu+a)-2\lambda]^2} \\ &&\qquad= \frac{-2\lambda-[(\mu+b)^2+2\lambda(2\mu+b+a)]}{[(\mu+b)(\mu+a)-2\lambda]^2} \end{eqnarray*} is negative when $\mu\geq0$. Now set $\mu=0$ in \eqref{mueq}: the right-hand side is positive if $R_0>1$. Since both sides are continuous in $\mu$, the left-hand side equals $0$ at $\mu=0$ and increases unboundedly as $\mu$ increases, while the right-hand side decreases with $\mu$, so \eqref{mueq} has a positive solution $\mu>0$ when $R_0>1$. To obtain the increasing states mentioned in Lemma~\ref{mfthmsuff}, we show that for $R_0>1$ the unstable eigenvector of the linearized system near $(0,0,0)$ is strictly positive when viewed in $(i,ip,ii)$ coordinates; we can then take for the initial states small multiples of the eigenvector. To show the eigenvector is strictly positive, linearize \eqref{subsyssol} around $(i,si,ii)=(0,0,0)$ with input $i(t) =\exp(\mu t)$, substitute the solution form $v(t) = v \exp(\mu t)$ and let $t\rightarrow\infty$ to obtain $v = (\mu I-K)^{-1}L_0$, which has positive entries; this implies that in $(ip,ii)$ coordinates it also has positive entries. It remains to look for nonzero equilibria.
Focusing again on \eqref{ieq}, as our steady state assumption we suppose the system was started in the distant past and has remained in equilibrium up to the present time. Since\vspace*{1pt} $\Phi(t)\rightarrow0$ as $t \rightarrow\infty$ we ignore $\Phi(t)v(0)$, and letting\vspace*{1pt} $\Phi_{\infty} = \int_0^{\infty}\Phi(s)\,ds = -K^{-1}$, $\int_0^t \Phi(t-s)L(s)i(s)\,ds$ becomes $\smash{\int_{-\infty }^t \Phi(t-s)L^{\dagger}i^{\dagger}\,ds} = \Phi_{\infty}L^{\dagger }i^{\dagger}$ where $L^{\dagger} = r_+((y^*-i^{\dagger}),i^{\dagger }/2)^{\top}$ and $i^{\dagger}$ are the equilibrium values, and we obtain \begin{eqnarray*} && \bigl(1+r_+y^*\bigr) = r_-(1,2)\Phi_{\infty}L^{\dagger}. \end{eqnarray*} Notice that $r_-(1,2)\Phi_{\infty}$ gives the expected number of infectious singles that result from an $\mathit{SI}$ or an $\mathit{II}$ partnership upon breakup, so we have $r_-(1,2)\Phi_{\infty} = (1+\Delta_{\mathit{SI}},2+\Delta _{\mathit{II}})$ and \begin{eqnarray*} \bigl(1+r_+y^*\bigr) &=& r_+\bigl[\bigl(y^*-i^{\dagger}\bigr) (1+ \Delta_{\mathit{SI}}) + \bigl(i^{\dagger }/2\bigr) (2+\Delta_{\mathit{II}}) \bigr] \\ &=& r_+y^* + r_+\bigl[\bigl(y^*-i^{\dagger}\bigr)\Delta_{\mathit{SI}} + \bigl(i^{\dagger}/2\bigr)\Delta_{\mathit{II}}\bigr] \end{eqnarray*} and subtracting $r_+y^*$ gives $1=r_+(y^*-i^{\dagger})\Delta_{\mathit{SI}} + r_+(i^{\dagger}/2)\Delta_{\mathit{II}}$, which, comparing with \eqref{deq1}, is exactly the equation $\Delta(i^{\dagger})=0$, as desired. By Lemma~\ref{deltalemma}, we have the unique solution $i^{\dagger} = i^*$ if $R_0>1$, and there is no positive solution when $R_0\leq1$. Using the steady state assumption and \eqref{subsyssol} gives $(si^{\dagger },ii^{\dagger})=\Phi_{\infty}L^{\dagger}i^{\dagger}$, that is, $si^{\dagger},ii^{\dagger}$ are uniquely determined by $i^{\dagger}$. This proves uniqueness of the nonzero equilibrium when $R_0>1$ and uniqueness of $(0,0,0)$ as an equilibrium when $R_0 \leq1$.
\end{pf} \begin{remark}\label{r0remark} Setting $y=y^*$ in \eqref{mfeq} and writing the remaining equations in matrix form, we have $u' = Au$ with $u = (i,si,ii)^{\top}$ and \begin{eqnarray*} A = \pmatrix{ -\bigl(1+r_+y^*\bigr) & r_- & 2r_- \cr r_+\bigl(y^*-i \bigr) & -a & 2 \cr r_+i & \lambda& -b } \end{eqnarray*} where the matrix $A$ depends on $u$ through $i$. Using the technique of \cite{watm}, if we evaluate $A$ at $i=0$ and write it as $F-V$ with \begin{eqnarray*} F = \pmatrix{ 0 & 0 & 0 \cr r_+y^* & 0 & 0 \cr 0 & 0 & 0 },\qquad V = \pmatrix{ \bigl(1+r_+y^*\bigr) & -r_- & -2r_- \cr 0 & a & -2 \cr 0 & - \lambda & b } \end{eqnarray*} and define $R_0 = \rho(FV^{-1})$ where $\rho$ is the spectral radius, then it can be verified that this definition of $R_0$ coincides with the one given in \eqref{eqri}. Then, according to Theorem~2 of \cite{watm}, $R_0<1$ implies $(0,0,0)$ is locally asymptotically stable, while $R_0>1$ implies it is unstable. \end{remark} \section{Approximation by the mean-field equations}\label{secstoch} In this section, we show how to approximate the sample paths of $(y_t,i_t,si_t,ii_t)$ with solutions to the MFE~\eqref{mfeq}, and use this to get some control on $y_t$. Unless otherwise noted, for a vector, $|\cdot|$ denotes the $\ell^{\infty}$ norm, that is, $|u| = \max_i |u_i|$. We begin with a useful definition. \begin{definition}\label{defwhp} An event $A$ depending on a parameter $n$ is said to hold \emph{with high probability} or w.h.p. in $n$ if there exist $\gamma>0$ and $n_0$ so that $\mathbb{P}(A) \geq1-e^{-\gamma n}$ when $n\geq n_0$. \end{definition} When possible, probability estimates are given more or less explicitly, but we will occasionally use this definition to reduce clutter, especially in Section~\ref{secmacro}. We begin with a well-known large deviations result for Poisson random variables; since it is not hard to prove, we supply the proof. For a reference to large deviations theory, see Section~1.9 in \cite{prob}.
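As a numerical sanity check of Remark~\ref{r0remark}, the spectral radius $\rho(FV^{-1})$ can be compared with the closed form of $R_0$ obtained from the eigenvalue computation. The parameter values below are illustrative, and the abbreviations $a=1+r_-+\lambda$ and $b=2+r_-$ for the total partnership exit rates are assumptions of this sketch (their definitions lie outside this excerpt):

```python
import numpy as np

# Illustrative parameters (assumed values, not from the text).
lam, r_plus, r_minus, y_star = 1.0, 2.0, 1.0, 0.5
a = 1.0 + r_minus + lam   # assumed total exit rate of an SI partnership
b = 2.0 + r_minus         # assumed total exit rate of an II partnership

F = np.array([[0.0, 0.0, 0.0],
              [r_plus * y_star, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
V = np.array([[1.0 + r_plus * y_star, -r_minus, -2.0 * r_minus],
              [0.0, a, -2.0],
              [0.0, -lam, b]])

# Next-generation definition: R0 = spectral radius of F V^{-1}.
R0_spectral = max(abs(np.linalg.eigvals(F @ np.linalg.inv(V))))

# Closed form: R0 = [r_+ y*/(1+r_+ y*)] * r_-(b+2*lam)/(ab-2*lam).
R0_closed = (r_plus * y_star / (1.0 + r_plus * y_star)
             * r_minus * (b + 2.0 * lam) / (a * b - 2.0 * lam))

assert abs(R0_spectral - R0_closed) < 1e-12
```

Since $F$ has rank one, $FV^{-1}$ has a single nonzero eigenvalue, which is why the spectral radius reduces to the scalar expression.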
\begin{lemma}\label{chern} Let $X$ be Poisson distributed with mean $\mu$. Then \begin{eqnarray*} \mathbb{P}\bigl(X>(1+\delta) \mu\bigr) &\leq& e^{-\delta^2\mu/4}\qquad \mbox{for } 0<\delta\leq1/2, \\ \mathbb{P}\bigl(X<(1-\delta)\mu\bigr) &\leq& e^{-\delta^2\mu/2}\qquad \mbox{for } \delta>0. \end{eqnarray*} \end{lemma} \begin{pf} We deal separately with $X > (1+\delta)\mu$ and $X<(1-\delta)\mu$. For $t>0$ and using Markov's inequality we have \begin{eqnarray*} && \mathbb{P}\bigl(X>(1+\delta)\mu\bigr) = \mathbb{P}\bigl(e^{tX}>e^{(1+\delta)t\mu} \bigr) \leq\mathbb{E}e^{tX}e^{-(1+\delta)t\mu}. \end{eqnarray*} Notice that \begin{eqnarray*} && \mathbb{E}e^{tX} = \sum_{k\geq0}e^{tk}e^{-\mu} \frac{\mu^k}{k!} = e^{-\mu}\sum_{k\geq0} \frac{(e^t\mu)^k}{k!} = e^{-\mu}e^{e^t\mu} = \exp \bigl( \bigl(e^t-1\bigr)\mu\bigr) \end{eqnarray*} so $\mathbb{E}e^{tX}e^{-(1+\delta)t\mu}=\exp(\mu(e^t-1-(1+\delta)t))$. Minimizing $e^t-1-(1+\delta)t$ gives $t=\log(1+\delta)$, and substituting, the minimized exponent is $(1+\delta)-1-(1+\delta)\log(1+\delta)=\delta-(1+\delta)\log(1+\delta)$. Since $\log(1+\delta) \geq\delta-\delta^2/2$, this is at most $\delta-(1+\delta)(\delta-\delta^2/2) = -\delta^2/2+\delta^3/2$ which is $\leq-\delta^2/4$ for $0<\delta\leq1/2$. For the other direction we take a similar approach. For $t>0$ and using Markov's inequality we have \begin{eqnarray*} && \mathbb{P}\bigl(X<(1-\delta)\mu\bigr) = \mathbb{P}\bigl(e^{-tX}>e^{-(1-\delta)t\mu} \bigr) \leq\mathbb{E}e^{-tX}e^{(1-\delta)t\mu} \end{eqnarray*} and using $\mathbb{E}e^{-tX} = \exp((e^{-t}-1)\mu)$ the right-hand side above is $\exp(\mu(e^{-t}-1+(1-\delta)t))$. Here we may assume $0<\delta<1$, since for $\delta\geq1$ we have $\mathbb{P}(X<(1-\delta)\mu)\leq\mathbb{P}(X<0)=0$ and the bound is trivial. Minimizing $e^{-t}-1+(1-\delta)t$ gives $-t=\log(1-\delta)$, and substituting, the minimized exponent is $(1-\delta)-1-(1-\delta)\log(1-\delta)=-\delta-(1-\delta)\log(1-\delta)$. Expanding $\log(1-\delta)=-\sum_{k\geq1}\delta^k/k$ gives $-\delta-(1-\delta)\log(1-\delta) = -\sum_{k\geq2}\delta^k/(k(k-1)) \leq-\delta^2/2$. \end{pf} For the next three results, we use the notation $u_t = (y_t,i_t,si_t,ii_t)$.
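Lemma~\ref{chern} can be checked against exact Poisson tail probabilities; the mean $\mu=40$ and the grid of $\delta$ values below are arbitrary test choices:

```python
import math

def poisson_cdf(n, mu):
    """P(X <= n) for X ~ Poisson(mu), by direct summation of the pmf."""
    if n < 0:
        return 0.0
    p, total = math.exp(-mu), math.exp(-mu)
    for k in range(1, n + 1):
        p *= mu / k
        total += p
    return total

mu = 40.0

# Upper tail: P(X > (1+delta)mu) <= exp(-delta^2 mu / 4) for 0 < delta <= 1/2.
for delta in (0.1, 0.25, 0.5):
    tail = 1.0 - poisson_cdf(math.floor((1 + delta) * mu), mu)
    assert tail <= math.exp(-delta**2 * mu / 4)

# Lower tail: P(X < (1-delta)mu) <= exp(-delta^2 mu / 2) for delta > 0.
for delta in (0.1, 0.5, 1.0):
    tail = poisson_cdf(math.ceil((1 - delta) * mu) - 1, mu)
    assert tail <= math.exp(-delta**2 * mu / 2)
```

The bounds are far from tight at these values (the exact tails are orders of magnitude smaller), which is consistent with the crude constants in the exponents.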
First, we give an a priori bound on the change in $u_t$ over a short period of time. \begin{lemma}\label{apriori} Let $u_t=(y_t,i_t,si_t,ii_t)$. There are constants $C,\gamma>0$ so that for all $h>0$ and fixed $t$, \begin{eqnarray*} && \mathbb{P}\Bigl(\sup_{t\leq s \leq t+h}|u_s-u_t| \leq Ch\Bigr) \geq1-e^{-\gamma Nh}. \end{eqnarray*} \end{lemma} \begin{pf} Looking at the transitions listed in Section~\ref{secmf}, jumps in $u_t$ are of size $\leq2/N$ and occur at total rate $\leq MN$ for some $M>0$ that depends only on parameters. Thus, in a time step $h>0$ the number of events affecting $u_t$ is stochastically bounded above by a Poisson random variable $X$ with mean $MNh$, so if $X \leq x$ then $|u_s-u_t|\leq2x/N$ for all $s \in[t,t+h]$. By Lemma~\ref{chern}, $\mathbb{P}(X>(1+\delta)MNh) \leq e^{-\delta^2MNh/4}$ for $0<\delta\leq 1/2$. Taking $\delta=1/4$ and $C = 2(1+\delta)M$, $\gamma= \delta^2M/4$ completes the proof. \end{pf} Let $u'=F(u)$ denote the MFE \eqref{mfeq}. As $N$ becomes large, for small $h>0$ we expect that with probability tending to 1, $u_{t+h} = u_t + hF(u_t) + o(h)$. Using Lemma~\ref{apriori} and re-using the estimate from Lemma~\ref{chern} we obtain a quantitative bound on the remainder. \begin{lemma}\label{dapriori} Let $u_t=(y_t,i_t,si_t,ii_t)$. For each $\varepsilon>0$ there are constants $C,\gamma>0$ so that for small enough $h>0$, \begin{eqnarray*} && \mathbb{P}\bigl(\bigl|u_{t+h}-u_t-hF(u_t)\bigr|\leq\varepsilon h \bigr) \geq1-Ce^{-\gamma Nh}. \end{eqnarray*} \end{lemma} \begin{pf} Let $Q_j(u)$, $j=1,\ldots,10$, denote the transition rates of the ten transitions introduced in Section~\ref{secmf}, as a function of $u$, and let $X_j(t,h)$ denote the number of type $j$ transitions occurring in the time interval $[t,t+h]$. For each $j$, $Q_j(u)=Nq_j(u) + R_j(u)$ where $q_j(u)$ is a quadratic function of $u$ and $R_j(u)$ is a remainder that satisfies $|R_j(u)|\leq M$ for some $M>0$ and all $u \in [0,1]^4$.
It is easily verified that if $u_t=u$ and $X_j(t,h)=Nq_j(u)h$ for each $j$ then $u_{t+h}=u + hF(u)$. Since each transition changes $u$ by at most $2/N$, it is therefore enough to show that there are constants $C,\gamma>0$ so that for each $j$, small enough $h>0$, and all $u$, \begin{eqnarray*} && \mathbb{P}\bigl(\bigl|X_j(t,h)-Nq_j(u)h\bigr|\leq\varepsilon Nh/20 \vert u_t=u\bigr)\geq 1-Ce^{-\gamma Nh}. \end{eqnarray*} Since the domain of $q_j(u)$ is a subset of $[0,1]^4$ and thus bounded, it follows that $q_j$ is bounded and Lipschitz continuous, that is, for some $L>0$ and all $v,u$ in the domain of $q_j$, $q_j(u)\leq L$ and $|q_j(v)-q_j(u)|\leq L|v-u|$, and in particular, $|Q_j(v)-Q_j(u)| \leq NL|v-u| + 2M$; for what follows, take $L\geq\varepsilon$. Let $A(t,h)$ be the event \begin{eqnarray*} && \Bigl\{\sup_{t\leq s \leq t+h}|u_s-u_t|\leq C_1h\Bigr\}, \end{eqnarray*} from Lemma~\ref{apriori}, then on the event $\{u_t=u\}\cap A(t,h)$, \begin{eqnarray*} \sup_{t\leq s \leq t+h}\bigl|Q_j(u_s)-Nq_j(u)\bigr| &\leq & \sup_{t\leq s \leq t+h}\bigl|Q_j(u_s)-Q_j(u)\bigr| + \bigl|Q_j(u)-Nq_j(u)\bigr|\\ & \leq & N(LC_1h + 3M/N). \end{eqnarray*} For ease of notation, let $q=q_j(u)$ and let $r=LC_1h+3M/N$, and note that $r \rightarrow0$ as $\max(h,1/N) \rightarrow0$. Then, on $\{ u_t=u\}\cap A(t,h)$, $X_j(t,h)$ is stochastically bounded above and below respectively by Poisson random variables with means $Nh(q+r)$ and $Nh(q-r)$, so from Lemma~\ref{chern} it follows that for $0<\delta\leq1/2$, \begin{eqnarray} && \mathbb{P}\bigl(\bigl\{\bigl|X_j(t,h) - Nhq\bigr| \leq Nh \bigl(q\delta+ r(1+\delta)\bigr)\bigr\} \cap\{ u_t=u\}\cap A(t,h) \bigr) \nonumber \\[-8pt] \label{derivest} \\[-8pt] \nonumber &&\qquad \geq1-2e^{-Nh(q-r)\delta^2/4}. \end{eqnarray} Recalling that $q \leq L$, let $h,\delta,1/N>0$ be chosen small enough that $L\delta+ r(1+\delta)\leq\varepsilon/20$; then $Nh(q\delta+ r(1+\delta)) \leq\varepsilon Nh/20$.
To bound the probability uniformly in $q$, we split into two cases according to whether $q\geq q\delta+ r(1+\delta)$, that is, whether $q \geq r(1+\delta)/(1-\delta)$. If $q\geq r(1+\delta)/(1-\delta)$ then, letting $\gamma_1 = r[(1+\delta)/(1-\delta)-1]\delta^2/4>0$, it follows that $Nh(q-r)\delta^2/4 \geq\gamma_1 Nh$. If $q<q\delta+r(1+\delta)$, the lower bound on $X_j(t,h) - Nhq$ is trivial, and so in that case \begin{eqnarray*} && \mathbb{P}\bigl(\bigl\{\bigl|X_j(t,h)-Nhq\bigr| \leq Nh\bigl(q\delta+ r(1+ \delta)\bigr)\bigr\}\cap\{ u_t=u\}\cap A(t,h)\bigr) \\ &&\qquad\geq 1-e^{-Nh(q+r)\delta^2/4}. \end{eqnarray*} Letting $\gamma_2 = r\delta^2/4>0$, it follows that $Nh(q+r)\delta^2/4 \geq\gamma_2 Nh$. Letting $\gamma_3$ be such that $\mathbb{P}(A(t,h))\geq1-e^{-\gamma_3Nh}$ and letting $\gamma= \min (\gamma_1,\gamma_2,\gamma_3)$ and $C=3$ completes the proof. \end{pf} Using the above estimate, we obtain finite-time control on the evolution of $u_t$, as $N$ becomes large. \begin{proposition}\label{mfest} Let $u_t=(y_t,i_t,si_t,ii_t)$. For each $\varepsilon,T>0$ there are constants $\delta,C,\gamma>0$ so that from any initial condition $u_0$ and any solution $u(t)$ to the MFE \eqref{mfeq} satisfying $|u_0-u(0)|\leq\delta$, \begin{eqnarray*} && \mathbb{P}\Bigl(\sup_{0 \leq t \leq T}\bigl|u_t-u(t)\bigr|\leq \varepsilon\Bigr) \geq 1-Ce^{-\gamma N}. \end{eqnarray*} \end{proposition} \begin{pf} The proof is analogous to the proof in numerical analysis that the Euler method is $O(h)$ accurate.
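The Euler-method analogy can be made concrete: the proof of Proposition~\ref{mfest} controls an error sequence satisfying $E_j \leq (1+hL)E_{j-1} + h(\mu + hL^2/2)$ and concludes $E_j \leq e^{LT}(E_0 + r/L)$ with $r=\mu+hL^2/2$. The sketch below, with arbitrary illustrative constants (assumptions, not values from the text), iterates the recursion at equality and confirms the closed form and the a priori bound:

```python
import math

# Illustrative constants (assumptions for this sketch).
L, T, mu_err, E0 = 2.0, 1.0, 0.05, 0.01
M = 100                      # number of time steps
h = T / M
r = mu_err + h * L**2 / 2.0  # per-step remainder
q = 1.0 + h * L

# Worst case of the recursion E_j <= q E_{j-1} + h r.
E = E0
for _ in range(M):
    E = q * E + h * r

# Closed form q^M E0 + [(q^M - 1)/(q - 1)] h r, and the a priori bound.
closed = q**M * E0 + (q**M - 1.0) / (q - 1.0) * h * r
bound = math.exp(L * T) * (E0 + r / L)

assert abs(E - closed) < 1e-9
assert E <= bound
```

The bound uses only $(1+hL)^M \leq e^{LT}$, so it is uniform in the step size $h=T/M$.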
Fix $h=T/M$ for integer $M$ and define events $A_1,\ldots,A_M$ as follows: $A_1=B_1\cap D_1$ and, for $j\geq2$, $A_j=A_{j-1} \cap B_j \cap D_j$, where \begin{eqnarray*} && B_j = \Bigl\{\sup_{h(j-1) \leq t \leq hj}|u_t - u_{hj}| \leq C_1h\Bigr\} \end{eqnarray*} is the event from Lemma~\ref{apriori} and \begin{eqnarray*} && D_j = \bigl\{\bigl|u_{hj}-u_{h(j-1)}-hF(u_{h(j-1)})\bigr| \leq\mu h\bigr\} \end{eqnarray*} is the event from Lemma~\ref{dapriori}, for $\mu>0$ to be chosen. If $\mu,h>0$ are fixed and $h$ is small enough, then there are constants $C,\gamma>0$ so that $\mathbb{P}(B_j\cap D_j)\geq1-(C/M)e^{-\gamma N}$, and since $A_M = \bigcap_{j=1}^M(B_j\cap D_j)$, $\mathbb{P}(A_M)\geq1-Ce^{-\gamma N}$. For $j=1,\ldots,M$ let \begin{eqnarray*} && E_j = \sup_{\omega\in A_j}\bigl|u_{hj}(\omega) - u(hj)\bigr|, \end{eqnarray*} where $\omega$ denotes an element of the probability space for the partner model. Letting $u'=F(u)$ denote \eqref{mfeq}, we have \begin{eqnarray*} && u(hj)-u\bigl(h(j-1)\bigr) = \int_{h(j-1)}^{hj}F \bigl(u(s)\bigr)\,ds. \end{eqnarray*} Since $F(u)$ is quadratic in $u$ and its domain is bounded, it is bounded and Lipschitz continuous, that is, for some $L>0$ and all $u,v$ in the domain, $|F(u)|\leq L$ and $|F(v)-F(u)|\leq L|v-u|$. From the first inequality, it follows that $|u(s)-u(h(j-1))|\leq L(s-h(j-1))$ for $s \geq h(j-1)$ and from this and the second inequality it follows that \begin{eqnarray*} &&\bigl|u(hj)-u\bigl(h(j-1)\bigr)-hF\bigl(u\bigl(h(j-1)\bigr)\bigr)\bigr| \\ &&\qquad = \biggl \llvert \int_{h(j-1)}^{hj}\bigl(F\bigl(u(s)\bigr)-F \bigl(u\bigl(h(j-1)\bigr)\bigr)\bigr)\,ds \biggr\rrvert \\ &&\qquad \leq \int_{h(j-1)}^{hj}\bigl|F\bigl(u(s)\bigr)-F\bigl(u \bigl(h(j-1)\bigr)\bigr)\bigr|\,ds \\ &&\qquad \leq \int_{h(j-1)}^{hj}L\bigl|u(s)-u\bigl(h(j-1)\bigr)\bigr| \,ds \\ &&\qquad \leq \int_{h(j-1)}^{hj}L^2\bigl(s-h(j-1)\bigr)\,ds = L^2\int_0^h s \,ds = L^2h^2/2.
\end{eqnarray*} Also, \begin{eqnarray*} \bigl|u_{hj}-u(hj)\bigr| &=& \bigl|u_{hj}-u_{h(j-1)}-hF(u_{h(j-1)})+u_{h(j-1)}-u \bigl(h(j-1)\bigr) \\ &&{}+ hF(u_{h(j-1)}) -hF\bigl(u\bigl(h(j-1)\bigr)\bigr) + u\bigl(h(j-1) \bigr) \\ &&{}+ hF\bigl(u\bigl(h(j-1)\bigr)\bigr)-u(hj)\bigr| \\ &\leq& \bigl|u_{hj}-u_{h(j-1)}-hF(u_{h(j-1)})\bigr| + \bigl|u_{h(j-1)}-u\bigl(h(j-1)\bigr)\bigr| \\ &&{}+ \bigl|hF(u_{h(j-1)})-hF\bigl(u\bigl(h(j-1)\bigr)\bigr)\bigr| \\ &&{}+ \bigl|u(hj)-u \bigl(h(j-1)\bigr)-hF\bigl(u\bigl(h(j-1)\bigr)\bigr)\bigr| \end{eqnarray*} so using the definition of $A_j$, letting $E_0:=|u_0-u(0)|\leq\delta$ and using once more Lipschitz continuity of $F$ it follows that for $j=1,\ldots,M$, \begin{eqnarray*} && E_j \leq\mu h + E_{j-1} + hLE_{j-1} + L^2h^2/2 = (1+hL)E_{j-1} + h\bigl(\mu + hL^2/2\bigr). \end{eqnarray*} Setting $q=(1+hL)$ and $r=\mu+hL^2/2$ and iterating the inequality $E_j\leq qE_{j-1} + hr$, we find\vspace*{1pt} $E_M \leq q^ME_0 + [(q^M-1)/(q-1)]hr \leq q^M[E_0+hr/(q-1)] = (1+hL)^M[E_0 +hr/(hL)] = (1+LT/M)^M[E_0+r/L] \leq e^{LT}[E_0+r/L]\leq e^{LT}[\delta+r/L]$ and the same inequality holds for all $E_j,j=1,\ldots,M$. Since on $A_j$, $|u_s-u_{hj}|\leq C_1h$ for $h(j-1) \leq s \leq hj$, on $A_M$ we find for $j=1,\ldots,M$ and $h(j-1)\leq s \leq hj$ that \begin{eqnarray*} \bigl|u_s-u(s)\bigr| &\leq& |u_s-u_{hj}| + \bigl|u_{hj}-u(hj)\bigr| + \bigl|u(hj)-u(s)\bigr| \\ &\leq& C_1h + E_j + Lh \leq h(C_1+L) + e^{LT}[\delta+r/L] \end{eqnarray*} and taking $h,\mu,\delta>0$ small enough, this is $\leq\varepsilon$. \end{pf} Our first application of Proposition~\ref{mfest} is to control $y_t$. \begin{lemma}\label{ydyn} For each $\varepsilon>0$, there are constants $C,T,\gamma>0$ so that from any value $y_0\in[0,1]$, \begin{eqnarray*} &&\mathbb{P}\Bigl(\sup_{T\leq t \leq e^{\gamma N}}\bigl|y_t-y^*\bigr|\leq \varepsilon\Bigr)\geq 1-Ce^{-\gamma N}. \end{eqnarray*} Moreover, if $|y_0-y^*| \leq2\varepsilon/3$ we may take $T=0$. 
\end{lemma} \begin{pf} Let $y'=f(y)$ denote the $y'$ equation in \eqref{mfeq} and let $\phi(t,y)$, $\phi:\mathbb{R}_+\times[0,1] \rightarrow[0,1]$ denote the flow for this equation, that is, the unique function satisfying $\partial_t \phi(t,y) = f(\phi(t,y))$ and $\phi(0,y)=y$ for each $(t,y)$ in its domain. Since $\phi(t,0)\leq\phi(t,y)\leq\phi(t,1)$ and $\lim_{t\rightarrow\infty}\phi(t,y)=y^*$ for each $y\in[0,1]$, for each $\varepsilon>0$ there is $T>0$ so that $|\phi(T,y)-y^*|\leq\varepsilon /3$ for all $y \in[0,1]$. Letting $y(t)=\phi(t,y_0)$ and using Proposition~\ref{mfest}, there are constants $C_1,\gamma_1>0$ depending on $\varepsilon$ but not on $y_0$ so that with probability $\geq 1-C_1e^{-\gamma_1 N}$, $|y_T-y^*| \leq|y_T-y(T)| + |y(T)-y^*| \leq \varepsilon/3+\varepsilon/3=2\varepsilon/3$. Then, for $t \geq0$ and $y \in [y^*-(2\varepsilon/3),y^*+(2\varepsilon/3)]$, \begin{eqnarray*} && y^*-(2\varepsilon/3) \leq\phi\bigl(t,y^*-(2\varepsilon/3)\bigr)\leq\phi(t,y) \leq\phi \bigl(t,y^*+(2\varepsilon/3)\bigr) \leq y^*+(2\varepsilon/3) \end{eqnarray*} and since all solutions approach $y^*$ there is $h>0$ so that $\phi (h,y^*-2\varepsilon/3)\geq y^*-\varepsilon/3$ and $\phi(h,y^*+2\varepsilon/3) \leq y^*+\varepsilon/3$. Thus, for the given value of $h$ and any solution $y(t)$ of $y'=f(y)$, if $|y(T)-y^*|\leq2\varepsilon/3$ then $|y(t)-y^*|\leq2\varepsilon/3$ for $t\geq T$ and $|y(T+h)-y^*|\leq \varepsilon/3$.
Given $y_T$ such that $|y_T-y^*|\leq2\varepsilon/3$ and setting $y(T)=y_T$, by Proposition~\ref{mfest} there are constants $C_2,\gamma_2>0$ so that $\sup_{T\leq t \leq T+h}|y_t-y(t)| \leq \varepsilon/3$ with probability $\geq1-C_2e^{-2\gamma_2 N}$, in which case \begin{eqnarray*} \sup_{T \leq t \leq T+h}\bigl|y_t-y^*\bigr| &\leq & \sup _{T \leq t \leq T+h}\bigl|y_t-y(t)\bigr|+\sup_{T \leq t \leq T+h}\bigl|y(t)-y^*\bigr|\\ &\leq & \varepsilon/3 + 2\varepsilon/3 = \varepsilon \end{eqnarray*} and $|y_{T+h} - y^*| \leq|y_{T+h} - y(T+h)| + |y(T+h)-y^*| \leq \varepsilon/3 + \varepsilon/3 = 2\varepsilon/3$ with the same probability. Iterating this for $e^{\gamma_2 N}/h$ time steps, we find that \begin{eqnarray*} &&\sup_{T \leq t \leq e^{\gamma_2 N}}\bigl|y_t - y^*\bigr| \leq\max _{i \in\{ 1,\ldots,e^{\gamma_2 N}/h\}}\sup_{T+(i-1)h \leq t \leq T+ih}\bigl|y_t - y^*\bigr| \leq \varepsilon \end{eqnarray*} with probability $\geq1- (C_2/h)e^{\gamma_2 N}e^{-2\gamma_2 N} = 1 - (C_2/h)e^{-\gamma_2 N}$; then choose $C=C_1+C_2/h$ and $\gamma= \min (\gamma_1,\gamma_2)$. Note that if $|y_0-y^*|\leq2\varepsilon/3$, the iteration step is immediately applicable, in which case we may take $T=0$. \end{pf} \section{Macroscopic behaviour}\label{secmacro} In this section, we prove the macroscopic side of Theorem~\ref{thm1}, that is, the case $|V_0|\geq\varepsilon N$. We begin with the analogue of Lemma~\ref{mfmt} for the partner model, which we refer to later on as monotonicity. As for the MFE, define $ip_t:=si_t+ii_t$. \begin{lemma}\label{pmmt} Let $\leq$ denote the partial order on $\mathbb{R}^3$ given by $u\leq v\Leftrightarrow u_j\leq v_j, \forall j \in\{1,2,3\}$, and let $u_t=(i_t,ip_t,ii_t)$.
If\vspace*{1pt} $(V_t^{(1)},E_t^{(1)})$ and $(V_t^{(2)},E_t^{(2)})$ are two copies of the partner model with $E_0^{(1)}=E_0^{(2)}$ and $V_0^{(1)}\subseteq V_0^{(2)}$, then with respect to the coupling given by the graphical construction, $E_t^{(1)}=E_t^{(2)}$ and $V_t^{(1)}\subseteq V_t^{(2)}$ for $t>0$ and correspondingly $y_t^{(1)}=y_t^{(2)}$ and $u_t^{(1)}\leq u_t^{(2)}$. \end{lemma} \begin{pf} If $E_0^{(1)}=E_0^{(2)}$ then $E_t^{(1)}=E_t^{(2)}=:E_t$ for $t>0$. Given $\{E_t:t\geq0\}$, the only transitions affecting $V_t^{(1)}$ and $V_t^{(2)}$ are recovery of infectious sites and transmission from infectious to healthy sites along open edges, both of which preserve the order $V_t^{(1)}\subseteq V_t^{(2)}$. The equality\vspace*{1pt} $y_t^{(1)}=y_t^{(2)}$ follows directly from $|E_t^{(1)}|=|E_t^{(2)}|$ and the inequality $u_t^{(1)}\leq u_t^{(2)}$ follows directly from $V_t^{(1)}\subseteq V_t^{(2)}$. \end{pf} Using Proposition~\ref{mfest} and Lemma~\ref{pmmt}, we can prove the macroscopic part of Theorem~\ref{thm1} when $R_0\leq1$. In this section, $u_t$ will generally refer to $(i_t,si_t,ii_t)$ or $(i_t,ip_t,ii_t)$, with $y_t$ written separately. \begin{proposition}\label{thm1.1} If $R_0\leq1$, for each $\varepsilon>0$ there are constants $C,T,\gamma >0$ so that, from any initial configuration, with probability $\geq 1-Ce^{-\gamma N}$, $|V_T|\leq\varepsilon N$. \end{proposition} \begin{pf} By Lemma~\ref{pmmt}, it is enough to show the result holds when $V_0=V$, that is, when everyone is initially infectious; in this case $y_0=1-2E_0/N$, $i_0=y_0$ and $ip_0=ii_0=(1-y_0)/2$. Let $u_t=(i_t,ip_t,ii_t)$ and let $(y(t),u(t))$ be the solution to the MFE with $y(0)=y_0$ and $u(0)=u_0$. By Lemma~\ref{ydyn} and Proposition~\ref{mfest}, for each $\delta>0$ there are constants $C_1,T_1,\gamma_1>0$ so that with probability $\geq1-C_1e^{-\gamma_1 N}$, $|y_{T_1}-y^*| \leq\delta$ and $|u_{T_1}-u(T_1)|\leq\delta$, so with the same probability $|(y_{T_1},u_{T_1})-(y^*,u(T_1))| \leq\delta$.
Recall the set $\Lambda^*$ and let $(y^*,\overline{u}(t))$ be the solution to the MFE with $\overline{u}(0) = (y^*,(1-y^*)/2,(1-y^*)/2)$. As shown in the proof of Lemma~\ref{mfthmsuff}, $\overline{u}(t)$ decreases to an equilibrium. Since $R_0\leq1$, $(0,0,0)$ is the only equilibrium, so $\overline{u}(t)\rightarrow(0,0,0)$ as $t\rightarrow \infty$. Moreover, $\overline{u}(0)\geq v$ for each $v \in\Lambda^*$ so for any solution $(y^*,u(t))$, $\overline{u}(0)\geq u(0)$. By Lemma~\ref{mfmt}, $\overline{u}(t)\geq u(t)$ for $t\geq0$, so there is $T_2$ not depending on $u(0)$ so that $|u(T_2)| \leq\varepsilon/2$. Using Proposition~\ref{mfest}, there are constants $C_2,\gamma_2,\delta>0$ not depending on $u(0)$ so that with probability $\geq1-C_2e^{-\gamma _2 N}$, if $|(y_0,u_0)-(y^*,u(0))|\leq\delta$ then $|u_{T_2}| \leq |u(T_2)|+|u_{T_2}-u(T_2)| \leq\varepsilon/2+\varepsilon/2=\varepsilon$. Letting $T=T_1+T_2$, $C=C_1+C_2$ and $\gamma=\min(\gamma_1,\gamma_2)$ and combining the two steps completes the proof. \end{pf} Using similar ideas, we can prove the macroscopic part of Theorem~\ref {thm1} when $R_0>1$. Before showing the approach to equilibrium, we first have to show long time survival of the infection, and to do that we need the following result concerning the MFEs. \begin{lemma}\label{liftup} Suppose $R_0>1$ and let $v \in\mathbb{R}^3$ with $|v|=1$ be an unstable eigenvector of the MFEs on $\Lambda^*$ as given in the proof of Theorem~\ref{mfthm}, written in $(i,ip,ii)$ coordinates. For $0<\delta' \leq \delta$ let $(y(t),u(t))$ be a solution to the MFE with $|y(0)-y^*|\leq \delta$ and $u(0):=(i(0),ip(0),ii(0)) = \delta' v$. If $\delta>0$ is small enough, then there is $T>0$ so that $\min_j u_j(T)\geq2\delta'$ for all $0< \delta' \leq\delta$. 
\end{lemma} \begin{pf} First, write the MFE \eqref{mfeq2}, without the $y$ equation, in matrix form as follows: \begin{equation} \label{mfmtx} \pmatrix{i' \cr ip' \cr ii'} = \pmatrix{ -(1+r_+y) & r_- & r_- \cr r_+ \bigl(y^*-i/2\bigr) & -(1+r_-) & 1 \cr r_+i/2 & \lambda& -(2+r_-+\lambda)} \pmatrix{ i \cr ip \cr ii}. \end{equation} The $y$ dynamics\vspace*{1pt} proceeds as in \eqref{mfeq}, and note $|y(t)-y^*|\leq |y(0)-y^*|$ for $t>0$. Write \eqref{mfmtx} as $u' = A(i,y)u$ with $u=(i,si,ii)^{\top}$ to emphasize the dependence on $i,y$. As noted in the proof of Theorem~\ref{mfthm}, if $R_0>1$ then $A:= A(0,y^*)$ has a positive eigenvalue $\mu>0$ with positive eigenvector $v$ such that $|v|=1$, so the system $v' = Av$ has solutions $v(t)=cve^{\mu t}$ for any $c>0$. Let $|\cdot|$ denote the operator norm and let \begin{eqnarray*} && L=\sup_{(i,y)\in[0,1]^2}\bigl|A(i,y)\bigr| \end{eqnarray*} then any solution $u(t)$ to \eqref{mfmtx} has $|u(t)|\leq|u(0)|e^{Lt}$ for $t>0$. Fix $T>0$, then for each $\varepsilon>0$, by continuity there is $\delta>0$ so that if $\max( y-y^*,i)\leq e^{LT}\delta$ then $|A(i,y)-A|\leq\varepsilon$. Let $|y(0)-y^*|\leq\delta$ and for $0<\delta '\leq\delta$ let $u(t)$ be the solution to~\eqref{mfmtx} with $u(0)=\delta' v$, then for $0\leq t \leq T$, \begin{eqnarray*} \bigl|(u-v)'\bigr| &=& \bigl|A(i,y)u - Av\bigr| \leq\bigl|\bigl(A(i,y)-A\bigr)u\bigr| + \bigl|A(u-v)\bigr| \\ &\leq& \bigl|A(i,y)-A\bigr||u| + |A||u-v| \\ &\leq& \varepsilon|u|+ L|u-v| \\ &\leq& \varepsilon e^{Lt}\delta' + L|u-v|. \end{eqnarray*} Letting $v(0)=u(0)$, defining $E(t) := |u(t)-v(t)|$, noting that $E(0)=0$ and integrating, \begin{eqnarray*} && E(T) \leq e^{LT}\varepsilon\delta' T. \end{eqnarray*} Since $v(T) = \delta v e^{\mu T}$, \begin{eqnarray*} \min_j u_j(T) &\geq& \min _j v_j(T) - \max_j \bigl|v_j(T)-u_j(T)\bigr| \\ &\geq& \delta' e^{\mu T}\min_j v_j - \varepsilon e^{LT}\delta'T \\ &=& \delta' e^{\mu T}\Bigl(\min_j v_j -\varepsilon e^{(L-\mu)T}T\Bigr). 
\end{eqnarray*} Choose $T>0$ so that $e^{\mu T}\min_j v_j/2 \geq2$ and then $\varepsilon>0$ so that $\varepsilon e^{(L-\mu)T}T \leq\min_j v_j/2$; it then follows that $\min_j u_j(T) \geq2\delta'$. \end{pf} Now we can show long-time survival of the infection when $R_0>1$ and $|V_0|\geq\varepsilon N$. \begin{lemma}\label{stayup} Suppose $R_0>1$. For each $\varepsilon>0$, there are constants $\delta,C,\gamma>0$ so that if $|V_0|\geq\varepsilon N$ then \begin{eqnarray*} && \mathbb{P}\Bigl(\inf_{0\leq t \leq e^{\gamma N}}|V_t|\geq\delta N \Bigr)\geq 1-Ce^{-\gamma N}. \end{eqnarray*} \end{lemma} \begin{pf} Recall that an event holds with high probability or w.h.p. in $N$ if for $N$ large enough it occurs with probability $\geq1-Ce^{-\gamma N}$ for some $C,\gamma>0$. If $|V_0|\geq\varepsilon N$ then $\max(i_0,ip_0,ii_0)\geq\varepsilon/3$, so in view of Lemma~\ref{pmmt} it is enough to prove the result starting from $u_0:=(i_0,ip_0,ii_0) \in \mathcal{E} := \{(\varepsilon/3,0,0), (0,\varepsilon/3,0),(0,0,\varepsilon/3)\}$. For $\delta_1>0$, by Lemma~\ref{ydyn} there are $T,\gamma_1>0$ so that w.h.p. $|y_t-y^*|\leq\delta_1$ for $T\leq t \leq e^{\gamma_1 N}$. If $u(0)\neq(0,0,0)$ then for $t>0$, $\min_j u_j(t)>0$; this is shown for $u(0)\in\Lambda^*$ in the proof of Lemma~\ref{mfthmsuff}, but the same proof applies if $y\neq y^*$. Also, since $(0,0,0)$ is an equilibrium solution, by uniqueness of solutions $u(t)\neq(0,0,0)$ for $0\leq t \leq T$, so by continuity of solutions $\inf\{|u(t)|:0\leq t \leq T\} >0$. Therefore, there exists $0<\delta_2\leq\delta_1$ so that $\min_j u_j(T) \geq\delta_2$ and $\inf\{\max_j u_j(t):0 \leq t \leq T\} \geq \delta_2$ for all $u(0)\in\mathcal{E}$. For $u_0=u(0)\in\mathcal{E}$ with $y_0=y(0) \in[0,1]$, by Proposition~\ref{mfest}, w.h.p.
$|u_t-u(t)| \leq\delta_2/2$ for $0\leq t \leq T$ in which case $\min(i_T,ip_T,ii_T)\geq\delta_2/2$ and $\inf\{\max(i_t,ip_t,ii_t):0\leq t \leq T\}\geq\delta_2/2$, which means that for the eigenvector $v$ with $|v|=1$ mentioned in the proof of Lemma~\ref{liftup}, $(i_T,ip_T,ii_T) \geq(\delta_2/2)v$, and also $|V_t|\geq(\delta_2/2)N$ for $0\leq t \leq T$. Taking $y(t)=y_t$ and $u(T) = (\delta_2/2)v$, if $|y_t-y^*|\leq\delta_1$ then by Lemma~\ref{liftup} there is $h>0$ so that $\min_j u_j(T+h) \geq \delta_2$, and as before there is $\delta_3>0$ so that $\inf\{\max_j u_j(t):T \leq t \leq T+h\}\geq\delta_3$. By Lemma~\ref{pmmt} and the last paragraph, it is enough to consider the case $u_T=u(T)=(\delta_2/2)v$. Letting $\delta=\min(\delta_2/2,\delta_3/2)$ and using Proposition~\ref{mfest}, with probability $\geq1-Ce^{-\gamma_2 N}$, $|u_{t}-u(t)|\leq\delta$ for $T\leq t \leq T+h$, in which case $u_{T+h}\geq(\delta_2/2)v$ and $|V_t|\geq N\min(i_t,ip_t,ii_t)\geq (\delta_3/2)N$ for $T\leq t \leq T+h$. Letting $\gamma=\min(\gamma_1/2,\gamma_2/2)$ and iterating for $e^{\gamma N}/h$ time steps as in the proof of Lemma~\ref{ydyn}, w.h.p. $|V_t| \geq N \min(i_t,ip_t,ii_t)\geq(\delta_3/2)N$ for $T \leq t \leq e^{\gamma N}$. Combining with the previous estimate, w.h.p. $|V_t|\geq\delta N$ for $0\leq t \leq e^{\gamma N}$, as we wanted to show. \end{pf} We now wrap up the macroscopic side of Theorem~\ref{thm1}. \begin{proposition}\label{thm1.2} Suppose $R_0>1$ and let $(y^*,i^*,ip^*,ii^*)$ with $i^*>0$ be the nontrivial equilibrium solution to the MFE \eqref{mfeq2}. Let $u_t=(i_t,ip_t,ii_t)$ and let $u^*=(i^*,ip^*,ii^*)$. For each $\varepsilon >0$, there are constants $C,T,\gamma>0$ so that if $|V_0|\geq\varepsilon N$ then \begin{eqnarray*} && \mathbb{P}\Bigl(\sup_{T \leq t \leq e^{\gamma N}}\bigl|(y_t,u_t)- \bigl(y^*,u^*\bigr)\bigr|\leq \varepsilon\Bigr)\geq1-Ce^{-\gamma N}. \end{eqnarray*} \end{proposition} \begin{pf} We begin with the lower bound.
As shown in the proof of Lemma~\ref{stayup}, there are $T_1,h_1,\delta_1,\gamma_1>0$ so that w.h.p. $\min(i_t,ip_t,ii_t)\geq\delta_1$, and thus $u_t\geq\delta_1 v$, for $t=T_1+kh_1$, $k=1,\ldots,(e^{\gamma_1 N}-T_1)/h_1$, where $v$ with $|v|=1$ is the eigenvector from Lemma~\ref{liftup}. Let $y(0)=y^*$ and $u(0):=(i(0),ip(0),ii(0))=\delta_1 v$. If $\delta_1>0$ is small enough, then $u_j'(0)>0$ in each coordinate, and since $u^*\neq(0,0,0)$ is unique, as shown in the proof of Lemma~\ref{mfthmsuff}, $u(t)$ is increasing with respect to $(i,ip,ii)$ coordinates and $\lim_{t\rightarrow\infty}u(t)=u^*$, and in particular $u(t)\leq u^*$ for $t\geq0$. We will need the stronger fact $u_j(t)<u_j^*$ for $j=1,2,3$. Looking at the equations for $i',ip',ii'$ in \eqref{mfeq2}, the derivative of each variable increases with the other two variables, and of course is equal to $0$ at $u^*$. If we had $i(t)=i^*$, then since $ip(t)\leq ip^*$ and $ii(t)\leq ii^*$ we would have $i'<0$, which contradicts the fact that $u(t)$ is increasing, and the same applies to $ip(t)$ and $ii(t)$. Using the above facts, there is $T_2$ so that $u(T_2)\geq u^*-\varepsilon/2$, and since $0<\min_j (u_j^*-u_j(T_2))=:\varepsilon'\leq\varepsilon$, there is $h_2$ so that $u(T_2+h_2)\geq u^*-\varepsilon'/2$. By Proposition~\ref{mfest}, there is $\delta_2>0$ so that if $u_0=u(0)$ and $|y_0-y^*|\leq\delta_2$ then w.h.p. $|u_t-u(t)|\leq\varepsilon'/2$ for $T_2\leq t \leq T_2+h_2$ in which case $u_t\geq u^*-\varepsilon$ for $T_2\leq t \leq T_2+h_2$ and $u_{T_2+h_2}\geq u^*-\varepsilon'$, which means\vspace*{1pt} that $u_{T_2+h_2}\geq u(T_2)$. By Lemma~\ref{ydyn}, there are $T_3,\gamma_2$ so that w.h.p. $|y_t-y^*|\leq\min(\delta_2,\varepsilon)$ for $T_3\leq t \leq e^{\gamma_2 N}$. Let $k$ be such that $T_1+kh_1 \geq T_3$ and let $T_4=T_1+kh_1$; then, setting $u(T_4)=\delta_1 v$, w.h.p. $u_{T_4} \geq u(T_4)$, so it is enough to consider the case where $u_{T_4}=u(T_4)$.
Letting $T=T_4+T_2$, then for some $\gamma_3>0$, with probability $\geq1-Ce^{-\gamma_3 N}$, $u_t\geq u^*-\varepsilon$ for $T\leq t \leq T+h_2$ and $u_{T+h_2}\geq u(T)$. Letting $\gamma=\min(\gamma_2/2,\gamma_3/2)$ and iterating for $(e^{\gamma N}-T)/h_2$ time steps (subtracting $T$ to make sure $y_t$ stays in bounds) as in the proof of Lemma~\ref{ydyn} it follows that $u_t\geq u^*-\varepsilon$ for $T\leq t \leq e^{\gamma N}$. To prove the upper bound, it is enough to consider any value of $y_0$ and let $u_0 = (y_0,(1/2)(1-y_0),(1/2)(1-y_0))$. Setting $y(0)=y^*$ and $u(0)=(y^*,(1/2)(1-y^*),(1/2)(1-y^*))$, then as shown in the proof of Lemma~\ref{mfthmsuff}, $u(t)$ decreases to~$u^*$. Moreover, $u_j(t)-u_j^*>0$ for the same reason as above, so there is $T_1$ so that $u(T_1)\leq u^*+\varepsilon/2$, and since $0<\min_j(u_j(T_1)-u_j^*)=:\varepsilon'\leq\varepsilon$, there is $h$ so that $u(T_1+h)\leq u^*+\varepsilon'/2$. By Proposition~\ref{mfest}, there is $\delta>0$ so that if $\max(|u_0-u(0)|,|y_0-y(0)|)\leq\delta$ then w.h.p. $|u_t-u(t)|\leq\varepsilon'/2$ for $T_1\leq t \leq T_1+h$ in which case $u_t \leq u^*+\varepsilon$ for $T_1\leq t \leq T_1+h$ and $u_{T_1+h}\leq u^*+\varepsilon'$, which means that $u_{T_1+h}\leq u(T_1)$. By Lemma~\ref{ydyn}, there are $T_2,\gamma_1$ so that w.h.p. $|y_t-y^*|\leq\delta$ for $T_2\leq t \leq e^{\gamma_1 N}$. Letting $T=T_1+T_2$ and setting $u(T_2)=(y^*,(1/2)(1-y^*),(1/2)(1-y^*))$ and $u_{T_2}=(y_{T_2},(1/2)(1-y_{T_2}),(1/2)(1-y_{T_2}))$, then for some $\gamma_2>0$, with probability $\geq1-Ce^{-\gamma_2 N}$, $u_t \leq u^*+\varepsilon$ for $T\leq t \leq T+h$ and $u_{T+h} \leq u(T)$. Letting $\gamma= \min(\gamma_1/2,\gamma_2/2)$ and iterating\vspace*{1pt} for $(e^{\gamma N}-T)/h$ time steps it follows as before that $u_t\leq u^*+\varepsilon$ for $T \leq t \leq e^{\gamma N}$. \end{pf} In the next section, we use a comparison to prove that if $R_0<1$ the infection disappears quickly from the population.
To make this work, we will need a complementary result to Lemma~\ref{stayup}. \begin{lemma}\label{staydown} If $R_0\leq1$, then for each $\varepsilon>0$ there are $C,T,\gamma>0$ so that \begin{eqnarray*} && \mathbb{P}\Bigl(\sup_{T\leq t \leq e^{\gamma N}}|V_t|\leq\varepsilon N\Bigr) \geq 1-Ce^{-\gamma N}. \end{eqnarray*} \end{lemma} \begin{pf} The proof is similar to that of Lemma~\ref{stayup}. Letting $\overline{u}=(y^*,(1-y^*)/2,(1-y^*)/2)$ as in Lemma~\ref{mfthmsuff} and letting $(y^*,\overline{u}(t))$ be the solution to the MFE with $\overline{u}(0)=\overline{u}$, since $\overline{u}(t)$ decreases to $(0,0,0)$ and $\overline{u}\geq v$ for all $v \in\Lambda^*$, there is $T_1$ so that for any solution $(y^*,u(t))$, $|u(T_1)|\leq\varepsilon/6$, and since $\varepsilon':=\min_ju_j(T_1)>0$, there is $h$ so that $|u(T_1+h)|\leq\varepsilon'/2$. There is $\delta>0$ so that if $\max(|u_0-u(0)|,|y_0-y^*|)\leq\delta$ then w.h.p. $|u_t-u(t)|\leq\min(\varepsilon'/2,\varepsilon/6)$ for $T_1\leq t \leq T_1+h$ in which case $|u_t|\leq \varepsilon/3$ for $T_1\leq t \leq T_1+h$ and $|u_{T_1+h}|\leq\varepsilon'$, which means $u_{T_1+h}\leq u(T_1)$. There are $\gamma_1,T_2>0$ so that w.h.p. $|y_t-y^*|\leq\delta$ for $T_2\leq t \leq e^{\gamma_1 N}$. By monotonicity, it is enough to consider $u_{T_2}=\overline{u}$. Letting $u(T_2)=u_{T_2}$ and $T=T_1+T_2$, there are $C_1,\gamma_2$ so that with probability $\geq1- C_1e^{-\gamma_2 N}$, $|u_t|\leq\varepsilon/3$ for $T\leq t \leq T+h$ and $u_{T+h}\leq u(T)$. Letting $\gamma=\min(\gamma_1,\gamma_2)$ and iterating for $(e^{\gamma N}-T)/h$ time steps, w.h.p. $|u_t|\leq\varepsilon/3$ and thus $|V_t|\leq\varepsilon N$ for $T \leq t \leq e^{\gamma N}$. \end{pf} \section{Microscopic behaviour}\label{secmicro} In this section, we compare the partner model in the regime $|V| \leq \varepsilon N$ for small $\varepsilon>0$ to a branching process to get decisive information when $R_0\neq1$.
\subsection{Subcritical case: $R_0<1$} First, we introduce the comparison process to use when $R_0<1$. \begin{definition}\label{ubpdef} Define the \emph{upperbound process} (UBP) $B_t = (\mathcal {I}_t,\mathcal{SI}_t,\mathcal{II}_t)$ on state space $\{0,1,2,\ldots\}^3$ with parameter $0 \leq\delta\leq y^*$ by the following transitions: \begin{itemize} \item$\mathcal{I}\rightarrow\mathcal{I}-1$ at rate $\mathcal{I}$, \item$\mathcal{I}\rightarrow\mathcal{I}-1$ and $\mathcal {SI}\rightarrow\mathcal{SI}+1$ at rate $r_+(y^*-\delta)\mathcal{I}$, \item$\mathcal{SI}\rightarrow\mathcal{SI}+1$ at rate $2r_+\delta \mathcal{I}$, \item$\mathcal{II}\rightarrow\mathcal{II}+1$ at rate $r_+\delta \mathcal{I}$, \item$\mathcal{SI}\rightarrow\mathcal{SI}-1$ at rate $\mathcal{SI}$, \item$\mathcal{SI}\rightarrow\mathcal{SI}-1$ and $\mathcal {I}\rightarrow\mathcal{I}+1$ at rate $r_- \mathcal{SI}$, \item$\mathcal{SI}\rightarrow\mathcal{SI}-1$ and $\mathcal {II}\rightarrow\mathcal{II}+1$ at rate $\lambda\mathcal{SI}$, \item$\mathcal{II}\rightarrow\mathcal{II}-1$ and $\mathcal {SI}\rightarrow\mathcal{SI}+1$ at rate $2 \mathcal{II}$, \item$\mathcal{II}\rightarrow\mathcal{II}-1$ and $\mathcal {I}\rightarrow\mathcal{I}+2$ at rate $r_- \mathcal{II}$. \end{itemize} \end{definition} Note the UBP describes the evolution of the total number of particles of each of the three types $\mathcal{I},\mathcal{SI},\mathcal{II}$ in a multi-type continuous-time branching process; for an introduction to branching processes, see \cite{bp}. We now show that for fixed $R_0<1$, if $\delta>0$ is small enough the UBP quickly dies out. \begin{lemma}\label{ubpdown} For fixed $\lambda,r_+,r_-$, let $B_t$ denote the UBP with parameter $\delta'$ and let $R_0$ be as defined in \eqref{r0eq0}. If $R_0<1$, there are $C,\delta>0$ so that if $|B_0|\leq N$ and $\delta'\leq\delta $ then \begin{eqnarray*} && \mathbb{P}\bigl(|B_{C\log N}|=0\bigr) \rightarrow1 \qquad\mbox{as }N\rightarrow \infty. 
\end{eqnarray*} \end{lemma} \begin{pf} For a multi-type continuous time branching process $B_t = (b_1(t),\break \ldots, b_n(t))$, with $b_j(t)$ denoting the number of type $j$ particles alive at time $t$, we can extract some useful information from the \emph{mean matrix} $M_t$ defined by $m_{ij}(t) = \mathbb {E}(b_j(t) \vert b_k(0)=\delta_{ik})$. Since particles evolve independently, $\mathbb{E}(B_t) = B_0M_t$ and it is not hard to show that $M_t$ satisfies the equation \begin{eqnarray*} &&\frac{d}{dt}M_t = AM_t \end{eqnarray*} and, therefore, $M_t = \exp(At)$, where $A = (r_{ij})$ is the matrix whose entries $r_{ij}$ give the rate at which a particle of type $i$ produces particles of type $j$. If $\real(\lambda)<0$ for each eigenvalue $\lambda$ of $A$, then letting $\gamma_0 = \min\{|\real (\lambda)|:\lambda\in\sigma(A)\}$ where $\sigma(\cdot)$ denotes the spectrum, from standard matrix theory it follows that for any $\gamma _1<\gamma_0$, there is $C_1>0$ so that $m_{ij}\leq C_1e^{-\gamma_1t}$ for each pair $ij$. Since each $b_i(t)$ is valued on nonnegative integers, \begin{eqnarray*} \mathbb{P}\bigl(B_t\neq(0,\ldots,0)\bigr)&\leq&\sum _i \mathbb{P}\bigl(b_i(t) \neq0\bigr) \leq \sum_i\mathbb{E}b_i(t) \\ &= &\sum_{ij}b_i(0)m_{ij}(t) \leq\bigl|B(0)\bigr|n^2C_1e^{-\gamma_1 t}. \end{eqnarray*} If $|B(0)|\leq N$, then\vspace*{1pt} letting $t=C\log N$ for $C>1/\gamma_1$ and setting $\gamma= C\gamma_1-1$ and $C_2=n^2C_1$ we find \begin{eqnarray*} \mathbb{P}\bigl(B_{C\log N}\neq(0,\ldots,0)\bigr)&\leq & NC_2e^{-\gamma_1 C\log N} = NC_2N^{-\gamma_1 C}\\ & =& C_2N^{1-\gamma_1 C} = C_2N^{-\gamma} \end{eqnarray*} which tends to $0$ as $N\rightarrow\infty$. In our case, \begin{eqnarray*} && A = A(\delta) = \pmatrix{-\bigl(1+r_+\bigl(y^*-\delta\bigr)\bigr) & r_+\bigl(y^*+\delta\bigr) & r_+\delta \cr r_- & -(1+r_-+\lambda) & \lambda \cr 2r_- & 2 & -(2+r_-) }. 
\end{eqnarray*} Letting $\sigma(A)$ denote the spectrum and defining the \emph{spectral abscissa} $\mu(A):=\max\{\real(\lambda):\lambda\in\sigma(A)\}$, if $\mu (A(\delta))<0$, then the real part of each eigenvalue of $A$ is negative, and the above argument applies. By continuity of eigenvalues in the entries of a matrix, it is enough to show $\mu(A(0))<0$, since then there is $\delta>0$ so that if $\delta'\leq\delta$ then $\mu (A(\delta')) \leq\mu(A(0))/2 < 0$. Setting $\delta=0$, \begin{eqnarray*} && A(0)= \pmatrix{-\bigl(1+r_+y^*\bigr) & r_+y^* & 0 \cr r_- & -(1+r_-+\lambda) & \lambda \cr 2r_- & 2 & -(2+r_-)} \end{eqnarray*} and looking back at Section~\ref{secmf} we see that $A(0)$ is the (transpose of the) linearized matrix at $(0,0,0)$ for the MFE on $\Lambda^*$, which we denote $A$. As noted in Remark~\ref{r0remark}, $(0,0,0)$ is locally asymptotically stable when $R_0<1$, and in the proof of Theorem~2 in \cite{watm} this is done by showing that $\mu(A)<0$. \end{pf} We now complete the proof of the case $R_0<1$ in Theorem~\ref{thm1}. \begin{proposition}\label{microdown} If $R_0<1$ there are constants $C,T,\gamma>0$ so that, from any initial configuration, \begin{eqnarray*} && \mathbb{P}\bigl(|V_{T+C\log N}| =0\bigr)\rightarrow1 \qquad\mbox{as } N\rightarrow \infty. \end{eqnarray*} \end{proposition} \begin{pf} Let $U_t := (I_t,\mathit{SI}_t,\mathit{II}_t)$ denote variables in the partner model and for $\delta>0$ such that $y^*-\delta\geq0$ and $y^*+\delta\leq1$, let $B_t$ denote the UBP with parameter~$\delta$. We first describe a coupling with the property that $U_0\leq B_0\Rightarrow U_t\leq B_t$ for $t>0$, with respect to the usual partial order $U \leq V \Leftrightarrow U_j\leq V_j,j=1,2,3$. 
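As an aside, the eigenvalue step in the proof of Lemma~\ref{ubpdown} -- that $\mu(A(0))<0$ and, by continuity, $\mu(A(\delta))<0$ for small $\delta$ -- is easy to check numerically for concrete parameters. A minimal sketch; the values of $\lambda$, $r_+$, $r_-$, $y^*$ below are illustrative assumptions chosen to make the matrix stable, not values computed from $R_0$:

```python
import numpy as np

def A(lam, rp, rm, ystar, delta):
    # The matrix A(delta) from the proof of Lemma ubpdown: entry (i, j)
    # is the rate at which a type-i particle produces type-j particles,
    # with deaths absorbed into the diagonal.
    return np.array([
        [-(1.0 + rp * (ystar - delta)), rp * (ystar + delta), rp * delta],
        [rm,                            -(1.0 + rm + lam),    lam],
        [2.0 * rm,                      2.0,                  -(2.0 + rm)],
    ])

def spectral_abscissa(M):
    # max real part of the eigenvalues; exp(At) decays iff this is < 0
    return max(ev.real for ev in np.linalg.eigvals(M))

mu0 = spectral_abscissa(A(lam=0.5, rp=1.0, rm=1.0, ystar=0.5, delta=0.0))
mu_perturbed = spectral_abscissa(A(lam=0.5, rp=1.0, rm=1.0, ystar=0.5, delta=0.05))
```

With these values the abscissa is negative and stays negative under the small perturbation of $\delta$, matching the continuity-of-eigenvalues argument.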
For $j=1,\ldots,10$, define a countable number of independent Poisson point processes (p.p.p.'s) $\{ e_j(n):n=1,2,\ldots\}$ with respective rates $1,r_+,r_+,1,r_-,\lambda ,2,r_-,r_+,r_-$, together with independent uniform $[0,1]$ random variables attached to each event in $e_2(n),e_3(n),e_9(n), n=1,2,\ldots.$ These correspond to the nine transitions listed in the definition of the UBP, except that the second and third transition in the UBP are lumped into $e_2$, plus an additional transition for $S+S\rightarrow \mathit{SS}$ and one for $\mathit{SS}\rightarrow S+S$. Note that the rates of $e_2,e_3,e_9$ appear too large at the moment and are corrected in the next paragraph. Construct the UBP one transition at a time as follows, letting $(\mathcal{I},\mathcal{SI},\mathcal{II})$ denote the present state. Each event in $e_1(1),\ldots,e_1(\mathcal{I})$ reduces $\mathcal{I}$ by $1$. For an event in $e_2,e_3$ let $p$ denote the corresponding uniform $[0,1]$ random variable. If an event in $e_2(1),\ldots,e_2(\mathcal{I})$ occurs and $p \leq(y^*-\delta)$, reduce $\mathcal{I}$ by 1 and increase $\mathcal{SI}$ by 1, while if $y^*-\delta< p \leq y^*+\delta$ simply increase $\mathcal{SI}$ by~1. If an event in $e_3(1),\ldots,e_3(\mathcal{I})$ occurs and $p\leq\delta$, increase $\mathcal{II}$ by 1. Each event in $e_4(1),\ldots,e_4(\mathcal{SI})$ reduces $\mathcal{SI}$ by 1, each event in $e_5(1),\ldots,e_5(\mathcal {SI})$ reduces $\mathcal{SI}$ by 1 and increases $\mathcal{I}$ by 1, each event in $e_6(1),\ldots,e_6(\mathcal{SI})$ reduces $\mathcal{SI}$ by 1 and increases $\mathcal{II}$ by 1, each event in $e_7(1),\ldots,e_7(\mathcal{II})$ reduces $\mathcal{II}$ by 1 and increases $\mathcal{SI}$ by 1, and each event in $e_8(1),\ldots,e_8(\mathcal {II})$ reduces $\mathcal{II}$ by 1 and increases $\mathcal{I}$ by~2. It can be checked that the transition rates are correct. 
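The rate ``correction'' by uniform marks above is the standard thinning property: a Poisson stream of rate $r$ whose events are accepted independently with probability $q$ is a Poisson stream of rate $rq$. A minimal arithmetic check that the thinned rates of $e_2,e_3$, per $\mathcal{I}$-particle, reproduce the UBP transition rates ($r_+$, $y^*$, $\delta$ below are illustrative assumed values):

```python
rp, ystar, delta = 1.0, 0.5, 0.05  # illustrative values with 0 <= delta <= y*

# e_2 runs at rate rp per I-particle; the uniform mark p splits it:
#   p <= y* - delta              -> (I - 1, SI + 1), accepted w.p. y* - delta
#   y* - delta < p <= y* + delta -> (SI + 1),        accepted w.p. 2 * delta
rate_exchange = rp * (ystar - delta)                      # target: r_+ (y* - delta)
rate_si_birth = rp * ((ystar + delta) - (ystar - delta))  # target: 2 r_+ delta

# e_3 runs at rate rp per I-particle; p <= delta -> (II + 1)
rate_ii_birth = rp * delta                                # target: r_+ delta
```

The middle line is the point of the lumping: the acceptance window of width $2\delta$ turns the rate-$r_+$ stream $e_2$ into the rate-$2r_+\delta$ transition of the UBP.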
Similarly, construct the Markov chain $(S_t,I_t,\mathit{SS}_t,\mathit{SI}_t,\mathit{II}_t)$ for the partner model as follows, letting $(S,I,\mathit{SS},\mathit{SI},\mathit{II})$ denote the present state. Define $\alpha_t = y_t-y^*-i_t$ and $\beta_t = i_t/2-1/(2N)$ and note that $\alpha_t$ and $\beta_t$ are piecewise constant in time. Each event in $e_1(1),\ldots,e_1(I)$ reduces $I$ by $1$ and increases $S$ by 1. If an event in $e_2(1),\ldots,e_2(I)$ occurs and $p\leq y^*+\alpha_t$ reduce $S$ and $I$ by 1 and increase $\mathit{SI}$ by~1. If an event in $e_3(1),\ldots,e_3(I)$ occurs and $p \leq\beta_t$ reduce $I$ by 2 and increase $\mathit{II}$ by 1. Each event in $e_4(1),\ldots,e_4(\mathit{SI})$ reduces $\mathit{SI}$ by 1 and increases $\mathit{SS}$ by 1, each event in $e_5(1),\ldots,e_5(\mathit{SI})$ reduces $\mathit{SI}$ by 1 and increases $S$ and $I$ by 1, and events in $e_6,e_7,e_8$ have the same effect as before. If an event in $e_9(1),\ldots,e_9(S)$ occurs and $p \leq s_t/2-1/(2N)$ reduce $S$ by 2 and increase $\mathit{SS}$ by 1, and each event in $e_{10}(1),\ldots,e_{10}(\mathit{SS})$ reduces $\mathit{SS}$ by 1 and increases $S$ by 2. Recalling that $U_t:=(I_t,\mathit{SI}_t,\mathit{II}_t)$, if $U_0 \leq B_0$ and $\sup_{s \leq t}\max (|\alpha_s|,\beta_s)\leq\delta$ then $U_t \leq B_t$ since (as can be easily checked) the order is preserved at each transition. By Lemma~\ref{ydyn}, there are $T_1,\gamma_1>0$ so that w.h.p. $|y_t-y^*|\leq\delta/2$ for $T_1\leq t \leq e^{\gamma_1 N}$ and since $R_0<1$, by Lemma~\ref{staydown} there are $T_2,\gamma_2$ so that w.h.p. $|V_t|\leq(\delta/2)N$, and thus $i_t \leq\delta/2$, for $T_2\leq t \leq e^{\gamma_2 N}$. Letting $T=\max(T_1,T_2)$ and $\gamma=\min(\gamma_1,\gamma_2)$, w.h.p. $\max(|\alpha_t|,\beta_t)\leq\delta$ for $T\leq t \leq e^{\gamma N}$. Setting $B_T=U_T$ and using Lemma~\ref{ubpdown} completes the proof. 
\end{pf} \subsection{Supercritical case: $R_0>1$} We introduce the comparison process for $R_0>1$, which is similar to the UBP but instead serves as a lower bound. \begin{definition}\label{lbpdef} Define the \emph{lowerbound process} (LBP) $B_t = (\mathcal{I}_t,\mathcal{SI}_t,\mathcal{II}_t)$ on state space $\{0,1,2,\ldots\}^3$ with parameter $\delta\geq0$ such that $y^*-\delta\geq0$ by the following transitions: \begin{itemize} \item$\mathcal{I}\rightarrow\mathcal{I}-1$ at rate $(1+2r_+\delta)\mathcal{I}$, \item$\mathcal{I}\rightarrow\mathcal{I}-1$ and $\mathcal{SI}\rightarrow\mathcal{SI}+1$ at rate $r_+(y^*-\delta)\mathcal{I}$, \item$\mathcal{I}\rightarrow\mathcal{I}-2$ at rate $r_+\delta\mathcal{I}$, \item$\mathcal{SI}\rightarrow\mathcal{SI}-1$ at rate $\mathcal{SI}$, \item$\mathcal{SI}\rightarrow\mathcal{SI}-1$ and $\mathcal{I}\rightarrow\mathcal{I}+1$ at rate $r_- \mathcal{SI}$, \item$\mathcal{SI}\rightarrow\mathcal{SI}-1$ and $\mathcal{II}\rightarrow\mathcal{II}+1$ at rate $\lambda\mathcal{SI}$, \item$\mathcal{II}\rightarrow\mathcal{II}-1$ and $\mathcal{SI}\rightarrow\mathcal{SI}+1$ at rate $2 \mathcal{II}$, \item$\mathcal{II}\rightarrow\mathcal{II}-1$ and $\mathcal{I}\rightarrow\mathcal{I}+2$ at rate $r_- \mathcal{II}$. \end{itemize} \end{definition} As before, the LBP describes the evolution of the total number of particles of each of the three types $\mathcal{I},\mathcal{SI},\mathcal{II}$ in a multi-type continuous-time branching process. We now show that for fixed $R_0>1$, if $\delta>0$ is small enough then the LBP survives. \begin{lemma}\label{lbpup} Let $B_t$ denote the LBP with parameter $\delta'$. If $\lambda,r_+,r_-$ are such that $R_0>1$ then there are $C,\delta>0$ so that if $\delta'\leq\delta$ then\break $\liminf_{N\rightarrow\infty}\mathbb{P}(B_{C \log N}\neq(0,0,0))>0$ and \begin{eqnarray*} && \mathbb{P}\bigl(|B_{C \log N}|\geq\delta N \vert B_{C \log N} \neq(0,0,0)\bigr) \rightarrow1 \qquad\mbox{as }N \rightarrow\infty. 
\end{eqnarray*} \end{lemma} \begin{pf} As in the proof of Lemma~\ref{ubpdown}, define the mean matrix $M(t)=\exp(At)$ and the spectral abscissa $\mu(A)$. If $\delta'=0$, the UBP and the LBP coincide, in which case $A$ is the transpose of the linearized matrix at $(0,0,0)$ of the MFE on $\Lambda^*$. As shown in the proof of Theorem~\ref{mfthm}, if $R_0>1$ then $\mu(A)>0$. By continuity of eigenvalues in the entries of a matrix, there is $\delta>0$ so that if $\delta'\leq\delta$ then $\mu(A(\delta'))\geq \mu(A)/2 >0$. As shown in V.7 of \cite{bp}, if $M(t)$ is such that for some $t_0>0$ and each entry $m_{ij}(t)$ of $M(t)$ one has $m_{ij}(t_0)>0$ (which is the case here), then $\mu(A)=:\lambda_1$ is an eigenvalue of $A$, and if $\lambda_1>0$ the process is said to be \emph{supercritical}. In this case, $B_te^{-\lambda_1 t} \rightarrow Wv$ where $v$ is a left eigenvector of $A$ with eigenvalue $\lambda_1$ and $W$ is a real-valued random variable. Setting $t=C\log N$ with $C>1/\lambda_1$ and letting $\gamma= C\lambda_1>1$, $B_{C \log N}N^{-\gamma} \rightarrow Wv$, so for each $\varepsilon>0$, \begin{eqnarray*} && \liminf_{N\rightarrow\infty}\mathbb{P}\bigl(|B_{C\log N}|\geq\delta N\bigr) \geq \lim_{N\rightarrow\infty}\mathbb{P}\bigl(|B_{C\log N}| \geq \varepsilon N^{\gamma}\bigr) = \mathbb{P}\bigl(W|v|\geq\varepsilon\bigr) \end{eqnarray*} and letting $\varepsilon\rightarrow0^+$ and using continuity of measure, \begin{eqnarray*} && \liminf_{N\rightarrow\infty}\mathbb{P}\bigl(|B_{C\log N}|\geq\delta N\bigr) \geq \mathbb{P}(W>0). \end{eqnarray*} Under a mild regularity assumption on the offspring distribution that holds trivially in this case, $\mathbb{P}(W>0)=\lim_{t\rightarrow\infty}\mathbb{P}(B_t\neq(0,0,0))>0$. 
Since $|B_t| \geq\delta N$ implies $B_t\neq(0,0,0)$, this means $\limsup_{N\rightarrow\infty}\mathbb {P}(|B_{C\log N}|\geq\delta N) \leq\break \lim_{t\rightarrow\infty}\mathbb {P}(B_t\neq(0,0,0)) =\mathbb{P}(W>0)$, so $\lim_{N\rightarrow\infty }\mathbb{P}(|B_{C\log N}|\geq\delta N)$ exists and is equal to $\mathbb {P}(W>0)$. The result then follows by observing that for $t,x>0$, $\mathbb{P}(|B_t| \geq x \vert B_t \neq(0,0,0)) = \mathbb{P}(|B_t|\geq x)/\mathbb{P}(B_t \neq(0,0,0))$. \end{pf} We now complete the proof of Theorem~\ref{thm1}. \begin{proposition} If $R_0>1$, there are constants $\delta,p,C,T>0$ so that if $|V_0|>0$ then $\mathbb{P}(|V_{T+C\log N}| \geq\delta N )\geq p$. \end{proposition} \begin{pf} We use the same approach as in the proof of Proposition~\ref {microdown}. Let $U_t := (I_t,\mathit{SI}_t,\mathit{II}_t)$ denote variables in the partner model and for $\delta_1>0$ such that $\delta_1\leq1$, $y^*-\delta_1\geq0$ and $y^*+\delta_1\leq1$, let $B_t$ denote the LBP with parameter $\delta_1$. Let $e_1,\ldots,e_{10}$ be as in the proof of Proposition~\ref{microdown}. Construct the LBP one transition at a time as follows, letting $(\mathcal{I},\mathcal{SI},\mathcal{II})$ denote the present state. Each event in $e_1(1),\ldots,e_1(\mathcal{I})$ reduces $\mathcal{I}$ by $1$. For an event in $e_2,e_3$ let $p$ denote the corresponding uniform $[0,1]$ random variable. If an event in $e_2(1),\ldots,e_2(\mathcal{I})$ occurs and $p \leq(y^*-\delta_1)$, reduce $\mathcal{I}$ by 1 and increase $\mathcal{SI}$ by 1, while if $y^*-\delta_1 < p \leq y^*+\delta _1$ simply reduce $\mathcal{I}$ by 1. If an event in $e_3(1),\ldots,e_3(\mathcal{I})$ occurs and $p\leq\delta_1$, reduce $\mathcal{I}$ by 2. Events in $e_4,e_5,e_6,e_7,e_8$ have the same effect as in the dynamics of the UBP. 
The Markov chain $(S_t,I_t,\mathit{SS}_t,\mathit{SI}_t,\mathit{II}_t)$ for the partner model is constructed in the same way as in the proof of Proposition~\ref{microdown}, with $\alpha _t,\beta_t$ defined in the same way, and it is easy to check in this case that if $U_0\geq B_0$ and $\sup_{s \leq t}\max(|\alpha_s|,\beta _s)\leq\delta_1$ then $U_t \geq B_t$. Define the stopping time $\tau= \inf\{t:|U_t|\geq\delta_1 N/2\}$ and note that $|V_{\tau}|\geq(\delta_1/2) N$. By Lemma~\ref{stayup} and using the strong\vspace*{1pt} Markov property, there are $\delta,\gamma>0$ so that w.h.p. $|V_t|\geq\delta N$ for $\tau\leq t \leq\tau+e^{\gamma N}$. There are $T,\gamma>0$ so that w.h.p. $|y_t - y^*|\leq\delta_1/2$ for $T\leq t \leq e^{\gamma N}$. If $\tau\leq T$, then since $T$ is fixed, we are done. If $t<\tau$ then $i_t \leq\delta_1/2$, so letting $B_T=U_T$, if $T\leq t <\tau$ then $\max(|\alpha_t|,\beta_t)\leq\delta_1$, so $U_t\geq B_t$ for $T \leq t < \tau$. The result follows from this inequality and from Lemma~\ref{lbpup}. \end{pf} \section*{Acknowledgements} The authors wish to thank Chris Hoffman for the suggestion to study the model on the complete graph, as well as the referee for a thorough reading of the article and for helpful comments.
\section{Introduction} \label{SCintro} In discrete convex analysis \cite{Mdca98=valmat, Mdcasiam=valmat}, \cite[Chapter VII]{Fuj05book=valmat}, M-concave functions and their variant called M$\sp{\natural}$-concave functions play a major role. The concepts of M-concave functions and M$\sp{\natural}$-concave functions were introduced, respectively, by Murota \cite{Mstein96=valmat} and Murota--Shioura \cite{MS99gp=valmat} for functions defined on the integer vectors. In this paper we deal with \Mnat- and M-concave set functions, that is, \Mnat- and M-concave functions defined on $\{ 0, 1 \}$-vectors. M-concave set functions are exactly the same as valuated matroids introduced earlier by Dress--Wenzel \cite{DW90=valmat,DW92=valmat}. \Mnat-concavity of a set function has significance in economics, as it is equivalent to the gross substitutes property (GS) of Kelso--Crawford \cite{KC82=valmat}. See Murota \cite[Chapter 11]{Mdcasiam=valmat}, Murota \cite{Mdcaeco16=valmat}, and Shioura--Tamura \cite{ST15jorsj=valmat} for more about economic significance of \Mnat-concavity. In this paper we are interested in various types of exchange properties characterizing \Mnat-concave and M-concave set functions. We aim at collecting related results scattered in the literature and giving (reasonably) self-contained elementary proofs for them. The exchange properties for \Mnat-concave set functions treated in Section~\ref{SCmncavsetfn} are mostly based on Murota--Shioura \cite{MS99gp=valmat,MS18mnataxiom=valmat}. The proofs given in Section~\ref{SCmncavexcprf} are obtained by translating the proofs given in \cite{MS18mnataxiom=valmat} for functions on the integer lattice to those for set functions (with some simplifications). The exchange properties for M-concave set functions treated in Section~\ref{SCmcavsetfn} are based on Murota \cite{Mmax97=valmat,Mspr2000=valmat,Mdcasiam=valmat}, while the proofs are made consistent with those in Section~\ref{SCmncavexcprf}. 
Multiple exchange properties treated in Section~\ref{SCexchange01mult} are taken from Murota \cite{Mmultexc18=valmat,Mmultexcstr18=valmat}. \section{\Mnat-concave Set Functions} \label{SCmncavsetfn} \subsection{Definition} \label{SCmncavsetfnDef} Let $f: 2\sp{N} \to \Rminf$ be a real-valued set function on $N = \{ 1,2,\ldots, n \}$ and $\mathcal{F} = \dom f$ be the {\em effective domain} of $f$ defined by \begin{equation} \label{effdom01def} \dom f = \{ X \subseteq N \mid f(X) > -\infty \}. \end{equation} We always assume that $\dom f$ is nonempty. We say that $f$ is an {\em \Mnat-concave function} if, for any $X, Y \in \mathcal{F}$ and $i \in X \setminus Y$, we have (i) $X - i \in \mathcal{F}$, $ Y + i \in \mathcal{F}$ and \begin{equation} \label{mnatcav1} f( X) + f( Y ) \leq f( X - i ) + f( Y + i ), \end{equation} or (ii) there exists some $j \in Y \setminus X$ such that $X - i +j \in \mathcal{F}$, $ Y + i -j \in \mathcal{F}$ and \begin{equation} \label{mnatcav2} f( X) + f( Y ) \leq f( X - i + j) + f( Y + i -j). \end{equation} Such a property is referred to as an {\em exchange property}. Here we use short-hand notations $X - i = X \setminus \{ i \}$ and $Y + i = Y \cup \{ i \}$ as well as $X - i + j =(X \setminus \{ i \}) \cup \{ j \}$ and $Y + i - j =(Y \cup \{ i \}) \setminus \{ j \}$. An \Mnat-concave function can also be defined without explicit reference to its effective domain by the following expression of the exchange property\footnote{In acronym $\MncavS$, ``EXC'' stands for ``exchange'' and $\overline{\rm \phantom{EXC}}$ in ``$\overline{\rm EXC}$'' indicates concavity (in contrast to convexity); ``$\mathbb{B}$'' stands for ``binary'' showing that this condition applies to set functions (in contrast to ``$\mathbb{Z}$'' for functions on the integer lattice). 
} \begin{description} \item[\MncavSb] For any $X, Y \subseteq N$ and $i \in X \setminus Y$, we have \begin{align} f( X) + f( Y ) &\leq \max\left( f( X - i ) + f( Y + i ), \ \max_{j \in Y \setminus X} \{ f( X - i + j) + f( Y + i -j) \} \right) , \label{mnatconcavexc2} \end{align} \end{description} where $(-\infty) + a = a + (-\infty) = (-\infty) + (-\infty) = -\infty$ for $a \in \RR$, $-\infty \leq -\infty$, and the maximum taken over an empty set is defined to be $-\infty$. The effective domain of an \Mnat-concave function is equipped with a nice combinatorial structure. Let $\mathcal{F}$ be the effective domain of an \Mnat-concave function $f$. As a consequence of the exchange property $\MncavS$ of function $f$, the set family $\mathcal{F}$ satisfies the following exchange property: \begin{description} \item[\BnvexSb] For any $X, Y \in \mathcal{F}$ and $i \in X \setminus Y$, \ we have (i) $X - i \in \mathcal{F}$, $ Y + i \in \mathcal{F}$ \ or \ (ii) there exists some $j \in Y \setminus X$ such that $X - i +j \in \mathcal{F}$, $ Y + i -j \in \mathcal{F}$. \end{description} This means that $\mathcal{F}$ forms a matroid-like structure, called a {\em generalized matroid} ({\em g-matroid}). In this paper we refer to it as an {\em \Mnat-convex family} to emphasize its role for discrete convexity. An \Mnat-convex family containing the empty set as its member is exactly the family of independent sets of a matroid. An \Mnat-convex family consisting of equi-cardinal sets forms the family of bases of a matroid, which may also be called an {\em M-convex family}\index{M-convex family}. More generally, an equi-cardinal subfamily\footnote{An equi-cardinal subfamily of $\mathcal{F}$ means a set family represented as $\{ X \in \mathcal{F} \mid |X| = r \}$ for some $r \in \ZZ$. } of an \Mnat-convex family is an M-convex family. 
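For small ground sets, the exchange property $\MncavS$ can be checked by brute force. The sketch below does this for an arbitrary finite-valued set function; as a positive example it uses a unit-demand valuation $f(X)=\max_{j\in X}w_j$ (a standard gross-substitutes, hence \Mnat-concave, function), and as a counterexample $f(X)=|X|^2$; the ground set and weights are illustrative choices, not taken from the text.

```python
from itertools import chain, combinations

def subsets(N):
    s = sorted(N)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def is_mnat_concave(N, f):
    """Brute-force check of the exchange property: for all X, Y and
    i in X \\ Y, the value f(X)+f(Y) is dominated either by
    f(X-i)+f(Y+i) or by f(X-i+j)+f(Y+i-j) for some j in Y \\ X."""
    for X in map(set, subsets(N)):
        for Y in map(set, subsets(N)):
            for i in X - Y:
                cands = [f(X - {i}) + f(Y | {i})]
                cands += [f((X - {i}) | {j}) + f((Y | {i}) - {j})
                          for j in Y - X]
                if f(X) + f(Y) > max(cands) + 1e-9:  # tolerance for floats
                    return False
    return True

N = {1, 2, 3, 4}
w = {1: 5.0, 2: 3.0, 3: 2.0, 4: 1.0}
unit_demand = lambda X: max((w[j] for j in X), default=0.0)  # M-natural-concave
square = lambda X: float(len(X)) ** 2                        # violates EXC
```

For `square`, the violation already appears at $X=\{1,2\}$, $Y=\emptyset$, $i=1$, where $f(X)+f(Y)=4$ exceeds $f(X-i)+f(Y+i)=2$ and no swap candidate exists.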
M-convex families (matroid bases) and \Mnat-convex families (g-matroids) are fully studied in the literature of matroids (Frank \cite{Fra11book=valmat}, Oxley \cite{Oxl11=valmat}, Schrijver \cite{Sch03=valmat}, and Welsh \cite{Wel76=valmat}). \subsection{Exchange properties characterizing \Mnat-concave functions} \label{SCexchange01glob} Under the assumption that $\dom f$ contains the empty set, \Mnat-concavity can be characterized by a simpler condition: \begin{description} \item[{\bf (P1$[\mathbb{B}]$)}] For any $X, Y \subseteq N$ with $|X| < |Y|$, we have \begin{align} f( X) + f( Y ) &\leq \max_{j \in Y \setminus X} \{ f( X + j) + f( Y -j) \} . \label{mnatP1=01} \end{align} \end{description} This condition (P1$[\mathbb{B}]$), applicable to a pair of subsets $(X, Y)$ with $|X| < |Y|$, brings the pair closer by moving an appropriate element $j \in Y \setminus X$ from $Y$ to $X$ without decreasing the sum of the function values. \begin{theorem}\label{THmnatcavP1hered01} Let $f: 2\sp{N} \to \Rminf$ be a set function with $\dom f$ containing the empty set. Then $f$ is \Mnat-concave if and only if it satisfies {\rm (P1$[\mathbb{B}]$)}. \end{theorem} \begin{proof} The proof is given in Section~\ref{SCproofmnatcavP1hered01}. \end{proof} \Mnat-concave functions satisfy other cardinality-restricted exchange properties and are characterized by some combinations thereof. The exchange properties (P2$[\mathbb{B}]$) and (P3$[\mathbb{B}]$) below exclude the first possibility (\ref{mnatcav1}) in $\MncavS$ when $|X| \leq |Y|$, and (P4$[\mathbb{B}]$) is a special case of $\MncavS$ with $|X| > |Y|$. 
\begin{description} \item[{\bf (P2$[\mathbb{B}]$)}] For any $X, Y \subseteq N$ with $|X| = |Y|$ and $i \in X \setminus Y$, we have \begin{align} f( X) + f( Y ) &\leq \max_{j \in Y \setminus X} \{ f( X - i + j) + f( Y + i -j) \} ; \label{mnatP2=01} \end{align} \item[{\bf (P3$[\mathbb{B}]$)}] For any $X, Y \subseteq N$ with $|X| < |Y|$ and $i \in X \setminus Y$, we have \eqref{mnatP2=01}: \begin{align} f( X) + f( Y ) &\leq \max_{j \in Y \setminus X} \{ f( X - i + j) + f( Y + i -j) \}; \label{mnatP3=01} \end{align} \item[{\bf (P4$[\mathbb{B}]$)}] For any $X, Y \subseteq N$ with $|X| > |Y|$ and $i \in X \setminus Y$, we have {\rm (\ref{mnatconcavexc2})}: \begin{align} f( X) + f( Y ) &\leq \max\left( f( X - i ) + f( Y + i ), \ \max_{j \in Y \setminus X} \{ f( X - i + j) + f( Y + i -j) \} \right) . \label{mnatP4=01} \end{align} \end{description} The following theorem gives two characterizations of \Mnat-concave set functions. \begin{theorem}\label{THmconcavcardexc01} Let $f: 2\sp{N} \to \Rminf$ be a set function with $\dom f \not= \emptyset$. \noindent {\rm (1)} $f$ is \Mnat-concave if and only if it satisfies {\rm (P1$[\mathbb{B}]$)} and {\rm (P2$[\mathbb{B}]$)}. \noindent {\rm (2)} $f$ is \Mnat-concave if and only if it satisfies {\rm (P2$[\mathbb{B}]$)}, {\rm (P3$[\mathbb{B}]$)}, and {\rm (P4$[\mathbb{B}]$)}. \end{theorem} \begin{proof} The proof is given in Section~\ref{SCproofmnatexccardloc}. \end{proof} \begin{remark} \rm \label{RMmconcavcardexc01bib} Theorem~\ref{THmnatcavP1hered01} and Theorem~\ref{THmconcavcardexc01}(1) are explicit in Murota--Shioura \cite{MS18mnataxiom=valmat} as Corollary 1.4 and Corollary 1.3, respectively. Theorem~\ref{THmconcavcardexc01}(2) is an adaptation of Theorem 2.1 in \cite{MS18mnataxiom=valmat} to set functions. 
\finbox \end{remark} Other types of exchange properties for \Mnat-concavity shall be treated later, local exchange properties in Section~\ref{SCexchange01loc} and multiple exchange properties in Section~\ref{SCexchange01mult}. \subsection{Local exchange properties characterizing \Mnat-concave functions} \label{SCexchange01loc} \Mnat-concavity can be characterized by local exchange properties. The conditions (L1$[\mathbb{B}]$)--(L3$[\mathbb{B}]$) below are indeed ``local'' in the sense that they require the exchangeability of the form (\ref{mnatconcavexc2}) only for $(X,Y)$ with $\max(| X \setminus Y | , | Y \setminus X |) \leq 2$. \begin{description} \item[{\bf (L1$[\mathbb{B}]$)}] For any $Z \subseteq N$ and distinct $i,j \in N \setminus Z$, we have \begin{align} f( Z + i + j ) + f( Z ) \leq f(Z + i) + f(Z + j) ; \label{mnatconcavexc20loc} \end{align} \item[{\bf (L2$[\mathbb{B}]$)}] For any $Z \subseteq N$ and distinct $i,j,k \in N \setminus Z$, we have \begin{align} & f( Z + i + j ) + f( Z + k) \notag \\ & \quad \leq \max\left[ f(Z + i + k) + f(Z + j), \ f(Z + j + k) + f(Z + i) \right] ; \label{mnatconcavexc21loc} \end{align} \item[{\bf (L3$[\mathbb{B}]$)}] For any $Z \subseteq N$ and distinct $i,j,k,l \in N \setminus Z$, we have \begin{align} & f( Z + i + j ) + f( Z + k + l) \notag \\ & \quad \leq \max\left[ f(Z + i + k) + f(Z + j +l ), \ f(Z + j + k) + f(Z + i + l) \right]. \label{mnatconcavexc22loc} \end{align} \end{description} \begin{remark} \rm \label{RMmlocnonunimax} Condition (L2$[\mathbb{B}]$) is equivalent to saying that, for any $Z \subseteq N$ and distinct $i,j,k \not\in Z$, the maximum value in $\{ f( Z + i + j ) + f( Z + k), \ f(Z + i + k) + f(Z + j), \ f(Z + j + k) + f(Z + i) \}$ is attained by at least two elements therein. 
Similarly, (L3$[\mathbb{B}]$) is equivalent to saying that, for any $Z \subseteq N$ and distinct $i,j,k,l \not\in Z$, the maximum value in $\{ f( Z + i + j ) + f( Z + k + l), \ f(Z + i + k) + f(Z + j +l ), \ f(Z + j + k) + f(Z + i + l) \}$ is attained by at least two elements therein. \finbox \end{remark} The set of the above three conditions will be referred to as $\MncavlocS$. That is, \begin{description} \item[\MncavlocSb] The conditions (L1$[\mathbb{B}]$), (L2$[\mathbb{B}]$), and (L3$[\mathbb{B}]$) hold. \end{description} \noindent The following is a naturally expected statement. \begin{propositionM} \label{PRmconcavlocexc01onlyif} An \Mnat-concave set function satisfies $\MncavlocS$. \end{propositionM} \begin{proof} The conditions (L1$[\mathbb{B}]$) and (L2$[\mathbb{B}]$) are immediate consequences of the exchange property $\MncavS$. The derivation of (L3$[\mathbb{B}]$) below demonstrates typical reasoning about \Mnat-concavity.\footnote{(L3$[\mathbb{B}]$) is a special case of (P2$[\mathbb{B}]$) that appeared in Theorem~\ref{THmconcavcardexc01}. However, we need to prove (L3$[\mathbb{B}]$) without using (P2$[\mathbb{B}]$), since (P2$[\mathbb{B}]$) is not proved yet and moreover, the proof of (P2$[\mathbb{B}]$) given in Section~\ref{SCproofmnatexccardloc} relies on this Proposition~\ref{PRmconcavlocexc01onlyif}. } To simplify notation, we write $\alpha_{i} = f(Z + i)$, $\alpha_{ij} = f(Z + i + j)$, and $\alpha_{ijk} = f(Z + i + j + k)$, etc., and assume $i=1$, $j=2$, $k=3$, $l=4$ in (L3$[\mathbb{B}]$) to obtain \begin{equation} \label{mnatconcavexc22locAlpha} \alpha_{12}+\alpha_{34} \leq \max\{\alpha_{13}+\alpha_{24},\alpha_{14}+\alpha_{23}\}. \end{equation} To prove this by contradiction, suppose that \begin{equation}\label{vmloc22prf1} \alpha_{12} + \alpha_{34} > \max\{\alpha_{13} + \alpha_{24}, \alpha_{14} + \alpha_{23}\}. 
\end{equation} With the notation $A=\alpha_{12} + \alpha_{34}$ we obtain \begin{align} A = \alpha_{12} + \alpha_{34} & \leq \max \{\alpha_{1}+\alpha_{234}, \alpha_{13} + \alpha_{24}, \alpha_{14} + \alpha_{23}\} =\alpha_{1}+\alpha_{234} \label{vmloc22prf2} \end{align} from \MncavS (with $i=2$) and (\ref{vmloc22prf1}). Similarly, we have \begin{equation}\label{vmloc22prf3} A \leq \alpha_{2}+\alpha_{134}, \qquad A \leq \alpha_{3}+\alpha_{124}, \qquad A \leq \alpha_{4}+\alpha_{123}. \end{equation} On the other hand, we have \begin{align} &\alpha_{1} + \alpha_{123} \leq \alpha_{12} + \alpha_{13}, \qquad \alpha_{2} + \alpha_{234} \leq \alpha_{23} + \alpha_{24}, \label{vmloc22prf42} \\ &\alpha_{3} + \alpha_{134} \leq \alpha_{13} + \alpha_{34}, \qquad \alpha_{4} + \alpha_{124} \leq \alpha_{14} + \alpha_{24} \label{vmloc22prf44} \end{align} by $\MncavS$. By adding the four inequalities in (\ref{vmloc22prf2}) and (\ref{vmloc22prf3}) and using the inequalities in (\ref{vmloc22prf42}), (\ref{vmloc22prf44}), and (\ref{vmloc22prf1}), we obtain \begin{align*} 4A &\leq (\alpha_{1} + \alpha_{234}) +(\alpha_{2} + \alpha_{134}) +(\alpha_{3} + \alpha_{124}) +(\alpha_{4} + \alpha_{123}) \notag \\ & = (\alpha_{1} + \alpha_{123}) +(\alpha_{2} + \alpha_{234}) +(\alpha_{3} + \alpha_{134}) +(\alpha_{4} + \alpha_{124}) \notag \\ & \leq (\alpha_{12} + \alpha_{13}) +(\alpha_{23} + \alpha_{24}) +(\alpha_{13} + \alpha_{34}) +(\alpha_{14} + \alpha_{24}) \notag \\ & = (\alpha_{12} + \alpha_{34}) +(\alpha_{23} + \alpha_{14}) + 2(\alpha_{13} + \alpha_{24}) \notag \\ & < 4A . \end{align*} This is a contradiction. Thus (L3$[\mathbb{B}]$) is shown. \end{proof} The converse of Proposition~\ref{PRmconcavlocexc01onlyif} is also true, that is, the local exchange property $\MncavlocS$ characterizes \Mnat-concavity under some assumption on the effective domain of the function. 
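On small ground sets the three local conditions can also be verified exhaustively, a convenient sanity check alongside Proposition~\ref{PRmconcavlocexc01onlyif}. The sketch below tests (L1$[\mathbb{B}]$)--(L3$[\mathbb{B}]$) directly for a finite-valued set function; the unit-demand valuation $f(X)=\max_{j\in X}w_j$ with assumed weights $w$ is a standard \Mnat-concave instance, so all three conditions hold for it, while $f(X)=|X|^2$ already fails (L1$[\mathbb{B}]$).

```python
from itertools import chain, combinations

def subsets(N):
    s = sorted(N)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def satisfies_local(N, f, tol=1e-9):
    """Brute-force check of the local exchange conditions (L1)-(L3)."""
    for Z in map(set, subsets(N)):
        rest = sorted(set(N) - Z)
        # (L1): f(Z+i+j) + f(Z) <= f(Z+i) + f(Z+j)
        for i, j in combinations(rest, 2):
            if f(Z | {i, j}) + f(Z) > f(Z | {i}) + f(Z | {j}) + tol:
                return False
        # (L2): checked for every labelling of the triple {i, j, k}
        for trip in combinations(rest, 3):
            for a, b, c in [(trip[0], trip[1], trip[2]),
                            (trip[0], trip[2], trip[1]),
                            (trip[1], trip[2], trip[0])]:
                lhs = f(Z | {a, b}) + f(Z | {c})
                rhs = max(f(Z | {a, c}) + f(Z | {b}),
                          f(Z | {b, c}) + f(Z | {a}))
                if lhs > rhs + tol:
                    return False
        # (L3): checked for every pairing of the quadruple {i, j, k, l}
        for i, j, k, l in combinations(rest, 4):
            for (a, b), (c, d) in [((i, j), (k, l)),
                                   ((i, k), (j, l)),
                                   ((i, l), (j, k))]:
                lhs = f(Z | {a, b}) + f(Z | {c, d})
                rhs = max(f(Z | {a, c}) + f(Z | {b, d}),
                          f(Z | {a, d}) + f(Z | {b, c}))
                if lhs > rhs + tol:
                    return False
    return True

N = {1, 2, 3, 4, 5}
w = {1: 5.0, 2: 4.0, 3: 3.0, 4: 2.0, 5: 1.0}
unit_demand = lambda X: max((w[j] for j in X), default=0.0)
square = lambda X: float(len(X)) ** 2  # fails (L1) already at Z = {}
```

The (L2) and (L3) loops implement the ``maximum attained at least twice'' reformulation of Remark~\ref{RMmlocnonunimax} by testing every labelling of the triple or pairing of the quadruple.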
\begin{theorem}\label{THmnatcavlocexc01} A set function $f: 2\sp{N} \to \Rminf$ is \Mnat-concave if and only if the effective domain $\dom f$ is an \Mnat-convex family and $\MncavlocS$ is satisfied. \end{theorem} \begin{proof} The ``only if'' part is already shown in Proposition~\ref{PRmconcavlocexc01onlyif}. The proof of the ``if'' part is given in Section~\ref{SCproofmnatexccardloc}. \end{proof} If the effective domain contains the empty set, in addition to being an \Mnat-convex family, we can dispense with the third condition (L3$[\mathbb{B}]$) in $\MncavlocS$, as is stated in the following theorem. Recall that an \Mnat-convex family containing the empty set is exactly the family of independent sets of a matroid. \begin{theorem}\label{THmnatcavlocexc01hered} Let $f: 2\sp{N} \to \Rminf$ be a set function such that $\dom f$ is the family of independent sets of a matroid (an \Mnat-convex family containing the empty set). Then $f$ is \Mnat-concave if and only if it satisfies {\rm (L1$[\mathbb{B}]$)} and {\rm (L2$[\mathbb{B}]$)}. \end{theorem} \begin{proof} The proof is given in Section~\ref{SCproofmnatlocexc01hered}. \end{proof} \begin{remark} \rm \label{RMmnatlocdomcond} In Theorems \ref{THmnatcavlocexc01} and \ref{THmnatcavlocexc01hered} the assumptions on $\dom f$ are indispensable. For example, let $N=\{ 1,2,\ldots, 6 \}$ and define $f(\{ 1,2,3 \}) = f(\{ 4,5,6 \})=0$, and $f(X)=-\infty$ for $X \not= \{ 1,2,3 \}, \{ 4,5,6 \}$. This function $f$ is not \Mnat-concave, since $\dom f = \{ \{ 1,2,3 \}, \{ 4,5,6 \} \}$ is not an \Mnat-convex family. However, $f$ satisfies the conditions (L1$[\mathbb{B}]$), (L2$[\mathbb{B}]$), and (L3$[\mathbb{B}]$) in a trivial manner, since the left-hand sides of (\ref{mnatconcavexc20loc})--(\ref{mnatconcavexc22loc}) are always equal to $-\infty$. 
\finbox \end{remark} \begin{remark} \rm \label{RMmnatcavlocexc01bib} Theorem~\ref{THmnatcavlocexc01} is due to Murota--Shioura \cite{MS18mnataxiom=valmat}; see also Murota \cite{Mstein96=valmat, Mdcasiam=valmat}, Murota--Shioura \cite{MS99gp=valmat}. Theorem~\ref{THmnatcavlocexc01hered} is due to Reijnierse--van Gallekom--Potters \cite[Theorem~10]{RGP02=valmat} (also M{\"u}ller \cite[Theorem~13.5]{Mul06=valmat}, Shioura--Tamura \cite[Theorem~6.5]{ST15jorsj=valmat}). \finbox \end{remark} \begin{remark} \rm \label{RMmnatlocweakcond} In Theorem~\ref{THmnatcavlocexc01}, the assumption that $\dom f$ should be an \Mnat-convex family can be weakened. See Proposition~\ref{PRmnatcavlocexcW01} in Section~\ref{SCproofmnatexccardloc}. \finbox \end{remark} \section{Proofs about Exchange Properties of \Mnat-concave Functions} \label{SCmncavexcprf} Theorems about exchange properties stated in Section~\ref{SCmncavsetfn} are proved in this section. The proofs are ordered in accordance with logical dependence, that is, if the proof of Theorem~A relies on Theorem~B, Theorem~B is proved before it is used. \subsection{Proof of Theorems \ref{THmconcavcardexc01}~and \ref{THmnatcavlocexc01}} \label{SCproofmnatexccardloc} In this section we prove the equivalence of the exchange properties $\MncavS$, $\MncavlocS$, and some combinations of (P1$[\mathbb{B}]$) to (P4$[\mathbb{B}]$) stated in Theorems \ref{THmconcavcardexc01} and \ref{THmnatcavlocexc01}. 
The proof is based on Murota--Shioura \cite{MS18mnataxiom=valmat} and can be summarized as follows: \begin{center} \begin{tabular}{|ccc|} \hline \MncavS & {$\stackrel{\mbox{\small Prop.~\ref{PRmconcavlocexc01onlyif}}}{\Longrightarrow}$} & $\MncavlocS$ $+$ DOM \\ {\small Obviously} $\Uparrow$ \phantom{\small Obviously} && \phantom{\small Lem \ref{LMmnatexcdiffcard01}, \ref{LMmnatexcequicard01}} $\Downarrow$ {\small Lem \ref{LMmnatexcdiffcard01}, \ref{LMmnatexcequicard01}} \\ (P2$[\mathbb{B}]$), (P3$[\mathbb{B}]$), (P4$[\mathbb{B}]$) & {$\stackrel{\mbox{\small Lem \ref{LMp12toP3}, \ref{LMp12toP4}}}{\Longleftarrow}$} & (P1$[\mathbb{B}]$), (P2$[\mathbb{B}]$) \\ \hline \end{tabular} \end{center} where ``DOM'' denotes some conditions on the effective domain to be specified below. Recall that (P1$[\mathbb{B}]$) to (P4$[\mathbb{B}]$) are defined in Section~\ref{SCexchange01glob}, and \MncavlocS consists of the three local exchange properties (L1$[\mathbb{B}]$), (L2$[\mathbb{B}]$), and (L3$[\mathbb{B}]$) in Section~\ref{SCexchange01loc}. We use the notation \[ f_{p}(X) = f[+p](X) = f(X) + \sum_{i \in X} p_{i} \] for $p \in \mathbb{R}\sp{N}$ and $X \subseteq N$. We start with some properties of an \Mnat-convex family. \begin{propositionM} \label{PRmnsetconnected01} A set family $\mathcal{F}$ satisfying $\BnvexS$ has the following properties.\footnote{(\ref{Fconnected<2}) is not implied by (\ref{Fconnected<1}) and (\ref{Fconnected=}). For example, $\mathcal{F} = \{ \{ 1,2 \}, \{ 1,4 \}, \{ 1,5 \}, \{ 4,5 \}$, $\{ 3,4,5 \} \}$ satisfies (\ref{Fconnected<1}) and (\ref{Fconnected=}), but not (\ref{Fconnected<2}).
} \begin{align} & \bullet \mbox{ If $X, Y \in \mathcal{F}$ and $|X| < |Y|$, there exists $j \in Y \setminus X$ such that $Y - j \in \mathcal{F}$;} \label{Fconnected<1} \\ & \bullet \mbox{ If $X, Y \in \mathcal{F}$, $|X| = |Y|$, and $X \not= Y$, then} \notag \\ & \qquad \mbox{ there exist $i \in X \setminus Y$ and $j \in Y \setminus X$ such that $Y + i - j \in \mathcal{F}$;} \label{Fconnected=} \\ & \bullet \mbox{ If $X, Y \in \mathcal{F}$, $|X| < |Y|$, and $X \setminus Y \not= \emptyset$, then} \notag \\ & \qquad \mbox{ there exist $i \in X \setminus Y$ and $j \in Y \setminus X$ such that $Y + i - j \in \mathcal{F}$.} \label{Fconnected<2} \end{align} \end{propositionM} \begin{proof} We prove (\ref{Fconnected<1}) by induction on $|X \bigtriangleup Y| = |X \setminus Y| + |Y \setminus X|$. If $|X \bigtriangleup Y| = 1$, then $X = Y - j \in \mathcal{F}$ holds with the unique element $j \in Y \setminus X$. For the induction step, assume $|X \bigtriangleup Y| > 1$. If $X \setminus Y = \emptyset$, then \BnvexS applied to $Y$, $X$, and an arbitrary $j \in Y \setminus X$ implies $Y - j \in \mathcal{F}$. Otherwise, take any $i \in X \setminus Y$. Then, we have (i) $X - i \in \mathcal{F}$, or (ii) there exists some $k \in Y \setminus X$ such that $X - i +k \in \mathcal{F}$. Let $X' = X-i$ in case (i) and $X' = X-i+ k$ in case (ii). Since $|X'| \le |X| < |Y|$ and $|X' \bigtriangleup Y| < |X \bigtriangleup Y|$, we can apply the induction hypothesis to $X'$ and $Y$ to obtain $Y - j \in \mathcal{F}$ for some $j \in Y \setminus X' \subseteq Y \setminus X$. To prove (\ref{Fconnected=}) and (\ref{Fconnected<2}), assume $|X| \leq |Y|$ and $X \setminus Y \not= \emptyset$. We apply \BnvexS to $X$, $Y$, and $i \in X \setminus Y$, to obtain (i) $ Y + i \in \mathcal{F}$, or (ii) there exists some $j \in Y \setminus X$ such that $ Y + i -j \in \mathcal{F}$. In case (ii) there is nothing more to prove. In case (i) we have $|X| \le |Y| < |Y+i|$.
Then, by (\ref{Fconnected<1}), we obtain $Y+i - j \in \mathcal{F}$ for some $j \in (Y+i) \setminus X = Y \setminus X$. \end{proof} To prove Theorem \ref{THmnatcavlocexc01}, we will show a stronger statement. \begin{proposition}\label{PRmnatcavlocexcW01} A set function $f: 2\sp{N} \to \Rminf$ is \Mnat-concave if and only if the effective domain $\dom f$ satisfies \eqref{Fconnected<1}, \eqref{Fconnected=}, and \eqref{Fconnected<2} and $\MncavlocS$ is satisfied. \end{proposition} In Lemmas \ref{LMmnatexcdiffcard01} and \ref{LMmnatexcequicard01} below, we derive (P1$[\mathbb{B}]$) and (P2$[\mathbb{B}]$) from $\MncavlocS$, respectively, under the connectedness conditions (\ref{Fconnected<1}), (\ref{Fconnected=}), and (\ref{Fconnected<2}) on $\dom f$. \begin{lemmaM} \label{LMmnatexcdiffcard01} If $\dom f$ satisfies {\rm (\ref{Fconnected<1})} and {\rm (\ref{Fconnected<2})}, then $\MncavlocS$ implies {\rm (P1$[\mathbb{B}]$)}. \end{lemmaM} \begin{proof} To prove (P1$[\mathbb{B}]$) by contradiction, we assume that there exists a pair $(X,Y)$ for which (\ref{mnatP1=01}) fails. That is, we assume that the set of such pairs \begin{align*} \mathcal{D} = \{(X, Y) \mid {} & X, Y \in \dom f,\ |X| < |Y|, \ \\ & f(X) + f(Y) > f(X + j) + f(Y - j) \mbox{ for all } j \in Y \setminus X \} \end{align*} is nonempty. Take a pair $(X,Y) \in \mathcal{D}$ with $|X \bigtriangleup Y| = |X \setminus Y| + |Y \setminus X|$ minimum. For a fixed $\varepsilon > 0$, define $p \in \mathbb{R}\sp{N}$ by \[ p_{j} = \left\{ \begin{array}{ll} f(X) - f(X + j) & (j \in Y \setminus X,\ X + j \in \dom f),\\ f(Y - j) - f(Y) + \varepsilon & (j \in Y \setminus X,\ X + j \not\in \dom f, \ Y - j \in \dom f),\\ 0 & (\mbox{otherwise}). \end{array} \right. \] \medskip Claim 1: \begin{eqnarray} f_{p}(X + j) & = & f_{p}(X) \qquad (j \in Y \setminus X,\ X + j \in \dom f), \label{vmexcard1l1c11=01}\\ f_{p}(Y - j) & < & f_{p}(Y) \qquad (j \in Y \setminus X). 
\label{vmexcard1l1c12=01} \end{eqnarray} \noindent (Proof of Claim~1) The equality (\ref{vmexcard1l1c11=01}) is obvious from the definition of $p$. If $X+j \in \dom f$, (\ref{vmexcard1l1c12=01}) follows from (\ref{vmexcard1l1c11=01}) and $f_{p}(X) + f_{p}(Y) > f_{p}(X + j) + f_{p}(Y - j)$. If $X+j \not\in \dom f$, (\ref{vmexcard1l1c12=01}) follows from the fact that $f_{p}(Y - j) - f_{p}(Y) = -\varepsilon$ or $-\infty$ depending on whether $Y-j \in \dom f$ or not. \medskip In the following, we divide into cases, Case~1: $|Y|-|X| \ge 2$ and Case~2: $|Y|-|X| = 1$, and derive a contradiction in each case. We first consider Case~1. Since $|X| < |Y|$, the assumption (\ref{Fconnected<1}) implies $Y - j_{0} \in \dom f$ for some $j_0 \in Y \setminus X$. \medskip Claim 2: For $Y' = Y - j_{0}$ we have \begin{equation}\label{vmexcard1clm3} f_{p}(X) + f_{p}(Y') > f_{p}(X + j) + f_{p}(Y' - j) \qquad (j \in Y' \setminus X). \end{equation} \noindent (Proof of Claim 2) It suffices to consider the case where $X + j \in \dom f$. Then we have $f_{p}(X) = f_{p}(X + j)$ by (\ref{vmexcard1l1c11=01}). We also have \begin{align*} f_{p}(Y' - j) & = [ f_{p}(Y) + f_{p}(Y - j_{0} - j) ] - f_{p}(Y) \\ & \leq [ f_{p}(Y - j_{0}) + f_{p}(Y - j) ] - f_{p}(Y) \\ & < f_{p}(Y - j_{0}) = f_{p}(Y') \end{align*} by \MncavlocS and (\ref{vmexcard1l1c12=01}). Therefore, (\ref{vmexcard1clm3}) holds. \medskip Since $|Y'| = |Y|-1 > |X|$, (\ref{vmexcard1clm3}) implies $(X, Y') \in \cal D$. This contradicts the choice of $(X, Y)$, since $|X \bigtriangleup Y'| = |X \bigtriangleup Y| -1 $. Therefore, Case~1 cannot occur. We next consider Case~2. \medskip Claim 3: There exist $i_{0} \in X \setminus Y$ and $j_{0} \in Y \setminus X$ such that $Y + i_{0} - j_{0} \in \dom f$ and \begin{equation}\label{vmexcard1l1a1-2} f_{p}(Y + i_{0} - j_{0}) \geq f_{p}(Y + i - j) \qquad (i \in X \setminus Y, \ j \in Y \setminus X). \end{equation} \noindent (Proof of Claim 3) We first note that $|X\setminus Y| \ge 1$ holds. 
Indeed, if $|X\setminus Y|=0$, then $X=Y-i$ and $Y=X+i$ for the unique element $i$ of $Y \setminus X$, and $f(X) + f(Y) = f( X + i) + f( Y-i)$, a contradiction to $(X,Y) \in \mathcal{D}$. By assumption (\ref{Fconnected<2}), $Y + i - j \in \dom f$ for some $i \in X\setminus Y$ and $j \in Y \setminus X$. Any pair $(i,j)$ that maximizes $f_{p}(Y + i - j)$ over $i \in X \setminus Y$ and $j \in Y \setminus X$ serves as $(i_{0}, j_{0})$. \medskip Claim 4: For $Y' = Y + i_{0} - j_{0}$ we have (\ref{vmexcard1clm3}). \noindent (Proof of Claim 4) It suffices to consider the case where $X + j \in \dom f$. Then we have $f_{p}(X) = f_{p}(X + j)$ by (\ref{vmexcard1l1c11=01}). We also have \begin{align*} & f_{p}(Y' - j) = [ f_{p}(Y + i_{0} - j_{0} - j) + f_{p}(Y) ] - f_{p}(Y) \\ & \leq \max\{f_{p}(Y + i_{0} - j_{0}) + f_{p}(Y - j), f_{p}(Y + i_{0} - j) + f_{p}(Y - j_{0})\} - f_{p}(Y) \\ & \leq f_{p}(Y + i_{0} - j_{0}) + \max\{f_{p}(Y - j) - f_{p}(Y), f_{p}(Y - j_{0}) - f_{p}(Y)\} \\ & < f_{p}(Y + i_{0} - j_{0}) = f_{p}(Y') \end{align*} by $\MncavlocS$, (\ref{vmexcard1l1a1-2}), and (\ref{vmexcard1l1c12=01}). Therefore, (\ref{vmexcard1clm3}) holds. \medskip Since $|Y'| = |Y| > |X|$, (\ref{vmexcard1clm3}) implies $(X, Y') \in \cal D$. This contradicts the choice of $(X, Y)$, since $|X \bigtriangleup Y'| = |X \bigtriangleup Y| -2$. Therefore, Case~2 cannot occur either. Hence, $\mathcal{D}$ must be empty, which means that (P1$[\mathbb{B}]$) holds. \end{proof} \begin{lemmaM} \label{LMmnatexcequicard01} If $\dom f$ satisfies {\rm (\ref{Fconnected=})}, $\MncavlocS$ implies {\rm (P2$[\mathbb{B}]$)}. \end{lemmaM} \begin{proof} To prove (P2$[\mathbb{B}]$) by contradiction, we assume that there exists a pair $(X,Y)$ for which (\ref{mnatP2=01}) fails. 
That is, we assume that the set of such pairs \begin{align*} \mathcal{D} = \{(X, Y) \mid {} & X, Y \in \dom f,\ |X| = |Y|,\ \exists i_{*} \in X \setminus Y \ \mbox{s.t.} \\ & f(X) + f(Y) > f(X- i_{*} +j ) + f(Y+ i_{*}-j ) \mbox{ for all } j \in Y \setminus X \} \end{align*} is nonempty. Take a pair $(X,Y) \in \mathcal{D}$ with $|X \setminus Y|$ minimum, and fix $i_{*} \in X \setminus Y$ appearing in the definition of $\mathcal{D}$. We have $|X \setminus Y| = |Y \setminus X| \geq 2$ by $\MncavlocS$. For a fixed $\varepsilon > 0$, define $p \in \mathbb{R}\sp{N}$ by \begin{equation} \label{mnatlocToP2pjdef} \begin{array}{rcl} p_{j} & = & \left\{ \begin{array}{ll} f(X) - f(X- i_{*} +j) & (j \in Y \setminus X,\ X- i_{*} +j \in \dom f),\\ f(Y+ i_{*} -j) - f(Y) + \varepsilon \\ & \hspace{-15mm} (j \in Y \setminus X,\ X- i_{*} +j \not\in \dom f,\ Y+ i_{*} -j \in \dom f),\\ 0 & (\mbox{otherwise}). \end{array} \right. \end{array} \end{equation} \medskip Claim 1: \begin{eqnarray} f_{p}(X - i_{*} + j) & = & f_{p}(X) \qquad (j \in Y \setminus X,\ X - i_{*} + j \in \dom f), \label{vmexequiszl3-c1-1=01}\\ f_{p}(Y + i_{*} - j) & < & f_{p}(Y) \qquad (j \in Y \setminus X). \label{vmexequiszl3-c1-2=01} \end{eqnarray} \noindent (Proof of Claim~1) The equality (\ref{vmexequiszl3-c1-1=01}) is obvious from the definition of $p$. If $X-i_{*}+j \in \dom f$, (\ref{vmexequiszl3-c1-2=01}) follows from (\ref{vmexequiszl3-c1-1=01}) and $f_{p}(X) + f_{p}(Y) > f_{p}(X - i_{*} + j) + f_{p}(Y +i_{*} - j)$. If $X-i_{*}+j \not\in \dom f$, (\ref{vmexequiszl3-c1-2=01}) follows from the fact that $f_{p}(Y +i_{*} - j) - f_{p}(Y) = -\varepsilon$ or $-\infty$ depending on whether $Y+i_{*}-j \in \dom f$ or not. \medskip Claim 2: There exist $i_{0} \in (X \setminus Y) - i_{*}$ and $j_{0} \in Y \setminus X$ such that $Y + i_{0} - j_{0} \in \dom f$ and \begin{equation}\label{vmexequiszl3-a1=01} f_{p}(Y + i_{0} - j_{0}) \geq f_{p}(Y + i_{0} - j) \qquad (j \in Y \setminus X). 
\end{equation} \noindent (Proof of Claim 2) First, we show the existence of $i_{0} \in X\setminus Y$ and $j \in Y\setminus X$ such that $Y + i_{0} - j \in \dom f$ and $i_{0} \not= i_{*}$. By assumption (\ref{Fconnected=}), there exist $i_{1} \in X \setminus Y$ and $j_{1} \in Y \setminus X$ such that $Y'=Y + i_{1} - j_{1} \in \dom f$. If $i_{1} \not= i_{*}$, we are done with $(i_{0},j) = (i_{1},j_{1})$. Otherwise, again by (\ref{Fconnected=}), there exist $i_{2} \in X \setminus Y'$ and $j_{2} \in Y' \setminus X$ such that $Y'' = Y' + i_{2} - j_{2} \in \dom f$. By the local exchange property (L3$[\mathbb{B}]$) in (\ref{mnatconcavexc22loc}) for $Y$ and $Y''$ we obtain $Y + i_{2} - j_{1} \in \dom f$ or $Y + i_{2} - j_{2} \in \dom f$. Hence we can take $(i_{0},j) =(i_{2},j_{1})$ or $(i_{0},j) =(i_{2},j_{2})$, where $i_{2}$ is distinct from $i_{*}$. Next we choose the element $j_{0}$. By the choice of $i_{0}$, we have $f_{p}(Y+i_{0}-j) > -\infty$ for some $j \in Y \setminus X$. By letting $j_{0}$ be an element $j \in Y\setminus X$ that maximizes $f_{p}(Y+i_{0}-j)$, we obtain (\ref{vmexequiszl3-a1=01}). \medskip Claim 3: For $Y' = Y + i_{0} - j_{0}$ we have \begin{equation}\label{vmexequiszlem3-1=01} f_{p}(X) + f_{p}(Y') > f_{p}(X - i_{*} + j) + f_{p}(Y' + i_{*} - j) \qquad (j \in Y' \setminus X). \end{equation} \noindent (Proof of Claim 3) It suffices to consider the case where $X - i_{*} + j \in \dom f$. Then we have $f_{p}(X) = f_{p}(X - i_{*} + j)$ by (\ref{vmexequiszl3-c1-1=01}).
We also have \begin{align*} & f_{p}(Y'+ i_{*} -j) = [ f_{p}(Y+ i_{0} + i_{*} - j_{0} -j) + f_{p}(Y) ] - f_{p}(Y) \\ & \leq \max\{f_{p}(Y+ i_{0} - j_{0}) + f_{p}(Y+ i_{*} -j), f_{p}(Y+ i_{0} -j ) + f_{p}(Y+ i_{*} - j_{0} )\} - f_{p}(Y) \\ & \leq f_{p}(Y+ i_{0} - j_{0}) + \max\{f_{p}(Y+ i_{*}-j) - f_{p}(Y),\ f_{p}(Y+ i_{*} - j_{0} ) - f_{p}(Y)\} \\ & < f_{p}(Y+ i_{0} - j_{0}) = f_{p}(Y'), \end{align*} where the first inequality is due to (L3$[\mathbb{B}]$) of $\MncavlocS$, and the second and third are by (\ref{vmexequiszl3-a1=01}) and (\ref{vmexequiszl3-c1-2=01}). Hence follows (\ref{vmexequiszlem3-1=01}). \medskip Since $|X|=|Y'|$ and $i_{*} \in X \setminus Y'$, (\ref{vmexequiszlem3-1=01}) implies $(X, Y') \in \cal D$. This contradicts the choice of $(X, Y)$, since $|X \setminus Y'| =|X \setminus Y| -1$. Therefore, $\mathcal{D}$ must be empty, which means that (P2$[\mathbb{B}]$) holds. \end{proof} Next we derive (P3$[\mathbb{B}]$) from (P1$[\mathbb{B}]$) and (P2$[\mathbb{B}]$). \begin{lemmaM} \label{LMp12toP3} {\rm (P1$[\mathbb{B}]$)} \& {\rm (P2$[\mathbb{B}]$)} $\Longrightarrow$ {\rm (P3$[\mathbb{B}]$)}. \end{lemmaM} \begin{proof} To prove (P3$[\mathbb{B}]$) by contradiction, we assume that there exists a pair $(X,Y)$ for which the exchange inequality in (P3$[\mathbb{B}]$) fails. That is, we assume that the set of such pairs \[ \begin{array}{l} \mathcal{D} = \{(X, Y) \mid X, Y \in \dom f,\ |X| < |Y|,\ \exists i_{*} \in X \setminus Y \ \mbox{s.t.} \\ \phantom{\mathcal{D} = \{(X, Y) \mid \ } f(X) + f(Y) > f(X- i_{*} +j ) + f(Y+ i_{*}-j ) \mbox{ for all } j \in Y \setminus X \} \end{array} \] is nonempty. Take a pair $(X,Y) \in \mathcal{D}$ with $|X \bigtriangleup Y| = |X \setminus Y| + |Y \setminus X|$ minimum, and fix $i_{*} \in X \setminus Y$ appearing in the definition of $\mathcal{D}$. For a fixed $\varepsilon > 0$, define $p \in \mathbb{R}\sp{N}$ by (\ref{mnatlocToP2pjdef}).
Then we have (\ref{vmexequiszl3-c1-1=01}) and (\ref{vmexequiszl3-c1-2=01}), as in Claim 1 in the proof of Lemma \ref{LMmnatexcequicard01}. \medskip Claim 1: There exists $j_{0} \in Y \setminus X$ such that $Y - j_{0} \in \dom f$ and \begin{equation}\label{vmexequiszl3-a1A} f_{p}(Y - j_{0}) \geq f_{p}(Y - j) \qquad (j \in Y \setminus X). \end{equation} \noindent (Proof of Claim 1) Since $|X| < |Y|$, (P1$[\mathbb{B}]$) implies that there exists $j \in Y \setminus X$ such that $Y - j \in \dom f$. Any $j \in Y \setminus X$ that maximizes $f_{p}(Y - j)$ serves as $j_{0}$. \medskip Claim 2: For $Y' = Y - j_0$ we have \begin{equation}\label{vmexequiszlem3-1A} f_{p}(X) + f_{p}(Y') > f_{p}(X - i_{*} + j) + f_{p}(Y' + i_{*} - j) \qquad (j \in Y' \setminus X). \end{equation} \noindent (Proof of Claim 2) Since this inequality is obvious when $X -i_{*} + j \not\in \dom f$, we assume that $X - i_{*} + j \in \dom f$.
Then we have $f_{p}(X) = f_{p}(X - i_{*} + j)$ by (\ref{vmexequiszl3-c1-1=01}). We also have \begin{align*} & f_{p}(Y'+ i_{*} -j) = [ f_{p}(Y + i_{*} - j_{0} -j) + f_{p}(Y) ] - f_{p}(Y) \\ & \leq \max\{f_{p}(Y - j_{0}) + f_{p}(Y+ i_{*} -j), f_{p}(Y -j ) + f_{p}(Y+ i_{*} - j_{0} )\} - f_{p}(Y) \\ & \leq f_{p}(Y - j_{0}) + \max\{f_{p}(Y+ i_{*}-j) - f_{p}(Y),\ f_{p}(Y+ i_{*} - j_{0} ) - f_{p}(Y)\} \\ & < f_{p}(Y - j_{0}) = f_{p}(Y'), \end{align*} where the first inequality is due to (P1$[\mathbb{B}]$), and the second and third inequalities are by (\ref{vmexequiszl3-a1A}) and (\ref{vmexequiszl3-c1-2=01}). Hence follows (\ref{vmexequiszlem3-1A}). \medskip By $|X| < |Y|$ and $|Y'| = |Y|-1$ we have $|X| \leq |Y'|$, in which the possibility of equality is excluded. Indeed, if $|X| = |Y'|$, then (\ref{vmexequiszlem3-1A}) contradicts (P2$[\mathbb{B}]$) for $(X, Y', i_{*})$. Therefore, $|X|<|Y'|$ holds. Hence, we have $(X, Y') \in \cal D$ by (\ref{vmexequiszlem3-1A}), which is a contradiction to the choice of $(X, Y)$ since $|X \bigtriangleup Y'| \leq |X \bigtriangleup Y| -1$. Therefore, $\mathcal{D}$ must be empty, which means that (P3$[\mathbb{B}]$) holds. \end{proof} Finally we derive (P4$[\mathbb{B}]$) from (P1$[\mathbb{B}]$) and (P2$[\mathbb{B}]$). \begin{lemmaM} \label{LMp12toP4} {\rm (P1$[\mathbb{B}]$)} \& {\rm (P2$[\mathbb{B}]$)} $\Longrightarrow$ {\rm (P4$[\mathbb{B}]$)}. \end{lemmaM} \begin{proof} To prove (P4$[\mathbb{B}]$) by contradiction, we assume that there exists a pair $(X,Y)$ for which (\ref{mnatconcavexc2}) fails. That is, we assume that the set of such pairs \[ \begin{array}{l} \mathcal{D} = \{(X, Y) \mid X, Y \in \dom f,\ |X| > |Y|,\ \\ \phantom{\mathcal{D} = \{(X, Y) \mid \ } \exists i_{*} \in X \setminus Y \ \mbox{s.t.} \ f(X) + f(Y) > f(X- i_{*}) + f(Y+ i_{*}) \\ \phantom{\mathcal{D} = \{(X, Y) \mid \ } \mbox{and } f(X) + f(Y) > f(X- i_{*} +j ) + f(Y+ i_{*}-j ) \mbox{ for all } j \in Y \setminus X \} \end{array} \] is nonempty.
Take a pair $(X,Y) \in \mathcal{D}$ with $|X \bigtriangleup Y| = |X \setminus Y| + |Y \setminus X|$ minimum, and fix $i_{*} \in X \setminus Y$ appearing in the definition of $\mathcal{D}$. For a fixed $\varepsilon > 0$, define $p \in \mathbb{R}\sp{N}$ as follows. The component $p_{i_{*}}$ is defined by \begin{align*} & \begin{array}{rcl} p_{i_{*}} & = & \left\{ \begin{array}{ll} f(X) - f(X- i_{*}) & (X- i_{*} \in \dom f), \\ f(Y+ i_{*}) - f(Y) + \varepsilon & (X- i_{*} \not\in \dom f,\ Y+ i_{*} \in \dom f), \\ 0 & (X- i_{*} \not\in \dom f,\ Y+ i_{*} \not\in \dom f). \end{array} \right. \end{array} \end{align*} The component $p_j$ for each $j \in Y \setminus X$ is defined by \[ p_{j} = \left\{ \begin{array}{ll} f(X) - f(X- i_{*} +j) + p_{i_{*}} & (X- i_{*} +j \in \dom f), \\ f(Y+ i_{*} -j) - f(Y) + p_{i_{*}} + \varepsilon & (X- i_{*} +j \not\in \dom f,\ Y+ i_{*} -j \in \dom f),\\ 0 & (X- i_{*} +j \not\in \dom f,\ Y+ i_{*} -j \not\in \dom f). \end{array} \right. \] We set $p_j=0$ for all other components of $p$. \medskip Claim 1: \begin{eqnarray} f_{p}(X - i_{*} ) & = & f_{p}(X) \qquad (X - i_{*} \in \dom f), \label{vmexlocl3c13-b} \\ f_{p}(X - i_{*} + j) & = & f_{p}(X) \qquad (j \in Y \setminus X,\ X - i_{*} + j \in \dom f), \label{vmexlocl3c11-b} \\ f_{p}(Y + i_{*} ) & < & f_{p}(Y) , \label{vmexlocl3c14-b} \\ f_{p}(Y + i_{*} - j) & < & f_{p}(Y) \qquad (j \in Y \setminus X). \label{vmexlocl3c12-b} \end{eqnarray} \noindent (Proof of Claim~1) Similar to the proof of Claim 1 in the proof of Lemma \ref{LMmnatexcequicard01}. \medskip To write (\ref{vmexlocl3c13-b}) and (\ref{vmexlocl3c11-b}) in one formula, it is convenient to introduce a special symbol, say, $\circ$ to denote a null element such that $Z + \circ = Z$ for any $Z \subseteq N$. Then (\ref{vmexlocl3c13-b}) and (\ref{vmexlocl3c11-b}) together are expressed as \begin{eqnarray} f_{p}(X - i_{*} + j) & = & f_{p}(X) \qquad (j \in (Y \setminus X)\sp{\circ}, \ X - i_{*} + j \in \dom f), \label{vmexlocl3c113-b} \end{eqnarray} where, for any $Z \subseteq N$, ``$j \in Z\sp{\circ}$'' means that $j \in Z$ or $j = \circ$. Similarly, with the understanding that $Z - \circ = Z$ for any $Z \subseteq N$, (\ref{vmexlocl3c14-b}) and (\ref{vmexlocl3c12-b}) are expressed as \begin{eqnarray} f_{p}(Y + i_{*} - j) & < & f_{p}(Y) \qquad (j \in (Y \setminus X)\sp{\circ}). \label{vmexlocl3c124-b} \end{eqnarray} \medskip Claim 2: There exist $i_{0} \in (X \setminus Y) - i_{*}$ and $j_{0} \in (Y \setminus X)\sp{\circ}$ such that $Y + i_{0} - j_{0} \in \dom f$ and \begin{equation} \label{vmexlocl3a1-b} f_{p}(Y + i_{0} - j_{0}) \geq f_{p}(Y + i_{0} - j) \qquad ( j \in (Y \setminus X)\sp{\circ} ). \end{equation} \noindent (Proof of Claim 2) Since $|X| > |Y|$, (P1$[\mathbb{B}]$) implies the existence of $i_{0} \in X \setminus Y$ with $ f(X) + f(Y) \le f(X- i_{0} ) + f(Y + i_{0})$, where $i_{0} \ne i_{*}$ by $(X, Y) \in \mathcal{D}$. This inequality implies $Y+i_{0}-j \in \dom f$ with $j = \circ$. Any $j \in (Y \setminus X)\sp{\circ}$ that maximizes $f_{p}(Y + i_{0} - j)$ serves as $j_{0}$.
\medskip Claim 3: For $Y'= Y + i_{0} - j_{0}$ we have \begin{align} f_{p}(X) + f_{p}(Y') & > f_{p}(X - i_{*} + j) + f_{p}(Y' + i_{*} - j) \qquad (j \in (Y' \setminus X)\sp{\circ}). \label{vmexloc32-b} \end{align} \noindent (Proof of Claim 3) First note that $i_{*} \in X \setminus Y'$ by the choice of $Y'$. Since the inequality (\ref{vmexloc32-b}) is obvious when $X - i_{*} + j \not\in \dom f$, we assume that $X - i_{*} + j \in \dom f$. Then we have $f_{p}(X) = f_{p}(X - i_{*} + j)$ by (\ref{vmexlocl3c113-b}). We also have \begin{align*} & f_{p}(Y'+ i_{*} -j) = [ f_{p}(Y+ i_{0} + i_{*} - j_{0} -j) + f_{p}(Y) ] - f_{p}(Y) \\ & \leq \max\{f_{p}(Y+ i_{0} - j_{0}) + f_{p}(Y+ i_{*} -j), f_{p}(Y+ i_{0} -j ) + f_{p}(Y+ i_{*} - j_{0} )\} - f_{p}(Y) \\ & \leq f_{p}(Y+ i_{0} - j_{0}) + \max\{f_{p}(Y+ i_{*}-j) - f_{p}(Y),\ f_{p}(Y+ i_{*} - j_{0} ) - f_{p}(Y)\} \\ & < f_{p}(Y+ i_{0} - j_{0}) = f_{p}(Y'), \end{align*} where the first inequality is by (P1$[\mathbb{B}]$) or (P2$[\mathbb{B}]$), and the second and third inequalities follow from (\ref{vmexlocl3a1-b}) and (\ref{vmexlocl3c124-b}), respectively. Therefore, (\ref{vmexloc32-b}) holds. \medskip By $|X| > |Y|$ and $|Y'| \le |Y|+1$ we have $|X| \geq |Y'|$, in which the possibility of equality is excluded. Indeed, if $|X| = |Y'|$, then (\ref{vmexloc32-b}) contradicts (P2$[\mathbb{B}]$) for $(X, Y', i_{*})$. Therefore, $|X|>|Y'|$ holds. Hence, we have $(X, Y') \in \cal D$ by (\ref{vmexloc32-b}), which is a contradiction to the choice of $(X, Y)$, since $|X \bigtriangleup Y'| \leq |X \bigtriangleup Y| -1$. Therefore, $\mathcal{D}$ must be empty, which means that (P4$[\mathbb{B}]$) is satisfied.
\end{proof} \subsection{Proof of Theorem~\ref{THmnatcavlocexc01hered}} \label{SCproofmnatlocexc01hered} We prove that, when $\dom f$ is an \Mnat-convex family containing the empty set, $f$ is \Mnat-concave if and only if it satisfies the first two conditions (L1$[\mathbb{B}]$) and (L2$[\mathbb{B}]$) of the local exchange property $\MncavlocS$. By Theorem~\ref{THmnatcavlocexc01}, it suffices to show that (L2$[\mathbb{B}]$) implies (L3$[\mathbb{B}]$) under this assumption. To be specific, we show (\ref{mnatconcavexc22locAlpha}): $ \alpha_{12}+\alpha_{34} \leq \max\{\alpha_{13}+\alpha_{24},\alpha_{14}+\alpha_{23}\}$ in the proof of Proposition~\ref{PRmconcavlocexc01onlyif}, where $\alpha_{ij}=f(X+i+j)$. We may assume $\alpha_{12} > - \infty$ and $\alpha_{34} > - \infty$, since otherwise the inequality holds trivially. Then we have $\alpha_{i} =f(X+i) > - \infty$ for $i \in \{ 1,2,3,4 \}$ by the assumption on $\dom f$, whereas $\alpha_{ij} \in \Rminf$. By (L2$[\mathbb{B}]$) in (\ref{mnatconcavexc21loc}) we have $ \alpha_{12}+\alpha_{3} \leq \max\{\alpha_{13}+\alpha_{2},\alpha_{23}+\alpha_{1}\}$, where we may assume \begin{equation} \label{Mnatlocproofeqn3} \alpha_{12}+\alpha_{3} \leq \alpha_{13}+\alpha_{2} \end{equation} by symmetry $1 \leftrightarrow 2$. Consider the following three pairs of inequalities: \begin{eqnarray} \alpha_{34}+\alpha_{2} & \leq & \alpha_{24}+\alpha_{3}, \label{Mnatlocproofeqn4} \\ \alpha_{34}+\alpha_{2} & \leq & \alpha_{23}+\alpha_{4}; \label{Mnatlocproofeqn42} \\ \alpha_{12}+\alpha_{4} & \leq & \alpha_{14}+\alpha_{2}, \label{Mnatlocproofeqn5} \\ \alpha_{12}+\alpha_{4} & \leq & \alpha_{24}+\alpha_{1}; \label{Mnatlocproofeqn52} \\ \alpha_{34}+\alpha_{1} & \leq & \alpha_{13}+\alpha_{4}, \label{Mnatlocproofeqn6} \\ \alpha_{34}+\alpha_{1} & \leq & \alpha_{14}+\alpha_{3} . 
\label{Mnatlocproofeqn62} \end{eqnarray} The condition (L2$[\mathbb{B}]$) implies the following: (i) (\ref{Mnatlocproofeqn4}) or (\ref{Mnatlocproofeqn42}) (or both) holds,\footnote{We cannot assume (\ref{Mnatlocproofeqn4}) based on the apparent symmetry $3 \leftrightarrow 4$ in (\ref{Mnatlocproofeqn4}) and (\ref{Mnatlocproofeqn42}), because this symmetry is not present in (\ref{Mnatlocproofeqn3}).} (ii) (\ref{Mnatlocproofeqn5}) or (\ref{Mnatlocproofeqn52}) (or both) holds, and (iii) (\ref{Mnatlocproofeqn6}) or (\ref{Mnatlocproofeqn62}) (or both) holds. Hence, it suffices to consider the following four cases: Case 1: (\ref{Mnatlocproofeqn4}) holds; Case 2: (\ref{Mnatlocproofeqn42}) and (\ref{Mnatlocproofeqn5}) hold; Case 3: (\ref{Mnatlocproofeqn42}), (\ref{Mnatlocproofeqn52}), and (\ref{Mnatlocproofeqn6}) hold; Case 4: (\ref{Mnatlocproofeqn42}), (\ref{Mnatlocproofeqn52}), and (\ref{Mnatlocproofeqn62}) hold. In Case 1, the addition of (\ref{Mnatlocproofeqn3}) and (\ref{Mnatlocproofeqn4}) yields $\alpha_{12}+\alpha_{34} \leq \alpha_{13}+\alpha_{24}$, which implies (\ref{mnatconcavexc22locAlpha}). In Case 2, the addition of (\ref{Mnatlocproofeqn42}) and (\ref{Mnatlocproofeqn5}) yields $\alpha_{34} + \alpha_{12} \leq \alpha_{23}+\alpha_{14}$. In Case 3, the addition of (\ref{Mnatlocproofeqn52}) and (\ref{Mnatlocproofeqn6}) yields $\alpha_{12}+\alpha_{34} \leq \alpha_{24}+\alpha_{13}$. In Case 4, the addition of (\ref{Mnatlocproofeqn3}), (\ref{Mnatlocproofeqn42}), (\ref{Mnatlocproofeqn52}), and (\ref{Mnatlocproofeqn62}) yields \[ 2 (\alpha_{12}+\alpha_{34}) \leq \alpha_{13}+\alpha_{24}+\alpha_{14}+\alpha_{23} \leq 2 \max\{\alpha_{13}+\alpha_{24},\alpha_{14}+\alpha_{23}\}, \] which shows (\ref{mnatconcavexc22locAlpha}). This completes the proof of Theorem~\ref{THmnatcavlocexc01hered}.
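The four case analyses above are purely arithmetical, and can be sanity-checked numerically. The following Python sketch (ours) samples the ten quantities $\alpha_{1},\ldots,\alpha_{4},\alpha_{12},\ldots,\alpha_{34}$ at random, keeps only samples satisfying (\ref{Mnatlocproofeqn3}), (\ref{Mnatlocproofeqn42}), (\ref{Mnatlocproofeqn52}), and (\ref{Mnatlocproofeqn62}) (Case 4), and confirms the conclusion (\ref{mnatconcavexc22locAlpha}):

```python
import random

random.seed(0)
checked = 0
while checked < 1000:
    # sample the ten quantities alpha_i, alpha_ij at random
    a = {k: random.uniform(-1.0, 1.0)
         for k in ("1", "2", "3", "4", "12", "34", "13", "24", "14", "23")}
    # keep only samples satisfying the four Case 4 hypotheses
    if not (a["12"] + a["3"] <= a["13"] + a["2"] and
            a["34"] + a["2"] <= a["23"] + a["4"] and
            a["12"] + a["4"] <= a["24"] + a["1"] and
            a["34"] + a["1"] <= a["14"] + a["3"]):
        continue
    # the conclusion, with a small float tolerance
    assert a["12"] + a["34"] <= max(a["13"] + a["24"],
                                    a["14"] + a["23"]) + 1e-9
    checked += 1
print("Case 4 verified on", checked, "random samples")
```

Of course this is only a randomized check; the proof itself is the two-line addition of the four inequalities displayed above.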
\subsection{Proof of Theorem~\ref{THmnatcavP1hered01}} \label{SCproofmnatcavP1hered01} We prove that, when $\dom f$ contains the empty set, $f$ is \Mnat-concave if and only if it satisfies (P1$[\mathbb{B}]$). The ``only if'' part is already shown in Theorem~\ref{THmconcavcardexc01} (1). We prove the ``if'' part by means of a local exchange theorem, Theorem~\ref{THmnatcavlocexc01hered} in Section~\ref{SCexchange01loc}, which has already been proved in Section~\ref{SCproofmnatlocexc01hered}. The first two conditions (L1$[\mathbb{B}]$) and (L2$[\mathbb{B}]$) of the local exchange property $\MncavlocS$ are immediate consequences of (P1$[\mathbb{B}]$). In addition, $\dom f$ is an \Mnat-convex family, as shown below. Therefore, $f$ is \Mnat-concave by Theorem~\ref{THmnatcavlocexc01hered}. For \Mnat-convexity of $\dom f$, we show that $\mathcal{F}= \dom f$ satisfies the axioms for independent sets of a matroid: \hbox{(I-1)} $ \emptyset \in \mathcal{F}$, \ \hbox{(I-2)} $X \subseteq Y \in \mathcal{F} \ \Rightarrow \ X \in \mathcal{F}$, \ \hbox{(I-3)} $X, Y\in \mathcal{F}, \ |X| <|Y| \ \Rightarrow \ X + j \in \mathcal{F}$ for some $j \in Y \setminus X$. Here (I-1) holds by assumption and (I-3) is immediate from (P1$[\mathbb{B}]$). The second property (I-2) can be shown as follows. By condition (P1$[\mathbb{B}]$) we have: \begin{align} & X, Y \in \mathcal{F}, \ X \subsetneqq Y \ \Longrightarrow \ \mbox{there exists $j \in Y \setminus X$ such that $X + j, Y - j \in \mathcal{F}$,} \label{mnsetP1} \end{align} which implies, by Lemma~\ref{LMgenbox01} below, that \begin{align} & X, Y \in \mathcal{F}, \ X \subseteq Z \subseteq Y \ \Longrightarrow \ Z \in \mathcal{F}. \label{mnsetbox} \end{align} Since $\emptyset \subseteq X \subseteq Y$ and $\emptyset, Y \in \mathcal{F}$ in (I-2), we have $X \in \mathcal{F}$. This completes the proof of Theorem~\ref{THmnatcavP1hered01}.
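The implication from (\ref{mnsetP1}) to (\ref{mnsetbox}), stated as Lemma~\ref{LMgenbox01} below, is small enough to be verified exhaustively over all $2^{8}$ set families on a three-element ground set. A Python sketch (ours, with hypothetical function names):

```python
from itertools import combinations

def powerset(s):
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def has_P1_step(F):
    # (mnsetP1): X, Y in F with X a proper subset of Y  ==>
    #            some j in Y\X has both X+j and Y-j in F
    return all(any(X | {j} in F and Y - {j} in F for j in Y - X)
               for X in F for Y in F if X < Y)

def has_box(F):
    # (mnsetbox): X, Y in F and X <= Z <= Y  ==>  Z in F
    return all(Z in F
               for X in F for Y in F if X <= Y
               for Z in powerset(Y) if X <= Z)

N3 = frozenset(range(3))
all_sets = powerset(N3)
for bits in range(1 << len(all_sets)):          # every family on a 3-set
    F = {S for k, S in enumerate(all_sets) if bits >> k & 1}
    if has_P1_step(F):
        assert has_box(F)                       # the Lemma's implication
print("checked", 1 << len(all_sets), "families")
```

The converse fails, of course: $\{\emptyset, \{0,1\}\}$ violates (\ref{mnsetP1}) and (\ref{mnsetbox}) alike, while a family such as $\{\emptyset,\{0\},\{0,1\}\}$ satisfies (\ref{mnsetbox}) without any exchange structure being imposed.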
\begin{lemmaM} \label{LMgenbox01} If a set family $\mathcal{F}$ satisfies {\rm (\ref{mnsetP1})}, then it satisfies {\rm (\ref{mnsetbox})}. \end{lemmaM} \begin{proof} We show (\ref{mnsetbox}) by induction on $|Y \setminus X|$. (\ref{mnsetbox}) is trivially true if $|Y \setminus X| \leq 1$. Assume $|Y \setminus X| \geq 2$, take $j \in Y \setminus X$ in (\ref{mnsetP1}), and consider $Z$ satisfying $X \subsetneqq Z \subsetneqq Y$. If $j \in Z$, then $X + j \subseteq Z \subseteq Y$ with $|Y \setminus (X+j)| = |Y \setminus X| -1$. If $j \not\in Z$, then $X \subseteq Z \subseteq Y-j$ with $|(Y-j) \setminus X| = |Y \setminus X| -1$. In either case we obtain $Z \in \mathcal{F}$ by the induction hypothesis. \end{proof} \section{M-concave Set Functions (Valuated Matroids)} \label{SCmcavsetfn} \subsection{Definition} Let $f: 2\sp{N} \to \Rminf$ be a real-valued set function on $N = \{ 1,2,\ldots, n \}$ and $\mathcal{B} = \dom f$ be the effective domain of $f$. We say that a function $f$ is a {\em valuated matroid} (or {\em matroid valuation}) if, for any $X, Y \in \mathcal{B}$ and $i \in X \setminus Y$, there exists some $j \in Y \setminus X$ such that $X - i +j \in \mathcal{B}$, $ Y + i -j \in \mathcal{B}$ and \begin{equation} \label{valmatexc1} f( X) + f( Y ) \leq f( X - i + j) + f( Y + i -j). \end{equation} This property is referred to as the {\em exchange property}. A valuated matroid is also called an {\em M-concave set function}. In this paper we use this terminology to emphasize its concavity aspects. An M-concave function can also be defined without explicit reference to its effective domain by considering the following expression of the exchange property: \begin{description} \item[\McavSb] For any $X, Y \subseteq N$ and $i \in X \setminus Y$, we have \begin{align} f( X) + f( Y ) &\leq \max_{j \in Y \setminus X} \{ f( X - i + j) + f( Y + i -j) \} .
\label{valmatexc2} \end{align} \end{description} The effective domain of an M-concave function is equipped with a nice combinatorial structure. Let $\mathcal{B}$ denote the effective domain of an M-concave function $f$. As a consequence of the exchange property $\McavS$ of function $f$, the set family $\mathcal{B}$ satisfies the following exchange property: \begin{description} \item[\BvexSb] For any $X, Y \in \mathcal{B}$ and $i \in X \setminus Y$, there exists some $j \in Y \setminus X$ such that $X - i +j \in \mathcal{B}$ and $ Y + i -j \in \mathcal{B}$. \end{description} This means that $\mathcal{B}$ forms the family of bases of a matroid. We often refer to a set family $\mathcal{B}$ as an {\em M-convex family} if it is a nonempty family and satisfies the exchange property $\BvexS$. Therefore, an M-convex family is synonymous with the base family of a matroid. A set family $\mathcal{B}$ satisfying $\BvexS$ consists of equi-cardinal subsets, that is, \begin{align} X, Y \in \mathcal{B} \ \Longrightarrow \ |X| = |Y| \label{msetequicard} \end{align} holds. In view of its importance in our context, we state this fact as a proposition with a formal proof, although this is well known in matroid theory. \begin{propositionM} \label{PRmsetequicard01} An M-convex family consists of equi-cardinal subsets. \end{propositionM} \begin{proof} We show (\ref{msetequicard}) by induction on $|X \bigtriangleup Y| = |X \setminus Y| + |Y \setminus X|$. If $|X \bigtriangleup Y| = 0$, (\ref{msetequicard}) is trivially true. If $|X \bigtriangleup Y| \geq 1$, we may assume $X \setminus Y \not= \emptyset$. Take any $i \in X \setminus Y$. By $\BvexS$, $Y + i - j \in \mathcal{B}$ for some $j \in Y \setminus X$. For $Y' = Y + i - j $ we have $|X \bigtriangleup Y'| < |X \bigtriangleup Y|$, and hence $|X| = |Y'|$ by the induction hypothesis. Since $|Y| = |Y'|$, this shows $|X| = |Y|$. 
\end{proof} \begin{remark} \rm \label{RMvalmat} The concept of valuated matroid was introduced by Dress--Wenzel \cite{DW90=valmat,DW92=valmat}. The subsequent development leading to discrete convex analysis is expounded in Murota \cite[Chapter 5]{Mspr2000=valmat}. \finbox \end{remark} \subsection{Relation between M- and \Mnat-concave functions} \label{SCrelmmncavsetfn} We show that, while M-concave functions are a special case of \Mnat-concave functions, they are in fact equivalent concepts in the sense to be formulated in Proposition~\ref{PRmnatequicardvalmat}. First, M-concave functions form a subclass of \Mnat-concave functions with equi-cardinal effective domains. \begin{propositionM} \label{PRmcav=mnatcav+equicard} A set function $f$ is M-concave if and only if it is an \Mnat-concave function and $|X| = |Y|$ for all $X, Y \in \dom f$. \end{propositionM} \begin{proof} As is already noted, the effective domain of an M-concave function consists of equi-cardinal sets (Proposition~\ref{PRmsetequicard01}). For a function $f$ with equi-cardinal $\dom f$, $\MncavS$ is equivalent to $\McavS$ since (\ref{mnatcav1}) cannot happen. \end{proof} Second, we discuss the essential equivalence of M- and \Mnat-concave functions. With a function $f: 2^{N} \to \Rminf$ we associate a function $\tilde{f}$ on an equi-cardinal family on a larger set $\tilde{N}$. Denote by $r$ and $r'$ the maximum and minimum, respectively, of $|X|$ for $X \in \dom f$. Let $s \geq r-r'$ and $S = \{ n+1,n+2,\ldots, n+s \}$. We enlarge the underlying set to $\tilde{N} = N \cup S = \{ 1,2,\ldots, \tilde n \}$, where $\tilde n =n+s$, and define $\tilde{f}: 2^{\tilde N} \to \Rminf$ by \begin{align} \label{assocMdef} \tilde{f}(Z) = \left\{ \begin{array}{ll} f(Z \cap N) & (|Z| = r) , \\ -\infty & (\mbox{otherwise}) , \\ \end{array} \right. \end{align} for which $\dom \tilde{f}$ is an equi-cardinal family. For $X \subseteq N$ and $U \subseteq S$, we have $\tilde{f}(X \cup U) = f(X)$ if $|X|+|U|=r$. 
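To make the construction (\ref{assocMdef}) concrete, the following Python sketch (an illustration of ours; the encoding of $f$ as a dictionary over frozensets is an assumption, not part of the text) builds $\tilde{f}$ from a function $f$ whose effective domain is not equi-cardinal:

```python
NEG_INF = float("-inf")

def lift(f, N, s):
    """Associated function f~ on the enlarged set N~ = N ∪ S, following
    (assocMdef): f~(Z) = f(Z ∩ N) if |Z| = r, and -inf otherwise."""
    r = max(len(X) for X, v in f.items() if v > NEG_INF)  # r = max |X| over dom f
    S = frozenset(range(max(N) + 1, max(N) + 1 + s))      # s fresh elements

    def f_tilde(Z):
        Z = frozenset(Z)
        if len(Z) != r:
            return NEG_INF
        return f.get(Z & N, NEG_INF)

    return f_tilde, N | S

# f with dom f = all subsets of {1,2,3} of size <= 2, so r = 2, r' = 0, s = 2
N = frozenset({1, 2, 3})
f = {frozenset(): 0, frozenset({1}): 0,
     frozenset({2}): 1, frozenset({3}): 1,
     frozenset({1, 2}): 1, frozenset({1, 3}): 1, frozenset({2, 3}): 1}
f_tilde, N_tilde = lift(f, N, s=2)   # here S = {4, 5}
print(f_tilde({4, 5}))      # 0    (= f(empty set))
print(f_tilde({2, 4}))      # 1    (= f({2}))
print(f_tilde({1, 2, 3}))   # -inf (cardinality differs from r)
```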
Therefore, if we want to maximize $f$, for example, we may maximize the associated function $\tilde{f}$ to obtain an optimal solution for $f$. We illustrate the construction (\ref{assocMdef}) by a simple example. \begin{example} \rm \label{EXmnatcvU32toM} Consider a function $f$ defined on $N=\{ 1,2,3 \}$ as $f(\emptyset)=f(\{ 1 \})=0$, $f(\{ 2 \})=f(\{ 3 \})=f(\{ 1,2 \})=f(\{ 1,3 \})=f(\{ 2,3 \})=1$, and $f(\{ 1,2,3 \})=-\infty$. We have $r=2$ and $r'=0$, and hence we can take $s =2$, $S = \{ 4,5 \}$, and $\tilde{N} = \{ 1,2,3,4,5 \}$. The corresponding function $\tilde{f}$ is given by \begin{align*} & \tilde{f}(\{ 4,5 \})=\tilde{f}(\{ 1,k \})=0, \quad \tilde{f}(\{ 2,k \}) =\tilde{f}(\{ 3,k \}) =1 \quad (k=4,5), \\ & \tilde{f}(\{ 1,2 \}) =\tilde{f}(\{ 1,3 \}) =\tilde{f}(\{ 2,3 \}) =1, \quad \tilde{f}(\{ 1,2,3 \})=-\infty. \end{align*} This function $f$ is \Mnat-concave, satisfying the condition $\MncavS$, while the corresponding function $\tilde{f}$ is M-concave, satisfying the condition $\McavS$. \finbox \end{example} \begin{propositionM}[\cite{Mmultexcstr18=valmat}] \label{PRmnatequicardvalmat} A set function $f$ is \Mnat-concave if and only if $\tilde{f}$ is M-concave. \end{propositionM} \begin{proof} The exchange property \McavS for $\tilde{f}$ amounts to the following, where $X, Y \in \dom f$ and $U, V \subseteq S$ with $|X|+|U|=|Y|+|V|=r$. \begin{itemize} \item For any $i \in X \setminus Y$ there exists $j \in Y \setminus X$ with \begin{align} & \tilde f( X \cup U) +\tilde f( Y \cup V ) \leq \tilde f( (X - i + j ) \cup U ) + \tilde f( (Y + i -j) \cup V ), \label{assocM11} \end{align} or there exists $j \in V \setminus U$ with \begin{align} & \tilde f( X \cup U) +\tilde f( Y \cup V ) \leq \tilde f( (X - i) \cup (U +j) ) + \tilde f( (Y + i) \cup (V-j) ). 
\label{assocM12} \end{align} \item For any $i \in U \setminus V$ there exists $j \in Y \setminus X$ with \begin{align} & \tilde f( X \cup U) +\tilde f( Y \cup V ) \leq \tilde f( (X + j ) \cup (U -i) ) + \tilde f( (Y -j) \cup (V+i) ), \label{assocM21} \end{align} or there exists $j \in V \setminus U$ with \begin{align} & \tilde f( X \cup U) +\tilde f( Y \cup V ) \leq \tilde f( X \cup (U -i +j) ) + \tilde f( Y \cup (V+i-j) ). \label{assocM22} \end{align} \end{itemize} Suppose that $f$ is \Mnat-concave. For any $i \in X \setminus Y$ we have (\ref{mnatcav1}) or (\ref{mnatcav2}). In the case of (\ref{mnatcav2}) we obtain (\ref{assocM11}). In the case of (\ref{mnatcav1}) we obtain (\ref{assocM12}) for any $j \in V \setminus U$, if $V \setminus U$ is nonempty. If $V \setminus U$ is empty, then $|X| \leq |Y|$ and we have (\ref{assocM11}) by (P2$[\mathbb{B}]$) and (P3$[\mathbb{B}]$). Next, take any $i \in U \setminus V$ (when $U \setminus V \not= \emptyset$). If $V \setminus U$ is nonempty, (\ref{assocM22}) holds for any $j \in V \setminus U$. If $V \setminus U$ is empty, we have $|U| > |V|$ and hence $|X| < |Y|$. Then (P1$[\mathbb{B}]$) shows (\ref{assocM21}). Thus we have shown the ``only-if'' part. The converse (``if'' part) is also true, since (\ref{mnatcav1}) follows from (\ref{assocM12}), and (\ref{mnatcav2}) from (\ref{assocM11}). \end{proof} \subsection{Exchange properties characterizing M-concave functions} \label{SCmexch01} The local exchange property for M-concave set functions reads as follows. \begin{description} \item[\McavlocSb] For any $X, Y \subseteq N$ with $|X \setminus Y | = 2$, there exist $i \in X \setminus Y$ and $j \in Y \setminus X$ such that \begin{equation} \label{valmatexc1loc} f( X) + f( Y ) \leq f( X - i + j) + f( Y + i -j). \end{equation} \end{description} \begin{theorem}\label{THmcavlocexc01} A set function $f: 2\sp{N} \to \Rminf$ is M-concave if and only if $\dom f$ is a matroid basis family (an M-convex family) and $\McavlocS$ is satisfied. 
\end{theorem} \begin{proof} By Proposition~\ref{PRmcav=mnatcav+equicard}, an M-concave function is precisely an \Mnat-concave function with an equi-cardinal effective domain. We use Theorem~\ref{THmnatcavlocexc01} that characterizes \Mnat-concavity by the local exchange property $\MncavlocS$. If $\dom f$ is equi-cardinal, the first two conditions (L1$[\mathbb{B}]$) and (L2$[\mathbb{B}]$) in $\MncavlocS$ are satisfied trivially, since the left-hand sides of (\ref{mnatconcavexc20loc}) and (\ref{mnatconcavexc21loc}) are always equal to $-\infty$. The third condition (L3$[\mathbb{B}]$) in $\MncavlocS$ is equivalent to $\McavlocS$. \end{proof} \begin{remark} \rm \label{RMmlocdomcond} In Theorem~\ref{THmcavlocexc01}, the assumption on $\dom f$ is indispensable. For example, let $N=\{ 1,2,\ldots, 6 \}$ and define $f(\{ 1,2,3 \}) = f(\{ 4,5,6 \})=0$, and $f(X)=-\infty$ for $X \not= \{ 1,2,3 \}, \{ 4,5,6 \}$. This function $f$ is not M-concave, since $\dom f = \{ \{ 1,2,3 \}, \{ 4,5,6 \} \}$ is not an M-convex family. However, $f$ satisfies the condition \McavlocS in a trivial manner, since the left-hand side of \eqref{valmatexc1loc} is equal to $-\infty$ whenever $|X \setminus Y | = 2$. \finbox \end{remark} We consider another (seemingly) weaker exchange property: \begin{description} \item[\McavwSb] For any distinct $X, Y \subseteq N$, there exist $i \in X \setminus Y$ and $j \in Y \setminus X$ that satisfy \eqref{valmatexc1loc}. \end{description} If $f$ has this property, its effective domain $\mathcal{B} = \dom f$ satisfies \begin{description} \item[\BvexwSb] For any distinct $X, Y \in \mathcal{B}$, there exist $i \in X \setminus Y$ and $j \in Y \setminus X$ such that $X - i +j \in \mathcal{B}$ and $ Y + i -j \in \mathcal{B}$. \end{description} The seemingly weaker condition $\McavwS$ is, in fact, equivalent to $\McavS$. \begin{theorem}\label{THmcavexcweak01} A set function $f: 2\sp{N} \to \Rminf$ is M-concave if and only if \McavwS is satisfied. 
\end{theorem} \begin{proof} The implication ``$\McavS \Rightarrow \McavwS$'' is obvious. To prove the converse, assume $\McavwS$ for $f$. Then $\dom f$ has the exchange property $\BvexwS$. It is known \cite[Theorem 2.3.14]{Mspr2000=valmat} that $\BvexwS$ is equivalent to $\BvexS$.\footnote{$\BvexwS$ and $\BvexS$ here correspond, respectively, to (BM$_{\pm\rm w}$) and (BM$_{\pm}$) in \cite{Mspr2000=valmat}.} Then the claim follows from Theorem~\ref{THmcavlocexc01}. \end{proof} \begin{remark} \rm \label{RMmlocweakcond} Theorem~\ref{THmcavlocexc01} is due to Dress--Wenzel \cite{DWperf92=valmat} and Murota \cite{Mmax97=valmat}, and Theorem~\ref{THmcavexcweak01} is due to Murota \cite{Mmax97=valmat}. See also \cite[Theorem 5.2.25]{Mspr2000=valmat}, where the exchange properties \McavlocS and \McavwS are called (VM$_{\rm loc}$) and (VM$_{\rm w}$), respectively. \finbox \end{remark} \section{Multiple Exchange Properties} \label{SCexchange01mult} \subsection{Theorems of multiple exchange properties} As a generalization of the exchange property \MncavS for \Mnat-concave functions we may conceive two versions of {\em multiple exchange property}: \begin{description} \item[\MncavmSb] For any $X, Y \subseteq N$ and $I \subseteq X \setminus Y$, there exists $J \subseteq Y \setminus X$ such that \begin{align} f( X) + f( Y ) \leq f((X \setminus I) \cup J) +f((Y \setminus J) \cup I) , \label{mnatconcavexcmult} \end{align} \item[\MncavmsSb] For any $X, Y \subseteq N$ and $I \subseteq X \setminus Y$, there exists $J \subseteq Y \setminus X$ with $|J| \leq |I|$ and (\ref{mnatconcavexcmult}), \end{description} where the latter, requiring the cardinality condition $|J| \leq |I|$ on $J$, is a stronger property than the former. Thus the stronger form $\MncavmsS$ implies the weaker form $\MncavmS$. 
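On small ground sets the stronger multiple exchange property can be tested exhaustively. The following Python sketch (ours; the dictionary encoding of $f$ is an assumption) searches, for every $X$, $Y$, and $I \subseteq X \setminus Y$, for a set $J \subseteq Y \setminus X$ with $|J| \leq |I|$ satisfying (\ref{mnatconcavexcmult}):

```python
from itertools import chain, combinations

NEG_INF = float("-inf")

def subsets(s):
    """All subsets of s as frozensets."""
    s = list(s)
    return (frozenset(c) for c in
            chain.from_iterable(combinations(s, k) for k in range(len(s) + 1)))

def has_strong_multiple_exchange(f):
    """Exhaustive test: for all X, Y in dom f and I in X \\ Y, some J in Y \\ X
    with |J| <= |I| satisfies f(X)+f(Y) <= f((X\\I) + J) + f((Y\\J) + I)."""
    def g(Z):
        return f.get(frozenset(Z), NEG_INF)
    dom = [X for X, v in f.items() if v > NEG_INF]
    for X in dom:
        for Y in dom:
            for I in subsets(X - Y):
                if not any(g(X) + g(Y) <= g((X - I) | J) + g((Y - J) | I)
                           for J in subsets(Y - X) if len(J) <= len(I)):
                    return False
    return True

# An M-natural-concave function on {1,2,3} (values as in the earlier example)
f = {frozenset(): 0, frozenset({1}): 0,
     frozenset({2}): 1, frozenset({3}): 1,
     frozenset({1, 2}): 1, frozenset({1, 3}): 1, frozenset({2, 3}): 1}
print(has_strong_multiple_exchange(f))      # True

# A function whose effective domain is not an M-convex family fails the test
f_bad = {frozenset({1, 2, 3}): 0, frozenset({4, 5, 6}): 0}
print(has_strong_multiple_exchange(f_bad))  # False
```

The first function is the \Mnat-concave function of Example~\ref{EXmnatcvU32toM} and passes the test, while the second, whose effective domain is not an M-convex family (cf.\ Remark~\ref{RMmlocdomcond}), fails it.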
The subscript ``m'' stands for ``multiple'' and ``s'' for ``stronger.'' Obviously, the (ordinary) exchange property \MncavS follows from the stronger form \MncavmsS as its special case with $|I|=1$, but it does not immediately follow from the weaker form $\MncavmS$. These three conditions are, in fact, equivalent, as is stated in the following theorem. \begin{theorem} \label{THmultexchmnat} For a function $f: 2\sp{N} \to \Rminf$ with $\dom f \not= \emptyset$, the three conditions $\MncavS$, $\MncavmS$, and $\MncavmsS$ are pairwise equivalent. Therefore, every M$\sp{\natural}$-concave set function has the stronger multiple exchange property $\MncavmsS$. \end{theorem} \begin{proof} The proof is given in Section~\ref{SCproofmnatmult01}. It is based on the Fenchel-type duality theorem \cite[Theorem~8.21]{Mdcasiam=valmat}. \end{proof} For M-concave functions, the multiple exchange property takes the following form, since the effective domain is equi-cardinal. \begin{description} \item[\McavmSb] For any $X, Y \subseteq N$ and $I \subseteq X \setminus Y$, there exists $J \subseteq Y \setminus X$ with $|J| = |I|$ and (\ref{mnatconcavexcmult}). \end{description} \begin{theorem} \label{THmultexchvalmat} Every M-concave function (valuated matroid) has the multiple exchange property $\McavmS$. \end{theorem} \begin{proof} This follows from the implication ``$\MncavS \Rightarrow \MncavmS$'' in Theorem~\ref{THmultexchmnat}, using Proposition~\ref{PRmcav=mnatcav+equicard}. See the proof of Lemma \ref{LMmexc0m} in Section~\ref{SCproofmnatmult01} for details. \end{proof} \begin{remark} \rm \label{RMmultexchbib} Theorem~\ref{THmultexchvalmat} and the equivalence of $\MncavS$ and $\MncavmS$ in Theorem~\ref{THmultexchmnat} are due to Murota \cite{Mmultexc18=valmat}, whereas the equivalence of $\MncavS$ and the stronger version $\MncavmsS$ is established in Murota \cite{Mmultexcstr18=valmat}. 
\finbox \end{remark} \begin{remark} \rm \label{RMmultBexc} Theorem~\ref{THmultexchvalmat} implies a classical result in matroid theory (cf., Kung \cite{Kun86b=valmat}, Schrijver \cite[Section 39.9a]{Sch03=valmat}) that the base family $\mathcal{B}$ of a matroid has the multiple exchange property: \begin{description} \item[\BvexmSb] For any $X, Y \in \mathcal{B}$ and $I \subseteq X \setminus Y$, there exists $J \subseteq Y \setminus X$ with $|J| = |I|$ such that $(X \setminus I) \cup J \in \mathcal{B}$ and $(Y \setminus J) \cup I \in \mathcal{B}$. \end{description} It follows from Theorem~\ref{THmultexchmnat} that a nonempty family $\mathcal{F} \subseteq 2\sp{N}$ satisfies \BnvexS if and only if it satisfies the multiple exchange property: \begin{description} \item[\BnvexmSb] For any $X, Y \in \mathcal{F}$ and $I \subseteq X \setminus Y$, there exists $J \subseteq Y \setminus X$ such that $(X \setminus I) \cup J \in \mathcal{F}$ and $(Y \setminus J) \cup I \in \mathcal{F}$ \end{description} as well as its stronger form with an additional condition $|J| \leq |I|$ on $J$. Therefore, every g-matroid has this multiple exchange property. \finbox \end{remark} \begin{remark} \rm \label{RMmulteco} The multiple exchange property $\MncavmS$ here is the same as the ``strong no complementarities property (SNC)'' introduced by Gul--Stacchetti \cite{GS99=valmat}, where it is shown that (SNC) implies the gross substitutes property (GS) of Kelso--Crawford \cite{KC82=valmat}. By a result of Fujishige--Yang \cite{FY03gs=valmat}, on the other hand, (GS) is equivalent to $\MncavS$. Therefore, Theorem~\ref{THmultexchmnat} above reveals that (SNC) is equivalent to (GS). \finbox \end{remark} \subsection{Proof of Theorem~\ref{THmultexchmnat}} \label{SCproofmnatmult01} We prove Theorem~\ref{THmultexchmnat} about multiple exchange properties. Our proof first shows the equivalence of $\MncavS$ and $\MncavmS$ in Lemmas \ref{LMmnatexc0m} and \ref{LMmnatexcm0}. 
Using this we further show the implication ``$\MncavS \Rightarrow \MncavmsS$'' in Lemma \ref{LMmnatexc0ms}. The converse ``$\MncavS \Leftarrow \MncavmsS$'' is obvious, as already mentioned before Theorem \ref{THmultexchmnat}. \begin{lemmaM} \label{LMmnatexc0m} $\MncavmS$ implies $\MncavS$. \end{lemmaM} \begin{proof} First, it can be shown that $\dom f$ satisfies $\BnvexS$; see \cite[Section 5.1]{Mmultexc18=valmat} for the details. Then the proof is reduced, by Theorem~\ref{THmnatcavlocexc01}, to showing the local exchange property $\MncavlocS$ in Section~\ref{SCexchange01loc}. The first two conditions (L1$[\mathbb{B}]$) and (L2$[\mathbb{B}]$) of $\MncavlocS$ are immediate from $\MncavmS$, and the third condition (L3$[\mathbb{B}]$) can be shown similarly to the proof of Theorem~\ref{THmnatcavlocexc01hered}; see \cite[Section 5.2]{Mmultexc18=valmat} for the details. \end{proof} \begin{lemmaM} \label{LMmnatexcm0} $\MncavS$ implies $\MncavmS$. \end{lemmaM} \begin{proof} Let $f: 2^{N} \to \Rminf$ be an M$^{\natural}$-concave function, which, by definition, satisfies the exchange property $\MncavS$. Let $X, Y \in \dom f$ and $I \subseteq X \setminus Y$. With the notation \begin{align} &C = X \cap Y, \qquad X_{0} = X \setminus Y = X \setminus C, \qquad Y_{0} = Y \setminus X = Y \setminus C , \label{mexcCX0Y0def} \\ & f_{1}(J) = f((X \setminus I) \cup J) = f( (X_{0} \setminus I) \cup C \cup J) \qquad (J \subseteq Y_{0}), \label{mexcf1def} \\ & f_{2}(J) = f((Y \setminus J) \cup I) = f( I \cup C \cup (Y_{0} \setminus J) ) \qquad (J \subseteq Y_{0}), \label{mexcf2def} \end{align} the multiple exchange property \MncavmS is rewritten as \begin{align} f( X) + f( Y ) \leq \max_{J \subseteq Y_{0}} \{ f_{1}(J) + f_{2}(J) \}. \label{mnatconcavexcmult3} \end{align} Both $f_{1}$ and $f_{2}$ are M$^{\natural}$-concave set functions on $Y_{0}$, where the nonemptiness of $\dom f_{1}$ and $\dom f_{2}$ can be shown by induction on $|I|$ using $\BnvexS$. 
For $i=1,2$, let $g_{i}$ be the (convex) conjugate function of $f_{i}$ defined as \begin{align*} g_{1}(q) &= \max_{J \subseteq Y_{0}} \{ f_{1}(J) - q(J) \} \qquad (q \in \RR^{Y_{0}}), \\ g_{2}(q) &= \max_{J \subseteq Y_{0}} \{ f_{2}(J) - q(J) \} \qquad (q \in \RR^{Y_{0}}), \end{align*} where $q(J) = \sum_{j \in J} q_{j}$. The Fenchel-type duality theorem \cite[Theorem~8.21(1)]{Mdcasiam=valmat} shows\footnote{The assumption $\dom g_{1} \cap \dom g_{2} \not= \emptyset$ in Murota \cite[Theorem 8.21 (1)]{Mdcasiam=valmat} is satisfied, since $\dom g_{1} = \dom g_{2} = \RR^{Y_{0}}$.} \begin{equation} \label{mnatconcavexcmult3fenc} \max_{J \subseteq Y_{0}} \{ f_{1}(J) + f_{2}(J) \} = \inf_{q \in \RR^{Y_{0}}} \{ g_{1}(q) + g_{2}(-q) \}, \end{equation} where the maximum on the left-hand side is defined to be $-\infty$ if $\dom f_{1} \cap \dom f_{2} = \emptyset$. Combining \eqref{mnatconcavexcmult3fenc} with Lemma \ref{LMg1qg2q} below, we obtain \[ \max_{J \subseteq Y_{0}} \{ f_{1}(J) + f_{2}(J) \} = \inf_{q \in \RR^{Y_{0}}} \{ g_{1}(q) + g_{2}(-q) \} \geq f( X) + f( Y ), \] which shows the desired inequality (\ref{mnatconcavexcmult3}) as well as the finiteness of the value of \eqref{mnatconcavexcmult3fenc}. \end{proof} \begin{lemmaM} \label{LMg1qg2q} For any $q \in \RR^{Y_{0}}$, we have $g_{1}(q) + g_{2}(-q) \geq f( X) + f( Y )$. \end{lemmaM} \begin{proof} Let $g$ be the (convex) conjugate function of $f$, i.e., \begin{equation} \label{gpdef} g(p) = \max_{Z \subseteq N} \{ f(Z) - p(Z) \} \qquad (p \in \RR^{N}). 
\end{equation} \begin{figure}\begin{center} \includegraphics[height=30mm]{fg2mEXCp1p2ver2.eps} \quad \includegraphics[height=30mm]{fg2mEXCmaxminpver2.eps} \vspace{0.5\baselineskip} \caption{Vectors $p^{(1)}$, $p^{(2)}$, $p^{(1)} \vee p^{(2)}$, and $p^{(1)} \wedge p^{(2)}$} \label{FGp1p2def} \end{center}\end{figure} For a vector $q \in \RR^{Y_{0}}$ we define $p^{(1)}, p^{(2)} \in \RR^{N}$ by \begin{align*} p^{(1)}_{i} &= p^{(2)}_{i} = \left\{ \begin{array}{ll} q_{i} & (i \in Y_{0}) , \\ - M & (i \in C ), \\ + M & (i \in N \setminus (X \cup Y) ) , \\ \end{array} \right. \quad p^{(1)}_{i} = - p^{(2)}_{i} = \left\{ \begin{array}{ll} - M & (i \in X_{0} \setminus I ), \\ + M & (i \in I ), \\ \end{array} \right. \end{align*} where $M$ is a sufficiently large positive number (see Fig.~\ref{FGp1p2def}). Then the maximizer $Z$ of $g(p)$ in (\ref{gpdef}) for $p = p^{(1)}$ must include $(X_{0} \setminus I) \cup C$ and avoid $I \cup ( N \setminus (X \cup Y) )$. For $p = p^{(2)}$, the maximizer $Z$ must include $I \cup C$ and avoid $( X \setminus (I \cup C) ) \cup ( N \setminus (X \cup Y) )$. Therefore, we have \begin{align*} g_{1}(q) &= \max_{J \subseteq Y_{0}} \{ f( (X_{0} \setminus I) \cup C \cup J) - q(J) \} \\ & = g(p^{(1)}) - M (|X_{0} \setminus I|+|C|), \\ g_{2}(-q) &= \max_{J \subseteq Y_{0}} \{ f( I \cup C \cup (Y_{0} \setminus J) ) + q(J) \} \\ &= \max_{K \subseteq Y_{0}} \{ f( I \cup C \cup K ) - q(K) \} + q(Y_{0}) \\ & = g(p^{(2)}) - M (|I|+|C|) + q(Y_{0}). \end{align*} The function $g$ is submodular by \cite[Theorem~6.19]{Mdcasiam=valmat} and therefore \begin{align} & g_{1}(q) + g_{2}(-q) \notag \\ & = g(p^{(1)}) +g(p^{(2)}) - M (|X|+|C|) + q(Y_{0}) \notag \\ &\geq g(p^{(1)} \vee p^{(2)}) +g(p^{(1)} \wedge p^{(2)}) - M (|X|+|C|) + q(Y_{0}). 
\label{g1g2gg} \end{align} Since \begin{align*} & (p^{(1)} \vee p^{(2)})_{i} = (p^{(1)} \wedge p^{(2)})_{i} = \left\{ \begin{array}{ll} q_{i} & (i \in Y_{0}) , \\ - M & (i \in C ), \\ + M & (i \in N \setminus (X \cup Y) ) , \\ \end{array} \right. \\ & (p^{(1)} \vee p^{(2)})_{i} = - (p^{(1)} \wedge p^{(2)})_{i} = + M \quad (i \in X_{0}), \\ \end{align*} we have \begin{align} g(p^{(1)} \vee p^{(2)}) & \geq f(Y) - q(Y_{0}) + M|C|, \label{gpveep} \\ g(p^{(1)} \wedge p^{(2)}) & \geq f(X) + M |X| , \label{gpwedgep} \end{align} where (\ref{gpveep}) and (\ref{gpwedgep}) follow from (\ref{gpdef}) with $Z=Y$ and $Z=X$, respectively. The substitution of (\ref{gpveep}) and (\ref{gpwedgep}) into (\ref{g1g2gg}) yields the desired inequality $g_{1}(q) + g_{2}(-q) \geq f(X) + f(Y)$. \end{proof} Lemma \ref{LMmnatexcm0} implies the multiple exchange property $\McavmS$ for M-concave functions. \begin{lemmaM} \label{LMmexc0m} $\McavmS$ holds for every M-concave function. \end{lemmaM} \begin{proof} Let $f$ be an M-concave function. By Proposition~\ref{PRmcav=mnatcav+equicard}, $f$ is an \Mnat-concave function such that $|X| = |Y|$ for all $X, Y \in \dom f$. Hence $\MncavmS$ for $f$ is equivalent to $\McavmS$, whereas $f$ satisfies $\MncavmS$ by Lemma \ref{LMmnatexcm0}. \end{proof} Finally we establish the stronger multiple exchange property $\MncavmsS$ for \Mnat-concave functions. \begin{lemmaM} \label{LMmnatexc0ms} $\MncavS$ implies $\MncavmsS$. \end{lemmaM} \begin{proof} Let $f: 2^{N} \to \Rminf$ be an M$^{\natural}$-concave function, and consider the associated M-concave function $\tilde{f}: 2^{\tilde N} \to \Rminf$ as defined in (\ref{assocMdef}), where $\tilde{N} = N \cup S$ (cf., Proposition~\ref{PRmnatequicardvalmat}). We apply Lemma~\ref{LMmexc0m} to this M-concave function $\tilde{f}$. Suppose that we are given $X, Y \in \dom f$ and a subset $I \subseteq X \setminus Y$. 
Take any $U, V \subseteq S$ with $|U|=r - |X|$ and $|V|=r - |Y|$, where $r = \max \{ |Z| \mid Z \in \dom f \}$. Then $X \cup U, Y \cup V \in \dom \tilde f$ and $I \subseteq (X \cup U) \setminus (Y \cup V)$. By Lemma \ref{LMmexc0m} for $\tilde f$, there exist $J \subseteq Y \setminus X$ and $W \subseteq V \setminus U$ such that $|J| + |W| = |I|$ and \begin{align*} \tilde f( X \cup U) +\tilde f( Y \cup V ) \leq \tilde f( \, ( (X \setminus I) \cup J ) \cup (U \cup W ) \, ) + \tilde f( \, ( (Y \setminus J) \cup I) \cup (V \setminus W ) \, ), \end{align*} which implies $f( X) + f( Y ) \leq f((X \setminus I) \cup J) +f((Y \setminus J) \cup I)$ in (\ref{mnatconcavexcmult}). Here we have $|J| \leq |I|$ since $|J| + |W| = |I|$. \end{proof} \section*{Acknowledgement} The author thanks Akiyoshi Shioura for discussion and comments. This work was supported by JSPS KAKENHI Grant Number JP20K11697.
\section{Introduction} Filament eruptions and other ejections of mass from the Sun are often accompanied by a dimming of the local coronal emission at many different wavelengths, and by the formation of transient coronal holes (see, e.g., Harrison \& Lyons 2000; Kahler \& Hudson 2001; Harrison et al. 2003; Howard \& Harrison 2004; Attrill et al. 2006; Harra et al. 2007; Imada et al. 2007; Reinard \& Biesecker 2008, 2009; Jin et al. 2009; Dai et al. 2010). The dimmings are in most cases caused by a decrease in the coronal density due to the opening-up of the magnetic field and escape of the entrained material into the heliosphere. The closing-down of the flux proceeds from the inside outward, with the field lines rooted nearest the photospheric polarity inversion line (PIL) pinching off first, giving rise to a progressively growing post-eruption loop arcade (Kopp \& Pneuman 1976). Chromospheric evaporation fills each newly reconnected loop with high-temperature plasma, which cools as the loop collapses; thus the hottest loops are located at the leading edge of the outward-expanding ``reconnection wave'' (see, e.g., Warren et al. 1999; Sheeley et al. 2004, 2007). The different viewing angles afforded by the two {\it STEREO} spacecraft (Howard et al. 2008) provide a unique opportunity to study the 3-dimensional structure of coronal dimmings and post-eruption arcades. Moreover, with the Extreme-Ultraviolet Imager (EUVI), these events can be observed simultaneously in the 17.1, 19.5, and 28.4~nm bandpasses, corresponding to temperatures of $\sim$1, $\sim$1.5, and $\sim$2~MK, respectively, thereby allowing one to distinguish more easily between ``real'' dimmings due to mass loss (often termed transient coronal holes) and dimmings due to heating and cooling effects in post-eruption arcades. 
We have studied several events involving the eruption of high-latitude filaments during the 2008--2009 activity minimum, when the separation between the {\it STEREO}~A and B spacecraft was on the order of 90$^\circ$. In this Letter, we focus on new results concerning the temperature dependence of the dimmings and the subsequent reconnection waves. \section{EUV Observations} The sequence of images in Figure~\ref{fig:Jan14STA} shows a filament eruption at the northeast limb on 2009 January~14, as observed by EUVI~A in the 17.1, 19.5, and 28.4~nm bandpasses (see also the accompanying online movies). Each image represents the ratio of the local brightness at the given time to that at 19:06~UT on January 13, before the start of the eruption; a shift has been applied to remove the effect of the photospheric differential rotation on the disk (Howard, Harvey and Forgach, 1990). By taking the ratio of intensities rather than subtracting them, we are able to bring out features above the limb which, because of the rapid falloff of the coronal density with height, would not be visible in ordinary base-difference images. At 04:06~UT, the last of the prominence material (which is best seen in \ion{He}{2} 30.4~nm) is being ejected from the limb. This dense, cool material is only faintly visible at 19.5 and 28.4~nm, but appears in \ion{Fe}{9} 17.1~nm as a bright blob with a narrow, dark tail. We interpret this dark tail as a density depletion associated with the pinching-off of the magnetic field behind the ejection. Poleward and equatorward of the disconnection region, however, the corona is noticeably darker in the higher-temperature lines than in 17.1~nm. At 06:06 UT (second row of images in Figure~\ref{fig:Jan14STA}), a cusp-shaped post-eruption arcade has begun to form in 19.5 and 28.4~nm, with the structure being brighter and more extended in the latter wavelength. 
In both cases, the surrounding corona (off-limb) has been strongly depleted of material at the given temperature, as indicated by the dark voids in the images. In contrast, the 17.1~nm image is dominated by neutral gray, and shows neither a bright arcade nor a large region of depleted density. Instead, we continue to see a remnant of the wakelike depletion and some dark areas which lie inside the outer boundary of the bright 28.4 arcade (indicated by the yellow contours), and which evidently represent plasma that has been heated to temperatures well above 1~MK. The same situation continues to hold as the hot post-eruption arcades expand (bottom row of images). Figure~\ref{fig:Jan14STB} shows the filament eruption and its aftermath as viewed on the disk from the {\it STEREO}~B spacecraft. Here, each image represents the ratio of the local brightness at the given time to that recorded $\sim$2~hr earlier (rather than before the eruption). The \ion{Fe}{12} 19.5~nm image taken at 05:05~UT shows a pair of large, dark transient coronal holes, one on each side of the footpoint brightenings of the post-eruption arcade. It should be noted that the brightenings appear in the vicinity of the PIL well before the holes (which remain essentially stationary) reach their darkest level. The transient holes and footpoint brightenings are barely visible in the \ion{Fe}{9} 17.1~nm image, as expected from the corresponding limb views of Figure~\ref{fig:Jan14STA}, where the 17.1~nm intensities undergo relatively little change during the event. More puzzling, at first sight, is the rather weak signature of the transient holes at 28.4~nm, despite the very strong darkenings that are seen above the limb in Figure~\ref{fig:Jan14STA}. The weakness of the on-disk holes can be attributed to contributions to the 28.4~nm bandpass from low-temperature lines such as \ion{Si}{7} 27.5~nm, which become particularly significant in darker regions of the disk (see e.g., Figure~21 in Del~Zanna et al. 
2003). Between 05:05 and 07:05 UT, the brightenings increase in intensity and continue to spread through the area occupied by the transient holes. In the 17.1~nm running-ratio image at 07:06~UT, the poleward and (to a lesser extent) the equatorward sides of the footpoint brightenings are bordered by dark ribbons, which lie inside the boundaries of the bright 28.4 and 19.5~nm emission. These dark ribbons evidently represent the footpoint areas of the newly reconnected loops which earlier contained \ion{Fe}{9} plasma but have now been heated to higher temperatures (see Fig.~\ref{fig:Jan14STA}). Thus the outward-propagating dark ribbons observed in the relatively cool 17.1~nm line are a heating effect due to the closing-down of flux, and not a density-depletion effect due to the opening-up of field lines. Note that, in running-ratio images, transient holes no longer appear as dark areas after they have reached their darkest level in base-ratio images; comparing the two types of images thus offers a way to distinguish the ``heat waves'' from the transient coronal holes. At 09:05 UT, the footpoint brightenings continue to propagate outward, but no further intensity increases occur between the diverging fronts. A dark ribbon is still present along the limbward side of the poleward-propagating 17.1~nm brightening. Comparing the positions of the 19.5 and 28.4~nm brightenings, we see that the higher-latitude fronts appear to be shifted more relative to each other than the equatorward-propagating fronts. This difference may be a projection effect caused by the fact that the 28.4~nm emission extends to greater heights than the 19.5~nm emission, and thus seems to extend farther toward the limb. As another characteristic example, Figures~\ref{fig:Dec27STA} and \ref{fig:Dec27STB} show the off-limb and on-disk views of a filament eruption that occurred on 2008 December~27 (see also the online movies). 
The temperature/heating effect is clearly seen in the limb view from EUVI~A (Figure~\ref{fig:Dec27STA}), where the post-eruption arcade is bright in 28.4~nm, fainter in 19.5~nm, and dark in 17.1~nm. Conversely, the region of strongly depleted 28.4~nm emission above the arcade appears as neutral gray in the 17.1~nm base-ratio images, indicating that it is mainly the hotter plasma that escapes when the magnetic field opens up. Considering now the on-disk view from EUVI~B (Figure~\ref{fig:Dec27STB}), we first note that the dimmings in the 19.5~nm running-ratio images at 05:06 and 07:06~UT represent transient coronal holes. The faint Y-shaped darkening seen at 05:06~UT in 17.1~nm is a heating effect, since it coincides with a similarly shaped brightening in 28.4~nm. Subsequently, as this plasma cools, the double-ribbon brightening begins to appear in 17.1~nm, bordered on its poleward and equatorward sides by dark patches that represent the outward-propagating 17.1~nm ``heat wave''. The high-latitude filament eruptions of December 27 and January 14 both gave rise to slow CMEs observed with the white-light coronagraphs on {\it STEREO}. \section{Physical Interpretation} Figure~\ref{fig:pfss} shows the coronal magnetic field on 2009 January 14, as viewed from {\it STEREO}~B; the field lines were derived from a potential-field source-surface (PFSS) extrapolation of magnetograph measurements from the Mount Wilson Observatory (MWO). As indicated by the yellow dot, the filament eruption occurred along the PIL encircling the negative-polarity north polar cap. In order to account for the pair of transient coronal holes, most of the overlying coronal loops rooted between the polar hole boundary and the field-line ``part'' or separatrix on the equatorward side of the PIL must have opened up during the eruption. 
From the fact that the post-eruption arcade started to form well before the twin dimmings reached their maximum strength, we deduce that the innermost loops near the PIL pinched off while the mass loss was still at an early stage. This can be seen from the off-limb 19.5~nm ratio images in Figure~\ref{fig:Jan14STA}. At 04:05 UT the pinch-off has already occurred, while the evacuation of mass is still ongoing, as evidenced by the darker void at 06:05 UT. It is often assumed that all of the entrained coronal material is expelled when the magnetic field opens up. That the background 17.1~nm emission undergoes relatively little change during filament eruptions provides strong evidence to the contrary: most of the cooler coronal plasma does not escape before the field closes down again but instead remains trapped and never leaves the Sun. This result is consistent with multi-wavelength Doppler observations of a transient coronal hole by Imada et al. (2007; see also Jin et al. 2009), using the EUV Imaging Spectrometer on {\it Hinode}. They found that the outflow speeds at the hole boundary were strongly temperature-dependent, with a steep transition from slow to fast flows occurring near 1~MK. As indicated by their Figure~6, the velocity $v$ varies from $\sim 30$~km~s$^{-1}$ in \ion{Fe}{9} to $\sim 90$~km~s$^{-1}$ in \ion{Fe}{12} to $\sim 160$~km~s$^{-1}$ in \ion{Fe}{15}. Correspondingly, the timescale $\tau_{\rm esc}\sim R_\odot/v$ for the plasma to travel a solar radius varies from $\sim$7~hr in \ion{Fe}{9} to $\sim$2~hr in \ion{Fe}{12} to only $\sim$1~hr in \ion{Fe}{15}. Since the transient holes in the January~14 and December~27 events reached their greatest dimming level $\sim 3$ to $4$~hr after they first appeared in the base-ratio images, we conclude that the \ion{Fe}{9} emitting plasma did not have sufficient time to escape before the onset of reconnection.
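As a quick back-of-the-envelope check of these timescales (our own arithmetic, not part of the observations; we take $R_\odot \approx 6.96\times 10^{5}$~km and the three Doppler speeds quoted above):

```python
# Rough escape timescales tau_esc ~ R_sun / v for the three iron lines,
# using the outflow speeds quoted from Imada et al. (2007).
R_SUN_KM = 6.96e5  # solar radius in km (approximate)

speeds_km_s = {
    "Fe IX (17.1 nm)": 30.0,
    "Fe XII (19.5 nm)": 90.0,
    "Fe XV (28.4 nm)": 160.0,
}

tau_hr = {line: R_SUN_KM / v / 3600.0 for line, v in speeds_km_s.items()}
for line, tau in tau_hr.items():
    print(f"{line}: tau_esc ~ {tau:.1f} hr")
```

This gives approximately 6.4, 2.1, and 1.2~hr, consistent with the quoted $\sim$7, $\sim$2, and $\sim$1~hr values; only the \ion{Fe}{9} timescale exceeds the $\sim$3--4~hr delay before reconnection.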
An interesting analogy may be drawn between the temperature dependence of transient holes and the relationship between polar plumes and the interplume regions in coronal holes. EUV plumes, which are best observed in lower-temperature lines like \ion{Fe}{9} 17.1~nm, are both denser and cooler than the interplume medium; moreover, Doppler measurements indicate that the flow speeds in plumes are much smaller (at low heights) than in the rest of the coronal hole (see e.g., Wilhelm et al. 1998; Cranmer et al. 1999). Evidently, when the magnetic field opens up during a filament eruption, the hotter component of the corona behaves like the interplume medium, whereas the cooler component behaves like the plume gas. According to this analogy, the hot and cool components would exist along separate field lines or ``strands'' of flux tubes. As demonstrated by the events described in section 2, the 17.1~nm corona shows its most pronounced dimmings not during the eruption, but during the reconnection phase. These propagating dark ribbons or ``heat waves'' occur at the leading edge of the 17.1~nm post-eruption arcade, and result from the reconnective heating of the cool plasma that was not ejected during the eruption. By implication, the post-eruption brightenings seen at 19.5 and 28.4~nm have two different sources: reconnected loops that were refilled via chromospheric evaporation and underwent subsequent cooling, and reconnected loops that were already filled with cool plasma, but were then heated to temperatures well above 1~MK. \section{Conclusions} Our main points may be summarized as follows: 1. When viewed either on the disk or at the limb, transient coronal holes are much less visible in \ion{Fe}{9} 17.1~nm than in higher-temperature emission lines such as \ion{Fe}{12} 19.5~nm. This observational result implies that most of the cooler coronal plasma does not escape when the magnetic field opens up. 2. 
The cooler plasma remains trapped because it flows outward too slowly to escape before the field lines close down again. As shown by the Doppler measurements of Imada et al. (2007), the outflow velocities in transient holes decrease dramatically at temperatures below $\sim$1~MK, from close to the sound speed to only $\sim$30~km~s$^{-1}$. 3. The strongest darkenings in 17.1~nm occur not during the opening-up of the magnetic field, but when it closes down again. The trapped plasma is then heated to high temperatures, producing darkenings in 17.1~nm which coincide with brightenings in 28.4 and 19.5~nm. As the post-eruption arcade expands, a dark wavefront is observed in 17.1~nm at the leading edges of the arcade, which appears as an outward-propagating dark ribbon when viewed on the disk. Off the limb, this gives the false impression of dark loops that are ``opening up''. 4. An analogy exists between the temperature dependence of transient holes and the relationship between polar plumes and the interplume regions in coronal holes, with the plume (interplume) behaving in some ways like the cool (hot) plasma in transient holes. Many important questions remain to be addressed. What is the physical reason for the steep transition between slow and fast outflow near 1~MK? Is a greater fraction of the cool plasma ejected in energetic events involving fast CMEs? Are dark on-disk ``heat waves'' also seen in higher-temperature lines such as \ion{Fe}{12} 19.5~nm, where the pinched-off loops are refilled by chromospheric evaporation? This preliminary study suggests that the continued investigation of temperature effects in coronal dimmings may provide a key to a better physical understanding of CME eruptions. \acknowledgments We are indebted to G. A. Doschek, G. Stenborg, I. Ugarte-Urra, H. P. Warren and P. R. Young for informative discussions. The SECCHI data are produced by an international consortium of the NRL, LMSAL and NASA GSFC (USA), RAL and U.
Bham (UK), MPS (Germany), CSL (Belgium), IOTA and IAS (France). This work was supported by NASA and the Office of Naval Research.
\section{Introduction} Let $E$ be an elliptic curve over a number field $K$. The famous {\sl Mordell-Weil theorem} asserts that the (abelian) group $E(K)$ of $K$-points on $E$ is finitely generated \cite{Cassels,SilvermanTate,Wash}. The first step in its proof (and in actually finding a finite set that generates $E(K)$) is the {\sl weak Mordell-Weil theorem}, which asserts that the quotient $E(K)/2 E(K)$ is a finite (abelian) group. This step is called 2-descent, and its basic ingredient is a criterion for when a $K$-point on $E$ is twice another $K$-point (under the additional assumption that all points of order 2 on $E$ are defined over $K$). In this paper we give a new treatment of this criterion that seems to be less computational than previous ones (\cite[Ch. 5, pp. 102--104]{Lang}, \cite{Husemoller}, \cite[Th. 4.2 on pp. 85-87]{Knapp}, \cite[Lemma 7.6 on p. 67]{Buhler}, \cite[pp. 331--332]{Bombieri}). This approach allows us to describe explicitly the 2-power torsion on elliptic curves. In addition, we obtain explicit families of elliptic curves with various torsion subgroups over arbitrary fields of characteristic different from 2 (the problem of constructing elliptic curves with given torsion goes back to B. Levi \cite{SS}). The paper is organized as follows. We work with elliptic curves $E$ over an arbitrary field $K$ with $\fchar(K)\ne 2$. In Section \ref{l1} we discuss the criterion of divisibility by 2 and explicit formulas for the ``half-points'' in $E(K)$. Next we discuss a criterion of divisibility by any power of $2$ in $E(K)$ (Section \ref{power2}). In Section \ref{torsion} we collect useful results about elliptic curves and their torsion. In Sections \ref{l4}, \ref{l8}, and \ref{l6} we will use the explicit formulas of Section \ref{l1} in order to construct {\sl versal} families of elliptic curves $E$ such that $E(K)$ contains a subgroup isomorphic to $\mathbb{Z}/2m\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$ with $m=2,4,3$, respectively.
(In addition, in Section \ref{l4} we construct a {\sl versal} family of elliptic curves $E$ such that $E(K)$ contains a subgroup isomorphic to $\mathbb{Z}/4\mathbb{Z}\oplus \mathbb{Z}/4\mathbb{Z}$.) Such families are parameterized by $K$-points of rational curves that are closely related to certain modular curves of genus zero (see \cite{SS,Kubert,Silver,Silver2}); however, our approach remains quite elementary. In addition, in Sections \ref{l8} and \ref{l5} we construct {\sl versal} families of elliptic curves $E$ such that $E(K)$ contains a subgroup isomorphic to $\mathbb{Z}/8\mathbb{Z}\oplus \mathbb{Z}/4\mathbb{Z}$ and $\mathbb{Z}/10\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$, respectively. These two families are parameterized by $K$-points of curves that are closely related to certain modular curves of genus 1. As an unexpected application, we describe explicitly (and without computations) elliptic curves $E$ over small finite fields $\mathbb{F}_q$ such that $E(\mathbb{F}_q)$ is isomorphic to a certain finite group (of small order). Using deep highly nontrivial results of B. Mazur \cite{Mazur} and of S. Kamienny and M. Kenku--F. Momose \cite{Kam,Ken}, we describe explicitly elliptic curves $E$ over the field $\Q$ of rational numbers and over quadratic fields $K$ such that the torsion subgroup $E(\Q)_t$ of $E(\Q)$ (resp. $E(K)_t$ of $E(K)$) is isomorphic to a certain finite group. {\bf Acknowledgements}. We are grateful to Robin Chapman for helpful comments. Our special thanks go to Tatiana Bandman for help with \textbf{magma}. \section{Division by 2} \label{l1} Let $K$ be a field of characteristic different from $2$. Let \begin{equation} \label{E2} E: y^2=(x-\alpha_1)(x-\alpha_2)(x-\alpha_3) \end{equation} be an elliptic curve over $K$, where $\alpha_1,\alpha_2,\alpha_3$ are {\sl distinct} elements of $K$. 
This means that $E(K)$ contains all three points of order 2, namely, the points \begin{equation} \label{W2} W_1=(\alpha_1,0), W_2=(\alpha_2,0), W_3=(\alpha_3,0). \end{equation} The following statement is pretty well known (\cite[pp. 269--270]{Cassels}, \cite[Ch. 5, pp. 102--104]{Lang}, \cite{Husemoller}, \cite[Th. 4.2 on pp. 85-87]{Knapp}, \cite[Lemma 7.6 on p. 67]{Buhler}, \cite[pp. 331--332]{Bombieri}, \cite[pp. 212--214]{Wash}; see also \cite{Yelton}). \begin{thm} \label{th0} Let $P=(x_0,y_0)$ be a $K$-point on $E$. Then $P$ is divisible by $2$ in $E(K)$ if and only if all three elements $x_0-\alpha_i$ are squares in $K$. \end{thm} While the proof of the claim that divisibility implies squareness is straightforward, it seems that the known elementary proofs of the converse statement are more involved/computational. (Notice that there is another approach, which is based on Galois cohomology \cite[Sect. X.1, pp. 313--315]{Silverman} and works for hyperelliptic jacobians as well \cite{Schaefer}.) We start with an elementary proof of the converse that seems to be less computational. (In addition, it will immediately give us explicit formulas for the coordinates of all four points $\frac{1}{2}P$.) \begin{proof} So, let us assume that all three elements $x_0-\alpha_i$ are squares in $K$, and let $Q=(x_1,y_1)$ be a point on $E$ with $2 Q=P$. Since $P\ne \infty$, we have $y_1\ne 0$, and therefore the equation of the {\sl tangent line} $L$ to $E$ at $Q$ may be written in the form $$L: y=l x+m.$$ (Here $x_1,y_1, l, m$ are elements of an overfield of $K$.) In particular, $y_1=l x_1+m$. By the definition of $Q$ and $L$, the point $-P=(x_0,-y_0)$ is the ``third'' common point of $L$ and $E$; in particular, $-y_0=l x_0+m$, i.e., $y_0=-(l x_0+m)$. Standard arguments (the restriction of the equation for $E$ to $L$, see \cite[pp. 25--27]{SilvermanTate}, \cite[pp. 12--14]{Wash}, \cite[p.
331]{Bombieri}) tell us that the monic cubic polynomial $$(x-\alpha_1)(x-\alpha_2)(x-\alpha_3)-(l x+m)^2$$ coincides with $(x-x_1)^2 (x-x_0)$. This implies that $$-(l \alpha_i+m)^2=(\alpha_i-x_1)^2 (\alpha_i-x_0) \ \text{for all}\ i=1,2,3.$$ Since $2Q=P\ne \infty$, none of $x_1 -\alpha_i$ vanishes. Recall that all $x_0-\alpha_i$ are squares in $K$ and they are obviously distinct. Consequently, the corresponding square roots \cite[p. 331]{Bombieri} \begin{equation*} \label{alphaR} r_i:=\frac{l\alpha_i+m}{x_1 - \alpha_i}=\sqrt{x_0-\alpha_i} \end{equation*} are {\sl distinct} elements of $K$. In other words, the transformation $$z \mapsto \frac{l z+m}{-z+x_1}$$ of the projective line sends the three distinct $K$-points $\alpha_1,\alpha_2,\alpha_3$ to the three distinct $K$-points $r_1,r_2,r_3$, respectively. This implies that our transformation is {\sl not} constant, i.e., is an honest linear fractional transformation \footnote {Another way to see this is to assume the contrary. Then the {\sl determinant} $l x_1+m=0$, i.e., $y_1=0$, and therefore $P=2Q$ is the infinite point, which is not true.} and is defined over $K$. Since one of the ``matrix entries'', $-1$, is already a nonzero element of $K$, all other matrix entries $l, m, x_1$ also lie in $K$. Since $y_1=l x_1 +m$, it also lies in $K$. So, $Q=(x_1,y_1)$ is a $K$-point of $E$. \end{proof} Let us get explicit formulas for $x_1,y_1, l, m$ in terms of $r_1,r_2,r_3$. We have $$\alpha_i=x_0-r_i^2, \ l\alpha_i+m=r_i (x_1 - \alpha_i),$$ and therefore $$l (x_0-r_i^2)+m=r_i [x_1-(x_0-r_i^2)]=r_i^3+(x_1-x_0)r_i,$$ which is equivalent to $r_i^3+ l r_i^2+(x_1-x_0)r_i-(l x_0+m)=0$, and this equality holds for all $i=1,2,3$. This means that the monic cubic polynomial \begin{equation*} \label{polyH} h(t)=t^3+l t^2+(x_1-x_0)t-(l x_0+m) \end{equation*} coincides with $(t-r_1)(t-r_2)(t-r_3)$. Recalling that $-(l x_0+m)=y_0$, we get \begin{equation} \label{product} r_1 r_2 r_3=-y_0.
\end{equation} We also get \begin{equation*} \label{slope} l=-(r_1+r_2+r_3), \ x_1-x_0=r_1 r_2+r_2 r_3+r_3 r_1. \end{equation*} This implies that \begin{equation} \label{x1} x_1=x_0+(r_1 r_2+r_2 r_3+r_3 r_1). \end{equation} Since $y_1=l x_1+m$ and $-y_0=l x_0+m$, we obtain that $$m=-y_0-l x_0=-y_0+(r_1+r_2+r_3)x_0,$$ and therefore $$y_1= -(r_1+r_2+r_3)[x_0+(r_1 r_2+r_2 r_3+r_3 r_1)]+[-y_0+(r_1+r_2+r_3)x_0],$$ i.e., \begin{equation} \label{y1} y_1=-y_0-(r_1+r_2+r_3)(r_1 r_2+r_2 r_3+r_3 r_1). \end{equation} Notice that there are precisely four points $Q \in E(K)$ with $2Q=P$, \begin{equation} \label{halfP} Q=\left(x_0+(r_1 r_2+r_2 r_3+r_3 r_1),-y_0-(r_1+r_2+r_3)(r_1 r_2+r_2 r_3+r_3 r_1)\right), \end{equation} each of which corresponds to one of the {\sl four} choices of the three square roots $r_i=\sqrt{x_0-\alpha_i}\in K$ ($i=1,2,3$) with $r_1 r_2 r_3=-y_0$. Using the latter equality, we may rewrite \eqref{y1} as \footnote{This was brought to our attention by Robin Chapman.} \begin{equation} \label{chap} y_1=-(r_1+r_2)(r_2+r_3)(r_3+r_1). \end{equation} In addition, \begin{equation} \label{x1prod} x_1=\alpha_i+(r_i+r_j)(r_i+r_k), \end{equation} where $i,j,k$ is any permutation of $1,2,3$. Indeed, $$x_1-\alpha_i=(x_0-\alpha_i)+r_1 r_2+r_2 r_3+r_3 r_1=$$ $$r_i^2+r_1 r_2+r_2 r_3+r_3 r_1=(r_i+r_j)(r_i+r_k).$$ The remaining four choices of the ``signs'' of $r_1,r_2,r_3$ bring us to the same values of abscissas and the opposite values of ordinates and give the results of division by 2 of the point $-P$. Conversely, if we know $Q=(x_1,y_1)$, then we may recover the corresponding $(r_1,r_2,r_3)$. Namely, the equalities (\ref{x1prod}) and (\ref{chap}) imply that \begin{equation*} \label{QtoR} \begin{aligned} r_j+r_k=-\frac{y_1}{x_1-\alpha_i},\\ r_i=\frac{-(r_j+r_k)+(r_i+r_j)+(r_i+r_k)}{2}\\=-\frac{y_1}{2}\cdot \left(-\frac{1}{x_1-\alpha_i}+\frac{1}{x_1-\alpha_j}+\frac{1}{x_1-\alpha_k}\right) \end{aligned} \end{equation*} for any permutation $i,j,k$ of $1,2,3$.
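These formulas are easy to test numerically. The following sketch (ours; the curve and the point are chosen only for illustration) takes $y^2=(x+4)(x+1)x$, i.e., $\alpha_1=-4$, $\alpha_2=-1$, $\alpha_3=0$, and $P=(0,0)$, so that $r_1=2$, $r_2=1$, $r_3=0$ satisfy $r_i^2=x_0-\alpha_i$ and $r_1r_2r_3=-y_0$; formula \eqref{halfP} yields $Q=(2,-6)$, and doubling $Q$ by the usual chord--tangent rule returns $P$:

```python
from fractions import Fraction as F

# Curve y^2 = (x - a1)(x - a2)(x - a3) = x^3 + 5x^2 + 4x over Q,
# with a1 = -4, a2 = -1, a3 = 0, and the point P = (0, 0).
a1, a2, a3 = F(-4), F(-1), F(0)
x0, y0 = F(0), F(0)
r1, r2, r3 = F(2), F(1), F(0)   # r_i^2 = x0 - a_i and r1*r2*r3 = -y0

# Half-point formula: Q = (x1, y1) with 2Q = P.
s1, s2 = r1 + r2 + r3, r1*r2 + r2*r3 + r3*r1
x1, y1 = x0 + s2, -y0 - s1*s2
assert y1 == -(r1 + r2)*(r2 + r3)*(r3 + r1)      # alternative form of y1
assert y1**2 == (x1 - a1)*(x1 - a2)*(x1 - a3)    # Q lies on the curve

# Double Q by the tangent rule on y^2 = x^3 + A x^2 + B x + C.
A, B = -(a1 + a2 + a3), a1*a2 + a2*a3 + a3*a1
m = (3*x1**2 + 2*A*x1 + B) / (2*y1)              # tangent slope at Q
x2 = m**2 - A - 2*x1
y2 = m*(x1 - x2) - y1
assert (x2, y2) == (x0, y0)                      # indeed 2Q = P
print("Q =", (x1, y1))
```

Here $P=W_3$ has order $2$, so $Q$ is a point of order $4$; the other three half-points are obtained by flipping the signs of the $r_i$ subject to $r_1r_2r_3=-y_0$.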
\begin{ex} \label{halfW3} Let us choose as $P=(x_0,y_0)$ the point $W_3=(\alpha_3,0)$ of order $2$ on $E$. Then $r_3=0$, and we have two arbitrary independent choices of (nonzero) $r_1=\sqrt{\alpha_3-\alpha_1}$ and $r_2=\sqrt{\alpha_3-\alpha_2}$. Thus $$Q=(\alpha_3+r_1 r_2, -(r_1+r_2)r_1 r_2)=(\alpha_3+r_1 r_2, -r_1(\alpha_3-\alpha_2)-r_2(\alpha_3-\alpha_1))$$ is a point on $E$ with $2Q=P$; in particular, $Q$ is a point of order $4$. The same is true for the (three remaining) points $-Q=(\alpha_3+r_1 r_2, r_1(\alpha_3-\alpha_2)+r_2(\alpha_3-\alpha_1))$, \newline $(\alpha_3-r_1 r_2, - r_1(\alpha_3-\alpha_2)+r_2(\alpha_3-\alpha_1))$, and $(\alpha_3-r_1 r_2, r_1(\alpha_3-\alpha_2)-r_2(\alpha_3-\alpha_1))$. \end{ex} Recall that, in formula (\ref{halfP}) for the coordinates of the points $\frac{1}{2}{P}$, one may arbitrarily choose the signs of $r_1,r_2,r_3$ under condition \eqref{product}. Let $Q$ be one of $\frac{1}{2}{P}$'s that corresponds to a certain choice of $r_1,r_2, r_3$. The remaining three {\sl halves} of $P$ correspond to $(r_1,-r_2, -r_3)$, $(-r_1,r_2, -r_3)$, $(-r_1,-r_2, r_3)$. Let us denote these halves by $\mathcal{Q}_1, \mathcal{Q}_2, \mathcal{Q}_3$, respectively. For each $i=1,2,3$, the difference $\mathcal{Q}_i-Q$ is a point of order 2 on $E$. Which one? The following assertion answers this question. \begin{thm} \label{sign} Let $i,j,k$ be a permutation of $1,2,3$. Then \begin{itemize} \item[(i)] If $P=W_i$, then $\mathcal{Q}_i= -Q$. \item[(ii)] If $P \ne W_i$, then all three points $\mathcal{Q}_i, -Q, W_i$ are distinct. \item[(iii)] The points $\mathcal{Q}_i, -Q, W_i$ lie on the line $$y=(r_j+r_k)(x-\alpha_i).$$ \item[(iv)] $\mathcal{Q}_i-Q=W_i.$ \end{itemize} \end{thm} \begin{proof} First, assume that $P=W_i$. 
In this case, formulas \eqref{x1} and \eqref{y1} tell us that $$Q=(\alpha_i+r_j r_k, - r_j r_k(r_j+r_k)),$$ which implies that $$\mathcal{Q}_i=(\alpha_i+r_j r_k, r_j r_k(r_j+r_k))=-Q$$ and $$\mathcal{Q}_i-Q=-2Q=-P=P=W_i.$$ This proves (i) and a special case of (iv) when $P=W_i$. \begin{comment} Since $\mathcal{Q}_i\ne \mathcal{Q}_j$, we conclude that $$\mathcal{Q}_j \ne -Q, \ \mathcal{Q}_k \ne Q$$ (recall that $i,j,k$ is a permutation of $1,2,3$). \end{comment} Now assume that $P \ne W_i$ and prove that the three points $\mathcal{Q}_i, -Q, W_i$ are {\sl distinct}. Since none of $\mathcal{Q}_i$ and $-Q$ is of order $2$, none of them is $W_i$. On the other hand, if $\mathcal{Q}_i= -Q$, then $$2Q=P=2\mathcal{Q}_i=-2Q=-P,$$ and so $P$ has order $2$, say $P=W_j$. Applying (i) to $j$ instead of $i$, we get $\mathcal{Q}_j= -Q$; but $\mathcal{Q}_i\ne\mathcal{Q}_j$ since $i\ne j$. Therefore $\mathcal{Q}_i, -Q, W_i$ are three {\sl distinct} points. This proves (ii). Let us prove (iii). Since $$x_1-\alpha_i=(r_i+r_j)(r_i+r_k), \ y_1=-(r_1+r_2)(r_2+r_3)(r_3+r_1),$$ we have $-y_1=(r_j+r_k)(x_1-\alpha_i)$, so that $-Q=(x_1,-y_1)$ lies on the line $y=(r_j+r_k)(x-\alpha_i)$. Further, $$x(\mathcal{Q}_i)-\alpha_i=(r_i-r_j)(r_i-r_k),$$ $$y(\mathcal{Q}_i)=-(r_i-r_j)(-r_j-r_k)(-r_k+r_i)=(r_j+r_k)\left(x(\mathcal{Q}_i)-\alpha_i\right).$$ Therefore $\mathcal{Q}_i, -Q, W_i$ lie on the line $$y=(r_j+r_k)(x-\alpha_i).$$ We have already proven (iv) when $P=W_i$. So, let us assume that $P \ne W_i$. Now (iv) follows from (iii) combined with (ii). \section{Division by $2^n$} \label{power2} Using the formulas above that describe the division by 2 on $E$, one may easily deduce the following necessary and sufficient condition of divisibility by any power of 2. For an overfield $L$ of $K$, we consider a sequence of points $Q_{\mu}$ in $E(L)$ such that $Q_0=P$ and $2 Q_{\mu+1}=Q_{\mu}$ for all $\mu=0,1,2, \dots $.
Let $r_1^{(\mu)}, r_2^{(\mu)}, r_3^{(\mu)}$ ($\mu=0,1,2, \dots $) be arbitrary sequences of elements of $L$ that satisfy the relations $$(r_i^{(\mu)})^2=x(Q_{\mu})-\alpha_i.$$ Then for each permutation $i,j,k$ of $1,2,3$ we obtain, in light of the formula (\ref{x1prod}), $$ x(Q_{\mu+1})-\alpha_i= \bigl(r_i^{(\mu)}+r_j^{(\mu)}\bigr)\bigl(r_i^{(\mu)}+r_k^{(\mu)}\bigr), $$ which implies that $$(r_i^{(\mu+1)})^2=(r_i^{(\mu)}+r_j^{(\mu)})(r_i^{(\mu)}+r_k^{(\mu)}).$$ By changing the signs of $r_i^{(\mu)}, r_j^{(\mu)}, r_k^{(\mu)}$ in the product $(r_i^{(\mu)}+r_j^{(\mu)})(r_i^{(\mu)}+r_k^{(\mu)})$, we obtain all possible values of the abscissas of the points $Q_{\mu+1}$ with $2Q_{\mu+1}=Q_{\mu}$. Suppose that $Q_{\mu}\in E(K)$. Then $Q_{\mu}$ is divisible by $2$ in $E(K)$ if and only if one may choose $r_i^{(\mu)}, r_j^{(\mu)}, r_k^{(\mu)}$ in such a way that $(r_i^{(\mu)}+r_j^{(\mu)})(r_i^{(\mu)}+r_k^{(\mu)})$ are squares in $K$ for all $i=1,2,3$. We have proved the following statement. \begin{thm} \label{divisionByPower} Let $P=(x_0,y_0)\in E(K)$. Let $r_1^{(\mu)}, r_2^{(\mu)}, r_3^{(\mu)}$ ($\mu=0,1,2, \dots$) be sequences of elements of $L$ that satisfy the relations $$(r_i^{(0)})^2=r_i^2=x_0-\alpha_i, \ (r_i^{(\mu+1)})^2=(r_i^{(\mu)}+r_j^{(\mu)})(r_i^{(\mu)}+r_k^{(\mu)})$$ for all permutations $i,j,k$ of $1,2,3$. Then $P$ is divisible by $2^n$ in $E(K)$ if and only if all $x_0-\alpha_i$ are squares in $K$ and, for each $\mu=0,1, \dots, n-1$, one may choose the square roots $r_1^{(\mu)}, r_2^{(\mu)}, r_3^{(\mu)}$ in such a way that the products $(r_i^{(\mu)}+r_j^{(\mu)})(r_i^{(\mu)}+r_k^{(\mu)})$ are squares in $K$ {\rm(}and therefore all $r_i^{(\mu)}$ lie in $K$ for $\mu=0,1, \dots, n-1${\rm)}. \end{thm} The knowledge of the sequences $r_1^{(\mu)}, r_2^{(\mu)}, r_3^{(\mu)}$ allows us to find, step by step, the points $\frac{1}{2}P, \frac{1}{4}P, \frac{1}{8}P$, etc. \begin{ex} Let $P=(x_0,y_0)$, let $R$ be a point of $E$ such that $4R=P$, and let $Q=2R=(x_1,y_1)$.
By formulas (\ref{x1}) and (\ref{chap}), $$x_1=x_0+(r_1 r_2+r_2 r_3+r_3 r_1), \ y_1=-(r_1+r_2)(r_2+r_3)(r_3+r_1),$$ where the square roots $$r_i=\sqrt{x_0-\alpha_i}, \ i=1,2,3,$$ are chosen in such a way that $r_1 r_2 r_3=-y_0$. Further, let $$r_i^{(1)}=\sqrt{(r_i+r_j)(r_i+r_k)}$$ be square roots that are chosen in such a way that $$r_1^{(1)} r_2^{(1)} r_3^{(1)}=-y_1=(r_1+r_2)(r_2+r_3)(r_3+r_1).$$ In light of (\ref{x1}) and (\ref{chap}), $$x(R)=x_1+r_1^{(1)} r_2^{(1)}+r_2 ^{(1)}r_3^{(1)}+r_3^{(1)} r_1^{(1)},$$ $$y(R)=-(r_1^{(1)}+r_2^{(1)})(r_2^{(1)}+r_3^{(1)})(r_3^{(1)}+r_1^{(1)}),$$ which implies that \begin{equation}\label{1/4} \begin{aligned}x(R)=x_0+(r_1 r_2+r_2 r_3+r_3 r_1)+(r_1^{(1)} r_2^{(1)}+r_2 ^{(1)}r_3^{(1)}+r_3^{(1)} r_1^{(1)}),\\ y(R)=-(r_1^{(1)}+r_2^{(1)})(r_2^{(1)}+r_3^{(1)})(r_3^{(1)}+r_1^{(1)}).\end{aligned} \end{equation} \end{ex} \section{Torsion of elliptic curves} \label{torsion} In the sequel, we will freely use the following well-known elementary observation. {\sl Let $\kappa$ be a nonzero element of $K$. Then there is a canonical isomorphism of the elliptic curves $$E: y^2=(x-\alpha_1)(x-\alpha_2)(x-\alpha_3)$$ and $$E(\kappa): {y^{\prime}}^2=\left(x^{\prime}-\frac{\alpha_1}{\kappa^2}\right) \left(x^{\prime}-\frac{\alpha_2}{\kappa^2}\right)\left(x^{\prime}-\frac{\alpha_3}{\kappa^2}\right)$$ that is given by the change of variables $$x^{\prime}=\frac{x}{\kappa^2}, \ y^{\prime}=\frac{y}{\kappa^3}$$ and respects the group structure. Under this isomorphism the point $(\alpha_i,0)\in E(K)$ goes to $(\alpha_i / \kappa^2 ,0)\in E(\kappa)(K)$ for all $i=1,2,3$. In addition, if $P=(0,y(P))$ lies in $E(K)$, then it goes (under this isomorphism) to $(0,y(P)/\kappa^3)\in E(\kappa)(K)$.} We will also use the following classical result of Hasse (Hasse bound) \cite[Th. 4.2 on p. 97]{Wash}. 
\begin{thm} \label{hasse} If $q$ is a prime power, $\mathbb{F}_q$ a $q$-element finite field and $E$ is an elliptic curve over $\mathbb{F}_q$, then $E(\mathbb{F}_q)$ is a finite abelian group whose cardinality $|E(\mathbb{F}_q)|$ satisfies the inequalities \begin{equation} \label{HasseB} q-2\sqrt{q}+1 \le |E(\mathbb{F}_q)|\leq q+2\sqrt{q}+1. \end{equation} \end{thm} Another result that we are going to use is the following immediate corollary of a celebrated theorem of B. Mazur (\cite{Mazur}, \cite[Th. 2.5.2 and p. 187]{Robledo}). \begin{thm} \label{mazurQ} If $E$ is an elliptic curve over $\Q$ and the torsion subgroup $E(\Q)_t$ of $E(\Q)$ is not cyclic, then $E(\Q)_t$ is isomorphic to $\mathbb{Z}/2m\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$ with $m=1,2,3$ or $4$. In particular, if $m=3$ or $4$ and $E(\Q)$ contains a subgroup isomorphic to $\mathbb{Z}/2m\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$, then $E(\Q)_t$ is isomorphic to $\mathbb{Z}/2m\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$. \end{thm} The next assertion follows readily from the list of possible torsion subgroups of elliptic curves over quadratic fields obtained by S. Kamienny \cite{Kam} and M.A. Kenku - F. Momose \cite{Ken} (see also \cite[Th. 1]{KamN}). \begin{thm} \label{Kquad} Let $E$ be an elliptic curve over a quadratic field $K$. Assume that all points of order 2 on $E$ are defined over $K$. Let $E(K)_t$ be the torsion subgroup of $E(K)$. Then $E(K)_t$ is isomorphic either to $\mathbb{Z}/4\mathbb{Z}\oplus \mathbb{Z}/4\mathbb{Z}$ or to $\mathbb{Z}/2m\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$ with $1\le m\le 6$. In particular, $E(K)_t$ enjoys the following properties. \begin{enumerate} \item If $m=5$ or $6$ and $E(K)$ contains a subgroup isomorphic to $\mathbb{Z}/2m\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$, then $E(K)_t$ is isomorphic to $\mathbb{Z}/2m\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$. 
\item If $E(K)$ contains a subgroup isomorphic to $\mathbb{Z}/4\mathbb{Z}\oplus \mathbb{Z}/4\mathbb{Z}$, then $E(K)_t$ is isomorphic to $\mathbb{Z}/4\mathbb{Z}\oplus \mathbb{Z}/4\mathbb{Z}$. \end{enumerate} \end{thm} \section{Rational points of order 4} \label{l4} We are going to describe explicitly elliptic curves (\ref{E2}) that contain a $K$-point of order $4$. In order to do that, we consider the elliptic curve $$\mathcal{E}_{1,\lambda}: y^2=(x+\lambda^2)(x+1)x$$ over $K$. Here $\lambda$ is an element of $K\setminus \{0, \pm 1\}$. In this case, we have $$\alpha_1=-\lambda^2,\ \alpha_2=-1,\ \alpha_3=0.$$ Notice that $$\mathcal{E}_{1,\lambda}=\mathcal{E}_{1,-\lambda}.$$ All three differences $$\alpha_3-\alpha_1=\lambda^2,\ \alpha_3-\alpha_2=1^2,\ \alpha_3 - \alpha_3=0^2$$ are squares in $K$. Dividing the order $2$ point $W_3=(0,0)\in \mathcal{E}_{1,\lambda}(K)$ by $2$, we get $r_3=0$ and the four choices $$r_1=\pm \lambda,\ r_2=\pm 1.$$ Now Example \ref{halfW3} gives us four points $Q$ with $2Q=W_3$, namely, $$(\lambda, \mp (\lambda+1)\lambda),\ (-\lambda, \pm (\lambda-1)\lambda).$$ This implies that the group $\mathcal{E}_{1,\lambda}(K)$ contains the subgroup generated by any such $Q$ and $W_1$, which is isomorphic to $\mathbb{Z}/4\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$. \begin{rem} \label{W3L} Our computations show that if $Q$ is a $K$-point on $\mathcal{E}_{1,\lambda}$, then $$2Q=W_3 \text{ if and only if } x(Q)=\pm \lambda.$$ Both cases (signs) do occur. \end{rem} \begin{rem} There is another family of elliptic curves (\cite[Table 3 on p. 217]{Kubert}; see also \cite[Part 2]{Silver}, \cite[Appendix E]{Robledo}) $$\mathfrak{E}_{1,t}: y^2+xy-\left(t^2-\frac{1}{16}\right)y=x^3-\left(t^2-\frac{1}{16}\right)x^2$$ whose group of $K$-points contains a subgroup isomorphic to $\mathbb{Z}/4\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$.
If we put $$y_1:=y+\frac{x-(t^2-\frac{1}{16})}{2},$$ then the equation may be rewritten as $$y_1^2=x^3-\left(t^2-\frac{1}{16}\right)x^2+\left[\frac{x-(t^2-\frac{1}{16})}{2}\right]^2=\left(x-t^2+\frac{1}{16}\right)\left(x+\frac{t}{2}+\frac{1}{8}\right)\left(x-\frac{t}{2}+\frac{1}{8}\right).$$ If we put $x_1:=x-t^2+1/16$, then the equation becomes $$y_1^2=x_1\left(x_1+\left(t+\frac{1}{4}\right)^2\right)\left(x_1+\left(t-\frac{1}{4}\right)^2\right),$$ which defines the elliptic curve $\mathcal{E}_{1,\lambda}(1/\kappa)$ with $$\lambda=\frac{t-\frac{1}{4}}{t+\frac{1}{4}}, \ \kappa=t+\frac{1}{4}.$$ In particular, $\mathfrak{E}_{1,t}$ is isomorphic to $\mathcal{E}_{1,\lambda}$. \end{rem} \begin{thm} \label{family2} Let $E$ be an elliptic curve over $K$. Then $E(K)$ contains a subgroup isomorphic to $\mathbb{Z}/4\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$ if and only if there exists $\lambda \in K \setminus \{0,\pm 1\}$ such that $E$ is isomorphic to $\mathcal{E}_{1,\lambda}$. \end{thm} \begin{proof} We already know that $\mathcal{E}_{1,\lambda}(K)$ contains a subgroup isomorphic to $\mathbb{Z}/4\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$. Conversely, suppose that $E$ is an elliptic curve over $K$ such that $E(K)$ contains a subgroup isomorphic to $\mathbb{Z}/4\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$. Then $E(K)$ contains all three points of order 2, and therefore $E$ can be represented in the form \eqref{E2}. It is also clear that at least one of the points \eqref{W2} is divisible by 2 in $E(K)$. Suppose that $W_3$ is divisible by $2$. We may assume that $\alpha_3=0$. By Theorem \ref{th0}, both nonzero differences $$-\alpha_1=\alpha_3-\alpha_1, \ -\alpha_2=\alpha_3-\alpha_2$$ are squares in $K$; in addition, they are {\sl distinct} elements of $K$. Thus there are nonzero $a,b \in K$ such that $a \ne \pm b$ and $-\alpha_1=a^2,\ -\alpha_2=b^2$. 
Since $\alpha_3=0$, the equation for $E$ is $$E: y^2=(x+a^2)(x+b^2)x.$$ If we put $\kappa=b$, then we obtain that $E$ is isomorphic to $$E(\kappa): {y^{\prime}}^2=\left(x^{\prime}+\frac{a^2}{b^2}\right)(x^{\prime}+1) x^{\prime},$$ which is nothing else but $\mathcal{E}_{1,\lambda}$ with $\lambda=a/b$. \end{proof} \begin{cor} Let $E$ be an elliptic curve over $\mathbb{F}_5$. The group $E(\mathbb{F}_5)$ is isomorphic to $\mathbb{Z}/4\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$ if and only if $E$ is isomorphic to the elliptic curve $y^2=x^3-x$. \end{cor} \begin{proof} Suppose that $E(\mathbb{F}_5)$ is isomorphic to $\mathbb{Z}/4\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$. By Theorem \ref{family2}, $E$ is isomorphic to $$y^2=(x+\lambda^2)(x+1)x \ \text{ with } \lambda \in \mathbb{F}_5 \setminus \{0,1,-1\}.$$ This implies that $\lambda=\pm 2, \lambda^2=-1$, and so $E$ is isomorphic to $$\mathcal{E}_{1,2}: y^2=(x-1)(x+1)x=x^3-x.$$ Now we need to check that $\mathcal{E}_{1,2}(\mathbb{F}_5)\cong \mathbb{Z}/4\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$. By Theorem \ref{family2}, $E(\mathbb{F}_5)$ contains a subgroup isomorphic to $\mathbb{Z}/4\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$; in particular, $8$ divides $|E(\mathbb{F}_5)|$. In order to finish the proof, it suffices to check that $|E(\mathbb{F}_5)|<16$, but this inequality follows from the Hasse bound \eqref{HasseB}: $$|E(\mathbb{F}_5)|\le 5+2\sqrt{5}+1<11.$$ \end{proof} \begin{cor} Let $E$ be an elliptic curve over $\mathbb{F}_7$. The group $E(\mathbb{F}_7)$ is isomorphic to $\mathbb{Z}/4\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$ if and only if $E$ is isomorphic to the elliptic curve $y^2=(x+2)(x+1)x$. \end{cor} \begin{proof} Suppose that $E(\mathbb{F}_7)$ is isomorphic to $\mathbb{Z}/4\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$. It follows from Theorem \ref{family2} that $E$ is isomorphic to $y^2=(x+\lambda^2)(x+1)x$ with $\lambda \in \mathbb{F}_7 \setminus \{0,1,-1\}$.
This implies that $\lambda=\pm 2$ or $\pm 3$, and therefore $\lambda^2=4$ or $2$, i.e., $E$ is isomorphic to one of the two elliptic curves $$\mathcal{E}_{1,3}:y^2=(x+2)(x+1)x, \ \mathcal{E}_{1,2}: y^2=(x+4)(x+1)x.$$ Since $ 1/4=2$ in $\mathbb{F}_7$, the elliptic curve $\mathcal{E}_{1,3}$ coincides with $\mathcal{E}_{1,2}(2)$; in particular, $\mathcal{E}_{1,2}$ and $\mathcal{E}_{1,3}$ are isomorphic. Now suppose that $E=\mathcal{E}_{1,2}$. We need to prove that $E(\mathbb{F}_7)$ is isomorphic to $\mathbb{Z}/4\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$. By Theorem \ref{family2}, $E(\mathbb{F}_7)$ contains a subgroup isomorphic to $\mathbb{Z}/4\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$; in particular, $8$ divides $|E(\mathbb{F}_7)|$. In order to finish the proof, it suffices to check that $|E(\mathbb{F}_7)|<16$, but this inequality follows from the Hasse bound \eqref{HasseB} $$|E(\mathbb{F}_7)|\le 7+2\sqrt{7}+1<14.$$ \end{proof} \begin{thm} \label{full4} Suppose that $K$ contains $\mathbf{i}=\sqrt{-1}$. Let $a,b$ be nonzero elements of $K$ such that $a \ne \pm b, \ a \ne \pm \mathbf{i}b$. Let us consider the elliptic curve $$E_{a,b}: y^2=(x-\alpha_1)(x-\alpha_2)(x-\alpha_3)$$ over $K$ with $\alpha_1=(a^2-b^2)^2, \ \alpha_2=(a^2+b^2)^2,\ \alpha_3=0$. Then all points of order $2$ on $E$ are divisible by $2$ in $E(K)$, i.e., $E(K)$ contains all twelve points of order $4$. In particular, $E_{a,b}(K)$ contains a subgroup isomorphic to $\mathbb{Z}/4\mathbb{Z}\oplus \mathbb{Z}/4\mathbb{Z}$. \end{thm} \begin{proof} Clearly, all $\alpha_i$ and $-\alpha_j$ are squares in $K$. In addition, $$\alpha_2-\alpha_1=(a^2+b^2)^2-(a^2-b^2)^2=(2ab)^2, \ \alpha_1-\alpha_2= (2\mathbf{i}ab)^2. $$ This implies that all $\alpha_i-\alpha_j$ are squares in $K$. It follows from Theorem \ref{th0} that all points $W_i=(\alpha_i,0)$ of order $2$ are divisible by $2$ in $E(K)$, and therefore $E(K)$ contains all twelve ($3\times 4$) points of order $4$. 
\end{proof} Keeping the notation and assumptions of Theorem \ref{full4}, we describe explicitly all twelve points of order 4, using formula (\ref{halfP}). \begin{enumerate} \item Dividing the point $W_2=(\alpha_2,0)=\left((a^2+b^2)^2,0\right)$ by 2, we have $r_2=0$ and get four choices $r_1= \pm 2ab, \ r_3=\pm (a^2+b^2)$. This gives us four points $Q$ with $2Q=W_2$, namely, two points $$\left((a^2+b^2)^2+2ab(a^2+b^2),\ \pm (a^2+b^2+2ab)2ab(a^2+b^2)\right)$$ $$=\left((a^2+b^2)(a+b)^2,\ \pm 2ab(a^2+b^2)(a+b)^2\right)$$ and two points $\left((a^2+b^2)(a-b)^2,\ \pm 2ab(a^2+b^2)(a-b)^2\right)$. \item Dividing the point $W_3=(\alpha_3,0)=(0,0)$ by $2$, we have $r_3=0$ and get four choices $r_1=\pm \mathbf{i} (a^2-b^2), \ r_2= \pm \mathbf{i} (a^2+b^2)$. This gives us four points $Q$ with $2Q=W_3$, namely, two points $$\left((a^2-b^2)(a^2+b^2),\ \pm 2\mathbf{i}b^2(a^2-b^2)(a^2+b^2)\right)$$ $$=\left(a^4-b^4,\ \pm 2\mathbf{i}b^2(a^4-b^4)\right)$$ and two points $\left(b^4-a^4,\ \pm 2\mathbf{i}a^2(b^4-a^4)\right)$. \item Dividing the point $W_1=(\alpha_1,0)=\left((a^2-b^2)^2,0\right)$ by $2$, we have $r_1=0$ and get four choices $r_2=\pm 2\mathbf{i}ab, \ r_3=\pm (a^2-b^2)$. This gives us four points $Q$ with $2Q=W_1$, namely, two points $$\left((a^2-b^2)^2+2\mathbf{i}ab(a^2-b^2),\ \pm (2\mathbf{i}ab+(a^2-b^2))2\mathbf{i}ab(a^2-b^2)\right) $$ $$=\left((a^2-b^2)(a+\mathbf{i}b)^2,\ \pm 2\mathbf{i}ab(a^2-b^2)(a+\mathbf{i}b)^2\right)$$ and two points $\left((a^2-b^2)(a-\mathbf{i}b)^2, \ \pm 2\mathbf{i}ab(a^2-b^2)(a-\mathbf{i}b)^2\right)$. \end{enumerate} \begin{rem} \label{rem44} Let $\lambda$ be an element of $K \setminus \{0, \pm 1, \pm \sqrt{-1}\}$. We write $\mathcal{E}_{2,\lambda}$ for the elliptic curve $$\mathcal{E}_{2,\lambda}: y^2= \left(x+\frac{(\lambda^2-1)^2}{(\lambda^2+1)^2}\right)(x+1)x $$ over $K$. The elliptic curves $\mathcal{E}_{2,\lambda}$ and $E_{a,b}$ are isomorphic if $a=\lambda b$.
Indeed, one has only to put $\kappa=\mathbf{i}(a^2+b^2)$ (so that $\kappa^2=-(a^2+b^2)^2$) and notice that $E_{a,b}(\kappa)=\mathcal{E}_{2,\lambda}$. It follows from Theorem \ref{full4} that $\mathcal{E}_{2,\lambda}(K)$ contains a subgroup isomorphic to $\mathbb{Z}/4\mathbb{Z}\oplus \mathbb{Z}/4\mathbb{Z}$. There is another family of elliptic curves with this property, namely, $$y^2=x(x-1)\left(x-\frac{(u+u^{-1})^2}{4}\right) $$ (\cite{Shioda}, \cite[pp. 451--453]{Silver}; see also Remark \ref{equivfam}). \end{rem} \begin{thm} \label{family4} Let $E$ be an elliptic curve over $K$. Then $E(K)$ contains a subgroup isomorphic to $\mathbb{Z}/4\mathbb{Z}\oplus \mathbb{Z}/4\mathbb{Z}$ if and only if $K$ contains $\sqrt{-1}$ and there exists $\lambda \in K \setminus \{0,\pm 1, \pm \sqrt{-1} \}$ such that $E$ is isomorphic to $\mathcal{E}_{2,\lambda}$. \end{thm} \begin{proof} Recall (Remark \ref{rem44}) that $\mathcal{E}_{2,\lambda}(K)$ contains a subgroup isomorphic to $\mathbb{Z}/4\mathbb{Z}\oplus \mathbb{Z}/4\mathbb{Z}$. Conversely, suppose that $E$ is an elliptic curve over $K$ and $E(K)$ contains a subgroup isomorphic to $\mathbb{Z}/4\mathbb{Z}\oplus \mathbb{Z}/4\mathbb{Z}$. By Theorem \ref{family2}, there is $\delta \in K \setminus \{0,\pm 1\}$ such that $E$ is isomorphic to $$\mathcal{E}_{1,\delta}:y^2=(x+\delta^2)(x+1)x.$$ Hence we may assume that $\alpha_1=-\delta^2, \alpha_2=-1, \alpha_3=0$. It follows from Theorem \ref{th0} that all $\pm 1, \pm (\delta^2-1)$ are squares in $K$. (In particular, $\mathbf{i}=\sqrt{-1}$ lies in $K$.) So, there is $\gamma \in K$ with $\gamma^2=1-\delta^2$. Clearly, $\gamma \ne 0, \pm 1$.
We have $$\delta^2+\gamma^2=1.$$ The well-known parametrization of the ``unit circle'' (that goes back to Euler) tells us that there exists $\lambda \in K$ such that $\lambda^2+1 \ne 0$ and $$\delta=\frac{\lambda^2-1}{\lambda^2+1}, \ \gamma= \frac{2\lambda}{\lambda^2+1}.$$ Now one has only to plug in the formula for $\delta$ into the equation of $\mathcal{E}_{1,\delta}$ and get $\mathcal{E}_{2,\lambda}$. \end{proof} \begin{rem}\label{equivfam} Using a different parametrization of the unit circle in the proof of Theorem \ref{family4}, we obtain the family of elliptic curves $$E: y^2= \left(x+\frac{(2\lambda)^2}{(\lambda^2+1)^2}\right)(x+1)x$$ with the same property as the family $\mathcal{E}_{2,\lambda}$. Notice that, for each $\lambda\in K\setminus\{0,\pm1\}$, the elliptic curve $E$ is isomorphic to the elliptic curve $$y^2=x(x-1)\left(x- (u+u^{-1})^2/4 \right)$$ mentioned in Remark \ref{rem44}. Indeed, the latter differs from $E(\kappa)$, where $ \kappa=2\lambda\sqrt{-1}/(\lambda^2+1)$, only in the name of the parameter ($u$ in place of $\lambda$). \end{rem} \begin{cor} Let $E$ be an elliptic curve over $\mathbb{F}_q$, where $q=9,13,17$. The group $E(\mathbb{F}_q)$ is isomorphic to $\mathbb{Z}/4\mathbb{Z}\oplus \mathbb{Z}/4\mathbb{Z}$ if and only if $E$ is isomorphic to one of the elliptic curves $\mathcal{E}_{2,\lambda}$. If $q=9$, then $E(\mathbb{F}_q)$ is isomorphic to $\mathbb{Z}/4\mathbb{Z}\oplus \mathbb{Z}/4\mathbb{Z}$ if and only if $E$ is isomorphic to $y^2=x^3-x$. \end{cor} \begin{proof} First, $\mathbb{F}_q$ contains $\sqrt{-1}$. Suppose that $E(\mathbb{F}_q)$ is isomorphic to $\mathbb{Z}/4\mathbb{Z}\oplus \mathbb{Z}/4\mathbb{Z}$. It follows from Theorem \ref{family4} that $E$ is isomorphic to $\mathcal{E}_{2,\lambda}$. Conversely, suppose that $E$ is isomorphic to one of these curves. We need to prove that $E(\mathbb{F}_q)$ is isomorphic to $\mathbb{Z}/4\mathbb{Z}\oplus \mathbb{Z}/4\mathbb{Z}$.
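Both directions of the corollary can also be confirmed by machine for $q=13$ and $q=17$. An illustrative Python aside (not part of the argument): for every admissible $\lambda$ it recomputes $c_0=(\lambda^2-1)^2/(\lambda^2+1)^2$ and checks that $y^2=(x+c_0)(x+1)x$ has $16$ points, full $2$-torsion, and exponent $4$, which characterizes $\mathbb{Z}/4\mathbb{Z}\oplus \mathbb{Z}/4\mathbb{Z}$.

```python
# Brute-force confirmation for q = 13 and q = 17 (illustrative only):
# every admissible lambda gives a curve with group Z/4 + Z/4.
def is_4_4(p, c0):
    A, B = (c0 + 1) % p, c0 % p          # y^2 = x(x+1)(x+c0) = x^3 + A x^2 + B x
    def add(P, Q):
        if P is None: return Q
        if Q is None: return P
        x1, y1 = P; x2, y2 = Q
        if x1 == x2 and (y1 + y2) % p == 0:
            return None
        if P == Q:
            m = (3*x1*x1 + 2*A*x1 + B) * pow(2*y1, p - 2, p) % p
        else:
            m = (y2 - y1) * pow(x2 - x1, p - 2, p) % p
        x3 = (m*m - A - x1 - x2) % p
        return (x3, (m*(x1 - x3) - y1) % p)
    pts = [None] + [(x, y) for x in range(p) for y in range(p)
                    if (y*y - x*(x + 1)*(x + c0)) % p == 0]
    two_torsion = sum(1 for P in pts[1:] if P[1] == 0)
    exponent_4 = all(add(add(P, P), add(P, P)) is None for P in pts)
    return len(pts) == 16 and two_torsion == 3 and exponent_4

ok = True
for p, i in [(13, 5), (17, 4)]:
    assert i * i % p == p - 1            # sqrt(-1) exists in F_p
    for lam in range(2, p - 1):          # skip 0, 1, p-1
        if lam in (i, p - i):
            continue
        c0 = (lam*lam - 1)**2 * pow((lam*lam + 1)**2, p - 2, p) % p
        ok = ok and is_4_4(p, c0)
print(ok)                                 # -> True
```

Order $16$, three points of order $2$, and exponent $4$ together leave $\mathbb{Z}/4\mathbb{Z}\oplus \mathbb{Z}/4\mathbb{Z}$ as the only possibility.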
By Theorem \ref{family4}, $E(\mathbb{F}_q)$ contains a subgroup isomorphic to $\mathbb{Z}/4\mathbb{Z}\oplus \mathbb{Z}/4\mathbb{Z}$; in particular, $16$ divides $|E(\mathbb{F}_q)|$. In order to finish the proof, it suffices to check that $|E(\mathbb{F}_q)|<32$, but this inequality follows from the Hasse bound \eqref{HasseB} $$|E(\mathbb{F}_q)|\le q+2\sqrt{q}+1\le 17+2\sqrt{17}+1<27.$$ Now assume that $q=9$. Then $\lambda$ is one of the four elements $\pm (1\pm \mathbf{i})$. For all such $\lambda$ we have $$\lambda^2= \pm 2\mathbf{i}=\mp \mathbf{i}, \ \frac{(\lambda^2-1)^2}{(\lambda^2+1)^2}=\frac{(1\mp \mathbf{i})^2}{(-1\mp \mathbf{i})^2}=\frac{\mp 2\mathbf{i}}{\pm 2\mathbf{i}}=-1.$$ Therefore the equation for $\mathcal{E}_{2,\lambda}$ is $$y^2=(x-1)(x+1)x=x^3-x.$$ \end{proof} \begin{cor} Let $E$ be an elliptic curve over $\mathbb{F}_{29}$. The group $E(\mathbb{F}_{29})$ is isomorphic to $\mathbb{Z}/8\mathbb{Z}\oplus \mathbb{Z}/4\mathbb{Z}$ if and only if $E$ is isomorphic to one of the elliptic curves $\mathcal{E}_{2,\lambda}$. \end{cor} \begin{proof} First, $\mathbb{F}_{29}$ contains $\sqrt{-1}$. Suppose that $E(\mathbb{F}_{29})$ is isomorphic to $\mathbb{Z}/8\mathbb{Z}\oplus \mathbb{Z}/4\mathbb{Z}$. Then $E(\mathbb{F}_{29})$ contains a subgroup isomorphic to $\mathbb{Z}/4\mathbb{Z}\oplus \mathbb{Z}/4\mathbb{Z}$. It follows from Theorem \ref{family4} that $E$ is isomorphic to $\mathcal{E}_{2,\lambda}$. Conversely, suppose that $E$ is isomorphic to one of these curves. We need to prove that $E(\mathbb{F}_{29})$ is isomorphic to $\mathbb{Z}/8\mathbb{Z}\oplus \mathbb{Z}/4\mathbb{Z}$. By Theorem \ref{family4}, $E(\mathbb{F}_{29})$ contains a subgroup isomorphic to $\mathbb{Z}/4\mathbb{Z}\oplus \mathbb{Z}/4\mathbb{Z}$; in particular, $16$ divides $|E(\mathbb{F}_{29})|$.
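As an illustrative aside (not part of the argument), the claimed structure can be confirmed by exhaustive computation for one member of the family; $\lambda=2$ over $\mathbb{F}_{29}$ is an arbitrary admissible choice. Counting the points killed by $2$ and by $4$ pins down the invariant factors.

```python
# Spot-check over F_29 (i = 12, since 12^2 = -1): take lambda = 2, so
# c0 = (lambda^2-1)^2/(lambda^2+1)^2 = 9/25 = 5 in F_29, and the curve is
# E_{2,2}: y^2 = (x+5)(x+1)x.  The torsion counts below pin down Z/8 + Z/4.
p = 29
lam = 2
c0 = (lam*lam - 1)**2 * pow((lam*lam + 1)**2, p - 2, p) % p
A, B = (c0 + 1) % p, c0                  # y^2 = x(x+1)(x+c0) = x^3 + A x^2 + B x

def add(P, Q):
    if P is None: return Q
    if Q is None: return P
    x1, y1 = P; x2, y2 = Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if P == Q:
        m = (3*x1*x1 + 2*A*x1 + B) * pow(2*y1, p - 2, p) % p
    else:
        m = (y2 - y1) * pow(x2 - x1, p - 2, p) % p
    x3 = (m*m - A - x1 - x2) % p
    return (x3, (m*(x1 - x3) - y1) % p)

def mul(k, P):
    R = None
    for _ in range(k):
        R = add(R, P)
    return R

points = [None] + [(x, y) for x in range(p) for y in range(p)
                   if (y*y - x*(x + 1)*(x + c0)) % p == 0]
n = len(points)
e2 = sum(1 for P in points if mul(2, P) is None)   # |E[2]| -> 4
e4 = sum(1 for P in points if mul(4, P) is None)   # |E[4]| -> 16
print(n, e2, e4)                                    # -> 32 4 16
```

A group of order $32$ with $|E[2]|=4$ and $|E[4]|=16$, being a product of at most two cyclic groups, must be $\mathbb{Z}/8\mathbb{Z}\oplus \mathbb{Z}/4\mathbb{Z}$.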
The Hasse bound \eqref{HasseB} tells us that $$29+1-2\sqrt{29} \le |E(\mathbb{F}_{29})| \le 29+1+2\sqrt{29}$$ and therefore $$19< |E(\mathbb{F}_{29})|< 41.$$ It follows that $|E(\mathbb{F}_{29})|=32$; in particular, $E(\mathbb{F}_{29})$ is a finite $2$-group. Clearly, $E(\mathbb{F}_{29})$ is isomorphic to a product of two cyclic $2$-groups, each of which has order divisible by $4$. It follows that $E(\mathbb{F}_{29})$ is isomorphic to $\mathbb{Z}/8\mathbb{Z}\oplus \mathbb{Z}/4\mathbb{Z}$. \end{proof} \begin{thm} \label{Qi} Let $K=\Q(\sqrt{-1})$ and $E$ be an elliptic curve over $\Q(\sqrt{-1})$. Then the torsion subgroup $E(\Q(\sqrt{-1}))_t$ of $E(\Q(\sqrt{-1}))$ is isomorphic to $\mathbb{Z}/4\mathbb{Z}\oplus \mathbb{Z}/4\mathbb{Z}$ if and only if there exists $\lambda \in K \setminus \{0,\pm 1, \pm \sqrt{-1} \}$ such that $E$ is isomorphic to $\mathcal{E}_{2,\lambda}$. \end{thm} \begin{proof} By Theorem \ref{Kquad}, if $E(\Q(\sqrt{-1}))$ contains a subgroup isomorphic to $\mathbb{Z}/4\mathbb{Z}\oplus \mathbb{Z}/4\mathbb{Z}$ then $E(\Q(\sqrt{-1}))_t$ is isomorphic to $\mathbb{Z}/4\mathbb{Z}\oplus \mathbb{Z}/4\mathbb{Z}$. Now the desired result follows from Theorem \ref{family4}. \end{proof} \section{Points of order 8} \label{l8} Let us return to the curve $\mathcal{E}_{1,\lambda}$ and consider ${Q}\in \mathcal{E}_{1,\lambda}(K)$ with $2{Q}=W_3$. Let us try to divide ${Q}$ by $2$ in $E(K)$. By Remark \ref{W3L}, $x({Q})=\pm \lambda$. First, we assume that $x(Q)= \lambda$ (such a $Q$ does exist). \begin{lem} \label{divLambda} Let $Q$ be a point of $\mathcal{E}_{1,\lambda}(K)$ with $x(Q)= \lambda$.
Then $Q$ is divisible by $2$ in $\mathcal{E}_{1,\lambda}(K)$ if and only if there exists $c \in K \setminus \{ 0, \pm1, \pm 1\pm \sqrt{2}, \pm \sqrt{-1}\}$ such that $$\lambda=\left[\frac{c-\frac{1}{c}}{2}\right]^2.$$ \end{lem} \begin{proof} We have $$\lambda-\alpha_1=\lambda-(-\lambda^2)=\lambda+\lambda^2, \ \lambda-\alpha_2=\lambda-(-1)=\lambda+1,\ \lambda-\alpha_3=\lambda-0=\lambda.$$ By Theorem \ref{th0}, $ {Q} \in 2\mathcal{E}_{1,\lambda}(K)$ if and only if all three $\lambda+\lambda^2, \lambda+1, \lambda$ are squares in $K$. The latter means that both $\lambda$ and $\lambda+1$ are squares in $K$, i.e., there exist $a,b\in K$ such that $a^2=\lambda+1, \lambda=b^2$. This implies that the pair $(a, b)$ is a $K$-point on the hyperbola $$u^2-v^2=1.$$ Recall that $\lambda \ne 0,\pm 1$. Using the well-known parametrization $$u=\frac{t+\frac{1}{t}}{2}, v=\frac{t-\frac{1}{t}}{2}$$ of the hyperbola, we obtain that both $\lambda$ and $\lambda+1$ are squares in $K$ if and only if there exists a {\sl nonzero} $c \in K$ such that \begin{equation*} \label{lambda8plus} \lambda=\left[\frac{c-\frac{1}{c}}{2}\right]^2. \end{equation*} If this is the case, then $$a=\pm \frac{c+\frac{1}{c}}{2},\ b=\pm \frac{c-\frac{1}{c}}{2}$$ and $$\lambda+1=\left[\frac{c+\frac{1}{c}}{2}\right]^2.$$ Recall that $\lambda \ne 0, \pm 1$. This means that $$\frac{c-\frac{1}{c}}{2} \ne 0, \pm 1, \pm \sqrt{-1}, \ \text{ i.e., }$$ \begin{equation*} c \ne 0, \pm1, \pm 1\pm \sqrt{2}, \pm \sqrt{-1}. \end{equation*} \end{proof} Now let us assume that $x({Q})=-\lambda$ (such a $ {Q}$ does exist). \begin{lem} \label{divMLambda} Let $Q$ be a point of $\mathcal{E}_{1,\lambda}(K)$ with $x( {Q})=- \lambda$. 
Then $Q$ is divisible by $2$ in $\mathcal{E}_{1,\lambda}(K)$ if and only if there exists $c \in K \setminus \{ 0, \pm1, \pm 1\pm \sqrt{2}, \pm \sqrt{-1}\}$ such that $$\lambda=-\left[\frac{c-\frac{1}{c}}{2}\right]^2.$$ \end{lem} \begin{proof} Applying Lemma \ref{divLambda} to $-\lambda$ (instead of $\lambda$) and the curve $\mathcal{E}_{1,-\lambda}=\mathcal{E}_{1,\lambda}$, we obtain that $ {Q}\in 2\mathcal{E}_{1,-\lambda}(K)=2\mathcal{E}_{1,\lambda}(K)$ if and only if there exists $$c \in K \setminus \{ 0, \pm1, \pm 1\pm \sqrt{2}, \pm \sqrt{-1}\}$$ such that \begin{equation*} \label{lambda8minus} -\lambda=\left[\frac{c-\frac{1}{c}}{2}\right]^2. \end{equation*} \end{proof} Lemmas \ref{divLambda} and \ref{divMLambda} give us the following statement. \begin{prop} \label{div4W3} The point $W_3=(0,0)$ is divisible by $4$ in $\mathcal{E}_{1,\lambda}(K)$ if and only if there exists $c \in K$ such that $c \ne 0, \pm 1, \pm 1\pm \sqrt{2}, \pm \sqrt{-1}$ and $$\lambda=\pm \left[\frac{c-\frac{1}{c}}{2}\right]^2, \ \text{ i.e., } \ \lambda^2= \left[\frac{c-\frac{1}{c}}{2}\right]^4.$$ \end{prop} \begin{prop} \label{W3by4} The following conditions are equivalent. \begin{itemize} \item[(i)] If $Q \in \mathcal{E}_{1,\lambda}(K)$ is any point with $2Q=W_3$, then $Q$ lies in $2 \mathcal{E}_{1,\lambda}(K)$. \item[(ii)] If $R$ is any point of $\mathcal{E}_{1,\lambda}$ with $4R=W_3$, then $R$ lies in $\mathcal{E}_{1,\lambda}(K)$. \item[(iii)] There exist $c,d \in K\setminus \{ 0, \pm1, \pm 1\pm \sqrt{2}, \pm \sqrt{-1}\}$ such that $$\lambda= \left[\frac{c-\frac{1}{c}}{2}\right]^2, \ -\lambda= \left[\frac{d-\frac{1}{d}}{2}\right]^2.$$ \end{itemize} If these equivalent conditions hold, then $K$ contains $\sqrt{-1}$ and $\mathcal{E}_{1,\lambda}(K)$ contains all (twelve) points of order 4. \end{prop} \begin{proof} The equivalence of (i) and (ii) is obvious. It is also clear that (ii) implies that all points of order (dividing) 4 lie in $\mathcal{E}_{1,\lambda}(K)$. 
Recall (Remark \ref{W3L}) that $Q$ with $2Q=W_3$ are exactly the points of $\mathcal{E}_{1,\lambda}$ with $x(Q)=\pm \lambda$. Now the equivalence of (ii) and (iii) follows from Lemmas \ref{divLambda} and \ref{divMLambda}. In order to finish the proof, we notice that $\lambda \ne 0$ and $$-1=\frac{-\lambda}{\lambda}=\left[\frac{ \left[\frac{d-\frac{1}{d}}{2}\right]}{\left[\frac{c-\frac{1}{c}}{2}\right]}\right]^2.$$ \end{proof} Suppose that $$\lambda= \left[\frac{c-\frac{1}{c}}{2}\right]^2 \ \text{ with } c \in K \setminus \{ 0, \pm1, \pm 1\pm \sqrt{2}, \pm \sqrt{-1}\}$$ and consider $Q=(\lambda, (\lambda+1)\lambda)\in \mathcal{E}_{1,\lambda}(K)$ of order $4$ with $2Q=W_3$. Let us find a point $R \in \mathcal{E}_{1,\lambda}(K)$ of order $8$ with $2R=Q$. First, notice that $$Q=(\lambda, (\lambda+1)\lambda)= \left(\left[\frac{c-\frac{1}{c}}{2}\right]^2, \left[\frac{c+\frac{1}{c}}{2}\right]^2 \cdot \left[\frac{c-\frac{1}{c}}{2}\right]^2 \right)$$ $$=\left(\frac{(c^2-1)^2}{4c^2},\frac{(c^4-1)^2}{16c^4}\right).$$ We have $$r_1=\sqrt{\lambda+\lambda^2}=\sqrt{(\lambda+1)\lambda}, \ r_2=\sqrt{\lambda+1}, \ r_3=\sqrt{\lambda}; \ r_1 r_2 r_3=-(\lambda+1)\lambda.$$ This means that $$r_1=\pm \frac{c-\frac{1}{c}}{2}\cdot \frac{c+\frac{1}{c}}{2}, \ r_2=\pm \frac{c+\frac{1}{c}}{2}, \ r_3= \pm \frac{c-\frac{1}{c}}{2},$$ and the signs should be chosen in such a way that the product $\ r_1 r_2 r_3$ coincides with $$-\left[\frac{c-\frac{1}{c}}{2}\right]^2\cdot \left[\frac{c+\frac{1}{c}}{2}\right]^2.$$ For example, we may take $$r_1=- \frac{c-\frac{1}{c}}{2}\cdot \frac{c+\frac{1}{c}}{2}=-\frac{c^2-\frac{1}{c^2}}{4}=-\frac{c^4-1}{4c^2}, \ r_2= \frac{c+\frac{1}{c}}{2}, \ r_3= \frac{c-\frac{1}{c}}{2}$$ and get (since $r_2+r_3=c$ and $r_2 r_3=(c^4-1)/(4c^2)$) $$r_1+r_2+r_3=-\frac{c^4-1}{4c^2}+c=\frac{-c^4+4c^3+1}{4c^2},$$ $$r_1 r_2+r_2 r_3+r_3 r_1=c r_1+r_2 r_3= -\frac{c(c^4-1)}{4c^2}+ \frac{c^4-1}{4c^2} =\frac{(1-c)(c^4-1)}{4c^2}.$$ Now \eqref{x1} and \eqref{chap} tell us that the
coordinates of the corresponding $R$ with $2R=Q$ are as follows: $$x(R)=x(Q)+r_1 r_2+r_2 r_3+r_3 r_1= \frac{(c^2-1)^2}{4c^2}+\frac{(1-c)(c^4-1)}{4c^2}=\frac{(1-c)^3(c+1)}{4c},$$ $$y(R)=-(r_1+r_2)(r_2+r_3)(r_1+r_3)=$$ $$-\left(- \frac{c-\frac{1}{c}}{2}\cdot \frac{c+\frac{1}{c}}{2}+ \frac{c+\frac{1}{c}}{2}\right) c \left(- \frac{c-\frac{1}{c}}{2}\cdot \frac{c+\frac{1}{c}}{2}+ \frac{c-\frac{1}{c}}{2}\right) =$$ $$-\left(1- \frac{c-\frac{1}{c}}{2}\right)\cdot \frac{c+\frac{1}{c}}{2}\cdot c \cdot \left(1- \frac{c+\frac{1}{c}}{2}\right) \frac{c-\frac{1}{c}}{2}=$$ $$-\frac{c^2-\frac{1}{c^2}}{16}\cdot \left(c-2-\frac{1}{c}\right)\left(c-2+\frac{1}{c}\right) c= -\frac{\left(c^2-\frac{1}{c^2}\right)\left((c-2)^2-\frac{1}{c^2} \right)c}{16}.$$ So, we get the $K$-point of order 8 \begin{equation*} \label{order8} R=\left(\frac{(1-c)^3(c+1)}{4c}, -\frac{\left(c^2-\frac{1}{c^2}\right)\left((c-2)^2-\frac{1}{c^2}\right)c}{16}\right) \end{equation*} on the elliptic curve $$\mathcal{E}_{4,c}:=\mathcal{E}_{1,\left(\pm \frac{c-\frac{1}{c}}{2}\right)^2}: y^2=\left[x+\left(\frac{c-\frac{1}{c}}{2}\right)^4\right](x+1)x$$ for any $c \in K\setminus \{0,\pm 1,\pm 1 \pm \sqrt{2}, \pm \sqrt{-1}\}$. The group $\mathcal{E}_{4,c}(K)$ contains the subgroup generated by $R$ and $W_1$, which is isomorphic to $\mathbb{Z}/8\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$. \begin{thm} \label{family8} Let $E$ be an elliptic curve over $K$. Then $E(K)$ contains a subgroup isomorphic to $\mathbb{Z}/8\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$ if and only if there exists $c \in K \setminus \{ 0, \pm1, \pm 1\pm \sqrt{2}, \pm \sqrt{-1}\}$ such that $E$ is isomorphic to $\mathcal{E}_{4,c}$. \end{thm} \begin{proof} We know that $\mathcal{E}_{4,c}(K)$ contains a subgroup isomorphic to $\mathbb{Z}/8\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$. Conversely, suppose that $E(K)$ contains a subgroup isomorphic to $\mathbb{Z}/8\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$. 
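Before developing the converse further, note that the explicit point $R$ above can be verified exactly with rational arithmetic. An illustrative Python sketch (not part of the proof) with the arbitrary admissible choice $c=2$ over $\Q$, so that $\lambda=((c-1/c)/2)^2=9/16$:

```python
from fractions import Fraction as F

# Exact check (c = 2 over Q) of the order-8 point R on
# E_{4,c}: y^2 = (x + lam^2)(x + 1)x with lam = ((c - 1/c)/2)^2 = 9/16.
c = F(2)
lam = ((c - 1/c) / 2) ** 2
A, B = lam**2 + 1, lam**2          # y^2 = x^3 + A x^2 + B x

def add(P, Q):
    """Exact chord-tangent addition over Q; None is the point at infinity."""
    if P is None: return Q
    if Q is None: return P
    x1, y1 = P; x2, y2 = Q
    if x1 == x2 and y1 + y2 == 0:
        return None
    m = ((3*x1*x1 + 2*A*x1 + B) / (2*y1)) if P == Q else ((y2 - y1) / (x2 - x1))
    x3 = m*m - A - x1 - x2
    return (x3, m*(x1 - x3) - y1)

R = ((1 - c)**3 * (c + 1) / (4*c),
     -(c**2 - c**-2) * ((c - 2)**2 - c**-2) * c / 16)
assert R[1]**2 == (R[0] + lam**2) * (R[0] + 1) * R[0]   # R lies on the curve
R2 = add(R, R); R4 = add(R2, R2); R8 = add(R4, R4)
print("2R =", R2, " 4R =", R4, " 8R =", R8)
```

Here $2R=(\lambda,(\lambda+1)\lambda)$, $4R=W_3=(0,0)$ and $8R=O$, so $R$ indeed has order $8$.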
This implies that $E(K)$ contains all three points of order 2, i.e., $E$ may be represented in the form \eqref{E2}. Clearly, one of the points \eqref{W2} is divisible by 4 in $E(K)$. We may assume that $W_3$ is divisible by 4. We may also assume that $\alpha_3=0$, i.e., $W_3=(0,0)$. Then we know that there exist distinct nonzero $a,b \in K$ such that $\alpha_1=-a^2, \alpha_2=-b^2$, i.e., the equation of $E$ is $$y^2=(x+a^2)(x+b^2)x.$$ Replacing $E$ by $E(b)$ and putting $\lambda=a/b$, we may assume that $$E=\mathcal{E}_{1,\lambda}:y^2=(x+\lambda^2)(x+1)x.$$ Since $W_3$ is divisible by 4 in $\mathcal{E}_{1,\lambda}(K)$, the desired result follows from Proposition \ref{div4W3}. \end{proof} \begin{rem} There is another family of elliptic curves (\cite[Table 3 on p. 217]{Kubert}, \cite[Appendix E]{Robledo}) $$y^2+(1-a(t))xy -b(t)y=x^3-b(t)x^2$$ with $$a(t)=\frac{(2t+1)(8t^2+4t+1)}{2(4t+1)(8t^2-1)t},\ b(t)=\frac{(2t+1)(8t^2+4t+1)}{(8t^2-1)^2},$$ whose group of rational points contains a subgroup isomorphic to $\mathbb{Z}/8\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$. Let us assume that $t$ is an element of an arbitrary field $K$ (with $\fchar(K)\ne 2$) such that $$t \ne 0, \ 8t^2-1 \ne 0, \ 4t+1 \ne 0$$ and put $$U(t):=(2t+1)(8t^2+4t+1), \ A(t)=2(4t+1)(8t^2-1)t \ne 0, \ B(t)= (8t^2-1)^2 \ne 0,$$ $$ a(t)=\frac{U(t)}{A(t)}, \ b(t)=\frac{U(t)}{B(t)}.$$ Let us consider the cubic curve $\mathfrak{E}_{4,t}$ over $K$ defined by the same equation $$\mathfrak{E}_{4,t}: y^2+(1-a(t))xy -b(t)y=x^3-b(t)x^2$$ as above. In light of Theorem \ref{family8}, if $\mathfrak{E}_{4,t}$ is an elliptic curve over $K$, then $\mathfrak{E}_{4,t}$ is isomorphic to $\mathcal{E}_{4,c}$ for a certain $c \in K$. Let us find the corresponding $\lambda$ (as a rational function of $t$).
First, rewrite the equation for $\mathfrak{E}_{4,t}$ as $$\left(y+\frac{(1-a(t))x-b(t)}{2}\right)^2=x^3-b(t)x^2+\left(\frac{(1-a(t))x-b(t)}{2}\right)^2,$$ i.e., $$\left(y+\frac{(1-a(t))x-b(t)}{2}\right)^2=x^3-\frac{U(t)}{B(t)}\cdot x^2+\left(\frac{\left(1-\frac{U(t)}{A(t)}\right)x-\frac{U(t)}{B(t)}}{2}\right)^2.$$ Second, multiplying the last equation by $(A(t)B(t))^6$ and introducing new variables $$y_1=(A(t)B(t))^3\cdot \left(y+\frac{(1-a(t))x-b(t)}{2}\right), \ x_1=(A(t)B(t))^2\cdot x,$$ we obtain (with help of {\bf magma}) the following equation for an isomorphic cubic curve $\tilde{\mathfrak{E}}_{4,t}:$ $$\begin{aligned} y_1^2=x_1^3+\frac{-4U(t)A(t)^2 B(t)+(U(t)-A(t))^2 B(t)^2}{4} x_1^2\\+\frac{(U(t)-A(t)) U(t)A(t)^3 B(t)^3}{2} x_1+\frac{A(t)^6 B(t)^4 U(t)^2}{4}\end{aligned}$$ $$=(x_1-\alpha_1)(x_1-\alpha_2)(x_1-\alpha_3),$$ where $$\begin{aligned} \alpha_1= -(-4194304 t^{15} - 5242880 t^{14} - 262144 t^{13} + 2162688 t^{12} + 753664 t^{11}\\ - 262144 t^{10} - 172032 t^9 - 2048 t^8 + 14336 t^7 + 2304 t^6 - 320 t^5 - 112 t^4 - 8t^3), \end{aligned} $$ $$\begin{aligned}\alpha_2= -(4194304 t^{16} + 4194304 t^{15} - 1048576 t^{14} - 2359296 t^{13} - 327680 t^{12}\\ + 491520 t^{11} + 163840 t^{10} - 40960 t^9 - 25600 t^8 + 1792 t^6 + 192 t^5 - 48 t^4 - 8 t^3) ,\end{aligned}$$ $$ \begin{aligned}\alpha_3=-(-4194304 t^{15} - 5242880 t^{14} - 262144 t^{13} + 2424832 t^{12} + 1015808 t^{11}\\ - 294912 t^{10} - 286720 t^9 - 25600 t^8 + 30720 t^7 + 8960 t^6 - 832 t^5\\ - 720 t^4 - 72 t^3 + 16 t^2 + 4 t + 1/4).\end{aligned}$$ Using {\bf magma}, we obtain that $$\alpha_2-\alpha_1=-2^{22} t^4 (t+1/2)^4(t^2-1/8)^4, \ \alpha_3-\alpha_1=-2^{18}(t+1/4)^4 (t^2-1/8)^4.$$ This implies that $\tilde{\mathfrak{E}}_{4,t}$ (and therefore $\mathfrak{E}_{4,t}$) is an elliptic curve over $K$ (i.e., all three $\alpha_1, \alpha_2,\alpha_3$ are {\sl distinct} elements of $K$) if and only if $$t \ne 0, -\frac{1}{2}, -\frac{1}{4}, \pm \frac{1}{2\sqrt{2}}$$ and
$$\frac{\alpha_2-\alpha_1}{\alpha_3-\alpha_1}=\left(\frac{2t(t+1/2)}{t+1/4}\right)^4 \ne 1.$$ Assume that all these conditions hold. Then the change of variable $x_2=x_1-\alpha_1$ transforms $\tilde{\mathfrak{E}}_{4,t}$ to the elliptic curve $$E: y_1^2=x_2(x_2-(\alpha_2-\alpha_1))(x_2-(\alpha_3-\alpha_1))=$$ $$x_2\left(x_2+2^{22} t^4 (t+1/2)^4(t^2-1/8)^4\right)\left(x_2+2^{18}(t+1/4)^4 (t^2-1/8)^4\right).$$ If we put $\kappa=2^9 (t+1/4)^2 (t^2-1/8)^2$, then $$\kappa^2=-(\alpha_3-\alpha_1)$$ and $E$ is isomorphic to the elliptic curve $$E(\kappa): {y^{\prime}}^2=x^{\prime}\left(x^{\prime}+ \frac{\alpha_2-\alpha_1}{\alpha_3-\alpha_1}\right)(x^{\prime}+1) =x^{\prime}\left(x^{\prime}+ \left(\frac{2t(t+1/2)}{t+1/4}\right)^4\right)(x^{\prime}+1).$$ Notice that $$\frac{2t(t+1/2)}{t+1/4}= \frac{2t(4t+2)}{(4t+1)} = \frac{4t(4t+2)}{2(4t+1)} = \frac{(4t+1)^2-1}{2(4t+1)}= \frac{(4t+1)-\frac{1}{(4t+1)}}{2}, $$ and therefore $E(\kappa)=\mathcal{E}_{4,c}$ with $c=(4t+1)$. This implies that $\mathfrak{E}_{4,t}$ is isomorphic to $\mathcal{E}_{4,c}$ with $c=(4t+1)$. \end{rem} \begin{rem} Suppose that $K=\mathbb{F}_q$ with $q=3,5,7$ or $9$. Then $$\mathbb{F}_q \setminus \{ 0,1,-1, \pm 1\pm \sqrt{2}, \pm \sqrt{-1}\}=\emptyset .$$ \end{rem} \begin{cor} Let $E$ be an elliptic curve over $\mathbb{F}_q$, where $q=11,13,17,19$. The group $E(\mathbb{F}_q)$ is isomorphic to $\mathbb{Z}/8\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$ if and only if $E$ is isomorphic to one of the elliptic curves $\mathcal{E}_{4,c}$. \end{cor} \begin{proof} Suppose that $E(\mathbb{F}_q)$ is isomorphic to $\mathbb{Z}/8\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$. It follows from Theorem \ref{family8} that $E$ is isomorphic to one of the elliptic curves $$\mathcal{E}_{4,c}: y^2=\left[x+\left(\frac{c-\frac{1}{c}}{2}\right)^4\right](x+1)x$$ with $c \in \mathbb{F}_q\setminus \{0,\pm 1,\pm 1\pm \sqrt{2}, \pm \sqrt{-1}\}$. Conversely, suppose that $E$ is isomorphic to one of these curves.
We need to prove that $E(\mathbb{F}_q)$ is isomorphic to $\mathbb{Z}/8\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$. By Theorem \ref{family8}, $E(\mathbb{F}_q)$ contains a subgroup isomorphic to $\mathbb{Z}/8\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$; in particular, $16$ divides $|E(\mathbb{F}_q)|$. In order to finish the proof, it suffices to check that $|E(\mathbb{F}_q)|<32$, but this inequality follows from the Hasse bound \eqref{HasseB} $$|E(\mathbb{F}_q)|\le q+2\sqrt{q}+1\le 19+2\sqrt{19}+1<29.$$ \end{proof} \begin{cor} Let $E$ be an elliptic curve over $\mathbb{F}_{47}$. The group $E(\mathbb{F}_{47})$ is isomorphic to $\mathbb{Z}/24\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$ if and only if $E$ is isomorphic to one of the elliptic curves $\mathcal{E}_{4,c}$. \end{cor} \begin{proof} Suppose that $E(\mathbb{F}_{47})$ is isomorphic to $\mathbb{Z}/24\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$. Then it contains a subgroup isomorphic to $\mathbb{Z}/8\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$. It follows from Theorem \ref{family8} that $E$ is isomorphic to one of the elliptic curves $$\mathcal{E}_{4,c}: y^2=\left[x+\left(\frac{c-\frac{1}{c}}{2}\right)^4\right](x+1)x$$ with $c \in \mathbb{F}_{47}\setminus \{0, \pm1, \pm 1\pm \sqrt{2}, \pm \sqrt{-1}\}$. Conversely, suppose that $E$ is isomorphic to one of these curves. We need to prove that $E(\mathbb{F}_{47})$ is isomorphic to $\mathbb{Z}/24\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$. By Theorem \ref{family8}, $E(\mathbb{F}_{47})$ contains a subgroup isomorphic to $\mathbb{Z}/8\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$; in particular, $16$ divides $|E(\mathbb{F}_{47})|$. The Hasse bound tells us that $$47+1-2\sqrt{47}\le |E(\mathbb{F}_{47})|\le 47+1+2\sqrt{47}$$ and therefore $34<|E(\mathbb{F}_{47})|<62$. This implies that $|E(\mathbb{F}_{47})|=48$; in particular, $E(\mathbb{F}_{47})$ contains a point of order 3.
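As an illustrative aside (not part of the argument), one member of the family can be checked exhaustively; $c=2$ is an arbitrary admissible choice in $\mathbb{F}_{47}$, where $\sqrt{2}=\pm 7$ and $\sqrt{-1}$ does not exist.

```python
# Spot-check over F_47 with c = 2 (admissible: 2 is not in the excluded set
# {0, 1, 46, 8, 41, 6, 39}).  Then lambda = ((c - 1/c)/2)^2 = 27 and
# E_{4,2}: y^2 = (x + lambda^2)(x + 1)x = x(x+1)(x+24).
p, c = 47, 2
half = pow(2, p - 2, p)
lam = ((c - pow(c, p - 2, p)) * half) ** 2 % p
A, B = (lam*lam + 1) % p, lam*lam % p    # y^2 = x^3 + A x^2 + B x

def add(P, Q):
    if P is None: return Q
    if Q is None: return P
    x1, y1 = P; x2, y2 = Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if P == Q:
        m = (3*x1*x1 + 2*A*x1 + B) * pow(2*y1, p - 2, p) % p
    else:
        m = (y2 - y1) * pow(x2 - x1, p - 2, p) % p
    x3 = (m*m - A - x1 - x2) % p
    return (x3, (m*(x1 - x3) - y1) % p)

def order(P):
    k, Q = 1, P
    while Q is not None:
        Q = add(Q, P); k += 1
    return k

points = [None] + [(x, y) for x in range(p) for y in range(p)
                   if (y*y - x*(x + 1)*(x + lam*lam)) % p == 0]
n = len(points)
max_order = max(order(P) for P in points)
e2 = sum(1 for P in points[1:] if P[1] == 0) + 1   # |E[2]|
print(n, max_order, e2)                             # -> 48 24 4
```

Order $48$, a point of order $24$, and full $2$-torsion leave $\mathbb{Z}/24\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$ as the only possibility.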
This implies that $E(\mathbb{F}_{47})$ contains a subgroup isomorphic to $$(\mathbb{Z}/8\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}) \oplus \mathbb{Z}/3\mathbb{Z}\cong \mathbb{Z}/24\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}.$$ Since this subgroup has the same order 48 as the whole group $E(\mathbb{F}_{47})$, we get the desired result. \end{proof} \begin{thm} \label{Q8} Let $K=\Q$ and $E$ be an elliptic curve over $\Q$. Then the torsion subgroup $E(\Q)_t$ of $E(\Q)$ is isomorphic to $\mathbb{Z}/8\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$ if and only if there exists $c \in \Q \setminus \{ 0, \pm 1\}$ such that $E$ is isomorphic to $\mathcal{E}_{4,c}$. \end{thm} \begin{proof} By Theorem \ref{mazurQ} applied to $m=4$, if $E(\Q)$ contains a subgroup isomorphic to $\mathbb{Z}/8\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$ then $E(\Q)_t$ is isomorphic to $\mathbb{Z}/8\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$. Now the desired result follows from Theorem \ref{family8}, since neither $\sqrt{2}$ nor $\sqrt{-1}$ lie in $\Q$. \end{proof} \begin{thm} \label{family84} Let $E$ be an elliptic curve over $K$. Then $E(K)$ contains a subgroup isomorphic to $\mathbb{Z}/8\mathbb{Z}\oplus \mathbb{Z}/4\mathbb{Z}$ if and only if $K$ contains $\mathbf{i}=\sqrt{-1}$ and there exist $$c,d \in K \setminus \{ 0, \pm1, \pm 1\pm \sqrt{2}, \pm \sqrt{-1}\} \ \text{ such that } \ c-\frac{1}{c}=\mathbf{i}\left(d-\frac{1}{d}\right)$$ and $E$ is isomorphic to $\mathcal{E}_{4,c}$. \end{thm} \begin{rem} The above equation defines an open dense set in the plane affine curve \begin{equation} \label{cd} \mathcal{M}_{8,4}:(c^2-1)d=\mathbf{i}(d^2-1)c. \end{equation} It is immediate that the corresponding projective closure is a nonsingular cubic $\bar{\mathcal{M}}_{8,4}$ with a $K$-point, i.e., an elliptic curve. To obtain a Weierstrass normal form of $\bar{\mathcal{M}}_{8,4}$, we first slightly simplify equation \eqref{cd} by the change of variables $d=s, \mathbf{i}c=t$ and get $s^2t+st^2+s-t=0$.
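This change of variables can be spot-checked numerically. An illustrative Python aside (not part of the argument) over $\mathbb{F}_{17}$, where $\mathbf{i}=4$ is an arbitrary choice of $\sqrt{-1}$: every solution of $s^2t+st^2+s-t=0$ gives back a point $(c,d)=(t/\mathbf{i},\, s)$ on the cubic \eqref{cd}.

```python
# Spot-check over F_17 (i = 4, since 4^2 = -1) of the substitution d = s,
# i*c = t: every solution of s^2 t + s t^2 + s - t = 0 yields a point
# (c, d) = (t/i, s) on the cubic M_{8,4}: (c^2 - 1) d = i (d^2 - 1) c.
p, i = 17, 4
assert i * i % p == p - 1
inv_i = pow(i, p - 2, p)
checked = 0
for s in range(p):
    for t in range(p):
        if (s*s*t + s*t*t + s - t) % p:
            continue                      # (s, t) is not on the cubic
        c, d = t * inv_i % p, s
        assert ((c*c - 1) * d - i * (d*d - 1) * c) % p == 0
        checked += 1
print(checked, "solutions checked")
```

The identity $(c^2-1)d-\mathbf{i}(d^2-1)c=-(s^2t+st^2+s-t)/\mathbf{i}\cdot \mathbf{i}$ holds formally once $\mathbf{i}c=t$ and $d=s$, so every sampled solution passes.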
Then, using the birational transformation $$s=\frac{\eta}{\xi+\xi^2},\ t=\frac \eta{1+\xi},$$ we obtain $\eta^2=\xi^3-\xi$. \footnote{See \cite[Example 1.4.2 on p. 88]{Silver2} for an explicit description of the (finite) set of all $\Q(\mathbf{i})$-points on this elliptic curve; none of them corresponds to $(c,d)$ that satisfy the conditions of Theorem \ref{family84}.} \end{rem} \begin{proof}[Proof of Theorem \ref{family84}] We have already seen that $\mathcal{E}_{4,c}(K)$ contains an order 8 point $R$ with $4R=W_3$. It follows from Proposition \ref{W3by4} that $\mathcal{E}_{4,c}(K)$ contains all points of order $4$. In particular, it contains an order 4 point $\mathcal{Q}$ with $2\mathcal{Q}=W_1$. Clearly, $R$ and $\mathcal{Q}$ generate a subgroup isomorphic to $\mathbb{Z}/8\mathbb{Z}\oplus \mathbb{Z}/4\mathbb{Z}$. Conversely, suppose that $E(K)$ contains a subgroup isomorphic to $\mathbb{Z}/8\mathbb{Z}\oplus \mathbb{Z}/4\mathbb{Z}$. This implies that $E(K)$ contains all twelve points of order 4. In particular, $E$ may be represented in the form \eqref{E2}. Clearly, one of the points of order 2 is divisible by 4 in $E(K)$. We may assume that $W_3$ is divisible by 4. The same arguments as in the proof of Theorem \ref{family8} allow us to assume that $$E=\mathcal{E}_{1,\lambda}:y^2=(x+\lambda^2)(x+1)x.$$ Since $W_3$ is divisible by 4 in $\mathcal{E}_{1,\lambda}(K)$ and all points of order dividing 4 lie in $\mathcal{E}_{1,\lambda}(K)$, every point $R$ of $\mathcal{E}_{1,\lambda}$ with $4R=W_3$ also lies in $\mathcal{E}_{1,\lambda}(K)$. 
It follows from Proposition \ref{div4W3} that $K$ contains $\mathbf{i}=\sqrt{-1}$ and there exist $$c,d \in K\setminus \{ 0,1,-1, \pm 1\pm \sqrt{2}, \pm \sqrt{-1}\}$$ such that $$\lambda= \left[\frac{c-\frac{1}{c}}{2}\right]^2, \ -\lambda= \left[\frac{d-\frac{1}{d}}{2}\right]^2.$$ This implies that $$c-\frac{1}{c}=\pm \mathbf{i} \left (d-\frac{1}{d}\right).$$ Replacing if necessary $d$ by $-d$, we obtain the desired $$c-\frac{1}{c}= \mathbf{i} \left (d-\frac{1}{d}\right).$$ \end{proof} \section{Points of order 3} \label{l6} The following assertion gives a simple description of points of order 3 on elliptic curves. \begin{prop} \label{order3} A point $P=(x_0,y_0)\in E(K)$ has order $3$ if and only if one can choose three square roots $r_i=\sqrt{x_0-\alpha_i}$ in such a way that $$r_1 r_2 +r_2 r_3+r_3 r_1=0.$$ \end{prop} \begin{proof} Indeed, let $P$ be a point of order 3. Then $2(-P)=P$. Hence, all $x_0-\alpha_i$ are squares in $K$. By (\ref{x1}), $$x(-P)=x_0+(r_1 r_2+r_2 r_3+r_3 r_1)$$ for a suitable choice of $r_1,r_2,r_3$. Since $x(-P)=x(P)=x_0$, we get $r_1 r_2 +r_2 r_3+r_3 r_1=0$. Conversely, suppose that there exists a triple of square roots $r_i=\sqrt{x_0-\alpha_i}$ such that $r_1 r_2 +r_2 r_3+r_3 r_1=0$. Since $P\in E(K)$, $$(r_1 r_2 r_3)^2=(x_0-\alpha_1)(x_0-\alpha_2)(x_0-\alpha_3)=y_0^2,$$ i.e., $r_1 r_2 r_3=\pm y_0$. Replacing $r_1,r_2,r_3$ by $-r_1,-r_2,-r_3$ if necessary, we may assume that $r_1 r_2 r_3=-y_0$. Then there exists a point $Q=(x(Q),y(Q))\in E(K)$ such that $2Q=P$, and $x_1=x(Q), y_1=y(Q)$ are expressed in terms of $r_1,r_2,r_3$ as in (\ref{halfP}). Therefore $$x(Q)=x_0+(r_1 r_2 +r_2 r_3+r_3 r_1)=x_0,$$ $$y(Q)=-y_0-(r_1+r_2+r_3)(r_1 r_2 +r_2 r_3+r_3 r_1)=-y_0,$$ i.e., $Q=-P, 2(-P)=P$, and so $P$ has order $3$. \end{proof} \begin{thm} \label{fam3} Let $a_1,a_2,a_3$ be elements of $K$ such that all $a_1^2, a_2^2,a_3^2$ are distinct. Let us consider the elliptic curve $$E=E_{a_1,a_2,a_3}: y^2=(x+a_1^2)(x+a_2^2)(x+a_3^2)$$ over $K$. 
Let $P=(0,a_1 a_2 a_3)$. Then $P$ enjoys the following properties. \begin{itemize} \item[(i)] $P$ is divisible by $2$ in $E(K)$. More precisely, there are four points $Q\in E(K)$ with $2Q=P$, namely, \begin{equation*}\begin{aligned} &(a_2 a_3-a_1 a_2-a_3 a_1, (a_1-a_2)(a_2+a_3)(a_3-a_1)), \\ &(a_3 a_1-a_1 a_2-a_2 a_3, (a_1-a_2)(a_2-a_3)(a_3+a_1)),\\ &(a_1 a_2-a_2 a_3-a_3 a_1,(a_1+a_2)(a_2-a_3)(a_3-a_1)), \\ &(a_1 a_2+a_2 a_3+a_3 a_1, (a_1+a_2)(a_2+a_3)(a_3+a_1)). \end{aligned} \end{equation*} \item[(ii)] The following conditions are equivalent. \begin{enumerate} \item $P$ has order $3$. \item None of the $a_i$ vanishes, i.e., $\pm a_1, \pm a_2,\pm a_3$ are six distinct elements of $K$, and one of the following four equalities holds: \begin{equation*} \begin{aligned} &a_2 a_3=a_1 a_2+a_3 a_1, \ a_3 a_1=a_1 a_2+a_2 a_3, \\ &a_1 a_2=a_2 a_3+a_3 a_1, \ a_1 a_2+a_2 a_3+a_3 a_1=0. \end{aligned} \end{equation*} \end{enumerate} \item[(iii)] Suppose that the equivalent conditions of (ii) hold. Then one of the four points $Q$ coincides with $-P$ and has order 3, while the other three points have order $6$. In addition, $E(K)$ contains a subgroup isomorphic to $\mathbb{Z}/6\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$. \end{itemize} \end{thm} \begin{rem} Clearly, $E_{a_1,a_2,a_3}=E_{\pm a_1, \pm a_2, \pm a_3}$. \end{rem} \begin{proof}[Proof of Theorem \ref{fam3}] We have $$\alpha_1=-a_1^2,\ \alpha_2=-a_2^2,\ \alpha_3=-a_3^2.$$ Let us try to divide $P$ by 2 in $E(K)$. We have $$r_1=\pm a_1, \ r_2=\pm a_2,\ r_3=\pm a_3.$$ Since all $r_i$ lie in $K$, the point $P=(0,a_1 a_2 a_3)$ is divisible by 2 in $E(K)$. Let $Q$ be a point on $E$ with $2Q=P$. By (\ref{x1}) and (\ref{chap}), $$x(Q)=r_1 r_2+r_2 r_3+r_3 r_1, \ y(Q)=-(r_1+r_2)(r_2+r_3)(r_3+r_1)$$ with $r_1 r_2 r_3=-a_1 a_2 a_3$. Plugging in $r_i= \pm a_i$ into the formulas for $x(Q)$ and $y(Q)$, we get explicit formulas for points $Q$ as in the statement of the theorem. This proves (i). Let us prove (ii). Suppose that $P$ has order 3.
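The explicit division points in (i) can be spot-checked numerically. An illustrative Python aside (not part of the proof), with the arbitrary choices $p=101$ and $(a_1,a_2,a_3)=(2,3,5)$: each listed point lies on the curve and doubles to $P=(0,a_1a_2a_3)$.

```python
# Numerical spot-check of (i) over F_101 with (a_1, a_2, a_3) = (2, 3, 5):
# each of the four listed points Q doubles to P = (0, a_1 a_2 a_3) on
# y^2 = (x + a_1^2)(x + a_2^2)(x + a_3^2).
p = 101
a1, a2, a3 = 2, 3, 5
A = a1*a1 + a2*a2 + a3*a3                       # x^2 coefficient
B = a1*a1*a2*a2 + a2*a2*a3*a3 + a3*a3*a1*a1     # x coefficient

def dbl(P):
    x1, y1 = P                                   # assumes y1 != 0 (true here)
    m = (3*x1*x1 + 2*A*x1 + B) * pow(2*y1, p - 2, p) % p
    x3 = (m*m - A - 2*x1) % p
    return (x3, (m*(x1 - x3) - y1) % p)

P = (0, a1*a2*a3 % p)
quarters = [
    (a2*a3 - a1*a2 - a3*a1, (a1 - a2)*(a2 + a3)*(a3 - a1)),
    (a3*a1 - a1*a2 - a2*a3, (a1 - a2)*(a2 - a3)*(a3 + a1)),
    (a1*a2 - a2*a3 - a3*a1, (a1 + a2)*(a2 - a3)*(a3 - a1)),
    (a1*a2 + a2*a3 + a3*a1, (a1 + a2)*(a2 + a3)*(a3 + a1)),
]
quarters = [(x % p, y % p) for (x, y) in quarters]
on_curve = all((y*y - (x + a1*a1)*(x + a2*a2)*(x + a3*a3)) % p == 0
               for (x, y) in quarters)
halves_ok = all(dbl(Q) == P for Q in quarters)
print(on_curve, halves_ok)                      # -> True True
```

All four points satisfy the curve equation and $2Q=P$, as the theorem asserts.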
Since $P$ is not of order $2$, we have $0=x(P)\ne \alpha_i$ for all $i=1,2,3$. Since $$\{\alpha_1,\alpha_2,\alpha_3\}= \{-a_1^2,-a_2^2,-a_3^2\},$$ none of $a_i$ vanishes. It follows from Proposition \ref{order3} that one may choose the signs for $r_i$ in such a way that $r_1 r_2+r_2 r_3+r_3 r_1=0$. Plugging in $r_i=\pm a_i$ into this formula, we get four relations between $a_1,a_2,a_3$ as in (ii)(2). Now suppose that one of the relations as in (ii)(2) holds. This means that one may choose the signs of $r_i=\pm a_i$ in such a way that $r_1 r_2+r_2 r_3+r_3 r_1=0$. It follows from Proposition \ref{order3} that $P$ has order $3$. This proves (ii). Let us prove (iii). Since $P$ has order $3$, $2(-P)=P$, i.e., $-P$ is one of the four $Q$'s. Suppose that $Q$ is a point of $E$ with $2 {Q}=P, \ {Q}\ne -P$. Clearly, the order of $ {Q}$ is either 3 or 6. Assume that $ {Q}$ has order $3$. Then $P=2 {Q}=- {Q}$ and therefore $ {Q}=-P$, which is not the case. Hence $ {Q}$ has order $6$. Then $3 {Q}$ has order $2$, i.e., coincides with $W_i=(-a_i^2,0)$ for some $i \in \{1,2,3\}$. Pick $j \in \{1,2,3\}\setminus \{i\}$ and consider the point $W_j=(-a_j^2,0)\ne W_i$. Then the subgroup of $E(K)$ generated by $ {Q}$ and $W_j$ is isomorphic to $\mathbb{Z}/6\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$. This proves (iii). \end{proof} \begin{rem} In Theorem \ref{fam3} we do {\sl not} assume that $\fchar(K)\ne 3$! \end{rem} \begin{cor} \label{fam3H} Let $a_1,a_2,a_3$ be elements of $K$ such that $a_1^2, a_2^2,a_3^2$ are distinct. Then the following conditions are equivalent. \begin{itemize} \item[(i)] The point $P =(0,a_1 a_2 a_3)\in E_{a_1,a_2,a_3}(K)$ has order $3$. \item[(ii)] None of $a_i$ vanishes, and one may choose signs for $$a=\pm a_1,\ b=\pm a_2,\ c=\pm a_3$$ in such a way that $c=ab/(a+b)$.
\end{itemize} If these conditions hold, then $$ E_{a_1,a_2,a_3}=E_{\lambda, b}: y^2=\left(x+(\lambda b)^2\right)\left(x+b^2\right) \left(x+\left(\frac{\lambda}{\lambda+1} b\right)^2 \right),$$ where $\lambda=a/b \in K\setminus \{ 0, \pm 1, -2, -\frac{1}{2}\}.$ \end{cor} \begin{proof} Suppose that condition (ii) of the corollary holds, i.e., none of $a_i$ vanishes, and one may choose signs for $$a=\pm a_1,\ b=\pm a_2,\ c=\pm a_3$$ in such a way that $c=ab/(a+b)$. Then none of $a,b,c$ vanishes and $ab=ac+bc$. By Theorem \ref{fam3}(ii), $\mathcal{P}=(0,abc)$ is a point of order $3$ on the elliptic curve $$E_{a,b,c}= E_{a_1,a_2,a_3}.$$ Since $abc=\pm a_1 a_2 a_3$, either $\mathcal{P}=P$ or $\mathcal{P}=-P$. In both cases $P$ has order $3$. Notice that $\pm a_1, \pm a_2, \pm a_3$ are six distinct elements of $K$. This means that $\pm a, \pm b, \pm c$ are also six distinct elements of $K$. If we put $\lambda=a/b$, then $$\pm \lambda b, \ \pm b,\ \pm \frac{\lambda}{\lambda+1}b$$ are six distinct elements of $K$. This means (in light of the inequalities $a\ne 0, b \ne 0$) that $$\lambda \ne 0, \pm 1, -2, -\frac{1}{2}.$$ Suppose $P$ has order $3$. By Theorem \ref{fam3}(ii), none of $a_i$ vanishes and one of the following four equalities holds: \begin{equation*}\begin{aligned}&a_2 a_3=a_1 a_2+a_3 a_1, \ a_3 a_1=a_1 a_2+a_2 a_3,\\ & a_1 a_2=a_2 a_3+a_3 a_1, \ a_1 a_2+a_2 a_3+a_3 a_1=0. \end{aligned} \end{equation*} Here are the corresponding choices of $a,b,c$ with $c=ab/(a+b)$: \begin{equation*}\begin{aligned} &a=a_1, b=-a_2, c= a_3; \ a=-a_1, b=a_2, c=a_3;\\ &a=a_1, b=a_2, c=a_3; \ a=a_1, b=a_2, c=-a_3. \end{aligned} \end{equation*} In order to finish the proof, we just need to notice that $a=\lambda b$ and $$c=\frac{ab}{a+b}=\frac{\lambda b\cdot b}{\lambda b+b}=\frac{\lambda}{\lambda+1} b.$$ \end{proof} \begin{thm} \label{family3} Let $E$ be an elliptic curve over $K$.
Then $E(K)$ contains a subgroup isomorphic to $\mathbb{Z}/6\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$ if and only if there exists $\lambda \in K \setminus \{ 0, \pm 1, -2, -\frac{1}{2}\}$ such that $E$ is isomorphic to $$\mathcal{E}_{3,\lambda}: y^2=\left(x+\lambda^2\right)(x+1)\left(x+\left(\frac{\lambda}{\lambda+1}\right)^2\right).$$ \end{thm} \begin{proof}[Proof of Theorem \ref{family3}] Let $\lambda \in K \setminus \{ 0, \pm 1, -2, -1/2\}$ and put $a_1=\lambda, a_2=1, a_3=\lambda/(\lambda+1)$. Then none of the $a_i$ vanishes, $a_1^2,a_2^2,a_3^2$ are three distinct elements of $K$, $a_1 a_2=a_2 a_3+a_3 a_1$, and $\mathcal{E}_{3,\lambda}=E_{a_1,a_2,a_3}$. It follows from Theorem \ref{fam3} that $\mathcal{E}_{3,\lambda}(K)$ contains a subgroup isomorphic to $\mathbb{Z}/6\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$. Conversely, suppose that $E$ is an elliptic curve over $K$ such that $E(K)$ contains a subgroup isomorphic to $\mathbb{Z}/6\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$. It follows that all three points of order $2$ lie in $E(K)$, and therefore $E$ can be represented in the form \eqref{E2}. It is also clear that $E(K)$ contains a point of order 3. Let us choose a point $P=(x(P),y(P))\in E(K)$ of order 3. After a translation of $x$, we may assume that $x(P)=0$. We have $P=2(-P)$, and therefore $P$ is divisible by 2 in $E(K)$. By Theorem \ref{th0}, all $x(P)-\alpha_i=-\alpha_i$ are squares in $K$. This implies that there exist elements $a_1,a_2,a_3 \in K$ such that $\alpha_i=-a_i^2$. Clearly, all three $a_1^2,a_2^2, a_3^2$ are distinct. Since $P$ lies on $E$, $$y(P)^2=(x(P)+a_1^2)(x(P)+a_2^2)(x(P)+a_3^2)=a_1^2 a_2^2 a_3^2=(a_1 a_2 a_3)^2,$$ and therefore $y(P)= \pm a_1 a_2 a_3$.
Replacing $P$ by $-P$ if necessary, we may assume that $y(P)=a_1 a_2 a_3$, i.e., $P=(0,a_1 a_2 a_3)$ is a $K$-point of order 3 on $$E=E_{a_1,a_2,a_3}: y^2=(x+a_1^2)(x+a_2^2)(x+a_3^2).$$ It follows from Corollary \ref{fam3H} that there exist {\sl nonzero} $b \in K$ and $\lambda \in K\setminus \{ 0, \pm 1,-2, -1/2\}$ such that $$E=E_{a_1,a_2,a_3}=E_{\lambda,b}:y^2=\left(x+(\lambda b)^2\right)\left(x+b^2\right)\left(x+\left[\frac{\lambda}{\lambda+1} b\right]^2 \right).$$ But $E_{\lambda,b}$ is isomorphic to $$E_{\lambda,b}(b): {y^{\prime}}^2=(x^{\prime}+\lambda^2)(x^{\prime}+1)\left(x^{\prime}+\left[\frac{\lambda}{\lambda+1} \right]^2\right)$$ while the latter coincides with $\mathcal{E}_{3,\lambda}$. \end{proof} \begin{rem} There is a family of elliptic curves over $\Q$ \cite[Table 3 on p. 217]{Kubert} (see also \cite[Appendix E]{Robledo}), $$\mathfrak{E}_{3,t}:y^2+(1-a(t))xy -b(t)y=x^3-b(t)x^2,$$ with $$a(t)=\frac{10-2t}{t^2-9}, \ b(t)=\frac{-2(t-1)^2(t-5)}{(t^2-9)^2}$$ (with $t\in \Q\setminus \{1, 5, \pm 3, 9\}$), whose group of rational points contains a subgroup isomorphic to $\mathbb{Z}/6\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$. (The point $(0,0)$ of $\mathfrak{E}_{3,t}$ has order 6, ibid.) Let us assume that $t\ne \pm 3$ is an element of an arbitrary field $K$ (with $\fchar(K)\ne 2)$ and consider the cubic curve $\mathfrak{E}_{3,t}$ over $K$ defined by the same equation as above. In light of Theorem \ref{family3}, if $\mathfrak{E}_{3,t}$ is an elliptic curve over $K$, then $\mathfrak{E}_{3,t}$ is isomorphic to $\mathcal{E}_{3,\lambda}$ for a certain $\lambda \in K$. Let us find the corresponding $\lambda$ (as a rational function of $t$).
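Before carrying out this computation, one can spot-check Kubert's claim that $(0,0)$ has order $6$ at a sample parameter value, say $t=2$: complete the square and apply the tangent--chord formulas in exact rational arithmetic. A sketch (the value $t=2$ is illustrative only):

```python
from fractions import Fraction as F

# Kubert curve at the sample value t = 2:
# y^2 + (1 - a(t)) x y - b(t) y = x^3 - b(t) x^2
t = F(2)
a = (10 - 2*t) / (t**2 - 9)
b = -2*(t - 1)**2 * (t - 5) / (t**2 - 9)**2

# completing the square, Y = y + ((1 - a) x - b)/2, gives Y^2 = x^3 + A2 x^2 + A1 x + A0
A2 = -b + F(1, 4)*(1 - a)**2
A1 = -F(1, 2)*(1 - a)*b
A0 = F(1, 4)*b**2

def dbl(P):
    x0, y0 = P
    m = (3*x0**2 + 2*A2*x0 + A1) / (2*y0)
    x1 = m**2 - A2 - 2*x0
    return x1, m*(x0 - x1) - y0

def add(P, Q):
    (x0, y0), (x1, y1) = P, Q
    m = (y1 - y0) / (x1 - x0)
    x2 = m**2 - A2 - x0 - x1
    return x2, m*(x0 - x2) - y0

P0 = (F(0), -b/2)                      # the image of the point (0,0)
assert P0[1]**2 == P0[0]**3 + A2*P0[0]**2 + A1*P0[0] + A0
Q2 = dbl(P0)                           # 2 P0
Q3 = add(Q2, P0)                       # 3 P0
assert Q2[1] != 0 and Q3[1] == 0       # 3 P0 has order 2, hence P0 has order 6
```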
First, rewrite the equation for $\mathfrak{E}_{3,t}$ as $$\left(y+\frac{(1-a(t))x-b(t)}{2}\right)^2=x^3-b(t)x^2+\left(\frac{(1-a(t))x-b(t)}{2}\right)^2.$$ Second, multiplying the last equation by $(t^2-9)^6$ and introducing new variables $$y_1=(t^2-9)^3\cdot \left(y+\frac{(1-a(t))x-b(t)}{2}\right), \ x_1=(t^2-9)^2\cdot x,$$ we obtain (with the help of {\bf magma}) the equation for an isomorphic cubic curve $$\tilde{\mathfrak{E}}_{3,t}: y_1^2=(x_1-\alpha_1)(x_1-\alpha_2)(x_1-\alpha_3),$$ where $$\alpha_1=-(2t^3-10t^2-18t+90)=-2(t-5)(t-3)(t+3),$$ $$ \alpha_2=-(2t^3-10t^2+14t-6)=-2(t-3)(t-1)^2,$$ $$ \alpha_3=-\left(\frac{1}{4}t^4-t^3-\frac{5}{2}t^2+7t-\frac{15}{4}\right)=-\frac{1}{4}(t-5)(t+3)(t-1)^2.$$ We have $$\alpha_1-\alpha_2=2^5(t-3), \ \alpha_2-\alpha_3=\frac{1}{4}\cdot (t-1)^3(t-9), \ \alpha_3-\alpha_1=-\frac{1}{4}\cdot (t-5)^3(t+3).$$ This implies that $\tilde{\mathfrak{E}}_{3,t}$ (and therefore $\mathfrak{E}_{3,t}$) is an elliptic curve over $K$ if and only if $$t \in K\setminus \{1,\pm 3, 5,9\}.$$ Further we assume that this condition holds and therefore $\tilde{\mathfrak{E}}_{3,t}$ and $\mathfrak{E}_{3,t}$ are elliptic curves over $K$. Clearly, all three points of order 2 on $\tilde{\mathfrak{E}}_{3,t}$ are defined over $K$ and the $K$-point $$Q=(x_1(Q), y_1(Q))=(0, -(t-5)(t-3)(t+3)(t-1)^2)$$ lies on $\tilde{\mathfrak{E}}_{3,t}$. We prove that $Q$ has order 6. Let us consider the point $P=2Q\in \tilde{\mathfrak{E}}_{3,t}(K)$ with coordinates $x_1(P), y_1(P)\in K$. (Since $y_1(Q)\ne 0$, the point $Q$ is not of order $2$, and therefore $P=2Q\ne \infty$.) According to the formulas of Section 1, there exists a unique triple $\{r_1,r_2,r_3\}$ of distinct elements of $K$ such that $$(r_1+r_2)(r_2+r_3)(r_3+r_1)=-y_1(Q)=(t-5)(t-3)(t+3)(t-1)^2$$ and for all $i=1,2,3$ $$x_1(P)-\alpha_i=r_i^2,$$ $$ \ 0 \ne -\alpha_i=x_1(Q)-\alpha_i=(r_i+r_j)(r_i+r_k),$$ where $(i,j,k)$ is a permutation of $(1,2,3)$.
This implies that $$r_1+r_2=\frac{(t-5)(t-3)(t+3)(t-1)^2}{-\alpha_3}=\frac{(t-5)(t-3)(t+3)(t-1)^2}{\frac{1}{4}(t-5)(t+3)(t-1)^2}=4(t-3),$$ $$r_2+r_3=\frac{(t-5)(t-3)(t+3)(t-1)^2}{-\alpha_1}=\frac{(t-5)(t-3)(t+3)(t-1)^2}{2(t-5)(t-3)(t+3)}=\frac{1}{2}\cdot (t-1)^2,$$ $$r_3+r_1=\frac{(t-5)(t-3)(t+3)(t-1)^2}{-\alpha_2}=\frac{(t-5)(t-3)(t+3)(t-1)^2}{2(t-3)(t-1)^2}=\frac{1}{2}\cdot (t-5)(t+3).$$ Hence $$r_1+r_2=4(t-3), \ r_2+r_3=\frac{(t-1)^2}{2}, \ r_3+r_1=\frac{(t+3)(t-5)}{2},$$ and therefore $$r_1+r_2+r_3 =\frac{1}{2}\cdot\left((r_1+r_2)+(r_2+r_3)+(r_3+r_1)\right)=\frac{1}{2} \cdot (t^2+2t-19),$$ which, in turn, implies that $$r_1=2t-10=2(t-5), \ r_2=2t-2=2(t-1), \ r_3=\frac{1}{2} \cdot (t-1)(t-5)=\frac{1}{8}r_1 r_2.$$ One may easily check that $$c(t):=-2t^3+14t^2-22t+10=r_i^2+\alpha_i \ \text {for all}\ \ i=1,2,3.$$ This implies that $$x_1(P)=c(t), \ c(t)-\alpha_i=r_i^2 \ \text {for all}\ \ i=1,2,3$$ and $\tilde{\mathfrak{E}}_{3,t}$ is isomorphic to the elliptic curve $$E_{r_1,r_2,r_3}: y_1^2=(x_2+r_1^2)(x_2+r_2^2)(x_2+r_3^2)$$ with $x_2=x_1-c(t)$. In addition, $$y_1(P)=-r_1 r_2 r_3=-2(t-1)^2(t-5)^2.$$ We have $$r_1 r_2=8r_3, \ r_2-r_1=8.$$ This implies that $(r_2-r_1)r_3= r_1 r_2$, which means that $$(-r_1) r_2+r_2 r_3+(-r_1) r_3=0.$$ It follows from Proposition \ref{order3} that $P$ has order 3 in $\tilde{\mathfrak{E}}_{3,t}(K)$. (In particular, all $r_i \ne 0$.) Since $2Q=P$, the order of $Q$ in $\tilde{\mathfrak{E}}_{3,t}$ is 6. Notice that $$-r_3=\frac{(-r_1) r_2}{(-r_1)+r_2}$$ and $$E_{r_1,r_2,r_3}=E_{-r_1,r_2,-r_3}.$$ It follows from Corollary \ref{fam3H} and the end of the proof of Theorem \ref{family3} that $E_{r_1,r_2,r_3}$ is isomorphic to $\mathcal{E}_{3,\lambda}$ with $$\lambda=\frac{-r_1}{r_2}=\frac{-(2t-10)}{2t-2}=-\frac{t-5}{t-1}.$$ This implies that $\mathfrak{E}_{3,t}$ is isomorphic to $\mathcal{E}_{3,\lambda}$ with $\lambda=-(t-5)/(t-1)$. \end{rem} \begin{cor} Let $E$ be an elliptic curve over $\mathbb{F}_q$, where $q=7,9,11,13$.
The group $E(\mathbb{F}_q)$ is isomorphic to $\mathbb{Z}/6\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$ if and only if $E$ is isomorphic to one of the elliptic curves $\mathcal{E}_{3,\lambda}$. \end{cor} \begin{proof} Suppose that $E(\mathbb{F}_q)$ is isomorphic to $\mathbb{Z}/6\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$. By Theorem \ref{family3}, $E$ is isomorphic to one of the elliptic curves $\mathcal{E}_{3,\lambda}$. Conversely, suppose that $E$ is isomorphic to one of these curves. We need to prove that $E(\mathbb{F}_q)$ is isomorphic to $\mathbb{Z}/6\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$. By Theorem \ref{family3}, $E(\mathbb{F}_q)$ contains a subgroup isomorphic to $\mathbb{Z}/6\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$; in particular, $12$ divides $|E(\mathbb{F}_q)|$. In order to finish the proof, it suffices to check that $|E(\mathbb{F}_q)|<24$, but this inequality follows from the Hasse bound \eqref{HasseB} $$|E(\mathbb{F}_q)|\le q+2\sqrt{q}+1\le 13+2\sqrt{13}+1<22.$$ \end{proof} \begin{cor} Let $E$ be an elliptic curve over $\mathbb{F}_{23}$. The group $E(\mathbb{F}_{23})$ is isomorphic to $\mathbb{Z}/12\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$ if and only if $E$ is isomorphic to one of the elliptic curves $\mathcal{E}_{3,\lambda}$. \end{cor} \begin{proof} Suppose that $E(\mathbb{F}_{23})$ is isomorphic to $\mathbb{Z}/12\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$. Then it contains a subgroup isomorphic to $\mathbb{Z}/6\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$. It follows from Theorem \ref{family3} that $E$ is isomorphic to one of the elliptic curves $\mathcal{E}_{3,\lambda}$. Conversely, suppose that $E$ is isomorphic to one of these curves. We need to prove that $E(\mathbb{F}_{23})$ is isomorphic to $\mathbb{Z}/12\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$. By Theorem \ref{family3}, $E(\mathbb{F}_{23})$ contains a subgroup isomorphic to $\mathbb{Z}/6\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$; in particular, $12$ divides $|E(\mathbb{F}_{23})|$.
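As an illustration, a brute-force count for one admissible sample value ($\lambda=2$ over $\mathbb{F}_{23}$; any admissible $\lambda$ would do) confirms that the order is indeed $24$:

```python
# count the points of E_{3,lambda}: y^2 = (x + l^2)(x + 1)(x + (l/(l+1))^2)
# over F_23 for the sample value l = 2
p, l = 23, 2
c2 = (l * pow(l + 1, p - 2, p))**2 % p          # (l/(l+1))^2 mod p
squares = {x * x % p for x in range(p)}

count = 1                                       # the point at infinity
for x in range(p):
    v = (x + l*l) * (x + 1) * (x + c2) % p
    if v == 0:
        count += 1                              # one point of order 2
    elif v in squares:
        count += 2                              # two points (x, +-y)
print(count)
```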
The Hasse bound \eqref{HasseB} tells us that $$23+1-2\sqrt{23}\le |E(\mathbb{F}_{23})|\le 23+1+2\sqrt{23}$$ and therefore $14<|E(\mathbb{F}_{23})|< 34$. It follows that $|E(\mathbb{F}_{23})|=24$; in particular, the 2-primary component $E(\mathbb{F}_{23})(2)$ of $E(\mathbb{F}_{23})$ has order 8. On the other hand, $E(\mathbb{F}_{23})(2)$ is isomorphic to a product of two cyclic groups, each of which has even order. This implies that $E(\mathbb{F}_{23})(2)$ is isomorphic to $\mathbb{Z}/4\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$. Taking into account that $E(\mathbb{F}_{23})$ contains a point of order 3, we conclude that it contains a subgroup isomorphic to $$(\mathbb{Z}/4\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z})\oplus \mathbb{Z}/3\mathbb{Z}\cong \mathbb{Z}/12\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}.$$ This subgroup has the same order 24 as the whole group $E(\mathbb{F}_{23})$, which ends the proof. \end{proof} \begin{thm} \label{Q6} Let $K=\Q$ and let $E$ be an elliptic curve over $\Q$. Then the torsion subgroup $E(\Q)_t$ of $E(\Q)$ is isomorphic to $\mathbb{Z}/6\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$ if and only if there exists $\lambda \in \Q \setminus \{ 0, \pm 1, -2, -\frac{1}{2}\}$ such that $E$ is isomorphic to $\mathcal{E}_{3,\lambda}$. \end{thm} \begin{proof} By Theorem \ref{mazurQ} applied to $m=3$, if $E(\Q)$ contains a subgroup isomorphic to $\mathbb{Z}/6\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$ then $E(\Q)_t$ is isomorphic to $\mathbb{Z}/6\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$. Now the desired result follows from Theorem \ref{family3}. \end{proof} \begin{comment} Most (if not all) of the results of this section are pretty well known; however, our proofs seem to be more elementary/less computational.
Now let $K$ be a field of characteristic 3, and let $E: y^2=f(x)$ be an elliptic curve over $K$, where $f(x)$ is a monic cubic polynomial \begin{equation}\label{a2a4}f(x)=x^3+a_2 x^2 +a_4 x+a_6=(x-\alpha_1)(x-\alpha_2)(x-\alpha_3), \end{equation} where $\alpha_1, \alpha_2, \alpha_3$ are {\sl distinct} elements of an overfield of $K$. (The notation for the coefficients of $f(x)$ follows \cite{Tate}.) Since $f(x)$ has {\sl no} multiple roots, the polynomial $f(x)$ and its derivative $f^{\prime}(x)=2a_2 x+a_4$ have no common factors/roots; in parti\-cular, $f^{\prime}(x)$ is a nonzero polynomial, i.e., $a_2$ and $a_4$ cannot be zero simultaneously.~ \footnote{Another way to see this is to assume the contrary. Then $f(x)=(x+\sqrt[3]{a_6})^3$, which is impossible since $f(x)$ has no multiple roots \cite[p. 79]{Wash}. } Notice also that \begin{equation} \label{minA2} f^{\prime}(x)^2-4 a_2 f(x)=(2a_2 x+a_4)^2-a_2(x^3+a_2 x^2 +a_4 x+a_6)= \end{equation} $$(a_2^2 x^2+4 a_2 a_4 x+a_4^2)-a_2(x^3+a_2 x^2 +a_4 x+a_6) =-a_2 x^3-a_2 a_6=-a_2(x^3+a_6).$$ So, $E$ is defined by the equation $$E:y^2=f(x).$$ Now let $K$ be a field of characteristic 3, and let $E: y^2=f(x)$ be an elliptic curve over $K$, where $f(x)$ is a monic cubic polynomial \begin{equation}\label{a2a4}f(x)=x^3+a_2 x^2 +a_4 x+a_6=(x-\alpha_1)(x-\alpha_2)(x-\alpha_3), \end{equation} where $\alpha_1, \alpha_2, \alpha_3$ are {\sl distinct} elements of an overfield of $K$. (The notation for the coefficients of $f(x)$ follows \cite{Tate}.) \begin{thm} \label{slopeT} Let $P=(x_0,y_0)\in E(K)$ with $y_0 \ne 0$. Let $\tilde{l} \in K$ be the slope of the tangent line to $E$ at $P$. Then $P$ has order $3$ if and only if $\tilde{l}^2=a_2.$ \end{thm} \begin{rem} Recall that all points on $E$ with $y$-coordinate equal zero have order $2$ and therefore cannot be of order $3$. \end{rem} \begin{proof} Let $l$ be the slope of the tangent line to $E$ at $-P$. Then, obviously, $l=-\tilde l $, and so $\tilde l^2=l^2$. 
Suppose $P$ has order 3. Then $2(-P)=P$. \begin{comment}and $$x_1=x(Q)=x(O)=x_0, y_1=y(Q)=-x(P)=-y_0.$$ In addition, the tangent line to $E$ at $Q=(x_0,-y_0)$ is $y=-(\tilde{l}x+\tilde{m})$; in particular, the slope $l$ of the tangent line to $E$ at $Q$ coincides with $-\tilde{l}$. As we know (Section 1), there is a triple of square roots $$r_1=\sqrt{x_0-\alpha_1},\ r_2=\sqrt{x_0-\alpha_2},\ r_3=\sqrt{x_0-\alpha_3}$$ such that $$l=r_1+r_2+r_3, \ x(-P)-x_0=r_1 r_2+r_2 r_3+r_3 r_1.$$ Since $x(-P)=x_0$, we get $r_1 r_2+r_2 r_3+r_3 r_1=0$. On the other hand, $$l^2=(r_1+r_2+r_3)^2=(r_1^2+r_2^2+r_3^2)+2(r_1 r_2+r_2 r_3+r_3 r_1)$$ $$=r_1^2+r_2^2+r_3^2= (x_0-\alpha_1)+(x_0-\alpha_2)+(x-\alpha_3)=-(\alpha_1+\alpha_2+\alpha_3)=a_2.$$ Since $\tilde l^2=l^2$, we get $\tilde{l}^2=a_2$. Conversely, suppose that $\tilde{l}^2=a_2=-(\alpha_1+\alpha_2+\alpha_3)$ and consider the point $\tilde{P}=2P$. By the same token, there is a triple of square roots $$\tilde{r}_1=\sqrt{\tilde{x}_0-\alpha_1},\ \tilde{r}_2=\sqrt{\tilde{x}_0-\alpha_2},\ \tilde{r}_3=\sqrt{\tilde{x}_0-\alpha_3}$$ such that $$\tilde{l}=\tilde{r}_1+\tilde{r}_2+\tilde{r}_3, \quad x(P)-\tilde{x}_0=\tilde{r}_1\tilde{r}_2+\tilde{r}_2 \tilde{r}_3 +\tilde{r}_3 \tilde{r}_1.$$ We have $$-(\alpha_1+\alpha_2+\alpha_3)=\tilde{l}^2=(\tilde{r}_1+\tilde{r}_2+\tilde{r}_3)^2=$$ $$(\tilde{r}_1^2+\tilde{r}_2^2+\tilde{r}_3^2) +2(\tilde{r}_1\tilde{r}_2+\tilde{r}_2 \tilde{r}_3 +\tilde{r}_3 \tilde{r}_1)=(\tilde{x}_0-\alpha_1)+(\tilde{x}_0-\alpha_2)+(\tilde{x}_0-\alpha_3)=-(\alpha_1+\alpha_2+\alpha_3).$$ This implies that $$\tilde{r}_1\tilde{r}_2+\tilde{r}_2 \tilde{r}_3 +\tilde{r}_3 \tilde{r}_1=0.$$ It follows from Proposition \ref{order3} that $\tilde{P}=2P$ has order $3$. Consequently, the point $P$ also has order $3$. 
\end{proof} \begin{comment} \begin{thm} \label{slopeM} Let $E$ be an elliptic curve given by an equation $y^2=f(x)$, where $f(x)$ is as in \eqref{a2a4} \begin{itemize} \item[(i)] If $a_2\ne 0$, then $E(K)$ contains exactly two points of order $3$, namely, $(\sqrt[3]{-a_6}, \pm \sqrt{f(\sqrt[3]{-a_6})})$. \item[(ii)] If $a_2 = 0$, then $E(K)$ does not contain points of order $3$. \end{itemize} \end{thm} \begin{proof} Let us prove (i). Let $a_2\neq0$. By Theorem \ref{slopeT}, a point $P=(x_0,y_0)$ has order $3$ if and only if $\tilde l^2=a_2$, where $\tilde l$ is the slope of the tangent line to $E$ at $P$. We have $$\tilde l^2=\left(\frac{f^{\prime}(x_0)}{2y_0}\right)^2=\frac{(3x_0^2+2a_2x_0+a_4)^2}{4f(x_0)}= \frac{(-a_2x_0+a_4)^2}{f(x_0)}.$$ So we have to find all $x_0$ such that $$\frac{(-a_2x_0+a_4)^2}{f(x_0)}=a_2,$$ i.e., $$(-a_2x_0+a_4)^2-a_2(x_0^3+a_2x_0^2+a_4x_0+a_6)=0,$$ which is equivalent to $$-a_2(x^3+a_6)-a_4^2=0.$$ Since $a_2\neq0$, the latter equation has only one solution $x_0=-\sqrt[3]{(a_6+a_4^2)/a_2}$. So we have two points of order $3$, namely, $(-\sqrt[3]{(a_6+a_4^2)/a_2},\sqrt{f(-\sqrt[3]{(a_6+a_4^2)/a_2})})$ and $(-\sqrt[3]{(a_6+a_4^2)/a_2},-\sqrt{f(-\sqrt[3]{(a_6+a_4^2)/a_2})})$. Now let us prove (ii). By Theorem \ref{slopeT}, a point $P=(x_0,y_0)$ has order $3$ if and only if $$\frac{(a_2x_0+a_4)^2}{4f(x_0)}=a_2.$$ Since $a_2=0$, we get $$\frac{( a_4)^2}{4f(x_0)}=0.$$ But the latter equation has no solutions since $a_0$ and $a_4$ cannot be zero simultaneously. \end{proof} \begin{cor} Let $E$ be an elliptic curve over $K$. Then $E(K)$ contains a point of order $3$ if and only if $E$ is not isomorphic to the elliptic curve $$\mathbf{E}:y^2=x^3-x.$$ In particular, up to an isomorphism, there is exactly one supersingular elliptic curve, namely, $\mathbf{E}$. 
\end{cor} \begin{proof} Since $\fchar(K)\ne 2$ and $K$ is algebraically closed, $E$ may be represented in the {\sl Legendre form}, i.e., $E$ is isomorphic to $E_{\lambda}:y^2=x(x-1)(x-\lambda)$ for a certain $\lambda \in K\setminus \{0,1\}$ \cite[Sect. 2.5.1, pp. 35--36]{Wash}. So, $\alpha_1=0, \alpha_2=1, \alpha_3=\lambda$ and $a_2=-(\alpha_1+\alpha_2+\alpha_3)=-(\lambda+1)$. Theorem \ref{slopeT} and Corollary \ref{slopeM} imply that $E_{\lambda}(K)$ does {\sl not} contain a point of order 3 if and only if $\alpha_1+\alpha_2+\alpha_3=0$, i.e., $\lambda+1=0, \lambda=-1$. But $E_{-1}:y^2=x(x-1)(x+1)$ is nothing else but $\mathbf{E}:y^2=x^3-x$. \end{proof} \begin{rem} See \cite{Tate} for a classification (up to an isomorphism) of elliptic curves over algebraically closed fields of characteristic 3. \end{rem} \end{comment} \section{Points of order $5$} \label{l5} The following assertion gives a description of points of order $5$ on elliptic curves. \begin{prop}\label{order5} Let $P=(x_0,y_0)\in E(K)$. The point $P$ has order $5$ if and only if, for any permutation $i,j,k$ of $1,2,3$, one can choose square roots $r_i=\sqrt{x_0-\alpha_i}$ and $r_i^{(1)}=\sqrt{(r_i+r_j)(r_i+r_k)}$ in such a way that \begin{equation} \label{ORDER5} \begin{aligned} (r_1 r_2+r_2 r_3+r_3 r_1)+(r_1^{(1)} r_2^{(1)}+r_2 ^{(1)}r_3^{(1)}+r_3^{(1)} r_1^{(1)})=0,\\ r_1 r_2+r_2 r_3+r_3 r_1\ne 0. \end{aligned} \end{equation} \end{prop} \begin{rem} Notice that if we drop the condition $r_1r_2r_3=-y_0$ in formulas \eqref{x1} and \eqref{chap}, then we get $8$ points $Q$ such that $2Q=\pm P$. Similarly, if we drop the conditions $r_1r_2r_3=-y_0$, $r_1^{(1)} r_2^{(1)} r_3^{(1)}=(r_1+r_2)(r_2+r_3)(r_3+r_1)$ in the formulas \eqref{1/4}, then we obtain all points $R$ for which $4R=\pm P$. \end{rem} \begin{proof}[Proof of Proposition \ref{order5}] Suppose that $P$ has order $5$. Then $-P$ is a $1/4$th of $P$. 
Therefore there exist $r_i$ and $r_i^{(1)}$ such that $$x(-P)=x(P)+(r_1 r_2+r_2 r_3+r_3 r_1)+(r_1^{(1)} r_2^{(1)}+r_2 ^{(1)}r_3^{(1)}+r_3^{(1)} r_1^{(1)}).$$ Since $x(P)=x(-P)$, we have $$(r_1 r_2+r_2 r_3+r_3 r_1)+(r_1^{(1)} r_2^{(1)}+r_2 ^{(1)}r_3^{(1)}+r_3^{(1)} r_1^{(1)})=0.$$ On the other hand, if $r_1 r_2+r_2 r_3+r_3 r_1=0$, then the corresponding $Q$ (with $2Q=P$) satisfies $$x(Q)=x(P)+(r_1 r_2+r_2 r_3+r_3 r_1)=x(P)$$ and therefore $Q=P$ or $-P$. Since $2Q=P$, either $P=2P$ (when $Q=P$) or $Q=-P=-2Q$. Clearly, $P\ne 2P$. If $Q=-2Q$, then $Q$ has order dividing $3$, which is not true, because $Q=-P$ has order $5$. The contradiction obtained proves that $r_1 r_2+r_2 r_3+r_3 r_1 \ne 0$. Conversely, suppose there exist square roots $$r_i=\sqrt{x_0-\alpha_i} \ \text{ and } \ r_i^{(1)}=\sqrt{(r_i+r_j)(r_i+r_k)}$$ that satisfy (\ref{ORDER5}). Replacing if necessary all $r_i$ by $-r_i$, we may and will assume that $r_1 r_2 r_3=-y(P)$. Let $Q=(x(Q),y(Q))$ be the corresponding half of $P$ with $x(Q)=x(P)+(r_1 r_2+r_2 r_3+r_3 r_1)$. Since $r_1 r_2+r_2 r_3+r_3 r_1\ne 0$, we have $x(Q)\ne x(P)$; in particular, $Q \ne -P$. Replacing if necessary all $r_i^{(1)}$ by $-r_i^{(1)}$, we may and will assume that $$r_1^{(1)} r_2^{(1)} r_3^{(1)}=(r_1+r_2)(r_2+r_3)(r_3+r_1)=-y(Q).$$ Let $R=(x(R),y(R))$ be the corresponding half of $Q$. Then $4R=2(2R)=2Q=P$ and $$x(R)=x(P)+(r_1 r_2+r_2 r_3+r_3 r_1)+(r_1^{(1)} r_2^{(1)}+r_2 ^{(1)}r_3^{(1)}+r_3^{(1)} r_1^{(1)})=x(P).$$ This means that either $R=P$ or $R=-P$. If $R=P$, then $R=4R$ and $R$ has order $3$. This implies that both $Q=2R$ and $P=4R$ also have order $3$. It follows that $P=-Q$, which is not the case. Therefore $R=-P$. This means that $R=-4R$, i.e., $R$ has order $5$ and therefore $P=-R$ also has order $5$. \end{proof} In what follows we will use the following identities in the polynomial ring $\mathbb{Z}[t_1,t_2,t_3]$ that could be checked either directly or by using {\bf magma}.
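Both identities can also be verified by a short script instead of {\bf magma}: every side has degree at most $4$ in each $t_i$, so agreement on a $5\times 5\times 5$ integer grid already proves the identities over $\mathbb{Z}$. A sketch (the signs asserted below are the ones under which the identities hold):

```python
# exhaustive check of the two identities in Z[t1,t2,t3] on a 5x5x5 grid;
# since each side has degree <= 4 in each variable, this proves them
def S(t1, t2, t3):
    b1 = -t1*t1 + t2*t2 + t3*t3
    b2 = t1*t1 - t2*t2 + t3*t3
    b3 = t1*t1 + t2*t2 - t3*t3
    return b1*b2 + b2*b3 + b3*b1

for t1 in range(5):
    for t2 in range(5):
        for t3 in range(5):
            heron = (t1+t2+t3) * (-t1+t2+t3) * (t1-t2+t3) * (t1+t2-t3)
            assert S(t1, t2, t3) == heron
            lhs = S(t1, t2, t3) + 4*t1*t2*t3*(t1 + t2 + t3)
            cubic = (t1**3 + t2**3 + t3**3
                     - t1*t1*t2 - t1*t2*t2 - t2*t2*t3 - t2*t3*t3
                     - t1*t1*t3 - t1*t3*t3 - 2*t1*t2*t3)
            assert lhs == -(t1 + t2 + t3) * cubic
```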
\begin{equation} \label{M0} \begin{aligned} (-t_1^2+t_2^2+t_3^2) (t_1^2-t_2^2+t_3^2)+ (t_1^2-t_2^2+t_3^2) ( t_1^2+t_2^2-t_3^2)\\+( t_1^2+t_2^2-t_3^2)(-t_1^2+t_2^2+t_3^2)=\\ (t_1+t_2+t_3)(-t_1+t_2+t_3)(t_1-t_2+t_3)(t_1+t_2-t_3), \end{aligned} \end{equation} \begin{equation} \label{M1} \begin{aligned} (-t_1^2+t_2^2+t_3^2) (t_1^2-t_2^2+t_3^2)+ (t_1^2-t_2^2+t_3^2) ( t_1^2+t_2^2-t_3^2)\\+( t_1^2+t_2^2-t_3^2)(-t_1^2+t_2^2+t_3^2)+ 4t_1^2t_2t_3 +4t_1t_2^2 t_3+ 4t_1t_2t_3^2\\= -\left(t_1^4+t_2^4+t_3^4-2t_1^2t_2^2-2t_2^2t_3^2-2t_1^2t_3^2-4t_1^2t_2t_3-4t_1t_2^2t_3 -4t_1t_2t_3^2\right)\\= -(t_1+t_2+t_3)\left(t_1^3 +t_2^3+t_3^3-t_1^2t_2-t_1t_2^2-t_2^2t_3-t_2t_3^2-t_1^2t_3- t_1t_3^2-2t_1t_2t_3\right). \end{aligned} \end{equation} \begin{thm} \label{Ea1a2a3} Let $a_1, a_2, a_3$ be elements of $K$ such that $ \pm a_1, \pm a_2, \pm a_3 $ are six distinct elements of $K$ and none of the three elements $$\beta_1=-a_1^2+a_2^2+a_3^2, \beta_2=a_1^2-a_2^2+a_3^2, \beta_3=a_1^2+a_2^2-a_3^2$$ vanishes. Then the following conditions hold. \begin{itemize} \item[(i)] None of $a_i$ vanishes and $\beta_1^2, \beta_2^2, \beta_3^2$ are three distinct elements of $K$. \item[(ii)] Let us consider an elliptic curve $$E_{5;a_1,a_2,a_3}: y^2=\left(x+\frac{\beta_1^2}{4}\right)\left(x+\frac{\beta_2^2}{4}\right)\left(x+\frac{\beta_3^2}{4}\right)$$ with $P=(0, -\beta_1 \beta_2 \beta_3/8) \in E_{5;a_1,a_2,a_3}(K)$. Then $P$ enjoys the following properties. \begin{enumerate} \item $P \in 2 E_{5;a_1,a_2,a_3}(K)$. \item Assume that \begin{equation} \label{Aord5} \begin{aligned} a_1^3 +a_2^3+a_3^3-a_1^2a_2-a_1a_2^2-a_2^2a_3-a_2a_3^2-a_1^2a_3- a_1a_3^2-2a_1a_2a_3=0,\\ (a_1+a_2+a_3)(a_1-a_2-a_3)(a_1+a_2-a_3)(a_1-a_2+a_3)\neq 0.\end{aligned} \end{equation} Then $P$ has order $5$. In addition, $E_{5;a_1,a_2,a_3}(K)$ contains a subgroup isomorphic to $\mathbb{Z}/10\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$. \end{enumerate} \end{itemize} \end{thm} \begin{proof} (i) Since $a_i \ne - a_i$, none of $a_i$ vanishes.
Let $i,j \in\{1,2,3\}$ be two distinct indices and $k \in\{1,2,3\}$ be the third one. Then $$\beta_i-\beta_j=2(a_j^2-a_i^2) \ne 0, \ \beta_i+\beta_j=2 a_k^2 \ne 0.$$ This implies that $\beta_i^2 \ne \beta_j^2$. (ii) Keeping our notation, we obtain that $$r_1=\pm \frac{\beta_1}{2}=\pm\frac{-a_1^2+a_2^2+a_3^2}{2}, r_2=\pm \frac{\beta_2}{2}=\pm\frac{a_1^2-a_2^2+a_3^2}{2}, r_3=\pm \frac{\beta_3}{2}=\pm \frac{a_1^2+a_2^2-a_3^2}{2},$$ $$r_i^{(1)}=\pm\sqrt{(r_i+r_j)(r_i+r_k)}$$ where $i,j,k$ is any permutation of $1,2,3$. Thanks to Proposition \ref{order5}, it suffices to check that one may choose the square roots $r_i$ and $r_i^{(1)}$ in such a way that $r_1 r_2+r_2 r_3+r_3 r_1\ne 0$ and \begin{equation} \label{ast} (r_1 r_2+r_2 r_3+r_3 r_1)+(r_1^{(1)} r_2^{(1)}+r_2 ^{(1)}r_3^{(1)}+r_3^{(1)} r_1^{(1)})=0. \end{equation} Let us put $$r_i=\frac{\beta_i}{2}=\frac{-a_i^2+a_j^2+a_k^2}{2}.$$ We have $$r_1+r_2=a_3^2,\quad r_1+r_3=a_2^2,\quad r_2+r_3=a_1^2. $$ It follows that $$(r_1^{(1)})^2=a_2^2a_3^2,\quad (r_2^{(1)})^2=a_1^2a_3^2,\quad (r_3^{(1)})^2=a_1^2a_2^2.$$ Let us put $$r_1^{(1)}=a_2a_3, \quad r_2^{(1)}=a_1a_3,\quad r_3^{(1)}=a_1a_2.$$ Then the condition (\ref{ast}) may be rewritten as follows. $$\begin{aligned}&(-a_1^2+a_2^2+a_3^2) (a_1^2-a_2^2+a_3^2)+ (a_1^2-a_2^2+a_3^2) ( a_1^2+a_2^2-a_3^2)\\+&( a_1^2+a_2^2-a_3^2)(-a_1^2+a_2^2+a_3^2)+ 4a_1^2a_2a_3 +4a_1a_2^2 a_3+ 4a_1a_2a_3^2=0.
\end{aligned}$$ \begin{comment} i.e., \begin{equation} \label{doubleAST} a_1^4+a_2^4+a_3^4-2a_1^2a_2^2-2a_2^2a_3^2-2a_1^2a_3^2-4a_1^2a_2a_3-4a_1a_2^2a_3 -4a_1a_2a_3^2=0.\end{equation} The left hand side of (\ref{doubleAST}) splits into a product $$(a_1+a_2+a_3)(a_1^3 +a_2^3+a_3^3-a_1^2a_2-a_1a_2^2-a_2^2a_3-a_2a_3^2-a_1^2a_3- a_1a_3^2-2a_1a_2a_3)$$ \end{comment} In light of (\ref{M1}), the condition (\ref{ast}) may be rewritten as $$(a_1+a_2+a_3)(a_1^3 +a_2^3+a_3^3-a_1^2a_2-a_1a_2^2-a_2^2a_3-a_2a_3^2-a_1^2a_3- a_1a_3^2-2a_1a_2a_3)=0.$$ The latter equality follows readily from the assumption (\ref{Aord5}) of Theorem. By Proposition \ref{order5}, it suffices to check that $r_1r_2+r_2r_3+r_3r_1\ne 0$. In other words, we need to prove that \begin{equation} \label{r1r2r3} \begin{aligned}(-a_1^2+a_2^2+a_3^2)(a_1^2-a_2^2+a_3^2)+(a_1^2-a_2^2+a_3^2) ( a_1^2+a_2^2-a_3^2)\\+ ( a_1^2+a_2^2-a_3^2)(-a_1^2+a_2^2+a_3^2)\ne 0.\end{aligned} \end{equation} In light of (\ref{M0}), this inequality is equivalent to \begin{comment} Opening brackets in (\ref{r1r2r3}) we get the equivalent inequality $$a_1^4+a_2^4+a_3^4-2a_1^2a_2^2-2a_2^2a_3^2-2a_1^2a_3^2\ne 0,$$ whose right hand side splits into a product and we get the equivalent inequality \end{comment} $$ (a_1+a_2+a_3)(a_1-a_2-a_3)(a_1+a_2-a_3)(a_1-a_2+a_3) \ne0.$$ But the latter inequality holds, by the assumption (\ref{Aord5}) of the theorem. Hence, $P$ has order $5$. Clearly, $P$ and all points of order $2$ generate a subgroup that is isomorphic to $\mathbb{Z}/10\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$. \end{proof} \begin{thm} \label{twist5} Let $E$ be an elliptic curve over $K$. Then the following conditions are equivalent. \begin{itemize} \item[(i)] $E(K)$ contains a subgroup isomorphic to $\mathbb{Z}/10\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$. 
\begin{comment} There is an elliptic curve $E^{\prime}$ over $K$ that is isomorphic to $E$ either over $K$ or over a quadratic extension of $K$ and such that $E^{\prime}(K)$ contains a subgroup isomorphic to $\mathbb{Z}/10\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$. \end{comment} \item[(ii)] There exists a triple $\{a_1,a_2,a_3\}\subset K$ that satisfies all the conditions of Theorem \ref{Ea1a2a3}, including (\ref{Aord5}), and such that $E$ is isomorphic to $E_{5;a_1,a_2,a_3}$. \end{itemize} \end{thm} \begin{proof} (i) follows from (ii), thanks to Theorem \ref{Ea1a2a3}. Suppose (i) holds. In order to prove (ii) it suffices to check that $E$ is isomorphic to a certain $E_{5;a_1,a_2,a_3}$ over $K$. We may assume that $E$ is defined by an equation of the form \eqref{E2}. Since $E(K)$ contains a point of order $5$, after a translation of $x$ we may also assume that there is a point $P=(0,y(P))\in E(K)$ of order $5$. Then $P=4(-P)$ is divisible by $4$ in $E(K)$. This implies the existence of square roots $r_i=\sqrt{-\alpha_i}\in K$ and $r_i^{(1)}=\sqrt{(r_i+r_j)(r_i+r_k)}\in K$ such that $$ x(-P)=x(P)+(r_1 r_2+r_2 r_3+r_3 r_1)+(r_1^{(1)} r_2^{(1)}+r_2 ^{(1)}r_3^{(1)}+r_3^{(1)} r_1^{(1)}),$$ $$r_1^{(1)} r_2^{(1) }r_3^{(1)}=(r_1+r_2)(r_2+r_3)(r_3+r_1).$$ Since $x(-P)=x(P)=0$, \begin{equation} \label{R5} (r_1 r_2+r_2 r_3+r_3 r_1)+(r_1^{(1)} r_2^{(1)}+r_2 ^{(1)}r_3^{(1)}+r_3^{(1)} r_1^{(1)})=0. \end{equation} Since the order of $P$ is {\sl not} $3$, \begin{equation} \label{NOT3} r_1 r_2+r_2 r_3+r_3 r_1\ne 0. \end{equation} Recall that none of $r_i+r_j$ vanishes. Let us choose square roots $$b_1=\sqrt{r_2+r_3}, b_2=\sqrt{r_1+r_3}, b_3=\sqrt{r_1+r_2}$$ in such a way that $r_1^{(1)} = b_2 b_3, r_2^{(1)} = b_3 b_1$. Since $$r_1^{(1)} r_2^{(1) }r_3^{(1)}= b_1^2b_2^2b_3^2=(b_1b_2 b_3)^2,$$ we conclude that $$r_3^{(1)} =\frac{r_1^{(1)} r_2^{(1) }r_3^{(1)}}{r_1^{(1)} r_2^{(1) }}=\frac{(b_1b_2 b_3)^2}{(b_2b_3)(b_3b_1)}=b_1b_2.$$ We obtain that \begin{equation} \label{b1b2b3} r_1^{(1)} = b_2 b_3, r_2^{(1)} = b_3 b_1, r_3^{(1)}=b_1b_2.
\end{equation} Unfortunately, $b_i$ do not have to lie in $K$. However, all the ratios $b_i/b_j$ lie in $K^{*}$. We have $$r_2+r_3=b_1^2, r_1+r_3=b_2^2, r_1+r_2=b_3^2$$ and therefore \begin{equation} \label{r5F} \begin{aligned} &r_1=\frac{-b_1^2+b_2^2+b_3^2}{2}, \ r_2=\frac{b_1^2-b_2^2+b_3^2}{2}, \ r_3=\frac{b_1^2+b_2^2-b_3^2}{2},\\ &\alpha_1=-r_1^2=-\frac{(-b_1^2+b_2^2+b_3^2)^2}{4}, \ \alpha_2=-r_2^2=-\frac{(b_1^2-b_2^2+b_3^2)^2}{4},\\ &\alpha_3=-r_3^2=-\frac{(b_1^2+b_2^2-b_3^2)^2}{4}, \\ &P=(0, -(r_1+r_2)(r_2+r_3)(r_3+r_1))=(0, -b_1^2 b_2^2 b_3^2)\in E(K). \end{aligned} \end{equation} Since none of $r_i$ vanishes, we get $$-b_1^2+b_2^2+b_3^2\ne 0, \ b_1^2-b_2^2+b_3^2\ne 0, \ b_1^2+b_2^2-b_3^2\ne 0.$$ Let us put $$\gamma_1=-b_1^2+b_2^2+b_3^2, \gamma_2=b_1^2-b_2^2+b_3^2, \gamma_3=b_1^2+b_2^2-b_3^2.$$ It follows from Theorem \ref{Ea1a2a3}(i) that all $\gamma_i$ are {\sl distinct} nonzero elements of $K$. The inequality (\ref{NOT3}) combined with the first formula of (\ref{r5F}) gives us \begin{equation*} \begin{aligned}(-b_1^2+b_2^2+b_3^2)(b_1^2-b_2^2+b_3^2)+(b_1^2-b_2^2+b_3^2) ( b_1^2+b_2^2-b_3^2)\\+ ( b_1^2+b_2^2-b_3^2)(-b_1^2+b_2^2+b_3^2)\ne 0,\end{aligned} \end{equation*} which is equivalent (thanks to (\ref{M0})) to $$ (b_1+b_2+b_3)(b_1-b_2-b_3)(b_1+b_2-b_3)(b_1-b_2+b_3)\ne 0.$$ In particular, $$b_1+b_2+b_3 \ne 0.$$ The equality (\ref{R5}) gives us (thanks to (\ref{M1})) $$(b_1+b_2+b_3)(b_1^3 +b_2^3+b_3^3-b_1^2b_2-b_1b_2^2-b_2^2b_3-b_2b_3^2-b_1^2b_3- b_1b_3^2-2b_1b_2b_3)=0,$$ i.e., $$(b_1^3 +b_2^3+b_3^3-b_1^2b_2-b_1b_2^2-b_2^2b_3-b_2b_3^2-b_1^2b_3- b_1b_3^2-2b_1b_2b_3)=0.$$ Let us put $$a_1=\frac{b_1}{b_3}, \ a_2=\frac{b_2}{b_3}, \ a_3=\frac{b_3}{b_3}=1.$$ All $a_i$ lie in $K$. Clearly, the triple $\{a_1,a_2,a_3\}$ satisfies all the conditions of Theorem \ref{Ea1a2a3} including (\ref{Aord5}).
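Such triples are easy to exhibit over finite fields. For instance, over $K=\mathbb{F}_{13}$ the (sample) triple $(a_1,a_2,a_3)=(9,6,1)$ satisfies (\ref{Aord5}), and the point $P=(0,-\beta_1\beta_2\beta_3/8)$ indeed has order $5$ on $E_{5;9,6,1}$; a sketch:

```python
# sanity check over F_13: (a1,a2,a3) = (9,6,1) satisfies (Aord5), and
# P = (0, -b1*b2*b3/8) has order 5 on E_{5;a1,a2,a3}
p = 13
a1, a2, a3 = 9, 6, 1
cubic = (a1**3 + a2**3 + a3**3 - a1*a1*a2 - a1*a2*a2 - a2*a2*a3
         - a2*a3*a3 - a1*a1*a3 - a1*a3*a3 - 2*a1*a2*a3) % p
prod = ((a1+a2+a3) * (a1-a2-a3) * (a1+a2-a3) * (a1-a2+a3)) % p
assert cubic == 0 and prod != 0                 # the assumption (Aord5)

b1 = (-a1*a1 + a2*a2 + a3*a3) % p
b2 = (a1*a1 - a2*a2 + a3*a3) % p
b3 = (a1*a1 + a2*a2 - a3*a3) % p
inv4 = pow(4, p - 2, p)
e = [b*b*inv4 % p for b in (b1, b2, b3)]        # curve: y^2 = (x+e0)(x+e1)(x+e2)
A2 = sum(e) % p
A1 = (e[0]*e[1] + e[1]*e[2] + e[2]*e[0]) % p

def dbl(P):
    x0, y0 = P
    m = (3*x0*x0 + 2*A2*x0 + A1) * pow(2*y0, p - 2, p) % p
    x1 = (m*m - A2 - 2*x0) % p
    return x1, (m*(x0 - x1) - y0) % p

P = (0, -b1*b2*b3 * pow(8, p - 2, p) % p)
P2 = dbl(P)
P4 = dbl(P2)
assert P2[0] != P[0]                            # P does not have order 3
assert P4 == (P[0], (-P[1]) % p)                # 4P = -P, so P has order 5
```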
Let us put \begin{equation*} \begin{aligned} \beta_1=-a_1^2+a_2^2+a_3^2=\frac{\gamma_1}{b_3^2}=\frac{\gamma_1}{r_1+r_2},\\ \beta_2=a_1^2-a_2^2+a_3^2=\frac{\gamma_2}{b_3^2}=\frac{\gamma_2}{r_1+r_2}, \\ \beta_3=a_1^2+a_2^2-a_3^2=\frac{\gamma_3}{b_3^2}=\frac{\gamma_3}{r_1+r_2}. \end{aligned} \end{equation*} The equation of $E$ is $$y^2=\left(x+\frac{\gamma_1^2}{4}\right)\left(x+\frac{\gamma_2^2}{4}\right)\left(x+\frac{\gamma_3^2}{4}\right).$$ Then $E$ is isomorphic to \begin{equation*} \begin{aligned} E(r_1+r_2): {y^{\prime}}^2=\left(x^{\prime}+\frac{\gamma_1^2}{4(r_1+r_2)^2}\right) \left(x^{\prime}+\frac{\gamma_2^2}{4(r_1+r_2)^2}\right) \left(x^{\prime}+\frac{\gamma_3^2}{4(r_1+r_2)^2}\right)=\\ \left(x^{\prime}+\frac{\beta_1^2}{4}\right)\left(x^{\prime}+\frac{\beta_2^2}{4}\right)\left(x^{\prime}+\frac{\beta_3^2}{4}\right). \end{aligned} \end{equation*} Clearly, $E(r_1+r_2)$ coincides with $E_{5;a_1,a_2,a_3}$. \begin{comment} Dividing the equation by $b_3^{12}=(r_1+r_2)^6$, we obtain that the map $$x,y \mapsto x^{\prime}=\frac{x}{r_1+r_2}, \ y^{\prime}=\frac{y}{r_1+r_2}$$ establishes a $K$-isomorphism between $E^{\prime}$ and an elliptic curve $$\frac{{y^{\prime}}^2}{r_1+r_2}=\left(x+\frac{\beta_1^2}{4}\right)\left(x+\frac{\beta_2^2}{4}\right)\left(x+\frac{\beta_3^2}{4}\right).$$ One has only to notice that the latter curve is isomorphic to $E_{5;a_1,a_2,a_3}$ over $K(b_3)=K(\sqrt{r_1+r_2})$. \end{comment} \end{proof} \begin{rem} \label{RLM5} Let $E_{5;a_1,a_2,a_3}$ be as in Theorem \ref{Ea1a2a3}. Clearly, $E_{5;a_1,a_2,a_3}(a_3)= E_{5;a_1/a_3,a_2/a_3,1}$. Let us put $\lambda=a_1/a_3, \mu=a_2/a_3$. Then \begin{equation} \label{ellLM} \begin{aligned} E_{5;a_1/a_3,a_2/a_3,1}= E_{5;\lambda,\mu,1}:\\ y^2=\left[x+\left(\frac{-\lambda^2+\mu^2+1}{2}\right)^2\right] \left[x+\left(\frac{\lambda^2-\mu^2+1}{2}\right)^2\right]\left[x+\left(\frac{\lambda^2+\mu^2-1}{2}\right)^2\right].
\end{aligned} \end{equation} The equation of (isomorphic) $E_{5;\lambda,\mu,1}\left(\frac{\lambda^2+\mu^2-1}{2}\right)$ is as follows. \begin{equation} E_{5;\lambda,\mu,1}\left(\frac{\lambda^2+\mu^2-1}{2}\right): y^2=\left[x+ \left(\frac{1-\lambda^2+\mu^2}{\lambda^2+\mu^2-1}\right)^2 \right] \left[x+ \left(\frac{ \lambda^2-\mu^2+1}{\lambda^2+\mu^2-1}\right)^2 \right](x+1). \end{equation} The conditions on $a_1,a_2,a_3$ may be rewritten in terms of $\lambda,\mu$ as follows. \begin{equation} \label{LMU} \begin{aligned} \lambda^3+\mu^3-\lambda^2\mu-\lambda\mu^2-\lambda^2-2\lambda\mu-\mu^2-\lambda-\mu+1=0,\\ \lambda\pm\mu\neq \pm1,\ \lambda\neq0, \ \mu\neq0,\ \lambda\neq\pm\mu,\\ \lambda^2+\mu^2\neq1,\ \lambda^2-\mu^2\neq\pm1. \end{aligned}\end{equation} The equality (\ref{LMU}) is equivalent to \begin{equation} \label{LMU1} (\lambda+\mu)(\lambda-\mu)^2-(\lambda+\mu)^2-(\lambda+\mu)+1=0. \end{equation} Multiplying (\ref{LMU1}) by (non-vanishing) $(\lambda+\mu)$, we get the equivalent equation \begin{equation} \label{LMU2} (\lambda^2-\mu^2)^2-(\lambda+\mu)^3-(\lambda+\mu)^2+(\lambda+\mu)=0. \end{equation} Let us make the change of variables $$\xi=\lambda+\mu, \eta=\lambda^2-\mu^2.$$ Then (\ref{LMU2}) may be rewritten as \begin{equation} \label{curve5} \eta^2=\xi(\xi^2+\xi-1), \end{equation} which is an (affine model of an) elliptic curve if $\fchar(K)\ne 5$ and a singular rational plane cubic (Cartesian leaf) if $\fchar(K)=5$. Since \begin{equation} \label{lmsquare} \lambda^2+\mu^2=\frac{(\lambda+\mu)^2+(\lambda-\mu)^2}{2}=\frac{\xi^2+\frac{\eta^2}{\xi^2}}{2}= \frac{\xi^2+\frac{\xi^2+\xi-1}{\xi}}{2}=\frac{\xi^3+\xi^2+\xi-1}{2\xi}, \end{equation} the only restrictions on $(\xi,\eta)$ besides the equality (\ref{curve5}) are the inequalities $$\xi(\xi^2+\xi-1)\ne 0,\pm 1; \ \xi^3+\xi^2+\xi-1 \ne 2\xi, \ \pm 1 \ne \frac{\eta}{\xi}=\sqrt{\frac{\xi(\xi^2+\xi-1)}{\xi^2}},$$ i.e. \begin{equation} \label{zapret5} \xi \ne 0,\pm 1, \frac{-1\pm \sqrt{5}}{2}. 
\end{equation} This means that \begin{equation} \label{frorbidKSI} (\xi,\eta) \not\in \{(0,0), (\pm1, \pm1), (\frac{-1\pm \sqrt{5}}{2},0)\}. \end{equation} In light of (\ref{lmsquare}), the equation (\ref{ellLM}) may be rewritten with coefficients being rational functions in $\xi,\eta$ (rather than $(\lambda,\mu)$) as follows. \begin{equation*} \mathcal{E}_{5,\xi,\eta}:y^2=\left[x+ \left(\frac{2\xi(1-\eta)}{\xi^3+\xi^2-\xi-1}\right)^2 \right] \left[x+ \left(\frac{2\xi(\eta+1)}{\xi^3+\xi^2-\xi-1}\right)^2 \right](x+1). \end{equation*} \end{rem} \begin{comment} \begin{rem} \label{RS5} Let us consider the family of elliptic curves \begin{equation} \mathcal{E}_{5,\lambda,\mu}: y^2=\left[x+ \left(\frac{1-\lambda^2+\mu^2}{\lambda^2+\mu^2-1}\right)^2 \right] \left[x+ \left(\frac{ \lambda^2-\mu^2+1}{\lambda^2+\mu^2-1}\right)^2 \right](x+1). \end{equation} where $\lambda, \mu\in K$ satisfy \begin{equation} \label{restriction5} \begin{aligned} (\lambda+\mu)(\lambda-\mu)^2-(\lambda+\mu)^2-(\lambda+\mu)+1=0,\\ \lambda,\mu\neq0,\pm1, \lambda\neq \pm\mu. \end{aligned}\end{equation} Then $$\mathcal{E}_{5,\lambda,\mu}=E_{5;\lambda,\mu,1}\left(\frac{\lambda^2+\mu^2-1}{2}\right).$$ \end{rem} \end{comment} \begin{thm} \label{family5} Let $E$ be an elliptic curve over $K$. Then the following conditions are equivalent. \begin{itemize} \item[(i)] $E(K)$ contains a subgroup isomorphic to $\mathbb{Z}/10\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$. \item[(ii)] There exist $(\xi, \eta)\in K^2$ that satisfy the equation (\ref{curve5}) and inequalities (\ref{frorbidKSI}) and such that $E$ is isomorphic to $\mathcal{E}_{5,\xi,\eta}$. \end{itemize} \end{thm} \begin{proof} The result follows from Theorem \ref{twist5} combined with Remark \ref{RLM5}. \end{proof} \begin{rem} In Theorem \ref{family5} we do {\sl not} assume that $\fchar(K)\ne{\it 5}!$ \end{rem} \begin{cor} Let $E$ be an elliptic curve over $\mathbb{F}_q$ with $q=13,17,19,23,25,27$. 
Then $E(\mathbb{F}_q)$ is isomorphic to $\mathbb{Z}/10\mathbb{Z}\oplus\mathbb{Z}/2\mathbb{Z}$ if and only if $E$ is isomorphic to one of the curves $\mathcal{E}_{5,\xi,\eta}$. \end{cor} \begin{proof} Suppose that $E(\mathbb{F}_q)$ is isomorphic to $\mathbb{Z}/10\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$. By Theorem \ref{family5}, $E$ is isomorphic to one of the elliptic curves $\mathcal{E}_{5,\xi,\eta}$. Conversely, suppose that $E$ is isomorphic to one of these curves. We need to prove that $E(\mathbb{F}_q)$ is isomorphic to $\mathbb{Z}/10\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$. By Theorem \ref{family5}, $E(\mathbb{F}_q)$ contains a subgroup isomorphic to $\mathbb{Z}/10\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$; in particular, $20$ divides $|E(\mathbb{F}_q)|$. In order to finish the proof, it suffices to check that $|E(\mathbb{F}_q)|<40$, but this inequality follows from the Hasse bound \eqref{HasseB} $$|E(\mathbb{F}_q)|\le q+2\sqrt{q}+1\le 27+2\sqrt{27}+1<40.$$ \end{proof} \begin{cor} Let $E$ be an elliptic curve over $\mathbb{F}_q$ with $q=31,37,41,43$. Then $E(\mathbb{F}_q)$ is isomorphic to $\mathbb{Z}/20\mathbb{Z}\oplus\mathbb{Z}/2\mathbb{Z}$ if and only if $E$ is isomorphic to one of the curves $\mathcal{E}_{5,\xi,\eta}$. \end{cor} \begin{proof} Suppose that $E(\mathbb{F}_q)$ is isomorphic to $\mathbb{Z}/20\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z};$ the latter contains a subgroup isomorphic to $\mathbb{Z}/10\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$. By Theorem \ref{family5}, $E$ is isomorphic to one of the elliptic curves $\mathcal{E}_{5,\xi,\eta}$. Conversely, suppose that $E$ is isomorphic to one of these curves. We need to prove that $E(\mathbb{F}_q)$ is isomorphic to $\mathbb{Z}/20\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$. By Theorem \ref{family5}, $E(\mathbb{F}_q)$ contains a subgroup isomorphic to $\mathbb{Z}/10\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$; in particular, $20$ divides $|E(\mathbb{F}_q)|$. 
It follows from the Hasse bound \eqref{HasseB} that $$20<31-2\sqrt{31}+1 \le |E(\mathbb{F}_q)|\le 43+2\sqrt{43}+1<60.$$ This implies that $|E(\mathbb{F}_q)|=40$, and therefore $E(\mathbb{F}_q)$ is isomorphic to a direct sum of $\mathbb{Z}/5\mathbb{Z}$ and the order 8 abelian group $E(\mathbb{F}_q)(2)$; in addition, the latter group is isomorphic to a direct sum of two cyclic groups of even order (because it contains a subgroup isomorphic to $\mathbb{Z}/2\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$). This implies that $E(\mathbb{F}_q)(2)$ is isomorphic to $\mathbb{Z}/4\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$. It follows that $E(\mathbb{F}_q)$ is isomorphic to a direct sum $$\mathbb{Z}/5\mathbb{Z}\oplus \mathbb{Z}/4\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}\cong \mathbb{Z}/20\mathbb{Z}\oplus\mathbb{Z}/2\mathbb{Z}.$$ \end{proof} \begin{cor} Let $E$ be an elliptic curve over $\mathbb{F}_q$ with $q=59$ or $61$. Then $E(\mathbb{F}_q)$ is isomorphic to $\mathbb{Z}/30\mathbb{Z}\oplus\mathbb{Z}/2\mathbb{Z}$ if and only if $E$ is isomorphic to one of the curves $\mathcal{E}_{5,\xi,\eta}$. \end{cor} \begin{proof} Suppose that $E(\mathbb{F}_q)$ is isomorphic to $\mathbb{Z}/30\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$; the latter contains a subgroup isomorphic to $\mathbb{Z}/10\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$. By Theorem \ref{family5}, $E$ is isomorphic to one of the elliptic curves $\mathcal{E}_{5,\xi,\eta}$. Conversely, suppose that $E$ is isomorphic to one of these curves. We need to prove that $E(\mathbb{F}_q)$ is isomorphic to $\mathbb{Z}/30\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$. By Theorem \ref{family5}, $E(\mathbb{F}_q)$ contains a subgroup isomorphic to $\mathbb{Z}/10\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$; in particular, $20$ divides $|E(\mathbb{F}_q)|$. 
It follows from the Hasse bound \eqref{HasseB} that $$40<59-2\sqrt{59}+1\le |E(\mathbb{F}_q)|< 61+2\sqrt{61}+1<80.$$ This implies that $|E(\mathbb{F}_q)|=60$; in particular, $E(\mathbb{F}_q)$ contains a subgroup isomorphic to $\mathbb{Z}/3\mathbb{Z}$. This implies that $E(\mathbb{F}_q)$ contains a subgroup isomorphic to $$(\mathbb{Z}/10\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z})\oplus \mathbb{Z}/3\mathbb{Z}\cong \mathbb{Z}/30\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z};$$ the order of this subgroup is 60, i.e., it coincides with the order of the whole group $E(\mathbb{F}_q)$. \end{proof} \begin{thm} \label{familyQu5} Let $K$ be a quadratic field and $E$ be an elliptic curve over $K$. Then the following conditions are equivalent. \begin{itemize} \item[(i)] The torsion subgroup $E(K)_t$ of $E(K)$ is isomorphic to $\mathbb{Z}/10\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$. \item[(ii)] There exist $(\xi, \eta)\in K^2$ that satisfy the equation (\ref{curve5}) and inequalities (\ref{frorbidKSI}) and such that $E$ is isomorphic to $\mathcal{E}_{5,\xi,\eta}$. \end{itemize} \end{thm} \begin{proof} By Theorem \ref{Kquad}, if $E(K)$ contains a subgroup isomorphic to $\mathbb{Z}/10\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$ then $E(K)_t$ is isomorphic to $\mathbb{Z}/10\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$. Now the desired result follows from Theorem \ref{family5}. \end{proof}
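The Hasse-interval arguments in the three corollaries above can be double-checked numerically. The snippet below is an illustrative check written for this note, not part of the proofs: it confirms that for each listed $q$ the only multiple of $20$ in the Hasse interval $[q+1-2\sqrt{q},\, q+1+2\sqrt{q}]$ is the claimed group order, and it brute-force counts points on a sample member of the family over $\mathbb{F}_{13}$. The sample parameters $(\xi,\eta)=(2,6)$ on $\eta^2=\xi(\xi^2+\xi-1)$ and the resulting curve $y^2=(x+12)(x+10)(x+1)$, obtained from $(\lambda,\mu)=(9,6)$ via the twist of Remark \ref{RLM5}, were worked out by hand for this check; they do not appear in the text.

```python
# Hasse-interval check: which multiples of 20 can be the group order for a
# curve over F_q that contains a subgroup of order 20?
def multiples_of_20_in_hasse_interval(q):
    lo, hi = q + 1 - 2*q**0.5, q + 1 + 2*q**0.5
    return [n for n in range(20, int(hi) + 1, 20) if lo <= n <= hi]

assert all(multiples_of_20_in_hasse_interval(q) == [20] for q in (13, 17, 19, 23, 25, 27))
assert all(multiples_of_20_in_hasse_interval(q) == [40] for q in (31, 37, 41, 43))
assert all(multiples_of_20_in_hasse_interval(q) == [60] for q in (59, 61))

def count_points(p, f):
    # Naive point count of y^2 = f(x) over F_p, including the point at infinity.
    squares = {(y*y) % p for y in range(p)}
    n = 1  # point at infinity
    for x in range(p):
        v = f(x) % p
        n += 1 if v == 0 else (2 if v in squares else 0)
    return n

# Sample member over F_13 (hand-derived from (xi, eta) = (2, 6)); the first
# corollary predicts exactly 20 rational points.
assert count_points(13, lambda x: (x + 12)*(x + 10)*(x + 1)) == 20
```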
\section{Introduction} \label{sec:intro} With computational methods, tools and workflows becoming ubiquitous in more and more disciplines, the software applications and user communities on HPC platforms are rapidly growing more diverse. Many of the \nth{2} generation HPC applications~\cite{weidner2016rethinking} have moved beyond tightly-coupled, compute-centric methods and algorithms and embrace more heterogeneous, multi-component workflows, which involve adaptive, dynamic, computation and data-centric methodologies. While diverging from the traditional HPC application profiles, many of these applications still rely on the large numbers of tightly coupled cores, cutting-edge hardware and advanced interconnect topologies provided by HPC clusters. Examples of \nth{2} generation applications are user-level scheduling frameworks like pilot jobs, and applications with dynamic or hard-to-predict runtime trajectories, such as Kalman filter and Adaptive Mesh Refinement (AMR) applications. The more traditional HPC applications and frameworks like MPI have also started to explore adaptive techniques to scale up on platforms that are continuously growing in capacity. For these applications, running at extreme scales bears a twofold risk: a statistically increased risk of hardware and software failure, and increasing costs in case of application failure. Implementing adaptivity and resilience can alleviate these risks. For example, an application that understands its performance profile with a given configuration might decide to terminate early or adjust when it detects inefficient execution, e.g., due to excessive swapping or slow I/O. Most of these dynamic and adaptive techniques require the applications to have a model of themselves (self awareness) and of their environment (context awareness). With such a model, applications can implement mechanisms like feedback loops to validate their execution parameters and trajectory, and to react and adjust according to their objectives. 
Telemetry data comprises the continuous streams of run-time information generated by HPC systems and by the services and applications running on them. It includes operating system metrics at the process and thread level, metrics describing the state of I/O resources, network interconnects, and storage facilities, as well as metrics describing the state of job schedulers and other HPC services. In short, telemetry data integrates all the information that is generated \textit{about} platforms and applications. It is distinct from the data that is generated \textit{by} the applications, which we refer to as application data. Existing approaches to context awareness and to the management and provisioning of telemetry data are scattered throughout the application and infrastructure landscape. None are comprehensive across platforms, environments and applications. This causes significant development overheads, with duplication of localized solutions that reduce portability and mobility. It impedes the development and adoption of adaptive, context aware strategies and architectures. From our perspective, a comprehensive and unifying framework for telemetry data management must be provided by future HPC platforms as a system service to facilitate a more efficient application development lifecycle, and a swift adoption of adaptive application research into production. \subsection{Approach and Contributions} We propose a solution to the provisioning and integration of telemetry data on HPC platforms. This is important and timely because an increasing number of HPC applications rely on it to implement context aware, dynamic and adaptive execution strategies. We are not aware of any other solution emerging. This paper introduces \textsc{Seastar}{}, a model, API, and implementation blueprint that facilitates the collection, management and use of telemetry data on HPC platforms, and simplifies the development of context aware HPC applications. 
This paper makes conceptual and practical contributions to HPC platform and application design: \begin{enumerate} [itemsep=0mm] \item{It develops a graph-based model called \textsc{Seastar}{} that captures telemetry data within a dynamic graph representing the continuously changing application and platform structure of an HPC cluster (\cref{sec:model}). } \item{It defines a programming interface (API) for applications and system services to query and analyze platform and application structure and telemetry data as a core concept to simplify the development of adaptive applications (\cref{sec:api}). } \item{It describes an architecture blueprint for a framework that implements \textsc{Seastar}{} on an existing HPC cluster (\cref{sec:implementation}). } \end{enumerate} This paper is structured as follows: \Cref{sec:background} discusses concepts related to telemetry data. \Cref{sec:usecases} presents application use-cases and challenges. \Cref{sec:model} introduces \textsc{Seastar}{}, a graph-based model that captures and organizes telemetry data. \Cref{sec:api} describes the API for applications and platform services to interact with telemetry data. \Cref{sec:implementation} lays out a blueprint for an implementation of \textsc{Seastar}{}. \Cref{sec:conclusion} presents plans to evaluate \textsc{Seastar}{} and discusses future research into telemetry data management at scale and in distributed contexts. \section{Background} \label{sec:background} In~\cite{weidner2016rethinking} we have argued that bringing together application developers with HPC-resource providers on both technical and cultural levels is a big challenge with substantial \textit{potential} benefits. The prevailing separation between the two communities is the main cause for the lack of interfaces and information flow across the application-platform divide. 
Similar observations can be found in~\cite{fialho2014framework}, where Fialho \textit{et al.} point out the lack of a common framework for telemetry data: many HPC performance optimization tools implement some or several aspects of the full performance optimization task, but almost none are comprehensive across architectures, environments, applications, and workloads. Similarly, {\'A}brah{\'a}m \textit{et al.}~\cite{abraham2015preparing} propose methodologies to efficiently collect run-time information as a preparation for autonomic exascale applications. \subsection{Application Areas} \label{sec:usecases} Use-cases for telemetry data are manifold and an exhaustive survey would not be feasible in this context. Here we lay out six high-level application areas for telemetry data in HPC along with brief examples to illustrate the broad landscape of telemetry data usage.\smallskip{} \noindent{}\textbf{Application Development Lifecycle} is an iterative process from concept to production. It requires profiling, collecting information about performance data, networking, and I/O patterns so that the application developer can decide between alternatives or fine-tune for a specific architecture. Profiling data is collected by instrumenting either the program source code, its binary executable, or its run-time environment. Especially during the development of large-scale parallel code, profiling tools like Vampir/NG~\cite{brunst2003distributed}, PAPI~\cite{browne2000portable}, and TAU(g)~\cite{huck2006taug} play a critical role in the optimization process. While all these tools collect large amounts of telemetry data, the data is not accessible outside these frameworks or programmatically during the runtime of an application.\smallskip{} \noindent{}\textbf{Adaptive Applications} have many application areas. 
Some of the more prominent examples are Adaptive Mesh Refinement (AMR) and Kalman-Filters which exhibit hard-to-predict execution trajectories and heterogeneous computational loads. When these are ignored, the performance of these applications can suffer significantly. Adaptivity is also needed to handle external factors, e.g., Eisenhauer \textit{et al.}~\cite{eisenhauer2009event} have shown how one application's massive I/O operations perturb the performance of other applications on the same system. Telemetry data is critical to implement adaptivity. \smallskip{} \noindent{}\textbf{Adaptive Runtime Systems} provide low-level load balancing and scaling capabilities for parallel and distributed applications. Adaptive MPI~\cite{huang2004adaptive} for example is an alternative run-time for MPI applications. Charm++~\cite{kale1993charm++} and Parallax/HPX~\cite{kaiser2009parallex} provide their own programming models and APIs. All frameworks collect telemetry data via operating system interfaces and evaluate them via a performance model to make (re-)scheduling decisions. However, the model and associated data is generally not easily accessible externally.\smallskip{} \noindent{}\textbf{Computational Steering} allows applications to be dynamically configured (steered) at run-time; as opposed to adaptive run-time systems where adaptivity is transparently provided by the underlying framework. Here the feedback loop is moved into the application space, which also requires context data available in application space. Hence steering frameworks often have a monitoring component, e.g., FALCON~\cite{gu1995falcon}, an on-line monitoring and steering framework for large-scale parallel applications, and \cite{eisenhauer1998object} an object-based infrastructure for program monitoring and steering.\smallskip{} \noindent{}\textbf{Resource Aware Scheduling} allows the (re-)scheduling of HPC workloads based on the observed resource utilization. 
I/O aware scheduling~\cite{zhou2015aware} for example, can control the status of jobs on the fly during execution based on run-time monitoring of system state and I/O activities. Another example is the \textit{COBALT} scheduler~\cite{tang2010analyzing}. In comparison, most existing HPC job schedulers employ a static, a priori performance model. Fluctuations in the performance metrics of a resource, e.g., disk or network I/O hotspots are not monitored or acted upon. While this works well with static and homogeneous workloads, it fails with the increasing presence of \nth{2} generation applications.\smallskip{} \noindent{}\textbf{Application-Level Scheduling} is a tactic to circumvent the static constraints and granularity of HPC job schedulers. A commonly used method is to employ \textit{pilot jobs} or ``placeholder jobs'' submitted as a single job to the job scheduler. Once they are active they accept user jobs that are then executed within the placeholder job. Examples of application-level scheduling frameworks are HTCondor~\cite{thain2005distributed} and RADICAL Pilot~\cite{merzky2015radical}. Most application-level scheduling systems collect telemetry data via operating-system interfaces to determine how to schedule their computational workload most efficiently and to detect errors. \subsection{Context Awareness} The term context awareness is often used in close proximity with monitoring and telemetry data. If we look again at the application areas in~\cref{sec:usecases}, all of them require some understanding of the HPC platform context, whether it is information about other applications running, the execution environment or the state of the platform and its components. The development of context-aware applications gained significance with the emergence of grid computing in the early 1990s when application developers and scientists had access to a growing distributed ecosystem of computational resources and federated HPC systems. 
While grids strove to unify access, job submission, and file transfer across systems, they did not provide abstractions for the different execution environments. Heterogeneity across hardware architectures, cluster and network configurations, parallel run-time environments and software stacks made it very difficult to develop applications that ran well at multiple sites. Consequently, methods and mechanisms were implemented to detect properties of the system an application was running on and set application parameters accordingly. Context awareness is not used consistently in the literature. We offer our own definition to avoid ambiguity. Our definition uses the fundamental building block of the executable representation of an application: the operating system (OS) \textit{process}. An HPC application consists of many, potentially communicating processes. Their composition and properties change throughout the application's life-/run-time. Together with the related terms, \textit{self awareness} and \textit{location awareness}, our working definition of context awareness is as follows:\smallskip{} \noindent{}\textbf{Self Awareness:} An application is \textbf{logically} self aware if it collects information about its application-level structure, properties, and data with the aim of using this information to control and optimize its internal processing workflows, algorithms, etc. An application is \textbf{physically} self aware if it collects information about its OS process structure and properties.\smallskip{} \noindent{}\textbf{Location Awareness:} An application is location aware if it has a model to \textit{understand} the spatial mapping of its processes within the HPC platform.\smallskip{} \noindent{}\textbf{Context Awareness:} An application is context aware if it is location aware and has an \textit{understanding} of the properties of the executing platform and can correlate these with its own properties. 
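The location awareness notion defined above can be made concrete with a minimal sketch. The snippet below uses entirely hypothetical data (made-up PIDs and node names, not a \textsc{Seastar}{} interface): an application that holds a mapping of its OS processes to cluster nodes can already answer simple spatial questions, such as whether two of its processes are co-located.

```python
# Hypothetical spatial model held by a location-aware application:
# a mapping from the application's process IDs to the cluster nodes
# on which those processes run.
process_location = {
    4711: 'node0',
    4712: 'node0',
    4713: 'node1',
}

def colocated(pid_a, pid_b):
    # Two processes are co-located if they run on the same node.
    return process_location[pid_a] == process_location[pid_b]

def processes_on(node):
    # All of the application's processes placed on a given node.
    return sorted(pid for pid, n in process_location.items() if n == node)
```

With this model the application can, for instance, prefer shared-memory communication between `4711` and `4712` (same node) and message passing towards `4713`.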
\subsection{HPC System Monitoring} System monitoring is at the heart of most HPC systems. It allows system administrators to have a high-level overview of the entire system and to identify potential issues and bottlenecks. A problem with system monitoring in HPC is that it is often considered an administrative tool and not exposed to users and applications. One of the most widely used monitoring systems is Ganglia~\cite{massie2004ganglia}, a client-server system that extracts telemetry data from node operating systems and hypervisors. While data in Ganglia is internally represented in XML, it is normally available only as pre-rendered graphs rather than programmatically. Ganglia does not have the notion of an application, which makes it difficult to correlate application behavior with observed metrics. New monitoring systems and tools have evolved in the context of cloud computing. Naturally, cloud resources are treated as ephemeral and their performance can fluctuate due to both internal and external factors. Hence, system monitoring has emerged as an important pillar for cloud applications and infrastructure. Important tools in this area are Amazon AWS CloudWatch~\cite{cloudwatch2006online} and Prometheus~\cite{prometheus2016online}. As opposed to the monitoring systems found on HPC platforms, these systems provide extensive APIs that can easily be consumed by applications and other system services. However, neither of the two systems captures the structure of the underlying platform. \section{Challenges and Motivation} The approaches to telemetry data management are as diverse as its application areas. From this diversity arise a number of challenges towards a comprehensive, unified framework for telemetry data management in HPC environments. In this section we list the ones we consider most important along with a specific use-case that has motivated our research in this area. 
\subsection{Challenges} From the application areas and use-cases we have identified a set of challenges and shortcomings related to the management of telemetry data:\smallskip{} \noindent{}\textbf{Data Access:} Applications access operating system facilities, such as the Linux \texttt{/proc} file-system, and sometimes higher-level interfaces to extract telemetry data. None of these interfaces are entirely consistent across platforms and operating systems, which introduces portability issues. In addition, many of the interfaces are relatively low-level, which can pose additional hurdles in the development process. \smallskip{} \noindent{}\textbf{Historical Data:} Existing operating system interfaces only provide \textit{ad hoc} data. If HPC applications require historical telemetry data, e.g., to analyze previous or similar runs, they need to collect and store this data themselves. \smallskip{} \noindent{}\textbf{Data Contextualization:} Just looking at telemetry data in isolation is not sufficient to understand the behavior of an application or system. The data needs to be interpreted in its context. Application performance data, like network and filesystem I/O, can only be interpreted if we have an understanding of the properties of the underlying hardware and software stack, as well as an understanding of the other actors sharing the same resources. Similarly, the more information that is made available about the running applications, the better the interpretation of the behavior of hardware and system services.\smallskip{} \noindent{}\textbf{Data Correlation:} It is often not feasible to collect all telemetry data that is necessary to contextualize a set of metrics in the same context. Some metrics can only be collected in an application context, others might only be accessible through a system service. In order to correlate data that is generated by different, independent entities, a common spatial and temporal reference system is required. 
In order to correlate, for example, the I/O throughput of a specific operating-system thread with the status and load of a distributed file-system partition, information about the locality of the thread is required.\smallskip{} \noindent{}\textbf{Data Analysis:} The volume of telemetry data can quickly become very large at scale. This makes it difficult to analyze, especially on the application side. For example, trying to find suspicious I/O patterns in an application running across 10,000 processes is not a trivial endeavor. None of the analyzed systems provide or can make use of analytics facilities that would allow them to derive high-level signals from a high-volume stream of complex input data.\smallskip{} \subsection{Motivating Use-Case} We use the RADICAL-Pilot~\cite{merzky2015radical} pilot job system to develop bioinformatics workflows. Many of these workflows spawn large numbers of short-running processes that can exhibit highly irregular I/O and computation patterns. Confined to the static resources allocated by HPC schedulers, we use pilot jobs to (re-)schedule workflow tasks based on their actual behavior and communication requirements. Furthermore, we want to circumvent system issues like filesystem I/O and network bottlenecks, which seem to occur with surprisingly consistent frequency due to other applications running in the same vicinity. Lastly, we want to capture and catalog the execution trajectories and properties of all our workflows to be able to make predictions about the behavior of similar workloads. While RADICAL-Pilot provides effective mechanisms to run many jobs within a single HPC queueing system job, it does not provide any convenient mechanisms to collect the telemetry data required. We explored multiple ways to collect this data as part of the application logic. 
The overhead and inefficiency encountered in the process, especially at larger scales, required us to take a step back and think about what would be required to support applications like ours. \textsc{Seastar}{} is the direct outcome of this. \section{Seastar Model} \label{sec:model} To provide a generic model to capture telemetry data on an HPC platform, we define a set of requirements from which we then derive the graph-based \textsc{Seastar}{} model. The overarching goal is not to introduce yet another platform- or application-specific framework orthogonal to already existing approaches. Instead, we strive to develop a generic framework that is (a) agnostic, i.e., applicable to a broad set of HPC applications and platforms, and can (b) incorporate existing data sources and put them into a common context. We define the following requirements: \begin{enumerate} [itemsep=0mm] \item{The model must capture the physical representation (the anatomy) of an application, i.e., its processes, threads, and the interdependencies between them.} \item{The model must capture the layout (anatomy) of the platform, i.e., its hardware components, and the interdependencies between them.} \item{The model must capture the mapping between the application and the platform anatomies, i.e., the physical application representation \textit{within} its platform context.} \item{Different actors are interested in different aspects of the system. The model must support structure and data at an arbitrary level of detail.} \item{Depending on the use-case, current (live) and/or previous (historic) data might be required. The model must capture both.} \end{enumerate} \noindent HPC applications span a wide range of categories, ranging from tightly-coupled parallel applications to distributed workflows and service-oriented architectures. Each class of application has its own internal logical representation, concepts and building blocks. 
The only commonality that exists across all applications is that once they run, they have the same physical representation. The physical representation of applications and platforms, i.e., their anatomies, serves as the starting point for our model definition. For the application anatomy, we assume a time-variant network of communicating processes. Each process and communication link can be split up into hierarchical networks of sub-components. We make an analogous assumption for the platform anatomy. We make the following assumptions for the \textsc{Seastar}{} model: \begin{enumerate} [itemsep=0mm] \item{The physical anatomy of an application can be described as nested, hierarchical networks of connected entities.} \item{The physical anatomy of an application can change during its lifetime.} \item{The anatomy of an HPC platform can also be described as nested, hierarchical networks of connected entities.} \item{The anatomy of an HPC platform can change during its lifetime.} \item{The context of an application is defined as its locality within an HPC platform, i.e., the mapping of an application anatomy to a platform anatomy.} \item{The context of an application can change during its lifetime.} \end{enumerate} \noindent Based on these assumptions, we define a graph-based representation of applications and platforms. It consists of multi-layer, directed \textit{anatomy graphs} that represent applications and platforms. Vertices and edges of anatomy graphs can hold an arbitrary number of time-series attributes that represent observed telemetry data. A mapping of the application anatomy graphs to a platform graph, called the \textit{context graph}, represents the time-variant localities of applications within a platform (\cref{fig:anatomy_graphs}). \subsection{Anatomy Graphs} Anatomy graphs capture the changing anatomies of applications ($AAG$) and the HPC platform ($PAG$). 
They are the foundation for the context graph, which captures the mapping between $AAG$s and the $PAG$. Anatomy graphs are nested directed graphs which represent application components (vertices) and the connections between them (edges). Each vertex and edge can have an arbitrary number of attributes, each representing a time series of data associated with it. Vertices can have pointers to a nested graph that represents their component at a finer level of granularity. Nesting is strictly hierarchical: edges can only connect vertices within the same (sub-)graph. Connecting vertices of subgraphs with different parent vertices is not allowed, even if the subgraphs are at the same hierarchy depth. Anatomy graphs can be conveniently written as typed and attributed \textit{E-Graphs}~\cite{ehrig2004fundamental}: \begin{equation*} AG = (V_{g}, V_{d}, E_{g}, E_{na}, E_{ea}, (source_i, target_i)_{i=1,2,3}), \end{equation*} \noindent with graph nodes $V_{g}$ and data nodes $V_{d}$, graph edges $E_{g}$, node attribute edges $E_{na}$, and edge attribute edges $E_{ea}$, and source and target functions: \begin{equation*} \begin{split} source_1 : E_{g} \rightarrow V_{g},\ source_2 : E_{na} \rightarrow V_{g},\ source_3 : E_{ea} \rightarrow E_{g} \\ target_1 : E_{g} \rightarrow V_{g},\ target_2 : E_{na} \rightarrow V_{d},\ target_3 : E_{ea} \rightarrow V_{d} \end{split} \end{equation*} \noindent We amend the \textit{E-Graphs} definition in~\cite{ehrig2004fundamental} so that data nodes ($V_{d}$) can be a pointer to another (nested) anatomy graph $AG_{n}$. To capture the potential changes in application and platform anatomy over time, $AG$ is time-dependent: \begin{equation*} \begin{split} AAG(t) = \\ (V_{g}(t), V_{d}(t), E_{g}(t), E_{na}(t), E_{ea}(t), (source_i, target_i)_{i=1,2,3}) \end{split} \end{equation*} \noindent Figure~\ref{fig:anatomy_graphs} shows an example of application and platform anatomy graphs.
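The nested E-graph structure described above can be sketched in code. The following Python sketch is purely illustrative: the class names and the invariant check are our own, not part of the \textsc{Seastar}{} implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Vertex:
    """A graph node (V_g) with time-series attributes and an optional nested graph."""
    name: str
    attributes: dict = field(default_factory=dict)   # metric name -> [(timestamp, value), ...]
    subgraph: "AnatomyGraph | None" = None           # pointer to a finer-grained anatomy graph

@dataclass
class Edge:
    """A graph edge (E_g) connecting two vertices of the same (sub-)graph."""
    source: Vertex
    target: Vertex
    attributes: dict = field(default_factory=dict)

@dataclass
class AnatomyGraph:
    vertices: list = field(default_factory=list)
    edges: list = field(default_factory=list)

    def add_edge(self, source, target):
        # Enforce strictly hierarchical nesting: edges may only connect
        # vertices that belong to this same (sub-)graph.
        if source not in self.vertices or target not in self.vertices:
            raise ValueError("edges must stay within one (sub-)graph")
        edge = Edge(source, target)
        self.edges.append(edge)
        return edge

# A job-level application anatomy graph with two process vertices.
job = AnatomyGraph()
p0, p1 = Vertex("process-0"), Vertex("process-1")
job.vertices += [p0, p1]
link = job.add_edge(p0, p1)
p0.attributes["memory_used"] = [(1491830507, 512.0)]
```

The guard in \texttt{add\_edge} encodes the strict-hierarchy rule stated above: an edge whose endpoints live in different (sub-)graphs is rejected.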
Anatomy graphs allow us to capture a complete picture of the changing structures of applications and the HPC platform. By changing the time parameter $t$ for an $AAG(t)$, we can ``navigate'' back and forth in the evolution of an application from beginning (startup) to end (termination). The ability to track the anatomy of an evolving application is very important for the post mortem and ad-hoc analysis and optimization of dynamic applications and task scheduling frameworks. \subsection{Context Graph} Context graphs (\cref{fig:anatomy_graphs} r.) capture the time-varying relationship between a platform anatomy graph and application graphs. The locality of all applications $AG_{App_1..App_n}(t)$ within the platform $AG_{P}$ is captured through a fixed mapping function ($\bullet$). We define the resulting graph as the \textit{global context graph} ($CG_{Global}$) (see Figure \ref{fig:anatomy_graphs} c.): $$ CG_{Global}(App, P, t) = AG_{P} \bullet AG_{App_1..App_n}(t) $$ \noindent Additionally, we define application-specific context graphs ($CG_{App_n}$) as sub-graphs of $CG_{Global}$: $$ CG_{App_1}(App_1, P, t) = AG_{P} \bullet AG_{App_1}(t) $$ \noindent This spatio-temporal representation creates a set of graph structures in which the individual components and their mappings can be attributed with context information. We can think of the vertices of an application graph ($V_{App}$) as the operating system processes comprising an application and of the platform graph vertices ($V_{P}$) as the physical or virtual nodes of an HPC cluster. The edges can then represent communication between processes ($E_{App}$) and network links between nodes ($E_{P}$), respectively. \begin{figure}[t] \includegraphics[width=\columnwidth]{model_graphs} \caption{A context graph maps the spatial-temporal application anatomy graphs to the spatial-temporal platform graphs.
Each instance of a context graph captures the structure and properties of applications and platforms at a given instant.} \vspace{-1.0em} \label{fig:anatomy_graphs} \end{figure} \subsection{Time-Series Data} Telemetry data, e.g., operating system metrics, is captured as time-series data and attached to the node and edge attributes of the graphs. Currently, the \textsc{Seastar}{} model does not make assumptions about this data. Timestamps are set by the entity collecting the data. On an implementation level, this assumes that all HPC platform components (nodes) use the same, synchronized timebase. \section{Seastar API} \label{sec:api} \textsc{Seastar}{} provides the structure to capture telemetry data in a graph-based model. The \textsc{Seastar}{} API allows applications, platform services and human actors to explore and interact with this model. The API uses a RESTful representation and the JSON format to describe return objects. The return object structure is that of an attributed graph node or edge. From each node, the hierarchical graph can be traversed via \texttt{parent\_node}, \texttt{child\_nodes}, and \texttt{sibling\_nodes}. A \texttt{timestamp} field positions the object in temporal space. Attributes describing edge connections between siblings, e.g., the communication between two MPI processes, follow the same pattern. \begin{lstlisting}[language=json, caption={JSON resource object structure}] { timestamp: 1491830507, parent_node: { job: <id> }, child_nodes: { threads: [] }, sibling_nodes: { processes: [] }, attributes: { m1: [], m2: [], ... } } \end{lstlisting} \noindent The current iteration of the API defines only a subset of possible resource types but it can easily be extended to additional types and hierarchies. For application graphs, \texttt{job}, \texttt{process}, and \texttt{thread} are defined. For the platform graph, \texttt{node}, \texttt{processor}, and \texttt{core} are defined.
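As an illustration of how a client can traverse these return objects (the ids and the metric samples below are made up for the example), traversal helpers only need to rely on the common object pattern, not on the concrete resource type:

```python
import json

# Example resource object following the structure above; ids and values are illustrative.
PROCESS = json.loads("""
{
  "timestamp": 1491830507,
  "parent_node": {"job": 7},
  "child_nodes": {"threads": [11, 12]},
  "sibling_nodes": {"processes": [2, 3]},
  "attributes": {"m1": [[1491830507, 512.0], [1491830517, 640.0]]}
}
""")

def sibling_ids(resource):
    """Return the sibling ids regardless of the concrete resource type."""
    # Every resource object has exactly one sibling collection, e.g.
    # {"processes": [...]} for a process or {"threads": [...]} for a thread.
    (_, ids), = resource["sibling_nodes"].items()
    return ids

def latest_sample(resource, metric):
    """Return the most recent [timestamp, value] pair of a time-series attribute."""
    return max(resource["attributes"][metric], key=lambda sample: sample[0])
```

Because the pattern is uniform across \texttt{job}, \texttt{process}, \texttt{thread}, \texttt{node}, \texttt{processor}, and \texttt{core} objects, the same helpers work on both application and platform resources.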
\subsection{Model Queries} The API uses \textit{GraphQL}~\cite{graphql2017online} as the query language to the context graph hierarchies. GraphQL allows the caller to extract complex structures from the model in a single API call. \begin{lstlisting}[language=json, caption={Get memory consumption of all sibling processes of a job via a GraphQL query.}] { process(id: 1) { siblings { processes { memory_uses } } } } \end{lstlisting} \subsection{Context Awareness} Context awareness requires self awareness and location awareness. Self awareness can be established via the special \texttt{self} path element. In the current iteration of the API it can be called on a job, process, or thread resource and returns the appropriate object for the application from which it was called. \begin{lstlisting}[language=bash, caption={Self awareness via \texttt{self}.}] GET /job/self GET /process/self GET /thread/self \end{lstlisting} \noindent Location awareness is realized via the special \texttt{context} path element. It allows the caller to follow the context mapping from the platform graph to the application graph(s) and vice versa: \begin{lstlisting}[language=bash, caption={Location awareness via \texttt{context}.}] GET /thread/self/context # on application GET /node/42/context # on platform \end{lstlisting} \noindent Accessing \texttt{context} from a thread, for example, will return a processor core object; accessing it from a core will return a list of thread objects, and so on. Combined, the use of \texttt{parent}, \texttt{self}, and \texttt{context} allows for comprehensive context awareness and exploration. \subsection{Derived Metrics} Derived metrics are a core concept of the API as they allow users to define high-level metrics relevant to a specific use-case, user group, experiment, etc. Derived metrics are generally applied to the telemetry data on the framework side, i.e., within the \textsc{Seastar}{} service. This allows developers to push complexity out of their applications.
For example, an I/O-sensitive application might want to terminate or reconfigure if the overall I/O throughput is below a certain threshold. Instead of periodically querying the I/O metrics for all processes comprising an application, it is possible to register a derived metric ``I/O Threshold''. \begin{lstlisting}[language=json, caption={Adding a derived metric on job-level.}] PUT /dmetrics data { metric_name: "i_o_threshold", scope: "job", function: "..." } \end{lstlisting} Once a metric is registered, it is available via the \texttt{metrics} section of the resource object(s) defined in \texttt{scope}. Currently the API does not come with its own language to define the custom metric \texttt{function}. It simply uses the query language of the backend system. For the implementation blueprint explained in more detail in the next section, this is the functional expression language used by the Prometheus time-series database. \subsection{Notifications} Together with derived metrics, notifications are another key concept to address the endemic pull-based data gathering process found in many applications. The notification API allows the caller to subscribe to one or more metrics via a callback mechanism. Whenever the metric changes (beyond a defined threshold), the callback is engaged. Notifications are user-defined HTTP callbacks, so-called webhooks. When a new notification is available, the \textsc{Seastar}{} API server makes an HTTP request to the client URI configured for the webhook. \begin{lstlisting}[language=json, caption={Registering a notification callback on job-level.}] PUT /callbacks data { callback_uri: "http://host/path...", scope: "job", metric: "i_o_threshold", } \end{lstlisting} \section{Implementation Blueprint} \label{sec:implementation} \begin{figure*} \includegraphics[width=\textwidth]{implementation} \caption{(Left) The \textsc{Seastar}{} implementation architecture: model databases, data sensors and API services are connected via Kafka.
(Right) The API service (\texttt{seastar\_apid}) is implemented as a multi-level, partitioned caching architecture to minimize telemetry data traffic on the platform. Frontend instances provide the API to the consumers via a local cache which is populated with data relevant to the instance's partition.} \vspace{-1.0em} \label{fig:impl_arch} \end{figure*} \textsc{Seastar}{} tries to be agnostic of applications and platform architectures and hence does not make many assumptions about how it should be implemented. In this section, we discuss the \textit{blueprint} for one possible implementation of \textsc{Seastar}{} within an existing HPC cluster. This blueprint has its origin in the \textsc{Seastar}{} research prototype~\cite{seastar2016online} we have been building to explore various concepts around the API. In lieu of an actual HPC cluster, we use our experimental environment \textit{Elasticluster}~\cite{elasticluster2016online} to start up an on-demand SLURM-based Linux cluster in the AWS Cloud. This allows us to experiment in isolation, and also to dynamically change the scale of the cluster. Our implementation of \textsc{Seastar}{} is mostly based on existing technology, not only to minimize the implementation overhead, but also because there is a plethora of open-source tools available that provide subsets of the required functionality at a level of maturity and scalability that would be otherwise impossible to accomplish. The implementation architecture (\cref{fig:impl_arch} l.) consists of four main components: the model server, which holds a persistent copy of the context graph and metrics; the API server, which provides the \textsc{Seastar}{} API; the data sensors, which collect OS- and cluster-level metrics; and the data backbone, which provides a high-throughput, scalable, and buffered data transport mechanism. \subsection{Model Database} The implementation of the \textsc{Seastar}{} model is split across two different databases.
A graph-database contains the context, i.e., the spatial-temporal layout of applications and platform. A second database, specialized in storing and serving large volumes of time-series data, efficiently stores the telemetry data. The node and edge attributes in the graph-database representing the telemetry data are pointers to the respective entries in the time-series database. This distinction is not visible in the \textsc{Seastar}{} API, where structure and data appear consistent again. \subsubsection{Context Graph Database} \label{sec:impl-context-db} To store the time-variant context graph, we use OrientDB, an open-source multi-model, NoSQL database management system written in Java (\cref{fig:impl_arch} l. - A). It supports graph, document, key/value, and object models, with all relationships managed with direct connections between records. \subsubsection{Time-Series Database} \label{sec:impl-time-series-db} For the time-series database we have chosen Prometheus, an open-source monitoring system and time-series database (\cref{fig:impl_arch} l. - B). Prometheus can store and process time-series data very efficiently. It has a built-in functional expression language that lets the user select and aggregate time-series data in real time. Furthermore, it has an \textit{Alertmanager} component which can trigger notifications based on predefined queries. This allows for a straight-forward implementation of the derived metrics and notification functionality of the \textsc{Seastar}{} API. \subsection{Data Transport} \label{sec:impl-data-transport} We use Apache Kafka, an open-source stream-processing platform, as the data transport layer (\cref{fig:impl_arch} l. - C). Kafka provides a publish-subscribe-based, unified, high-throughput, low-latency platform for handling real-time data feeds. Kafka makes extensive use of memory channels, and uses disks as buffers if communication channels are congested or streaming targets are temporarily not available.
This feature adds the necessary resilience to a distributed system like \textsc{Seastar}{}. Kafka can furthermore be scaled out easily by adding additional nodes. Kafka is responsible for streaming data in two directions: from the graph- and time-series databases to the local API services on the individual cluster nodes (\cref{fig:impl_arch} l. - D) and from the data sensors to the graph- and time-series databases (\cref{fig:impl_arch} l. - E). \subsection{Data Sensors} \label{sec:impl-data-sensors} Data sensors need to capture both telemetry data and the data that is required to maintain the global context graph, i.e., the relationship between platform and application. They consist of two components: the \texttt{node\_exporter} and the \texttt{context\_exporter}. The \texttt{node\_exporter} (\cref{fig:impl_arch} l. - F) is part of the Prometheus ecosystem and exports operating-system metrics to the Prometheus server. The \texttt{context\_exporter} gathers process, job and queueing system information and sends them to the model database server (\cref{fig:impl_arch} l. - G). \subsection{API Service} \label{sec:impl-api} The API service \texttt{seastar\_apid} (\cref{fig:impl_arch} l. - H) is implemented as a partitioned caching architecture to minimize network traffic (\cref{fig:impl_arch} r.). The service can be instantiated in three different modes: \textit{master-mode}, \textit{forwarder-mode} and \textit{frontend-mode}. The frontend instances provide the \textsc{Seastar}{} API described in \Cref{sec:api}. Frontend instances do not have a direct connection to the database, but they maintain a local data cache which is fed either by an upstream master instance (2-tier setup) or a forwarder instance (n-tier setup). If a frontend or forwarder instance cannot serve an API request (cache miss), it sends a request to its upstream service to provide the missing data set.
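The cache-miss path can be sketched as follows; the two classes are illustrative stand-ins for \texttt{seastar\_apid} instances and an in-memory database, not the actual implementation:

```python
class Master:
    """Stand-in for a master instance with a direct database connection."""
    def __init__(self, database):
        self.database = database
        self.requests = 0  # counts upstream traffic, for illustration only

    def get(self, path):
        self.requests += 1
        return self.database[path]

class Frontend:
    """Sketch of a frontend (or forwarder) instance with a partition-local cache."""
    def __init__(self, upstream):
        self.upstream = upstream
        self.cache = {}

    def get(self, path):
        if path not in self.cache:
            # Cache miss: request the missing data set from the upstream
            # service and keep it for subsequent requests in this partition.
            self.cache[path] = self.upstream.get(path)
        return self.cache[path]

# A 2-tier setup: one master, one frontend serving its partition.
master = Master({"/node/42/context": ["thread-1", "thread-2"]})
frontend = Frontend(upstream=master)
```

A second request for the same path is then served from the partition-local cache, so the upstream master sees only one request; an n-tier setup would simply chain another \texttt{Frontend}-like forwarder between the two.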
\texttt{seastar\_apid} is implemented in Python and uses Python's \textit{Flask} HTTP framework. A Python API wrapper provides a more convenient, programmatic client access to the API service. In particular, the well-defined data types free the user from the burden of parsing JSON return values by hand. \begin{lstlisting}[language=Python, caption={Python API client}] from seastar import PlatformAPI p = PlatformAPI(endpoint='localhost') rObj = p.self.context.parent print(rObj.kind) # dhcp.type_cpu print(rObj.metrics) # ['memory_total', ... ] rObj.register_callback(cb_func, ...) \end{lstlisting} \noindent The Python API wrapper is only one example of a language-specific wrapper for the API. Any language for which an HTTP client library exists can interface with the \textsc{Seastar}{} service endpoints. Programming language independence and the use of standard, well documented protocols fosters adoption of \textsc{Seastar}{} across many different application communities. \section{Conclusion and Future Work} \label{sec:conclusion} In this paper we have picked up the telemetry data management challenge which we identified in our previous work~\cite{weidner2016rethinking} as one of the current challenges in today's HPC ecosystems. We have outlined a solution, \textsc{Seastar}{}, that provides a conceptual framework and coherent programming interface for the provisioning and integration of telemetry data on HPC platforms. We have furthermore sketched out how such a system can be implemented and integrated with existing HPC platforms. A first prototype implementation of the model database and API service shows the potential to simplify application development significantly. However, further investigation, specifically a larger real-world use-case study, still needs to be conducted. The work presented in this paper is exploratory and the focus has been on finding the right concepts and abstractions.
Future work will focus on the evaluation of \textsc{Seastar}{} and the implementation of application use cases.\smallskip{} \noindent{}\textbf{In-Depth Evaluation:} we will evaluate \textsc{Seastar}{} along two axes: applicability at scale and applicability across different systems. This will include extensive performance measurements of the suggested architecture blueprint. The implementation of an adaptive user-level scheduling framework based on \textsc{Seastar}{} as a driving application use-case is already under development.\smallskip{} \noindent{}\textbf{Distributed Systems:} many distributed applications strive to run not just on a single HPC platform but to spread their workload and components across multiple platforms concurrently. We will extend the \textsc{Seastar}{} model to distributed systems and explore architectural alternatives for a distributed implementation.\smallskip{} \noindent{}\textbf{Extreme Scales and Big (Telemetry) Data:} derived metrics are one of the important concepts in \textsc{Seastar}{} to provide telemetry data to multiple different audiences at different levels of abstraction. While easy enough to manage at small scale, at large scales processing derived metrics in real time would require a significant amount of computational resources. \section{Acknowledgments} This research was supported by an \textit{AWS in Education Research} grant from Amazon Web Services, Inc. \bibliographystyle{abbrv}
\section{Introduction} \label{sec:intro} The development of the Internet of Things (IoT) and wearables contributed to the progress of a wide range of quantified-self applications related to activity recognition. In this context, motion data extracted from connected devices (e.g., smartwatches) equipped with motion sensors (e.g., accelerometer and gyroscope) are sent to a central server which processes these data through machine learning models. According to the considered application, these learning models can classify which activity is performed by the user or compute other information such as the number of steps or burned calories. While quantitative analysis of daily activities can bring benefits from a health perspective~\cite{health1,health2}, transferring all this data to a third-party server raises important privacy concerns. Indeed, data breaches, compromised servers or any unwanted exploitation of the data expose users to personal and sensitive information leakage such as health-related attributes~\cite{privacyhealth}. To mitigate this risk, a Federated Learning (FL) architecture (also known as collaborative learning) has been proposed~\cite{mcmahan}. In this scheme, the personal data of the user stays locally on their device and only a learning model is exchanged with the server. Iteratively, the server sends a model to the devices, where this model is trained and refined with each device's local data. Model updates are sent back to the server, which aggregates them to maintain a global learning model that is then disseminated back to the devices. While this iterative process works well when data across devices follow a similar distribution, data heterogeneity can severely degrade the performance of standard federated averaging, especially for atypical users. Indeed, one unique model cannot cope with the heterogeneity of data and provide the best utility for all users~\cite{dysan}.
To address this data heterogeneity, several local adaptation schemes have been proposed, such as fine-tuning of personalized layers, multi-task learning, and knowledge distillation~\cite{fedper,impact2}, which benefit all participants in terms of accuracy. While FL improves privacy by reducing the exposure of personal data, it remains vulnerable to threats. For instance, FL is not robust to model poisoning, which aims to destroy the convergence of the central model~\cite{blanchard,bernstein}. Privacy leakages may also occur through membership inference attacks~\cite{membership}, which consist of inferring the presence of an individual data record in the training data, or attribute inference attacks, where the adversary is able to infer sensitive information about individuals~\cite{attribute}. The adversary can be passive~\cite{passiveactive1} (i.e., only observing exchanges) or active~\cite{passiveactive2} (i.e., modifying the protocol), and can control users or the server. To mitigate these risks, several approaches have been proposed, ranging from using Differential Privacy locally at the user level or at the server level~\cite{naseri} to Homomorphic Encryption (HE) and Secure Multiparty Computation (SMC)~\cite{hc_smc}. In this paper, we quantify the utility and privacy of an FL scheme using private personalized layers~\cite{fedper}. In such a scheme, only the lower layers of the model (capturing coarse-grain information) are exchanged with the server, while the upper layers of the model (capturing fine-grain information) are personalized and kept private on each user's device. This scheme is known to improve the accuracy of the model in the presence of heterogeneous data across users. However, the privacy impact of sharing only a subset of the model has never been measured. To assess the privacy leakage of this scheme, we consider both an attribute and a membership inference attack. Evaluations have been conducted using two datasets of motion sensor data collected in real-life conditions.
Results show that FL with personalized layers speeds up the convergence compared to vanilla FL and slightly increases the activity accuracy (between 1\% and 5\%), while decreasing gender and overweight inference accuracy by between 10\% and 20\%, and membership inference accuracy by 15\% on average. This utility and privacy trade-off is better than that of a defense scheme using local differential privacy, which decreases the inference of gender and overweight by up to 12\%, but at the cost of reducing the activity accuracy by up to 10\%. These results tend to show that minimizing the information exchanged with the server is an interesting avenue for both personalizing the model (i.e., improving accuracy) and limiting potential inferences (i.e., improving privacy). The outline of the paper is as follows. First, background and related work are described in Section~\ref{sec:background}. The exhaustive evaluation is then presented in Section~\ref{sec-eval} before concluding in Section~\ref{ending}. \section{Background and Related Work} \label{sec:background} In this section, we review background and related work on FL (Section~\ref{sec-fl}) and on inference attacks and mitigation schemes (Section~\ref{sec-inference-defense}). \subsection{Federated Learning} \label{sec-fl} Deep Neural Networks are now the most effective algorithms for many machine learning tasks. A new paradigm, named Federated Learning (FL), has been proposed in which Machine Learning (ML) models are trained directly on user devices. In an FL scheme following~\cite{mcmahan}, participants keep their data locally on their device and exchange a model -- targeting a specific learning task -- with a server. The main objective of this algorithm is to iteratively train a learning model \textit{M} maintained by the server by aggregating the versions of this model trained locally on each participant.
At each learning round \textit{i}, each client \textit{k} trains its local model \textit{$m_k$} with its own data using Stochastic Gradient Descent (SGD) during several iterations \textit{j}. In its synchronous version, once all the participants have sent their model update \textit{m} to the server, the server aggregates all these model updates using the following equation before disseminating the aggregated model back to all devices: $$ M_{i+1} = \sum_{c=1}^{C} \frac{n_c}{n} m_c^{i+1},$$ with $n_c$ the number of data points on client $c$, $n$ the total number of data points, and $m_c^{i+1}$ the local update of a client $c$, calculated with the following equation: $$ m_c^{i+1} = m_c^{i} - \eta g_c^i ,$$ with $\eta$ a fixed learning rate (i.e., a hyperparameter which controls the step size of the optimization), identical for each client, and $g_c^i$ the average gradient on the local data of the client $c$ at epoch $i$. These learning rounds continue until the convergence of the central model. \begin{figure}[!h] \centering \includegraphics[width=6.5cm]{fl_mlsp.png} \caption{Personalized FL approach: only the lower layers (colored in grey) are shared with the server while the upper personalization layers are kept private on the device.\label{fig:fl_flow}} \end{figure} The development of FL highlighted other challenges~\cite{review_fl}, such as the heterogeneity of data across user devices leading to degraded accuracy for less represented users. To overcome this limitation, \cite{fedper} studied an FL scheme using personalization layers. In this scheme, the local model on each participant is composed of \textit{lower layers}, trained following the classical FL learning rounds, and \textit{upper, personalization} layers, which are trained locally, stay private on the device, and are not exchanged with the server (Figure~\ref{fig:fl_flow}). \subsection{Inference Attacks and Defenses} \label{sec-inference-defense} Privacy concerns are another main challenge of FL.
Even though the raw data is not shared but kept on each local device, the model updates exchanged between participants and the server can leak sensitive information. Specifically, two types of attack can be considered. Poisoning attacks from malicious users aim at preventing the convergence of the learning model~\cite{blanchard}, or at implanting a backdoor to control its behaviour~\cite{backdoor}. Inference attacks, in turn, attempt to infer sensitive information about the users through the model updates exchanged during the training process. These attacks can be conducted by participants or by the server. In addition, they can be passive or active. For instance, a malicious server can infer sensitive information based on the locally updated parameters $w^i$ sent by each user. To increase its observations, the server can send users a fudged model in order to amplify the potential inference of sensitive information~\cite{passiveactive2}. There are mainly two types of inference attacks: the attribute and the membership inference attack. An attribute inference attack consists of inferring sensitive information about the user~\cite{attribute}, while a membership inference attack consists of determining whether a data record has been used for the training of a specific model~\cite{membership}. To apply these attacks to an FL scheme, a supervised classification model (i.e., a Random Forest) is trained to infer the sensitive information from the shared model parameters given as input. To prevent these attacks, several approaches have been considered. Homomorphic Encryption (HE) and Secure Multiparty Computation (SMC)~\cite{hc_smc} approaches have been adapted to FL but are difficult to apply at scale due to their overhead. Local Differential Privacy (LDP)~\cite{naseri}, in turn, consists of introducing a random perturbation on SGD during the learning phase on the user device.
The method provides statistical guarantees limiting the capability of an adversary to infer information about an individual's data. A differentially-private SGD can be written as: $$ w_{t+1} = w_t - \eta_t (\nabla l(w_t, x_t, y_t) + N_t) ,$$ with $\eta_t$ the learning rate, $w$ the parameter that minimizes the objective function $l$ for a data point ($x$,$y$) at time $t$, and $N_t$ a noise value that follows a Gaussian distribution. \section{Evaluation} \label{sec-eval} We exhaustively evaluate the utility and privacy of an FL scheme using private personalized layers in the context of activity recognition (details of the methodology are given in Section~\ref{settings}). In this section, we show that personalized layers improve the utility (Section~\ref{utility}) and privacy (evaluated through attribute inference in Section~\ref{privacy-attribute} and membership inference in Section~\ref{privacy-mia}) compared to both a vanilla FL and a defense scheme using local differential privacy. \subsection{Experimental setting} \label{settings} \textbf{System model:} We consider an FL scheme using Stochastic Gradient Descent (SGD) addressing activity recognition. The learning model is based on 2 convolutional layers and 3 fully connected layers. Only the 2 lower convolutional layers are exchanged with the server, which aggregates and disseminates model updates to the devices; the 2 upper fully connected layers stay private on the device and are personalized with the user data. User devices are considered trusted, but the server is not: it is considered an adversary trying to infer personal information about participants from their model updates. \setlist{nolistsep} \textbf{Datasets:} Two real-life condition datasets are used for the evaluation. They are both publicly available and heavily used in the literature. These datasets come from the extraction of motion sensor data during gait activities (\textit{i.e.}, based on step patterns) of different subjects.
\begin{itemize} \item \textbf{MotionSense}~\cite{motionsense} contains motion data captured from the accelerometer (\textit{i.e.}, acceleration and gravity) and gyroscope of an iPhone 6s kept in the front pocket at a frequency rate of 50Hz. Overall, six activities (\textit{i.e.}, walking, jogging, going upstairs, going downstairs, sitting and standing) have been performed by 24 users during 15 trials in the same conditions and environment. \item \textbf{MobiAct}~\cite{mobiact} records the motion data of 58 subjects during more than 2500 trials, all captured with a smartphone also kept in the front pocket. This dataset includes signals recorded from the accelerometer and gyroscope of a Samsung Galaxy S3 smartphone. Nine different activities of daily living are performed by the users. We only used the trials corresponding to the same activities as MotionSense in order to run the evaluation with the exact same settings. \end{itemize} Both datasets contain an equal number of men and women, and each activity is performed under the same conditions by all subjects. However, the walking activity is more represented than the others. For each user, we also have access to physical information (e.g., gender, weight, height and age). \textbf{Baselines:} We considered two baseline approaches against which to compare the FL scheme using private personalized layers (\textbf{FedPer})~\cite{fedper}: \begin{itemize} \item \textbf{Standard FL (Vanilla)}~\cite{mcmahan} This is the most common FL scheme, using SGD training on the device and average aggregation of all models at each learning round on the central server. \item \textbf{Local Differential Privacy (LDP)}~\cite{naseri} We consider an implementation based on the addition of noise following a Gaussian distribution ($\mathcal{N}(0,0.01)$) to the model updates computed through a classical learning phase (i.e., Vanilla).
\end{itemize} \textbf{Evaluation metrics:} We evaluated FedPer and the different baselines along both utility and privacy metrics. \begin{itemize} \item \textbf{Utility: } To measure the utility, we considered the accuracy of the predicted activity. More precisely, we produce a confusion matrix based on the output of the classifier and measure the number of correct predictions made by this classifier over all predictions made. The value of the accuracy ranges from $0$ to $1$, where $1$ corresponds to perfect accuracy. \item \textbf{Privacy: } To assess the level of privacy, we rely on the accuracy of both the inference of sensitive attributes and the inference of membership in the training set. These inference attacks implement the solution proposed by~\cite{attribute}, which leverages a permutation-invariant representation of the nodes at each layer to classify model updates received by the server through a random forest (RF) of 1000 trees with a maximum depth of 10. We consider the gender and the Body Mass Index (BMI) of the users as sensitive attributes. The BMI is defined as the weight of the user divided by the square of her height. This value allows categorizing a person as underweight, normal weight, overweight or obese. In our case we only focus on a binary classification, overweight (BMI $>$ 25) or not (BMI $<$ 25), for the sake of class balance. For the membership inference, the accuracy refers to the percentage of correct predictions (that a participant has been involved in the training of the model) over all predictions made. In both attacks, an accuracy of $0.5$ corresponds to a random guess, as our dataset is balanced. \end{itemize} \textbf{Implementation details:} For each experiment, we run 10 repetitions of 5-fold cross validation, where each fold is tested based on the training of the other four. 
We considered 200 learning rounds with early stopping: the learning process stops if the average test loss of the aggregated model, evaluated locally on the user data, does not decrease during 30 learning rounds. During each learning round, the training with SGD is done locally on each device for 10 epochs. A constant learning rate $\eta = 0.001$ is used for all the users. These parameters have been optimized independently for the tasks of activity recognition, gender inference and BMI inference. \subsection{Utility Evaluation} \label{utility} We measure the accuracy of the activity detection for FedPer and the baselines. Figure~\ref{fig:act_motionmobi} reports the Cumulative Distribution Function (CDF) of this accuracy over the population of users for the MotionSense and MobiAct datasets. First, results show that the local adaptation of FedPer on the upper layers slightly increases the accuracy compared to the Vanilla approach (an increase of 1\% and 7\% on average for MotionSense and MobiAct, respectively). Second, results show that the LDP baseline significantly degrades the accuracy for both datasets (by 10\% on average for MotionSense and 6\% on average for MobiAct). Indeed, by introducing noise, the convergence of the model is greatly degraded, leading to a loss of prediction accuracy for all users. This result confirms previous findings~\cite{impact1,impact2}. 
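The noisy update of the LDP baseline can be sketched as follows; this is a minimal illustration of the differentially private SGD rule recalled above, applied to a toy one-dimensional quadratic objective (the function and parameter names are ours, not from the evaluated implementation):

```python
import random

def dp_sgd_step(w, grad, eta, sigma, rng):
    """One noisy SGD step: w_{t+1} = w_t - eta * (grad + N_t),
    with N_t drawn from a Gaussian distribution N(0, sigma^2)."""
    noise = rng.gauss(0.0, sigma)
    return w - eta * (grad + noise)

# Toy objective l(w) = (w - 3)^2, whose gradient is 2 * (w - 3).
rng = random.Random(0)
w = 0.0
for _ in range(200):
    w = dp_sgd_step(w, 2.0 * (w - 3.0), eta=0.05, sigma=0.1, rng=rng)
# w ends up near the minimizer 3, perturbed by the injected noise;
# a larger sigma gives stronger privacy but slower, noisier convergence.
```

With $\sigma = 0$ the rule reduces to plain SGD, which is why the noise scale directly controls the accuracy loss observed above.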
\begin{figure}[!h] \begin{minipage}[b]{0.43\linewidth} \centering \centerline{\includegraphics[width=4.7cm]{motionsense/activity_motion-eps-converted-to.pdf}} \centerline{(a) MotionSense} \end{minipage} \hfill \begin{minipage}[b]{0.43\linewidth} \centering \centerline{\includegraphics[width=4.7cm]{mobiact/activity_mobi-eps-converted-to.pdf}} \centerline{(b) MobiAct} \end{minipage} \caption{By personalizing the upper layers of the model, FedPer slightly increases the accuracy of the activity prediction compared to the vanilla FL approach; local differential privacy, in turn, greatly degrades the accuracy.} \label{fig:act_motionmobi} \end{figure} We also measured the convergence speed of the learning. Figure~\ref{fig:act_perep} depicts the accuracy of the activity detection as a function of learning rounds for FedPer and the Vanilla approach. Results show that FedPer drastically speeds up the convergence. For instance, FedPer achieves 90\% accuracy after 12 learning rounds on MotionSense, whereas the Vanilla approach needs 100 learning rounds to reach the same accuracy. For MobiAct, FedPer achieves 90\% accuracy after 35 learning rounds, whereas the Vanilla approach only reaches 86\% accuracy after 200 learning rounds. By starting each learning round from its personalized layers instead of the aggregate model sent by the server, the accuracy increases faster. For LDP, we observe that the introduced noise prevents the model from converging. 
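This faster convergence follows from which weights are exchanged. A minimal sketch of one FedPer round, with weights represented as flat lists (the \texttt{shared}/\texttt{private} split mirrors the lower convolutional and upper fully connected layers; the names and shapes are illustrative, not taken from~\cite{fedper}):

```python
def federated_average(updates):
    """Server-side step: coordinate-wise average of same-shaped weight lists."""
    n = len(updates)
    return [sum(ws) / n for ws in zip(*updates)]

def fedper_round(users):
    """One FedPer round: users exchange only their 'shared' layers;
    the personalized 'private' layers never leave the device."""
    aggregated = federated_average([u["shared"] for u in users])
    for u in users:
        u["shared"] = list(aggregated)  # everyone adopts the global lower layers
        # u["private"] is untouched: it stays personalized on the device
    return users

users = [
    {"shared": [1.0, 2.0], "private": [10.0]},
    {"shared": [3.0, 4.0], "private": [20.0]},
]
fedper_round(users)
# shared layers are now the average [2.0, 3.0]; private layers are unchanged
```

In the Vanilla scheme, by contrast, every layer would pass through \texttt{federated\_average}, so each round overwrites the locally adapted upper layers.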
\begin{figure}[!h] \begin{minipage}[b]{0.43\linewidth} \centering \centerline{\includegraphics[width=4.5cm]{motionsense/plot_perep-eps-converted-to.pdf}} \centerline{(a) MotionSense} \end{minipage} \hfill \begin{minipage}[b]{0.43\linewidth} \centering \centerline{\includegraphics[width=4.5cm]{mobiact/plot_perep_mobi-eps-converted-to.pdf}} \centerline{(b) MobiAct} \end{minipage} \caption{By using personalized layers instead of aggregated information, the learning is drastically sped up.} \vspace{-2mm} \label{fig:act_perep} \end{figure} \begin{figure}[h!] \begin{minipage}[b]{0.45\linewidth} \centering \centerline{\includegraphics[width=4.5cm]{motionsense/gender_motion_ep-eps-converted-to.pdf}} \centerline{(a) Gender - MotionSense} \end{minipage} \hfill \begin{minipage}[b]{0.45\linewidth} \centering \centerline{\includegraphics[width=4.5cm]{mobiact/gender__mobi_ep-eps-converted-to.pdf}} \centerline{(b) Gender - MobiAct} \end{minipage} \vfill \begin{minipage}[b]{0.45\linewidth} \centering \centerline{\includegraphics[width=4.5cm]{motionsense/imc_motion_ep-eps-converted-to.pdf}} \centerline{(c) BMI - MotionSense} \end{minipage} \hfill \begin{minipage}[b]{0.45\linewidth} \centering \centerline{\includegraphics[width=4.5cm]{mobiact/imc_mobi_ep-eps-converted-to.pdf}} \centerline{(d) BMI - MobiAct} \end{minipage} \caption{Increasing the number of learning epochs per user increases the accuracy of the attack on both sensitive attributes.} \vspace{-2mm} \label{fig:per_ep} \end{figure} \subsection{Privacy evaluation through attribute inference} \label{privacy-attribute} We conducted an attribute inference attack to infer the gender and the BMI of users from the model updates they send to the server. In this attack, participants train their local model on 80\% of their data. 
Once all the models are sent to the server, only the models from one class of the targeted attribute are aggregated (in our case, models from women for gender inference, and models from overweight users for BMI inference). Then the server sends the aggregated model back to all the users, who fine-tune it locally on the remaining 20\% of their data (e.g., training from a model aggregating updates from women) before returning the update to the server. The adversary then trains an RF classifier on these model updates to infer the sensitive attribute. This training exploits 80\% of all the updates, and the testing is done on the remaining 20\%, with cross validation. Figure~\ref{fig:per_ep} evaluates, for both datasets, the accuracy of both sensitive attribute inferences over the epochs of local learning. Firstly, results show that without any protection (i.e., the Vanilla approach), all sensitive attributes can be inferred with high accuracy for both datasets (e.g., around 90\% accuracy for the gender on MotionSense). FedPer reduces this accuracy by 10\% to 20\%, depending on the dataset and the sensitive attribute. Results also show that FedPer better protects users against inference attacks compared to LDP, regardless of the dataset and sensitive attribute (from 5\% to 10\% of additional accuracy loss for MotionSense). Secondly, results show that the inference accuracy tends to increase over the epochs for all approaches. This is explained by the fact that the attribute inference attack is closely related to overfitting~\cite{yeom}: the more the model learns on a user's data, the more it adjusts its parameters to the data structure and the more it may incorporate sensitive information. 
\begin{figure}[!h] \begin{minipage}[b]{0.45\linewidth} \centering \centerline{\includegraphics[width=4.5cm]{motionsense/gender_motion_4_cdf-eps-converted-to.pdf}} \centerline{(a) Gender - MotionSense} \end{minipage} \hfill \begin{minipage}[b]{0.45\linewidth} \centering \centerline{\includegraphics[width=4.5cm]{mobiact/gender_mobi_4_cdf-eps-converted-to.pdf}} \centerline{(b) Gender - MobiAct} \end{minipage} \vfill \begin{minipage}[b]{0.45\linewidth} \centering \centerline{\includegraphics[width=4.5cm]{motionsense/imc_motion_2_cdf-eps-converted-to.pdf}} \centerline{(c) BMI - MotionSense} \end{minipage} \hfill \begin{minipage}[b]{0.45\linewidth} \centering \centerline{\includegraphics[width=4.5cm]{mobiact/imc_mobi_9_cdf-eps-converted-to.pdf}} \centerline{(d) BMI - MobiAct} \end{minipage} \caption{FedPer and LDP increase the number of users with a small inference accuracy.} \vspace{-2mm} \label{fig:selec_ep} \end{figure} Figure~\ref{fig:selec_ep} reports the CDF of the inference accuracy over the participants. Results show that while each attribute can be inferred with high accuracy for a large part of the users, this accuracy drops for a few percent of the users. FedPer and LDP increase the percentage of users with a small inference accuracy. \begin{figure}[!h] \begin{minipage}[b]{0.48\linewidth} \centering \centerline{\includegraphics[width=4.5cm]{motionsense/membership_-eps-converted-to.pdf}} \centerline{(a) MotionSense} \end{minipage} \hfill \begin{minipage}[b]{0.48\linewidth} \centering \centerline{\includegraphics[width=4.5cm]{mobiact/membership__mobi-eps-converted-to.pdf}} \centerline{(b) MobiAct} \end{minipage} \caption{FedPer and LDP significantly decrease the accuracy of the membership inference attack compared to the Vanilla method.} \vspace{-3mm} \label{fig:membership} \end{figure} \subsection{Privacy evaluation through membership inference} \label{privacy-mia} Lastly, we conduct a membership inference attack to evaluate privacy. 
In this attack, 50\% of the users follow a normal FL learning round using 80\% of their data. The models are sent to the server, which disseminates the aggregated model back to all the users. All of them fine-tune the aggregated model on their remaining 20\% of data. The server then trains an RF to classify membership from the model updates of all users (using 80\% of all these updates for the training and 20\% for the testing, with cross validation as described in Section~\ref{settings}). Figure~\ref{fig:membership} depicts the accuracy of this inference attack for both datasets and all approaches. Similarly to the attribute inference attack, results show that the membership inference attack is most effective on the Vanilla approach. FedPer provides the best protection compared to LDP (by 20\% on average for the MotionSense dataset). Interestingly, FedPer yields an accuracy close to 50\% on the MotionSense dataset, which corresponds to a random guess (of whether the data of a specific user has been used to train the model). \section{Conclusion} \label{ending} We experimentally quantified the utility and privacy trade-off of FL using the private personalized layers proposed by~\cite{fedper} in the context of activity recognition. We consider both an attribute and a membership inference attack to measure privacy leakage. Results show that using private personalized layers provides a better utility and privacy trade-off compared to a vanilla FL approach and a defense scheme using local differential privacy. These results suggest that minimizing the information exchanged with the server is a promising avenue for both improving accuracy and limiting privacy leakage. To strengthen these results, it would be interesting to use other quantitative privacy metrics such as \textit{average information leakage} and \textit{maximum information leakage}~\cite{othermetrics}. \bibliographystyle{ACM-Reference-Format}
\section{Introduction} Let $G=(V(G),E(G))$ be a finite, simple, and undirected graph of order $|V(G)|$. The distance $d_G(u,v)$ between two vertices $u,v\in V(G)$ is the length of the shortest path in the graph $G$ between $u$ and $v$ if they belong to the same connected component of $G$ and infinity otherwise. We omit the subscript $G$ if it is clear from the context. For a positive integer $k$ and vertices $u,v\in V(G)$, we define $d_k(u,v) := \min\{d(u,v),k+1\}$. A set $S \subseteq V(G)$ is a \textit{resolving set} of $G$ if, for any distinct $x,y\in V(G)$, there is a vertex $z\in S$ such that $d(x,z)\neq d(y,z)$. Intuitively, a resolving set of $G$ is a set of landmark vertices, such that each vertex in $V(G)$ is uniquely characterized by its distances to the landmarks. The \textit{metric dimension} $\dim(G)$ of $G$ is the cardinality of a smallest resolving set of $G$. Metric dimension was introduced by Slater \cite{slater1975leaves} in 1975, in connection with the problem of uniquely determining the location of an intruder in a network. Harary and Melter independently introduced the same concept in \cite{melter1976metric}. Metric dimension has since been heavily studied \cite{bailey2011base, caceres2007metric, chappell2008bounds, chartrand2003theory} and has applications in diverse areas such as chemistry \cite{chartrand2000resolvability}, pattern recognition and image processing \cite{melter1984metric}, and strategies for the Mastermind game \cite{chvatal1983mastermind}. Khuller et al. \cite{khuller1996landmarks} considered robot navigation as another possible application of metric dimension. In that sense, a robot moving around in a space modeled by a graph can determine its distance to landmarks located at some of the vertices. The minimum number of landmarks required for the robot to uniquely determine its location on the graph is the metric dimension of the graph. 
A set $A \subseteq V(G)$ is an \textit{adjacency resolving set} of $G$ if, for any distinct $x,y\in V(G)$, there is a vertex $z\in A$ such that $d_1(x,z) \neq d_1(y,z).$ The \textit{adjacency dimension} $\adim(G)$ of $G$ is the cardinality of a smallest adjacency resolving set of $G$. The concepts of adjacency resolving set and adjacency dimension were introduced by Jannesari and Omoomi \cite{jannesari2012metric} in 2012 as a tool for studying the metric dimension of lexicographic product graphs. The authors of \cite{jannesari2012metric} also considered robot navigation as a possible application of adjacency dimension: the minimum number of landmarks required for a robot moving from node to node on a graph to determine its location from only the landmarks adjacent to it is the adjacency dimension of the graph. A function $f:V(G)\rightarrow \mathds{Z}^+ \cup \{0\}$ is a \textit{resolving broadcast} of $G$ if, for any distinct $x,y\in V(G)$, there is a vertex $z\in \supp_G(f):= \{v\in V(G):f(v)>0\}$ such that $d_{f(z)}(x,z) \neq d_{f(z)}(y,z)$. The \textit{broadcast dimension} $\bdim(G)$ of $G$ is the minimum of $c_f(G) := \sum_{v\in V(G)}f(v)$ over all resolving broadcasts $f$ of $G$. The concepts of resolving broadcast and broadcast dimension were introduced in 2020 by Geneson and Yi \cite{geneson2020broadcast}, who noted that broadcast dimension also has applications in robot navigation. In that sense, transmitters with varying range are located at some of the vertices of a graph. A transmitter with range $k$ has cost $k$ for $k\in \mathds{Z}^+\cup \set{0}$. A robot moving around on the graph learns its distance to each transmitter that it is within range of and learns that it is out of range of the others. The minimum total cost of transmitters required for a robot to determine its location on the graph is the broadcast dimension. 
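These definitions can be checked mechanically on small graphs: the sketch below (illustrative code, not from the cited works) computes each vertex's truncated distances $\min\{d(v,z),f(z)+1\}$ to the support of $f$ and tests whether the resulting broadcast representations are pairwise distinct:

```python
from collections import deque

def bfs_distances(adj, src):
    """Distances from src in an unweighted graph given as adjacency lists."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return dist

def is_resolving_broadcast(adj, f):
    """True iff every vertex has a unique broadcast representation b_f(v)."""
    supp = [z for z, val in f.items() if val > 0]
    dists = {z: bfs_distances(adj, z) for z in supp}
    reps = {tuple(min(dists[z][v], f[z] + 1) for z in supp) for v in adj}
    return len(reps) == len(adj)

# The cycle C_6, with range-1 transmitters on two vertices at distance two.
c6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
f = {0: 0, 1: 1, 2: 0, 3: 1, 4: 0, 5: 0}
# is_resolving_broadcast(c6, f) is True, so this certifies bdim(C_6) <= 2.
```

A single range-1 transmitter fails on $C_6$, since its truncated distances take only the three values $0,1,2$, which cannot distinguish six vertices.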
We say that a resolving set, adjacency resolving set, or resolving broadcast of $G$ is \textit{efficient} if it achieves $\dim(G)$, $\adim(G)$, or $\bdim(G)$, respectively. \begin{example} The following tree $T$ has distinct metric, adjacency, and broadcast dimensions. \begin{figure}[h] \centering \begin{subfigure}{.33\textwidth} \centering \includegraphics[scale=.5]{figures/ExampleMetric} \end{subfigure}% \begin{subfigure}{.33\textwidth} \centering \includegraphics[scale=.5]{figures/ExampleAdjacency} \end{subfigure} \begin{subfigure}{.33\textwidth} \centering \includegraphics[scale=.5]{figures/ExampleBroadcast} \end{subfigure} \begin{minipage}{.33\textwidth} \centering \vspace*{.5cm} $\dim(T) = 2$ \end{minipage}% \begin{minipage}{.33\textwidth} \centering \vspace*{.5cm} $\adim(T) = 6$ \end{minipage} \begin{minipage}{.33\textwidth} \centering \vspace*{.5cm} $\bdim(T) = 5$ \end{minipage} \caption{Three copies of tree $T$. An efficient resolving set is shown with open circles in the first copy; an efficient adjacency resolving set is shown with open circles in the second copy; an efficient resolving broadcast is labeled on the third copy.} \label{figure: example} \end{figure} \end{example} In \cite{geneson2020broadcast}, Geneson and Yi proved an asymptotic lower bound of $\Omega(\log{n})$ on the adjacency and broadcast dimension of graphs of order $n$, and they further demonstrated that this lower bound is asymptotically tight using a family of graphs from \cite{zubrilina2018edge}. \begin{theorem} [\cite{geneson2020broadcast}] \label{thm: lowerbound_adim_bdim} For all graphs $G$ of order $n$, we have $$n\geq \adim(G)\geq \bdim(G) = \Omega(\log{n}).$$ \end{theorem} We improve the lower bound on the broadcast dimension of acyclic graphs of order $n$ from $\Omega(\log{n})$ to $\Omega(\sqrt{n})$ and show that this improved lower bound is asymptotically tight. 
\begin{theorem} \label{thm: asymptotic_lower_bound1} For all acyclic graphs $G$ of order $n$, we have ${\bdim}(G) = \Omega(\sqrt{n})$, and this lower bound is asymptotically optimal. \end{theorem} Since the broadcast dimension is a generalization of the adjacency dimension, a natural question is how these quantities relate. \cref{thm: lowerbound_adim_bdim} gives that $\bdim(G) = \Omega\paren{\log\paren{\adim(G)}}$. In the following question, Geneson and Yi ask whether or not this lower bound is asymptotically optimal. \begin{question} (\cite{geneson2020broadcast})\textbf{.} \label{Question: 1} Is there a family of graphs $\set{G_k}_{k\in \mathds{Z}^+}$ with $\bdim(G_k)=\Theta(k)$ and $\adim(G_k) = 2^{\Omega(k)}$ for every $k\in \mathds{Z}^+$? \end{question} We resolve \cref{Question: 1} affirmatively by constructing such a family of graphs. Thus, we complete the characterization of how the broadcast dimension of a graph $G$ can vary in the adjacency dimension of $G$: $\adim(G) \geq \bdim(G) = \Omega( \log(\adim(G)))$, where both sides are tight. Our construction directly implies the following theorem. \begin{theorem} \label{thm: asymptotica_lower_bound2} The lower bound $\bdim(G) = \Omega\paren{\log\paren{\adim(G)}}$ is asymptotically optimal. \end{theorem} The question of the effect of vertex or edge deletion on the metric dimension of a graph was raised by Chartrand and Zhang in \cite{chartrand2003theory} as a fundamental question in graph theory. In \cite{geneson2020broadcast}, Geneson and Yi studied the effect of vertex deletion on the broadcast dimension of a graph, and they ask two corresponding questions for edge deletion. \begin{question} (\cite{geneson2020broadcast})\textbf{.} \label{Question: 2} Is there a family of graphs $\set{G_k}_{k\in \mathds{Z}^+}$ such that $\bdim(G_k)-\bdim(G_k-e_k)$ can be arbitrarily large, where $e_k\in E(G_k)$? 
\end{question} \begin{question} (\cite{geneson2020broadcast})\textbf{.} \label{Question: 3} For any graph $G$ and any $e=uv\in E(G)$, is it true that $\bdim(G-e) - \bdim(G) \leq d_{G-e}(u,v) - 1$? \end{question} Let $e=uv$ denote an edge of a connected graph $G$ such that $G-e$ is also a connected graph. We resolve the first question affirmatively and show that the bound proposed in the second question can fail. In fact, the value $\bdim(G-e) - \bdim(G)$ can be arbitrarily larger than $d_{G-e}(u,v)$. We also show that while $\bdim(G-e) - \bdim(G)$ can be arbitrarily large, the ratio $\frac{\bdim(G-e)}{\bdim(G)}$ is bounded from above. \begin{theorem} \label{thm: edge_deletion1} The value $\bdim(G) - \bdim(G-e)$ can be arbitrarily large. \end{theorem} \begin{theorem} \label{thm: edge_deletion2} The value $\bdim(G-e) - \bdim(G)$ can be arbitrarily larger than $d_{G-e}(u,v)$. \end{theorem} \begin{theorem} \label{thm: last} For all graphs $G$ and any edge $e\in E(G)$, we have $\frac{\bdim(G-e)}{\bdim(G)} \leq 3$. \end{theorem} The rest of this paper is structured as follows. In \cref{Section: General}, we introduce relevant terminology and notation, and we record preliminary results on the metric, adjacency, and broadcast dimension of graphs that are necessary for the rest of the paper. In \cref{section: paths_and_cycles}, we examine the broadcast dimension of paths and cycles. In \cref{Section: Acyclic}, we discuss results on the broadcast dimension of acyclic graphs and prove \cref{thm: asymptotic_lower_bound1}. In \cref{Section: Comparing}, we resolve \cref{Question: 1} affirmatively and prove \cref{thm: asymptotica_lower_bound2}. In \cref{Section: Edge_Deletion}, we prove Theorems \ref{thm: edge_deletion1}, \ref{thm: edge_deletion2}, and \ref{thm: last}. Finally in \cref{Section: Future_Work}, we conclude with some open problems about broadcast dimension. 
\section{Preliminaries} \label{Section: General} In this section, we first introduce relevant terminology and notation that we will use throughout the paper. We then record some preliminary results on the metric, adjacency, and broadcast dimension of graphs. For the rest of this section, we let $f:V(G)\rightarrow \mathds{Z}^+ \cup \{0\}$ be a function on a graph $G=(V(G), E(G))$. We denote by $P_n$, $C_n$, and $K_n$ the path, cycle, and complete graph on $n$ vertices, respectively. We define $\diam(G) = \max\set{d(u,v)\mid u,v\in V(G)}$. We denote by $\mathbf{1}$ the vector with 1 for each entry and $\mathbf{2}$ the vector with 2 for each entry, where the length of the vector is inferred from context. For an arbitrary set $S$, a totally ordered set $Y$, and a function $g:S\rightarrow Y$, we define $\argmax_{x\in S} g(x)$ to be any $x^*\in S$ such that $g(x) \leq g(x^*)$ for all $x\in S$. We define $\argmin_{x\in S} g(x)$ analogously. \begin{definition} A vertex $z\in \supp_G(f)$ \textit{resolves} a pair of distinct vertices $x,y\in V(G)$ if $$d_{f(z)}(x,z) \neq d_{f(z)}(y,z).$$ \end{definition} In order for a vertex $z\in \supp_G(f)$ to resolve a pair of vertices $x,y\in V(G)$, we must have $f(z)\geq d(x,z)$ or $f(z)\geq d(y,z)$. We formally define this notion below. \begin{definition} A vertex $z\in \supp_G(f)$ \textit{reaches} a vertex $v\in V(G)$ with respect to $f$ if $f(z) \geq d(v,z)$, and the function $f$ \textit{reaches} a vertex $v\in V(G)$ if there is a vertex $z\in \supp_G(f)$ that reaches $v$. \end{definition} By definition, the function $f$ is a resolving broadcast of $G$ if and only if every pair of distinct vertices in $V(G)$ is resolved by a vertex in $\supp_G(f)$. Thus, any resolving broadcast $f$ of $G$ must reach all but at most one vertex in $V(G)$. 
Equivalently, the function $f$ is a resolving broadcast of $G$ if and only if every vertex of $G$ is \textit{distinguished}; that is, every vertex of $G$ is uniquely characterized by its distances to the vertices in $\supp_G(f)$ that reach it. We formally define this term below. \begin{definition} Let $k = \abs{\supp_G(f)}$. The \textit{broadcast representation} of a vertex $v\in V(G)$ with respect to $f$ is the $k$-vector $b_f(v) =\allowbreak \paren{d_{f(u_1)}(v, u_1), \dots, d_{f(u_k)}(v, u_k)}$ for $u_i \in \supp_G(f)$. We say that a vertex $v\in V(G)$ is \textit{distinguished} if it has a unique broadcast representation $b_f(v)$. \end{definition} The following observations give insight into how the metric, adjacency dimension, and broadcast dimension of graphs are related and will be useful throughout the rest of the paper. \begin{observation} (\cite{geneson2020broadcast})\textbf{.} The following properties hold for any graph $G$. \begin{enumerate} \item We have $\dim(G) \leq \bdim(G) \leq \adim(G)$. \item If $\diam(G)\leq 2$, then we have $\dim(G)=\bdim(G)=\adim(G)$. \end{enumerate} \end{observation} The \textit{closed neighborhood} of a vertex $v\in V(G)$ is $N[v] = \{u\in V(G): uv\in E(G)\} \cup \{v\}$. Two distinct vertices $u,v \in V(G)$ are called \textit{twin vertices} if $N[u] =N[v]$. \begin{observation} \label{obs: twin} If $u,v\in V(G)$ are twin vertices, then the following properties hold. \begin{enumerate} \item For any resolving set $S$ of $G$, we have that $u\in S$ or $v\in S$ \cite{hernando2010extremal}. \item For any adjacency resolving set $A$ of $G$, we have that $u\in A$ or $v\in A$ \cite{jannesari2012metric}. \item For any resolving broadcast $f$ of $G$, we have that $u\in \supp_G(f)$ or $v\in \supp_G(f)$ \cite{geneson2020broadcast}. \end{enumerate} \end{observation} \section{Paths and Cycles} \label{section: paths_and_cycles} Here we restrict our attention to path and cycle graphs. 
It is easy to see that $\dim(P_n) = 1$ and $\dim(C_n) = 2$ for every integer $n\geq 3$. The adjacency dimension and the broadcast dimension, respectively, of paths and cycles were determined in \cite{jannesari2012metric} and \cite{geneson2020broadcast}. \begin{theorem} [\cite{jannesari2012metric}] \label{thm: paths_adj} For every integer $n\geq 4$, we have $\adim(P_n) = \adim(C_n) = \floor{\frac{2n+2}{5}}$. \end{theorem} \begin{theorem} [\cite{geneson2020broadcast}] \label{thm: paths} For every integer $n\geq 4$, we have $\bdim(P_n) = \bdim(C_n) = \floor{\frac{2n+2}{5}}$. \end{theorem} In this section, we prove the following result on efficient resolving broadcasts of paths and cycles. \begin{proposition} \label{prop: my_paths_cycles} For every $n \in \mathds{Z}^+$ and $G\in \set{P_n,C_n}$, if $f$ is an efficient resolving broadcast of $G$, then $f(v) \leq 2$ for all $v\in V(G)$. \end{proposition} We begin with two lemmas. In the proof of \cref{thm: paths}, Geneson and Yi proved the following useful fact, which we state here as a lemma. We include the proof for completeness. \begin{lemma} [\cite{geneson2020broadcast}] \label{lemma: at_most_one} For every $n \in \mathds{Z}^+$ and every efficient resolving broadcast $f$ of $G\in \set{P_n,C_n}$, there is an efficient resolving broadcast $f'$ of $G$ with the following properties. \begin{enumerate} \item Every vertex reached by $f$ is also reached by $f'$. \item For all $v\in V(G)$, we have $f'(v)\leq 1$. \end{enumerate} \end{lemma} \begin{proof} Let $G$ be the path $v_1,\dots,v_{n}$ or the cycle $v_1,\dots,v_{n},v_1$. Let $f_0$ be any efficient resolving broadcast of $G$. If $f_0(v)\leq 1$ for all $v\in V(G)$, then we are done. Otherwise, we repeatedly modify $f_i$ to obtain a new efficient resolving broadcast $f_{i+1}$ that satisfies the following monovariant: for integer $k$, let $U_k = \{v\in V(G): f_k(v)>1\} $ and $S_k = \sum_{v\in U_k} f_k(v)$, then $S_{i+1} < S_i$. 
Let $v_j\in V(G)$ be any vertex with $x:=f_i(v_j)> 1$. If $v_j$ is a leaf and $x =2$, we set $f_{i+1}(v_j)=1$ and $f_{i+1}(u)= \max\{f_i(u), 1\}$, where $u$ is the vertex adjacent to $v_j$. Otherwise, we set $f_{i+1}(v_j) = x - 2$, and we let $u_1$ and $u_2$ be the vertices $v_{(j+x-1)\mod n}$ and $v_{(j-x+1)\mod n}$, respectively. We set $f_{i+1}(u_1)=\max\{f_i(u_1), 1\}$ and $f_{i+1}(u_2) = \max\{f_i(u_2), 1\}$. The maximum value is used for vertices assigned multiple values for $f_{i+1}$, and $f_{i+1}(v) = f_i(v)$ for any vertex $v$ not assigned any value for $f_{i+1}$. This process will terminate after finitely many steps because of the monovariant on $S_i$, yielding a resolving broadcast that satisfies the description of $f'$. \end{proof} The proof of \cref{lemma: hat_bdim} uses some ideas from observations made in \cite{buczkowski2003k} about the metric dimension of a wheel $W_n = C_n + K_1$ for integer $n\geq 3$. To state the lemma, we need the following definition. \begin{definition} For a graph $G$, the value $\widehat{\bdim}(G)$ is the minimum of $\sum_{v\in V(G)} f(v)$ over all resolving broadcasts $f$ of $G$ such that every vertex $v \in V(G)$ is reached by at least one vertex $z\in \supp_G(f)$. This differs from $\bdim(G)$ because one vertex may be unreached by a resolving broadcast. \end{definition} \begin{observation} For all graphs $G$, we have $$\widehat{\bdim}(G) = \bdim(G\cup K_1) \quad \text{and} \quad \bdim(G)\leq \widehat{\bdim}(G) \leq \bdim(G) + 1.$$ \end{observation} \begin{lemma} \label{lemma: hat_bdim} For every integer $n\geq 4$, we have $\widehat{\bdim}(P_n) =\widehat{\bdim}(C_n)= \floor{\frac{2n+3}{5}}$. \end{lemma} \begin{proof} Let $G$ be the path $v_1,\dots,v_{n}$ or the cycle $v_1,\dots,v_{n},v_1$. First, we will show that $\widehat{\bdim}(G) = \bdim(G)$ for $n\not\equiv 1 \pmod 5$. Define $g:V(G)\rightarrow \mathds{Z}^+ \cup \{0\}$ as follows: $g(v_i)$ is 1 if $i\equiv 2\pmod 5$ or $i\equiv 4\pmod 5$ and 0 otherwise. 
Note that $g$ is a resolving broadcast of $G$ that achieves $\bdim(G)$ given in \cref{thm: paths} and that $g$ reaches all of the vertices of $G$ when $n\not\equiv 1 \pmod 5$. Now, we will show that $\widehat{\bdim}(G)= \bdim(G)+1$ for $n\equiv 1 \pmod 5$. Let $n= 5x + 1$ for some positive integer $x$; then, we have $\bdim(G) = \floor{\frac{10x+4}{5}}=2x$. It suffices to show that for any efficient resolving broadcast $f$ of $G$, there is a vertex not reached by $f$. By \cref{lemma: at_most_one}, there is an efficient resolving broadcast $f'$ of $G$ with $f'(v)\leq 1$ for all $v\in V(G)$ that reaches all of the vertices reached by $f$. For the sake of contradiction, we assume that $f'$ reaches all of the vertices, and so there is no vertex $v\in V(G)$ with $b_{f'}(v) = \mathbf{2}$. A \textit{gap} of graph $G$ is a maximal connected subgraph of $G$ that only consists of vertices that are not in $\supp_G(f')$. If two gaps are adjacent to the same vertex in $\supp_G(f')$, then we call them \textit{neighboring gaps}. No gap can contain three vertices, since the vertex in the middle of the gap would have broadcast representation $\mathbf{2}$. Additionally, any neighboring gap of a gap that contains two vertices must contain only one vertex, since otherwise there exists five consecutive vertices of $G$ where the vertex $m$ in the middle is the only one in $\supp_G(f')$, and the two vertices adjacent to $m$ would have the same broadcast representation. If $G$ is $C_n$, then of the $\bdim(G)$ gaps, at most $\floor{\frac{\bdim(G)}{2}}$ gaps contain two vertices, and none contain three vertices. Thus, $n\leq 2\bdim(C_n) + \floor{\frac{\bdim(C_n)}{2}} =5x$. Similar reasoning yields $n\leq 5x$ if $G$ were instead $P_n$. Since $G$ is a graph of order $5x+1$, we have reached a contradiction. \end{proof} With the above lemma, we are now able to prove \cref{prop: my_paths_cycles}. 
\begin{proof}[Proof of \cref{prop: my_paths_cycles}] Let $G$ be the path $v_1,\dots,v_{n}$ or the cycle $v_1,\dots,v_{n},v_1$, and let $f$ be an efficient resolving broadcast of $G$. If $n\leq 6$, then $\bdim(G) \leq 2$ by \cref{thm: paths}, so $f(v) \leq 2$ for all $v\in V(G)$. Thus, we consider $n\geq 7$. Let $v_i = \argmax_{v\in V(G)}(f(v))$. For the sake of contradiction, we assume that $f(v_i)\geq 3$. If vertex $v_i$ were a leaf (say $i=1$), then a function $g$ that is identical to $f$, except with $g(v_3)=f(v_1)-2$ and $g(v_1)= 1$, is a resolving broadcast of $G$ with $c_g(G) < c_f(G)$, contradicting the efficiency of $f$. Thus, $v_i$ has two neighbors. At least one of the neighbors of $v_i$ must be reached by some other vertex $v_j\neq v_i$ or else the two neighbors of $v_i$ would not be distinguished. First, we will show that $f$ is inefficient if $f(v_j)\geq 2$. Let $T$ be the set of vertices that are reached by $v_i$ or $v_j$. Note that $|T|\leq 2f(v_i)+f(v_j)+2$. By \cref{lemma: hat_bdim}, the vertices in $T$ can be reached and distinguished with a total cost of $\floor{\frac{2|T|+3}{5}}$, which is less than $f(v_i) + f(v_j)$ when $f(v_i) \geq 3$ and $f(v_j)\geq 2$. Thus, we must have $f(v_j)=1$, so $|T|\leq 2f(v_i)+1$ since $v_j$ cannot reach any vertex that $v_i$ does not reach. By \cref{lemma: hat_bdim}, the vertices in $T$ can be reached and distinguished with a total cost of $$\floor{\frac{2|T|+3}{5}}\leq \frac{4f(v_i)+5}{5} < f(v_i)+1 = f(v_i)+f(v_j).$$ This contradicts the efficiency of resolving broadcast $f$. \end{proof} \section{Results on Acyclic Graphs} \label{Section: Acyclic} In this section, we discuss some results on the broadcast dimension of acyclic graphs, and we prove \cref{thm: asymptotic_lower_bound1}. We make use of standard terminology for trees: a \textit{major vertex} in a tree $T$ is a vertex of degree at least three, and a \textit{leaf} of $T$ is a vertex of degree one. 
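On very small trees, $\bdim$ can also be computed by exhaustive search, which serves as a convenient sanity check for the bounds in this section (illustrative code, ours; values are capped at a small constant, which cannot hide a cheaper solution once the search finds a resolving broadcast whose total cost is below the cap):

```python
from collections import deque
from itertools import product

def bfs_distances(adj, src):
    """Distances from src in an unweighted graph given as adjacency lists."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return dist

def bdim_bruteforce(adj, max_val=3):
    """Minimum total cost of a resolving broadcast, trying every cost
    function with values in {0, ..., max_val} (feasible for tiny graphs)."""
    verts = sorted(adj)
    dist = {v: bfs_distances(adj, v) for v in verts}
    best = None
    for f in product(range(max_val + 1), repeat=len(verts)):
        cost = sum(f)
        if best is not None and cost >= best:
            continue
        supp = [v for v, val in zip(verts, f) if val > 0]
        fv = dict(zip(verts, f))
        reps = {tuple(min(dist[z][v], fv[z] + 1) for z in supp) for v in verts}
        if len(reps) == len(verts):
            best = cost
    return best

# Star K_{1,3}: a tree whose three leaves are pairwise twin vertices, so at
# least two leaves must carry positive cost (cf. the twin-vertex observation),
# and a cost of 2 suffices.
star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
# bdim_bruteforce(star) == 2
```

The same routine recovers $\bdim(P_7)=\floor{\frac{2\cdot 7+2}{5}}=3$ from \cref{thm: paths}, agreeing with the bound $f(v)\leq 2$ of \cref{prop: my_paths_cycles}.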
For any graph $G$, showing that a function $g:V(G)\rightarrow \mathds{Z}^+ \cup \{0\}$ is a resolving broadcast of $G$ gives an upper bound of $c_g(G)$ on $\bdim(G)$. On the other hand, obtaining a nice lower bound on $\bdim(G)$ is oftentimes less straightforward. The result on twin vertices from \cref{obs: twin} is a useful tool for lower bounding $\bdim(G)$. In this section, we use a different approach to derive a lower bound on the broadcast dimension of trees: we consider the number of unique broadcast representations of the vertices of a tree $T$ with respect to various functions $f:V(T)\rightarrow \mathds{Z}^+ \cup \{0\}$. This motivates the following definition. \begin{definition} For a graph $G$ of order $n$ and a function $f:V(G)\rightarrow \mathds{Z}^+ \cup \{0\}$, we say that $B_G(f)$ is the number of unique broadcast representations of the vertices of $G$ with respect to $f$. That is, $$B_G(f) =\abs{\set{b_f(v) \mid v\in V(G)}}.$$ Note that $B_G(f)=n$ if and only if $f$ is a resolving broadcast of $G$. \end{definition} The following lemma will be useful in the proof of \cref{thm: acyclic_lower_bound}. \begin{lemma} \label{lemma: reachable_from_a} Let $T$ be a tree with resolving broadcast $f$, and let $a, b, v, x\in V(T)$ such that the following inequalities hold: \begin{align*} & f(a)-d(a,x)\geq f(v)-d(v,x),\\ & f(b)-d(b,x) \geq f(v)-d(v,x), \\ & f(a)-d(a,v)\geq f(b)-d(b,v). \end{align*} Then every vertex of $T$ that is reached by both $b$ and $v$ is also reached by $a$. \end{lemma} \begin{proof} We consider four possible orientations of the vertices $a$, $b$, and $v$ (see \cref{figure: Cases}). \noindent\textbf{Case 1.} There is not a path in $T$ through vertices $a$, $b$, and $v$.\\ Let $c$ be the major vertex of $T$ such that the path from $c$ to $a$, the path from $c$ to $b$, and the path from $c$ to $v$ do not share any edges. In this case, $f(a)-d(a,v)\geq f(b)-d(b,v)$ implies that \begin{equation} \label{eq:1} f(a)-d(a,c)\geq f(b)-d(b,c). 
\end{equation} If the path from $x$ to $a$ does not go through $c$, then both the path from $x$ to $b$ and the path from $x$ to $v$ must pass through $c$, so $f(b)-d(b,x) \geq f(v)-d(v,x)$ implies that $f(b)-d(b,c) \geq f(v)-d(v,c)$. Combining this inequality with \eqref{eq:1}, we have \begin{equation} \label{eq:2} f(a)-d(a,c)\geq f(v)-d(v,c). \end{equation} Alternatively, if the path from $x$ to $a$ does go through $c$, then $f(a)-d(a,x)\geq f(v)-d(v,x)$ directly implies \eqref{eq:2}. Thus, the inequality in \eqref{eq:2} holds no matter where vertex $x$ is. The inequality in \eqref{eq:1} shows that any vertex reached by $b$ with a path to $b$ that goes through $c$ is reached by $a$. Similarly, the inequality in \eqref{eq:2} shows that any vertex reached by $v$ with a path to $v$ that goes through $c$ is reached by $a$. Thus, any vertex that is reached by both $b$ and $v$ is also reached by $a$. \noindent\textbf{Case 2.} $d(a,v)+d(v,b) = d(a,b)$.\\ If the path from $x$ to $b$ does not go through $v$, then \begin{align*} f(a)-d(a,x)\geq f(v)-d(v,x) &\implies f(a)\geq d(a,v) + d(v,x)+ f(v)-d(v,x)\\ &\implies f(a)-d(a,v) \geq f(v). \end{align*} Alternatively, if the path from $x$ to $b$ does go through $v$, then replacing $a$ with $b$ in the above inequalities, we get $f(b)-d(b,v) \geq f(v)$, which implies that $f(a)-d(a,v)\geq f(v)$. Thus, no matter where vertex $x$ is, we have $f(a)-d(a,v) \geq f(v)$, which shows that $a$ reaches all of the vertices reached by $v$. \noindent\textbf{Case 3.} $d(b,a)+d(a,v) = d(b,v)$. \noindent\textbf{Case 4.} $d(a,b)+d(b,v) = d(a,v)$. It is easy to see that the lemma is true for Cases 3 and 4 by direct observation or by performing analysis similar to the analysis shown for Cases 1 and 2. \end{proof} \begin{theorem} \label{thm: acyclic_lower_bound} For all trees $T$ of order $n$, we have ${\bdim}(T) \geq \sqrt{\frac{n}{6}}.$ \end{theorem} \begin{proof} Let $T$ be a tree of order $n$, and let $f$ be any resolving broadcast of $T$. 
We define $f': V(T)\rightarrow \mathds{Z}^+ \cup \{0\}$ such that $f'(v) = 0$ for all $v\in V(T)$. Note that $B_T(f') = 1$. Let $x\in V(T)$ be any vertex. We order the vertices in $\supp_T(f)$ so that vertex $v\in \supp_T(f)$ comes before vertex $u\in \supp_T(f)$ in the ordering only if $f(v)-d(v,x)\geq f(u) -d(u,x)$. We update the value of $f'(v)$ from 0 to $f(v)$ (notationally, $f'(v)\leftarrow f(v)$) one vertex $v\in \supp_T(f)$ at a time in the defined order until $f' = f$, and we consider the increase in $B_T(f')$ on each update. \begin{figure}[t] \centering \begin{subfigure}{.5\textwidth} \centering \includegraphics[scale=.5]{figures/Case1} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[scale=.5]{figures/Case2} \end{subfigure} \begin{minipage}{.5\textwidth} \centering \vspace*{.3cm} Case 1 \vspace*{.8cm} \end{minipage}% \begin{minipage}{.5\textwidth} \centering \vspace*{.3cm} Case 2 \vspace*{.8cm} \end{minipage} \begin{subfigure}{.5\textwidth} \centering \includegraphics[scale=.5]{figures/Case3} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[scale=.5]{figures/Case4} \end{subfigure} \begin{minipage}{.5\textwidth} \centering \vspace*{.3cm} Case 3 \end{minipage}% \begin{minipage}{.5\textwidth} \centering \vspace*{.3cm} Case 4 \end{minipage} \caption{The four cases from the proof of \cref{lemma: reachable_from_a} and the proof of \cref{thm: acyclic_lower_bound}. Note that all vertices may have larger degree than what is shown. Any non-pictured vertex of the tree that is in $S$ (defined in the proof of \cref{thm: acyclic_lower_bound}) and adjacent to a vertex in $S_1$ is also in $S_1$.} \label{figure: Cases} \end{figure} For a vertex $v\in \supp_T(f)$, let $W(v)$ be the set of vertices that can reach (with respect to $f'$) at least one vertex $u\in V(T)$ that is reached by $v$ (with respect to $f$). 
That is, \begin{equation*} W(v) = \set{w\in \supp_T(f') \mid w\neq v \text{ and there exists } u\in V(T) \text{ with } f'(w)\geq d(u,w) \text{ and } f(v)\geq d(u,v)}. \end{equation*} If $W(v) = \emptyset$, then updating $f'(v) \leftarrow f(v)$ increases $B_T(f')$ by at most $f(v) + 1$, which is upper bounded by $2\paren{f\paren{v}}^2$ since $f(v) \geq 1$. If $|W(v)| = 1$, then we can make the following observations about the broadcast representation $b_{f'}(u)$ of any vertex $u$ reached by $v$ after the update $f'(v) \leftarrow f(v)$. There are $f(v) + 1$ possible values for the entry of $b_{f'}(u)$ corresponding to vertex $v$ and $2f(v) + 1$ possible values for the entry of $b_{f'}(u)$ corresponding to the vertex in $W(v)$. The rest of the entries of $b_{f'}(u)$ must be the maximal possible value for that entry. Thus, $B_T(f')$ increases by at most $\paren{f(v) + 1}\paren{2f(v) + 1}\leq 6\paren{f\paren{v}}^2$ in this case. Now we consider $|W(v)| > 1$. Let $a=\argmax_{u\in W(v)}\paren{f'(u) - d(u,v)}$ and $b\in W(v)-\{a\}$. Let $\delta \geq 0$ be such that the update $f'(v)\leftarrow f(v)$ increases $B_T(f')$ by $\delta$. \noindent\textbf{Claim.} If $f'(b)$ were instead zero, then the update $f'(v)\leftarrow f(v)$ would still increase $B_T(f')$ by at least $\delta$. \begin{proof}[Proof of claim.] Let $S$ be the set of vertices reached by both $b$ and $v$, and let $S_0 = V(T) - S$. We consider four possible orientations of the vertices $a$, $b$, and $v$ (see \cref{figure: Cases}), and we show that, in each case, the vertices in $S$ can be split into two (possibly empty) sets $S_1$ and $S_2$ such that the three properties listed below are satisfied. Note that showing this proves the claim. \begin{minipage}{30pt} $\text{ }$ \end{minipage}% \begin{minipage}{.9\textwidth} \begin{enumerate} \item [Property 1.] Before updating $f'(v)\leftarrow f(v)$, every vertex in $S_1$ has a different broadcast representation from every vertex in $V(T) - S_1$. \item[Property 2.]
Updating $f'(v)\leftarrow f(v)$ does not increase $\abs{\set{b_{f'}(v) \mid v\in S_1}}$. \item[Property 3.] If $f'(b)$ were instead zero, updating $f'(v)\leftarrow f(v)$ would increase $\abs{\set{b_{f'}(v) \mid v\in S_2\cup S_0}}$ by at least $\delta$. \end{enumerate} \end{minipage} Since we made the updates $f'(a)\leftarrow f(a)$ and $f'(b)\leftarrow f(b)$ before the update $f'(v)\leftarrow f(v)$, we have $f(a)-d(a,x)\geq f(v)-d(v,x)$ and $f(b)-d(b,x) \geq f(v)-d(v,x)$. Because of the way we chose vertex $a$, we have $f(a)-d(a,v)\geq f(b)-d(b,v)$. Thus, by \cref{lemma: reachable_from_a}, vertex $a$ also reaches all of the vertices in $S$. Because every vertex in $S_0$ is not reached by $b$ or not reached by $v$, the increase in $\abs{\set{b_{f'}(v) \mid v\in S_0}}$ after updating $f'(v)\leftarrow f(v)$ would be at least the same if $f'(b)$ were instead zero. In all four cases, if $S_1=\emptyset$, Properties 1 and 2 are trivially satisfied, and if $S_2 = \emptyset$, Property 3 is trivially satisfied. \noindent\textbf{Case 1.} There is not a path in $T$ through vertices $a$, $b$, and $v$.\\ Let $c$ be the major vertex of $T$ such that the path from $c$ to $a$, the path from $c$ to $b$, and the path from $c$ to $v$ do not share any edges. Let $S_1$ be the set of vertices in $S$ with a path to $b$ that does not go through $c$, and $S_2 = S-S_1$. Let $u_1\in S_1$. All other vertices with distance $d(u_1,a)$ to $a$ and $d(u_1,b)$ to $b$ are also in $S_1$ (Property 1) and have the same distance $d(u_1,a) - d(a,c)+d(c,v)$ to vertex $v$ (Property 2). Let $u_2\in S_2$. All of the vertices in $S_2$ that are distance $d(u_2,a)$ to vertex $a$ and distance $d(u_2,v)$ to vertex $v$ have the same distance to vertex $b$ (Property 3). \noindent\textbf{Case 2.} $d(a,v)+d(v,b) = d(a,b)$.\\ Let $S_1=S$ and $S_2 = \emptyset$. Let $u_1\in S_1$. 
All other vertices that are distance $d(u_1,a)$ from vertex $a$ and distance $d(u_1,b)$ from vertex $b$ are also in $S_1$ (Property 1) and are the same distance from vertex $v$ (Property 2). Property 3 is trivially satisfied. \noindent\textbf{Case 3.} $d(b,a)+d(a,v) = d(b,v)$.\\ Let $S_1$ be the set of vertices in $S$ with a path to $b$ that does not go through $a$, and $S_2=S-S_1$. Let $u_1\in S_1$. All other vertices with distance $d(u_1,a)$ to $a$ and $d(u_1,b)$ to $b$ are also in $S_1$ (Property 1), and they all have the same distance $d(u_1,a) + d(a,v)$ to vertex $v$ (Property 2). Let $u_2\in S_2$. All of the vertices in $S_2$ that are distance $d(u_2,a)$ to vertex $a$ have the same distance $d(u_2,a) + d(a,b)$ to vertex $b$ (Property 3). \noindent\textbf{Case 4.} $d(a,b)+d(b,v) = d(a,v)$.\\ Let $S_1=\emptyset$ and $S_2 = S$. Properties 1 and 2 are trivially satisfied. Let $u_2\in S_2$. All of the vertices with distance $d(u_2,a)$ to vertex $a$ and distance $d(u_2,v)$ to vertex $v$ have the same distance to vertex $b$ (Property 3). \end{proof} The claim implies that the change in $B_T(f')$ after updating $f'(v) \leftarrow f(v)$ when $|W(v)| > 1$ is upper bounded by the change in $B_T(f')$ after updating $f'(v) \leftarrow f(v)$ if we instead had $W(v)=\{a\}$. Thus, every update increases $B_T(f')$ by at most $6\paren{f(v)}^2$, and the very first update increases $B_T(f')$ by at most $2\paren{f(v)}^2$. Since we started out with $B_T(f') = 1$, and we must have $B_T(f') = n$ after finishing all of the updates, we have that $c_f(T) \geq \sqrt{\frac{n}{6}}$ for any resolving broadcast $f$ of $T$. \end{proof} Because the broadcast dimension of a disconnected graph is at least the sum of the broadcast dimensions of all of its connected components, \cref{thm: acyclic_lower_bound} directly implies the following corollaries. \begin{corollary} \label{corollary: acyclic} For all acyclic graphs $G$ of order $n$, we have ${\bdim}(G) = \Omega(\sqrt{n})$. 
\end{corollary} \begin{corollary} \label{corollary: adim_of_acyclic} For all acyclic graphs $G$ of order $n$, we have $\adim(G) = \Omega(\sqrt{n})$. \end{corollary} \begin{corollary} \label{corollary: adim_of_acyclic2} For all acyclic graphs $G$ of order $n$, we have $\adim(G) = O\paren{\paren{\bdim(G)}^2}$. \end{corollary} Now we will show that the bound from \cref{thm: acyclic_lower_bound} is sharp up to a constant factor and that the asymptotic bounds from \cref{corollary: acyclic} and \cref{corollary: adim_of_acyclic2} are asymptotically optimal. We do so by finding a family of trees that achieves these bounds up to a constant factor. This family of graphs will also be used to study edge deletion in \cref{Section: Edge_Deletion}. \begin{definition} \label{def: F_k} For every $k\in \mathds{Z}^+ \cup \{0\}$, graph $L_k$ is the path $v_0, \dots, v_k$. The graph $F_k$ is $L_k$ with a path $P_i$ connected to $v_i$ for each $1\leq i\leq k$. (See \cref{figure: F3} for the graph $F_3$.) \end{definition} \begin{figure}[h] \includegraphics[scale=.5]{figures/F3} \centering \caption{The graph $F_3$.} \label{figure: F3} \end{figure} \begin{theorem} \label{thm: construct_F} For every $k\in \mathds{Z}^+ \cup \{0\}$, tree $F_k$ of order $\Theta(k^2)$ has ${\bdim}(F_k) = O(k)$ and $\adim(F_k) = \Theta(k^2)$. \end{theorem} \begin{proof} The function $f_k:V(F_k)\rightarrow \mathds{Z}^+ \cup \{0\}$ with $f_k(v_0) = f_k(v_k) =2k$ and $f_k(v)=0$ for all other vertices $v \in V(F_k)$ is a resolving broadcast of $F_k$ with $c_{f_k}(F_k) = 4k$, so $\bdim(F_k)\leq 4k = O(k)$. The size of any adjacency resolving set of $F_k$ must be linear in the number of vertices in order for all of the vertices on the paths attached to $L_k$ to be distinguished. Since tree $F_k$ has order $\Theta(k^2)$, we have $\adim(F_k) = \Theta(k^2)$. \end{proof} Combining \cref{corollary: acyclic} and \cref{thm: construct_F}, we have proven \cref{thm: asymptotic_lower_bound1}.
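The upper bound in \cref{thm: construct_F} can be verified directly for small $k$. The sketch below is our own illustration: it builds $F_k$ (reading \cref{def: F_k} as attaching a path on $i$ vertices to $v_i$ by an edge, as in \cref{figure: F3}; the helper names are ours) and checks that the broadcast assigning $2k$ to $v_0$ and $v_k$ is resolving:

```python
from collections import deque

def build_Fk(k):
    # Adjacency lists for F_k: spine ("L", 0..k) plus a pendant path
    # ("P", i, 1..i) of i vertices attached to ("L", i) for each i.
    adj = {("L", i): set() for i in range(k + 1)}
    for i in range(k):
        adj[("L", i)].add(("L", i + 1))
        adj[("L", i + 1)].add(("L", i))
    for i in range(1, k + 1):
        prev = ("L", i)
        for j in range(1, i + 1):
            cur = ("P", i, j)
            adj[cur] = set()
            adj[prev].add(cur)
            adj[cur].add(prev)
            prev = cur
    return adj

def bfs(adj, src):
    # Single-source distances by breadth-first search.
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return dist

def is_resolving(adj, f):
    # f maps support vertices to positive values; representation entries
    # are distances capped at f[w] + 1.
    dists = {w: bfs(adj, w) for w in f}
    reps = [tuple(min(dists[w][v], f[w] + 1) for w in f) for v in adj]
    return len(set(reps)) == len(adj)

k = 4
adj = build_Fk(k)
f = {("L", 0): 2 * k, ("L", k): 2 * k}  # the broadcast f_k from the proof
print(is_resolving(adj, f), sum(f.values()))  # prints: True 16
```

The cost is $4k$, while the order of $F_k$ is $(k+1) + k(k+1)/2 = \Theta(k^2)$, so the example realizes a tree of quadratic order whose broadcast dimension grows only linearly.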
In \cite{geneson2020broadcast}, Geneson and Yi showed that, for two connected graphs $G$ and $H$ such that $H\subset G$, the ratios $\frac{\dim(H)}{\dim(G)}$, $\frac{\adim(H)}{\adim(G)}$, and $\frac{\bdim(H)}{\bdim(G)}$ can be arbitrarily large. In the next result, we show that this can only be true when the graph $G$ is not acyclic. \begin{proposition} For two trees $T_1$ and $T_2$ such that $T_1\subseteq T_2$, we have that $\dim(T_1)\leq \dim(T_2)$, $\adim(T_1)\leq \adim(T_2)$, and $\bdim(T_1)\leq \bdim(T_2)$. \end{proposition} \begin{proof} Let $T$ be a tree with efficient resolving broadcast $f:V(T)\rightarrow \mathds{Z}^+ \cup \{0\}$. Let $v\in V(T)$ be a leaf of $T$, and let $uv\in E(T)$. If $v\not\in\supp_{T}(f)$, then $g:V(T-v)\rightarrow \mathds{Z}^+ \cup \{0\}$ with $g(w) = f(w)$ for every $w\in V(T-v)$ is a resolving broadcast of graph $T-v$. If $v\in\supp_{T}(f)$, then $g:V(T-v)\rightarrow \mathds{Z}^+ \cup \{0\}$ with $g(u) = \max\{f(v) -1, f(u)\}$ and $g(w) = f(w)$ for every $w\in V(T-v) - \{u\}$ is a resolving broadcast of $T-v$. Thus, $\bdim(T-v)\leq \bdim(T)$ for any leaf $v$ of $T$. Tree $T_2$ can be pruned into tree $T_1$ by repeatedly deleting leaves that are not in $T_1$. Thus, $\bdim(T_1)\leq \bdim(T_2)$. The results $\dim(T_1)\leq \dim(T_2)$ and $\adim(T_1)\leq \adim(T_2)$ follow with similar reasoning. \end{proof} \section{Comparing $\adim(G)$ and $\bdim(G)$} \label{Section: Comparing} Geneson and Yi \cite{geneson2020broadcast} showed that, for the $d$-dimensional grid graph $G_k = \Pi_{i=1}^{d} P_k$, we have $\bdim(G_k) = \Theta(k)$ and $\adim(G_k) = \Theta(k^d)$ for every $k\in \mathds{Z}^+$ and any $d\geq 1$, where the constants in the bounds depend on $d$. In this section, we prove the following theorem. \begin{theorem} \label{thm: compare} There exists a family of graphs $\set{G_k}_{k\in \mathds{Z}^+}$ with $\bdim(G_k) = \Theta(k)$ and $\adim(G_k) = 2^{\Omega(k)}$ for every $k\in \mathds{Z}^+$.
\end{theorem} First, we recall the following graph notation. We denote by $G[S]$ the subgraph of $G$ induced by $S\subseteq V(G)$. The \textit{Cartesian product} of graphs $G$ and $H$, denoted by $G \square H$, is the graph with vertex set $V(G) \times V(H):=\{(u_1,u_2)\mid u_1\in V(G), u_2\in V(H)\}$, where $(u_1,u_2)$ is adjacent to $(v_1,v_2)$ whenever $u_1 = v_1$ and $u_2v_2 \in E(H)$, or $u_2 = v_2$ and $u_1v_1\in E(G)$. We prove \cref{thm: compare} by finding a family of graphs with the desired properties. This family of graphs is defined as follows: \begin{definition} Graph $\widehat{X_0}$ is the path $a,b,c$, and graph $\widehat{X}$ is the graph with vertex set $\{a,b,c\}$ and edge set $\{ab\}$. For $i\in \mathds{Z}^+$, we let $$\widehat{X_i} = \widehat{X_0} \square \underbrace{\widehat{X} \square \widehat{X} \dots \square \widehat{X}}_{i\text{ times}}.$$ For $i\in \mathds{Z}^+ \cup \{0\}$, graph $X_i$ is $\widehat{X_i}$ with one modification: for every $0\leq j\leq i$, graph $X_i$ has an additional vertex $s_j$ that is adjacent to every vertex with $a$ as the $(j+1)$st coordinate. (See \cref{figure: G1} for the graph $X_1$.) \end{definition} \begin{figure}[t] \includegraphics[scale=.5]{figures/G1} \centering \caption{The graph $X_1$.} \label{figure: G1} \end{figure} \begin{lemma} \label{lemma: compare1} We have $\bdim(X_k) = \Theta(k)$ for all $k\in \mathds{Z}^+$. \end{lemma} \begin{proof} Let $k\in \mathds{Z}^+$ be given. For $i\in \mathds{Z}^+ \cup \{0\}$, we define $S_i = \set{s_j\mid 0\leq j\leq i}$. For $i\in \mathds{Z}^+ \cup \{0\}$, we define the function $f_i:V(X_i)\rightarrow \mathds{Z}^+ \cup \{0\}$ as follows: $f_i(s_0)=3$, $f_i(s_j)=2$ for every $1\leq j \leq i$, and $f_i(v)=0$ for all other vertices $v$. We claim that $f_i$ is a resolving broadcast of $X_i$ for all $i\in \mathds{Z}^+ \cup \{0\}$. We proceed to prove this claim by induction. In the base case $i=0$, we have that $X_0$ is the path $s_0,a,b,c$.
It is easy to see that function $f_0$ with $f_0(s_0) =3$ is a resolving broadcast of $X_0$. Assuming that $f_{k-1}$ is a resolving broadcast of graph $X_{k-1}$, we will show that $f_k$ is a resolving broadcast of graph $X_k$. Let $u_1, u_2\in V(X_{k-1})$ and $v_1, v_2 \in \{a,b,c\}$ such that $(u_1, v_1)$ and $(u_2, v_2)$ are two distinct vertices in $V(X_k)$. If $u_1\neq u_2$, then $(u_1, v_1)$ and $(u_2, v_2)$ are resolved by the vertex in $S_{k-1}$ that resolved $u_1$ and $u_2$ in $X_{k-1}$. Alternatively, if $u_1 = u_2$, then $v_1\neq v_2$, and $(u_1, v_1)$ and $(u_2, v_2)$ are resolved by $s_k$. Thus, function $f_k$ is a resolving broadcast of $X_k$. Now we can upper bound the broadcast dimension of $X_k$: $$\bdim(X_k)\leq c_{f_k}(X_k) = 3 + 2k \implies \bdim(X_k)= O(k).$$ By \cref{thm: lowerbound_adim_bdim}, we have $\bdim(X_k) = \Omega(k)$. Thus, we have $\bdim(X_k) = \Theta(k)$. \end{proof} \begin{lemma} \label{lemma: compare2} We have $\adim(X_k) = 2^{\Omega(k)}$ for all $k\in \mathds{Z}^+$. \end{lemma} \begin{proof} Let $k\in \mathds{Z}^+$ be given. For $i\in \mathds{Z}^+ \cup \{0\}$, we define $S_i = \set{s_j\mid 0\leq j\leq i}$. We claim that the following statement is true for all $i\in \mathds{Z}^+ \cup \{0\}$: for any adjacency resolving set $A_i$ of $X_i$, we have that $|(V(X_i)-S_i)\cap A_i| \geq 2^i$. We proceed to prove this claim by induction. In the base case $i=0$, for any adjacency resolving set $A_0$ of $X_0 = P_4$, we have by \cref{thm: paths_adj} $$|(V(X_0)-\{s_0\})\cap A_0| \geq \floor{\frac{2(4)+2}{5}}-1 = 1.$$ Now we assume that $|(V(X_{k-1})-S_{k-1})\cap A_{k-1}| \geq 2^{k-1}$ for any adjacency resolving set $A_{k-1}$ of $X_{k-1}$. Let $H$ be $X_{k-1}[V(X_{k-1})-S_{k-1}]$, the subgraph induced in $X_{k-1}$ by $V(X_{k-1})-S_{k-1}$. The induced subgraph $X_k[V(X_{k})-S_{k}]$ contains three copies of $H$ as subgraphs.
Let $H_1$, $H_2$, and $H_3$ be the copies of $H$ in $X_k[V(X_{k})-S_{k}]$ that are induced by the sets of vertices $\{(v,a)\mid v\in V(H)\}$, $\{(v,b)\mid v\in V(H)\}$, and $\{(v,c)\mid v\in V(H)\}$, respectively. Let $v\in V(H)$. Vertex $(v,c) \in V(H_3)$ is only adjacent to vertices in $V(X_k) - V(H_3)$ that are in $S_{k-1}$. In particular, the vertices $(v,c) \in V(H_3)$ and $u\in S_{k-1}$ are adjacent in $X_k$ if and only if $v$ and $u$ are adjacent in $X_{k-1}$. Thus, we have $\left|V(H_3)\cap A_{k}\right| \geq 2^{k-1}$ for any adjacency resolving set $A_k$ of $X_k$ by the inductive hypothesis. If $\left|V(H_1)\cap A_{k}\right| = 0$, then we must have $\left|V(H_2)\cap A_{k}\right| \geq 2^{k-1}$, in order to distinguish all of the vertices in $H_2$. If instead $\left|V(H_1)\cap A_{k}\right| = x$ for some positive integer $x$, then we must have $\left|V(H_2)\cap A_{k}\right| \geq 2^{k-1}-x$, since every vertex in $V(H_1)\cap A_k$ reaches at most one vertex in $H_2$. Thus, any adjacency resolving set $A_k$ of $X_k$ must have at least $2^{k-1}$ vertices in $V(H_1)\cup V(H_2)$. We have $$\left|(V(X_{k})-S_{k})\cap A_{k}\right| = |V(H_3) \cap A_k| + |(V(H_1)\cup V(H_2))\cap A_k| \geq 2^k$$ for any adjacency resolving set $A_k$ of $X_k$, which completes the induction. Thus, we have $|A_k| \geq 2^k$ for any adjacency resolving set $A_k$ of $X_k$, so $\adim(X_k) = 2^{\Omega(k)}$. \end{proof} Combining \cref{lemma: compare1} and \cref{lemma: compare2}, we have proven \cref{thm: compare}. We note that our construction of graph $X_k$ has broadcast dimension that is asymptotically optimal in both its order and its adjacency dimension: \begin{remark} There does not exist a family of graphs $\set{G_k}_{k\in \mathds{Z}^+}$ with $\bdim(G_k) = \Theta(k)$ and $\adim(G_k) = 2^{\omega(k)}$ for every $k\in \mathds{Z}^+$ because $\bdim(G) = \Omega(\log n)$ for all graphs $G$ of order $n$ by \cref{thm: lowerbound_adim_bdim}. 
\end{remark} Our result in \cref{thm: compare} directly implies \cref{thm: asymptotica_lower_bound2} and resolves \cref{Question: 1} affirmatively. Furthermore, we can also answer \cref{Question: 1} for acyclic graphs: \begin{remark} By \cref{corollary: adim_of_acyclic2}, there does not exist a family of acyclic graphs $\set{G_k}_{k\in \mathds{Z}^+}$ with $\bdim(G_k) = \Theta(k)$ and $\adim(G_k) = 2^{\Omega(k)}$ for every $k\in \mathds{Z}^+$. \end{remark} \section{Edge Deletion} \label{Section: Edge_Deletion} Throughout this section, we let $v$ and $e$, respectively, denote a vertex and an edge of a connected graph $G$ such that $G-v$ and $G-e$ are also connected graphs. Geneson and Yi \cite{geneson2020broadcast} constructed families of graphs that demonstrated that both $\frac{\bdim(G)}{\bdim(G-v)}$ and $\bdim(G-v) - \bdim(G)$ can be arbitrarily large. In this section, we prove analogues of their results for the effect of edge deletion on the broadcast dimension of a graph. We prove \cref{thm: edge_deletion1} and \cref{thm: edge_deletion2}, which state that both $\bdim(G) - \bdim(G-e)$ and $\bdim(G-e) - \bdim(G)-d_{G-e}(u,v)$ can be arbitrarily large for $e=uv\in E(G)$. We do so by finding families of graphs that demonstrate these results. We also show that the ratio $\frac{\bdim(G-e)}{\bdim(G)}$ is bounded from above by 3, proving \cref{thm: last}. In the following theorem, we resolve \cref{Question: 2} affirmatively by constructing a family of graphs that uses ideas from a graph constructed by Eroh et al. in \cite{eroh2015effect}, which they used to show that $\dim(G) - \dim(G-e)$ can be arbitrarily large. \begin{reptheorem}{thm: edge_deletion1} The value $\bdim(G) - \bdim(G-e)$ can be arbitrarily large. 
\end{reptheorem} \begin{figure}[t] \centering \begin{subfigure}{.5\textwidth} \centering \includegraphics[scale=.5]{figures/DeleteEdge1__} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[scale=.5]{figures/ForceFour} \end{subfigure} \begin{minipage}{.5\textwidth} \centering \vspace*{.5cm} $G_k$ \end{minipage}% \begin{minipage}{.5\textwidth} \centering \vspace*{.5cm} $T_i$ \end{minipage} \caption{A graph $G_k$ such that $\bdim(G_k) - \bdim(G_k-e) = \Omega(k)$. For every $1\leq i\leq k$, vertex $z_i$ is the root of a copy of tree $T_i$, shown on the right, so $|L_i| = 22$.} \label{figure: DeleteEdge1} \end{figure} \begin{proof} Let $k\in \mathds{Z}^+$, and let $G_k$ be the graph in \cref{figure: DeleteEdge1} with $e = AB$. For each $1\leq i \leq k$, let \textit{layer} $L_i$ be the set of vertices indicated in \cref{figure: DeleteEdge1}. We define function $g:V(G_k-e)\rightarrow \mathds{Z}^+ \cup \{0\}$ as follows: $$ g(v)= \begin{cases} 3 &\text{ if } v=A,\\ 4 &\text{ if } v=z_i\text{ for }1\leq i \leq k,\\ 1 &\text{ if } v \text{ is a vertex on tree }T_i\text{ shown with an open circle in \cref{figure: DeleteEdge1}},\\ 0 &\text{ otherwise. } \end{cases} $$ Because $g$ is a resolving broadcast of graph $G_k-e$, we can upper bound the broadcast dimension of graph $G_k-e$: we have $\bdim(G_k-e) \leq c_g(G_k-e)=3 + 9k.$ Let $f:V(G_k)\rightarrow \mathds{Z}^+ \cup \{0\}$ be a resolving broadcast of the graph $G_k$. For every pair of distinct vertices $u_1,u_2 \in V(G_k)$ with $d(u_1, A) = d(u_2,A)$ and $d(u_1, B) = d(u_2,B)$, at least one of $u_1$ or $u_2$ must be reached by a vertex in $\supp_{G_k}(f)$ that is on the same layer since otherwise we would have $b_f(u_1) = b_f(u_2)$. Thus, at most $$\max_{u,v\in V(G_k)}(d(u,A)+1)(d(v,B)+1) + 1 = O(1)$$ layers of graph $G_k$ can have a vertex that is not reached by any vertex on the same layer. The following properties must hold for the remaining $k - O(1)$ layers $L_i$. 
\begin{enumerate} \item Every vertex $v\in L_i$ is reached by a vertex in $L_i\cap \supp_{G_k}(f)$. \item We have $\supp_{G_k}(f) \cap (L_i - V(T_i)) \neq \emptyset$, since otherwise $b_f(x_i) = b_f(y_i)$. \item Any distinct $u,v\in V(T_i)$ with $d(u,z_i) =d(v,z_i)$ must be resolved by a vertex in $L_i$ since $d(u,A) = d(v,A)$ and $d(u,B) = d(v,B)$. \end{enumerate} \begin{figure}[h] \centering \includegraphics[scale=.5]{figures/ForceFourLabeled} \caption{Tree $T_i$ labeled for casework reference.} \label{figure: ForceFourLabeled} \end{figure} Refer to \cref{figure: ForceFourLabeled} for the remainder of the proof. There are three pairs of twin vertices on tree $T_i$ (see dotted rectangular boxes). By \cref{obs: twin}, at least one of the vertices in each of these pairs must be in $\supp_{G_k}(f)$. Without loss of generality, let the three vertices that are denoted with an open circle be in $\supp_{G_k}(f)$. The total value assigned to each of the two groups of five vertices identified by dashed trapezoidal boxes must be at least 2 in order for the three vertices that are the same distance away from $z_i$ in each of those groups to be distinguished. Any assignment of a total value of 5 to $T_i$ subject to the above constraints leaves at least four unreached vertices: $v_1$, $v_2$, $v_{3a}$ or $v_{3b}$, and $v_{4a}$ or $v_{4b}$. These four vertices must be reached by assigning an additional total value of at least $4$ to the vertices on tree $T_i$ (in addition to the positive value assigned to some vertex in $L_i - V(T_i)$), or by assigning an additional total value of $c< 4$ to the vertices on tree $T_i$ and at least $5-c$ to a vertex $v\in L_i- V(T_i)$. In either case, a total value of at least 10 must be assigned to the vertices on such a layer $L_i$. Because a total value of at least 10 is assigned to at least $k - O(1)$ layers of $G_k$ by any resolving broadcast $f$ of $G_k$, we have $\bdim(G_k) - \bdim(G_k-e) \geq 10k - 9k - O(1)= \Omega(k)$. 
\end{proof} We will prove \cref{thm: edge_deletion2} by constructing a family of graphs that shows that $\bdim(G-e) - \bdim(G)$ can be arbitrarily larger than $d_{G-e}(u,v)$, thus showing that the bound proposed in \cref{Question: 3} can fail. Since we use both spider graphs and the graph $F_k$ (see \cref{def: F_k}) in the graph construction, we begin with two lemmas: a lemma about graphs containing spiders as subgraphs and a lemma about the graph $F_k$. As we will be working with a specific family of spider graphs in the proof of \cref{thm: edge_deletion2}, we introduce our notation for spider graphs: \begin{definition} A tree is called a \textit{spider} if every vertex, except for one vertex known as the \textit{center vertex}, has degree at most two. A \textit{leg} of a spider graph is a path connected to the center vertex. We denote by $SP\paren{\ell_1^{(x_1)}, \dots, \ell_m^{(x_m)}}$ a spider of order $n$ with $x_i$ legs of length $\ell_i$ for every $1\leq i\leq m$, where $\ell_1\leq \ell_2\leq \dots \leq \ell_m$ and $1+ \sum_{i=1}^{m}\ell_ix_i = n$. \end{definition} \begin{lemma} \label{lemma: spider} For any integer $k>1$, let $G$ be a graph that contains spider $SP\paren{3k^{\paren{6k}}}$ with center $c$ as a subgraph such that $c$ is the only vertex on the spider that is adjacent to any vertex of the graph that is not on the spider. If a resolving broadcast $f$ of $G$ is efficient, then there exists a vertex $z\in \supp_G(f)$ with $f(z) -d(c,z) \geq 3k-2$. \end{lemma} \begin{proof} For the sake of contradiction, consider an efficient resolving broadcast $f$ where there is not a $z\in \supp_G(f)$ with $f(z) -d(c,z) \geq 3k-2$. On each leg of the spider $SP\paren{3k^{\paren{6k}}}$, the three vertices farthest from $c$ are only reached by vertices on the same leg. Let $u,v\in V(G)$ be two distinct vertices on the legs of the spider with $d(u,c) = d(v,c)$.
If neither $u$ nor $v$ are reached by a vertex that is on the same leg as $u$ or $v$, then $b_f(u) = b_f(v)$. Thus, every vertex on the legs of the spider, except at most $3k$ of them (one vertex of each distinct distance from $c$) must be reached by another vertex on the same leg. On at least $6k-3k=3k$ legs of the spider, all vertices need to be reached by a vertex on the same leg; let $L$ be the set of these legs. A vertex $v\in \supp_G(f)$ on a leg of the spider can reach at most $2f(v) + 1$ vertices on the same leg, and we have that $2f(v)+1 \leq 3f(v)$ with equality if and only if $f(v)=1$. Because all of the vertices on a leg $\ell\in L$ need to be reached by a vertex on $\ell$, the total value $v_{\ell}$ assigned to the $3k$ vertices on $\ell$ must be at least $k$. If $v_{\ell} = k$, then we must have the following assignment: the vertices on $\ell$ that are distance $3i-1$ from $c$ for $1\leq i \leq k$ are assigned a value of 1, and the rest of the vertices on $\ell$ are assigned 0. However, with this assignment, there are two vertices on $\ell$ that have the same broadcast representation: the vertex that is distance $3k-2$ from $c$ and the vertex that is distance $3k$ from $c$ are only reached by the vertex between them. Thus, $v_{\ell} \geq k+1$ for every $\ell\in L$. Consider function $f':V(G)\rightarrow \mathds{Z}^+ \cup \{0\}$ defined as follows. Let $f'(v) = f(v)$ for all vertices that are not on a leg in $L$, and let $f'(c)=3k-2$. The vertices on a leg in $L$ that are distance $3i-1$ from $c$ for $1\leq i \leq k$ are assigned a value of 1, and the rest of the vertices on a leg in $L$ are assigned 0. We note that $f'$ is a resolving broadcast of $G$. Moreover, we have $c_{f'}(G) < c_{f}(G)$ because $f'(c) - f(c) \leq 3k-2$, and for each of the $3k$ legs $\ell\in L$, we have $\sum_{v\in \ell} f(v) - \sum_{v\in \ell} f'(v) \geq 1$. This contradicts the efficiency of $f$. 
\end{proof} \begin{lemma} \label{lemma: F_k} For any resolving broadcast $f$ of $F_k$ (see \cref{def: F_k}) with $f(v_k) \geq k$, we have $$\sum_{v\in V(F_k)} f(v) \geq f(v_k) + 2k - O(1).$$ \end{lemma} \begin{proof} Let $f$ be a resolving broadcast of $F_k$ that minimizes $\sum_{v\in V(F_k)} f(v)$, under the constraint that $f(v_k) \geq k$. For $u\in V(F_k)$, we define $p(u) := \argmin_{i\in [0,k]}d\paren{u,v_i}$. \noindent\textbf{Case 1.} $f(w)- d\paren{w,v_{p(w)}} \leq \ceil{\frac{k}{2}}$ for all $w \in V(F_k) - \{v_k\}$.\\ Define $f': V(F_k)\rightarrow \mathds{Z}^+ \cup \{0\}$ such that $f'(v_k) = f(v_k)$ and $f'(v) = 0$ for all other $v\in V(F_k)$. The number of unique broadcast representations of the vertices of $F_k$ with respect to function $f'$, denoted $B_{F_k}(f')$, is $k+1$. Updating $f'(w)\leftarrow f(w)$ for any $w \in \supp_{F_k}(f) - \{v_k\}$ introduces at most $f(w)$ new unique broadcast representations to the vertices $u$ with $p(u) < p(w)$. Thus, every update $f'(w)\leftarrow f(w)$ for some $w \in \supp_{F_k}(f) - \{v_k\}$ increases $B_{F_k}(f')$ by at most $$O\paren{f(w)}+ \sum _{i=1}^{f(w)} i \leq f(w)\paren{\frac{k}{4}+ O(1)}.$$ Since we must have $\frac{k^2}{2} + O(k)$ unique broadcast representations, the lemma holds in this case. \noindent\textbf{Case 2.} There is a vertex $w \neq v_k$ with $f(w) - d\paren{w,v_{p(w)}}> \ceil{\frac{k}{2}}$.\\ Let $t$ be the vertex on $F_k$ farthest from vertex $v_0$. We must have $d(w,t) -f(w)=O(1)$, since otherwise there would be $\omega(1)$ vertices $u$ with $p(u) \geq p(w)$ not reached by $w$. These vertices would be most efficiently distinguished by increasing $f(w)$, contradicting the efficiency of $f$. The vertices $u$ with $p(u) < p(w)$ must be distinguished with an additional total cost of at least $p(w) - O(1)$. Thus, \begin{align*} \sum_{v\in V(F_k)} f(v) \geq f(v_k) + d(w,t) + p(w) - O(1) \geq f(v_k)+2k - O(1), \end{align*} as desired.
\end{proof} With \cref{lemma: spider} and \cref{lemma: F_k}, we can prove \cref{thm: edge_deletion2}. \begin{reptheorem}{thm: edge_deletion2} The value $\bdim(G-e) - \bdim(G)$ can be arbitrarily larger than $d_{G-e}(u,v)$, where $e=uv\in E(G)$. \end{reptheorem} \begin{proof} For integer $k\geq 2$, let $H_k$ be the graph in \cref{figure: DeleteEdge2_}, and let $e=v_{i}v_{i+1}$, where $i = \floor{\frac{3k-2}{2}}$. Let $S_1$ and $S_2$ be the spider $SP\paren{3k^{(6k)}}$ centered at $v_0$ and $v_{3k -2}$, respectively. For $u\in V(H_k)$, we define $p(u) := \argmin_{i\in [0,k]}d_{H_k}(u,v_i)$. We will show that for sufficiently large $k$, we have $$\bdim\paren{H_k-e} - \bdim(H_k) = d_{H_k-e}(v_{i},v_{i+1})+\Omega(k) = \frac{k}{2} +\Omega(k).$$ \begin{figure}[b] \hspace*{.6cm}\includegraphics[scale=1]{figures/DeleteEdge2_} \caption{A graph $H_k$ such that $\bdim(H_k-e) - \bdim(H_k)$ can be arbitrarily larger than $d_{H_k-e}(v_i,v_{i+1})$, where $e=v_iv_{i+1}$ and $i = \floor{\frac{3k-2}{2}}$. The vertices $v_0, \dots, v_{3k-2}$ are on a path. Additionally, each $v_j$ with $1\leq j \leq i-1$ is connected to a path $P_j$; each $v_j$ with $i+2\leq j \leq 3k-3$ is connected to a path $P_{3k-2-j}$, and vertices $v_i$ and $v_{i+1}$ are on a cycle of length $\floor{\frac{k}{2}}$. Finally, $v_0$ and $v_{3k-2}$ are both centers of a copy of spider $SP\paren{3k^{(6k)}}$.} \label{figure: DeleteEdge2_} \end{figure} Let $B = \bdim\paren{SP\paren{3k^{\paren{6k}}}}$. Let $g:V(H_k)\rightarrow \mathds{Z}^+ \cup \{0\}$ be the function that applies an efficient resolving broadcast of $SP\paren{3k^{(6k)}}$ to $S_1$ and $S_2$ on graph $H_k$. By \cref{lemma: spider}, there are vertices $z_1'$ on $S_1$ and $z_2'$ on $S_2$ with $g(z_1') - d_{H_k}(v_0, z_1')\geq 3k-2$ and $g(z_2') - d_{H_k}(v_{3k-2}, z_2')\geq 3k-2$. 
Function $g$ is a resolving broadcast of $H_k$ since every pair of distinct vertices in $V(H_k)$ that are on the same spider is clearly resolved, and every other pair of vertices is resolved by either $z_1'$ or $z_2'$. Thus, $\bdim(H_k)\leq 2B$. Let $f$ be an efficient resolving broadcast of the graph $H_k-e$. By \cref{lemma: spider}, we must have vertices $z_1,z_2\in \supp_G(f)$ with $f(z_1) - d_{H_k-e}(v_0, z_1)\geq 3k-2$ and $f(z_2) - d_{H_k-e}(v_{3k-2}, z_2)\geq 3k-2$. \noindent\textbf{Case 1.} There does not exist a vertex $z$ with $$f(z) - d_{H_k-e}(v_0, z)\geq 3k-2 \quad\text{and}\quad f(z) - d_{H_k-e}(v_{3k-2}, z)\geq 3k-2.$$ In this case, $z_1$ and $z_2$ are distinct vertices. We define $c_1 := f(z_1) - 3k +2$ and $d_1 :=p(z_1)$, and we similarly define $c_2 := f(z_2) - 3k +2$ and $d_2:= 3k-2-p(z_2)$. Note that $c_1 \geq d_1$ and $c_2 \geq d_2$ by \cref{lemma: spider}. If $d_1>0$, let $T_1$ be the $F_{d_1}$ subgraph induced by the vertices $u$ with $p(u)\leq d_1$ that are not on a leg of spider $S_1$. Otherwise, let $T_1$ be the graph that consists of the singular vertex $z_1$. Let $y_1 = \argmax_{V(S_1) - \{v_0\}}(f(y) - d(y, v_0))$. By \cref{lemma: F_k}, we must have \begin{equation} \label{eqn: F_k} f(y_1) - d(y_1,v_0)+ \sum_{v\in V(T_1)} f(v) \geq f(z_1) + 2d_1 + \max\set{f(y_1) - d(y_1,v_0) -2d_1, 0} - O(1) \end{equation} in order to distinguish the vertices of $T_1$. On spider $S_1$, assigning vertex $y_1$ the value $f(y_1)$ only distinguishes at most $d(y_1, v_0) + f(y_1)$ vertices on the same leg of $S_1$. In an efficient resolving broadcast of $S_1$, those $d(y_1, v_0) + f(y_1)$ vertices would have instead been distinguished with a total cost of at most $\ceil{\frac{d(y_1, v_0) + f(y_1)}{3}}$ by assigning a value of 1 to every third vertex (see proof of \cref{lemma: spider}). 
Thus, we can obtain the following bound on the total value assigned to the vertices in set $U_1 = V(S_1) - \{v_0, y_1, z_1\}$: \begin{equation} \label{eqn: spider} \sum_{v\in U_1} f(v) \geq B - g(z_1') - \frac{d(y_1, v_0) + f(y_1) }{3} - O(1). \end{equation} Using \eqref{eqn: F_k} and \eqref{eqn: spider}, we lower bound the total value assigned to all of the vertices $u$ with $p(u) \leq d_1$: \begin{align*} & f(y_1) + \sum_{v\in V(T_1)} f(v) + \sum_{v\in U_1} f(v) \\ \geq \text{ } &{f(z_1)} +2d_1+\max\set{ f(y_1)- d(y_1,v_0)- 2d_1, 0}+d(y_1,v_0) + {B -3k - \frac{d(y_1, v_0) + f(y_1) }{3}} - O(1) \\ \geq \text{ } & c_1+2d_1+d(y_1,v_0) + {B - \frac{d(y_1, v_0) + \paren{d(y_1,v_0)+ 2d_1 }}{3}} - O(1) \\ \geq \text{ }& {c_1} +\frac{ 4d_1 }{3} + B - O(1). \end{align*} In this case, the sum of the values assigned to vertices $u$ with $p(u) > d_1$ must be at least $B$ in order to distinguish the vertices of spider $S_2$. Thus, we have $$\bdim(H_k-e) - \bdim(H_k) \geq \paren{\sum_{v\in V(H_k - e)} f(v)} - 2B \geq c_1 + \frac{4}{3}d_1 - O(1).$$ If $d_1\geq \frac{k}{4}$, then we have $$\bdim(H_k-e) - \bdim(H_k) \geq c_1 + \frac{4d_1}{3} - O(1)\geq \frac{7d_1}{3} - O(1) \geq \frac{7k}{12} - O(1)= \frac{k}{2} +\Omega(k),$$ as desired. By symmetry, if $d_2 \geq \frac{k}{4}$, then we are also done. Now, we consider $d_1, d_2 < \frac{k}{4}$. The $\frac{k^2}{2} \pm O(k)$ vertices in region $A_2$ (see \cref{figure: DeleteEdge2_geo}) are all reached by $z_2$, and all but $O(k)$ of them must be reached by another vertex that is not in $B_2$ in order to be distinguished. Additionally, at least $\frac{k^2}{2} - O(k)$ of the vertices in $A_1$ must be reached by vertices not in $B_1$ in order to be distinguished. Vertex $z_1$ reaches at most $(c_1+d_1 + O(1))k $ of the vertices in $A_2$, and the total value assigned to the vertices in $B_1$ is at least $B + c_1 + \frac{4}{3}d_1 - O(1)$. 
Similarly, vertex $z_2$ reaches at most $(c_2+d_2 + O(1))k$ of the vertices in $A_1$, and the total value assigned to the vertices in $B_2$ is at least $B+ c_2 + \frac{4}{3}d_2 - O(1)$. Any vertex $v$ that is not in $B_1$ or $B_2$ has $g(v) = 0$ and reaches at most $k \cdot f(v) + O(1)$ of the vertices in $A_1 \cup A_2$. Thus, in this case we have $$\bdim(H_k-e) - \bdim(H_k)\geq \frac{1}{k}\paren{\abs{A_1 \cup A_2} - O(k)}= k - O(1) = \frac{k}{2} +\Omega(k).$$ \begin{figure}[h] \centering \includegraphics[scale=1]{figures/DeleteEdge2_geo_corrected} \caption{A geometric interpretation of graph $H_k - e$. The spiders centered at $v_0$ and $v_{3k-2}$ (not pictured) are also in $B_1$ and $B_2$, respectively.} \label{figure: DeleteEdge2_geo} \end{figure} \noindent\textbf{Case 2.} There exists a vertex $z$ with $$f(z) - d_{H_k-e}(v_0, z)\geq 3k-2 \quad \text{and}\quad f(z) - d_{H_k-e}(v_{3k-2}, z)\geq 3k-2.$$ The assumption in this case directly implies that $$f(z) \geq \frac{d_{H_k-e}(v_0,v_{3k-2})}{2} + 3k - O(1) = 4.75k - O(1).$$ Without loss of generality, we assume $p(z) \geq i$. Let $T_1$ be the $F_{i-1}$ subgraph induced by the vertices $u$ with $p(u)\leq i-1$ that are not on a leg of spider $S_1$. By the same reasoning as in the first case, in order for the vertices on $T_1$ to be distinguished, an additional total value of $ \frac{4}{3} \cdot d(v_0,v_{i-1}) - O(1)$ must be assigned to the vertices $u$ with $p(u) < i$. Thus, we have \begin{align*} \bdim(H_k-e) - \bdim(H_k)&\geq f(z)-g(z_1')-g(z_2') + \frac{4}{3}\cdot \frac{3k}{2} - O(1)\\ &\geq 4.75k - 6k + 2k - O(1)\\ &= \frac{k}{2} +\Omega(k), \end{align*} as desired. \end{proof} While the value $\bdim(G-e) - \bdim(G)$ can be arbitrarily large, the ratio $\frac{\bdim(G-e)}{\bdim(G)}$ is bounded. We prove this below, using some ideas from the proof that $\dim(G-e)\leq \dim(G)+2$ in \cite{eroh2015effect}. Recall that a \textit{geodesic} is a shortest path between two points. 
\begin{reptheorem}{thm: last} For all graphs $G$ and any edge $e\in E(G)$, we have $\frac{\bdim(G-e)}{\bdim(G)} \leq 3$. \end{reptheorem} \begin{proof} Let $f$ be an efficient resolving broadcast of $G$, and let vertices $u$ and $v$ be the endpoints of edge $e$. Let $b = \max_{v\in V(G)} f(v)$. We will show that function $f'$, which is identical to $f$, except with $f'(u) = f'(v) = b$, is a resolving broadcast of $G-e$. Then, we will be done since $$3\bdim(G) = 3\sum_{w\in V(G)} f(w) \geq \sum_{w\in V(G-e)}f'(w)\geq \bdim(G-e).$$ Let $z\in \supp_G(f)$, and let $x$ and $y$ be two vertices with $d_{f(z)}(x,z) \neq d_{f(z)}(y,z)$ in graph $G$. Suppose that $x$ and $y$ are no longer resolved by $z$ after the edge $e$ is deleted; that is, $d_{f(z)}(x,z) = d_{f(z)}(y,z)$ in graph $G-e$. Then, we must have $d_G(u,z) \neq d_G(v,z)$ since removing edge $e=uv$ increases the distance from $z$ to at least one vertex in the graph. Without loss of generality, we assume that $d_G(v,z) < d_G(u,z)$. We consider two cases and show that $u$ resolves $x$ and $y$ in graph $G-e$ in both cases; that is, we show that we have $d_{f'(u)}(x,u) \neq d_{f'(u)}(y,u)$ in graph $G-e$ in both cases. \noindent \textbf{Case 1.} Removing edge $e$ only increases the distance from $z$ to one of $x$ and $y$ (say $x$). \\ Edge $e$ must lie on every $x-z$ geodesic in $G$. Since $d_G(v,z) < d_G(u,z)$, we have an $x-u$ geodesic in $G$ that does not go through edge $e$. Moreover, we have \begin{align*} f'(u) &\geq f(z)\geq \min\left\{d_G(x,z),d_G(y,z)\right\} = d_G(x,z)\geq d_G(x,u) = d_{G-e}(x,u). \end{align*} The above inequality shows that $u$ reaches $x$ with respect to $f'$ in graph $G-e$. Thus, it remains to be shown that $d_{G-e}(x,u) \neq d_{G-e}(y,u)$ in this case. 
\noindent \textit{Subcase 1.} $f(z)\geq d_G(y,z) = d_{G-e}(y,z)$.\\ In this subcase, $d_{f(z)}(x,z) = d_{f(z)}(y,z)$ in graph $G-e$ implies that $d_{G-e}(x,z) = d_{G-e}(y,z)$, and so \begin{align*} d_{G-e}(x,u) &= d_G(x,u) = d_G(x,z) - d_G(z,u) < d_{G-e}(x,z) - d_G(z,u)\\ &= d_{G-e}(y,z) - d_G(z,u) = d_{G}(y,z) - d_G(z,u)\leq d_G(y,u) \\ &\leq d_{G-e}(y,u). \end{align*} \noindent \textit{Subcase 2.} $f(z) < d_G(y,z) = d_{G-e}(y,z)$.\\ If $d_{G-e}(x,u) = d_{G-e}(y,u)$, then we have \begin{align*} d_G(y,z) &\leq d_{G}(y,u) + d_G(u,z) \leq d_{G-e}(y,u) + d_G(u,z)\\ &= d_{G-e}(x,u) + d_G(u,z) = d_{G}(x,u) + d_G(u,z)\\ &= d_G(x,z) \leq f(z), \end{align*} a contradiction. \noindent \textbf{Case 2.} Removing edge $e$ increases the distance from $z$ to both $x$ and $y$.\\ Edge $e$ must lie on every $x-z$ geodesic and every $y-z$ geodesic in graph $G$. Since $d_G(v,z) < d_G(u,z)$, we have $d_G(u,x) < d_G(v,x)$ and $d_G(u,y) < d_G(v,y)$. Because $z$ resolves $x$ and $y$ in $G$, at least one of $x$ and $y$ (say $x$) is reached by $z$ in $G$. Then, $$f'(u) \geq f(z) \geq d_{G}(x, z) \geq d_G(x,u) = d_{G-e}(x,u)$$ and $$d_{G-e}(x,u) = d_G(x,u) \neq d_G(y,u)=d_{G-e}(y,u),$$ so vertex $u$ resolves vertices $x$ and $y$ in graph $G-e$. \end{proof} \section{Future Work} \label{Section: Future_Work} In \cref{corollary: adim_of_acyclic}, we showed that $\adim(G) = \Omega(\sqrt{n})$ for all acyclic graphs $G$ of order $n$. To our knowledge, the best such lower bound before our work is the $\Omega(\log{n})$ bound on the adjacency dimension of general graphs of order $n$ given by Geneson and Yi in \cref{thm: lowerbound_adim_bdim}, which they showed to be asymptotically optimal using a family of graphs constructed by Zubrilina in \cite{zubrilina2018edge}. We ask if our lower bound on the adjacency dimension of acyclic graphs is asymptotically optimal. 
\begin{question} Is there a family of acyclic graphs $\set{G_k}_{k\in \mathds{Z}^+}$ with $\adim(G_k) = \Theta\paren{\sqrt{|V(G_k)|}}$ for every $k\in \mathds{Z}^+$? \end{question} The bounds that we derived in \cref{thm: acyclic_lower_bound} and \cref{thm: last} are sharp up to a constant factor. Sharper bounds may be obtained by examining the steps of the proofs more carefully. Additionally, it would be interesting to determine the exact broadcast dimension of some special graphs for which the broadcast dimension is currently only known up to a constant factor. \begin{question} What is the broadcast dimension of the grid graph $P_m \square P_n$? \end{question} \begin{question} What is the broadcast dimension of the graph $F_k$ from \cref{def: F_k}? \end{question} We note that the broadcast dimension of the grid graph $P_m \square P_n$ is at most $2m+2n$: for paths $P_m : x_1, x_2,\dots x_m$ and $P_n : y_1, y_2,\dots y_n$, the function $f$ that assigns $m+n$ to $\paren{x_{1}, y_1}$ and $\paren{x_{1}, y_{n}}$ and assigns 0 to the rest of the vertices is a resolving broadcast of $P_m \square P_n$. Additionally, the broadcast dimension of $F_k$ is at most $3k$: function $f$ with $f(v_0) = 2k$, $f(v_k) = k$, and $f(w)=0$ for all $w\in V(F_k) - \{v_0, v_k\}$ is a resolving broadcast of $F_k$. \cref{lemma: F_k} makes partial progress towards finding the broadcast dimension of $F_k$. In \cref{Section: Edge_Deletion}, we show that both $\bdim(G-e)-\bdim(G)$ and $\bdim(G) - \bdim(G-e)$ can be arbitrarily large and that $\frac{\bdim(G-e)}{\bdim(G)} \leq 3$ for all graphs $G$ and any edge $e\in E(G)$. These results naturally lead us to ask the following question: \begin{question} Is $\frac{\bdim(G)}{\bdim(G-e)}$ bounded from above for all graphs $G$ and any edge $e\in E(G)$? \end{question} On a similar note, Geneson and Yi showed in \cite{geneson2020broadcast} that both $\frac{\bdim(G)}{\bdim(G-v)}$ and $\bdim(G-v) - \bdim(G)$ can be arbitrarily large. 
The corresponding problem for $\frac{\bdim(G-v)}{\bdim(G)}$ remains open. \begin{question} Is $\frac{\bdim(G-v)}{\bdim(G)}$ bounded from above for all graphs $G$ and any vertex $v\in V(G)$? \end{question} To better understand how metric dimension and broadcast dimension compare to each other, it would be interesting to derive more properties of broadcast dimension that are analogues to known properties of metric dimension. For example: \begin{question} For a graph $G$ and $n\in \mathds{Z}^+$, bound $\bdim(G\square P_n)$ and $\bdim(G\square C_n)$ in terms of some function of $G$ and $n$. \end{question} \begin{question} For graphs $G$ and $H$, bound $\bdim(G\square H)$ in terms of some function of $G$ and $H$. \end{question} \begin{question} Is determining the broadcast dimension of a graph an NP-hard problem? \end{question} It is NP-hard to determine the metric dimension and adjacency dimension of a general graph (see \cite{garey1979computers}, \cite{fernau2018adjacency}, respectively). Determining the domination number of a general graph is also an NP-hard problem \cite{garey1979computers}. Heggernes and Lokshtanov \cite{heggernes2006optimal} found a polynomial-time algorithm for computing the broadcast domination number of arbitrary graphs, and both the domination number and broadcast domination number of a tree can be determined in linear time (see \cite{cockayne1975linear},\cite{dabney2009linear}, respectively). We ask the corresponding question for the broadcast dimension of trees. \begin{question} Is there a polynomial-time algorithm for determining the value of $\bdim(T)$ for every tree $T$? \end{question} We refer to \cite{geneson2020broadcast} for more open questions about broadcast dimension. Finally, we note that it would also be interesting to study the broadcast dimension of directed graphs and graphs with weighted edges. 
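The corner construction noted above for the grid graph (assigning $m+n$ to $\paren{x_1, y_1}$ and $\paren{x_1, y_n}$) can be verified by brute force on small grids. Below is a sketch in Python; the helper names are illustrative, and a vertex $u$ is represented by the vector of $\min\{d(u,w),\, f(w)+1\}$ over the support of $f$, following the reaching convention used in the proofs above:

```python
from itertools import product

def grid_dist(u, v):
    # Graph distance in the grid P_m x P_n is the Manhattan distance
    return abs(u[0] - v[0]) + abs(u[1] - v[1])

def is_resolving_broadcast(m, n, f):
    """Brute-force check that f (dict: vertex -> value) resolves P_m x P_n.
    A vertex u is represented by the vector of min(d(u, w), f(w) + 1) over
    the support of f; f resolves the grid if the vectors are distinct."""
    support = sorted(w for w, val in f.items() if val > 0)
    reps = set()
    for u in product(range(m), range(n)):
        rep = tuple(min(grid_dist(u, w), f[w] + 1) for w in support)
        if rep in reps:
            return False  # two vertices share a broadcast representation
        reps.add(rep)
    return True

m, n = 5, 7
f = {(0, 0): m + n, (0, n - 1): m + n}  # the cost-2(m+n) corner assignment
print(is_resolving_broadcast(m, n, f))  # True
```

For the $5\times 7$ grid, for instance, the check confirms that the two corner beacons of value $m+n$ resolve every vertex, while a single beacon does not.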
\section{Acknowledgments} This research was conducted at the 2020 University of Minnesota Duluth Research Experience for Undergraduates (REU) program, which is supported by NSF-DMS grant 1949884 and NSA grant H98230-20-1-0009. I would like to thank Joe Gallian for organizing the program, suggesting the problem, and supervising the research. I would also like to thank Amanda Burcroff, Brice Huang, and Joe Gallian for reading this paper and giving valuable suggestions and Amanda Burcroff for useful discussions over the course of the program.
\section{Classifier Single-epoch performance}\label{sec:appendix} \begin{figure*} \begin{center} \subfloat{ \includegraphics[width=0.45\textwidth] {ml_performance_surface_brightness.pdf} } \subfloat{ \includegraphics[width=0.45\textwidth] {ml_performance_medsky.pdf} } \\ \subfloat{ \includegraphics[width=0.45\textwidth] {ml_performance_lmt_mag.pdf} } \subfloat{ \includegraphics[width=0.45\textwidth] {ml_performance_phi_IQ.pdf} } \end{center} \caption{This figure is an extension of Fig.~\ref{fig:efficiency_ML_performance}. We compare the marginalized single parameter efficiency of the trained classifier to that of the original distributions in Fig.~\ref{fig:efficiency_single_parameter}. The behavior of the ISP is reproduced by feeding the classifier only a few thousand points. } \label{fig:ml_performace_extended} \end{figure*} In Fig.~\ref{fig:efficiency_ML_performance}, we compared the marginalized single parameter efficiency in the single-epoch transient brightness predicted by the classifier to that of the ISP. Here, we show the comparison for the remaining parameters. While the final classifier is trained on the full dataset, to make the comparison we train it on $90\%$ of the total fake point source simulations we performed, as mentioned in Sec.~\ref{sec:point_source_transients}. From the remaining $10\%$ sample, we make a random selection of points (progressively increasing in size), feed them to the classifier and bin the results in the same manner as in Fig.~\ref{fig:efficiency_single_parameter} to compare the marginalized efficiency plots. These are shown in Fig.~\ref{fig:ml_performace_extended} and Fig.~\ref{fig:efficiency_ML_performance}, the latter presented earlier. We see that the behavior converges to that of the ISP within a few thousand points. 
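The cross-validation loop described above can be sketched in a few lines. The injection table and ``found'' labels below are synthetic stand-ins (a toy detection rule) for the real ISP output, and the 11-nearest-neighbor majority vote mirrors the \texttt{scikit-learn} classifier used in the main text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic injection table: columns (m, S_gal, F_sky, Phi_IQ, m_lim).
# The "found" label is a toy rule standing in for the real ISP output.
n = 20000
X = np.column_stack([
    rng.uniform(15, 22, n),    # transient magnitude m
    rng.uniform(18, 26, n),    # host surface brightness S_gal
    rng.uniform(19, 22, n),    # sky brightness F_sky
    rng.uniform(0.5, 2.0, n),  # seeing ratio Phi_IQ
    rng.uniform(19, 22, n),    # limiting magnitude m_lim
])
y = (X[:, 0] < X[:, 4]).astype(int)   # toy: found if brighter than m_lim

# 90%/10% train/test split, then an 11-nearest-neighbor majority vote,
# mirroring scikit-learn's KNeighborsClassifier(n_neighbors=11).
split = int(0.9 * n)
X_tr, y_tr = X[:split], y[:split]
X_te, y_te = X[split:], y[split:]

def knn_predict(x, k=11):
    d2 = ((X_tr - x) ** 2).sum(axis=1)       # squared Euclidean distances
    nearest = np.argpartition(d2, k)[:k]     # indices of the k nearest points
    return int(2 * y_tr[nearest].sum() > k)  # majority vote among neighbors

pred = np.array([knn_predict(x) for x in X_te])
miscl = float(np.mean(pred != y_te))
print(f"misclassification: {100 * miscl:.1f}%")
```

The misclassification here only reflects the sharpness of the toy detection rule; with the real ISP labels, the analogous figure of merit is the $\approx 6\%$ systematic quoted in the main text.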
\FloatBarrier \subsection{Rates} The computation of the rate posterior assumes the likelihood of observing $N$ candidate events is an inhomogeneous Poisson process \citep{loredo95, farr15}. Our \emph{search} will filter the SN~Ia population based on the model presented in Sec.~\ref{sec:rates} at the expense of some contamination from other transient types, potentially with similar lightcurve morphologies. If the mean count of these impurities is $\Lambda_0$, the likelihood function is: \begin{eqnarray} p\left(N \vert \Lambda_0, \Lambda_{\text{SNe}}\right) \propto \left(\Lambda_0 p_0 + \Lambda_{\text{SNe}} p_{\text{SNe}}\right)^N \nonumber \\ \times \exp{\left(- \Lambda_0 - \Lambda_{\text{SNe}}\right)}, \label{eq:discussion_likelihood} \end{eqnarray} where $p_{\text{SNe}}$ ($p_0$) is the \emph{a priori} weight that a transient is (isn't) a SN~Ia after the filtering process. With a suitable choice of prior, we can use Bayes' theorem to obtain the posterior. Considering the Jeffreys' prior: \begin{equation} p\left(\Lambda_0, \Lambda_{\text{SNe}}\right) = \frac{1}{\sqrt{\Lambda_0}} \frac{1}{\sqrt{\Lambda_{\text{SNe}}}}, \label{eq:discussion_prior} \end{equation} the posterior takes the form: \begin{eqnarray} p\left(\Lambda_0, \Lambda_{\text{SNe}} \vert N\right) &\propto & p\left(N \vert \Lambda_0, \Lambda_{\text{SNe}}\right) p\left(\Lambda_0, \Lambda_{\text{SNe}}\right) \nonumber \\ &\propto & \frac{\left(\Lambda_0 p_0 + \Lambda_{\text{SNe}} p_{\text{SNe}}\right)^N} {\sqrt{\Lambda_0\Lambda_{\text{SNe}}}} \nonumber \\ &&\times \exp{\left(- \Lambda_0 - \Lambda_{\text{SNe}}\right)}. 
\label{eq:discussion_posterior_full} \end{eqnarray} Integrating out the nuisance parameter, $\Lambda_0$, we have the marginalized posterior on $\Lambda_{\text{SNe}} = R\langle VT \rangle$, or equivalently on $R$: \begin{eqnarray} p\left(R\vert N\right) &=& \int_0^{\infty} p\left(\Lambda_0, \Lambda_{\text{SNe}} \vert N\right) \mathrm{d}\Lambda_0 \nonumber \\ &\propto & \frac{e^{-R\langle VT \rangle}} {\sqrt{R\langle VT \rangle}} \times \left[\left(R\langle VT \rangle p_{\text{SNe}}\right)^N + \right. \nonumber \\ &&\left. \frac{N}{2}p_0\left(R\langle VT \rangle p_{\text{SNe}}\right)^{N-1}\right], \label{eq:discussion_posterior} \end{eqnarray} where we expand Eq.~(\ref{eq:discussion_posterior_full}) and integrate, keeping terms up to linear order in $p_0$ since we expect that $p_0 \ll p_{\text{SNe}}$. \subsection{Approximate SN~Ia Count in iPTF} Type Ia supernova rates have been studied earlier in the literature \citep{Dilday_2008, subaru, 10.1093/mnras/stz258}. Deep field instruments have provided estimates of the Ia rate out to high redshift \citep{subaru}. The intermediate Palomar Transient Factory, being an all-sky survey, has a comparatively lower sensitivity to SNe~Ia at \replaced{$z_{\mathrm{median}} = 0.098$}{$z_{\text{median}}^{\text{Ia}} = 0.099$}, evaluated in Sec.~\ref{sec:rates}. The SDSS-II supernova survey has estimated the volumetric SN~Ia rate at $z\approx 0.1$ to be $R_{\mathrm{SNIa}}^{\mathrm{SDSS-II}}\sim 2.9^{+1.07}_{-0.75} \times 10^{-5} \mathrm{Mpc^{-3}yr^{-1}}$ \citep{Dilday_2008}. Using our estimate of the space-time sensitive volume from Eq.~(\ref{eq:rates_VT_value}), an estimate of the count of SNe~Ia in iPTF is $630 - 1160$. This is consistent with the $1035$ objects tagged ``SN~Ia'' during the survey time. \subsection{Future Work} While the number of transients tagged as ``SN~Ia'' by human scanners during iPTF survey time seems consistent with our ballpark above, the systematic uncertainty of such a classification remains unquantified. 
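For concreteness, the marginalized posterior of Eq.~(\ref{eq:discussion_posterior}) can be evaluated numerically on a grid, working in log space to avoid overflow at large $N$. In the sketch below the values of $\langle VT \rangle$, $N$, $p_0$ and $p_{\text{SNe}}$ are placeholders chosen for illustration only, not measurements from this work:

```python
import numpy as np

# Placeholder values for illustration only; <VT>, N, p_0 and p_SNe below
# are NOT the measured quantities of this work.
VT = 2.9e7            # space-time sensitive volume <VT>, Mpc^3 yr
N = 900               # filtered candidate count
p_sne, p_0 = 0.99, 0.01

R = np.linspace(1e-6, 1e-4, 20000)    # volumetric rate grid, Mpc^-3 yr^-1
lam = R * VT                          # Lambda_SNe = R <VT>

# Log of the marginalized posterior, keeping terms linear in p_0:
log_post = (-lam - 0.5 * np.log(lam)
            + np.logaddexp(N * np.log(lam * p_sne),
                           np.log(0.5 * N * p_0) + (N - 1) * np.log(lam * p_sne)))
post = np.exp(log_post - log_post.max())
post /= post.sum() * (R[1] - R[0])    # normalize on the grid

cdf = np.cumsum(post) * (R[1] - R[0])
median = R[np.searchsorted(cdf, 0.5)]
print(f"posterior median rate: {median:.2e} Mpc^-3 yr^-1")
```

With these placeholder inputs the posterior peaks near $N / \langle VT \rangle$, as expected for a nearly pure sample ($p_0 \ll p_{\text{SNe}}$).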
The quantities $p_0$, $p_{\text{SNe}}$ and $N$ in Eq.~(\ref{eq:discussion_posterior}) require a systematic search of the iPTF archival data to retrieve the candidate count and the systematic errors associated with such a classification. We defer this, and the computation of the SN~Ia volumetric rate, to a future work in the series. The methodology developed here facilitates the computation of space-time volume sensitivities of general transient types. Of particular interest are the fast transients in iPTF archival data as discussed in \cite{ho18}. Also, the observation of the ``kilonova'' resulting from the binary neutron star merger, GW170817 \citep{2017ApJ...848L..12A, dynamical_ejecta, Abbott_2017}, points to the association of such optical transients with binary neutron star mergers. There is no evidence of a detection of such a transient in the iPTF data, in which case upper limits on the rate could be placed from the non-detection. \subsection{Single Parameter Efficiencies} \begin{figure*}[htp] \begin{center} \subfloat{% \includegraphics[width=0.44\textwidth , trim=1cm 0cm 0cm 3.5cm] {efficiency_stamp_mag.pdf} } \subfloat{% \includegraphics[width=0.44\textwidth , trim=0cm 0cm 1cm 3.5cm] {efficiency_surface_brightness.pdf} } \\ \subfloat{% \includegraphics[width=0.44\textwidth , trim=1cm 0cm 0cm 1cm] {efficiency_airmass.pdf} } \subfloat{% \includegraphics[width=0.44\textwidth , trim=0cm 0cm 1cm 1cm] {efficiency_medsky.pdf} } \\ \subfloat{% \includegraphics[width=0.44\textwidth , trim=1cm 0cm 0cm 1cm] {efficiency_zp.pdf} } \subfloat{% \includegraphics[width=0.44\textwidth , trim=0cm 0cm 1cm 1cm] {efficiency_lmt_mag.pdf} } \\ \subfloat{% \includegraphics[width=0.44\textwidth , trim=1cm 1.5cm 0cm 1cm] {efficiency_phi_IQ.pdf} } \subfloat{% \includegraphics[width=0.44\textwidth , trim=0cm 1.5cm 1cm 1cm] {efficiency_moonillf.pdf} } \end{center} \caption{The single parameter efficiencies, defined in Eq.~\ref{eq:efficiency_single_parameter_definition}, are shown here. 
In each panel, the x-axis is the parameter of interest. The top two panels show intrinsic properties of the transients, while the remaining panels show parameters taken from the observing conditions. We also separate out the efficiencies based on the filter. While small deviations exist between the curves, the general trend is independent of the filter. Since many more images (almost three times as many for field 100019) were taken in the \emph{R} filter than in the \emph{g} filter during the iPTF survey, the range of observing conditions is larger for the \emph{R} filter. } \label{fig:efficiency_single_parameter} \end{figure*} The \emph{single parameter} efficiency is the marginalized version of Eq.~(\ref{eq:recovery_efficiency_definition}). Suppose our parameter of interest is $\theta$ and the other ``nuisance'' parameters are given by $\pmb{\gamma}$, such that in Eq.~(\ref{eq:recovery_efficiency_definition}), $\pmb{\lambda} = \{\theta, \pmb{\gamma} \}$. The single parameter efficiency is: \begin{equation} \varepsilon(\theta) = \ffrac{\left[\int_{\pmb{\gamma}} N_{\text{rec}}(\theta, \pmb{\gamma}) \mathrm{d}\pmb{\gamma}\right] \mathrm{d}\theta} {\left[\int_{\pmb{\gamma}} N_{\text{tot}}(\theta, \pmb{\gamma})\mathrm{d}\pmb{\gamma}\right] \mathrm{d}\theta} \label{eq:efficiency_single_parameter_definition} \end{equation} In Fig.~\ref{fig:efficiency_single_parameter} we show the single parameter efficiencies. The expected trend of missing \added{faint} transients \deleted{at high apparent magnitude} is seen in the plot for $m_{\text{inj}}$. We find that the recovery efficiency starts to drop by the 20th magnitude and the sensitivity is almost nil by the 22nd magnitude. \subsection{Multi-dimensional Detectability}\label{sec:multi_dim_efficiency} In this section, we restrict the full parameter set, $\pmb{\lambda}$, to those parameters on which the detectability depends strongly. 
In other words, the detectability is a multi-variate function of all the possible parameters which influence the detection of a transient. We identify the minimal set which captures the maximum variability. There can be correlations among a pair of parameters. For example, the sky brightness, $F_{\text{sky}}$, and the limiting magnitude, $m_{\text{lim}}$, are correlated: a bright sky hinders the imaging depth and results in a low limiting magnitude. The variation of the marginalized efficiencies shown in Fig.~\ref{fig:efficiency_single_parameter} assists us with the choice of such a parameter set. Since the trends in the single parameter efficiencies are similar to those from PTF, we select the parameters considered by \cite{frohmaier_2017}, with the minor difference that we use the galaxy surface brightness directly, as in \cite{Frohmaier_2018}, in place of the $F_{\mathrm{box}}$ \footnote{Background subtracted flux in a 3x3 box in the location of transient.} parameter used in the former. This is justified because our fakes were injected in galaxies. \deleted{subsubsection \textbf{Relevant Parameters}} \begin{figure} \begin{center} \includegraphics[width=1.0\columnwidth , trim=1cm 1.5cm 0cm 0cm] {ml_performance_stamp_mag.pdf} \end{center} \caption{Comparison between the single parameter efficiency of transient brightness as predicted by the trained single-epoch classifier in Eq.~(\ref{eq:efficiency_epsilonhat_definition}) and the distribution obtained from the ISP. The original curve has \replaced{ $\approx 2.4\times 10^6$ points, equal to the number of fake simulated transients } {$\sim 10^6$ points used to train the classifier}. The ML curves are made by binning the predictions made by the single-epoch classifier on a few thousand random points sampled from the parameter space of the injections (see Eq.~(\ref{eq:efficiency_multidim_parameters})). Two cases, for $10^3$ and $10^4$ points, are shown. 
We see that the behavior of the classifier converges to that of the ISP within a small sample size ($\lesssim 1\%$ of the size of the original distribution; see Appendix~\ref{sec:appendix} for the other parameters). } \label{fig:efficiency_ML_performance} \end{figure} \begin{table} \begin{center} \begin{tabular}{ccc} \hline Training $\%$ & Testing $\%$ & Avg. mis-classification\\ \hline \hline 75 $\%$ & 25 $\%$ & 5.776 $\%$\\ 80 $\%$ & 20 $\%$ & 5.760 $\%$\\ 85 $\%$ & 15 $\%$ & 5.745 $\%$\\ 90 $\%$ & 10 $\%$ & 5.758 $\%$\\ \hline \end{tabular} \end{center} \caption{The table shows the average misclassification obtained for the \texttt{KNearestNeighbor} classifier. The complete dataset contains $\approx 2.24 \times 10^6$ fake point source injections, of which $\approx 1.62 \times 10^6$ ($\approx 6.2 \times 10^5$) are found (missed) by the ISP. This is split into the respective training and testing fractions. The right-most column shows the fraction of the testing set for which the predictions made by the classifier, trained on the corresponding training fraction, differed from the actual value. The misclassification does not change significantly as the size of the training data is varied and is attributed mostly to systematics. We quote a conservative value of $6\%$ as the systematic uncertainty of the classifier. } \label{tab:efficiency_avg_misclassification} \end{table} We choose the following set to represent the dependence of detectability: \begin{equation} \pmb{\beta} = \{m, S_{\text{gal}}, F_{\text{sky}}, \Phi_{\text{IQ}}, m_{\text{lim}}\}. \label{eq:efficiency_multidim_parameters} \end{equation} Here $m$ is the apparent magnitude of the transient, $S_{\text{gal}}$ is the host galaxy surface brightness, $F_{\text{sky}}$ is the sky brightness, $\Phi_{\text{IQ}}$ is the ratio of the astronomical seeing to that of the reference image and $m_{\text{lim}}$ is the limiting magnitude. The quantities $m$ and $S_{\text{gal}}$ are natural parameters for capturing detectability. 
Sky brightness affects the detectability in a strong way, as is apparent from Fig.~\ref{fig:efficiency_single_parameter}. The $\Phi_{\text{IQ}}$ parameter captures the variability of the atmosphere. Finally, the limiting magnitude, $m_{\text{lim}}$, although correlated with $F_{\text{sky}}$, captures longer exposure times and the status of the instrument electronics. With this set, we use the machinery of supervised learning provided by the \texttt{scikit-learn} library \citep{scikit-learn} to train a binary classifier based on the results of the ISP. Once trained, the classifier outputs a probability of detection given arbitrary but physical values of $\pmb{\beta}$. We denote this trained classifier by $\hat{\varepsilon}$: \begin{equation} \hat{\varepsilon} = \hat{\varepsilon}(m, S_{\text{gal}}, F_{\text{sky}}, \Phi_{\text{IQ}}, m_{\text{lim}}). \label{eq:efficiency_epsilonhat_definition} \end{equation} The \texttt{scikit-learn} library provides a suite of classifiers. We choose the non-parametric \texttt{KNearestNeighbor} classifier based on speed and accuracy given our large volume of training data. \added{ Our complete dataset comprises $\sim 2.24 \times 10^6$ fake point source injections, of which $\sim 1.62 \times 10^6$ ($\sim 6.2 \times 10^5$) are found (missed) by the ISP. } We train the \replaced{model}{classifier} using 11 neighbors: twice the number of dimensions, plus one to break ties. The observation of a fiducial transient is a point in this parameter space. To decide if that point is ``missed'' or ``found'', we use a majority vote from the nearest 11 neighbors. \added{ To cross-validate the performance, the dataset is split into a training set containing $90\%$ of the full dataset, and a testing set containing the remaining $10\%$. } We checked that increasing the number of neighbors does not significantly increase the correctness of the predictions made by the classifier. We note that one could use a different threshold for this classification. 
For example, a different option could be to use more than 3 ``found'' neighbors to call an arbitrary point found. However, this comes at the cost of misclassification. \replaced{ We calculate the systematic uncertainty of the classifier by splitting the original data into training and testing sets. We find that by using the majority vote criterion the misclassification is } { From the predictions of the classifier on the testing set, we find the systematic uncertainty of the classifier to be } $\approx 6\%$, i.e. 6 out of 100 predictions made by the classifier are expected to be false negative or false positive cases. The result does not change much if the sizes of the training and testing sets are varied \added{(see Table~\ref{tab:efficiency_avg_misclassification})}. A comparison between the predictions made by the trained classifier and the original ISP efficiency as a function of the transient magnitude is presented in Fig.~\ref{fig:efficiency_ML_performance}. We see that the behavior of the ISP is reproduced by feeding the classifier only a few thousand points randomly chosen from the parameter space. \subsection{Point Source Transients}\label{sec:point_source_transients} We follow the \emph{clone stamping} technique used by \cite{frohmaier_2017} for PTF to perform our fake point source injections. The parameters describing these fake transients are \emph{single epoch}: they represent the intrinsic properties of the object and the observing conditions at a particular epoch. In other words, here we assess the detectability given that the transient was in the field of view of the instrument. The computational cost of performing injections into all iPTF images and running the ISP on them is significant. Therefore, we carry out the process in a single iPTF field, 100019. We choose this field since the distribution of the transient population in this field is an accurate representation of that in the sky observed from Palomar (see Fig. 1 of \cite{frohmaier_2017}). 
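The rescaling step at the heart of clone stamping follows directly from the magnitude definition: a stamp of a star of magnitude $m_*$ is multiplied by $10^{-0.4(m_{\text{inj}} - m_*)}$ to fake a transient of magnitude $m_{\text{inj}}$. A minimal sketch (the helper below is illustrative, not the actual pipeline code):

```python
import numpy as np

def rescale_stamp(stamp, m_star, m_inj):
    """Rescale a local-background-subtracted star cutout from magnitude
    m_star to a fainter injected magnitude m_inj (illustrative helper)."""
    if m_inj < m_star + 1:
        # only re-scale to fainter magnitudes, at least 1 mag below the star
        raise ValueError("injected transient must be fainter than the star")
    scale = 10.0 ** (-0.4 * (m_inj - m_star))
    return stamp * scale

stamp = np.ones((9, 9))                    # toy 9x9-pixel stamp
faked = rescale_stamp(stamp, m_star=15.0, m_inj=17.5)
print(f"{faked.sum() / stamp.sum():.3f}")  # 0.100
```

Rescaling a $m_* = 15$ stamp to $m_{\text{inj}} = 17.5$ reduces the total flux by a factor of $10^{-0.4 \times 2.5} = 0.1$, as the check shows.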
\begin{figure*}[htp] \begin{center} \subfloat[Brighter fake transient]{% \includegraphics[width=0.32\textwidth] {original_0.pdf} \includegraphics[width=0.32\textwidth] {faked_0.pdf} \includegraphics[width=0.32\textwidth] {subtraction_0.pdf} } \\ \subfloat[Dimmer fake transient]{% \includegraphics[width=0.32\textwidth] {original_4.pdf} \includegraphics[width=0.32\textwidth] {faked_4.pdf} \includegraphics[width=0.32\textwidth] {subtraction_4.pdf} } \end{center} \caption{An example of an injected transient and the corresponding difference image thumbnail obtained after the image subtraction. The leftmost thumbnail (both panels) is from the original image, the middle thumbnail is the result after a transient is injected, and the right thumbnail shows the difference image. The cross-hair marks the approximate point where the transient was injected. } \label{fig:injections_clone_stamping} \end{figure*} The fake injections are \emph{bright} stars chosen from each original image. These are objects having the following properties: \begin{align} \begin{split} m_{*} \in [13.5, 16] \quad;\quad &\texttt{CLASS\_STAR} \in [0.5, 1.0] \\ \texttt{FWHM} \in [1.0, 3.0] \quad;\quad &\texttt{ELLIP} \in [0.0, 0.3]. \end{split} \label{eq:injections_bright_stars} \end{align} Here $m_{*}$ is the apparent magnitude and $\texttt{CLASS\_STAR}$ is a quantity with a value between 0 (not star-like) and 1 (star-like). $\texttt{FWHM}$ is the full width at half maximum, in pixels, and $\texttt{ELLIP}$ is the ellipticity of the object. These quantities are reported after running \texttt{SExtractor} \citep{sextractor} on the original images. We choose objects in this range because we want the point spread function (PSF) to be well estimated, which is the case for bright stars having a high signal to noise ratio $\gtrsim 100$ ($m_{*} \leq 16$). At the same time, we want to avoid pixel saturation and therefore select stars with $m_{*} \geq 13.5$. Objects falling within a 50 pixel wide edge boundary are left out since they could potentially be affected by image subtraction artifacts. A square of side length $\sim 9$ arc seconds\footnote{More precisely, 9 pixels; $1\text{ pix.} \approx 1.01''$.}, centered on the star and local-background subtracted, constitutes a \emph{stamp}. A stamp containing any other object apart from the source star is avoided. The local background is that reported by \texttt{SExtractor}. The stamp is scaled by an appropriate factor to create a point source transient of the desired magnitude. Each transient is allocated a host galaxy\footnote{About 50 fake transients were injected in each image; $90\%$ having an associated host galaxy, $10\%$ away from any host galaxy. In this study we only use the injections in host galaxies.}. We follow \cite{frohmaier_2017} regarding the location in the host and place our stamp at a random pixel location within an elliptical radius\footnote{\texttt{KRON\_RADIUS} in \texttt{SExtractor}} of 3 pixels. This radius contains a sufficient amount of the flux from the galaxy. This procedure is performed on all the images in field 100019 of iPTF, ten-fold, with a total of $\approx 2.4 \times 10^6$ injected transients. The transient magnitudes are chosen uniformly between 15th and 22nd magnitude with the constraint that the stamp is at least one magnitude fainter than the original star. We only re-scale to fainter magnitudes because we do not want artifacts, such as noise residuals from the average background subtraction, to be scaled up as noise spikes. Therefore, $m_{\text{inj}}$ follows: \begin{equation} m_{\text{inj}} \sim \begin{cases} U(15, 22) &; m_{*} \in (13.5, 14) \\ U(m_{*} + 1, 22) &; \text{otherwise} \end{cases}.
\label{eq:injections_m_stamp_distribution} \end{equation} An example of an injected transient in a galaxy and the new object recovered by the ISP is shown in Fig.~\ref{fig:injections_clone_stamping}. \subsection{Recovery Criteria}\label{sec:recovery_criteria} \begin{figure}[htp] \begin{center} \includegraphics[width=1.0\columnwidth, trim=0cm 1cm 0cm 0cm] {theta_iq_cumul_hist.pdf} \end{center} \caption{Cumulative histogram of the quantity $\Theta_{\text{IQ}}$, defined in Eq.(\ref{eq:recovery_theta_iq}) as the separation between the injected and recovered positions in units of the astronomical seeing of the image. The threshold value $\Theta^{99\%}_{\mathrm{IQ}} = 0.48$ corresponds to the 99th percentile. We impose this value as a constraint when the objects recovered by the pipeline are spatially cross-matched to an injected transient. } \label{fig:recovery_theta_iq} \end{figure} The recovery efficiency $\varepsilon$ is defined as the ratio of the number of injections recovered in a part of the parameter space to the total number of injections in that part. Let our injections be described by parameters $\pmb{\lambda}$; then: \begin{equation} \varepsilon(\pmb{\lambda}) = \ffrac{N_{\text{rec}}(\pmb{\lambda}) \mathrm{d}\pmb{\lambda}} {N_{\text{tot}}(\pmb{\lambda}) \mathrm{d}\pmb{\lambda}} \label{eq:recovery_efficiency_definition} \end{equation} The numerator and denominator are the numbers of recovered and total injections, respectively, in $\left(\pmb{\lambda}, \pmb{\lambda} + \mathrm{d}\pmb{\lambda}\right)$. Here $\pmb{\lambda}$ includes the intrinsic properties of the transient source and its environment, along with the observing conditions. Examples of intrinsic properties include the magnitude of the transient and the surface brightness of the host galaxy, whereas observing conditions include airmass or sky brightness.
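A one-dimensional toy version of Eq.~(\ref{eq:recovery_efficiency_definition}), with $\pmb{\lambda}$ reduced to the injected magnitude drawn as in Eq.~(\ref{eq:injections_m_stamp_distribution}) and a hypothetical sigmoid standing in for the true pipeline response, can be sketched as follows (the sigmoid parameters are assumptions, not fitted iPTF values):

```python
import random

def draw_injection_mag(m_star):
    """Injected magnitude: uniform in [15, 22], but at least one
    magnitude fainter than the source star."""
    lo = 15.0 if 13.5 < m_star < 14.0 else m_star + 1.0
    return random.uniform(lo, 22.0)

def binned_efficiency(samples, edges):
    """Recovery efficiency per magnitude bin: fraction of injections
    recovered.  samples: list of (m_inj, recovered) pairs."""
    eff = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = [rec for m, rec in samples if lo <= m < hi]
        eff.append(sum(in_bin) / len(in_bin) if in_bin else float("nan"))
    return eff

random.seed(5)
# Hypothetical recovery model: efficient to ~20.5 mag, then falling off.
samples = []
for _ in range(20000):
    m = draw_injection_mag(random.uniform(13.5, 16.0))
    p = 1.0 / (1.0 + 10 ** (0.8 * (m - 20.5)))    # illustrative sigmoid
    samples.append((m, random.random() < p))

edges = [15 + 0.5 * i for i in range(15)]         # 15.0 ... 22.0
eff = binned_efficiency(samples, edges)
```

Binning in further parameters (seeing, host surface brightness, ...) generalizes this to the full $\varepsilon(\pmb{\lambda})$.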
While we control the fake transient brightness, the observing conditions are those of the images themselves. Since images spanning the full survey duration are used, the parameter space of the observing conditions is automatically covered. We determine recovery based on the spatial cross-matching of the injections with new objects reported after running the ISP. To determine the tolerance to be imposed during the cross-matching, we define $\Theta_{\text{IQ}}$ as: \begin{equation} \Theta_{\text{IQ}} = \frac{ \sqrt{(x_{\text{inj}} - x_{\text{rec}})^2 + (y_{\text{inj}} - y_{\text{rec}})^2} } {\Phi} \label{eq:recovery_theta_iq} \end{equation} where $\Theta_{\text{IQ}}$ is the distance between the injected and the recovered sources in units of the seeing, $\Phi$. We choose the threshold on $\Theta_{\text{IQ}}$ such that $99\%$ of the found injections lie within it, giving $\Theta^{99\%}_{\mathrm{IQ}} = 0.48$ (see Fig.~\ref{fig:recovery_theta_iq}). We also impose a real-bogus score threshold, $\texttt{RB2} \geq 0.1$, on the new object. This threshold on $\texttt{RB2}$ is inspired by survey operation thresholds. Out of the $\approx 2.4 \times 10^6$ injections, we recover $\approx 1.7 \times 10^6$. \subsection{iPTF Image Subtraction Pipeline} The iPTF real-time image subtraction pipeline (henceforth ISP) was hosted at the National Energy Research Scientific Computing Center (NERSC). A complete exposure of 11 working CCDs was transferred to NERSC immediately after data acquisition to search for new candidates. The pipeline preprocessed the images to remove bias and correct for flat-fielding. It solved for astrometry and photometry, and performed image subtraction using the \texttt{HOTPANTS} algorithm \citep{2015ascl.soft04004B}.
New candidates were assigned a \emph{real-bogus} classification score between 0 and 1, corresponding to bogus and real respectively \citep{real_bogus_2}. Additionally, candidates would be cross-matched to external catalogs to remove asteroids, active galactic nuclei (AGNs) and variable stars. \section{Introduction} \label{sec:intro} \input{introduction} \section{Intermediate Palomar Transient Factory} \label{sec:isp} \input{isp} \section{Fake Transients} \label{sec:injections} \input{injections} \section{Single Epoch Detectability} \label{sec:efficiency} \input{efficiency} \section{Lightcurve Recovery} \label{sec:rates} \input{rates} \section{Discussion and Conclusions} \label{sec:discussion} \input{discussion} \acknowledgments \input{acknowledgements} \subsection{SN~Ia Lightcurves}\label{subsec:salt2_lightcurves} We use SN~Ia lightcurves from the SALT2 model \citep{salt2}. In particular, we use the \texttt{Python} implementation of SALT2 provided in the \texttt{sncosmo} library \citep{sncosmo}. This model is based on observations of SNe~Ia by the SDSS and SNLS surveys. The free parameters of the model include the stretch ($x_1$) and color ($C$) parameters of the SN~Ia. For the ranges of these parameters, we follow \cite{frohmaier_2017} (see Table 1 and Eq.(4) therein). The ranges cover the possible lightcurve morphologies of SNe~Ia \citep{jla}. We show an example lightcurve, at a redshift of $z = 0.01$ with an intrinsic $M_B = -19.05$, in Fig.~\ref{fig:rates_example_lightcurve}. When propagating the flux, we also take into account the extinction due to host galaxy dust and the Milky Way (MW) dust. We use the MW dust map by \cite{f99dust}, which is a part of the \texttt{sncosmo} package. For the host galaxy extinction, we use the distribution of $E(B-V)$ of SNe~Ia in their host galaxies \citep{hatano}. Dust extinction plays a significant role in the detectability of lightcurves, as the SNe can be dimmed by as much as $1$--$1.5$ magnitudes.
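The magnitude budget behind this statement can be illustrated with a simple sketch. The actual analysis applies band-dependent extinction laws through \texttt{sncosmo}; the broadband approximation $A_V \approx R_V\,E(B-V)$ with $R_V = 3.1$, and the distance modulus used below, are assumptions for illustration only.

```python
def apparent_mag(m_abs, mu, ebv_host=0.0, ebv_mw=0.0, r_v=3.1):
    """Apparent magnitude given the absolute magnitude, distance modulus
    mu, and host + Milky Way reddening, using the rough broadband
    approximation A_V ~ R_V * E(B-V)."""
    return m_abs + mu + r_v * (ebv_host + ebv_mw)

# A SN Ia (M_B = -19.05) at distance modulus ~33.2 (roughly z ~ 0.01):
m_clean = apparent_mag(-19.05, 33.2)
m_dusty = apparent_mag(-19.05, 33.2, ebv_host=0.4)
# A host reddening of E(B-V) = 0.4 dims the SN by ~1.2 mag,
# consistent with the 1-1.5 mag figure quoted above.
```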
\subsection{Lightcurve Ensemble}\label{subsec:lightcurve_ensemble} We simulate $\approx 5 \times 10^6$ SN~Ia lightcurves uniformly in co-moving volume up to a redshift of $z_{\mathrm{max}}^{\mathrm{Ia}} = 0.28$\footnote{ This $z_{\mathrm{max}}^{\mathrm{Ia}} = 0.28$ is high enough to capture the spacetime boundary of iPTF sensitivity. Also, no simulations are done below a declination $\delta_{\mathrm{min}}\approx -31^{\circ}$, consistent with hardware limitations of iPTF. }, with peak times distributed uniformly in the observer frame. We assume a flat $\Lambda\mathrm{CDM}$ cosmology with Hubble constant $H_0 = 69.3\ \mathrm{km\,s^{-1}\,Mpc^{-1}}$ and matter density parameter $\Omega_\mathrm{m} = 0.287$ \citep{2013ApJS..208...19H}\footnote{\texttt{astropy.cosmology.WMAP9}}. We associate a host galaxy surface brightness with each of these SNe using the distribution of surface brightness from iPTF data. The epochs at which the SN~Ia is observed come from the iPTF observing schedule. At each observation, we obtain the transient magnitude at that epoch from the lightcurve and the observing conditions from the iPTF survey database. The single epoch classifier then tells us the epochs at which the transient was detected. An example is shown in Fig.~\ref{fig:rates_example_lightcurve}, where the vertical lines in the upper and lower panels represent the observations and detections at each epoch, respectively. \subsection{SN~Ia Space-time Sensitive Volume} To understand rates, one must have a good estimate of the survey sensitivity to particular transient types. Let $\Lambda_{\text{SNe}}$ be the expected count of SNe seen during the survey time.
Then, with $R$ as the intrinsic rate we have: \begin{eqnarray} \Lambda_{\text{SNe}} &=& \int f(t;\underbrace{M_B, z, \dots}_{\pmb{\kappa}}) \overbrace{\frac{\mathrm{d}N}{\mathrm{d}t_e \mathrm{d}V_c}}^{R} \frac{1}{1 + z} \frac{\mathrm{d}V_c}{\mathrm{d}z} \mathrm{d}z \mathrm{d}t \mathrm{d}\pmb{\kappa} \nonumber \\ &=& R \int f(t;\underbrace{M_B, z, \dots}_{\pmb{\kappa}}) \frac{1}{1 + z} \frac{\mathrm{d}V_c}{\mathrm{d}z} \mathrm{d}z \mathrm{d}t \mathrm{d}\pmb{\kappa} \label{eq:rates_Nobs_VT_relation}\\ &=& R \langle VT \rangle, \nonumber \end{eqnarray} where the integral runs over time of observation and co-moving volume up to $z_{\mathrm{max}}^{\mathrm{Ia}}=0.28$. The \emph{selection} function, $f(\dots) \in \{0, 1\}$, is to be interpreted as the weight assigned to regions in space-time. The value of the selection function is a consequence of running a particular instance of SN~Ia through the observing schedule and inferring detectability based on the single-epoch classifier in Eq.~(\ref{eq:efficiency_epsilonhat_definition}). Therefore, the selection function depends on the observer time, $t$, which captures the duty cycle and cadence. Also, it depends on the intrinsic properties of the supernova like the absolute intrinsic magnitude, $M_B$, the redshift, $z$, at which it was simulated, the sky location and so on. These are collectively represented by $\pmb{\kappa}$ in Eq.~(\ref{eq:rates_Nobs_VT_relation}). 
Since we have distributed the supernovae uniformly in co-moving volume, the integral is approximated in the Monte-Carlo sense: \begin{eqnarray} \langle VT \rangle &=& \int f(t;\underbrace{M_B, z, \dots}_{\pmb{\kappa}}) \frac{1}{1 + z} \frac{\mathrm{d}V_c}{\mathrm{d}z} \mathrm{d}z \mathrm{d}t \mathrm{d}\pmb{\kappa} \nonumber \\ &\approx & \frac{N_{\text{rec}}}{N_{\text{tot}}} T \int \frac{1}{1 + z} \frac{\mathrm{d}V_c}{\mathrm{d}z} \mathrm{d}z, \end{eqnarray} where $N_{\text{rec}}$ is the number of SNe recovered from this simulation campaign, $N_{\text{tot}}$ is the total number simulated and $T$ is the four year period of iPTF over which we performed the simulations\footnote{More specifically, Oct 23, 2012 to Mar 3, 2017, i.e. 1592 days.}. We obtain the result: \begin{equation} \langle VT \rangle_{\mathrm{Ia}} = (2.93 \pm 0.21) \times 10^{-2} \; \mathrm{Gpc^{3}\,yr}, \label{eq:rates_VT_value} \end{equation} where the error includes the $\sim 1/\sqrt{N}$ statistical error from the Monte-Carlo integration and the $6\%$ systematic error of the single epoch detectability classifier computed in Sec.~\ref{sec:multi_dim_efficiency}, the latter being the dominant source of error. The distribution of the detected SNe~Ia on the sky is shown in Fig.~\ref{fig:rates_SNIa_skymap}, colored by redshift. Using the recovered SNe~Ia, the median sensitive co-moving volume is found to be $0.305$ $\mathrm{Gpc^3}$. We report the redshift corresponding to this value as the median sensitive redshift to SNe~Ia, $z_{\text{median}}^{\text{Ia}} = 0.099$, shown in Fig.~\ref{fig:rates_SNIa_recovery}.
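The Monte-Carlo evaluation of Eq.~(\ref{eq:rates_Nobs_VT_relation}) can be sketched end-to-end as below. Here a toy Gaussian selection function in redshift stands in for the observing-schedule and single-epoch-classifier machinery, so the numbers produced are illustrative only; the cosmology matches the one quoted in the text.

```python
import math
import random

H0, OM = 69.3, 0.287                 # flat LCDM used in the text (km/s/Mpc)
OL, C_KMS, ZMAX = 1.0 - OM, 299792.458, 0.28

def E(z):
    return math.sqrt(OM * (1.0 + z)**3 + OL)

# Tabulate the comoving distance D_C(z) (Mpc) with the trapezoidal rule.
N = 2000
zg = [ZMAX * i / N for i in range(N + 1)]
dc = [0.0]
for i in range(1, N + 1):
    h = zg[i] - zg[i - 1]
    dc.append(dc[-1] + 0.5 * h * (C_KMS / H0)
              * (1.0 / E(zg[i - 1]) + 1.0 / E(zg[i])))

def draw_redshift():
    """Uniform in comoving volume: V_c ~ D_C^3, so draw D_C = D_max u^(1/3)
    and invert the tabulated D_C(z) by bisection."""
    target = dc[-1] * random.random() ** (1.0 / 3.0)
    lo, hi = 0, N
    while hi - lo > 1:
        mid = (lo + hi) // 2
        lo, hi = (mid, hi) if dc[mid] < target else (lo, mid)
    f = (target - dc[lo]) / (dc[hi] - dc[lo])
    return zg[lo] + f * (zg[hi] - zg[lo])

# Time-dilation weighted volume, int (1/(1+z)) dV_c/dz dz, in Gpc^3
# (flat space: dV_c/dz = 4 pi D_C^2 c / (H0 E(z))).
v_w = 0.0
for i in range(1, N + 1):
    h = zg[i] - zg[i - 1]
    a = 4.0 * math.pi * dc[i - 1]**2 * (C_KMS / H0) / (E(zg[i - 1]) * (1 + zg[i - 1]))
    b = 4.0 * math.pi * dc[i]**2 * (C_KMS / H0) / (E(zg[i]) * (1 + zg[i]))
    v_w += 0.5 * h * (a + b)
v_w *= 1e-9                          # Mpc^3 -> Gpc^3

# Toy selection function: detection probability falling off with redshift.
random.seed(11)
n_tot, t_yr, sys_frac = 100_000, 1592.0 / 365.25, 0.06
n_rec = sum(random.random() < math.exp(-(draw_redshift() / 0.12)**2)
            for _ in range(n_tot))
vt = (n_rec / n_tot) * t_yr * v_w                       # Gpc^3 yr
err = vt * math.hypot(1.0 / math.sqrt(n_rec), sys_frac)  # stat (+) sys
```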
\subsection{SN~IIp Space-time Sensitive Volume} \begin{figure}[htp] \begin{center} \includegraphics[width=1.0\columnwidth, trim=0.5cm 0cm 0.5cm 0cm] {lightcurve_sn2p_z_0_01.pdf} \includegraphics[width=1.0\columnwidth, trim=0.5cm 1cm 0.5cm 0cm] {lightcurve_sn2p_z_0_01_with_recovery.pdf} \end{center} \caption{\textbf{Upper panel}: An example of a SN~IIp lightcurve, with the apparent magnitude $m$ on the y-axis and time on the x-axis. The lightcurve is shown in the iPTF $R$ and $g$ bands. The observations of the telescope are shown as vertical lines. \textbf{Lower panel}: The same lightcurve, with the vertical lines now representing the recovery by the single epoch classifier. One can identify the only $g$ band observation (around 40 days) being missed due to the fainter magnitude in the $g$ band.} \label{fig:rates_example_sn2p_lightcurve} \end{figure} In contrast to the well-defined Ia lightcurves with their typical timescales of several weeks, we also wanted to explore longer-timescale lightcurves as a limiting case. Therefore, we consider type IIp supernovae and compute their space-time sensitive volume along similar lines to Sec.~\ref{subsec:lightcurve_ensemble}. In general, type II supernovae (SNe~II) vary in lightcurve morphology and are categorized into various subtypes \citep{li_2011}. Specifically, type IIp lightcurves have a distinct ``plateau'' feature after the rise, lasting for about 100 days after explosion, as shown in Fig.~\ref{fig:rates_example_sn2p_lightcurve}. The intrinsic brightness, $M_B \sim -16.75$, is significantly lower than that of SNe~Ia \citep{richardson_2014}. Hence, we expect the space-time sensitive volume to be lower than that of the SNe~Ia. When considering the Ia lightcurves in Sec.~\ref{subsec:salt2_lightcurves}, the SALT2 model parameters were used to tune possible lightcurve morphologies.
Here we take a simpler approach and consider a time-series model from \cite{1999ApJ...521...30G} (named \texttt{nugent-sn2p} in the \texttt{sncosmo} package) to compute the flux up to 100 days from the explosion time. Thus, while simulating the SNe~IIp in space-time, the only change to the lightcurve shape is the ``stretch'' due to the cosmological redshift. We simulate $\sim 9.1 \times 10^5$ SN~IIp lightcurves uniform in sky location, observer time and co-moving volume up to a redshift of $z = 0.1$. As for the SNe~Ia, each SN~IIp is assigned a host galaxy surface brightness from the surface brightness distribution of galaxies in iPTF and an $E(B-V)$ extinction value from the IIp extinction distribution of \cite{hatano}. In this case, we use the criterion that the lightcurve must be recovered at a minimum of five epochs, brighter than 20th magnitude, within a span of 3 weeks during the 100 days post-explosion. The iPTF observing schedule along with the single-epoch classifier is used to compute the detectability at each epoch. We obtain the result: \begin{equation} \langle VT \rangle_{\mathrm{IIp}} = (7.80 \pm 0.76) \times 10^{-4} \; \mathrm{Gpc^{3}\,yr}, \label{eq:rates_sn2p_VT_value} \end{equation} where the error includes the statistical error from the Monte-Carlo integration and the $6\%$ systematic uncertainty from the single-epoch classifier (see Sec.~\ref{sec:multi_dim_efficiency}). The median sensitive redshift is found to be $z_\text{median}^{\text{IIp}} = 0.038$.
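The multi-epoch recovery criterion can be expressed as a simple sliding-window check over the epochs flagged as detected by the single-epoch classifier (a sketch of the logic described above):

```python
def passes_iip_criterion(detections, t_explosion,
                         m_lim=20.0, n_min=5, span=21.0, window=100.0):
    """detections: list of (t, mag) epochs flagged as detected by the
    single-epoch classifier.  Requires at least n_min detections brighter
    than m_lim, all falling within some `span`-day interval during the
    first `window` days after explosion."""
    times = sorted(t for t, m in detections
                   if m < m_lim and 0.0 <= t - t_explosion <= window)
    # Slide a window of n_min consecutive detections over the sorted times.
    for i in range(len(times) - n_min + 1):
        if times[i + n_min - 1] - times[i] <= span:
            return True
    return False
```

For example, five detections at 3-day cadence during the plateau pass the cut, whereas the same five detections spread over two months do not.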
\section{Introduction} \label{intro} While they may be the least massive and luminous systems in our universe, understanding the origin of dwarf galaxy properties represents a key step in developing and testing theories of galaxy formation and cosmology. From the perspective of cosmology, abundances and structural properties of low mass haloes present an important observational test of the $\Lambda$CDM model, but the effects of baryonic physics (to which dwarfs are highly susceptible because of their small potential wells) can make it difficult to make robust predictions. Meanwhile, from the point of view of galaxy formation, these `messy' baryonic processes are interesting in their own right, as well as providing insight into the reionization history of the universe and the enrichment of the intergalactic medium (IGM) with metals. Examining the issue of dwarf abundances, there is a substantial offset between the predicted dark matter halo abundance from numerical simulations and the observed galaxy stellar mass function \citep[for recent work see e.g.][]{Behroozi2013,Moster2018}, indicating that dwarfs must be more than an order of magnitude less efficient at forming stars than Milky Way-sized haloes. This is also posited as a solution to the so called `missing satellites problem', where the observed number of Local Group satellites is at odds with the substantially larger number of dark matter haloes predicted by cosmological N-body simulations \citep[see e.g.][]{Moore1999, Klypin1999, Diemand2008, Springel2008, Koposov2009, Rashkov2012, Sawala2016}. It has been suggested for some time that low mass haloes should have their star formation efficiency suppressed by two primary processes: SN feedback \citep[e.g.][]{Larson1974, Dekel1986, Mori2002, Governato2007} and cosmic reionization \citep[e.g.][]{Efstathiou1992, Bullock2000, Dijkstra2004, Kravtsov2004, Madau2008}. 
Evidence that these structures do in fact exist but are relatively dark has been bolstered recently by detections of local ultra-faint dwarfs \citep[e.g.][]{Koposov2015, Laevens2015, Martin2015, KimD2015}. As well as influencing abundances of low mass haloes, baryonic physics has also been invoked to solve structural discrepancies between dark matter simulations and observations. One such discrepancy is often termed the `cusp-core controversy'. Within the $\Lambda$CDM paradigm, dark matter-only simulations systematically predict steep inner density profiles for these low mass haloes, but some observations suggest that they may instead contain low density cores \citep[see e.g.][]{Moore1994, Flores1994, deBlok2002, Walker2011}. While SN feedback has been widely invoked in hydrodynamical simulations in an attempt to generate cored density profiles, there is still no consensus in the literature as some groups find only cuspy profiles \citep[e.g.][]{Vogelsberger2014, Sawala2016}, while others find various levels of cored profiles with different trends as a function of halo mass or redshift \citep[e.g.][]{Navarro1996a, Gnedin2002, Read2005, Mashchenko2008, Governato2010, Pontzen2012, DiCintio2014, Onorbe2015, Fitts2017}. The level of success in transforming dark matter cusps into cores seems closely related to the degree of burstiness of SN feedback, which could also affect the mass-loading of galactic outflows and the early enrichment of the IGM. It is perhaps at some level unsurprising that the properties of simulated dwarfs predicted by different groups are at variance as very different sub-grid models for star formation, SN feedback and wind launching are adopted, in addition to results often being rather sensitive to the numerical resolution of the simulations. 
This is, however, clearly unsatisfactory if we are to understand at a more fundamental level how SN feedback operates in dwarf galaxies, and even more so if we are to derive robust constraints on the nature of dark matter, using observed dwarfs as near-field cosmology probes. Recently, based on analytical calculations or small-scale simulations of individual SN explosions, there have been several theoretical works (e.g. \citealt{Hopkins2014a, Kimm2014, Iffrig2015, Martizzi2015, Kim2015, Walch2015}, but see also earlier work by \citealt{Cioffi1988,Thornton1998}) aiming at quantifying the correct momentum injection at a given SN remnant stage, as a function of local ISM properties, such as the gas density, metallicity and porosity. These studies are particularly useful as they in principle allow the imparting of the appropriate momentum into the ISM, even when the Sedov-Taylor phase of the SN remnant evolution is not properly resolved (often the case in galaxy formation simulations), without the use of tunable parameters (although they usually make certain assumptions such as a uniform ambient medium). We have trialled this type of SN injection, often dubbed `mechanical feedback', in an extensive series of simulations of isolated dwarf galaxies \citep{Smith2018}, finding that it results in realistic and well converged star formation rates and morphologies over two orders of magnitude in mass resolution. However, to naturally produce mass-loaded, multi-phase outflows, the gas mass resolution needed to be of order a few tens of $\mathrm{M_\odot}$ at least. Thankfully, even though these resolution requirements are quite daunting, they are achievable in full cosmological simulations provided that one dwarf is simulated at a time with a zoom-in technique.
Hence, the aim of this work is to examine the mechanical SN feedback scheme in fully self-consistently formed dwarfs without tuning any parameters to understand if it leads to realistic dwarf properties once the cosmological gas inflows and mergers are taken into account. We explore this by randomly selecting five dwarfs with virial masses between $\sim 2 -6 \times 10^9 \,\mathrm{M_\odot}$ at $z=4$ which reside in different environments and have different assembly histories. \section{Methodology}\label{Methodology} Our numerical scheme is essentially the same as that described in \cite{Smith2018}, but we summarise the salient details here. We carry out our simulations with the moving-mesh code \textsc{Arepo} \citep{Springel2010} which solves hydrodynamics on an unstructured Voronoi mesh. Gravity is included using a hybrid TreePM scheme. In this work, we include radiative cooling as in \cite{Vogelsberger2013}. Primordial heating and cooling rates are calculated using cooling, recombination and collisional rates provided by \cite{Cen1992} and \cite{Katz1996}. Metal-line cooling to 10~K is obtained from lookup tables containing rates precalculated from the photoionization code \textsc{Cloudy}. We include a redshift dependent, but spatially homogeneous UV background from \cite{FG2009}, although it is only turned on from $z=9$ to approximate the latest Planck measurement of optical depth to reionization \citep{Planck2016}\footnote{However, we have performed extra simulations where the UV background follows \cite{FG2009} exactly (switching on at $z=11.7$) and find that it does not change our results in any appreciable way.}. We adopt the density based self-shielding prescription of \cite{Rahmati2013} to attenuate the UV background in dense gas. 
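As an illustration, the density-based attenuation has the fitting-function form of \cite{Rahmati2013}. The self-shielding density $n_0$ below is an assumed placeholder value (in reality it depends on redshift and the amplitude of the UV background), so this is a sketch of the functional form rather than the exact prescription used in the simulations.

```python
def shielding_factor(n_h, n0=5.0e-3):
    """Fraction of the UV background photoionization rate reaching gas of
    hydrogen number density n_h (cm^-3), following the fitting-function
    form of Rahmati et al. (2013).  n0 (cm^-3) is the self-shielding
    density; 5e-3 is an assumed illustrative value, not the paper's
    adopted (redshift-dependent) number."""
    x = n_h / n0
    return 0.98 * (1.0 + x**1.64)**(-2.28) + 0.02 * (1.0 + x)**(-0.84)
```

Diffuse gas ($n_{\mathrm{H}} \ll n_0$) sees the full UV background, while at the star formation threshold densities the photoionization rate is suppressed by orders of magnitude.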
\begin{figure*} \centering \includegraphics[trim=0 7 0 0,clip]{figs/projections_wide_cbar.pdf} \includegraphics[trim=0 0 0 5,clip,width=1.0\textwidth]{figs/projections_wide.pdf} \caption{Density projections of the large scale environment around our target haloes in the coarse dark matter only simulation at $z = 4$. The target haloes are marked with green ticks for ease of identification and the virial radius is marked with a green circle. We deliberately select the haloes from a variety of environments ranging from void-like regions to rich filaments.} \label{projections_wide} \end{figure*} We include a non-thermal pressure floor to prevent artificial fragmentation in the event of under-resolving the Jeans length \citep[see e.g.][]{Truelove1997}. To ensure that the Jeans length is resolved by $N_\mathrm{J}$ cells, this takes the form \begin{equation} P_\mathrm{min} = \frac{N_\mathrm{J}^2 \Delta x^2 G \rho^2}{\pi \gamma}\,, \label{pressure_floor} \end{equation} where $\Delta x$ is the cell diameter, $\rho$ is the gas density and $\gamma = 5/3$ is the adiabatic index. We adopt $N_\mathrm{J}=8$ in this work. A detailed discussion of the effects of adopting a pressure floor can be found in \cite{Smith2018} (see also Section~\ref{Section_parameters}). Gas above a density threshold of $n_\mathrm{SF}$ is assigned a star formation rate density according to a simple Schmidt law, \begin{equation} \dot{\rho}_{*} = \epsilon_\mathrm{SF}\frac{\rho}{t_{\mathrm{ff}}}\,, \label{eq:schmidt} \end{equation} where $\rho$ is the gas density, $\epsilon_\mathrm{SF}$ is some efficiency and $t_{\mathrm{ff}}=\sqrt{3\pi/32G\rho}$ is the free-fall time. We use a fiducial value of $n_{\mathrm{SF}}=10\ \mathrm{cm}^{-3}$ and $\epsilon_\mathrm{SF}=1.5\%$ \citep[chosen to match observed efficiencies in dense gas, see e.g.][and references therein]{Krumholz2007}. We also examine the effect of varying these parameters in Section~\ref{Section_parameters}. 
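Both Eq.~(\ref{pressure_floor}) and Eq.~(\ref{eq:schmidt}) are simple local functions of the gas state, as the following sketch shows (cgs units; the mean molecular weight used to convert mass density to a number density is an assumption here):

```python
import math

G = 6.674e-8            # gravitational constant, cgs
GAMMA, N_J = 5.0 / 3.0, 8
M_P = 1.673e-24         # proton mass, g

def pressure_floor(rho, dx):
    """Non-thermal pressure floor keeping the Jeans length resolved by
    N_J cells of diameter dx: P_min = N_J^2 dx^2 G rho^2 / (pi gamma)."""
    return N_J**2 * dx**2 * G * rho**2 / (math.pi * GAMMA)

def sfr_density(rho, n_thresh=10.0, eps=0.015, mu=1.22):
    """Schmidt law: eps * rho / t_ff for gas above the number-density
    threshold n_thresh (cm^-3); mu is an assumed mean molecular weight."""
    if rho / (mu * M_P) < n_thresh:
        return 0.0
    t_ff = math.sqrt(3.0 * math.pi / (32.0 * G * rho))
    return eps * rho / t_ff
```

Note the floor scales as $\rho^2$, so it only becomes relevant in the densest, most poorly resolved cells.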
Using these rates, gas cells are then stochastically converted into star particles (collisionless particles representing single stellar populations). Star particles inherit the metallicity of the gas from which they were formed. For each star particle, we obtain a SN rate, $\dot{N}_\mathrm{SN}$, as a function of age and metallicity precalculated using \mbox{\textsc{Starburst99}} \citep{Leitherer1999} assuming a \cite{Kroupa2002} IMF. The number of SNe that occur in a timestep is then drawn from a Poisson distribution with a mean of $\bar{N}_\mathrm{SN}=\dot{N}_\mathrm{SN}\Delta t$, where $\Delta t$ is the timestep. In order to individually time resolve SNe, we impose a timestep limiter for star particles to ensure that $\bar{N}_\mathrm{SN}\ll1$. When a SN occurs, mass, metals, energy and momentum are coupled to the gas cell containing the star particle (the host cell) and its neighbours (all those that share a face with the host cell). Feedback quantities are distributed to the gas cells using an explicitly isotropic weighting scheme in the rest frame of the star particle (details in \citealt{Smith2018}, see also \citealt{Hopkins2018}) in order to avoid spurious numerical effects that can arise when using a simple kernel (mass) weighted approach to nearest neighbours due to the increased relative number of resolution elements present in dense gas. The ejecta mass, $m_\mathrm{ej}$, deposited per SN is $10\ \mathrm{M_\odot}$, of which $2\ \mathrm{M_\odot}$ is in metals, with an energy of $10^{51}\ \mathrm{ergs}$. In simulations designated `no feedback', mass and metals are returned, but no feedback energy/momentum is deposited. In runs with full SN feedback, we adopt the mechanical feedback scheme described in \cite{Smith2018} \cite[see also][]{Hopkins2014a,Hopkins2017a,Hopkins2018,Kimm2014,Kimm2015,Martizzi2015}. 
This aims to deposit the correct amount of momentum corresponding to the stage of the SN remnant evolution resolved (dependent on the local gas density and metallicity). When analysing simulations, we use the halo finder \textsc{Subfind} \citep{Springel2001,Dolag2009} to calculate halo properties. We adopt the convention of considering friends-of-friends (FOF) groups as the primary dark matter halo and subhaloes as galaxies within the halo (unless otherwise stated, we only consider centrals). For halo virial masses, we use the definition of \cite{Bryan1998} and for galaxy stellar masses we use the mass contained within twice the radius that contains half the total subhalo stellar mass associated with the group. We use the code \textsc{Sublink} \citep{Rodriguez-Gomez2015} to construct merger trees and track haloes/subhaloes throughout the simulations. We follow the branch of the merger tree with the most mass behind it for our analysis except where otherwise mentioned. We adopt a \cite{Planck2016} cosmology throughout this work. Unless otherwise stated, all units are in proper coordinates. \begin{figure*} \centering \includegraphics[trim=0 6 0 0,clip]{figs/projections_cbar.pdf} \includegraphics[trim=0 0 0 6,clip, width=1.0\textwidth]{figs/projections.pdf} \caption{Density projections, from left to right: dark matter (shown here for the no feedback simulations, although the equivalent plots for runs with SNe are similar), gas for simulations without SNe, gas with SNe, stars without SNe and stars with SNe. Each row corresponds to a different dwarf. The gas and stellar projections are centred on the central galaxy of the halo. Dwarfs 1, 2, 3 and 4 are shown at $z=4$, while dwarf 5 is shown at $z=4.4$ to allow comparison to the curtailed no feedback simulation. The virial radius is indicated with a green circle. 
While SN feedback significantly alters morphologies, particularly in the case of dwarf 4, in most cases a centrally condensed baryon concentration persists.} \label{projections} \end{figure*} \begin{figure*} \centering \includegraphics[width=1.0\textwidth]{figs/m_halo_star.pdf} \caption{Stellar mass to halo mass ratio as a function of halo mass for our various simulations at $z=10$, 8, 6 and 4. Halo mass here is defined as in \protect\cite{Bryan1998}. We express the stellar mass to halo mass ratio (which can be considered the integrated baryon conversion efficiency) as the mass of stars formed in the central galaxy (defined as the most massive subhalo) divided by the product of the halo mass and the cosmic baryon fraction. Open symbols indicate simulations without SNe. In the case of dwarf 4, we plot both progenitor haloes of the final $z=4$ halo prior to their major merger at $z=5.5$. At $z=4$, while the two haloes have merged, the two central galaxies of the progenitors have not yet merged into a single subhalo (see main text); nonetheless, we use the sum of the stellar mass in both of these galaxies to compute the star formation efficiency. We also indicate results from abundance matching as in \protect\cite{Behroozi2013} and \protect\cite{Moster2018} with shaded regions, although it should be noted that at such low halo masses the relations are heavily extrapolated. Note that, with the exception of dwarf 4, all of our simulated dwarfs are in strong disagreement with the abundance matching extrapolations.} \label{mhalo_star} \end{figure*} \section{Simulations} \subsection{Initial conditions and simulation details} The process for creating cosmological `zoom-in' initial conditions is as follows. First, a coarse resolution dark matter only simulation of a large, periodic cosmological volume is run to a target redshift, $z_\mathrm{target}$.
Dark matter haloes of interest are identified in the $z_\mathrm{target}$ output of this simulation and are resimulated at a higher resolution with a `zoom-in' technique. Gas is then added to the initial conditions by splitting the particles into dark matter and gas mesh generating points according to the cosmic baryon fraction (although we also carry out dark matter only zoom-in simulations). We use the code \textsc{MUSIC} \citep{Hahn2011} to generate the initial conditions (at $z=127$) for both the coarse and zoom-in simulations. Dwarfs 1 and 2 are selected at $z = 0$ from $10\ \mathrm{cMpc}\,\mathrm{h}^{-1}$ coarse boxes with a resolution of $256^3$ particles (giving a particle mass of $7.47\times10^{6}\ \mathrm{M_\odot}$). In the coarse simulation, their virial masses at $z = 0$ are $1.04\times10^{10}\ \mathrm{M_\odot}$ and $1.12\times10^{10}\ \mathrm{M_\odot}$ with virial radii of 62.0~kpc and 64.5~kpc, respectively. The selection regions at $z=0$ are a sphere of radius 736~kpc for dwarf 1 and a sphere of radius 295~kpc for dwarf 2. However, for the purposes of this work, we carry out our analysis up until $z = 4$, at which point their masses are $2.82\times10^{9}\ \mathrm{M_\odot}$ and $3.11\times10^{9}\ \mathrm{M_\odot}$, and they have virial radii of 9.20~kpc and 9.47~kpc (note that in the zoom-in simulations, the final mass and radius varies due to the effects of baryonic physics and the higher resolution). Dwarfs 3, 4 and 5 are selected at $z = 4$ from a $20\ \mathrm{cMpc}\,\mathrm{h}^{-1}$ box with a resolution of $512^3$ particles (i.e. the same mass resolution as the boxes used for dwarfs 1 and 2). The masses of dwarfs 3 and 4 at $z=4$ are $2.56\times10^{9}\ \mathrm{M_\odot}$ and $2.51\times10^{9}\ \mathrm{M_\odot}$ with virial radii of 8.86~kpc and 8.78~kpc. The selection regions at $z=4$ are spheres of radius 44.1~kpc. 
In the coarse simulation, we identify a fifth halo with a virial mass of $1.00\times10^{10}\ \mathrm{M_\odot}$ and a virial radius of 13.96~kpc, which we resimulate with a zoom-in region of 88.3~kpc. In the subsequent zoom-in simulation, this region actually contains two separate haloes of $\sim6\times10^9\ \mathrm{M_\odot}$, sufficiently separated as to be considered independent. We take the larger of these two haloes to be the focus of our analysis, referring to it as dwarf 5. Fig.~\ref{projections_wide} shows dark matter density projections of the large scale region around the target haloes in the coarse simulations at $z=4$. Dwarfs 1, 2 and 3 are in relatively low density environments, 4 is in a more crowded filament region, while 5 is a larger system in a very crowded filament. Our fiducial simulations increase the number of resolution elements in the zoom-in region by a factor of $16^3$, giving dark matter particle and target gas cell masses of $1536\ \mathrm{M_\odot}$ and $287\ \mathrm{M_\odot}$, respectively. We also run simulations of dwarf 1 at higher resolutions, with gas cell masses of $35.9$ and $15\ \mathrm{M_\odot}$, for the purposes of testing convergence. The refinement/derefinement scheme in \textsc{Arepo} keeps gas cell masses within a factor of 2 of the target mass. Because star particles are formed by converting gas cells, this also corresponds to the initial star particle mass (prior to mass loss from feedback). We use comoving gravitational softenings of 0.129~ckpc for the high resolution dark matter particles, gas cells\footnote{For gas cells, the softening is calculated as the maximum of either this fixed softening value or 2.5 times the cell radius.} and star particles. For dwarfs 3, 4 and 5 the softening is held at its $z=6$ proper length of 18.4~pc from that redshift onwards, although this makes very little practical difference.
Because our simulations do not include the necessary physics (such as molecular cooling) to resolve Population III stars and the first enrichment of the ISM, we impose a metallicity floor of $10^{-4}\ \mathrm{Z_\odot}$\footnote{Our choice is motivated by the critical metallicity for fragmentation such that a Population II cluster can be formed \citep[see e.g.][]{Schneider2012}. This choice is somewhat arbitrary, but we find that increasing the floor value by an order of magnitude has negligible impact on our results. In practice, our haloes are rapidly enriched above the floor value.}. For each dwarf, we carry out a simulation to $z=4$ with no feedback and with SNe (due to computational expense, we run the no feedback dwarf 5 simulation to $z=4.4$). Section~\ref{zDiscussion} contains details of additional simulations of dwarf 1 carried out with various modifications to our fiducial parameters to test convergence. \subsection{Results} Fig.~\ref{projections} shows density projections of the dark matter, gas and stars for our simulated dwarfs at $z=4$ (dwarf 5 is shown at $z=4.4$ to allow comparison between the runs with and without feedback). A variety of morphologies are present. Runs without feedback tend to produce highly compact gas and stellar discs. Recent mergers can give rise to warped structures, for example in dwarfs 1 and 4. With feedback, dwarfs 1, 2 and 5 also feature compact stellar discs similar to the no feedback simulations, although their orientation has changed. The gas morphology is more obviously changed, with more irregular and diffuse structure. In dwarfs 3 and 4, feedback has made a significant impact on the stellar structure, with significantly lower surface densities and the absence of a well defined disc. In dwarf 3, most of the gas has been cleared away, leaving a small dense core, while the feedback has almost completely evacuated the gas from dwarf 4.
\begin{figure*} \centering \includegraphics[width=1.0\textwidth]{figs/sfr.pdf} \caption{Star formation rates as a function of redshift for the central galaxies. For dwarf 4, the central galaxies of the two almost equal mass progenitor haloes are shown, including when the two subhaloes are present in the same halo after the merger (see main text for more details); the second of the two haloes is plotted with a dashed line. Mechanical feedback in general leads to more bursty star formation rates.} \label{sfr} \end{figure*} Fig.~\ref{mhalo_star} shows the stellar mass to halo mass ratio of the central galaxies formed in the simulations, normalised by the cosmic baryon fraction, as a function of halo mass, for 4 redshifts ($z=10$, 8, 6, 4). We also plot empirically derived abundance matching results from \cite{Behroozi2013} and \cite{Moster2018} for comparison, although we have heavily extrapolated the results to reach this mass range so they should be treated with caution. However, even with this caveat in mind, it can be seen that the majority of our simulated galaxies massively overproduce stars, lying several orders of magnitude above the abundance matching relations at all four redshifts. This is true for all simulations without SNe, where typically $10 - 60\%$ of the available baryons (taking that to be $f_\mathrm{b}M_\mathrm{halo}$) have been converted into stars, with variation of only a factor of a few between $z=10 - 4$. With feedback, there are mixed results. Dwarf 1 produces almost identical stellar to halo mass ratios with and without feedback at all redshifts, with only marginal suppression of star formation by $z=4$. Similarly, in dwarf 5 feedback has little impact on the evolution of stellar mass. For dwarf 2, at $z=10$, the ratio is about an order of magnitude lower in the run with feedback than without (although still somewhat high). However, the ratio increases with decreasing redshift and by $z=4$ the difference is slight. 
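The integrated baryon conversion efficiency plotted in Fig.~\ref{mhalo_star} reduces to the ratio $M_\star/(f_\mathrm{b}M_\mathrm{halo})$. A minimal sketch of the calculation (the masses below and the value $f_\mathrm{b}\approx0.157$, approximating the Planck baryon fraction, are illustrative assumptions, not values taken from the simulations):

```python
# Integrated baryon conversion efficiency: M_star / (f_b * M_halo).
# Illustrative values only; f_b ~ 0.157 approximates the Planck 2016
# cosmic baryon fraction Omega_b / Omega_m.

F_BARYON = 0.157  # cosmic baryon fraction (assumed value)

def conversion_efficiency(m_star, m_halo, f_b=F_BARYON):
    """Fraction of a halo's baryon budget converted into stars."""
    return m_star / (f_b * m_halo)

# A halo that turned 30% of its available baryons into stars:
m_halo = 3.0e9   # M_sun, illustrative
m_star = 0.3 * F_BARYON * m_halo
eff = conversion_efficiency(m_star, m_halo)
```

By construction, `eff` recovers the assumed 30\% conversion fraction; an efficiency near unity would mean the halo had converted essentially its entire baryon budget into stars.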
Dwarf 3 has a similar behaviour to dwarf 2, except its ratio drops relative to the no feedback simulation, eventually lying a factor of a few lower. Dwarf 4 is the only case where there is a dramatic suppression of star formation by feedback. This object has a major merger (with a mass ratio of $\sim1.5$) around $z=5.5$, so we treat the two progenitor haloes separately prior to their merger. Without feedback, both progenitor haloes have similar stellar to halo mass ratios to the other dwarfs, although they are individually of lower mass. With the inclusion of SN feedback, the ratio drops by approximately an order of magnitude at $z=10$ and this offset increases with time. By $z=4$, the progenitor haloes have merged according to the halo finder, although the central galaxies of the progenitors have not yet merged. For consistency, we now calculate the stellar to halo mass ratio for the final halo by considering the stellar mass of both of these galaxies. The ratio is now a factor of $\sim100$ lower than the simulation without feedback and is close to the abundance matching relations (bearing in mind their uncertainties at this mass). The reason for the increased effectiveness of the feedback in this case would appear to be that this object has evolved for most of its history as two independent systems that are less massive at a given redshift than the other simulated dwarfs, the shallower potential wells increasing the relative efficiency with which the SNe clear gas. This suggests that haloes in this mass range are very sensitive to the manner of their assembly. Having discussed the integrated efficiency of star formation, we now consider the star formation histories of our dwarfs shown in Fig.~\ref{sfr}. For dwarf 4, we plot results for both central galaxies as in Fig.~\ref{mhalo_star}. For dwarf 1, the SFRs are essentially the same in the runs with and without SNe. Star formation starts at around $z=11.5$ and rapidly climbs to $0.2\ \mathrm{M_\odot\ yr^{-1}}$ by $z=10$.
This rapid rise in star formation coincides with a merger at $z\sim11$. The SFRs remain around this level until $z=4$ in the no feedback run, apart from a merger-driven increase at $z\approx5.5$. The results of this merger are apparent in the highly disrupted gas and disc structure visible in Fig.~\ref{projections}. With SNe, the brief increase in SFR is arrested by the feedback and the SFR drops well below the no feedback rates. This burst of feedback is responsible for the more diffuse gas apparent in Fig.~\ref{projections}. The ability of the feedback to be effective during the later merger but not during the first merger is due to the amount of gas available. The subhalo gas fraction (relative to the total mass) at $z=5.5$ is approximately a quarter of that at $z=11.5$. In contrast to dwarf 1, SNe are able to suppress the SFR significantly in dwarf 2 above $z=6$. Without feedback, the SFR rises in a similar manner to dwarf 1, although not as rapidly. However, SNe are able to restrict star formation to a brief burst at around $z=11.5$ and another at $z=8$. It would appear that the calmer environment at the onset of star formation (i.e. no major merger), relative to dwarf 1, allows the SNe to be effective. Dwarf 2 experiences a gas rich merger around $z=6$ that leads to a large spike in SFR in both no feedback and feedback runs, with the rapid build up of gas overwhelming the feedback. Following this event, the SFR remains high in both runs, leading to the similar (high) stellar mass to halo mass ratio at $z=4$. A burst of efficient feedback around $z=4.5$ leads to a slight drop in SFR relative to the no feedback simulations, the results of which can be seen in the gas morphology in Fig.~\ref{projections}. In dwarf 3, without feedback, the SFR rises slowly from $z=14$, before becoming reasonably steady at a few $10^{-1}\ \mathrm{M_\odot\ yr^{-1}}$ from $z=10$ onwards.
This dwarf experiences no mergers of consequence, growing more slowly than dwarfs 1 and 2, probably as a result of being in a less dense environment. Once included, SNe are able to suppress star formation, but only following extended bursts of high SFRs. The feedback episodes are able to remove gas from the centre of the halo, giving rise to the morphology seen in Fig.~\ref{projections} and the lower final stellar mass. However, a sufficiently large mass of stars is formed in the bursts such that the galaxy still lies several orders of magnitude above the (extrapolated) abundance matching relations. \begin{figure*} \centering \includegraphics[width=1.0\textwidth]{figs/profiles.pdf} \caption{Radial profiles at $z=4$ for the various simulations, solid and dashed lines indicating runs with and without SNe, respectively. Vertical dotted lines denote the virial radius. Dwarf 5 without feedback is not shown as it was halted prior to this redshift; however, its $z=4.4$ profiles are consistent with the results from the other dwarfs. \textit{Left}: circular velocity profiles. While SN feedback systematically reduces the peak of circular velocity profiles, this reduction is only moderate (with the exception of dwarf 4). \textit{Centre}: total (i.e. dark matter, gas and stars) density profiles. Dotted lines show profiles from collisionless dark matter only simulations. \textit{Right}: the ratio of dark matter density in the simulations with baryonic physics as compared to the collisionless simulations (renormalised by the cosmic dark matter fraction).} \label{structure} \end{figure*} As mentioned previously, despite ending up with a $z=4$ halo mass similar to the other dwarfs simulated, dwarf 4 spends most of its history as two lower mass systems prior to a late major merger. Correspondingly, in the runs without feedback, the progenitors have lower SFRs than the other dwarfs, although this results in similar stellar to halo mass ratios (see Fig.~\ref{mhalo_star}).
Peaking at $0.05\ \mathrm{M_\odot\ yr^{-1}}$ by $z=9$, the SFR of both galaxies evolves in a similar fashion. There is a slight drop in SFR after $z=9$. The two haloes merge around $z=5.5$, leading to a rapid increase in star formation. Like dwarfs 2 and 3, with the addition of feedback, the initial onset of star formation is limited to a short burst. However, the system is even more efficiently cleared of gas, resulting in a complete lack of star formation until the merger occurs. Unlike the no feedback case, this merger is relatively dry so the merger-triggered star formation burst is severely curtailed. Dwarf 5 starts forming stars at $z\approx15$, rising to a high SFR after $z=10$. A large amount of variability can be seen, mainly corresponding to mergers. Feedback has very little impact on the SFR in an averaged sense, although it impacts the gas near the very centre of the halo enough to cause variations relative to the no feedback simulation. Fig.~\ref{structure} shows circular velocity profiles, total density profiles and the ratio of dark matter density in simulations with baryonic physics compared to collisionless (i.e. dark matter only) simulations at $z=4$. It can be seen that on the whole, the simulations give rise to extremely concentrated mass distributions. The circular velocity profiles are strongly peaked at very small radii (tens of parsecs), in some cases $>100\ \mathrm{km\ s^{-1}}$. The inclusion of SNe reduces the magnitudes of the peaks by a factor of a few. Dwarf 4, which has managed to significantly suppress star formation (as seen in Figs.~\ref{mhalo_star} and \ref{sfr}), is unique in preventing a peaked circular velocity profile. Instead, a gently rising profile reaches its peak value of $\sim30\ \mathrm{km\ s^{-1}}$ near the virial radius, where it converges with the no feedback profile.
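The circular velocity profiles discussed above follow from the enclosed mass; we assume here the standard definition $v_\mathrm{c}(r)=\sqrt{GM(<r)/r}$, which the text does not spell out explicitly. A minimal sketch (the particle distribution is a toy example, not simulation data):

```python
import numpy as np

G = 4.301e-6  # gravitational constant in kpc (km/s)^2 M_sun^-1

def circular_velocity(r_part, m_part, r_eval):
    """Circular velocity v_c(r) = sqrt(G M(<r) / r) from particle data."""
    r_part = np.asarray(r_part)
    m_part = np.asarray(m_part)
    r_eval = np.asarray(r_eval, dtype=float)
    # Enclosed mass at each evaluation radius.
    m_enc = np.array([m_part[r_part < r].sum() for r in r_eval])
    return np.sqrt(G * m_enc / r_eval)

# Toy example: 10^8 M_sun of particles spread inside 0.1 kpc.
rng = np.random.default_rng(0)
r_part = rng.uniform(0.0, 0.1, size=1000)   # radii in kpc
m_part = np.full(1000, 1.0e5)               # masses in M_sun, total 1e8
vc = circular_velocity(r_part, m_part, r_eval=[0.05, 0.1])
```

With $10^8\ \mathrm{M_\odot}$ inside $0.1$~kpc, this toy profile peaks at $\sim65\ \mathrm{km\ s^{-1}}$, of the same order as the strongly peaked profiles described above.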
The centrally concentrated mass distribution that gives rise to these strongly peaked circular velocity profiles can be seen in the central panel of Fig.~\ref{structure} in the form of radial profiles of total density (i.e. dark matter, gas and stars). Also plotted are profiles from collisionless simulations (dotted lines). These latter profiles are well fit by NFW profiles \citep{Navarro1997}. The introduction of baryons leads to a strong peak of gas and stars within $0.1\ \mathrm{kpc}$, overdense relative to the collisionless simulations by a factor of 100 in the centre. While the baryonic mass is dominant in this region, it can be seen in the rightmost panel of Fig.~\ref{structure} that the dark matter density has also been enhanced by a factor of $\sim10$. Here, we plot the ratio of the dark matter density to the density from the collisionless simulations (renormalised by the cosmic dark matter fraction). The central concentration of baryons has led to contraction of the dark matter. Only in dwarf 4 has the feedback managed to expel sufficient baryons to prevent this central overdensity, its total and dark matter density profiles lying marginally under the collisionless case. \begin{figure*} \centering \includegraphics[width=1.0\textwidth]{figs/size_mass.pdf} \caption{Projected stellar half-mass radius vs. stellar mass for the central galaxies at $z=10$, 8, 6 and 4. Once again, in the case of dwarf 4, the central galaxies of the two progenitor haloes are shown. The points are calculated as the mean over a sample of 500 random viewing angles, with the error bars marking one standard deviation (open symbols are for no feedback runs, while filled symbols are for simulations with SNe). Horizontal dotted lines indicate the gravitational softening length at a given redshift (at $z=4$, the blue dotted line corresponds to dwarfs 3, 4 and 5).
Also plotted are observations for local dwarfs \citep{McConnachie2012,Koposov2015} for comparison, although a comparison of these $z=0$ objects with our $z=4$ galaxies should be treated with caution. Most objects form most of their stars in a dense central region limited only by the softening length (dwarf 3 no feedback is an outlier, see text). SNe have little impact, except in dwarf 4.} \label{sizes} \end{figure*} \begin{figure*} \centering \includegraphics{figs/ks.pdf} \caption{A Kennicutt-Schmidt plot, showing SFR surface density as a function of gas surface density for our simulated dwarfs between $z=12-4$ ($z=12-4.4$ for dwarf 5 with no feedback). We only plot the most massive of the two dwarf 4 subhaloes for clarity, but the secondary subhalo exhibits similar behaviour. These are global measurements, taken within a radius containing 90\% of the total SFR, projecting down the gas angular momentum vector (open symbols are for no feedback runs, while filled symbols are for simulations with SNe). Also shown are observations, both global \protect\citep{Kennicutt1998,Wyder2009} and spatially resolved \protect\citep{Bigiel2008}. We also plot the power law fit of \protect\cite{Kennicutt1998} to the data of that work. Most of our simulated galaxies have high gas surface densities and SFR surface densities. A few galaxies experience strong bursts of feedback which drive them well beyond the boundaries of the plot as they are quenched, the few low surface density points representing transitions.} \label{ks} \end{figure*} \begin{figure*} \centering \includegraphics[width=1.0\textwidth]{figs/v_sigma.pdf} \caption{Stellar kinematic properties of the simulated dwarf galaxies at $z=4$. \textit{Left}: the peak rotational velocity, having aligned the system in the `disc' plane relative to the total stellar angular momentum vector. 
\textit{Centre}: the 1D velocity dispersion within the peak rotational velocity radius (or in the case of dwarf 4 with SNe, within the stellar half-mass radius, see text for details). \textit{Right}: the ratio of rotational velocity to velocity dispersion, a measure of rotational support. Also plotted are measurements of observations from \protect\cite{Wheeler2017} of dwarfs in the Local Group.} \label{v_sigma} \end{figure*} Fig.~\ref{sizes} shows the 2D projected stellar half-mass radius, $R_{1/2}$, as a function of stellar mass for the various galaxies at $z=10$, 8, 6 and 4. We make this measurement from 500 randomly distributed viewing angles\footnote{The results are well converged with the number of samples above 500.}. The mean of the sample is plotted, error bars indicating the 1$\sigma$ limit of the distribution. We mark with horizontal dotted lines the gravitational softening lengths. For reference, we also plot observations of local dwarfs \citep{McConnachie2012,Koposov2015}, although the comparison of these $z=0$ objects with our $z=4$ dwarfs should be taken with some caution. The majority of our galaxies have extremely compact stellar distributions. While the projections in Fig.~\ref{projections} show extended discs on the scale of hundreds of parsecs, most of the stellar mass is contained within a few tens of parsecs. In fact, the stellar half-mass radius is very close to the gravitational softening length, indicating that the objects have undergone catastrophic collapse halted only by our limited resolution\footnote{Dwarf 3 without feedback at $z=4$ is somewhat of an outlier. The lack of any disruption from mergers has allowed a more extended stellar distribution. As the majority of these stars are formed between $z=6-4$, this leads to a sudden increase in $R_{1/2}$ by $z=4$. The simulation with SNe has a significantly smaller $R_{1/2}$, but this is mainly because it has a proportionally lower mass of stars.}.
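The projected size measurement described above can be sketched as follows: draw random viewing directions, project the star particles onto the plane perpendicular to each sightline, and take the radius enclosing half the stellar mass. This is a simplified reconstruction of our procedure; the toy particle distribution below is illustrative only:

```python
import numpy as np

def projected_half_mass_radius(pos, masses, n_views=500, seed=42):
    """Mean and std of the 2D stellar half-mass radius over random sightlines."""
    rng = np.random.default_rng(seed)
    radii = []
    for _ in range(n_views):
        # Random direction on the sphere.
        v = rng.normal(size=3)
        v /= np.linalg.norm(v)
        # Component of each position perpendicular to the viewing direction.
        perp = pos - np.outer(pos @ v, v)
        r2d = np.linalg.norm(perp, axis=1)
        # Radius enclosing half the total mass.
        order = np.argsort(r2d)
        cum = np.cumsum(masses[order])
        half = np.searchsorted(cum, 0.5 * cum[-1])
        radii.append(r2d[order][half])
    return np.mean(radii), np.std(radii)

# Toy spherical distribution: R_1/2 is nearly independent of viewing angle.
rng = np.random.default_rng(0)
pos = rng.normal(scale=0.05, size=(2000, 3))  # positions in kpc
m = np.ones(2000)                             # equal-mass particles
r_mean, r_std = projected_half_mass_radius(pos, m, n_views=50)
```

For a spherically symmetric system the scatter over viewing angles is negligible, so the standard deviation across sightlines (the error bars in Fig.~\ref{sizes}) is itself a crude diagnostic of how aspherical the stellar distribution is.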
The two component subhaloes of dwarf 4 remain less concentrated with the inclusion of SNe, lying at $z=4$ within a factor of a few of the $z=0$ observations (dwarfs 2 and 3 also have larger $R_{1/2}$ at $z=10$ before the failure of the SNe at later times). Fig.~\ref{ks} shows the location of our objects on a Kennicutt-Schmidt plot (SFR surface density as a function of gas surface density) between $z=12-4$. We make these global measurements by taking the face-on `disc' projection defined by the total angular momentum vector of the gas within twice the 3D stellar half-mass radius (although it should be noted that not all of our galaxies produce discs). For a given projection of the galaxy, we find the 2D radius containing 90\% of the total SFR\footnote{The measurements are relatively insensitive to the exact fraction adopted, the points being shifted up and down the Kennicutt-Schmidt relation slightly.}. We then compute SFR surface density and mass surface density from the gas within this radius. Fig.~\ref{ks} also shows global \citep{Kennicutt1998,Wyder2009} and spatially resolved \citep{Bigiel2008} observations. Due to the extremely compact nature of most of our galaxies, the majority of our simulations appear in the same region of the Kennicutt-Schmidt plot as starburst galaxies. There is a trend for our simulations with feedback to produce galaxies with slightly lower SFR and mass surface densities than simulations without feedback. When galaxies experience an efficient burst of feedback (dwarfs 2, 3 and 4; see Fig.~\ref{sfr}) they move towards the lower end of the relation. However, because these bursts are very strong and tend to completely disrupt the star forming gas, we do not see a steady state at low surface densities, but the measurements in this quenched phase lie well beyond the boundaries of the plot.
For example, dwarf 4 with feedback only appears on the plot at $z=11.5$, 11 and 4.5 because it effectively has no star formation at other times (we do not plot the secondary subhalo of dwarf 4). Fig.~\ref{v_sigma} shows kinematic information of our simulated galaxies as a function of stellar mass at $z=4$ as compared to measurements of local dwarfs from \cite{Wheeler2017}. The left panel shows the rotational velocity. We take here the peak value of the stellar rotation curve, having first transformed into the `disc' plane of the galaxy by aligning with the total angular momentum vector of the stars. It should be noted that the kinematics from the simulations should be treated with caution given that the size of the systems approaches the gravitational softening length in those cases in which catastrophic collapse has occurred. The rotational velocities are well in excess of the observations, but not unexpected given the highly peaked circular velocity profiles (see Fig.~\ref{structure}). It should, however, again be noted that we are comparing high redshift kinematics with low redshift data; we would expect the circular velocity to scale as $(1+z)^{1/3}$ \citep[e.g.][]{Bullock2001} which might reduce the tension. There is a trend for the simulations with SN feedback to produce higher rotational velocity systems (except for dwarf 4). With SNe, however, the two subhaloes of dwarf 4 show no evidence of rotation and are therefore consistent with the observations at that mass which demonstrate little or no rotation. The central panel of Fig.~\ref{v_sigma} shows the 1D velocity dispersion, $\sigma$, for our systems. We measure the 3D velocity dispersion within a sphere whose radius corresponds to the peak rotational velocity\footnote{Taking other reasonable radii, such as one or two times the stellar half-mass radius yields the same results within $1\ \mathrm{km\ s^{-1}}$; these radii are all comparable.}, then obtain the 1D value by dividing by $\sqrt{3}$. 
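The dispersion measurement described above assumes isotropy when reducing the 3D dispersion to a 1D value, i.e. $\sigma_\mathrm{1D}=\sigma_\mathrm{3D}/\sqrt{3}$. A minimal sketch (the toy Gaussian velocities are illustrative, not simulation output):

```python
import numpy as np

def sigma_1d(vel):
    """1D velocity dispersion from an (N, 3) array of velocities,
    assuming isotropy: sigma_1D = sigma_3D / sqrt(3)."""
    vel = np.asarray(vel)
    v_mean = vel.mean(axis=0)
    # 3D dispersion: rms magnitude of the velocity relative to the mean.
    sigma_3d = np.sqrt(((vel - v_mean) ** 2).sum(axis=1).mean())
    return sigma_3d / np.sqrt(3.0)

# Toy isotropic Gaussian velocities with sigma = 5 km/s per axis,
# comparable to the dispersions of the dwarf 4 subhaloes with SNe:
rng = np.random.default_rng(1)
v = rng.normal(scale=5.0, size=(10000, 3))
sigma = sigma_1d(v)
```

For a genuinely isotropic system this recovers the per-axis dispersion ($5\ \mathrm{km\ s^{-1}}$ here); for anisotropic systems the $\sqrt{3}$ reduction is only an approximation.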
In the case of dwarf 4 with SNe (which shows no rotation) we use the stellar half-mass radii. Again, most simulations lie significantly above the local observations, a consequence of the highly compact systems (Fig.~\ref{sizes} demonstrates how much more extended observed galaxies in this mass range are). There is a steep relation of increasing $\sigma$ with increasing stellar mass. The two subhaloes of dwarf 4 with SNe lie close to the observations, with velocity dispersions of $\sim5\ \mathrm{km\ s^{-1}}$. Examining the ratio of the rotational velocity to the velocity dispersion provides a measure of the rotational support of the system. This is shown in the right panel of Fig.~\ref{v_sigma}. Most of our systems are rotationally supported, in contrast with the observations, which prefer rotation to be subdominant (although there are a few outliers and the uncertainties are large in some observations), with the caveat that we are comparing our $z=4$ objects with local observations. Only dwarf 4 with SNe is consistent with observations, producing a dispersion dominated system. We note that our dwarfs end up either as over-massive discs, when feedback is inefficient, or as a dim spheroidal, when feedback is efficient (as in dwarf 4) \citep[a similar pattern is found for Milky Way mass haloes by][]{Roskar2014}, but given that we only have one of the latter type of object we cannot make any claim to bimodality. \begin{figure} \centering \includegraphics{figs/profiles_met.pdf} \caption{Radially (mass-weighted) averaged stellar metallicity profiles at $z=4$. Outside of a few kpc, the profiles become very noisy, in some cases because of substructures. For dwarf 4, the profiles are centred on the most massive subhalo from the no feedback simulations.
While in dwarfs 1, 2, 3 and 5 inefficient SN feedback leads to over-enrichment, in the case of dwarf 4 this is reduced by two orders of magnitude to more reasonable values.} \label{profiles_met} \vspace{-4ex} \end{figure} Because dwarfs 1, 2, 3 and 5 produce such large masses of stars in a confined region, the resulting metal enrichment of the surrounding region is necessarily extremely high. Fig.~\ref{profiles_met} shows radial stellar metallicity profiles at $z=4$. Without SNe, the central tens of parsecs (which contain most of the stellar mass) are dominated by a stellar population of $\sim2\ \mathrm{Z_\odot}$. The metallicity drops rapidly through the disc region (on the order of $100$~pc, see also Fig.~\ref{projections}) to reach a metallicity ranging between $0.2\ \mathrm{Z_\odot}$ (dwarf 4) and $0.6\ \mathrm{Z_\odot}$ (dwarf 2) in the stellar `halo'. The metallicity gradient is then flat until the edge of the stellar distribution, after which the profiles are noisy due to the low stellar density and the presence of other subhaloes. With the exception of dwarf 4, the addition of SNe makes very little difference to the stellar metallicities, which is to be expected given the inefficiency of feedback in these systems (although, in dwarf 3, the stellar halo is curtailed at a smaller radius). For dwarf 4, SNe reduce the central metallicities by 2 orders of magnitude to $0.02-0.03\ \mathrm{Z_\odot}$. The resulting metallicity gradient is flat through the entire stellar distribution, out to $\sim1\ \mathrm{kpc}$ (the second subhalo appears in this radial profile at larger radii, as can also be seen in the no feedback profile). While the lower stellar metallicities are partially due to the lower overall stellar mass formed relative to the no feedback simulation, the ability of the SNe to expel metals from the centre of the halo is also key. Fig.~\ref{met4} shows mass-weighted gas metallicity projections of dwarf 4, without and with the inclusion of feedback. 
In the absence of feedback, metals remain where they have been deposited by the star particles, leading to high concentrations around the subhaloes. This can be seen in the projection, where the majority of gas (both inside and outside of the virial radius) remains at the metallicity floor of the initial conditions, $10^{-4}\ \mathrm{Z_\odot}$. Very small patches of super-solar metallicity gas can be seen around the subhaloes, while some metal enriched gas has been stripped during the merger, leaving trails. With the inclusion of feedback, gas of a few $10^{-2}\ \mathrm{Z_\odot}$ is widely distributed inside and outside of the virial radius. In the other dwarfs, the inclusion of feedback also allows metals to leave the halo ($\sim10^{-1}\ \mathrm{Z_\odot}$ at the virial radius), but approximately three orders of magnitude more stellar mass has been created to achieve this, i.e. the SNe are $\sim100$ times less efficient at ejecting metals. We reported a similar phenomenon in isolated simulations in \cite{Smith2018}, where inefficient SNe lead to slow moving, highly metal enriched outflows simply due to the number of SNe occurring. Nonetheless, dwarf 4 demonstrates that it is possible for dwarfs to efficiently enrich the CGM. \begin{figure} \centering \includegraphics[width=0.4\textwidth,trim=0 8 0 0,clip]{figs/projections4_met_cbar.pdf} \includegraphics[trim=0 0 0 6,clip]{figs/projections4_met.pdf} \caption{Mass-weighted projections of gas metallicity for dwarf 4 at $z=4$, without (\textit{left}) and with (\textit{right}) SN feedback. The virial radius of the halo is marked with a green circle.
There is a stark difference in the gas metallicity distribution, which is much more homogeneous in the run with SN feedback, allowing the CGM to be enriched to a few times $10^{-2}\ \mathrm{Z_\odot}$.} \vspace{-4ex} \label{met4} \end{figure} \begin{figure*} \centering \includegraphics[width=1.0\textwidth]{figs/flow_4.pdf} \caption{Mass outflow and inflow rates as a function of redshift for dwarf 4, with and without feedback. Rates are calculated across shells of thickness $50$~pc located at $0.25R_\mathrm{vir}$, $0.5R_\mathrm{vir}$ and $R_\mathrm{vir}$. The SFR is also shown for reference. With SN feedback, a burst of star formation causes a subsequent outflow with mass loading factors between $10-100$ depending on the radius (comparing peak SFR to peak outflow rate) as well as strongly suppressing inflow. Sudden increases in inflow and outflow rates in both feedback and no feedback runs near $z=4$ are largely due to the merger.} \label{flow4} \vspace{-4ex} \end{figure*} Fig.~\ref{flow4} shows gas mass outflow and inflow rates across $0.25R_\mathrm{vir}$, $0.5R_\mathrm{vir}$ and $R_\mathrm{vir}$ as a function of redshift for dwarf 4 with and without SN feedback. SFRs are also plotted for comparison. The outflow rates are calculated as: \begin{equation} \dot{M}_\mathrm{out} = \frac{\sum_i m_i v_{\mathrm{out},i}}{\Delta r} \,, \end{equation} where the sum is over all gas cells within a shell of thickness $\Delta r = 50\ \mathrm{pc}$ centred on the target radius that have a positive radial outflow velocity, $v_\mathrm{out}$. The inflow rates are calculated in the same manner, but for all cells that have a negative radial velocity. In the absence of SN feedback, outflow rates mostly remain well below inflow rates, with outflows only arising from the motion of substructures and mergers. For example, the dramatic increase in outflow rates just before $z=4$ is due to the motion of the merging subhalo within the primary halo. 
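The shell-based flow measurement defined by the equation above can be sketched directly; in practice the cell positions (relative to the halo centre), velocities and masses come from the simulation snapshot, whereas the values below are toy inputs:

```python
import numpy as np

def flow_rates(pos, vel, masses, r_shell, dr=0.05):
    """Mass outflow and inflow rates through a shell of thickness dr
    centred on r_shell, following Mdot = sum_i m_i v_rad,i / dr over
    the cells in the shell (units follow the inputs)."""
    r = np.linalg.norm(pos, axis=1)
    v_rad = (pos * vel).sum(axis=1) / r  # radial velocity component
    in_shell = np.abs(r - r_shell) < 0.5 * dr
    out = in_shell & (v_rad > 0)         # outflowing cells
    inf = in_shell & (v_rad < 0)         # inflowing cells
    mdot_out = (masses[out] * v_rad[out]).sum() / dr
    mdot_in = -(masses[inf] * v_rad[inf]).sum() / dr
    return mdot_out, mdot_in

# Toy example: two cells in the shell at r = 1 (one outflowing,
# one inflowing) and one cell outside the shell.
pos = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [3.0, 0.0, 0.0]])
vel = np.array([[10.0, 0.0, 0.0], [0.0, -5.0, 0.0], [1.0, 0.0, 0.0]])
m = np.array([2.0, 2.0, 2.0])
mdot_out, mdot_in = flow_rates(pos, vel, m, r_shell=1.0, dr=0.1)
```

Dividing the peak outflow rate measured this way by the peak SFR then gives the time-offset mass loading factors quoted in the text.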
With SN feedback, after the bursts of star formation, outflow rates increase dramatically while inflow is suppressed. Outflows are often characterised in terms of mass loading factor, i.e. the ratio of outflow rate to SFR. The outflows across the three radii are offset from the peak of the SFR due to the time difference between star formation and SNe exploding as well as the travel time of the outflow, so an instantaneous mass loading factor is not a useful quantity. However, comparing the peak SFRs and outflow rates yields mass loading factors of approximately 90, 60 and 30 across $0.25R_\mathrm{vir}$, $0.5R_\mathrm{vir}$ and $R_\mathrm{vir}$, respectively. Following the burst of star formation at $z=11$, inflow across $0.25R_\mathrm{vir}$ is essentially halted until after $z=10$. The inflow rates remain a factor of $\sim5$ below the corresponding no feedback simulation rates until the merger begins at $z\approx5.5$. At this point, it appears that the UV background is hindering the ability of gas to condense into the centre of the halo. The second burst of star formation after $z=5$ also produces a brief outflow, preventing further star formation. None of the other dwarfs simulated are able to produce strong outflows. Dwarfs 2 and 3 have very brief outflows after bursts of star formation and subsequent efficient feedback, but they have mass loading factors $<2$ and barely suppress inflow rates except in the very centre of the halo. \vspace{-4ex} \section{Discussion} \label{zDiscussion} \begin{figure*} \centering \includegraphics{figs/dens_pdf.pdf} \caption{Distribution of the densities of gas in which SNe occur. Dwarfs 1 and 4 are compared, with and without feedback. The redshift evolution of the PDFs is shown (cumulatively). For numerical reasons, these PDFs are for all SNe that occur in the high-resolution region, rather than being tied explicitly to the host halo of a given dwarf.
However, the vast majority of SNe occur in the host halo, so these PDFs are representative. Most SNe occur in gas with a density of approximately $10^4\ \mathrm{cm}^{-3}$ for dwarf 1 (with and without feedback) and dwarf 4 without feedback. However, with the inclusion of SN feedback in dwarf 4, the mean density drops by three orders of magnitude.} \label{dens_pdf} \end{figure*} \subsection{Why is SN feedback inefficient?} In all of our simulated galaxies except for dwarf 4, SN feedback is unable to prevent the catastrophic collapse of gas and resulting runaway star formation. The reason for this inefficiency appears to be that most SNe occur in very dense gas. This can be seen in Fig.~\ref{dens_pdf}, which shows the distribution of gas density in which SNe explode for all SNe above a given redshift for dwarfs 1 and 4, with and without feedback. Comparing the PDFs for dwarf 1 at $z=11$ for the no feedback and feedback simulations, both peak at a high density of $\approx10^{3}\ \mathrm{cm^{-3}}$. With feedback, there is a slight tail to low density, indicating that the feedback has been able to clear some gas. A short time later at $z=10$, the peak of the distribution is at $\approx10^{4}\ \mathrm{cm^{-3}}$. By contrast, while without feedback dwarf 4 has a similar PDF to dwarf 1, once SNe are included the peak of the distribution is at $\sim3\ \mathrm{cm^{-3}}$ for all redshifts. With SNe as the sole form of feedback, the decisive criterion determining the success or failure of the feedback is whether it is able to clear the dense gas immediately. If at any point it cannot, then subsequent SNe will become increasingly inefficient, eventually leaving the feedback unable to have a sufficient impact on galaxy properties. Dwarf 1 fails the criterion immediately, as does dwarf 5. Dwarf 2 succeeds twice but is overwhelmed by a sudden influx of gas during a wet merger at $z\sim6$.
Dwarf 3 is partially successful, but the bursts reach too high a SFR before the system is quenched, so the net reduction in stellar mass is too low. Dwarf 4 is unique amongst our simulations in being completely successful, mainly due to its merger history. As described in the previous section, while the final halo is comparable in mass to dwarf 1, it is formed from a major merger (with a mass ratio $\sim1.5$) late in its history ($z\approx5.5$). This means that it spends most of its evolution as two smaller haloes. Having a lower halo mass makes it easier to clear gas for two reasons. Firstly, there is a shallower potential well to fight against. Secondly, the inflow of gas onto the haloes is reduced relative to dwarf 1 (even without feedback, the SFRs for the two progenitor haloes are lower than for dwarf 1; see Fig.~\ref{sfr}). This means that while at $z=4$ dwarf 1 and dwarf 4 have the same virial mass to within 0.05~dex, they have evolved as if they were very different mass systems due to the manner of their assembly. It may appear at first glance that the inefficiency of SNe in dense gas is a result of shortcomings in our method of feedback injection, i.e. numerical overcooling. However, our mechanical scheme is designed to help mitigate the effects of under-resolved SN remnants by injecting the correct momentum relative to the stage of their evolution that can be resolved. Full details can be found in \cite{Smith2018}, where we also demonstrate using isolated simulations that this scheme is numerically robust (see also the following section where we discuss convergence with resolution). For extremely dense gas, at most tractable resolutions, the SN remnant will remain entirely unresolved, so our scheme will inject the final momentum achieved during the Sedov-Taylor phase.
We make use of a fitting function to high resolution simulations of individual SNe \citep[see][]{Blondin1998, Thornton1998, Geen2015, Kim2015, Martizzi2015, Kimm2015}, \begin{equation} p_\mathrm{fin} = 3 \times 10^5\ \mathrm{M_\odot\ km\ s^{-1}}\, E^{16/17}_{51} n^{-2/17}_{\mathrm{SN}} Z^{-0.14}_\mathrm{SN}\,, \label{p_fin} \end{equation} where $E_{51} = \left(E_\mathrm{SN} / 10^{51}\ \mathrm{ergs}\right)$ is the energy of the SN (for our individually time-resolved SNe, $E_{51}\equiv1$), while $n_\mathrm{SN}= \left(n_\mathrm{H} / \mathrm{cm^{-3}}\right)$ and $Z_\mathrm{SN} = \mathrm{MAX}\left( Z/Z_{\rm \odot}, 0.01\right)$ are the hydrogen number density and metallicity of the ambient gas, respectively. Comparing the peaks of the density PDFs at SN sites, it can therefore be seen that in dwarf 1 the momentum budget per SN is reduced to $\sim0.39$ of that in dwarf 4. Additionally, if metals are not cleared efficiently, this will also impact the available momentum. Given that the typical gas metallicity in the centre of dwarf 1 is approximately a factor of 100 higher than in dwarf 4, this reduces the momentum budget again by half, meaning that in total only 20\% of the momentum budget per SN is available relative to dwarf 4. In addition to impacting the small scale evolution of the SN remnant, the build up of a central concentration of dense gas will make it more difficult for the momentum injection from SNe to clear material from the galaxy because the mass of material that must be swept up in order for an outflow to escape becomes proportionally higher. These two factors lead to a state of runaway star formation if at any point the feedback is unable to prevent the build up of dense gas, particularly if inflow rates increase suddenly (e.g. due to mergers, be they major or minor).
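The quoted reduction factors follow directly from eq.~\ref{p_fin} and can be verified numerically. The sketch below is illustrative (it is not the simulation code); the representative densities and the factor-of-100 metallicity contrast are taken from the text, and the final line anticipates the $100\ \mathrm{cm^{-3}}$ density-cap experiment discussed later.

```python
def p_fin(n_H, Z_rel, E51=1.0):
    """Terminal SN momentum (Msun km/s) from the fitting function
    p_fin = 3e5 * E51^(16/17) * n_SN^(-2/17) * Z_SN^(-0.14),
    with Z_SN = max(Z/Zsun, 0.01)."""
    Z_SN = max(Z_rel, 0.01)
    return 3e5 * E51**(16.0 / 17.0) * n_H**(-2.0 / 17.0) * Z_SN**(-0.14)

# Density effect: SNe in ~1e4 cm^-3 gas (dwarf 1) vs ~3 cm^-3 (dwarf 4)
density_factor = p_fin(1e4, 0.01) / p_fin(3.0, 0.01)   # ~0.39
# Metallicity effect: ambient Z ~100x higher in the centre of dwarf 1
metal_factor = p_fin(1e4, 1.0) / p_fin(1e4, 0.01)      # ~0.5
total_factor = density_factor * metal_factor           # ~0.2
# Capping the ambient density at 100 cm^-3 instead of 1e4 cm^-3 raises
# the momentum budget per SN by ~1.7 (used in the density-cap experiment)
cap_factor = p_fin(1e2, 0.01) / p_fin(1e4, 0.01)
```

The weak $n^{-2/17}$ dependence is why even a $\sim3000\times$ density contrast only costs a factor of $\sim2.6$ in momentum; the compounding metallicity term brings the total down to $\sim20\%$.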
\vspace{-2ex} \subsection{The impact of the choice of parameters on our results}\label{Section_parameters} Having discussed the reasons why SN feedback is inefficient in our fiducial simulations, we now explore the degree to which our results are generally applicable as opposed to being dependent on our choice of parameters. Fig.~\ref{parameters} shows the SFR as a function of redshift and the stellar mass to halo mass ratio as a function of halo mass (at $z=10$, 8, 6 and 4) for our fiducial simulations of dwarf 1 as well as 9 resimulations in which we vary various parameters of our models. Again, we plot the (heavily extrapolated) $z=4$ abundance matching relations from \cite{Behroozi2013} and \cite{Moster2018}. We also indicate the integrated star formation efficiency that corresponds to the stellar mass equalling the (still extrapolated) \cite{Moster2018} $z=0$ prediction\footnote{We take the halo mass at $z=0$ for dwarf 1 from a dark matter only simulation. Using the abundance matching relation from \cite{Moster2018} we obtain a predicted stellar mass, $M_{*,\mathrm{Moster}}(z=0)$. Even at this redshift, we must still extrapolate down in halo mass by 0.5 dex. We can then determine the equivalent integrated star formation efficiency for a given halo mass (i.e. at a higher redshift) if the galaxy had the predicted $z=0$ stellar mass. This is the dashed line in Fig.~\ref{parameters}.}. If the galaxy exceeds this value, this means that it has already formed more than the $z=0$ stellar mass prediction (although this should be taken as a rough guide because of the effects of extrapolation and intrinsic scatter). It can be seen that this is the case for the majority of our simulations, often by $z=10$. We can see that our choice to delay turning on the \cite{FG2009} UV background until $z=9$ is similar to switching it on at $z=11.7$, apart from a slight reduction in SFRs between $z=10$ and $z=9$.
This shows that the assumed UV background is unable to prevent the catastrophic build up of gas at $z=10$. We note, however, that this conclusion rests on the approximation of a homogeneous UV background as opposed to locally varying radiation fields. Dwarfs that are in crowded regions or are satellites of larger galaxies may be bathed in ionizing radiation from nearby external sources, assuming that those galaxies are able to clear/ionize sufficient local gas to achieve a high enough escape fraction for UV photons. The failure of the UV background to quench our dwarfs is not inconsistent with other works that indicate the existence of a $z=0$ threshold mass of a few $10^9\ \mathrm{M_\odot}$ below which the UV background is effective \citep[see e.g.][]{Okamoto2008, Shen2014, Sawala2015, Wheeler2015, Fitts2017}, as our dwarfs will have $z=0$ virial masses in excess of $10^{10}\ \mathrm{M_\odot}$. Perhaps more importantly, we have neglected photoionization from the stars formed in the galaxies themselves. This may be able to prevent the build up of dense gas in star forming regions \citep[see e.g.][]{Vazquez-Semadeni2010,Walch2012,Dale2014,Sales2014,Rosdahl2015}. The formation of H\,\textsc{ii} regions can result in SNe occurring in lower density regions, enhancing their efficiency \citep[see e.g.][]{Geen2015}. However, resolving H\,\textsc{ii} regions in these circumstances is challenging (the radius of a Str{\"o}mgren sphere around a typical O star embedded in $10^4\,\mathrm{cm^{-3}}$ gas is sub-parsec). While it is possible to try to compensate for the effect of unresolved H\,\textsc{ii} regions on SN remnant evolution in a subgrid manner \citep[e.g.][]{Kimm2017}, this is beyond the scope of this work. However, we note that pre-SN stellar feedback is one example of additional physics that can enhance the ability of SNe to regulate galaxy properties, as we discuss below.
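The sub-parsec Str{\"o}mgren radius quoted above can be checked with a back-of-envelope estimate. The values $Q\approx10^{49}$ ionizing photons $\mathrm{s^{-1}}$ (typical O star) and $\alpha_B\approx2.6\times10^{-13}\ \mathrm{cm^3\ s^{-1}}$ (case-B recombination at $\sim10^4$~K) are assumed here for illustration.

```python
import math

def stromgren_radius_pc(Q49=1.0, n_H=1.0e4):
    """Stromgren radius R_S = (3 Q / (4 pi n_H^2 alpha_B))^(1/3).
    Q49 is the ionizing photon rate in units of 1e49 photons/s
    (of order unity for a typical O star, an assumed value) and
    alpha_B ~ 2.6e-13 cm^3/s is the case-B recombination coefficient."""
    alpha_B = 2.6e-13  # cm^3 s^-1, assumed value at ~1e4 K
    R_cm = (3.0 * Q49 * 1e49
            / (4.0 * math.pi * n_H**2 * alpha_B))**(1.0 / 3.0)
    return R_cm / 3.0857e18  # cm -> pc

# O star embedded in 1e4 cm^-3 gas: R_S ~ 0.15 pc, i.e. sub-parsec
```

Since $R_S\propto n_\mathrm{H}^{-2/3}$, the same star in $100\ \mathrm{cm^{-3}}$ gas would ionize a region of several parsecs, which is far easier to resolve.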
The use of pressure floors to prevent artificial fragmentation is a subject of some debate in the literature. We discussed the impact of adopting such a technique in some detail in \cite{Smith2018}, so we refrain from an in-depth discussion here. Nonetheless, we tested the impact on our zoom-in simulations by resimulating dwarf 1 without a pressure floor. As can be seen from Fig.~\ref{parameters}, this has a negligible impact on our results. The lack of a floor seems to produce slightly more clustered SNe, leading to a reduction in SFR by a factor of a few between $z\sim10$ and $7$. However, in general the SFR is similar to the fiducial simulation and the $z=4$ stellar mass is the same to within 4\%. Increasing the star formation threshold by an order of magnitude to $100\ \mathrm{cm^{-3}}$ also produces more clustered SNe at early times, allowing the feedback to quench star formation at $z=10$. This leads to a reduction in stellar mass relative to the fiducial case by a factor of a few at $z=8$. However, this is still not enough to prevent the build up of dense gas at later times. From $z=7$ onwards, the SFR is similar to the fiducial case, leading to a reduction of the $z=4$ stellar mass by a factor of only 1.3. We further carry out a simulation in which we modify the equation used to calculate the final momentum of a SN remnant after the Sedov-Taylor phase (eq.~\ref{p_fin}) such that the dependence on ambient gas density is capped at $100\ \mathrm{cm^{-3}}$. This is a crude approximation to the idea that local stellar feedback may have prevented surrounding gas reaching high density prior to the first SN occurring. Imposing this density cap increases the momentum budget per SN by a factor of 1.7 relative to SNe occurring in gas at $10^{4}\ \mathrm{cm^{-3}}$ (as in Fig.~\ref{dens_pdf}). Of course, the gas itself is still at high density, so it continues to present an obstacle to efficient clearing of material from a hydrodynamical standpoint.
Nonetheless, with this caveat in mind and a very moderate increase in the momentum budget, this simulation results in a factor of 3 lower stellar mass at $z=4$. This hints that the need to regulate local gas density is important, but also demonstrates that such a simple modification to the subgrid scheme is not sufficient to obtain realistic galaxy properties. \begin{figure*} \centering \includegraphics{figs/modified.pdf} \caption{SFR as a function of redshift (\textit{left}) and integrated star formation efficiency (\textit{right}) for simulations of dwarf 1 with a variety of alternative parameters. The results are split into two rows for clarity. Simulations are at the fiducial gas cell mass resolution of 287~$\mathrm{M_\odot}$ unless denoted high resolution (35.9~$\mathrm{M_\odot}$) or ultra-high resolution (15~$\mathrm{M_\odot}$). The simulations shown are as follows. \textit{Top}: no feedback (black), our fiducial SNe simulation (red), the \protect\cite{FG2009} UV background is turned on from our first available tabulated redshift of $z=11.7$ (green), the pressure floor is turned off (blue), the star formation density threshold is increased by a factor of 10 to $100\ \mathrm{cm^{-3}}$ (purple), we impose a cap of $100\ \mathrm{cm^{-3}}$ on the density that is used to determine the maximum momentum that can be injected for a SN (see eq.~\ref{p_fin}) (yellow). \textit{Bottom}: the fiducial SNe simulation is repeated in these panels for reference (red), high resolution (cyan), ultra-high resolution (pink), fiducial resolution with the star formation efficiency increased by a factor of 10 to 15\% (brown), ultra-high resolution with the star formation efficiency increased to 15\% (orange), fiducial resolution with the pressure floor turned off and the star formation efficiency set to 100\% (light green). Abundance matching relations at $z=4$ \protect\citep{Behroozi2013,Moster2018} are shown, although they are extrapolated into this mass range.
The dashed black line indicates the integrated star formation efficiency at a given halo mass if the stellar mass equalled that predicted from the $z=0$ halo mass by the \protect\cite{Moster2018} relation (still slightly extrapolated to this mass, even at $z=0$). If a simulation exceeds this line at any point, the galaxy will overshoot the $z=0$ abundance matching relation even if it does not form any more stars.} \label{parameters} \end{figure*} Increasing the number of resolution elements in the zoom-in region by a factor of $2^3$ (which we label as `high resolution'), giving a mass resolution of $191\ \mathrm{M_\odot}$ and $35.9\ \mathrm{M_\odot}$ for dark matter particles and gas cells, respectively, has very little impact on the results. While the SFR shows slightly more variation than the fiducial resolution simulation and there is a suppression of star formation briefly between $z=5.5$ and $z=5$, even with the increased resolution the feedback is unable to prevent runaway star formation beginning at early times. This leads to a $z=4$ stellar mass that only differs from the fiducial simulation by a factor of $1.2$. We also further increase the gas cell mass resolution to $15\ \mathrm{M_\odot}$ (labelled `ultra-high resolution')\footnote{Our refinement/derefinement strategy keeps the cells within a factor of 2 of the target mass. At $15\ \mathrm{M_\odot}$ this means that a substantial number of star particles are formed with an initial mass lower than our fiducial SN ejecta mass of $10\ \mathrm{M_\odot}$. Therefore, for this simulation we drop the ejecta mass to $5\ \mathrm{M_\odot}$. We have carried out additional tests (not shown) to confirm that this has a negligible impact on our results.}. This results in a far more bursty SFR as outflows are stronger.
This is consistent with our previous tests in isolated setups that demonstrate that strong outflow generation is difficult to achieve with a mass resolution coarser than $20\ \mathrm{M_\odot}$ (\citealt{Smith2018}, see also discussions in \citealt{Kimm2015}, \citealt{Hu2016} and \citealt{Hu2019}). While this means that the results do not converge well with resolution, the galaxy still exceeds the predicted $z=0$ stellar mass as early as $z=8$. At the fiducial resolution, increasing the star formation efficiency, $\epsilon_\mathrm{SF}$, by a factor of 10 to 15\% leads to significantly different behaviour. The SFR rises faster and strong clustering of SNe leads to efficient launching of outflows and the suppression of star formation. Star formation proceeds in short bursts for the entire duration of the simulation. Despite this, the $z=4$ stellar mass is only reduced by slightly over an order of magnitude, leaving it over 2 orders of magnitude above the (extrapolated) abundance matching relations and an order of magnitude larger than dwarf 4 with the fiducial star formation parameters. Failing to match the abundance matching relations at this redshift is not necessarily a failure in and of itself because of the uncertainties involved at this mass range. However, at $z=4$, the galaxy has just reached the predicted $z=0$ stellar mass. Given that there are no indications that it has been conclusively quenched at $z=4$, this suggests that the galaxy may well end up with an unphysically large stellar mass at $z=0$. Repeating this experiment with the ultra-high resolution (decreasing cell mass by a factor of 19) reveals similar results, actually resulting in a slightly higher final stellar mass. Finally, we try an extreme choice of parameters in an attempt to reduce the stellar mass further. We turn off the pressure floor and use $\epsilon_\mathrm{SF}=100\%$.
This leads to extremely rapid star formation and a concentrated burst of SN feedback that is able to completely quench the galaxy, expelling most of the gas. Star formation does not resume by $z=4$. The result is a reduction in $z=4$ stellar mass by almost 2 orders of magnitude. While this is still too high relative to the extrapolated abundance matching relations, it is possible that this galaxy would move onto the relation at lower redshift. While this may be seen as a successful solution, a more cautious interpretation is that, since we need to push our star formation model to its extremes in order to be successful, we are likely neglecting other important physical processes that would alleviate the need for very high values of $\epsilon_\mathrm{SF}$ in the first place. Selecting the appropriate value of $\epsilon_\mathrm{SF}$ to use in galaxy simulations is non-trivial, particularly in the case where star forming regions may be partially resolved. It is important to establish over what time and length scales the efficiency is averaged and the degree to which these scales are relevant to the scales and physics resolvable in the simulation. We have adopted a fiducial value of $1.5\%$, which represents an average over a GMC and over a cloud-scale free-fall time \citep[see e.g.][]{Krumholz2007}. The value itself can be considered representative for a `typical' Milky Way (MW) star forming region, although observations reveal a large scatter of up to 0.8 dex \citep[see e.g.][]{Murray2011,Lee2016}. One way of explaining this scatter is to invoke a (magneto)-turbulent model of GMC star formation to modulate the efficiency per free-fall time \citep[e.g.][]{Krumholz2005,Padoan2011,Hennebelle2011,Federrath2012}. In more extreme environments, the deviation from the standard SF relations can be even more severe.
Some high-redshift disc and starburst galaxies have been reported to have larger $\epsilon_\mathrm{SF}$ by a factor of at least 10 \citep[see e.g.][]{Daddi2010,Genzel2010}, while in the MW's Central Molecular Zone (CMZ), a possible analog for high redshift star formation environments, the efficiency appears to be a factor of 10--100 lower (see e.g. \citealt{Longmore2013}; however, see e.g. \citealt{Sharda2018} and \citealt{Federrath2016} for applications of (magneto)-turbulent SF models to high redshift and MW CMZ environments, respectively). Additionally, in certain circumstances it is possible that while an average over long timescales yields some value of the efficiency, it may in fact vary episodically on smaller scales, either due to regulation by turbulent pressure \citep[see e.g.][]{Kruijssen2014} or by feedback regulation \citep[see e.g.][]{Grudic2018}. The upshot of both of these scenarios is that using a large spatial scale and `long' timescale averaged value of $\epsilon_\mathrm{SF}$ may artificially smooth out star formation and, crucially for this work, SN rates. The reason for the increase in SN feedback efficiency as a result of increasing $\epsilon_\mathrm{SF}$ is twofold. Firstly, it leads to more clustered SNe that are able to work together to drive outflows. Secondly, it avoids the issue of building up high density gas by efficiently converting gas into stars before the problem arises. Care must be taken, however, that such high values of the efficiency do not represent an unphysical removal of gas. As the gas consumption time is then effectively the free-fall time, above $100\ \mathrm{cm^{-3}}$ this becomes comparable to the time before the first SNe explode, meaning that most, if not all, of the local gas will have been converted into stars, significantly dropping local density for subsequent SN events.
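The timescale comparison above is easily made explicit. The following worked estimate is illustrative; the mean molecular weight $\mu=1.4$ and the $\sim3$--4~Myr lifetime of the most massive SN progenitors are assumed typical values, not quantities taken from our simulations.

```python
import math

def t_ff_myr(n_H, mu=1.4):
    """Free-fall time t_ff = sqrt(3 pi / (32 G rho)) in Myr for gas of
    hydrogen number density n_H [cm^-3]; mu = 1.4 (assumed) accounts
    for helium in the mean mass per hydrogen nucleus."""
    G = 6.674e-8       # cm^3 g^-1 s^-2
    m_H = 1.6726e-24   # g
    rho = mu * m_H * n_H
    return math.sqrt(3.0 * math.pi / (32.0 * G * rho)) / 3.156e13

# t_ff(100 cm^-3) ~ 4-5 Myr, comparable to the ~3-4 Myr delay before
# the first SNe explode; t_ff(1e4 cm^-3) ~ 0.4 Myr, so at the densities
# typical of our SN sites the gas is consumed well before the first SN
```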
If the internal structure of star-forming regions is well resolved, this may not be particularly problematic because the hydrodynamics should correctly follow the fragmentation of the region without recourse to `fudge factors'. However, if the region is unresolved, using an efficiency of $100\%$ will quickly convert the entire mass of the region into stars, which is likely unphysical. In other words, if we are confident that we fully resolve all the relevant small scale processes and timescales (for example, that our hydrodynamics will correctly capture effects such as turbulent support, or that our subgrid feedback prescriptions are fully physical), then we can use a high star formation efficiency coupled with some smaller scale restrictions on which gas can form stars (e.g. virial parameter, Jeans unstable gas etc.) and rely on these processes to correctly regulate the resulting SFRs. If not, then the results are likely to be erroneous. For example, in a scenario such as that described by \cite{Kruijssen2014}, if we fail to resolve the turbulent pressure (and other relevant small scale details) that leads to episodic star formation, then we will entirely miss the low efficiency section of the cycle. In our case, it is likely that we sit somewhere in between these two cases. Our fiducial choice of a fixed $\epsilon_\mathrm{SF}=1.5\%$ is possibly too conservative. On the other hand, it is not clear that we capture the small scale structure and turbulence of the ISM sufficiently to justify 100\%, probably leading to the unphysically rapid consumption of star forming regions by gas `deletion' and subsequently overpowered SN feedback. It should be noted that an $\epsilon_\mathrm{SF}$ of roughly this magnitude appears to be required to regulate SF, as using 10 times our fiducial value also failed to regulate star formation.
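Turbulence-regulated models of the kind invoked above tie $\epsilon_\mathrm{SF}$ to the local Mach number and virial parameter by integrating a log-normal density PDF above a critical density. A minimal sketch follows, using the \cite{Krumholz2005} threshold form as written by \cite{Federrath2012}; this particular functional form and all parameter values are illustrative choices, not the exact prescription used in our experiments.

```python
import math

def eps_ff(mach, alpha_vir, b=0.4, eps_core=0.5, phi_t=1.0, phi_x=1.12):
    """Illustrative turbulence-regulated star formation efficiency per
    free-fall time (Krumholz & McKee 2005 log-normal threshold form, in
    the notation of Federrath & Klessen 2012). Parameter values here
    (b, eps_core, phi_t, phi_x) are assumed for illustration only.

    sigma_s^2 = ln(1 + b^2 M^2)          (log-normal PDF width)
    s_crit    = ln((pi^2/5) phi_x^2 alpha_vir M^2)
    eps_ff    = eps_core/(2 phi_t)
                * (1 + erf((sigma_s^2 - 2 s_crit) / sqrt(8 sigma_s^2)))
    """
    sigma_s2 = math.log(1.0 + b * b * mach * mach)
    s_crit = math.log((math.pi**2 / 5.0) * phi_x**2 * alpha_vir * mach**2)
    arg = (sigma_s2 - 2.0 * s_crit) / math.sqrt(8.0 * sigma_s2)
    return eps_core / (2.0 * phi_t) * (1.0 + math.erf(arg))

# More strongly bound gas (lower alpha_vir) yields a higher efficiency;
# typical inputs give values at the percent level
```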
Furthermore, we have experimented with the adoption of a SF prescription that uses a variable efficiency based on local turbulent gas properties (with a prescription similar to \citealt{Kimm2017}). This scheme attempts to infer the likely (unresolved) turbulent Mach number, $\mathcal{M}$, and virial parameter, $\alpha$, based on the resolved local velocity gradients. These are then used as inputs into the analytic star formation law of \cite{Padoan2011} \citep[see also this formalism explored in][]{Federrath2012}. This derives a star formation efficiency per free-fall time by calculating the fraction of gas above some critical density, determined by considering the particular log-normal density distribution of gas expected for the given values of $\mathcal{M}$ and $\alpha$. We leave a detailed discussion to a future work (Smith et al. 2019 in prep.), but find it worthwhile to report the tentative result that in this specific case there is little impact on the evolution of Dwarf 1. This is largely because we find that in our simulations these models typically give $\epsilon_\mathrm{SF}\approx1\%-20\%$, which we have already demonstrated is not high enough to sufficiently enhance the strength of SN feedback such that it makes a difference to the evolution of our dwarfs. Nonetheless, it is clear that the efficiency of SN feedback is strongly dependent on their spatial and temporal clustering. Since this is explicitly tied to the manner in which star formation proceeds on local scales, it is important to model this in as physical a manner as possible. Another phenomenon which impacts SN clustering properties is the fraction of walkaway/runaway SN progenitors. Dynamical interactions during the formation of a star cluster may eject progenitors \citep[see e.g.][]{Poveda1967,Fujii2011,Oh2015} or alternatively runaways may be caused by the occurrence of a SN in an OB binary system (see e.g.
\citealt{Blaauw1961,PortegiesZwart2000,Eldridge2011}, see also \citealt{Kim2017} for a subgrid implementation of this mechanism). If a progenitor is able to travel away from its birth site, the subsequent SN is more likely to occur in a low-density medium, which maximises its efficiency. Conversely, a high fraction of runaways will tend to smooth out the spatial clustering of the ensemble of SNe, potentially reducing the ability of remnants to overlap and form superbubbles. Finally, if SNe occur outside of the dense star forming clouds they may not be able to efficiently disperse star forming gas. Speculating on the dominant impact of runaway SN progenitors is beyond the scope of the present work (given that it is likely to be sensitive to the exact parameters adopted such as runaway fraction, velocity distribution etc.), but we note that they may play an important role in determining overall SN feedback efficiency. It is worth re-emphasising that, regardless of the star formation criteria, there is a large body of theoretical and observational work indicating that other sources of stellar feedback must be operating prior to the first SN, such as stellar winds, photoelectric heating and photoionization from young stars. These processes may have a significant impact on local gas, not only affecting its density and temperature structure, but also the level of turbulent support. Given that we have demonstrated a tendency for dense gas to build up and overwhelm SN feedback in our $z=4$ dwarfs (and that this effect is physically realistic, rather than just being a symptom of numerical overcooling), it may be the case that non-SN stellar feedback plays a more important role in the evolution of low mass haloes than is commonly assumed.
This conclusion is consistent with the results found by the \textsc{FIRE-1} project \citep{Hopkins2014a} in which the removal of other sources of stellar feedback in dwarfs led to SN feedback having almost no impact on stellar mass (though the effect appears to be less severe in \textsc{FIRE-2}; \citealt{Hopkins2017a}). Finally, we note that the efficiency of first (and subsequent) SN events may depend on the fraction of runaway SN and on alternative heating processes such as those provided by relativistically accelerated particles in the wake of SN explosions. \vspace{-4ex} \section{Conclusion} We have carried out very high resolution cosmological zoom-in simulations of five dwarf galaxies up to $z = 4$ with virial masses between $\sim2$--$6\times10^9\ \mathrm{M_\odot}$. Our simulations adopt the mechanical SN feedback scheme introduced in \cite{Smith2018} and a spatially constant, but time evolving UV background \citep{FG2009}. The SN feedback is constructed to deliver the correct momentum to the surrounding ISM corresponding to the stage of the SN remnant evolution. In isolated dwarf simulations, we found that this model leads to self-regulated star formation rates and realistic galaxy kinematics and gas content, thanks to the occurrence of multiphase, mass-loaded outflows \citep{Smith2018}. The aim of the present work is to determine whether the same model of SN feedback results in realistic dwarf properties once the full cosmological formation history is incorporated self-consistently. We find that: \begin{itemize} \item Without the inclusion of SN feedback, we produce dwarfs that have over 3 orders of magnitude too much stellar mass relative to (extrapolated) abundance matching predictions. Their stellar and gas metallicities are in excess of solar abundances. The dwarfs undergo a catastrophic collapse to the resolution limit, resulting in extremely dense systems with strongly peaked circular velocity curves.
Dark matter density in the centre of the halo is enhanced relative to a collisionless simulation by approximately an order of magnitude. \item In general, while the inclusion of SN feedback induces burstier SFRs and affects dwarf morphologies, it has insufficient impact on the total stellar mass formed. In the majority of our systems, the build up of dense gas (often following a wet merger) renders the SNe too inefficient to expel gas from the galaxy and suppress star formation. We emphasise that, because our scheme injects the correct amount of momentum per SN, this effect is not an example of classical numerical overcooling but rather a physical suppression of SN efficiency. Most SNe explode in gas of density $10^4\ \mathrm{cm^{-3}}$, which limits the feedback momentum budget available. This suggests that some other mechanism(s) must be invoked (e.g. other sources of stellar feedback) that can prevent gas from collapsing to such high densities and/or clear it prior to SNe occurring. Inclusion of runaway SNe may help alleviate this issue as well. \item We do, however, find one exception to this scenario where we are able to produce a realistic dwarf relative to the extrapolations of abundance matching and various metrics of local analogs. Our dwarf 4 forms by a major merger relatively late in its history at $z \approx 5.5$. It therefore spends most of its evolution as two lower mass systems in which the SNe are able to expel gas and halt star formation before catastrophic collapse sets in. Their late major merger is therefore mostly dry and does not trigger more than a brief burst of star formation, which is quickly suppressed by feedback. We note that while SN feedback is clearly efficient here, enriching the CGM to a few $10^{-2}\ \mathrm{Z_\odot}$ with mass-loaded winds, no prominent dark matter core forms. \item We have carried out a variety of other simulations to test the applicability of our conclusions.
We find that our results are not significantly impacted by increasing resolution, changing details of the (spatially uniform) UV background or removing the pressure floor. Our results are also relatively insensitive to increasing the star formation density threshold by an order of magnitude. Arbitrarily increasing the star formation efficiency parameter by an order of magnitude to 15\% leads to more bursty behaviour and reduced star formation, but still overshoots abundance matching relations by 2 orders of magnitude. Only by taking an extreme choice of parameters, using a star formation efficiency of $100\%$, are we able to get close to the relation. \end{itemize} We have demonstrated that realistically modelled SN feedback is easily overwhelmed early on in the cosmological assembly of dwarfs by the build-up of gas, despite the relatively shallow potential well. While this can potentially be dealt with by adopting a star formation prescription that leads to extremely concentrated SN feedback, it seems that some combination of other sources of stellar feedback and/or currently unresolved turbulent support may be required to modulate ISM densities prior to the first SNe exploding in order to preserve their efficiency. \vspace{-4ex} \section*{Acknowledgements} We are grateful to Cathie Clarke, Adrianne Slyz, Christoph Federrath and the anonymous referee for helpful comments. MCS and DS acknowledge support by the Science and Technology Facilities Council (STFC) and the ERC Starting Grant 638707 ``Black holes and their host galaxies: co-evolution across cosmic time''. This work was performed on the following: the DiRAC Data Analytic system at the University of Cambridge, operated by the University of Cambridge High Performance Computing Service on behalf of the STFC DiRAC HPC Facility (www.dirac.ac.uk).
This equipment was funded by BIS National E-infrastructure capital grant (ST/K001590/1), STFC capital grants ST/H008861/1 and ST/H00887X/1, and STFC DiRAC Operations grant ST/K00333X/1; the Cambridge Service for Data Driven Discovery (CSD3), part of which is operated by the University of Cambridge Research Computing on behalf of the STFC DiRAC HPC Facility. The DiRAC component of CSD3 was funded by BEIS capital funding via STFC capital grants ST/P002307/1 and ST/R002452/1 and STFC operations grant ST/R00689X/1; the DiRAC@Durham facility managed by the Institute for Computational Cosmology on behalf of the STFC DiRAC HPC Facility. The equipment was funded by BEIS capital funding via STFC capital grants ST/P002293/1 and ST/R002371/1, Durham University and STFC operations grant ST/R000832/1. DiRAC is part of the National e-Infrastructure. \vspace{-4ex} \bibliographystyle{mn2e}
\section{Introduction} Recent development of experimental techniques has opened an exciting possibility of working with ultracold Bose gases in one-dimensional (1D) conditions, for example in a set of elongated optical traps (see Moritz et al.~\cite{Essl03}) and in magnetic traps created by solid state chips (Esteve et al.~\cite{Esteve06}). This development has made it possible to verify the theoretical predictions in highly controllable experiments. The theoretical investigation of one-dimensional bosons was begun by Marvin Girardeau in Ref.~\cite{Gir60}, where the case of an infinite repulsion was considered. This case is often called the ``Tonks-Girardeau'' (TG) limit, although Tonks considered a 1D classical gas~\cite{Tonks36}. The next important step was made by Lieb and Liniger in Refs.~\cite{LiebLiniger63,Lieb63}, where they obtained an exact solution of the problem of the ground-state properties and energy spectrum of the one-dimensional Bose gas with the delta-functional repulsive interaction (Lieb-Liniger gas). Probably the most surprising result of the paper~\cite{Lieb63} is the existence, besides the phonon-like branch of elementary excitations (Lieb I branch), whose presence was natural to assume in analogy with the 3D case, of a second branch (Lieb II branch). This branch exists in a finite interval of momenta $|p|/\rho \leq \pi$ and its energy approaches zero when the coupling constant tends to zero. In the TG limit the spectrum coincides with that of an ideal Fermi gas and the Lieb II branch corresponds to the excitation of holes. The meaning of the second branch in the opposite (mean-field ``Bogolyubov'') limit of weakly interacting bosons was explained by Kulish, Manakov and Faddeev~\cite{Fadd76} (see also Ishikawa and Takayama~\cite{Ish80}). They showed that the energy-momentum dispersion relation for the second branch in this limit coincides with the relation obtained by Tsuzuki \cite{Tsu71} for a soliton described by the Gross-Pitaevskii equation (GPE).
Recently, Sato et al. showed that the spatial profile of the order parameter, defined as a proper matrix element, also reproduces the GPE soliton profile~\cite{Sato12}. It was also shown by Kanamoto, Carr and Ueda that states with non-zero angular momenta of Lieb's Hamiltonian on a ring can be identified in the same limit with multisoliton solutions of the GPE~\cite{Carr10}. Hence the Lieb II branch of excitations in the intermediate regime is the result of a quite non-trivial crossover between a topological soliton and an excitation of fermion-like holes. Investigation of the properties of these unusual objects is, in our opinion, an interesting and important problem. In this paper we investigate the dynamics of the Lieb II branch of excitations in a gas confined in a 1D harmonic trap. We assume that the size of the cloud is large in comparison with the healing length. Then one can safely use the Local Density Approximation (LDA) for the dynamics. \section{Local Density Approximation} In the LDA, the dynamics of an excitation is defined by its dispersion law in a uniform gas. The most convenient description of the dynamics is in terms of the energy of the excitation $\varepsilon(V,\mu)$ expressed as a function of its velocity $V$ and the chemical potential $\mu$. For a smooth external potential $U(x)$, the LDA energy can be obtained by replacing the chemical potential with its local value, $\mu \rightarrow \mu-U(x)$. This means that in the course of the motion of an excitation in the presence of the external field, the energy $\varepsilon (V,\mu -U(x))$ must remain constant \cite{Shl99,KonPit04}.
Differentiating with respect to time and taking into account that $dx/dt=V$ we obtain $1/V(\partial \varepsilon /\partial V)_{\mu }dV/dt-(\partial \varepsilon /\partial \mu )_{V}(\partial U/\partial x)=0$, or \begin{equation} m_{eff}\frac{dV}{dt}=-N_{s}\left( \frac{\partial U}{\partial x}\right) \label{Newt} \end{equation} where the parameters characterizing the excitations, \begin{eqnarray} \begin{array}{lll} m_{eff} &=&\frac{1}{V}\left( \frac{\partial \varepsilon }{\partial V}\right)_\mu\\ N_s &=&-\left( \frac{\partial \varepsilon }{\partial \mu }\right) _{V} \end{array} \label{meffNs} \end{eqnarray} have, correspondingly, the meaning of the effective mass $m_{eff}$ and the effective number of atoms $N_s$ in the excitation. For the excitations of the second branch in the Bose gas these quantities are negative, thus $|N_s|$ is the number of atoms expelled upon creation of an excitation. These equations of motion of solitons in the LDA were derived in \cite{KonPit04} for the GPE solitons and in \cite{Scott11} for the general case. The effective number of atoms in an excitation, $N_s$, appears in a natural way in the equation for $dV/dt$. However, one should take into account that it is not identical to the number of atoms $N_d$ introduced in \cite{Schecter12}. These quantities coincide in the Bogolyubov limit. It is convenient to rewrite equation (\ref{Newt}) as \begin{eqnarray} \label{Z} Zm\frac{dV}{dt}=-\left( \frac{\partial U}{\partial x}\right),\quad Z(V,\mu )=\frac{m_{eff}}{mN_s} \end{eqnarray} where $m$ is the mass of an atom. The dimensionless ``mass renormalization'' function $Z$ is the only parameter describing the dynamics of the soliton in the LDA. The quantities $m_{eff}$ and $N_s$ can be easily calculated in the Bogolyubov regime using the GPE. Here, according to~\cite{Tsu71}, the energy of a soliton is $\varepsilon (V,\mu )=2\hbar (\mu -mV^2)^{3/2}/(3cm^{1/2})$, where $2c$ is the one-dimensional coupling constant (see Eq.~(\ref{H}) below).
Correspondingly \begin{eqnarray} \label{Eq:GP Ns} \begin{array}{lll} N_s&=&-\frac{\hbar }{cm^{1/2}}\left(\mu -mV^2\right)^{1/2}\\ m_{eff}&=&2mN_s \end{array} . \end{eqnarray} Thus in the Bogolyubov regime the effective mass of a soliton is twice the total mass $N_sm$ of the particles in it, so as far as dynamics are concerned, a GPE soliton moves in an arbitrary external field as a particle of mass $2m$~\cite{KonPit04}. If the gas is trapped in a harmonic trap $U(x) = m\omega_h^2x^2/2$, the frequency of small oscillations can be found from the equation of motion~(\ref{Newt}), keeping the values of $N_s$ and $m_{eff}$ constant and equal to their values at $V=0$ in the center of the trap. The frequency of harmonic oscillations depends on the soliton properties as \begin{equation} \Omega =\sqrt{\frac{mN_s}{m_{eff}}}\omega_h=\frac{1}{\sqrt{Z}}\omega_h\;. \label{Omega} \end{equation} For the GPE soliton one has $Z=2$ and $\Omega =\omega_h/\sqrt{2}$. This result was first obtained in \cite{Ang00} by a different method and confirmed in experiments \cite{Seng08}. In the opposite, TG, limit the energy of a second-branch excitation can be presented as $\varepsilon(V,\mu )=\mu -mV^2/2$ and \begin{eqnarray} N_s =-1,\quad m_{eff} = -m \label{Eq:TG meff} \end{eqnarray} corresponding to the ``hole-like'' nature of the excitation in this limit. In this case the frequency of oscillations is $\Omega = \omega_h$. In this paper we will calculate the characteristic parameters $m_{eff},N_{s},Z$ and the frequency $\Omega$ for intermediate strengths of the interaction. It is worth noticing that in the absence of an external field the state with one soliton in the Lieb-Liniger model has an infinite lifetime. In the presence of trapping, an excitation has a finite lifetime due to the emission of phonons. This effect has been investigated in~\cite{Gang12} for the GPE solitons. The probability of the decay is small for small enough $\omega_h$. In the following we will not consider this effect.
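As a consistency check (our own illustration, not part of the original derivation), the Bogolyubov-limit results above can be verified symbolically: differentiating Tsuzuki's soliton energy $\varepsilon(V,\mu)$ according to the definitions of $N_s$ and $m_{eff}$ in Eqs.~(\ref{meffNs}) reproduces $m_{eff}=2mN_s$, i.e. $Z=2$ and $\Omega=\omega_h/\sqrt{2}$:

```python
import sympy as sp

# Tsuzuki's GPE soliton energy eps(V, mu) = 2*hbar*(mu - m*V^2)^(3/2) / (3*c*sqrt(m))
hbar, m, c, mu, V = sp.symbols('hbar m c mu V', positive=True)
eps = 2*hbar*(mu - m*V**2)**sp.Rational(3, 2)/(3*c*sp.sqrt(m))

Ns = -sp.diff(eps, mu)                     # N_s   = -(d eps / d mu)_V
meff = sp.simplify(sp.diff(eps, V)/V)      # m_eff = (1/V)(d eps / d V)_mu

Z = sp.simplify(meff/(m*Ns))               # Z = m_eff/(m N_s); expect 2
Omega_ratio = sp.sqrt(1/Z)                 # Omega/omega_h = 1/sqrt(Z); expect 1/sqrt(2)
print(Z, Omega_ratio)
```

The same two-line differentiation applied to the TG-limit energy $\varepsilon=\mu-mV^2/2$ gives $N_s=-1$, $m_{eff}=-m$ and hence $Z=1$.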
\section{Lieb's equations} In the Lieb-Liniger model the Hamiltonian is written as \begin{equation} H=-\frac{\hbar^2}{2m}\sum_i\frac{d^2}{dx_i^2}+2c\sum\limits_{i<k}\delta(x_i-x_k) . \label{H} \end{equation} In the original paper~\cite{LiebLiniger63} the authors used the system of units with $\hbar=1,m=1/2$. Calculation of the second branch of the spectrum of elementary excitations reduces to the solution of a linear integral equation for the function $J(k,q)$ \begin{equation} 2\pi J(k,q)-2c\int\limits_{-K}^{K}\frac{J(r,q)dr}{c^2+(r-k)^2} =\pi-2\tan^{-1}\left(\frac{q-k}{c}\right). \label{Eq:J(k,q)} \end{equation} The limit of integration $K$ defines the one-dimensional density $\rho$ (and the value of the dimensionless parameter $\gamma=c/\rho$) indirectly, as an integral of the solution of an equation similar to~(\ref{Eq:J(k,q)}), but without the $q$-dependent term on the r.h.s. \cite{LiebLiniger63}. The dependence $K(\gamma)$ and the inverse one $\gamma(K)$ can be calculated following the methods of \cite{LiebLiniger63}. Once such relations are known, the sound velocity $u$ can be calculated according to $u=-2\gamma^2d(K/\gamma)/d\gamma$. We use matrix methods to solve Eq.~(\ref{Eq:J(k,q)}) and similar integral equations. To do so we discretize the integral, which is then written as a matrix in $(r,k)$. The inverse matrix is calculated and is multiplied by the discrete representation of the r.h.s. of Eq.~(\ref{Eq:J(k,q)}). The knowledge of $J(k,q)$ permits us to calculate the dependence of the energy $\varepsilon$ on the momentum $p$ in the parametric form (here $q$ is understood as a free parameter): \begin{equation} \begin{array}{lll} \varepsilon &=&\mu-q^2+2\int\limits_{-K}^{K}J(k,q)k\;dk\\ p&=&-q+\int\limits_{-K}^{K}J(k,q)\;dk \end{array} . \label{ep} \end{equation} The resulting energy $\varepsilon$ of excitations is a function of $\mu$ and $p$, instead of $\mu$ and $V$. It is possible to calculate $V(p,\mu)=(\partial \varepsilon/\partial p)_\mu$ from equations~(\ref{ep}).
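The matrix procedure just described can be sketched as follows. This is a minimal illustration of our own (a Nystr\"om discretization on a trapezoidal grid, with hypothetical values of $c$, $K$, $q$; the paper does not specify the quadrature or resolution actually used):

```python
import numpy as np

def solve_J(c, K, q, n=401):
    """Matrix (Nystrom) solution of the integral equation (6):
    2*pi*J(k,q) - 2c * int_{-K}^{K} J(r,q)/(c^2 + (r-k)^2) dr
        = pi - 2*arctan((q-k)/c),
    discretised on a uniform grid with trapezoidal weights."""
    k = np.linspace(-K, K, n)
    w = np.full(n, 2.0*K/(n - 1))          # trapezoidal quadrature weights
    w[0] = w[-1] = K/(n - 1)
    # kernel matrix: rows index k, columns index r (weights absorbed)
    M = 2.0*c * w[None, :] / (c**2 + (k[None, :] - k[:, None])**2)
    A = 2.0*np.pi*np.eye(n) - M
    rhs = np.pi - 2.0*np.arctan((q - k)/c)
    J = np.linalg.solve(A, rhs)
    return k, w, J

# example: illustrative coupling c=1, cutoff K=1, free parameter q=0.5
k, w, J = solve_J(c=1.0, K=1.0, q=0.5)
# momentum of the excitation from Eq. (7): p = -q + int_{-K}^{K} J dk
p = -0.5 + np.sum(w*J)
```

Refining the grid (doubling `n`) changes the resulting $p$ only at the quadrature-error level, which is the practical convergence check for this kind of discretization.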
To calculate $N_s$ and $m_{eff}$ in these variables one can use the relations \begin{eqnarray} \begin{array}{lll} m_{eff}&=& \frac{1}{V} \left(\frac{\partial\varepsilon}{\partial p}\right)_\mu \left(\frac{\partial p}{\partial V}\right)_{\mu},\\ N_s&=& - \left(\frac{\partial\varepsilon}{\partial \mu}\right)_p +V \left(\frac{\partial V}{\partial\mu}\right)_p \left(\frac{\partial p}{\partial V}\right)_{\mu} \end{array} . \label{Ns} \end{eqnarray} The natural parameters of Lieb's equations (\ref{Eq:J(k,q)}-\ref{ep}) are $K$ and $q$. The derivatives entering Eqs.~(\ref{meffNs}) can be expressed in terms of partial derivatives at constant $q$ or $K$ \begin{eqnarray} \begin{array}{lll} m_{eff}& = & \left(\frac{\partial p}{\partial q}\right)_K^2 \left/ \left[ \left(\frac{\partial^2\varepsilon}{\partial q^2}\right)_K - V \left(\frac{\partial^2 p}{\partial q^2}\right)_K \right] \right. ,\\ N_s&=& - \left(\frac{\partial\varepsilon}{\partial \mu}\right)_q + \left(\frac{\partial \varepsilon}{\partial q}\right)_K \left(\frac{\partial V}{\partial \mu}\right)_q \left/ \left(\frac{\partial V}{\partial q}\right)_K \right.\\ V &=& \left(\frac{\partial\varepsilon}{\partial q}\right)_K \left/ \left(\frac{\partial p}{\partial q}\right)_K \right. \end{array} \label{meffVNs:numerics} . \end{eqnarray} First and second derivatives of $p$, $\varepsilon$ and $J(k,q)$ with respect to $q$ at fixed $K$ are found by solving additional integral equations, which are obtained from Eqs.~(\ref{Eq:J(k,q)}-\ref{ep}) by differentiating with respect to the parameter $q$. Derivatives at fixed $q$ are calculated numerically. \section{Results and discussion} We calculated $N_s$ and $m_{eff}$ from Eqs.~(\ref{meffVNs:numerics}). As discussed above, the soliton dynamics is completely described by the ratio $Z=m_{eff}/(mN_{s})$. The dependence of $Z$ on velocity for different values of $\gamma$ is presented in Fig.~\ref{Fig1}.
The $V$-dependence disappears in the TG limit, where $Z=1$, and in the GP limit, where $Z=2$ [see equations (\ref{Eq:GP Ns}) and (\ref{Eq:TG meff})]. \begin{figure}[tbp] \includegraphics[width=\columnwidth,angle=0]{Fig1.eps} \caption{(Color online) Parameter $Z=m_{eff}/(N_{s}m)$ as a function of velocity $V$ in units of the speed of sound $u$ at different values of the interaction strength $\gamma$, from top to bottom, $\gamma = 0.034; 0.12; 0.80; 4.5; 61$ (corresponding to $K=20; 10; 3.3; 1; 0.1$). The dependence on $V$ disappears both in the TG and GP limits.} \label{Fig1} \end{figure} \begin{figure}[tbp] \includegraphics[width=\columnwidth, angle=0]{Fig2.eps} \caption{(Color online) Solid line, frequency of oscillations $\Omega$ in units of the frequency $\omega_h$ of the harmonic oscillator as a function of the interaction parameter $\gamma$; dashed line, asymptotic value in the TG limit; dash-dotted line, asymptotic value in the GP limit; short-dashed line, perturbative solution of the integral equations, $\Omega/\omega_h = 1-1/\gamma+...$.} \label{Fig2} \end{figure} Probably the best way to experimentally verify our predictions is to measure the frequency of oscillations $\Omega$ in a trap in different regimes. Figure~\ref{Fig2} shows the dependence of the frequency $\Omega$ of small oscillations on the interaction parameter $\gamma$. (The value of $\gamma$ should be taken at the center of the trap.) One can see that the frequency continuously increases with increasing $\gamma$ from its GPE value $\omega_h/\sqrt{2}$ to the ideal Fermi gas value $\omega_h$. The sharpest change takes place at $\gamma \sim 3$. There are different ways of measuring the oscillation frequency. At moderately small values of $\gamma$, when a soliton still contains a large number of atoms, one can directly observe its motion, as in the experiments~\cite{Seng08,Lew99}.
Instead, at $\gamma\sim 1$, where the number of atoms in a soliton is small, $|N_{s}|\sim 1$, one might exploit the confinement-induced resonance (CIR)~\cite{Olshanii98} in order to change the value of $\gamma$ in the course of an experiment. In typical one-dimensional experiments there is a number of elongated optical traps, created by standing waves. Initially, solitons can be created by a phase imprinting method at small $\gamma$. Later the value of $\gamma$ might be increased by using the CIR, and soliton oscillations can be excited by a parametric modulation of the trap frequency. One expects to observe a resonance at the frequency of modulation $\Omega/2$. The resonance can be detected by heating of the gas. \begin{figure}[tbp] \includegraphics[width=\columnwidth, angle=0]{Fig3.eps} \caption{(Color online) Solid line, number of particles $N_s$ in a stationary soliton ($V=0$) as a function of the interaction parameter $\gamma$; dashed line, TG limit, Eq.~(\ref{Eq:TG meff}); dash-dotted line, GP limit, Eq.~(\ref{Eq:GP Ns}).} \label{Fig3} \end{figure} The frequency $\Omega$ of small oscillations is given by Eq.~(\ref{Omega}). It is a quantity of great importance as it can be observed experimentally. However, $N_s(V=0)$ and $m_{eff}$ are interesting on their own. The dependence of the number of particles in the soliton at rest, $N_s(V=0)$, on the interaction strength is shown in Fig.~\ref{Fig3}. We find that for small $\gamma$ (GP regime) $|N_s| \gg 1$ and the soliton is a macroscopic object; however, $|N_s|$ becomes of the order of 1 already at $\gamma \sim 1$. In Fig.~\ref{Fig4} we present $m_{eff}$ as a function of velocity $V$ at different values of $\gamma$.
\begin{figure}[tbp] \includegraphics[width=\columnwidth, angle=0]{Fig4.eps} \caption{(Color online) Solid lines, effective mass $m_{eff}$ as a function of velocity $V$ in units of the speed of sound $u$ at different values of the interaction parameter $\gamma$; dashed line, TG limit, Eq.~(\ref{Eq:TG meff}); dash-dotted line, GP limit, Eq.~(\ref{Eq:GP Ns}).} \label{Fig4} \end{figure} At small $\gamma$ both $N_s$ and $m_{eff}$ are quite well described by the GP result, Eq.~(\ref{Eq:GP Ns}). However, the situation is different for ``fast'' solitons with $p\rightarrow 0$, i.e. with $V\rightarrow u$. According to (\ref{Eq:GP Ns}) the effective mass of a soliton tends to zero, $m_{eff}\propto (u-V)^{1/2}$, corresponding to the small amplitude of the soliton. However, the calculations show that $m_{eff}$ tends to a finite value at $V\rightarrow u$. This means that the dispersion law of the soliton should have the expansion at $p\rightarrow 0$ \begin{equation} \varepsilon (p)\approx up+\frac{p^2}{2|m_{eff}(p=0)|} \;. \label{mstar} \end{equation} This relation is quite non-trivial, because the presence of the $p^2$ term contradicts the GPE. Indeed, according to the GPE, $\varepsilon -up\propto p^{5/3}$. The existence of the $p^2$ term in the spectrum of 1D bosons was established by Imambekov, Schmidt, and Glazman (see \cite{GRMP}, Eq.~(50)). Such a term exists both for the upper and lower branches of elementary excitations. The effective mass is the same in absolute value for the two branches (see \cite{GRMP}, the paragraph after Eq.~(172)). A simple calculation permits presenting the result of \cite{GRMP} as \begin{equation} \left\vert m_{eff}(p=0)\right\vert^{-1}=\frac{3}{4}\sqrt{\frac{u}{\pi \hbar m\rho }}\left( 1+\frac{\rho^2}{3u^2}\frac{d(u^2/\rho )}{d\rho} \right) \;. \end{equation} In the GPE regime $\gamma \ll 1$, the velocity of sound $u\propto \rho^{1/2}$, and the second term disappears.
Then \begin{equation} |m_{eff}(p=0)|=\frac{4\sqrt{\pi}}{3}\gamma^{-1/4} = 2.36\gamma^{-1/4}\; \label{Nsp0} \end{equation} (see \cite{GRMP}, the paragraph next to Eq.~(172)). In Fig.~\ref{Fig5} we test the obtained result by showing the dependence of $|m_{eff}(p=0)|\gamma^{1/4}$ on $\gamma$. One can see good agreement with the coefficient in Eq.~(\ref{Nsp0}) in the GP limit. It is possible to show that the presence of the $p^2$ term in the dispersion does not violate the GPE relation $m_{eff}=2mN_{s}$. Thus, this peculiar effect has no influence on the equation of motion~(\ref{Newt}). In the inset of Fig.~\ref{Fig5} we test the expansion of $m_{eff}$ in the TG limit. To do so we plot the quantity $|m_{eff}|/m-1$ on a log-log scale and compare it with the expression obtained for $\gamma \gg 1$ in \cite{Brand05} in the Hartree-Fock approximation. \begin{figure}[tbp] \includegraphics[width=\columnwidth, angle=0]{Fig5.eps} \caption{(Color online) Effective mass of the ``fast'' soliton with $p=0$. Dashed line shows the asymptotic law Eq.~(\ref{Nsp0}). The inset shows $m_{eff}/m-1$ in comparison with the large-$\gamma$ result of~\cite{Brand05}.} \label{Fig5} \end{figure} To conclude, by using the exact Lieb-Liniger theory we investigated the physical characteristics of the Lieb II soliton-like branch of excitations. The frequency of oscillations, effective mass and number of atoms in the soliton are calculated. Direct numerical calculations confirmed the violation of the applicability of the GP equation at small momentum $p$, in accordance with the exact theory. The experimental possibility of the verification of the calculations is discussed. The authors thank J.~Brand, L.D.~Faddeev, D.M.~Gangardt, and L.I.~Glazman for fruitful discussions. G.E.A. acknowledges support from the Spanish MEC through the Ramon y Cajal fellowship program. L.P.P. acknowledges support by ERC through the QGBE grant and by the Italian MIUR through the PRIN-2009 grant.
\section{Introduction} \label{sec:intro} Scene and object 3D reconstruction is the process of capturing their shape and appearance using various methods and approaches such as stereo, structure from motion, shape from shading, and many more \cite{Remondino2006Imagebased3M}. The reconstruction is highly applicable in a number of fields as it provides the ability to understand 3D scenes and objects on the basis of 2D images. The applications range from robotics and automated industrial quality inspection through human-machine interaction \cite{6977392} (e.g. action, gesture and face recognition) and satellite 3D data analysis \cite{7563843} to movies and architectural applications \cite{Herbort2011AnIT}. Additionally, the method is commonly used to analyse the surfaces of celestial objects, such as the Moon \cite{Hicks2011APF}. Photometric stereo (PS) is a well-established technique that is used for 3D surface reconstruction \cite{Esteban2008MultiviewPS}. The approach is generally based on analysing the appearance of a 3D object in its 2D images. Based on the intensity information, these approaches attempt to infer the shape of the depicted object \cite{Herbort2011AnIT}. PS estimates the shape and recovers the surface normals of a scene by utilising several intensity images obtained under varying lighting conditions with an identical viewpoint \cite{Tankus2005PhotometricSU,Hayakawa2002PhotometricSU}. By default, PS assumes Lambertian surface reflectance, a standard reflectance model which defines a linear dependency between the normal vectors and image intensities. The definition of the model can then be used to determine the 3D structure in the image \cite{Belhumeur1996WhatIT}. However, a single Lambertian image is not adequate to correctly determine the surface shape. Therefore, PS uses several images whose pixels correspond to the same points on the object and is able to recover surface normals and albedos \cite{Tan2008SubpixelPS}.
Light displays complicated behaviour while interacting with objects, resulting in direct and indirect illumination, as shown in Figure~\ref{nbounceImage}. However, classical PS naively assumes that a scene is illuminated only directly by the emitting source. In the presence of indirect illumination, it produces erroneous results with reduced reconstruction accuracy \cite{Ikeuchi1981DeterminingSO}. For example, indirect illumination such as inter-reflections makes concave objects appear shallower \cite{Nayar1990ShapeFI}. In this paper, we present an iterative 3D reconstruction method considering inter-reflections due to concavities and the environment. We propose a novel method that accounts for inter-reflections in a calibrated photometric stereo environment. This approach utilises a reverted Monte Carlo ray tracing method to extract the environmental colour, aiming to minimise the inter-reflections within the images used for photometric stereo. This approach not only accommodates concave surfaces but also applies to any object in a scene with inter-reflections. The proposed method, Iterative Ray Tracing Photometric Stereo (IRT-PS), iteratively applies Photometric Stereo (PS) and a reverted ray tracing algorithm based on a Monte-Carlo implementation to reconstruct the observed surfaces with higher accuracy. This approach iteratively reconstructs the surface and separates the indirect from the direct lighting, considering also the environment around the object. Likewise, the proposed IRT-PS method can be integrated into any PS technique, removing the effects of inter-reflections and improving the overall reconstruction accuracy. Our approach is extensively evaluated on three datasets and the overall results demonstrate improvement over the classic approaches. The main contributions of our work are: \begin{itemize} \item a reverted Monte Carlo ray tracing algorithm to estimate the indirect lighting both from the environment and the object's concavities;
\item an iterative surface reconstruction method that is utilised by the reverted Monte Carlo ray tracing; \item the proposed methodology that allows IRT-PS to be combined with any other PS algorithm, improving the overall performance. \end{itemize} The paper is organised as follows: Section \ref{photometricStereoLable} provides background material on photometric stereo, followed by inverse light transport, its properties and related work. In section 3, we introduce the mathematical definitions of the necessary terms. In section 4, we propose a novel iterative PS method and discuss the suggested reverted Monte Carlo ray tracing algorithm. The performance of this approach is investigated in section 5, with section 6 concluding the work. \section{Photometric Stereo}\label{photometricStereoLable} Photometric stereo (PS) is an approach to estimate the surface normals and reflectance (i.e. albedo) of an object based on three or more intensity images with a fixed view under varying lighting conditions \cite{Hayakawa2002PhotometricSU}. A number of solutions have been proposed to address this problem. Woodham \cite{Woodham1978} was the first to introduce the PS method. He proposed a simple and effective approach; however, he only considered Lambertian surfaces, and the method suffers from noise. In his method, it is assumed that the surface albedo is known a priori for each point on the surface, so that the surface gradient can be obtained using three point light sources. Onn and Bruckstein \cite{Onn1990IntegrabilityDS} developed a two-image PS method. Their work was based on the assumption that the objects are smooth and no self-shadows are present. PS was further extended by Coleman and Jain \cite{Coleman1982}, who utilised four light sources, discarded the specular reflections and estimated the surface shape by averaging the diffuse reflections under the Lambertian reflection model.
Nayar \textit{et al.} \cite{Nayar1990} proposed a PS method which used a linear combination of an impulse specular component and the Lambertian model to recover the shape and reflectance of a surface. Similarly, an algorithm for estimating the local surface gradient and real albedo from four sources in the presence of highlights and shadows was proposed by Barsky and Petrou \cite{Barsky2001b}. Chandraker \textit{et al.} \cite{Chandraker2007} proposed an algorithm that requires at least four light sources and images to reconstruct a surface in the presence of shadows. It is also worth mentioning the related work presented in \cite{Levine2005,Finlayson2004,ArgyriouChapter,Ragheb2003}, which follows similar architectures and approaches. Furthermore, over the previous years, methods have been developed that consider images produced by more general lighting conditions not known a priori. Basri \textit{et al.} \cite{BasriJacobsKemelmacher_IJCV07} proposed a PS method where no prior knowledge of the light source and its type is required; however, the emitting source should be distant or unconstrained. They utilised low-order spherical harmonics, optimised in a low-dimensional space, to represent Lambertian objects. Likewise, Shi \textit{et al.} \cite{Shi2010SelfcalibratingPS} used colour and intensity profiles, obtained from registered pixels across images, to propose a self-calibrating PS method. They automatically determine a radiometric response function and resolve the generalised bas-relief ambiguity for estimating surface normals and albedos. While the lighting conditions could be unknown, they required a fixed viewpoint. Nevertheless, the majority of these methods and models, while working well with matte objects, under-perform when the reconstructed objects are specular, transparent or exhibit inter-reflections. Non-Lambertian reflection, and specifically inter-reflection, may be difficult to handle in photometric stereo.
Solomon and Ikeuchi \cite{Solomon1996} developed a method where they utilised four lights and tried to extract the surface shape and roughness of an object which has a specular lobe. They used a simplified version of the Torrance-Sparrow reflectance model to determine the surface roughness. Bajcsy \textit{et al.} \cite{Bajcsy1996DetectionOD} presented an algorithm for detecting diffuse and specular interface reflections and some inter-reflections. They used brightness, hue, and saturation values instead of RGB, pointing out that these values have a direct correspondence to body colours and to diffuse and specular reflections, shading, shadows and inter-reflections. However, the algorithm requires a uniformly coloured dielectric surface under singly coloured scene illumination. Tozza \textit{et al.} \cite{Tozza2016DirectDP} proposed a PS method that is independent of the albedo values and uses an image-ratio formulation. However, their method requires an initial separation of the diffuse and specular components. In addition, because of the nature of light, inter-reflection is unavoidable even in a controlled environment. It may vary in magnitude depending on the environment itself, and on the structure and material of the object. Moreover, it may not be uniform over the whole surface. As a result, the images are blurred locally in shade. Most photometric methods do not consider inter-reflection from the environment and concave surfaces, and those that do have considered only one of the two cues. One of the first attempts at scene recovery under inter-reflection was proposed by Nayar \textit{et al.} \cite{Nayar1990ShapeFI}. They presented an iterative algorithm which recovers the shape of a concave surface: it first estimates the shape from intensity data; this shape is then used as input, and the radiosity method is applied to estimate a corrected, inter-reflection-free image intensity distribution. These steps are carried out iteratively until convergence.
Nevertheless, the algorithm only examines inter-reflection for concave shapes with Lambertian reflectance and does not take into account the colour of the inter-reflected light. Funt and Drew \cite{Funt1993ColorSA} proposed an algorithm based on the singular value decomposition of colour for a convex surface. They proposed a \textit{``one-bounce''} model which measured inter-reflection between two matte convex surfaces with a uniform colour, where the illumination can vary spatially in its intensity but not in its spectral composition. Again, the algorithm is specific to convex surfaces, assuming a uniform colour and an illumination that can vary only spatially. Langer \cite{Langer1999WhenSB} studied shadows that become inter-reflections. They proposed a method for inferring surface colour in a uni-chromatic scene, based on the relative contrast of the scene in different colour channels. Again, the method is highly specific and only deals with inter-reflection related to shadows. Most existing shape-from-intensity techniques account only for the direct component of light transport. Nayar \textit{et al.} \cite{Nayar2006FastSO} proposed using high-frequency illumination patterns to separate direct and indirect illumination in more general scenes. Gupta \textit{et al.} \cite{Gupta2009DeFO} studied the relation between illumination defocus and global light transport. Similarly, Chen \textit{et al.} \cite{Chen2008ModulatedPF} used modulated structured light with high-frequency patterns to mitigate the effects of indirect illumination. Lamond \textit{et al.} \cite{Lamond2009DiffuseSpeuclar} used high-frequency light patterns to separate the diffuse and specular components of the BRDF. Holroyd \textit{et al.} \cite{Holroyd2010ACO} constructed a high-accuracy imaging system for measuring the surface shape and BRDF.
All these techniques either are active methods or assume that the indirect illumination in each acquired image is caused by a single source. In contrast, we separate the indirect component by simulating the inter-reflections and removing them from the source images. \section{Forward Light Propagation} An image captured by a camera is the result of a complex sequence of reflections and inter-reflections. When light is emitted from the source, it bounces off the scene's surfaces one or more times before reaching the camera. \begin{figure}[ht] \begin{center} \includegraphics[width=.95\linewidth]{image/exampleOfLightBounce.png} \end{center} \caption{(Left) Direct and (middle, right) indirect light bounces around the environment.} \label{nbounceImage} \end{figure} In theory, every image can be expressed as an infinite sum, $I = I^1 + I^2 + I^3 + \dots + I^n$, where $I^n$ denotes the total contribution of light that bounces $n$ times before reaching the camera, as shown in figure \ref{nbounceImage}. For example, $I^1$ is the image that would be captured if it were possible to prevent all indirect illumination from reaching the camera sensor, while the sum $I^2 + I^3 + \dots + I^n$ describes the total contribution of indirect illumination. Although we can capture the final image $I$ with a camera, the individual \textit{``n-bounce''} images are not directly measurable in a real-world scenario. Nevertheless, techniques for simulating inter-reflections and other light transport effects are well established in computer vision and graphics. The equation governing forward light transport was formulated by Kajiya \cite{Kajiya1986TheRE} and is known as the \textit{rendering equation}: an integral equation in which the radiance leaving a point is given as the sum of emitted plus reflected radiance under a geometric optics approximation.
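The \textit{n-bounce} decomposition above can be illustrated with a short numerical sketch. The NumPy arrays below are arbitrary stand-ins for rendered per-bounce images, and the geometric decay factor is a simplifying assumption mimicking energy loss per reflection:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-bounce images I^1..I^5 (random stand-ins for renders);
# each extra bounce contributes less energy, hence the 0.5**n decay.
bounces = [rng.random((4, 4)) * 0.5 ** n for n in range(5)]

direct = bounces[0]                     # I^1: direct illumination only
indirect = np.sum(bounces[1:], axis=0)  # I^2 + I^3 + ...: indirect part
final = direct + indirect               # I: what the camera records
```

Only $I$ (here `final`) is observable; the point of the proposed method is to estimate the indirect part by simulation and subtract it.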
\begin{equation} \label{renderingEquation} I(x,x')=g(x,x')\Bigg [e(x,x') + \int\limits_s p(x,x',x'')I(x',x'')dx'' \Bigg] \end{equation} where $I(x,x')$ is related to the intensity of light passing from point $x'$ to point $x$, $g(x,x')$ is a geometry term, $e(x,x')$ is related to the intensity of light emitted from $x'$ towards $x$, and $p(x,x',x'')$ is related to the intensity of light scattered from $x''$ to $x$ by a patch of surface at $x'$. Algorithms such as ray tracing \cite{Foley1990ComputerG}\cite{Jarosz2008AdvancedGI} solve equation \ref{renderingEquation} using Monte Carlo methods, whereas radiosity \cite{Foley1990ComputerG}\cite{Immel1986ARM} uses finite element methods to produce near photo-realistic images. For a Lambertian object illuminated by a light source of parallel rays, the observed image intensity $\mathbf{a}$ at each pixel is given by the product of the albedo $\rho$ and the cosine of the incidence angle $\theta_{i}$ (the angle between the direction of the incident light and the surface normal) \cite{Horn1977}. This can be expressed as the dot product of two unit vectors, the light direction $\mathbf{l}$ and the surface normal $\mathbf{n}$: $\mathbf{a}=\rho \cos(\theta_{i})=\rho (\mathbf{l}\cdot \mathbf{n})$. Let us now consider a Lambertian surface patch with albedo $\rho$ and normal $\mathbf{n}$, illuminated in turn by several fixed and known illumination sources with directions $\mathbf{l}^{1}$, $\mathbf{l}^{2}$, ..., $\mathbf{l}^{\tilde{Q}}$. In this case we can express the intensities of the obtained pixels as: \begin{equation} \label{Eq:R01_PS_Q} \mathbf{a}^{k}=\rho(\mathbf{l}^{k}\cdot \mathbf{n}),\ \ \ \rm{where} \ \ k=1,2,...,\tilde{Q}. \end{equation} We stack the pixel intensities to obtain the pixel intensity vector \\ $\mathbf{A}_{a}=(\mathbf{a}_{1},\mathbf{a}_{2},...,\mathbf{a}_{\tilde{Q}})^{T}$.
Also the illumination vectors are stacked row-wise to form the illumination matrix $\mathbf{L}=(\mathbf{l}^{1}, \mathbf{l}^{2},...,\mathbf{l}^{\tilde{Q}})^{T}$. Equation~(\ref{Eq:R01_PS_Q}) can then be rewritten in matrix form: \begin{equation} \label{Eq:R01_PS_QM} \mathbf{A}_{a}=\rho \mathbf{L} \mathbf{n} \end{equation} If there are at least three non-coplanar illumination vectors, we can calculate $\rho$ and $\mathbf{n}$ with the least-squares technique, multiplying by the transpose of $\mathbf{L}$ since $\mathbf{L}$ is in general not a square matrix: \begin{equation} \label{Eq:R01_PS_QMInv} \mathbf{L}^{T}\mathbf{A}_{a}=\rho \mathbf{L}^{T}\mathbf{L} \mathbf{n} \Rightarrow (\mathbf{L}^{T}\mathbf{L})^{-1}\mathbf{L}^{T}\mathbf{A}_{a}=\rho \mathbf{n} \end{equation} Since $\mathbf{n}$ has unit length, we can estimate both the surface normal (as the direction of the obtained vector) and the albedo (as its length). Extra images allow one to recover the surface parameters more robustly. \section{Proposed Iterative Ray Tracing Photometric Stereo Method (IRT-PS)} In nature, when we illuminate a surface, light reflects not only towards the viewer but also among all surfaces in the environment. This is always true, except for scenes that consist only of a single convex surface. In general, scenes include concave surfaces whose points reflect light between themselves. Furthermore, inter-reflections can occur due to the environment and can appreciably alter a scene's appearance. In figure \ref{innerreflectionExample_1}, the sphere is placed within a Cornell box \cite{Niedenthal2002} to simulate inter-reflections: the sphere receives colours from its environment.
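As a self-contained numerical check of the least-squares recovery in equation (\ref{Eq:R01_PS_QMInv}), the sketch below synthesises Lambertian pixel intensities from a known albedo and normal (all values are arbitrary choices for illustration) and recovers both exactly in the noise-free case:

```python
import numpy as np

# Ground-truth surface parameters (arbitrary choices for this illustration).
n_true = np.array([0.3, -0.2, 0.933])
n_true /= np.linalg.norm(n_true)
rho_true = 0.8

# Four non-coplanar light directions, stacked row-wise into L.
L = np.array([[0.0, 0.0, 1.0],
              [0.5, 0.0, 0.866],
              [0.0, 0.5, 0.866],
              [-0.5, 0.0, 0.866]])

# Lambertian image formation: a^k = rho * (l^k . n), all dot products > 0
# here, so no pixel is in shadow.
A = rho_true * L @ n_true

# Least-squares inversion: (L^T L)^{-1} L^T A = rho * n
rho_n = np.linalg.solve(L.T @ L, L.T @ A)
rho = np.linalg.norm(rho_n)   # albedo = length of the recovered vector
n = rho_n / rho               # normal = its direction
```

With more than three lights the same normal equations average out noise, which is why extra images make the recovery more robust.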
\begin{figure}[ht] \begin{center} \includegraphics[width=0.3\linewidth]{image/exampleOfinnerreflection_direct.jpeg} \includegraphics[width=0.3\linewidth]{image/exampleOfinnerreflection_indirect} \includegraphics[width=0.3\linewidth]{image/exampleOfinnerreflection_final}\\ \subcaption{(Left) Image with no inter-reflection, (middle) image with inter-reflection from the environment only, (right) combined image} \includegraphics[width=0.3\linewidth]{image/concave_direct} \includegraphics[width=0.3\linewidth]{image/concave_indirect} \includegraphics[width=0.3\linewidth]{image/concave_final}\\ \subcaption{(Left) Image with no inter-reflection, (middle) image with inter-reflection from concavity only, (right) combined image} \end{center} \caption{Example images of inter-reflection from the environment and from concavity} \label{innerreflectionExample_1} \end{figure} Existing computer vision algorithms do not account for the effects of inter-reflections and hence often produce erroneous results. The algorithms most directly affected are the shape-from-intensity algorithms, including photometric stereo. Because they commonly assume a single surface reflection (direct illumination) and disregard higher-order bounces (inter-reflections, a subset of global illumination), photometric methods produce erroneous results when applied to open scenes. \begin{figure}[ht] \begin{center} \includegraphics[width=0.95\linewidth]{image/algorithmOverview} \end{center} \caption{An overview of the proposed IRT-PS algorithm.} \label{OverviewOfAlgorithm} \end{figure} The first stage of this approach (stage 0) is performed only once throughout the process and involves the acquisition of the initial input images. It is assumed that inter-reflections are present and that the captured surface lies within a known environment, in our case a Cornell box.
In the following stage, PS is applied to the images acquired at stage 0 using equation \ref{Eq:R01_PS_QMInv} to obtain the initial albedo $\rho_{t}$ and normals $\mathbf{n}_{t}$. Integrating the obtained normals with an M-estimator technique yields a 3D surface $H_{t}$. This initial surface, which is affected by the presence of inter-reflections, becomes the input to the following stage, which involves the proposed reverted ray tracing algorithm. As the environment is known prior to reconstruction, we can simulate it; the Cornell box was set up as the environment in stage 3. More realistic textures can be used for the walls without affecting the proposed methodology. \begin{figure}[ht] \begin{center} \includegraphics[width=0.35\linewidth]{image/exampleForOriginalImage_Sphere1} \includegraphics[width=0.35\linewidth]{image/exampleForOriginalImage_Sphere3}\\ \includegraphics[width=0.35\linewidth]{image/exampleForOriginalImage_Sphere4} \includegraphics[width=0.35\linewidth]{image/exampleForOriginalImage_Sphere2}\\ \end{center} \caption{Sample images from stage 0 with inter-reflections due to the environment} \label{fig:stage0Samples} \end{figure} In stage 4, we simulate the environment, assuming the Cornell box is given or estimated. This approach can be extended to other realistic environment projections, such as hemispherical dome projection \cite{Swinburne2005SphericalM}, without affecting the proposed methodology. We then place the generated surface $H_{t}$ within this environment. In the following stage, based on equation \ref{eq:renderingEquationWithMonteCarlo}, the reverted ray tracing algorithm is applied.
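The overall control flow of the iterative pipeline can be sketched as follows. Every function body here is a hypothetical placeholder (the real implementation uses per-pixel photometric stereo, M-estimator surface integration, and the reverted ray tracer); only the loop structure and the convergence test on $D_H$ mirror the actual method:

```python
import numpy as np

def photometric_stereo(images):
    # Placeholder: stands in for the per-pixel least-squares PS solve.
    return np.mean(images, axis=0)

def integrate_normals(normals):
    # Placeholder: stands in for M-estimator surface integration.
    return normals

def estimate_environment(surface):
    # Placeholder: stands in for the reverted ray tracer producing E_t^r.
    return 0.2 * surface

def irt_ps(images, threshold=1e-4, max_iter=50):
    A_t = np.asarray(images, dtype=float)
    H_t = integrate_normals(photometric_stereo(A_t))
    for _ in range(max_iter):
        E_t = estimate_environment(H_t)               # simulate inter-reflections
        A_t = A_t - E_t                               # subtract them from inputs
        H_next = integrate_normals(photometric_stereo(A_t))
        if np.max(np.abs(H_next - H_t)) < threshold:  # D_H below threshold?
            return H_next
        H_t = H_next
    return H_t

H = irt_ps(np.ones((4, 8, 8)))  # 4 toy input "images" of size 8x8
```

With these toy placeholders the loop contracts geometrically, so the convergence test fires well before `max_iter`.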
Since we are only interested in inter-reflections, only the indirect illumination is calculated. To implement the ray tracer for a Lambertian surface, we solve the rendering equation with a Monte Carlo estimator. \begin{equation} L_{o}(p,w_{o}) = \int\limits_\Omega f(p,w_{o},w_{i})L_{i}(p,w_{i})\cos\theta_{i}\,dw_{i} \label{eq:renderingEquation} \end{equation} where $L_{o}(p,w_{o})$ is the total outgoing radiance reflected at $p$ along the direction $w_{o}$, $L_{i}(p,w_{i})$ is the radiance incident at $p$ along the direction $w_{i}$, and $f(p,w_{o},w_{i})$ determines how much radiance is reflected at $p$ in direction $w_{o}$ due to irradiance incident at $p$ along $w_{i}$. The factor $\cos\theta_{i}$ comes from Lambert's cosine law: the reflected radiance is proportional to the cosine of the angle between the surface normal and the incident direction. Finally, the integral is taken over the hemisphere $\Omega$. The Monte Carlo method approximates the expectation of a random variable using samples: \begin{equation} E(X) \approx \frac{1}{N}\sum^{N}_{i=1} X_i \end{equation} where $E(X)$ is the expected value of the random variable $X$ and $N$ is the sample size. Applying this estimator to equation \ref{eq:renderingEquation} gives \begin{equation} \langle L_{o}(p,w_{o}) \rangle= \frac{1}{N} \sum^{N}_{i=1}\frac{f(p,w_{o},w_{i})L_{i}(p,w_{i})\cos\theta_{i}}{p(w_i)} \label{eq:renderingEquationWithMonteCarlo} \end{equation} where $p(w_{i})$ is the probability density of the sampled direction $w_{i}$. However, the Monte Carlo estimator is affected by noise, a problem the ray tracing algorithm inherits: for example, to halve the noise in an image rendered by ray tracing, we need to quadruple the number of samples. To estimate the environmental colour, we first cast a ray from each pixel onto the $H_{t}$ surface and then, using hemisphere sampling, randomly reflect the rays toward the environment.
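The Monte Carlo estimator above can be checked numerically for a Lambertian BRDF ($f = \rho/\pi$) under constant incident radiance: with uniform hemisphere sampling, $p(w_i) = 1/2\pi$, and the estimate should converge to the albedo $\rho$. The albedo and sample count below are arbitrary test values:

```python
import numpy as np

rng = np.random.default_rng(42)
rho = 0.7          # Lambertian albedo (arbitrary test value)
N = 200_000        # number of Monte Carlo samples

# Uniform hemisphere sampling about the normal (0, 0, 1): cos(theta) = z is
# uniform in [0, 1], and the pdf over solid angle is 1 / (2*pi).
cos_theta = rng.random(N)
pdf = 1.0 / (2.0 * np.pi)

f = rho / np.pi    # Lambertian BRDF
L_i = 1.0          # constant incident radiance

# <L_o> = (1/N) * sum of f * L_i * cos(theta) / p(w_i)
L_o = np.mean(f * L_i * cos_theta / pdf)
# Analytically, the hemisphere integral of (rho/pi)*cos(theta) equals rho.
```

Running this with more samples shrinks the error as $1/\sqrt{N}$, which is exactly the noise/sample-count trade-off mentioned above.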
As a result, images of the environment are captured for various levels/depths of ray reflection. In this study, we use up to 3 reflection rays (1 to 3) with a single sample each, as shown in figure \ref{fig:environmentColourExtraction}. Because we do not calculate all the ray reflections within the environment, some pixel locations are left without intensity values; an example can be seen in figure \ref{fig:environmentColourSample}. Therefore, we use a non-uniform interpolation algorithm \cite{Thvenaz1999ImageIA} to approximate the missing values in the obtained environmental intensity images $E^{r}_{t}$, where $r$ corresponds to the number of ray reflections. \begin{figure}[ht] \begin{center} \includegraphics[width=0.3\linewidth]{image/EnvironmentRay1} \includegraphics[width=0.3\linewidth]{image/EnvironmentRay2} \includegraphics[width=0.3\linewidth]{image/EnvironmentRay3} \end{center} \caption{Extraction of environment intensities in 3 different ways: (a) extract the colour directly ($c_1$); (b) reflect the ray once and combine the intensities ($c_1 \cdot c_2$); (c) reflect once more and combine all the colours ($c_1 \cdot c_2 \cdot c_3$).} \label{fig:environmentColourExtraction} \end{figure} \begin{figure}[ht] \begin{center} \includegraphics[width=0.30\linewidth]{image/Sphere_env_r1} \includegraphics[width=0.30\linewidth]{image/Sphere_env_r2} \includegraphics[width=0.30\linewidth]{image/Sphere_env_r3}\\ \subcaption{Environment colour extracted for the sphere} \includegraphics[width=0.30\linewidth]{image/fixedSphereEvn_R1} \includegraphics[width=0.30\linewidth]{image/fixedSphereEvn_R2} \includegraphics[width=0.30\linewidth]{image/fixedSphereEvn_R3}\\ \subcaption{The interpolated images of environment colour} \end{center} \caption{Sample images of environment colour captured by rays R1--R3 and their interpolated images} \label{fig:environmentColourSample} \end{figure} In figure \ref{fig:environmentColourSample}, we see that the more times a ray reflects, the dimmer the pixels
become. This is a consequence of the ray tracing formulation: the first ray $r1$ has more influence on the final pixel intensity than ray $r3$, so with more ray reflections the contributed intensity is reduced accordingly. In stage 5, we generate the new input images $A_{t+1}=A_{t}-E^{r}_{t}$ by subtracting the estimated environmental intensity from the original input images, thereby reducing the inter-reflections. There are three different sets of images, one for each ray reflection depth $r1$, $r2$ and $r3$. \begin{figure}[ht] \begin{center} \includegraphics[width=0.35\linewidth]{image/exampleForOriginalImage_Sphere1} \includegraphics[width=0.35\linewidth]{image/fixedSphereEvn_R1}\\ \includegraphics[width=0.35\linewidth]{image/differenceImage_R1} \end{center} \caption{(Left) Image with inter-reflections, (right) estimated environmental intensity image and (bottom) obtained image without inter-reflections.} \label{fig:differenceImagesSample} \end{figure} Finally, the obtained images, which contain fewer inter-reflections (an example difference image is shown in figure \ref{fig:differenceImagesSample}), are used as input to photometric stereo, generating a new surface $H_{t+1}$. The whole process can be applied iteratively for a certain number of iterations or until the difference $D_{H}=H_{t+1}-H_{t}$ between the new 3D surface and the previous one is less than a given threshold. \begin{figure}[ht] \centering \includegraphics[width=0.3\linewidth]{image/exampleGroundTruth_Sphere.png} \includegraphics[width=0.3\linewidth]{image/exampleGroundTruth_cat.png} \includegraphics[width=0.3\linewidth]{image/exampleGroundTruth_FemaleHead.png} \caption{Ground truth used for rendering and evaluation purposes: synthetic Matlab sphere, Harvard photometric data, scan data.} \label{fig:groundTruthExample1} \end{figure} \begin{figure}[h!]
\centering \includegraphics[width=0.24\linewidth]{image/exampleForOriginalImage_Sphere1} \includegraphics[width=0.24\linewidth]{image/exampleForOriginalImage_Sphere3} \includegraphics[width=0.24\linewidth]{image/exampleForOriginalImage_Sphere4} \includegraphics[width=0.24\linewidth]{image/exampleForOriginalImage_Sphere1}\\ \includegraphics[width=0.24\linewidth]{image/exampleForOriginalImage_Cat1} \includegraphics[width=0.24\linewidth]{image/exampleForOriginalImage_Cat2} \includegraphics[width=0.24\linewidth]{image/exampleForOriginalImage_Cat3} \includegraphics[width=0.24\linewidth]{image/exampleForOriginalImage_Cat4}\\ \includegraphics[width=0.24\linewidth]{image/exampleForOriginalImage_Female1} \includegraphics[width=0.24\linewidth]{image/exampleForOriginalImage_Female2} \includegraphics[width=0.24\linewidth]{image/exampleForOriginalImage_Female3} \includegraphics[width=0.24\linewidth]{image/exampleForOriginalImage_Female4} \caption{Image samples with rendered inter-reflections.} \label{fig:renderedSamples} \end{figure} \section{Experiments and Results} In our comparative evaluation study, three different datasets with ground truth were used: scan data from the Harvard PS dataset \cite{3909}, a face dataset \cite{ArgPet2008}, and synthetic data generated from simulated objects (see figures \ref{fig:groundTruthExample1} and \ref{fig:renderedSamples}). We used the photometric stereo approach to reconstruct the acquired surfaces $H_{t}$, with and without inter-reflection correction, considering different numbers (1 to 3) of ray reflections in the proposed reverted Monte Carlo ray tracing algorithm. We then estimate the height, albedo, and normal errors against the available ground truth, comparing with the classic PS method \cite{Sun2007}.
To calculate the height error we use \begin{equation} \overline{H}_{err} = \frac{1}{n}\Bigg(\sum_{i=1}^{n}|H_{GT} - H_{t}|_i\Bigg) \label{eq:heightError} \end{equation} where $\overline{H}_{err}$ is the mean height error, $H_{GT}$ is the height of the ground-truth surface and $H_{t}$ is the height of the reconstructed surface. For the albedo error we use \begin{equation} \begin{split} P_{err}^{r} = |P_{GT}^{r} - P_{H}^{r}| \\ P_{err}^{g} = |P_{GT}^{g} - P_{H}^{g}|\\ P_{err}^{b} = |P_{GT}^{b} - P_{H}^{b}| \\ P_{err}^{rgb} = \frac{\overline{P_{err}^{r}} + \overline{P_{err}^{g}}+\overline{P_{err}^{b}}}{3} \end{split} \label{eq:albedoError} \end{equation} where $P_{err}^{rgb}$ is the albedo error, computed as the mean of the individual colour channel errors for the red ($P_{err}^{r}$), green ($P_{err}^{g}$) and blue ($P_{err}^{b}$) channels. Likewise, the normal error is calculated as \begin{equation} \begin{split} N_{err}^{x} = |N_{GT}^{x} - N_{H}^{x}| \\ N_{err}^{y} = |N_{GT}^{y} - N_{H}^{y}|\\ N_{err}^{z} = |N_{GT}^{z} - N_{H}^{z}| \\ N_{err}^{xyz} = \frac{\overline{N_{err}^{x}} + \overline{N_{err}^{y}}+\overline{N_{err}^{z}}}{3} \end{split} \label{eq:normalError} \end{equation} where $N_{err}^{xyz}$ denotes the mean normal error over the $x$, $y$ and $z$ axes, $\overline{N_{err}^{x}}$, $\overline{N_{err}^{y}}$ and $\overline{N_{err}^{z}}$ are the mean errors for each axis, and $N_{H}^{xyz}$ is the normal of the reconstructed surface.
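All three error measures reduce to mean absolute differences, so they can be sketched compactly (the small arrays are arbitrary stand-ins for ground-truth and reconstructed maps):

```python
import numpy as np

def height_error(H_gt, H_t):
    # Mean absolute height difference between ground truth and reconstruction.
    return float(np.mean(np.abs(H_gt - H_t)))

def channel_mean_error(gt, rec):
    # Mean absolute error per channel, averaged over the channels; the same
    # formula covers both the RGB albedo error and the x/y/z normal error.
    per_channel = np.mean(np.abs(gt - rec), axis=(0, 1))
    return float(np.mean(per_channel))

H_gt = np.array([[1.0, 2.0], [3.0, 4.0]])
H_t = np.array([[1.5, 2.0], [3.0, 3.0]])
h_err = height_error(H_gt, H_t)  # (0.5 + 0.0 + 0.0 + 1.0) / 4 = 0.375
```

The same `channel_mean_error` helper is applied once with RGB albedo maps and once with the three normal components.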
\begin{figure}[ht] \begin{center} \includegraphics[width=0.34\linewidth]{image/ExampleOfAlbedoError_Sphere1} \includegraphics[width=0.34\linewidth]{image/ExampleOfAlbedoError_Sphere2}\\ \includegraphics[width=0.34\linewidth]{image/ExampleOfAlbedoError_Sphere3} \includegraphics[width=0.34\linewidth]{image/ExampleOfAlbedoError_Sphere4} \end{center} \caption{Example of the estimated albedo using classic PS \cite{Sun2007} and the proposed IRT-PS method using 1-, 2- and 3-ray reflections.} \label{fig:albedoErrorExample} \end{figure} \begin{table}[ht] \begin{center} \caption{Obtained results for the synthetic data, the Harvard and the face PS database comparing the \cite{Sun2007} method with the 3 variations of the proposed IRT-PS approach.} \begin{tabular}{|l|c|c|c|c|} \hline \textbf{Synthetic} & \textbf{PS} \cite{Sun2007} & \textbf{IRTPSr1} & \textbf{IRTPSr2} & \textbf{IRTPSr3} \\\hline \textbf{Height} &18.653 & 18.460 & 18.565 & \textbf{\textcolor{green}{18.436}}\\\hline \textbf{Albedo} &0.082 & 0.082 & \textbf{\textcolor{green}{0.081}} & 0.087\\\hline \textbf{Normal} &0.825 & 0.824 & 0.824 & \textbf{\textcolor{green}{0.823}}\\\hline \textbf{Harvard} & \textbf{PS} \cite{Sun2007} & \textbf{IRTPSr1} & \textbf{IRTPSr2} & \textbf{IRTPSr3} \\\hline \textbf{Height} &8.150 &8.140 &8.097 & \textbf{\textcolor{green}{7.296}}\\\hline \textbf{Albedo} &0.522 & \textbf{\textcolor{green}{0.518}} &0.520&0.521\\\hline \textbf{Normal} &0.840 &0.839 & \textbf{\textcolor{green}{0.838}} &0.840\\\hline \textbf{Face} & \textbf{PS} \cite{Sun2007} & \textbf{IRTPSr1} & \textbf{IRTPSr2} & \textbf{IRTPSr3} \\\hline \textbf{Height} &9.341 &9.181 &9.272 & \textbf{\textcolor{green}{8.835}} \\\hline \textbf{Albedo} &0.235 &0.231 & \textbf{\textcolor{green}{0.230}} &0.241 \\\hline \textbf{Normal} &0.823 &0.823 &0.8221 & \textbf{\textcolor{green}{0.822}} \\ \hline \hline \hline \textbf{Overall} & \textbf{PS} \cite{Sun2007} & \textbf{IRTPSr1} & \textbf{IRTPSr2} & \textbf{IRTPSr3} \\\hline
\textbf{Height} & \textbf{\textcolor{red}{12.049}} & 11.927 & 11.978& \textbf{\textcolor{green}{11.523}}\\\hline \textbf{Albedo} & \textbf{\textcolor{red}{0.280}} & \textbf{\textcolor{green}{0.2772}} &0.2773 &0.283\\\hline \textbf{Normal} & \textbf{\textcolor{red}{0.829}} &0.829 & \textbf{\textcolor{green}{0.8283}}& 0.8288\\\hline \end{tabular} \end{center} \label{tb:sphereError} \end{table} \begin{figure}[ht] \begin{center} \includegraphics[width=0.475\linewidth]{image/Bar_SphereHeightError} \includegraphics[width=0.475\linewidth]{image/Bar_SphereAlbedoError}\\ \includegraphics[width=0.475\linewidth]{image/Bar_SphereNormalError} \subcaption{Overall results for the synthetic database: (left) height error, (middle) albedo error, (right) normal error} \includegraphics[width=0.475\linewidth]{image/Bar_CatHeightError} \includegraphics[width=0.475\linewidth]{image/Bar_CatAlbedoError}\\ \includegraphics[width=0.475\linewidth]{image/Bar_CatNormalError} \subcaption{Overall results for the face database \cite{ArgPet2008}: (left) height error, (middle) albedo error, (right) normal error} \includegraphics[width=0.475\linewidth]{image/Bar_FemaleHeightError} \includegraphics[width=0.475\linewidth]{image/Bar_FemaleAlbedoError}\\ \includegraphics[width=0.475\linewidth]{image/Bar_FemaleNormalError} \subcaption{Overall results for the Harvard database \cite{3909}: (left) height error, (middle) albedo error, (right) normal error} \end{center} \caption{Overall results of the performed experiments, demonstrating that r1 and r3 are the best methods for albedo and height estimation, respectively.} \label{chart:overallErrorChart} \end{figure} From table~\ref{tb:sphereError} and the charts in figure \ref{chart:overallErrorChart}, we can see that the mean height, albedo, and normal errors are overall lower with our approach than with the classic photometric stereo method. In table~\ref{tb:sphereError}, values highlighted in red are the average overall results of the \cite{Sun2007} photometric stereo method.
The best results from our IRT-PS approach are highlighted in green. From the charts in figure \ref{chart:overallErrorChart}, we can see the general trend of the height error: results improve with each additional ray, and the best result is achieved with ray 3. Likewise, the best results for albedo and normal are given by ray 2. The indirect illumination of the environment captured by rays R3 and R2 was able to reduce the inter-reflection effect in the original images. Furthermore, looking at the overall table and comparing with PS \cite{Sun2007}, we again see that our method improves all the estimates. The greatest improvement is seen in the height error, followed by the normal and finally the albedo error. This suggests that improving the estimate of the indirect illumination should result in more accurate and detailed reconstructed surfaces. \section{Conclusions} In this work, a novel iterative method considering inter-reflections due both to concavities and to the environment was proposed. The IRT-PS approach iteratively applies photometric stereo and a reverted Monte Carlo ray tracing algorithm, reconstructing the observed surface and separating the indirect from the direct lighting. A comparative study evaluated the reconstruction accuracy of the proposed solution on three different datasets, and the overall results demonstrate an improvement over classic approaches that do not consider environmental inter-reflections. \\\\ \section*{Acknowledgements} \noindent This work is co-funded by NATO within the WITNESS project under grant agreement number G5437. The Titan X Pascal used for this research was donated by the NVIDIA Corporation. {\small \bibliographystyle{ieee}
\section{Introduction} Infertility is a global health issue \cite{infertilityAroundTheGlobe}. The number of couples reporting infertility and referring to assisted reproductive technology (ART) centers for infertility workup and care in Europe is increasing by 8--9\% every year \cite{assistedReprodTechInEurope2009}. One of the most common treatments for infertile couples is In Vitro Fertilization (IVF). It consists of controlled ovarian hyperstimulation, followed by ovum pickup, fertilization, and embryo culture for 2--6 days under controlled environmental conditions, leading to intrauterine transfer or freezing of the embryos identified by embryologists as having a good implantation potential. The clinical effectiveness of IVF varies across regions, with reported efficiency ranging from 20\% to 40\%, and is mainly hampered by the current limitations of embryo quality assessment methods \cite{IVFLowEfficacy}. Indeed, the main embryo quality assessment method is based on morphological evaluation, which consists of daily static observation under the microscope. Although a consensus exists for the morphological evaluation of embryo development, this method still suffers from a lack of predictive power and from inter- and intra-operator variability \cite{highVari1,highVari2,highVari3}. Time-lapse imaging incubators (TLI) were first released on the IVF market around 2010. They provide continuous monitoring of embryo development by taking photographs of each embryo at regular intervals throughout its development, ultimately compiling a video that gives a dynamic overview of embryonic in vitro development. This technology allows very stable culture conditions and leads to a dynamic annotation of embryonic developmental events, called morphokinetic (MK) parameters, such as, for instance, cell divisions, blastocyst formation, and expansion.
Although several studies have reported an association between MK parameters and implantation potential, the clinical usefulness of TLI remains debated \cite{timeLapseCultureWithMorpho,Paulson2018,Armstrong15}. Nevertheless, TLI still appears to be the most promising solution to improve embryo quality assessment methods, and subsequently the clinical efficiency of IVF. In particular, the unprecedentedly high volume of high-quality images produced by TLI systems could be leveraged using modern Artificial Intelligence (AI) methods, such as deep learning (DL). Indeed, the recent emergence of DL has revolutionized many fields, including games \cite{AlphaGo}, computer vision \cite{alexnet}, language processing \cite{attentionIsAllYouNeed}, and protein folding \cite{alphafold}, and its advent has set high expectations on its potential for medicine and biology and called for concrete applications. Importantly, the question of data sharing is at the center of DL strategies being applied to health data: a model cannot be reproduced and evaluated externally if the dataset used to train it is not made available. The main reason behind this common absence of data sharing probably has to do with concerns about data security and, perhaps to a lesser extent, with scientific competition. The consequence of this rather ``black box'' development of DL methods in IVF is a lack of consensus about which DL architecture to use, with private companies selling and implementing solutions that have not been independently evaluated by the community, raising questions about potential bias and fairness issues, for example \cite{interpretableIVF}. Data sharing is therefore of utmost importance to properly implement DL in IVF practice \cite{interpretableIVF}. In this context, we are in dire need of a reference time-lapse dataset and a baseline analysis with the most common DL algorithms, similar to what has been done in other fields \cite{mimic,chexpert,retina,segDataset}.
Several teams have applied DL models in IVF, but with important limitations: either the number of videos was lower than 300, or the total number of images composing the videos was under 150k (\cref{datasetChar}) \cite{WeakSupMorphoKin,predSuccRate,cellCount,BlastCellCount}. \begin{table*}[ht!b] \noindent\makebox[\textwidth]{% \begin{tabular}{c|c|c|c|c|c} \toprule Author&Year&Video nb.&Image nb.&Phases used&Accuracy obtained \\ \midrule Khan et al.&2016&256&150k&1-5 cells&$87\%$ \\ Moradi Rad et al.&2018&-&224&1-5 cells&$82.4\%$ \\ Silva-Rodr\'{i}guez et al.&2019&263&100k&1-5 cells&$80.9\%$\\ Kumar Kanakasabapathy et al.&2019&-&8k&Blasto/No Blasto&$96\%$ \\ H Ng et al.&2018&-&600k&tStart to t4+&$84.6\%$ \\ Liu et al.&2019&170&60k&tStart to t4+&$83.8\%$ \\ Lau et al.&2019&1303&145k&tStart to t4+&$83.65\%$ \\ \bottomrule \end{tabular} } \caption{Dataset characteristics of previous works.} \label{datasetChar} \end{table*} Additionally, these studies used a limited number of embryonic stages / MK parameters to identify with DL. Finally, and as stated above, the studies did not share their datasets, making their analyses impossible to recapitulate. A shared dataset should be large enough to train powerful deep learning models, contain full videos to make full use of the TLI information, and have highly detailed annotations covering a large number of development phases to maximize potential clinical use. Here, we propose a unique reference benchmark that will allow the community to evaluate and compare morphokinetic models and will be a step towards deep-learning-powered IVF. The dataset contains 756 full videos and a total of 337k images, which was sufficient to train and evaluate deep learning models. We applied ResNet, LSTM, and ResNet-3D architectures to our dataset and demonstrate that they outperform our previous algorithmic approach for automatically annotating development phases in TLI data \cite{magalie}.
Of note, we propose highly detailed annotations with 16 different development phases: not only the early cell division phases (t2--t5+) considered in previous work, but also late cell divisions (t6 to t9+), phases after morulation (tM to tHB), and very early phases (tPNa and tPNf), which, to the best of our knowledge, have never been reported. \section{Methods} \subsection{Dataset collection} Between 2011 and 2019, 716 infertile couples underwent Intracytoplasmic Sperm Injection (ICSI) cycles in our University-based IVF center and had all their embryos cultured and monitored up to the blastocyst stage with a TLI system. We randomly selected a subset of 873 videos, approximately 10\% of all the videos recorded, because of the limited computational budget at the time of the study. We subsequently extracted all focal planes using an Application Programming Interface (API) provided by the TLI manufacturer (Vitrolife©). We acknowledge that only ICSI cycles were included in our time-lapse devices over that period, as we considered that conventional IVF would lead to different developmental timings compared with ICSI. We do not routinely use assisted hatching. There were no major lab changes over the study period. The Local Institutional Review Board (GNEDS) approved this project. All patients agreed with the anonymous use of their clinical data. Patient treatment and the embryo culture protocol were described in a previous study \cite{Freour2015}. In brief, embryo culture was performed from fertilization (day 1) up to the blastocyst stage (day 5 or day 6) at 37$^{\circ}$C with 5\% O$_2$ and 6\% CO$_2$ in a sequential culture medium, i.e. G1 plus (Vitrolife©, Sweden) from day 0 to day 3, followed by G2 plus (Vitrolife©, Sweden). We acknowledge that culture media might impact embryo development and have an evolving composition throughout embryo development. However, the available literature does not support the concept of medium-dependent morphokinetic patterns \cite{basile2013type}.
Although we agree that there is a need to clarify IVF culture media composition to enhance our understanding of embryo development \cite{sundeSerious2016}, there is to our knowledge no evidence that the content of commercial culture media changes over time in ways that are important enough to consider. The images were acquired with a TLI system (Embryoscope©, Vitrolife©, Sweden) every 10 to 20 min by a 1280 $\times$ 1024 resolution camera under a 635 nm LED light source passing through Hoffman's contrast modulation optics. To reduce the noise in the annotations, we chose to keep only the videos showing at least 6 distinct development stages to train and test our models. The final number of videos in our dataset is 756. Among these, 526 correspond to embryos considered morphologically viable and subsequently chosen for transfer, while the other 230 correspond to embryos discarded because of poor development. The information about embryo viability is not used in this work, as the purpose is to focus solely on morphokinetic parameter prediction. These discarded embryos allowed us to study a variety of abnormal embryonic features (abnormal morphology, abnormal fertilization/number of pronuclei, necrosis, fragmentation, developmental delay, etc.) and problems during image acquisition (sharpness, change of focus, brightness, etc.). All annotations on the videos were made by 2 qualified and experienced embryologists undergoing regular internal quality control. Annotations were made following the recommendations of Ciray et al. for the detection of stages \cite{ciray}, i.e. 16 phases corresponding to 16 cellular events: tPB2, tPNa, tPNf, t2, t3, t4, t5, t6, t7, t8, t9+, tM, tSB, tB, tEB and finally tHB \cite{ciray}. Each phase refers to a specific cellular event, i.e.
polar body appearance for phase PB2 (pPB2), pronuclei appearance and disappearance (pPNa and pPNf), blastomere division from the 2-cell stage to the $>8$-cell stage (p2 to p9+), compaction (phase pM), blastocyst formation (phases pSB, pB), expansion and hatching (phases pEB and pHB). We started prospective annotation of the database according to this reference work in 2014, while annotations made before 2014 were retrospectively checked. Although we included all available focal planes in the dataset, we only used the center focal plane in our experiments. \subsection{Models} Several baseline models were used to perform this classification task and compared using the defined metrics on the annotated dataset. The first model is designed for isolated image classification; the next two models allow the classification of images in a sequence. They are illustrated in \cref{models} and detailed below. \begin{figure}[t] \centering \subfloat[The ResNet model\label{res18}]{ \centering \includegraphics[width=0.9\textwidth]{figures/CNN.png} }\\ \subfloat[The ResNet-3D model\label{res3D}]{ \centering \includegraphics[width=0.9\textwidth]{figures/CNN3D.png} }\\ \subfloat[The ResNet-LSTM model\label{lstm}]{ \centering \smallskip \includegraphics[width=0.9\textwidth]{figures/CNN-LSTM.png} } \caption{The different models evaluated. ResNet takes an isolated image as input and outputs a vector of class probabilities. ResNet-LSTM and ResNet-3D take as input a short sequence of images and output a sequence of probability vectors.\label{models}} \end{figure} The ResNet Model. Residual models are widely used for the classification of isolated images, for example on ImageNet \cite{resnet}. This model is composed exclusively of convolution layers and contains residual connections every 2 layers. The resolution and the number of channels of the feature maps are respectively divided and multiplied by 2 every 4 layers.
After the convolutions, an average-pooling layer produces a vector of features, to which the final softmax layer is applied to make predictions. We use the ResNet variant proposed by He et al. \cite{resnet}. The ResNet-LSTM. This model is the combination of the ResNet model with an LSTM \cite{lstm}. The LSTM model has been designed to model sequences and has been successfully applied in tasks such as speech recognition \cite{LSTMOnVoice}. Pre-activations of the penultimate layer of ResNet are used as a feature vector and are passed to a bi-directional two-layer LSTM that models the evolution through time steps. The size of each hidden unit is 1024. A linear layer after the LSTM calculates the class scores for each image. The ResNet-3D \cite{resnet3D} is a variant of ResNet designed for the classification of image sequences. This model processes the image sequence by merging temporal information at all layers in the network, allowing both late and early merging of information. For this application, the max-pooling and stride parameters are set to 1 in the temporal dimension. The removal of temporal aggregation is necessary to obtain one prediction per frame in the input sequence. We use the variant ‘R2plus1d-18’ proposed by Hara et al. \cite{resnet3D}. \subsection{Metrics} Several metrics are defined to evaluate the models in our context. The first metric we introduce is the Pearson product-moment correlation coefficient $r$. It measures the correlation between the predicted transition time and the actual time of the corresponding transition. Computing it requires first applying the Viterbi algorithm so that the predictions of the models are made consistent throughout the video, as is often done in the literature \cite{WeakSupMorphoKin,cellCount,BlastCellCount,ICLRMorpho}.
The models we used do not have a strong constraint forcing them to respect the chronology of embryo development phases and can sometimes predict biologically impossible events like backward transitions ($p3 \rightarrow p2$, $pM \rightarrow p9+$, etc.). The Viterbi algorithm makes the prediction consistent by combining the transition probabilities produced by the model with the knowledge of actual biologically possible transitions. Once the predictions are made biologically plausible, we compute the correlation $r$ between the timings predicted by the models and the actual timings: \begin{equation} r = \frac{C}{\sigma_{pred}\,\sigma_{gt}}, \end{equation} where $C$, $\sigma_{pred}$, and $\sigma_{gt}$ are respectively the covariance between the predicted and actual transition times, the standard deviation of the predicted times, and the standard deviation of the actual times. For this metric, only the transitions present in both ground-truth and predictions are taken into account. We determined that this metric is positively biased, which led us to introduce three more metrics: the accuracy $p$, the Viterbi accuracy $p_v$, and the temporal accuracy $p_t$. The accuracy $p$ is one of the most widely used metrics in image classification and is defined as the proportion of images correctly labeled by the model: \begin{equation} p = \frac{N}{N_{total}}, \end{equation} where $N$ and $N_{total}$ are respectively the number of images correctly classified and the total number of images. We also define a variant, the Viterbi accuracy $p_v$, which consists in applying the Viterbi algorithm beforehand to make the prediction consistent. We use the following formula: \begin{equation} p_v= \frac{N_v}{N_{total}}, \end{equation} where $N_v$ is the number of correctly classified images once the raw predictions have been refined using the Viterbi algorithm. Finally, we define the temporal accuracy $p_t$ as the average proportion of phase transitions that are predicted sufficiently close to the corresponding actual transition.
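The Viterbi refinement described above can be sketched in pure Python as follows. This is an illustrative reimplementation, not the authors' code; in particular, the transition model simply allows the phase index to stay or move forward at no extra cost, which is an assumption.

```python
import math

def viterbi_monotone(probs, eps=1e-12):
    """Most likely per-frame phase sequence under the constraint that
    phases never go backwards (forward jumps remain allowed, since a
    short phase can fall entirely between two consecutive frames).

    probs: list of frames, each a list of per-phase probabilities.
    Returns a non-decreasing list of phase indices, one per frame."""
    n_frames, n_phases = len(probs), len(probs[0])
    # best[t][j]: best log-score of a non-decreasing path ending in phase j at frame t
    best = [[-math.inf] * n_phases for _ in range(n_frames)]
    back = [[0] * n_phases for _ in range(n_frames)]
    for j in range(n_phases):
        best[0][j] = math.log(probs[0][j] + eps)
    for t in range(1, n_frames):
        for j in range(n_phases):
            # the predecessor phase i must satisfy i <= j (no backward transition)
            i_best = max(range(j + 1), key=lambda i: best[t - 1][i])
            best[t][j] = best[t - 1][i_best] + math.log(probs[t][j] + eps)
            back[t][j] = i_best
    # backtrack from the best final phase
    j = max(range(n_phases), key=lambda jj: best[-1][jj])
    path = [j]
    for t in range(n_frames - 1, 0, -1):
        j = back[t][j]
        path.append(j)
    return path[::-1]
```

For per-frame probabilities whose raw argmax would be the biologically impossible sequence 0, 1, 0, 2, the decoded path is the non-decreasing 0, 1, 1, 2.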
By “sufficiently close”, we mean that the time between the predicted transition and the actual transition is below a threshold. Therefore, this metric requires that the predictions are first made consistent using the Viterbi algorithm. It is computed as follows: \begin{equation} p_t=\frac{T-T_{far}}{T}, \end{equation} where $T$ is the total number of phase transitions and $T_{far}$ is the number of transitions predicted too far away in time from their actual timing. For example, consider a video containing $T = 6$ transitions (p2 $\rightarrow$ p3 $\rightarrow$ p4 $\rightarrow$ p5 $\rightarrow$ p6 $\rightarrow$ p7 $\rightarrow$ p8) where the model has predicted the sequence (p2 $\rightarrow$ p3 $\rightarrow$ p4 $\rightarrow$ p5 $\rightarrow$ p6 $\rightarrow$ p8). The model has skipped phase p7 but has predicted phase p8; this is likely due to the length of phase p7, which can be very short. As, in reality, the embryo cannot skip a phase (the fact that some phases cannot be seen in the video is due to the large time interval between successive images), we consider that the model has implicitly predicted p7 with the same timestamp as the one predicted for p8. Now, suppose the transitions (p2 $\rightarrow$ p3) and (p3 $\rightarrow$ p4) are predicted too far away from the corresponding actual transitions, i.e. the first image where the model has assigned the label of the new phase and the actual image corresponding to the new phase are separated by a time interval greater than a threshold $\theta$. Then, we have $T_{far}=2$ and the temporal accuracy is $p_t=(6-2)/6 \approx 0.67$. The threshold $\theta$ needs to depend on the phase because some phases are more difficult to locate precisely in time than others. For this, we use intra-operator standard deviations extracted from Martínez-Granados et al. \cite{stdsThres} to obtain thresholds adapted to the intrinsic ambiguity of each phase.
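The temporal accuracy computation, including the rule that a skipped phase inherits the timestamp of the next predicted phase, could look like this (a sketch with illustrative names, not the authors' implementation):

```python
def temporal_accuracy(gt_times, pred_times, thresholds, phase_order):
    """Fraction of ground-truth transitions predicted within a per-phase
    threshold (in hours).  A phase skipped by the model inherits the
    timestamp of the next phase the model did predict, since the embryo
    cannot actually skip a phase."""
    n_ok = n_total = 0
    for idx, phase in enumerate(phase_order):
        if phase not in gt_times:
            continue
        # resolve skipped phases: borrow the next predicted timestamp
        t_pred = None
        for later in phase_order[idx:]:
            if later in pred_times:
                t_pred = pred_times[later]
                break
        n_total += 1
        if t_pred is not None and abs(t_pred - gt_times[phase]) <= thresholds[phase]:
            n_ok += 1
    return n_ok / n_total
```

With six ground-truth transitions, a skipped p7 resolved through p8, and two transitions outside their thresholds, this returns $4/6 \approx 0.67$, matching the worked example above.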
In that work, the authors sent time-lapse videos of embryo development to several IVF centers as an external quality control program and notably studied the intra-operator variance. Using their supplementary data, we compute the standard deviation $\sigma_p$ observed between operators for each phase $p$. The threshold $\theta_p$ we use for phase $p$ is simply set to $\sigma_p$. The standard deviations for each phase are available in \cref{stds}. The $p$ and $p_v$ metrics have the disadvantage of penalizing models that place phase transitions far from the true transitions as much as models that are off by only a few frames. The temporal accuracy metric takes this into account: a model that predicts a phase change close to the actual phase change is favored over a model that is far from the truth. \begin{table}[ht!b] \centering \begin{tabular}{*1c|*{7}c}\toprule Phase & $pPNa$& $pPNf$& $p2$ & $p3$& $p4$& $p5$& $p6$\\ $\sigma_p$& $1.13$ &$0.50$&$0.91$ &$1.81$ &$1.34$ &$1.49$&$1.61$\\ \midrule Phase & $p7$& $p8$& $p9+$& $pM$& $pSB$& $pB$ & $pEB$ \\ $\sigma_p$ &$2.93$ &$5.36$ &$4.42$ &$5.46$ &$3.78$ &$3.29$ &$4.85$ \\ \bottomrule \end{tabular} \caption{Inter-operator standard deviation of annotations in hours. Computed using data from \cite{stdsThres}.} \label{stds} \end{table} \subsection{Experimental setup} To show the potential use of this dataset, we trained several deep neural networks on our dataset and evaluated them using cross-validation ($k = 8$). Details of the experiments are given below. Pre-processing and video selections. No data augmentation was used. Images were reduced from $500 \times 500$ to $224 \times 224$ to reduce GPU memory usage, as is usually done for deep learning models \cite{alexnet}. Some embryos grow slowly and the recording of the video only lasts a fixed amount of time, making the early phases over-represented.
To prevent the model from being too biased towards early phases, we kept only the videos showing at least 6 distinct stages of development. The final number of videos used in the experiments was 756. Meta-parameters. Each training batch is composed of 10 sequences of 4 consecutive images. The ResNet model processes each image independently and therefore reads the $10\times4 = 40$ images as if they were independent images. An equal number of input sequences and an equal sequence length for the three models allow a fair comparison. The position of each sequence within the video is chosen randomly. The loss function was optimized with SGD, with a constant learning rate of 0.001 and a momentum of 0.9. We applied dropout \cite{dropout} ($p = 0.50$) on the last layer of each model during training. During test and validation, to reduce GPU memory usage, the evaluation batch size was set to 150. The models were not evaluated over the entire video at once but over 150-frame sequences. Since each video contains about 500 frames, a few inferences are sufficient to analyze an entire video. Let $N$ be the total number of training frames and $L$ be the number of frames in a sequence. An epoch ends when the model has seen $N/L$ sequences. To select the sequences, we used uniform random sampling with replacement, i.e. the model may see the same image several times and may not see some images within an epoch. For each split, we used 664, 47, and 45 videos for training, validation, and test. This represents respectively 297k, 20k, and 20k images and allows us to detect an absolute variation of 1.5\% during evaluation, with a base accuracy ranging from 65\% to 70\%, a significance level of 0.05, and a power of 0.9, according to a power test. A model is trained for 100 epochs. The model with the best validation performance is then restored and evaluated on the test set. Initializing weights.
The ResNet and ResNet-3D weights are pre-trained on ImageNet \cite{alexnet} and Kinetics \cite{resnet3D} respectively. The weights of the ResNet component of ResNet-LSTM are also pre-trained on ImageNet and the weights of the LSTM components are initialized randomly. \section{Results} \paragraph{Compiling a fully annotated, open dataset} To fill an important gap in the implementation of deep learning in IVF, we sought to build an open resource of fully annotated images of human preimplantation development. The dataset contains 756 videos with annotations for 16 morphokinetic events, covering the whole development of the embryo from day 1 to day 5-6 (\cref{workflow}). This is accompanied by 4 custom evaluation metrics and 3 baseline model performances, along with cross-validation splits to reproduce our results and to rigorously evaluate new models and methods. \begin{figure*}[ht!b] \centering \includegraphics[width=0.8\textwidth]{figures/flowchart_datasetpaper.png} \caption{The time-lapse embryo dataset. This dataset contains 756 videos with annotations for 16 morpho-kinetic events, accompanied by 4 custom evaluation metrics and 3 baseline model performances, along with cross-validation splits.} \label{workflow} \end{figure*} Deep learning models are heavily dependent on data and might provide poor performance on a specific class if the amount of input corresponding to it is too small. This is why, for each phase, we provide at least several thousand images, even for short phases like pPNf, p3, or p5 (\cref{fig:classStat} (a)). The only exception is pHB, as it is difficult to capture, the time-lapse recording often being interrupted before reaching that stage. Nevertheless, we still provide more than a hundred images for this phase. Most videos have at least 8 annotated phases and approximately 380 videos have more than 13 phases annotated, illustrating the richness of annotation of our dataset (\cref{fig:classStat} (b)).
\begin{figure}[ht!b] \subfloat[]{\includegraphics[width=0.75\textwidth]{figures/figure2_a.png}}\\ \subfloat[]{\includegraphics[width=0.75\textwidth]{figures/figure2_b.png}} \caption{Statistics of the dataset. (a) The number of images per phase in the dataset. (b) The number of phases per video in the dataset.} \label{fig:classStat} \end{figure} Sample images allow one to have a clear view of the content of the dataset and the annotations associated with the images (\cref{fig:phases}). Note that, depending on their position in the well, embryos can sometimes be partially occluded, which is quite common in time-lapse videos. However, even when a part of the embryo is hidden, the images are sufficient to identify the development phase. The behavior of an AI-based model on a specific type of input (like partial views or artifacts) is conditioned on its presence or absence in the training set. In our study, such events were included in the training set; the model should therefore not be affected by these outlier images.
\begin{figure*}[ht!b] \makebox[\textwidth]{ \begin{tabular}{cccccccc} \includegraphics[width=0.110\textwidth,trim={0.5cm 0.5cm 0.5cm 0.5cm},clip]{phases2/tPB2_small.png} & \includegraphics[width=0.110\textwidth,trim={0.5cm 0.5cm 0.5cm 0.5cm},clip]{phases2/tPNa_small.png} & \includegraphics[width=0.110\textwidth,trim={0.5cm 0.5cm 0.5cm 0.5cm},clip]{phases2/tPNf_small.png} & \includegraphics[width=0.110\textwidth,trim={0.5cm 0.5cm 0.5cm 0.5cm},clip]{phases2/t2_small.png} & \includegraphics[width=0.110\textwidth,trim={0.5cm 0.5cm 0.5cm 0.5cm},clip]{phases2/t3_small.png} & \includegraphics[width=0.110\textwidth,trim={0.5cm 0.5cm 0.5cm 0.5cm},clip]{phases2/t4_small.png} & \includegraphics[width=0.110\textwidth,trim={0.5cm 0.5cm 0.5cm 0.5cm},clip]{phases2/t5_small.png} & \includegraphics[width=0.110\textwidth,trim={0.5cm 0.5cm 0.5cm 0.5cm},clip]{phases2/t6_small.png} \\ pPB2 & pPNa & pPNf & p2 & p3 & p4 & p5 & p6 \\ Second polar & Pro-nuclei & Pro-nuclei & \multirow{2}{*}{$2$ cells} &\multirow{2}{*}{$3$ cells} & \multirow{2}{*}{$4$ cells} & \multirow{2}{*}{$5$ cells} & \multirow{2}{*}{$6$ cells} \\ body detached & appearance & disappearance & \\ \includegraphics[width=0.110\textwidth,trim={0.5cm 0.5cm 0.5cm 0.5cm},clip]{phases2/t7_small.png} & \includegraphics[width=0.110\textwidth,trim={0.5cm 0.5cm 0.5cm 0.5cm},clip]{phases2/t8_small.png} & \includegraphics[width=0.110\textwidth,trim={0.5cm 0.5cm 0.5cm 0.5cm},clip]{phases2/t9+_small.png} & \includegraphics[width=0.110\textwidth,trim={0.5cm 0.5cm 0.5cm 0.5cm},clip]{phases2/tM_small.png} & \includegraphics[width=0.110\textwidth,trim={0.5cm 0.5cm 0.5cm 0.5cm},clip]{phases2/tSB_small.png} & \includegraphics[width=0.110\textwidth,trim={0.5cm 0.5cm 0.5cm 0.5cm},clip]{phases2/tB_small.png} & \includegraphics[width=0.110\textwidth,trim={0.5cm 0.5cm 0.5cm 0.5cm},clip]{phases2/tEB_small.png} & \includegraphics[width=0.110\textwidth,trim={0.5cm 0.5cm 0.5cm 0.5cm},clip]{phases2/tHB_small.png} \\ p7 & p8 & p9+ & pM & pSB & pB & pEB 
& pHB \\ \multirow{2}{*}{$7$ cells} & \multirow{2}{*}{$8$ cells} & $9$ cells & End of & Start of & Full & Expanded & Hatching \\ & & or more & compaction & blastulation& blastocyst & blastocyst & blastocyst \\ \end{tabular} } \caption{Illustrations of the $16$ development phases used. Contrast and luminosity are standardized for better visualization.\label{fig:phases}} \end{figure*} Using the API provided by Vitrolife, we could extract full-length videos and all focal planes available, highlighting the importance of data accessibility. The annotations made by experts recapitulate embryo development from tPB2 to tHB instead of focusing solely on early cleavages (t2 to t5+), as is usually done in the literature. Finally, 526 videos of our dataset correspond to embryos that were evaluated as compatible with clinical use, i.e. transferred or frozen, and are accompanied by a detailed outcome annotation (results of HCG test, presence/absence of fetal heartbeat, gestational sacs, and live-born information). This means our dataset can also be used by researchers to evaluate outcome prediction models and test cross-center generalizability. Note that the outcome information is considered patient information, and therefore requires an agreement (MTA) with the University-Hospital of Nantes (see authors for details). \paragraph{Baseline model training} The first step in applying deep learning to a dataset is to train baseline models using popular deep-learning architectures. The metrics associated with the ResNet, ResNet-LSTM, and ResNet-3D analysis of our dataset are compiled in \cref{full_res}.
\begin{table*}[ht!b] \makebox[\textwidth][c]{ \begin{tabular}{cccccc} \toprule Model&Split&$r$&$p$&$p_v$&$p_t$\\ \midrule \multirow{8}{*}{ResNet}&1&\underline{$0.958$}&$0.645$&$0.687$&$0.284$\\ &2&$0.894$&$0.57$&$0.594$&$0.292$\\ &3&$0.964$&\underline{$0.668$}&\underline{$0.698$}&$0.423$\\ &4&$0.976$&$0.655$&$0.708$&$0.447$\\ &5&$0.968$&$0.707$&$0.732$&$0.469$\\ &6&$0.981$&$0.702$&$0.747$&\underline{$0.365$}\\ &7&$0.97$&$0.687$&$0.731$&$0.219$\\ &8&$0.974$&$0.671$&$0.711$&$0.469$\\ \cline{2-6} & Mean & $0.961\pm0.026$&$0.663\pm0.041$&$0.701\pm0.044$&$0.371\pm0.09$\\ \hline \multirow{8}{*}{ResNet-LSTM}&1&$0.98$&$0.663$&$0.681$&$0.375$\\ &2&$0.96$&$0.592$&$0.591$&$0.625$\\ &3&$0.989$&$0.714$&$0.725$&$0.611$\\ &4&$0.969$&$0.677$&\underline{$0.701$}&$0.851$\\ &5&$0.988$&$0.733$&$0.737$&$0.844$\\ &6&$0.98$&$0.709$&$0.722$&$0.462$\\ &7&\underline{$0.976$}&$0.699$&$0.705$&$0.135$\\ &8&$0.974$&\underline{$0.691$}&$0.703$&\underline{$0.567$}\\ \cline{2-6} & Mean & $\mathbf{0.977\pm0.009}$&$0.685\pm0.041$&$0.696\pm0.043$&$0.559\pm0.223$\\ \hline \multirow{8}{*}{ResNet-3D}&1&$0.981$&$0.687$&$0.703$&$0.486$\\ &2&$0.917$&$0.619$&$0.641$&$0.875$\\ &3&\underline{$0.977$}&$0.711$&$0.743$&$0.75$\\ &4&$0.981$&$0.716$&$0.756$&$0.851$\\ &5&$0.984$&$0.736$&$0.76$&$0.547$\\ &6&$0.982$&$0.742$&$0.777$&$0.553$\\ &7&$0.959$&\underline{$0.701$}&\underline{$0.733$}&$0.469$\\ &8&$0.979$&$0.726$&$0.767$&\underline{$0.741$}\\ \cline{2-6} & Mean & $0.97\pm0.021$&$\mathbf{0.705\pm0.036}$&$\mathbf{0.735\pm0.042}$&$\mathbf{0.659\pm0.154}$\\ \bottomrule \end{tabular}} \caption{Performance obtained after the 8-fold cross-validation. Each row either indicates the performance of a model on one split or the mean performance across all splits accompanied by standard deviation. $r$ is correlation, $p$ is accuracy, $p_v$ is Viterbi accuracy and $p_t$ is temporal precision. 
For each metric, bold indicates the best mean performance on the given metric and the underline indicates the value closest to the mean performance of the model.\label{full_res}} \end{table*} The first metric we considered is the correlation metric, which is close to 1 for all deep-learning approaches. This metric is poorly informative. Indeed, one can notice the high bias and low variance of the correlation metric $r$, which pushes all values close to 1. This is because the predicted transitions are forced to be in a biologically plausible order after applying the Viterbi algorithm, implying a minimum level of alignment with the actual transitions, hence the high correlation values. To get a better idea of the performance of the deep learning algorithms, we focused our analysis on the other metrics. The accuracy provides a greater range of values and shows that ResNet, ResNet-LSTM, and ResNet-3D are respectively able to correctly classify on average 66.3\%, 68.5\%, and 70.5\% of the images of the test videos. Logically, the accuracy with Viterbi $p_v$ yields higher values than the regular accuracy $p$ because the model’s predictions are first made biologically plausible. Finally, the temporal accuracy shows that ResNet, ResNet-LSTM, and ResNet-3D predict respectively 37.1\%, 55.9\%, and 65.9\% of the transitions at a timing close to the real one. These 3 metrics highlight the superiority of ResNet-LSTM and ResNet-3D over ResNet. This is explained by the fact that ResNet processes images in an isolated manner, like an embryologist having a static view of the embryo using a microscope, whereas ResNet-LSTM and ResNet-3D process several images together, like an embryologist using a TLI system, and are therefore able to identify more accurately the development phase the embryo is currently in. This highlights the relevance of proposing a dataset composed of full videos instead of isolated images, as models designed for this kind of input can improve performance.
Globally, we observed that the models achieved good performance, illustrating that our dataset is sufficient in size and quality to train and evaluate deep learning models. \paragraph{Comparison of deep learning approaches with the ad-hoc algorithmic annotation} We then sought to compare the performance obtained with deep learning methods with that of the ad-hoc method developed in our previous work \cite{magalie}. Note that the $p_v$ metric is not meaningful for the ad-hoc method, as it requires applying the Viterbi algorithm, which is not possible there: the method does not generate probabilities of transition but instead directly predicts the time of the transition (we therefore report $p_v = p$ for this method). The $p$, $p_v$, and $p_t$ metrics show that these DL models outperform the ad-hoc method by a large margin (0.659 vs. 0.615 on the $p_t$ metric and 0.705 vs. 0.58 on the $p$ metric), confirming the interest and relevance of DL for the task of automatic morpho-kinetic parameter extraction (\cref{res_vs_prev}). \begin{table}[ht!b] \makebox[\textwidth][c]{ \begin{tabular}{ccccc}\toprule Model&$r$&$p$&$p_v$&$p_t$\\ \midrule ResNet&$0.961$& $0.663$&$0.701$&$0.371$\\ ResNet-LSTM&$\mathbf{0.977}$& $0.685$&$0.696$&$0.559$\\ ResNet-3D&$0.97$& $\mathbf{0.705}$&$\mathbf{0.735}$&$\mathbf{0.659}$\\ \hline Ad-Hoc &$0.973$& $0.580$&$0.580$&$0.615$\\ \bottomrule \end{tabular}} \caption{Performance of deep learning methods compared to our previous work \cite{magalie}. Top rows indicate the mean performances of the deep learning models across all splits; the bottom row indicates the mean performance of our previous ad-hoc method on the whole dataset. $r$ is correlation, $p$ is accuracy, $p_v$ is Viterbi accuracy and $p_t$ is temporal accuracy.
For each metric, bold indicates the best mean performance on the given metric.} \label{res_vs_prev} \end{table} \paragraph{Comparing our deep learning results with previous analyses} To check if our models perform similarly to those found in the literature, we evaluated two settings: the identification of phases from p2 to p5+ \cite{WeakSupMorphoKin,predSuccRate,cellCount,BlastCellCount,ICLRMorpho,DLDP} and blastocyst vs not-blastocyst \cite{inexpAutoDeepLearn}. When restricting the test images to early cleavages, we obtained accuracies $p$ similar to those found in the literature: 0.86, 0.88 and 0.88 for ResNet, ResNet-LSTM and ResNet-3D vs 0.82 to 0.87 in \cite{WeakSupMorphoKin,predSuccRate,cellCount,BlastCellCount,ICLRMorpho,DLDP} (\cref{res_vs_othersetups}). \begin{table}[ht!b] \makebox[\textwidth][c]{ \begin{tabular}{ccc}\toprule \multirow{3}{*}{Model}&Identification & Blastocyst\\ & of phases & vs \\ & from p2 to p5+ & Not-blastocyst \\ \midrule ResNet&0.86&0.98 \\ ResNet-LSTM&0.88&0.99 \\ ResNet-3D& 0.88& 0.99 \\ \bottomrule \end{tabular}} \caption{Evaluation of deep learning methods on the identification of phases from p2 to p5+ and blastocyst vs not-blastocyst. The metric used is the accuracy $p$.} \label{res_vs_othersetups} \end{table} To test the performance of blastocyst identification, we processed the predictions made during the first evaluation, merging phases from tPB2 to tM into the non-blastocyst class and phases from tB up to the end into the blastocyst class. We ignored the phase pSB as it is a transitional phase toward the blastocyst stage that belongs to neither of the two groups. We obtained accuracies of 0.98, 0.99 and 0.99 vs 0.96 in \cite{inexpAutoDeepLearn} on the blastocyst/non-blastocyst evaluation (\cref{res_vs_othersetups}). This shows that our database allows the models to be trained with 16 phases without being penalized when the setting is reduced to those found in the literature.
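The merging into a binary blastocyst / non-blastocyst decision can be sketched as follows (frame pairs whose ground truth is tSB are skipped, as in the text; counting a tSB prediction on a non-tSB frame as an error is our assumption):

```python
PHASES = ["tPB2", "tPNa", "tPNf", "t2", "t3", "t4", "t5", "t6", "t7",
          "t8", "t9+", "tM", "tSB", "tB", "tEB", "tHB"]
NON_BLAST = set(PHASES[:PHASES.index("tSB")])   # tPB2 ... tM
BLAST = set(PHASES[PHASES.index("tB"):])        # tB, tEB, tHB

def to_binary(phase):
    """1 for blastocyst phases, 0 for earlier phases, None for tSB."""
    if phase in BLAST:
        return 1
    if phase in NON_BLAST:
        return 0
    return None  # tSB: transitional phase, excluded from the evaluation

def blastocyst_accuracy(preds, labels):
    """Accuracy of the merged binary classification over frame pairs,
    skipping frames whose ground truth is tSB."""
    n_ok = n_tot = 0
    for pred, label in zip(preds, labels):
        target = to_binary(label)
        if target is None:
            continue
        n_tot += 1
        n_ok += int(to_binary(pred) == target)  # a tSB prediction counts as wrong
    return n_ok / n_tot
```

For instance, with one frame misclassified across the blastocyst boundary, one tSB ground-truth frame skipped, and two frames correct, the merged accuracy is $2/3$.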
\section{Discussion} In this study, we report the development and the respective performance of popular deep learning models for the annotation of a large dataset of time-lapse videos of embryo development, which we propose to make publicly available to facilitate and improve further research in the field. We chose three architectures for our experiments: first, the ResNet architecture, because of its simplicity and ease of training; second, the ResNet-LSTM and ResNet-3D architectures, because they can leverage information from the temporal context. We evaluated those architectures with cross-validation and 4 metrics, including 2 custom ones that we introduced. The good performance yielded by the models here indicates that the dataset is large enough to train a deep learning model. Also, by leveraging image sequence models like ResNet-3D or ResNet-LSTM, we could improve prediction quality, highlighting the relevance of proposing full videos instead of isolated images. The good performance may be surprising, as this dataset is composed of only 756 videos and video classification can be considered more data-demanding than image classification. However, video classification consists of passing a sequence of images to a model and training it to produce a single output, i.e. each video has a single label for all its images. Here, the models are also given a sequence of images, but they are trained to produce one output per image, i.e. to classify each image, and each image has its own label. This is why this problem can be considered image classification. Our dataset size (337k images) is consistent with the dataset sizes found in the literature, where the number of images ranges from 60k to 600k \cite{WeakSupMorphoKin,predSuccRate,cellCount,BlastCellCount,ICLRMorpho,DLDP}. Kanakasabapathy et al. reported that inter- and intra-operator variance were too high when more than 6 embryo developmental phases were used \cite{inexpAutoDeepLearn}.
In contrast, we report here for the first time the analysis of videos consisting of 16 precise developmental phases. Although some variance is also found in our work, we were able to precisely reconstitute the succession of all 16 morpho-kinetic events. Our work, therefore, goes beyond the simplified set of classes previously used, which, for example, only takes into account early phases up to p9+ and merges phases p4 to p9+ into one class p4+. We also observed that we were able to reproduce the performance obtained in the literature using similar setups, which shows that our method is at least as good as the previous analyses. Another interesting part of our work is that we implemented 2 improvements to the DL approach. Firstly, we performed 8-fold cross-validation, while previous studies used a single split. Secondly, we used both a 3D CNN architecture and a dedicated temporal model, which are relevant considering the temporal nature of the data, leading to improved performance. Although not evaluated up to now in the field of IVF and time-lapse videos of embryo development, the ResNet-3D architecture has been successfully used in several other medical domains such as oncology \cite{Yuan2020,10.1007/978-3-030-33676-9_26}, cardiology \cite{10.1117/1.JBO.25.9.095003}, Computerized Tomography (CT) imagery quality \cite{choi2019multidimensional}, and neuroimaging \cite{10.1007/978-3-030-00689-1_9,10.1117/12.2549758}. Finally, the deep learning architectures provide superior performance to our previous ad-hoc method \cite{magalie}, highlighting the relevance of a deep learning tailored dataset as a step towards machine learning assisted IVF. Moreover, 526 of the 756 videos proposed correspond to transferred embryos and can be accompanied by detailed outcome annotations, eventually allowing other researchers to use this benchmark to validate outcome prediction models.
In summary, our work will have a major impact on the implementation of DL in IVF, by providing a much-needed benchmark, ultimately benefiting infertile patients with improved clinical success rates. \section{Acknowledgments} The authors would like to thank the IVF staff at the University Hospital of Nantes, and more specifically Dr. Arnaud Reignier and Mrs. Jenna Lammers for the annotation of the database. This work was funded by ANR - Next grant DL4IVF (2017). None of the authors report having competing commercial interests concerning the submitted work. \bibliographystyle{ieee}
\section{Introduction}\label{sec:intro} \subsection{Background}\label{subsec:background} The enumerative geometry of nodal curves has, in recent years, grown into a rich and increasingly intriguing field of mathematics. While many of the questions which arise in this context belong naturally to the domain of classical algebraic geometry, there are also deep connections to more sophisticated, modern notions, such as mirror symmetry. In this paper, we consider the enumeration of nodal curves on surfaces, which we assume to be complex, projective (for natural reasons) and smooth and irreducible (for convenience). There have recently been important breakthroughs in this field. In particular, in 2010 Tzeng gave a first proof \cite{Tzeng} of important conjectures of G\"ottsche. More precisely, let $S$ denote a surface as specified above. If $\mathscr{L}$ is a line bundle on $S,$ one may consider the associated complete linear system of curves, given by $|\mathscr{L}|,$ that is, $\mathbb{P}(H^{0}(S,\mathscr{L})).$ Denote this projective space by $Y,$ let $N$ be the dimension of $Y,$ and let $r \leq N$ be a non-negative integer.
Denote by $N_{r}(S,\mathscr{L})$ the degree of the locus of $r$-nodal curves in $Y.$ Finally, let $(\partial,k,s,x)$ denote the four \textit{Chern numbers} of the polarized surface $(S,\mathscr{L}),$ that is, $\partial := \mathscr{L}^{2}, k = \mathscr{LK}_{S}, s = \mathscr{K}_{S}^{2}, x = c_{2}(S),$ where $\mathscr{K}_{S}$ denotes the canonical bundle on $S,$ and, for two line bundles $\mathscr{L}$ and $\mathscr{K},$ we let $\mathscr{LK} \in \mathbb{Z}$ denote the degree of $c_{1}(\mathscr{L})c_{1}(\mathscr{K}).$ The two primary conjectures of G\"ottsche (proved by Tzeng) are: \begin{conjecture}\label{conj:polynomiality} \emph{(\cite{Got}, Conjecture 2.1.)} There exist polynomials $Z_{r} \in \mathbb{Q}[t,u,v,w]$ of degree $r$ (for $r \geq 0$) such that whenever $\mathscr{L}$ is $(5r-1)$-very ample, $N_{r}(S,\mathscr{L})$ is given by $Z_{r}(\partial,k,s,x).$ \end{conjecture} \begin{conjecture}\label{conj:gen_got} \emph{(\cite{Got}, Conjecture 2.4.)} Let $(S,\mathscr{L})$ be fixed, then the generating function of the (virtual) curve numbers $Z_{r}(\partial,k,s,x)$ is \begin{displaymath} \sum_{r \geq 0} Z_{r}(\partial,k,s,x)(DG_{2}(\tau))^{r} = \frac{(DG_{2}(\tau)/q)^{\chi(\mathscr{L})}B_{1}(q)^{\mathscr{K}_{S}^{2}}B_{2}(q)^{\mathscr{L} \mathscr{K}_{S}}}{(\Delta(\tau)D^{2}G_{2}(\tau)/q^{2})^{\chi(\mathscr{O}_{S})/2}}. \end{displaymath} \end{conjecture} Here, $G_{2}(\tau)$ is the second Eisenstein series and $\Delta(\tau)$ is the Ramanujan discriminant modular form. Let $q := e^{2\pi i\tau},$ then \begin{eqnarray*} G_{2}(\tau) & = & -1/24 + \sum_{n=1}^{\infty}\left( \sum_{d|n}d \right) q^{n}, \\ \Delta(\tau) & = & q \prod_{m > 0} (1-q^{m})^{24}. 
\end{eqnarray*} $D$ denotes the differential operator $q\frac{d}{dq},$ and finally $B_{1}(q)$ and $B_{2}(q)$ are (currently unknown) rational power series in $q.$ The latter result will be referred to as the \textit{G\"ottsche--Yau--Zaslow formula.} It involves five universal power series, three of which are quasi-modular forms, while the remaining two, $B_{1}(q)$ and $B_{2}(q),$ are not yet identified. However, using the recursive formula of Caporaso--Harris \cite{CH}, G\"ottsche computed the terms of these power series up to degree 28 \cite[Remark 2.5]{Got}. In \cite{KST}, Kool, Shende and Thomas published a shorter proof of the first conjecture mentioned above. They also refined the result, showing that it is sufficient for $\mathscr{L}$ to be $r$-very ample. On the other hand, in \cite[Theorem 2.1]{Qvi}, we show that a consequence of the G\"ottsche--Yau--Zaslow formula is that the node polynomials $Z_{r}(\partial,k,s,x)$ (using terminology introduced by Kleiman and Piene) are of a very particular form: \begin{theorem}\label{thm:main} \emph{(\cite{Qvi}, Theorem 2.1.)} For all $i \geq 1$ there exists a linear form $a_i$ in four variables, with coefficients which are integers, such that for all $r \geq 0,$ \begin{displaymath} Z_{r}(\partial,k,s,x) = \frac{P_{r}(a_{1}(\partial,k,s,x), \ldots, a_{r}(\partial,k,s,x))}{r!}, \end{displaymath} with $P_{r}$ the $r$th complete exponential Bell polynomial. \end{theorem} This theorem generalizes the structural part of a theorem by Kleiman--Piene, \cite[Theorem 1.1]{KP1}, concerning node polynomials for $r \leq 8$ nodes. 
It does not, however, give the numerical expressions of the polynomials $a_i,$ of which Kleiman--Piene computed the first eight: \footnotesize \begin{eqnarray*} a_{1} & = & 3\partial + 2k + x \\ a_{2} & = & -42\partial - 39k - 6s - 7x \\ a_{3} & = & 1380\partial + 1576k + 376s + 138x \\ a_{4} & = & -72360\partial - 95670k - 28842s - 3888x \\ a_{5} & = & 5225472\partial + 7725168k + 2723400s + 84384x \\ a_{6} & = & -481239360\partial - 778065120k - 308078520s + 7918560x\\ a_{7} & = & 53917151040\partial + 93895251840k + 40747613760s - 2465471520x\\ a_{8} & = & -7118400139200\partial - 13206119880240k - 6179605765200s + 516524964480x. \end{eqnarray*} \normalsize The aim of this paper is to provide an explicit construction of the linear polynomials $a_i,$ with methods from intersection theory. As the direct computation of the node polynomials $Z_{r}$ becomes increasingly difficult for high values of $r,$ our emphasis is on the structure of these polynomials, which do indeed seem to have some striking combinatorial properties. Using the principle of inclusion-exclusion combined with excess intersection theory, multiple-point formulas, and finally residual intersection theory, we are able to provide a natural decomposition of the polynomials $a_i$ into a sum of three terms with distinct geometric interpretations. Two of these terms are computable with the methods at hand. In addition, we point out the connections between the polynomials $a_i$ and the multisingularity (Thom) polynomials appearing in \cite{Kaz} by Kazarian. \subsection{Structure of this article}\label{subsec:structure} In Section \ref{sec:basic_setup} we describe the schemes which will be used to construct the node polynomials from an intersection theoretical viewpoint.
Section \ref{sec:shape} provides an ad hoc definition of integers $a_{i}(S,\mathscr{L}),$ depending on $S$ and $\mathscr{L},$ and associated classes $a_{i}(S,\mathscr{L})H^{i}$ in the Chow ring of the linear system of curves ($H$ being the class of a hyperplane). It then presents the node polynomials $Z_r$ as Bell polynomials evaluated in the integers $a_i(S,\mathscr{L}), 1 \leq i \leq r,$ whose definition incorporates the combinatorial factor $(-1)^{i-1}(i-1)!.$ Sections \ref{sec:equivalences} and \ref{sec:residual} discuss the various contributions to the integers $a_i(S,\mathscr{L})$ coming from different distinguished varieties of the intersection product that we study, and establish them as being the evaluation in the Chern numbers of $(S,\mathscr{L})$ of universally defined linear forms with integer coefficients. To avoid excessive notation, these forms are denoted by $a_i.$ \subsection{Conventions}\label{subsec:conventions} For a class $\alpha \in A^{k}(\mathbb{P}^{N}),$ we denote by $\int \alpha$ the degree of the class $\alpha \cdot H^{N-k} \in A^{N}(\mathbb{P}^{N}),$ with $H$ the class of a hyperplane. If $Y$ is a $\mathbb{C}$-scheme and $F$ is a scheme over $Y,$ we denote by $F^{\times r}$ the $r$-fold fiber product of $F$ with itself over $Y.$ \subsection{Acknowledgements}\label{subsec:acknow} I am greatly indebted to my advisor, Ragni Piene, who presented the initial idea to me and has steadily guided me towards the present article, answering all my questions with never-failing patience. An important part of the research which led to this paper was done while the author was a visiting student at MIT in the spring of 2012. It is a great pleasure to thank the Department of Mathematics and Steven Kleiman for hosting me. I would also like to thank Paolo Aluffi for an interesting and worthwhile discussion.
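As a concrete illustration of the shape asserted in Theorem \ref{thm:main}, the Bell-polynomial expressions can be checked numerically against classical counts of nodal plane curves. The following sketch (assuming Python with the \verb+sympy+ library; all variable names are ours) evaluates $Z_{1} = P_{1}(a_{1})/1!$ and $Z_{2} = P_{2}(a_{1},a_{2})/2!$ with the Kleiman--Piene forms $a_{1}, a_{2}$ quoted above, specialized to $S = \mathbb{P}^{2}$ and $\mathscr{L} = \mathscr{O}(d).$

```python
import sympy as sp

delta, k, s, x = sp.symbols('delta k s x')

# Kleiman--Piene linear forms in the Chern numbers (delta, k, s, x)
a1 = 3*delta + 2*k + x
a2 = -42*delta - 39*k - 6*s - 7*x

# Node polynomials via complete Bell polynomials: Z_r = P_r(a_1,...,a_r)/r!
Z1 = a1                          # P_1(a1)/1!
Z2 = sp.expand((a1**2 + a2)/2)   # P_2(a1, a2)/2!

# Specialize to S = P^2, L = O(d): delta = d^2, k = -3d, s = 9, x = 3
d = sp.symbols('d')
subst = {delta: d**2, k: -3*d, s: sp.Integer(9), x: sp.Integer(3)}
N1 = sp.expand(Z1.subs(subst))   # equals 3(d-1)^2, the classical 1-nodal count
N2 = sp.expand(Z2.subs(subst))
```

For cubics ($d = 3$) this gives $N_{1} = 12$ and $N_{2} = 21,$ and for quartics $N_{2} = 225,$ in agreement with the classical Severi degrees (within the range of validity of the very-ampleness hypothesis).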
\section{Intersection theoretical setup}\label{sec:basic_setup} Let $S$ denote a smooth, irreducible projective surface over $\mathbb{C},$ and let $\mathscr{L}$ be a line bundle on $S;$ its global sections correspond to curves on $S,$ so we have a natural parameter space for curves, namely the projective space \begin{equation} Y := \mathbb{P}(H^{0}(S,\mathscr{L})). \end{equation} Let $N := \textnormal{dim }Y$ and set $F := S \times Y$ with projection $\gamma_{1}$ to $Y.$ Consider the relative effective divisor $\mathscr{D}$ in $F$ which is the total space of the complete linear system $|\mathscr{L}|;$ set-theoretically, it consists of pairs $(\kappa,y)$ such that $\kappa$ is a point on the curve $D_{y} \subset S$ corresponding to $y \in Y.$ Let $X \subset \mathscr{D}$ be the \textit{critical locus}, i.e., the scheme-theoretic closure of the set of pairs $(\kappa,y) \in S \times Y$ such that $\kappa$ is a singularity on $D_{y}.$ We consider $X$ as a scheme over $Y$ through the composition $f: X \stackrel{\iota}{\hookrightarrow} S \times Y \stackrel{\gamma_{1}}{\rightarrow} Y.$ Let $\widetilde{\mathscr{L}}$ denote $\mathscr{L} \boxtimes \mathscr{O}_{Y}(1),$ an invertible sheaf on $F.$ Recall that the associated sheaf of first order principal parts is defined as \begin{equation} \mathscr{P}^{1}_{F/Y}(\widetilde{\mathscr{L}}) := p_{2\ast}\Bigl(p_{1}^{\ast}\widetilde{\mathscr{L}}/(\mathscr{I}^{2} \cdot p_{1}^{\ast}\widetilde{\mathscr{L}})\Bigr), \end{equation} where $p_{j}: F \times_{Y} F \rightarrow F$ are the projections and $\mathscr{I}$ is the ideal sheaf of the diagonal $\Delta_{F}$ in $F \times_{Y} F.$ This sheaf fits into the vertical exact sequence below: \[ \xymatrix { & 0 \ar[d] \\ & \Omega^{1}_{F/Y} \otimes \widetilde{\mathscr{L}} \ar[d] \\ \mathscr{O}_{F} \ar[dr]^{z} \ar[r]^{z'} & \mathscr{P}^{1}_{F/Y}(\widetilde{\mathscr{L}}) \ar[d] \\ & \widetilde{\mathscr{L}} \ar[d] \\ & 0 \\ } \] Scheme-theoretically, $\mathscr{D}$ is defined as the zero scheme of a 
section $z$ of the invertible sheaf $\widetilde{\mathscr{L}},$ since $\mathscr{O}_{F}(\mathscr{D}) = \mathscr{L} \boxtimes \mathscr{O}_{Y}(1).$ The section $z$ induces a section $z'$ of $\mathscr{P}^{1}_{F/Y}(\widetilde{\mathscr{L}}).$ Scheme-theoretically, $X$ is the zero scheme of $z'.$ The vertical exact sequence above shows that $\mathscr{P}^{1}_{F/Y}(\widetilde{\mathscr{L}})$ is locally free of rank 3, so every component of $X$ has codimension at most 3 in $F$. In case of equality for all components, the class of $X,$ which we denote by $\xi := [X] \in A^{\ast}(F),$ is given by $c_{3}(\mathscr{P}^{1}_{F/Y}(\widetilde{\mathscr{L}})).$ \begin{proposition}\label{prop:isom} There is an isomorphism of $\mathscr{O}_{X}$-modules between the $Y$-relative normal bundle of $X$ in $F,$ i.e., $N_{X}F/Y,$ and (the restriction to $X$ of) the sheaf $\mathscr{P}^{1}_{F/Y}(\widetilde{\mathscr{L}}).$ \end{proposition} \begin{proof} Let $\mathscr{I}$ denote the ideal of $X$ in $F,$ then $\mathscr{I}_{|X} \cong \mathscr{I}/\mathscr{I}^{2} \cong (N_{X}F/Y)^{\vee}.$ On the other hand, $X$ is defined by the section $z': \mathscr{O}_{F} \longrightarrow \mathscr{P}^{1}_{F/Y}(\widetilde{\mathscr{L}}).$ Taking the duals, we have a morphism \begin{displaymath} \mathscr{P}^{1}_{F/Y}(\widetilde{\mathscr{L}})^{\vee} \longrightarrow \mathscr{O}_{F}^{\vee} \cong \mathscr{O}_{F} \end{displaymath} whose image is the ideal sheaf $\mathscr{I}.$ Restricting to $X,$ we get a surjection $$\mathscr{P}^{1}_{F/Y}(\widetilde{\mathscr{L}})_{|X}^{\vee} \longrightarrow \mathscr{I}/\mathscr{I}^{2},$$ which is, in fact, an isomorphism since the sheaves have the same rank. The result follows. 
\end{proof} \begin{example}\label{ex:critical_locus_P2} Consider $S = \mathbb{P}^{2}$ and the family of curves of degree $d,$ i.e., sections of $\mathscr{O}(d).$ Thus $Y = \mathbb{P}^{d(d+3)/2}.$ Let $\varphi \in \mathbb{C}[x_{0},x_{1},x_{2}, c_{ijk} | i + j + k = d]$ be the homogeneous polynomial of degree $d$ in $x_{0},x_{1}$ and $x_{2},$ and of degree 1 in the $c_{ijk}:$ \begin{displaymath} \varphi := \sum_{i + j + k = d} c_{ijk}x_{0}^{i}x_{1}^{j}x_{2}^{k}. \end{displaymath} Then $\mathscr{D} = Z(\varphi)$ is a hypersurface in $S \times Y,$ whereas $X,$ which is the locus of singular curves with a marked singularity, appears, by the Jacobi criterion, as the complete intersection of the three hypersurfaces in $F$ determined by the vanishing of the three partial derivatives $\frac{\partial \varphi}{\partial x_{0}}, \frac{\partial \varphi}{\partial x_{1}}$ and $\frac{\partial \varphi}{\partial x_{2}}.$ As observed in \cite[\S 1.1]{Alu2}, it follows that $X$ is a $\mathbb{P}^{N-3}$-bundle over $\mathbb{P}^{2};$ in particular, it is smooth. \hfill $\blacksquare$ \end{example} Above, we defined $\xi = [X] \in A^{\ast}(F).$ Pushing this class down to $Y$ by $\gamma_{1}$ yields an enumerative cycle class, in the following sense: $Y$ being projective of dimension $N$, its Chow ring is simply $A^{\ast}(Y) = \mathbb{Z}[H]/H^{N+1},$ with $H$ the class of a hyperplane. Therefore, $\gamma_{1\ast}\xi = a_{1}(S,\mathscr{L})H$ for an integer $a_{1}(S,\mathscr{L}),$ since dimension is preserved by pushdowns. The integer $a_{1}(S,\mathscr{L})$ is precisely the number $N_{1}(S,\mathscr{L})$ of 1-nodal curves in the linear system $|\mathscr{L}|$ through $N-1$ points in general position on $S.$ \begin{proposition}\label{prop:a1_linear} The integer $a_{1}(S,\mathscr{L})$ is given by evaluating a linear polynomial in four variables in the four Chern numbers $(\partial,k,s,x)$ of $(S,\mathscr{L}).$ More precisely, we have \begin{equation} a_{1}(S,\mathscr{L}) = 3\partial + 2k + x. 
\end{equation} \end{proposition} \begin{proof} We have $a_{1}(S,\mathscr{L}) = \gamma_{1\ast}\xi,$ with $\xi \in A^{\ast}(F)$ the class of $X,$ i.e., $c_{3}(\mathscr{P}_{F/Y}^{1}(\widetilde{\mathscr{L}})).$ Hence, putting $v:=c_{1}(\widetilde{\mathscr{L}})$ and $w_{j} = c_{j}(\Omega^{1}_{F/Y})$ for $j=1,2,$ the exact sequence \begin{displaymath} 0 \rightarrow \Omega^{1}_{F/Y} \otimes \widetilde{\mathscr{L}} \rightarrow \mathscr{P}^{1}_{F/Y}(\widetilde{\mathscr{L}}) \rightarrow \widetilde{\mathscr{L}} \rightarrow 0 \end{displaymath} yields $\xi = v^{3} + v^{2}w_{1} + vw_{2},$ which is a class of codimension 3 on $F.$ Let $\nu$ and $\gamma_{1}$ be the projections from $F = S \times Y$ to $S$ and $Y,$ respectively. Let $L := c_{1}(\mathscr{L}),$ $K: = c_{1}(\mathscr{K}_{S})$ and $H$ be the class of a hyperplane in $Y.$ For simplicity, let $L, K$ and $H$ also denote their own pullbacks (via $\nu$ and $\gamma_{1}$) to $F.$ Then $v = L + H$ and $w_{j} = c_{j}(\Omega^{1}_{F/Y}) = c_{j}(\nu^{\ast}\Omega^{1}_{S}) = \nu^{\ast}c_{j}(\Omega^{1}_{S}).$ We therefore get $w_{1} = \nu^{\ast}c_{1}(\Omega^{1}_{S}) = \nu^{\ast}c_{1}(\textnormal{det }\Omega^{1}_{S}) = \nu^{\ast}K,$ whereas $w_{2} = \nu^{\ast}c_{2}(\Omega^{1}_{S}) = \nu^{\ast}c_{2}(S).$ This gives us \begin{equation} \xi = (L+H)^{3} + K(L+H)^{2} + x(L+H). \end{equation} This can be seen as a polynomial in $H,$ and when pushing down to $Y,$ only the terms of first order in $H$ survive, so $a_{1}(S,\mathscr{L})H = \gamma_{1\ast}\xi = (3L^{2})H + (2LK)H + xH.$ Hence we conclude that $a_{1}(S,\mathscr{L}) = 3\partial + 2k + x.$ \end{proof} A natural candidate for a scheme parametrizing curves with $r$ marked nodes would be the fibered product $X \times_{Y} \ldots \times_{Y} X$ with $r$ factors (geometrically, the fiber product ensures that we get $r$ marked nodes on the same curve, represented by a point in $Y$). 
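Before passing to several nodes, we note that the pushdown computation above is mechanical and easily reproduced by machine. A minimal sketch (assuming Python with \verb+sympy+; the symbol names are ours) expands $\xi = (L+H)^{3} + K(L+H)^{2} + x(L+H)$ and extracts the coefficient of $H$:

```python
import sympy as sp

# L = c1(L), K = c1(K_S); x stands for c2(S), H is the hyperplane class on Y
L, K, H, x = sp.symbols('L K H x')

# xi = c3(P^1_{F/Y}(L~)) = v^3 + v^2*w1 + v*w2, with v = L + H, w1 = K, w2 = x
v = L + H
xi = sp.expand(v**3 + v**2*K + v*x)

# Pushing down by gamma_1 keeps the part linear in H; the surviving
# coefficient has degree 2 on the surface and evaluates in Chern numbers
a1_class = xi.coeff(H, 1)
delta, k = sp.symbols('delta k')             # delta = L^2, k = LK
a1 = a1_class.subs({L**2: delta, L*K: k})    # expect 3*delta + 2*k + x
```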
There are, however, two major problems, both of which appear already for $r = 2.$ Several loci appear in the scheme $X \times_{Y} X$: \begin{enumerate} \item a locus parametrizing binodal curves with marked nodes; \item the diagonal $\Delta_{X},$ parametrizing nodal curves with a marked node; \item the cuspidal locus, parametrizing cuspidal curves with a marked cusp. \end{enumerate} The diagonal is an excess locus; its dimension is $N-1,$ while the expected dimension of $X \times_{Y} X$ is $N-2.$ The cuspidal locus has the correct dimension, and is embedded in the diagonal (since there is only one singularity). Consequently, if we remove the intersection theoretical contribution of $\Delta_{X}$ to the intersection product $p_{1}^{\ast}\xi \cdot p_{2}^{\ast}\xi,$ we get (up to a multiplicative factor of 2, due to the intrinsic symmetry of $X \times_{Y} X$) the number of 2-nodal curves plus the number of cuspidal curves in $|\mathscr{L}|$. Subtracting this last number and dividing by 2 yields the number of binodal curves in $|\mathscr{L}|.$ Intersection theoretically, the procedure is to intersect the pullbacks $p_{i}^{\ast}\xi, i = 1,2,$ with $p_{i}$ the projections $F \times_{Y} F \rightarrow F,$ then remove a certain excess class $B_{2}$ which represents the proper contribution of the diagonal and the contribution of the embedded cuspidal locus to the intersection product. We then wish to find the pushdown to $Y$ of this rational equivalence class, i.e., the class \begin{displaymath} \gamma_{2\ast} \bigl((p_{1}^{\ast}\xi \cdot p_{2}^{\ast}\xi) - B_{2}\bigr) \in A^{2}(Y), \end{displaymath} where $\gamma_{2}: F \times_{Y} F \rightarrow Y$ is the natural projection. It should be obvious that for higher values of $r,$ the problem of the diagonals becomes more and more intricate.
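The combinatorial side of this bookkeeping is governed by the lattice of set partitions, taken up in Section \ref{sec:shape}. As a preview, the following sketch (plain Python; the function names are ours, chosen for illustration) enumerates the partitions of $[r]$ together with the inclusion--exclusion coefficients $\prod_{B}(-1)^{|B|-1}(|B|-1)!,$ one factor per block. A standard consistency check is that these coefficients sum to zero over all of $\Pi_{r}$ for $r \geq 2,$ as the values of the M\"obius function $\mu(\widehat{0},\cdot)$ on a lattice must.

```python
from math import factorial

def set_partitions(elems):
    """Yield all partitions of the list elems as lists of blocks."""
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for part in set_partitions(rest):
        for i in range(len(part)):          # put `first` into an existing block
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        yield [[first]] + part              # or start a new block

def coeff(part):
    """One factor (-1)^(|B|-1) * (|B|-1)! per block B of the partition."""
    c = 1
    for block in part:
        c *= (-1) ** (len(block) - 1) * factorial(len(block) - 1)
    return c

# Bell numbers B_1..B_5 count the partitions; the coefficient sums vanish
bell = [len(list(set_partitions(list(range(r))))) for r in range(1, 6)]
sums = [sum(coeff(p) for p in set_partitions(list(range(r)))) for r in range(2, 6)]
```

The single-block partition of $[r]$ receives the coefficient $(-1)^{r-1}(r-1)!,$ which is precisely the factor appearing in the definition of the integers $a_{i}(S,\mathscr{L})$ below.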
\begin{definition}\label{def:curly_bracket} For $F$ a smooth scheme of dimension $n,$ and $\alpha \in A^{\ast}(F),$ we let $\{\alpha\}^{k}$ denote the $k$-codimensional part of $\alpha,$ an element in $A^{k}(F).$ Similarly, we let $\{\alpha\}_{k}$ denote the $k$-dimensional part, an element in $A_{k}(F).$ \hfill $\blacksquare$ \end{definition} \begin{example}\label{ex:two_nodes} We will illustrate in more detail the enumeration of 2-nodal curves in the above setting. The idea is to consider the intersection class $p_{1}^{\ast} \xi \cdot p_{2}^{\ast}\xi,$ and subtract the excess coming from the diagonal and the embedded cuspidal locus, supported on the diagonal. Cuspidal curves in $|\mathscr{L}|$ are enumerated by a polynomial which is provided in, for example, Kazarian's paper \cite[Example 10.2]{Kaz}. In his notation, this is $S_{A_{2}} = 12\partial + 12k + 2s + 2x.$ The diagonal $\Delta_{X}$ being, set-theoretically, a connected component of the intersection $p_{1}^{-1}(X) \cap p_{2}^{-1}(X) \cong X^{\times 2},$ we can use Proposition 9.1.1 in \cite{Ful} to compute its proper contribution to the intersection product.
In our case the computation takes place on $F^{\times 2},$ and we get a class in $A_{m}(F^{\times 2})$ where $m = \textnormal{dim}(F^{\times 2}) - \sum_{i=1}^{2} \textnormal{codim}(p_{i}^{-1}X, F^{\times 2}) = 4 + \textnormal{dim }Y - 2 \cdot 3 = \textnormal{dim }Y - 2,$ namely \begin{equation} \left\{c\Big(\left(p_{1}^{\ast}N_{X}F\right) |\Delta_{X}\Big) \cdot c\Big(\left(p_{2}^{\ast}N_{X}F\right) |\Delta_{X}\Big) \cdot c\left( N_{\Delta_{X}}F^{\times 2} \right)^{-1} \cap [\Delta_{X}] \right\}_{N-2}, \end{equation} representing the contribution of the diagonal itself to $p_{1}^{\ast}\xi \cdot p_{2}^{\ast}\xi.$ We want to find the pushdown of this class to $Y$ through $\gamma_{2} = \gamma_{1} \circ p_{1}.$ Since $\Delta_{X} \hookrightarrow \Delta_{F} \hookrightarrow F^{\times 2}$ are two regular embeddings, the normal bundle of the first being $N_{X}F \cong \mathscr{P}^{1}_{F/Y}(\widetilde{\mathscr{L}})$ and the one of the second being the pullback of $T_{F/Y} \cong T_{S},$ the class introduced above is equal to \begin{equation} \left\{ c\left(\mathscr{P}^{1}_{F/Y}(\widetilde{\mathscr{L}})\right) \cdot c(T_{F/Y})^{-1} \cap [X] \right\}_{N-2}. 
\end{equation} Recall our notations $L := c_{1}(\mathscr{L}),$ $K := c_{1}(\mathscr{K}_{S}),$ and $H$ is the class of a hyperplane in $Y.$ We also use $v = c_{1}(\widetilde{\mathscr{L}}) = L+H$ and $w_{j} = c_{j}(\Omega^{1}_{F/Y}),$ so that $w_{1} = \nu^{\ast}K$ and $w_{2} = \nu^{\ast}x,$ where $\nu$ is the projection from $F$ to $S.$ Now, we have $c_{1}(T_{F/Y}) = -w_{1}$ and $c_{2}(T_{F/Y})=w_{2}$ since $T_{F/Y}^{\vee} \cong \Omega^{1}_{F/Y}.$ Thus, $c(T_{F/Y})^{-1} = 1+w_{1}+(w_{1}^{2}-w_{2}).$ On the other hand, the exact sequence \begin{equation} 0 \rightarrow \Omega^{1}_{F/Y} \otimes \widetilde{\mathscr{L}} \rightarrow \mathscr{P}^{1}_{F/Y}(\widetilde{\mathscr{L}}) \rightarrow \widetilde{\mathscr{L}} \rightarrow 0 \end{equation} yields, by the Whitney sum formula, $c(\mathscr{P}^{1}_{F/Y}(\widetilde{\mathscr{L}})) = c(\Omega^{1}_{F/Y} \otimes \widetilde{\mathscr{L}}) \cdot c(\widetilde{\mathscr{L}}).$ Thus, considering Chern polynomials: \begin{eqnarray*} c_{t}(\Omega^{1}_{F/Y} \otimes \widetilde{\mathscr{L}}) & = & \sum_{i=0}^{2} t^{i} (1 + tc_{1}(\widetilde{\mathscr{L}}))^{2-i}c_{i}(\Omega^{1}_{F/Y}) \\ & = & \bigl(1 + t(L+H)\bigr)^{2} + t\bigl(1+t(L+H)\bigr)w_{1} + t^{2}w_{2}. \end{eqnarray*} Also, we have $[X] = \xi = (L+H)^{3} + K(L+H)^{2} + x(L+H).$ What we want is the degree 2 part of the coefficient of $H^{2}$ in the expansion of \begin{equation} c(\Omega^{1}_{F/Y} \otimes \widetilde{\mathscr{L}}) \cdot c(\widetilde{\mathscr{L}}) \cdot c(T_{F/Y})^{-1} \cap [X], \end{equation} when considering $K$ and $L$ to have degree 1 and $x$ to have degree 2. A simple computation in, for instance, \verb+Maple+, yields the following polynomial: \begin{equation} Q_{2} := 18\partial + 15k + 2s + 3x. \end{equation} We see that $Q_{2} + 2S_{A_{2}} = 18\partial + 15k + 2s + 3x + 2 \cdot (12\partial + 12k + 2s +2x) = 42\partial+39k+6s+7x,$ which is precisely the polynomial $-a_{2}(\partial,k,s,x)$ of Kleiman--Piene. 
On the other hand, the pushdown to $Y$ of the intersection product $p_{1}^{\ast}\xi \cdot p_{2}^{\ast}\xi$ is equal to $a_{1}^{2}H^{2}$ where $a_{1}H = \gamma_{1\ast}\xi = (3\partial+2k+x)H.$ In total, the pushdown of the class representing honest 2-nodal curves is $(a_{1}^{2} + a_{2})H^{2}.$ Divide this by 2 to avoid double counting due to permutations of the nodes; the result is, up to a factor $H^{2},$ the number of 2-nodal curves through $N-2$ points in general position on $S.$ \hfill $\blacksquare$ \end{example} \section{Shape of node polynomials}\label{sec:shape} For greater values of $r$ there are several diagonals which appear, as well as their intersections, which we refer to as \textit{polydiagonals}. There is a bijection between polydiagonals in $X^{\times r}$ and non-singleton partitions $\pi$ of $[r] := \{1,\ldots, r\}.$ Indeed, a partition of $[r]$ is a set of disjoint, non-empty subsets of $[r]$ whose union is equal to $[r].$ These subsets are called \textit{blocks} of the partition. Denote by $\Pi_{r}$ the set of all partitions of $[r],$ and by $\Pi_{r}^{\circ}$ the set of non-singleton partitions, the singleton partition being $\widehat{0}_{r} := 1|2|\ldots |r,$ i.e., the only partition with $r$ blocks. Then $\pi \in \Pi_{r}^{\circ}$ corresponds to the polydiagonal \begin{equation} \Delta^{(r)}_{\pi} := \{(x_{1}, \ldots, x_{r}) \in X^{\times r}, x_{i} = x_{j} \textnormal{ if } i \textnormal{ and } j \textnormal{ are in the same block of } \pi \} \end{equation} in $X^{\times r}.$ We denote by $\widehat{1}_{r}$ the single-block partition $12 \ldots r.$ If there is no room for confusion, we use $\widehat{0}$ and $\widehat{1}$ instead of $\widehat{0}_{r}$ and $\widehat{1}_{r}.$ It is a well-known fact that imposing $r$ nodes on the curves in a system is a codimension $r$ requirement.
Hence the dimension of the \textit{configuration space} $\mathbb{F}(X,r)$ (i.e., the complement of the diagonals in $X^{\times r}$) is equal to $N - r,$ where $N = \textnormal{dim }Y.$ The union of the scheme-theoretic polydiagonals, however, is a connected component of $X^{\times r}$ of dimension $N-1,$ since it contains the small diagonal $\Delta^{(r)}_{12\ldots r} \cong X.$ Letting $p_{j}: F^{\times r} \rightarrow F, 1 \leq j \leq r,$ denote the projections, we make the following \textit{ad hoc} definition, whose importance will be made clear in the following: \begin{definition}\label{def:equiv} Let $r \geq 1.$ For each $\hat{0} \neq \pi \in \Pi_{r},$ we let $B^{(r)}_{\pi} \in A_{\ast}(\Delta^{(r)}_{\pi})$ denote the equivalence (in the sense of \cite[Definition 6.1.2]{Ful}) of the closed subset $\Delta^{(r)}_{\pi}$ for the intersection product $p_{1}^{\ast}\xi \cdot \ldots \cdot p_{r}^{\ast}\xi.$ Also, we let $B^{(r)}_{\hat{0}} \in A_{\ast}(X^{\times r})$ denote the intersection product itself. Furthermore, define \begin{displaymath} a_{i}(S,\mathscr{L}) := (-1)^{i-1}(i-1)!\int_{Y} f_{\ast}B^{(i)}_{1\ldots i} \in \mathbb{Z}, \end{displaymath} where $f: X \rightarrow Y$ is the composition of the embedding $\iota: X \hookrightarrow F$ and the projection $F = S \times Y \rightarrow Y.$ \hfill $\blacksquare$ \end{definition} \begin{remark}\label{rem:all_dist} We would like to emphasize the fact that we are not simply considering the proper contribution of $\Delta^{(r)}_{\pi}$ to the intersection product $p_{1}^{\ast}\xi \cdot \ldots \cdot p_{r}^{\ast}\xi,$ but the contribution of all distinguished varieties whose support is contained in this polydiagonal. \end{remark} \begin{definition}\label{def:comp_bell} The \textit{complete (exponential) Bell polynomials} are defined by the formal identity in $t,$ \begin{equation}\label{eqn:formal_id} \sum_{r \geq 0}P_{r}t^{r}/r! = \exp \left(\sum_{l \geq 1} x_{l}t^{l}/l! \right). 
\end{equation} \hfill $\blacksquare$ \end{definition} \begin{example}\label{ex:bell_polys} The first four Bell polynomials are easily seen to be: \begin{eqnarray*} P_1(x_1) & = & x_1 \\ P_2(x_1,x_2) & = & x_1^2 + x_2 \\ P_3(x_1,x_2,x_3) & = & x_1^3 + 3x_1x_2 + x_3 \\ P_4(x_1,x_2,x_3,x_4) & = & x_1^4 + 6x_1^2x_2 + 4x_1x_3 + 3x_2^2 + x_4 \end{eqnarray*} \hfill $\blacksquare$ \end{example} One can also define partial Bell polynomials: \begin{definition}\label{def:part_bell} The \textit{partial Bell polynomials} are defined for all $n \geq 1$ and all $1 \leq l \leq n,$ by the following formula: \footnotesize \begin{displaymath} P_{n,l}(x_{1}, x_{2}, \ldots, x_{n-l+1}) := \sum \frac{n!}{j_{1}!j_{2}! \ldots j_{n-l+1}!} \left(\frac{x_{1}}{1!}\right)^{j_{1}}\left(\frac{x_{2}}{2!}\right)^{j_{2}} \ldots \left(\frac{x_{n-l+1}}{(n-l+1)!}\right)^{j_{n-l+1}}, \end{displaymath} \normalsize where we sum over all tuples of integers $j_{1}, \ldots, j_{n-l+1} \geq 0$ such that $j_{1} + \ldots + j_{n-l+1} = l$ and $j_{1} + 2j_{2} + \ldots + (n-l+1)j_{n-l+1} = n.$ \hfill $\blacksquare$ \end{definition} Combinatorially, the coefficient in front of $x_{1}^{j_{1}}x_{2}^{j_{2}} \ldots x_{n-l+1}^{j_{n-l+1}}$ is interpreted as the number of ways to partition a set of $n$ elements into $l$ blocks where $j_{1}$ blocks have 1 element, $j_{2}$ have 2 elements, etc.; blocks of equal size are not ordered among themselves. The complete Bell polynomials are the sum of the partial ones: \begin{equation} P_{n}(x_{1},\ldots,x_{n}) = \sum_{l=1}^{n}P_{n,l}(x_1,x_2,\ldots,x_{n-l+1}). \end{equation} The object of this section is to show the following theorem: \begin{theorem}\label{thm:shape} Let $(S,\mathscr{L})$ be a polarized smooth, irreducible projective surface over $\mathbb{C}$ and let $r \geq 1$ be an integer.
Then, provided $\mathscr{L}$ is $r$-very ample, the number $N_{r}(S,\mathscr{L})$ of $r$-nodal curves in the linear system $|\mathscr{L}|$ is given by \begin{displaymath} N_{r}(S,\mathscr{L}) = \frac{P_{r}(a_{1}(S,\mathscr{L}), \ldots, a_{r}(S,\mathscr{L}))}{r!}, \end{displaymath} where $P_{r}$ is the $r$th complete Bell polynomial. \end{theorem} Consider the fiber product $F^{\times r} = F \times_{Y} \ldots \times_{Y} F,$ with $r$ projections $p_{j}$ to $F.$ The $r$-fold fiber product $X \times_{Y} \ldots \times_{Y} X$ is equal to $p_{1}^{-1}(X) \cap \ldots \cap p_{r}^{-1}(X).$ As a starting point for enumerating $r$-nodal curves in $|\mathscr{L}|,$ one could consider the intersection product \begin{displaymath} p_{1}^{\ast}\xi \cdot \ldots \cdot p_{r}^{\ast}\xi \in A^{\ast}(F^{\times r}). \end{displaymath} However, the polydiagonals give an excess contribution to this intersection, which we want to remove. This motivates the following definition: \begin{definition}\label{def:class_I} We denote by $I_{r}$ the intersection class $p_{1}^{\ast}\xi \cdot \ldots \cdot p_{r}^{\ast}\xi$ minus the equivalence of the union of the polydiagonals. More precisely, recall that $\Pi_{r}^{\circ}$ denotes the set of partitions of $[r],$ $1|2|\ldots|r$ excluded; then \begin{equation} I_{r} := p_{1}^{\ast}\xi \cdot \ldots \cdot p_{r}^{\ast}\xi - \left(p_{1}^{\ast}\xi \cdot \ldots \cdot p_{r}^{\ast}\xi\right)^{\bigcup_{\pi \in \Pi_{r}^{\circ}}\Delta^{(r)}_{\pi}}. \end{equation} \hfill $\blacksquare$ \end{definition} We now want to express $I_r$ using the classes $B^{(r)}_{\pi}.$ For this, we need some notation.
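As an aside, the complete Bell polynomials of Definition \ref{def:comp_bell} can be generated mechanically from the formal identity defining them. A small sketch (assuming Python with \verb+sympy+; the names are ours) truncates the exponential and recovers the polynomials listed in Example \ref{ex:bell_polys}:

```python
import sympy as sp

t = sp.symbols('t')
n = 4
xs = sp.symbols(f'x1:{n + 1}')   # x1, ..., x4

# A = sum_{l=1}^{n} x_l t^l / l!; since A has no constant term,
# the truncated exponential sum_{m<=n} A^m/m! is exact up to order t^n
A = sum(xs[l - 1] * t**l / sp.factorial(l) for l in range(1, n + 1))
expA = sp.expand(sum(A**m / sp.factorial(m) for m in range(n + 1)))

# P_r is r! times the coefficient of t^r in exp(A)
P = [sp.expand(sp.factorial(r) * expA.coeff(t, r)) for r in range(n + 1)]
```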
If $\pi$ and $\pi'$ are two partitions in $\Pi_{r},$ we write $\pi' \prec \pi$ if each block of $\pi'$ is contained in a block of $\pi,$ i.e., if the partition $\pi'$ is a refinement of the partition $\pi.$ The number of blocks of a partition $\pi$ is denoted by $|\pi|.$ Thus, the singleton partition $\widehat{0} = 1|2|\ldots|r$ is the only partition $\pi$ of $[r]$ such that $|\pi| = r.$ \begin{lemma}\label{lemma:mobius_coeffs} We have \begin{equation} I_{r} = \sum_{\pi \in \Pi_{r}} n^{(r)}_{\pi} B^{(r)}_{\pi}, \end{equation} where the coefficients $\{n^{(r)}_{\pi}\}$ are defined as follows: For $\pi \in \Pi_{r},$ let $s_{i}(\pi)$ denote the number of blocks of size $i$ in $\pi,$ where $1 \leq i \leq r.$ Then \begin{equation} n^{(r)}_{\pi} = \prod_{i=1}^{r} \left[(-1)^{i-1}(i-1)!\right]^{s_{i}(\pi)}. \end{equation} \end{lemma} \begin{proof} We have \begin{eqnarray*} I_{r} & = & p_{1}^{\ast}\xi \cdot \ldots \cdot p_{r}^{\ast}\xi - \left(p_{1}^{\ast}\xi \cdot \ldots \cdot p_{r}^{\ast}\xi\right)^{\bigcup_{\pi \in \Pi_{r}^{\circ}}\Delta^{(r)}_{\pi}} \\ & = & p_{1}^{\ast}\xi \cdot \ldots \cdot p_{r}^{\ast}\xi - \sum_{Z \subseteq \bigcup_{\pi \in \Pi_{r}^{\circ}}\Delta^{(r)}_{\pi}} \left(p_{1}^{\ast}\xi \cdot \ldots \cdot p_{r}^{\ast}\xi\right)^{Z}, \end{eqnarray*} where the $Z$s appearing in the index are distinguished varieties of the intersection product $p_{1}^{\ast}\xi \cdot \ldots \cdot p_{r}^{\ast}\xi$ supported on the union of the polydiagonals.
Since these $Z$s are irreducible, we have \begin{equation} -\sum_{Z \subseteq \bigcup_{\pi \in \Pi_{r}^{\circ}}\Delta^{(r)}_{\pi}} \left(p_{1}^{\ast}\xi \cdot \ldots \cdot p_{r}^{\ast}\xi\right)^{Z} = \sum_{\pi \in \Pi_{r}^{\circ}}\sum_{Z \subseteq \Delta^{(r)}_{\pi}} n^{(r)}_{\pi} \left(p_{1}^{\ast}\xi \cdot \ldots \cdot p_{r}^{\ast}\xi\right)^{Z}, \end{equation} where the $n^{(r)}_{\pi}$ are defined so that each term $\left(p_{1}^{\ast}\xi \cdot \ldots \cdot p_{r}^{\ast}\xi\right)^{Z}$ for some distinguished variety $Z$ supported on the union of the diagonals occurs only once. Starting with the ``largest'' polydiagonals, i.e., the $\Delta^{(r)}_{\pi}$ for which $|\pi| = r-1,$ the coefficient $n^{(r)}_{\pi}$ must be $-1.$ Then each term $\left(p_{1}^{\ast}\xi \cdot \ldots \cdot p_{r}^{\ast}\xi\right)^{Z}$ for $Z$ supported on some polydiagonal $\Delta^{(r)}_{\pi}$ with $|\pi| = r-2$ occurs $\sum_{\pi' \prec \pi} n^{(r)}_{\pi'}$ times, hence we must add them to the previous expression, but with a coefficient \begin{equation} n^{(r)}_ {\pi} := -1 - \sum_{\pi' \prec \pi, |\pi'| \neq r} n^{(r)}_{\pi'} = - \sum_{\pi' \prec \pi} n^{(r)}_{\pi'} \end{equation} to ensure they are only subtracted once. Now continue this way, using the principle of inclusion-exclusion. We recognize the definition of the coefficients $n^{(r)}_{\pi}$ as $n^{(r)}_{\pi} = \mu(\widehat{0}_r,\pi)$ with $\mu$ the M\"obius function of the poset $\Pi_{r}$ (cf. \cite[Section 3.9]{Sta}). Since we have \begin{equation} \mu_{n} := \mu(\widehat{0}_{n},\widehat{1}_{n}) = (-1)^{n-1}(n-1)! \end{equation} by \cite[Example 3.10.4]{Sta}, and because of the product theorem for M\"obius functions \cite[Proposition 3.8.2]{Sta}, it follows that \begin{equation} n^{(r)}_{\pi} = \prod_{i=1}^{r} \left[(-1)^{i-1}(i-1)!\right]^{s_{i}(\pi)}. 
\end{equation} \end{proof} For each $r \geq 1,$ it is clear that polydiagonals in $X^{\times r}$ are isomorphic, as schemes, to fibered products of small diagonals from the $X^{\times i}, i \leq r.$ For instance, in $X^{\times 6}$ we have \begin{equation} \Delta^{(6)}_{1|23|456} \cong X \times_{Y} \Delta^{(2)}_{12} \times_{Y} \Delta^{(3)}_{123}. \end{equation} So when passing from fewer than $r$ to $r$ nodes, what is new compared to previous cases --- from a structural point-of-view --- is the contribution to the intersection product $p_{1}^{\ast}\xi \cdot \ldots \cdot p_{r}^{\ast}\xi$ from the small diagonal $\Delta^{(r)}_{12\ldots r}.$ From the above, this contribution appears with the coefficient $(-1)^{r-1}(r-1)!,$ which is what motivates Definition \ref{def:equiv}. Since $B^{(i)}_ {1\ldots i}$ is a class of dimension $N-i,$ the codimension of its pushdown in $Y$ becomes $i.$ We want to show that $\forall r \geq 2,$ the class $I_{r} = \sum_{\pi \in \Pi_{r}} n^{(r)}_{\pi} B^{(r)}_{\pi} \in A_{N-r}(F^{\times r})$ (each term having been pushed forward to a class on $F^{\times r}$) pushes down to the $r$th Bell polynomial in the classes $a_{i}(S,\mathscr{L})H^{i}, 1 \leq i \leq r$ on $Y.$ We need an intermediate result (to lighten the notation, we assume all classes are pushed forward to the appropriate ambient variety $F^{\times i}$): \begin{proposition}\label{prop:splitting} For any $r \geq 2$ and any $\pi \in \Pi_{r},$ we have the following equality of classes on $Y$ ($\prod$ denoting the intersection product $\cdot$): \begin{equation} \gamma_{r\ast}B^{(r)}_{\pi} = \prod_{i=1}^{r} \Bigl(\gamma_{i\ast} B^{(i)}_{1\ldots i} \Bigr)^{s_{i}(\pi)} \in A^{r}(Y). \end{equation} \end{proposition} Before proving the proposition, let us clarify by looking at a simple example. 
\begin{example}\label{ex:five_diags} Say $r = 5$ and we are interested in the contribution to the intersection product $p_{1}^{\ast}\xi \cdot \ldots \cdot p_{5}^{\ast}\xi \in A^{\ast}(F^{\times r})$ coming from the diagonal $\Delta_{12|345}.$ For notational simplicity, let $p$ and $q$ denote the projections $p_{12}$ and $p_{345}$ from $F^{\times 5}$ to $F^{\times 2}$ and $F^{\times 3},$ respectively. Then there are two natural ways of associating a class on $Y$ to the class $B^{(5)}_{12|345}.$ The ``easiest'' is to push forward by $\gamma_{5}.$ The other one consists of pushing forward to $F^{\times 2} \times F^{\times 3}$ through $p \times q,$ then to $Y \times Y$ with $\gamma_{2} \times \gamma_{3},$ and finally pulling back to $Y$ via the diagonal embedding $\delta_{Y}: Y \hookrightarrow Y \times Y.$ The diagram \[ \xymatrix { F^{\times 5} \ar[r]^>>>>>{p \times q} \ar_{\gamma_{5}}[d] & F^{\times 2} \times F^{\times 3} \ar^{\gamma_{2} \times \gamma_{3}}[d]\\ Y \ar@{^{(}->}_{\delta_{Y}}[r] & Y \times Y } \] is a fiber square, and by \cite[Proposition 1.7]{Ful}, the relation \begin{equation} \label{equality} \gamma_{5\ast} (p \times q)^{\ast} \alpha = \delta_{Y}^{\ast}(\gamma_{2} \times \gamma_{3})_{\ast} \alpha \in A^{\ast}(Y) \end{equation} holds $\forall \alpha \in A^{\ast}(F^{\times 2} \times F^{\times 3}).$ There is a degree-preserving morphism of graded rings \begin{displaymath} A^{\ast}(F^{\times 2}) \otimes A^{\ast}(F^{\times 3}) \stackrel{\times}{\rightarrow} A^{\ast}(F^{\times 2} \times F^{\times 3}), \end{displaymath} called the \textit{exterior product,} and the relation (\ref{equality}) holds for all $\alpha$ in its image. However, the intersection product $\cdot$ on $Y$ is simply the composition \begin{displaymath} A^{\ast}(Y) \otimes A^{\ast}(Y) \stackrel{\times}{\rightarrow} A^{\ast}(Y \times Y) \stackrel{\delta_{Y}^{\ast}}{\rightarrow} A^{\ast}(Y).
\end{displaymath} Let $\alpha$ be the exterior product of $B^{(2)}_{12}$ and $B^{(3)}_{123}.$ Then the right hand side of (\ref{equality}) is $\gamma_{2\ast}B^{(2)}_{12} \cdot \gamma_{3\ast}B^{(3)}_{123}.$ So to conclude that $\gamma_{5\ast}B^{(5)}_{12|345} = \gamma_{2\ast}B^{(2)}_{12} \cdot \gamma_{3\ast}B^{(3)}_{123},$ it suffices to have the equality $(p \times q)^{\ast}\alpha = B^{(5)}_{12|345}.$ But $(p \times q)^{\ast}\alpha = p^{\ast}B^{(2)}_{12} \cdot q^{\ast}B^{(3)}_{123},$ so we must show that this intersection product equals $B^{(5)}_{12|345}.$ \hfill $\blacksquare$ \end{example} In fact, what is done in the preceding example is general: \begin{lemma}\label{lemma:splitting} Let $r \geq 2$ and consider a partition $\pi \in \Pi_{r}.$ For each block of $\pi$ there is a corresponding subset $I$ of $[r].$ Consider the natural projection $p_{I}: F^{\times r} \rightarrow F^{\times |I|}.$ Denote the set of blocks of $\pi$ by $\mathbb{B}(\pi).$ Then the pushdown to $Y$ through $\gamma_{r}$ of the class \begin{displaymath} \prod_{I \in \mathbb{B}(\pi)} p_{I}^{\ast}B^{(|I|)}_{1 \ldots |I|} \in A^{\ast}(F^{\times r}) \end{displaymath} is equal to the intersection product over $I \in \mathbb{B}(\pi)$ of the classes $\gamma_{|I|\ast}B^{(|I|)}_{1 \ldots |I|} \in A^{\ast}(Y).$ \end{lemma} \begin{proof} The matter of generalizing the result from the previous example is purely formal, and therefore left out. \end{proof} We now prove Proposition \ref{prop:splitting}: \begin{proof} By Lemma \ref{lemma:splitting}, it suffices to show that after push-forward to $F^{\times r},$ \begin{equation} B^{(r)}_{\pi} = \prod_{I \in \mathbb{B}(\pi)} p_{I}^{\ast}B^{(|I|)}_{1\ldots |I|}. 
\end{equation} For each $1 \leq i \leq r,$ let $p_{i}$ denote the $i$th projection from $F^{\times r}$ to $F$ and $\delta_{r}$ the diagonal embedding of $F^{\times r}$ in $F^{\times r} \times \ldots \times F^{\times r}.$ Let $N$ be the dimension of $Y.$ We are interested in the intersection diagram \[ \xymatrix { \bigcap X_{i} \cong X^{\times r} \ar@{^{(}->}[r] \ar@{^{(}->}[d] & F^{\times r} \ar@{^{(}->}^{\delta_{r}}[d]\\ X_{1} \times \ldots \times X_{r} \ar@{^{(}->}[r] & F^{\times r} \times \ldots \times F^{\times r} } \] where $X_{i} := p_{i}^{-1}(X).$ Denote by $\mathscr{N}^{(r)}$ the pullback of the normal bundle of $X_{1} \times \ldots \times X_{r}$ in $F^{\times r} \times \ldots \times F^{\times r}.$ The latter embedding is closed regular of codimension $3r,$ so $\mathscr{N}^{(r)}$ is a bundle of rank $3r$ on $X^{\times r}.$ Let $\zeta_{r}$ be the projection $\mathscr{N}^{(r)} \rightarrow X^{\times r}.$ The cone $C^{(r)} := C_{X^{\times r}}F^{\times r},$ which has pure dimension $2r + N,$ embeds as a closed subcone of $\mathscr{N}^{(r)}$ over $X^{\times r},$ and gives a cycle $[C^{(r)}]$ of dimension $2r + N$ on this bundle. Let the irreducible components of $C^{(r)}$ be $C^{(r)}_{j}, 1 \leq j \leq t_{r},$ with geometric multiplicities $m^{(r)}_{j}$ and supports $Z^{(r)}_{j},$ which are irreducible subschemes of $X^{\times r},$ not necessarily all distinct. 
Let $z^{(r)}_{j}: Z^{(r)}_{j} \rightarrow N^{(r)}_{j}$ be the zero section of the restriction of $\mathscr{N}^{(r)}$ to $Z^{(r)}_{j}.$ Now, $B^{(r)}_{\pi}$ is defined as the sum of the contributions to $X_1 \cdot \ldots \cdot X_r$ coming from all distinguished varieties $Z^{(r)}_{j}$ (defined above) supported on $\Delta^{(r)}_{\pi} = \bigcap_{I \in \mathbb{B}(\pi)} \Delta^{(r)}_{I}.$ Hence, with $p_I: F^{\times r} \rightarrow F^{\times |I|}$ the natural projection for each $I \in \mathbb{B}(\pi),$ there is a multiplicative correspondence between tuples of components of $C^{(|I|)}, I \in \mathbb{B}(\pi),$ each with support contained in $\Delta^{(|I|)}_{1\ldots |I|},$ and components of $C^{(r)}$ with support contained in $\Delta^{(r)}_{\pi},$ such that the geometric multiplicity of $C^{(r)}_{j}$ equals the product of the geometric multiplicities of the corresponding components of the $C^{(|I|)}.$ Hence, letting $\delta$ denote the diagonal embedding of $F^{\times r}$ into $F^{\times r} \times \ldots \times F^{\times r}$ ($|I|$ factors) and letting $\times$ denote the exterior product, \begin{eqnarray*} \prod_{I \in \mathbb{B}(\pi)} p_{I}^{\ast}B^{(|I|)}_{1\ldots |I|} & = & \delta^{\ast}\left(\bigtimes_{I \in \mathbb{B}(\pi)} p_{I}^{\ast}B^{(|I|)}_{1\ldots |I|}\right) \\ & = & \delta^{\ast} \left( \bigtimes_{I \in \mathbb{B}(\pi)} p_{I}^{\ast} \left(\sum_{Z^{(|I|)}_{j} \subseteq \Delta^{(|I|)}_{1\ldots |I|}} m^{(|I|)}_{j}z^{(|I|)\ast}_{j}[C^{(|I|)}_{j}] \right)\right) \\ & = & \delta^{\ast} \left( \sum_{\substack{I \in \mathbb{B}(\pi) \\ Z^{(|I|)}_{j(I)} \subseteq \Delta^{(|I|)}_{1\ldots |I|}}} \prod_{I \in \mathbb{B}(\pi)} m^{(|I|)}_{j(I)} \bigtimes_{I \in \mathbb{B}(\pi)} p_{I}^{\ast} z^{(|I|)\ast}_{j(I)}[C^{(|I|)}_{j(I)}] \right) \\ & = & \sum_{\substack{I \in \mathbb{B}(\pi) \\ Z^{(|I|)}_{j(I)} \subseteq \Delta^{(|I|)}_{1\ldots |I|}}} \prod_{I \in \mathbb{B}(\pi)} m^{(|I|)}_{j(I)} \delta^{\ast} \left( \bigtimes_{I \in \mathbb{B}(\pi)} p_{I}^{\ast} 
z^{(|I|)\ast}_{j(I)}[C^{(|I|)}_{j(I)}] \right). \end{eqnarray*} Using the definition of the intersection product \cite[Section 6.1]{Ful} and the correspondence between the $C^{(r)}_{j}$ whose support is contained in $\Delta^{(r)}_{\pi},$ and tuples of components of the $C^{(|I|)}$ for $I \in \mathbb{B}(\pi),$ with $\prod_{I \in \mathbb{B}(\pi)} m^{(|I|)}_{j(I)} = m^{(r)}_{j},$ we now get \begin{eqnarray*} \prod_{I \in \mathbb{B}(\pi)} p_{I}^{\ast}B^{(|I|)}_{1\ldots |I|} & = & \sum_{Z^{(r)}_{j} \subseteq \Delta^{(r)}_{\pi}} m^{(r)}_{j} z^{(r)\ast}_{j}[C^{(r)}_{j}] \\ & = & B^{(r)}_{\pi}. \end{eqnarray*} \end{proof} We may now proceed to prove the main theorem of this section, Theorem \ref{thm:shape}, concerning the shape of the node polynomials:\\ \begin{proof} We assume $r$ is such that $\mathscr{L}$ is $r$-very ample. Hence, by Proposition 2.1 in \cite{KST}, a general $r$-dimensional linear system $\mathbb{P}^{r} \subset |\mathscr{L}|$ contains a finite number of $r$-nodal curves, appearing with multiplicity 1, and all other curves are reduced with geometric genus strictly larger than $g-r,$ where $2g-2 = \mathscr{L} \cdot (\mathscr{L}+\mathscr{K}_{S}).$ These curves are excluded from the counting by subtracting from $p_{1}^{\ast}\xi \cdot \ldots \cdot p_{r}^{\ast}\xi$ the equivalence of the polydiagonals. Indeed, this operation takes care both of the excess contribution and of the contribution from embedded, distinguished varieties. Since curves in $|\mathscr{L}|$ with higher geometric genus must have strictly fewer than $r$ singular points, the corresponding distinguished varieties must be supported on the diagonal subspace $\bigcup_{\pi \in \Pi_{r}^{\circ}} \Delta^{(r)}_{\pi}$ of $X \times_{Y} \ldots \times_{Y} X.$ So the cycle class $\gamma_{r\ast} I_{r} \in A^{r}(Y)$ represents a cycle which is reduced and enumerates precisely the finite number of $r$-nodal curves in the generic subsystem $\mathbb{P}^{r},$ with an ordering of the $r$ nodes.
Since there are $r!$ ways to order the $r$ nodes, the class $\gamma_{r\ast} I_{r}/r!$ enumerates $r$-nodal curves, i.e., \begin{equation} N_{r}(S, \mathscr{L})H^{r} = \frac{1}{r!}\gamma_{r\ast} I_{r}. \end{equation} Since we defined $a_{1}(S,\mathscr{L})$ as $\int_{Y}\gamma_{1\ast}\xi,$ the pushdown to $Y$ of $\prod_{i=1}^{r}p^{\ast}_{i}\xi$ becomes $a_{1}(S,\mathscr{L})^{r}H^{r}.$ Also, Proposition \ref{prop:splitting} implies that $n^{(r)}_{\pi}B^{(r)}_{\pi}$ pushes down to $\prod_{i=1}^{r} a_{i}(S,\mathscr{L})^{s_{i}(\pi)}H^{r},$ with $s_{i}(\pi)$ denoting the number of blocks of size $i$ in the partition $\pi \in \Pi_{r}.$ For any $r$-tuple of non-negative integers $j_{i}$ such that $j_{1} + 2j_{2} + \ldots + rj_{r} = r,$ let $\widetilde{e}_{j_{1}\ldots j_{r}}$ denote the number of polydiagonals with $j_{i}$ blocks of size $i.$ Then it is clear that \begin{equation} N_{r}(S,\mathscr{L}) = \frac{1}{r!} \sum_{j_{1} + \ldots + rj_{r} = r} \widetilde{e}_{j_{1}\ldots j_{r}}\prod_{l=1}^{r} a_{l}(S,\mathscr{L})^{j_{l}}. \end{equation} Set $L_{r}(a_{1}(S,\mathscr{L}), \ldots, a_{r}(S,\mathscr{L}))$ to be the sum $\sum_{j_{1} + \ldots + rj_{r} = r} \widetilde{e}_{j_{1}\ldots j_{r}}\prod_{l=1}^{r} a_{l}(S,\mathscr{L})^{j_{l}}.$ If we regroup the polydiagonals by their number of blocks, $i,$ and note that polydiagonals with $i$ blocks can have no blocks of size $> r-i+1$ (indeed, each block must have at least one element, so we would get a number of elements $> (i-1) \cdot 1 + r-i+1 = r,$ which is impossible), then \begin{displaymath} L_{r}(a_{1}(S,\mathscr{L}), \ldots, a_{r}(S,\mathscr{L})) = \sum_{i=1}^{r} \sum_{J_{r,i}} e_{j_{1}\ldots j_{r-i+1}} \prod_{l=1}^{r-i+1} a_{l}(S,\mathscr{L})^{j_{l}}. \end{displaymath} Here, $J_{r,i}$ is the set of tuples $(j_{1}, \ldots, j_{r-i+1})$ such that we have $\sum lj_{l} = r$ and $\sum j_{l} = i$ (so $\sum j_{l}$ is the number of blocks and $\sum lj_{l}$ is the number of elements for the corresponding partition). 
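These profile counts can be tabulated directly by enumerating the set partitions of $[r]$ and recording their block-size profiles; the counts agree with the classical closed formula $r!/\prod_{l} j_{l}!\,(l!)^{j_{l}}$ for the coefficients of the partial Bell polynomials. A small independent check in Python (the helper names are ours):

```python
from math import factorial
from collections import Counter

def set_partitions(elems):
    """All set partitions of the list `elems`, as lists of sets."""
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for p in set_partitions(rest):
        for i in range(len(p)):
            yield p[:i] + [p[i] | {first}] + p[i + 1:]
        yield [{first}] + p

def profile_counts(r):
    """Map each block-size profile (j_1, ..., j_r) to its number of partitions of [r]."""
    counts = Counter()
    for p in set_partitions(list(range(r))):
        sizes = Counter(len(block) for block in p)
        counts[tuple(sizes.get(l, 0) for l in range(1, r + 1))] += 1
    return counts

def bell_coefficient(r, profile):
    """The classical closed formula r! / prod_l ( j_l! * (l!)^j_l )."""
    denom = 1
    for l, j in enumerate(profile, start=1):
        denom *= factorial(j) * factorial(l) ** j
    return factorial(r) // denom

r = 4
counts = profile_counts(r)
assert all(count == bell_coefficient(r, prof) for prof, count in counts.items())
# e.g. the profile (2, 1, 0, 0) -- two singleton blocks and one pair -- occurs
# 6 times, matching the coefficient of x_1^2 x_2 in
# P_4 = x_1^4 + 6 x_1^2 x_2 + 4 x_1 x_3 + 3 x_2^2 + x_4.
assert counts[(2, 1, 0, 0)] == 6
```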
The coefficient $e_{j_{1} \ldots j_{r-i+1}}$ is the number of polydiagonals with $i$ blocks, of which $j_{l}$ have size $l.$ But, according to Definition \ref{def:part_bell}, this is exactly how the coefficients of the partial Bell polynomials are defined, so $L_{r}(a_{1}(S,\mathscr{L}), \ldots, a_{r}(S,\mathscr{L}))$ is in fact equal to the $r$th complete Bell polynomial $P_{r}$ in the $a_{i}(S,\mathscr{L}), 1 \leq i \leq r,$ which is what we wanted to prove. \end{proof} \section{On the equivalence of the polydiagonals}\label{sec:equivalences} The previous section established the shape of the node polynomials $Z_r$, but it is computationally incomplete: apart from providing an intersection-theoretic definition of the $a_i,$ it does not present them as linear combinations (with integer coefficients) of the Chern numbers of $(S,\mathscr{L}).$ The distinguished varieties supported on the small diagonal $\Delta^{(r)}_{12\ldots r}$ of $X^{\times r}$ include the diagonal itself, in addition to embedded components. Our approach here is to first consider the proper contribution of the polydiagonals, the objective being to compute the excess contribution from their union, $\Delta(r),$ to the intersection product $X_1 \cdot \ldots \cdot X_r.$ In the next section, we treat the residual contribution coming from embedded components. We recall the definition of the Segre class of a closed subscheme: \begin{definition}\label{def:segre} Let $X$ be a closed subscheme of a scheme $Y.$ Let $C$ denote the normal cone of $X$ in $Y,$ and consider the projective completion \begin{equation} P(C\oplus \mathbf{1}) := \textnormal{Proj}(S^{\bullet}[z]).
\end{equation} Denote by $q$ the projection from $P(C\oplus \mathbf{1})$ to $X,$ and by $\mathscr{O}(1)$ the canonical line bundle on $P(C \oplus \mathbf{1}).$ The Segre class of $X$ in $Y$ is the following class: \begin{equation} s(C) := q_{\ast}\left(\sum_{i \geq 0} c_{1}(\mathscr{O}(1))^{i} \cap [P(C \oplus \mathbf{1})]\right) \in A_{\ast}(X). \end{equation} \hfill $\blacksquare$ \end{definition} By \cite[Proposition 9.1.1]{Ful}, the equivalence of $\Delta(r)$ for the intersection product $X_1 \cdot \ldots \cdot X_r$ is \begin{equation}\label{eqn:diagonals_excess} (X_{1} \cdot \ldots \cdot X_{r})^{\Delta(r)} = \left\{ \prod_{i=1}^{r}c(N_{X_{i}}F^{\times r}|\Delta(r)) \cap s(\Delta(r),F^{\times r})\right\}_{N-r} \end{equation} The structure of the subscheme $\Delta(r),$ however, makes any direct attempt to control this difficult. Indeed, $\Delta(r)$ has several irreducible components, and while one can compute the contribution of each $\Delta^{(r)}_{\pi}$ separately (see below), this does not directly yield the contribution of their union. To clarify this, we proceed in several steps: \begin{definition}\label{def:equivalence} For each $\pi \in \Pi_{r}^{\circ},$ denote by $\mathscr{E}^{(r)}_{\pi}$ the equivalence of $\Delta^{(r)}_{\pi}$ for the intersection product $X_{1} \cdot \ldots \cdot X_{r},$ that is \begin{equation}\label{eqn:equivalence} \mathscr{E}^{(r)}_{\pi} := \left(X_{1} \cdot \ldots \cdot X_{r}\right)^{\Delta^{(r)}_{\pi}} \in A_{N-r}(\Delta^{(r)}_{\pi}). 
\end{equation} Also, let $Q^{(r)}_{\pi}$ denote the integer $$\int_{Y} f^{(r)}_{\pi\ast} \left(X_{1} \cdot \ldots \cdot X_{r} \right)^{\Delta^{(r)}_{\pi}} \in \mathbb{Z},$$ where $f^{(r)}_{\pi}: \Delta^{(r)}_{\pi} \rightarrow Y$ is the composition of the embedding of $\Delta^{(r)}_{\pi}$ into $F^{\times r}$ and the projection $\gamma_{r}: F^{\times r} \rightarrow Y.$ For each $r \geq 1,$ let $\mathscr{E}_{r}$ denote $\mathscr{E}^{(r)}_{12\ldots r},$ and set $Q_{r} := Q^{(r)}_{12\ldots r}.$ \hfill $\blacksquare$ \end{definition} Below, we will compute the numbers $Q_{r}.$ For now, we note that they are --- in large part --- all we need to understand the equivalence of $\Delta(r):$ \begin{theorem}\label{thm:split_equiv} Let $\pi \in \Pi_{r}^{\circ}.$ For each $i \in [r],$ let $s_{i}(\pi)$ denote the number of blocks of size $i$ in the partition $\pi.$ Then $$Q^{(r)}_{\pi} = \prod_{i=1}^{r} Q_{i}^{s_{i}(\pi)}.$$ \end{theorem} \begin{proof} On $F^{\times r},$ let $N^{(r)}_{i}$ denote $p_{i}^{(r)\ast}\mathscr{P}_{F/Y}(\mathscr{L} \boxtimes \mathscr{O}_{Y}(1)),$ where $p^{(r)}_{i}: F^{\times r} \rightarrow F$ are the projections.
Also, let $\mathbb{B}(\pi)$ denote the set of blocks of the partition $\pi,$ and for $I \in \mathbb{B}(\pi),$ let $|I|$ denote the number of elements in $I$ and $p_I: F^{\times r} \rightarrow F^{\times |I|}$ the projection $\prod_{i \in I}p^{(r)}_{i}.$ Then we have: \footnotesize \begin{eqnarray*} \mathscr{E}^{(r)}_{\pi} & = & (X_1 \cdot \ldots \cdot X_r)^{\Delta^{(r)}_{\pi}} \\ & = & \left\{c(N^{(r)}_1 \oplus \ldots \oplus N^{(r)}_r|\Delta^{(r)}_{\pi}) \cap s(\Delta^{(r)}_{\pi},F^{\times r})\right\}_{N-r} \\ & = & \left\{c\left(\bigoplus_{I \in \mathbb{B}(\pi)} p_{I}^{\ast}(N^{(|I|)}_{1} \oplus \ldots \oplus N^{(|I|)}_{|I|})|\Delta^{(r)}_{\pi} \right) \cap s\left(\prod_{I \in \mathbb{B}(\pi)} \Delta^{(|I|)}_{1\ldots |I|}, \prod_{I \in \mathbb{B}(\pi)} F^{\times |I|} \right) \right\}_{N-r} \\ & = & \prod_{I \in \mathbb{B}(\pi)} p_{I}^{\ast}\left\{c(N^{(|I|)}_{1} \oplus \ldots \oplus N^{(|I|)}_{|I|}|\Delta^{(|I|)}_{1\ldots |I|}) \cap s(\Delta^{(|I|)}_{1\ldots |I|},F^{\times |I|})\right\}_{N-|I|} \\ & = & \prod_{I \in \mathbb{B}(\pi)} p_{I}^{\ast}\mathscr{E}_{|I|}, \end{eqnarray*} \normalsize since $\Delta^{(r)}_{\pi} \cong \prod_{I \in \mathbb{B}(\pi)} \Delta^{(|I|)}_{1\ldots |I|}$ (fibered product over $Y$) and by definition of the intersection product as $A^{\ast}(F^{\times r}) \otimes A^{\ast}(F^{\times r}) \stackrel{\times}{\rightarrow} A^{\ast}(F^{\times r} \times F^{\times r}) \stackrel{\delta^{\ast}}{\rightarrow} A^{\ast}(F^{\times r}).$ But by a reasoning similar to Proposition \ref{prop:splitting}, the pushdown of $\prod_{I \in \mathbb{B}(\pi)} p_{I}^{\ast}\mathscr{E}_{|I|}$ to $Y$ is equal to $\prod_{I \in \mathbb{B}(\pi)} f_{\ast} \mathscr{E}_{|I|},$ hence \begin{eqnarray*} Q^{(r)}_{\pi} & = & \int_{Y} \prod_{I \in \mathbb{B}(\pi)} f_{\ast} \mathscr{E}_{|I|} \\ & = & \prod_{I \in \mathbb{B}(\pi)} Q_{|I|} = \prod_{i=1}^{r} Q_{i}^{s_{i}(\pi)}, \end{eqnarray*} as claimed. 
\end{proof} At this point, the naive way to proceed would be to use the principle of inclusion-exclusion to express $(X_1 \cdot \ldots \cdot X_r)^{\Delta(r)}$ as a linear combination of the $\mathscr{E}^{(r)}_{\pi}.$ The following example illustrates that this is impossible: \begin{example}\label{ex:case_three} Let $r := 3.$ There are four diagonals to consider: the small diagonal $\Delta^{(3)}_{123}$ and the three large diagonals $\Delta^{(3)}_{12|3}, \Delta^{(3)}_{13|2}$ and $\Delta^{(3)}_{23|1}.$ Each of those contains the small diagonal. Thus, the principle of inclusion-exclusion predicts the following equality (where the terms on the right hand side are pushed forward to $\Delta(3)$): \begin{equation}\label{eqn:inc_exc_fail} (X_{1} \cdot X_{2} \cdot X_{3})^{\Delta(3)} = \sum_{i < j}(X_{1} \cdot X_{2} \cdot X_{3})^{\Delta^{(3)}_{ij}} - 2(X_{1} \cdot X_{2} \cdot X_{3})^{\Delta^{(3)}_{123}}. \end{equation} When pushing this down to $Y$ and taking the degree, the right hand side becomes $3Q_1Q_2 - 2Q_3,$ because of Theorem \ref{thm:split_equiv}. But this is not the correct ``total'' equivalence, simply because Segre classes do not satisfy the principle of inclusion-exclusion. This failure is easily illustrated by the following example: Let $X$ be the subscheme of $\mathbb{P}^{2}$ defined as the union of two lines; since it is a divisor of degree 2, its Segre class in $\mathbb{P}^{2}$ is $2l-4l^2,$ with $l$ the class of a hyperplane. However, inclusion-exclusion predicts $(l-l^2)+(l-l^2)-l^2 = 2l-3l^2,$ which is wrong. For more on this problem and how to understand it, see \cite{Alu1}.
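The arithmetic behind this counterexample is easy to mechanize: for a divisor $D$ in $\mathbb{P}^{2}$, $s(D,\mathbb{P}^{2}) = c(\mathscr{O}(D))^{-1} \cap [D]$, with all products truncated by $l^{3} = 0$. A quick sketch in Python (truncated power series in $l$; function names are ours) reproduces both sides of the comparison:

```python
def lmul(p, q, trunc=3):
    """Multiply two polynomials in l (coefficient lists), truncating l^trunc = 0."""
    r = [0] * trunc
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            if i + j < trunc:
                r[i + j] += a * b
    return r

def segre_of_divisor(deg):
    """s(D, P^2) = c(O(D))^{-1} cap [D] for a degree-`deg` divisor D, mod l^3."""
    inverse = [1, -deg, deg * deg]       # (1 + deg*l)^{-1} = 1 - deg*l + deg^2*l^2
    return lmul([0, deg, 0], inverse)

assert segre_of_divisor(2) == [0, 2, -4]            # 2l - 4l^2, as in the text

line = segre_of_divisor(1)                          # s(line, P^2) = l - l^2
point = [0, 0, 1]                                   # s(point, P^2) = l^2
incl_excl = [a + b - c for a, b, c in zip(line, line, point)]
assert incl_excl == [0, 2, -3]                      # 2l - 3l^2: the wrong answer
```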
\hfill $\blacksquare$ \end{example} For us, this means that we need to construct appropriate correction terms, $\mathscr{C}^{(r)}_{\pi},$ such that $(X_1 \cdot \ldots \cdot X_{r})^{\Delta(r)}$ can be written as a linear combination, not of the $\mathscr{E}^{(r)}_{\pi},$ but of the corrected terms $\mathscr{E}^{(r)}_{\pi} + \mathscr{C}^{(r)}_{\pi}.$ For this, we make use of the classical theory of multiple point formulas, following essentially Kleiman's \cite{Klei1}. Let $f: X \rightarrow Y$ be the composition of the embedding $\iota$ of $X$ in $F = S \times Y$ and the projection $\gamma_{1}$ to $Y = |\mathscr{L}|.$ This is an lci of codimension 1. Its strict double points are points in $X$ corresponding to binodal curves with one marked node, while the double point locus also includes cuspidal curves. The double point formula \cite[Theorem 5.6]{Klei1} states that \begin{eqnarray*} m_{2} & = & f^{\ast}f_{\ast}[X] - c_{1} \cap [X] \\ & = & p_{1\ast} (p_{1}^{\ast}[X] \cdot p_{2}^{\ast}[X]) - \left\{\frac{f^{\ast}(c(T_{Y}))}{c(T_{X})}\right\}^{1} \cap [X] \\ & = & p_{1\ast} (p_{1}^{\ast}[X] \cdot p_{2}^{\ast}[X]) - \left\{c(N_{X}F)\iota^{\ast}\nu^{\ast}c(T_{S})^{-1} \cap [X] \right\}_{N-2}. 
\end{eqnarray*} Indeed, to show that $f^{\ast}f_{\ast}[X] = p_{1\ast}(p_{1}^{\ast}[X] \cdot p_{2}^{\ast}[X]),$ consider the fiber diagram \[ \xymatrix { X \times_{Y} X \ar[d]_{p_1} \ar[r] & X \times X \ar[d]^{1 \times f} \\ X \ar[r]_{\gamma_f} \ar[d]_{f} & X \times Y \ar[d]^{f \times 1} \\ Y \ar[r]_{\delta} & Y \times Y } \] where $\gamma_f: X \hookrightarrow X \times Y$ is the graph embedding of $X,$ and $\delta: Y \hookrightarrow Y \times Y$ is the diagonal embedding of $Y.$ Then \begin{eqnarray*} f^{\ast}f_{\ast}[X] & = & \gamma_{f}^{\ast}([X] \times f_{\ast}[X]) \\ & = & p_{1\ast}(\gamma_{f}^{!}[X \times X]) \\ & = & p_{1\ast}(\delta^{!}[X \times X]) = p_{1\ast}(p_{1}^{\ast}[X] \cdot p_{2}^{\ast}[X]), \end{eqnarray*} where $\gamma_{f}^{!}: A^{\ast}(X \times X) \rightarrow A^{\ast}(X \times_{Y} X)$ and $\delta^{!}: A^{\ast}(X \times X) \rightarrow A^{\ast}(X \times_{Y} X)$ are the refined Gysin pullback homomorphisms induced by $\gamma_f$ and $\delta$ (see \cite[Sec. 6.2]{Ful} for a formal definition). On the other hand, to show that \begin{displaymath} \left\{\frac{f^{\ast}(c(T_{Y}))}{c(T_{X})}\right\}^{1} \cap [X] = \left\{c(N_{X}F)\iota^{\ast}\nu^{\ast}c(T_{S})^{-1} \cap [X] \right\}_{N-2}, \end{displaymath} we simply use the standard exact sequence of the regular embedding $\iota,$ \begin{equation}\label{eqn:exact_seq} 0 \rightarrow T_{X} \rightarrow \iota^{\ast}T_{F} \rightarrow N_{X}F \rightarrow 0, \end{equation} and the fact that $T_{F} = \nu^{\ast}T_{S} \oplus \gamma_{1}^{\ast}T_{Y},$ so that $c(T_{F}) = \nu^{\ast}c(T_{S}) \cdot \gamma_{1}^{\ast} c(T_{Y}),$ where $\nu$ is the projection $F = S \times Y \rightarrow S.$ We conclude that $\int_{Y}f_{\ast}m_{2} = Q_{1}^{2} - Q_{2},$ where the $Q_{i}$ are the terms introduced in Definition \ref{def:equivalence}.
Indeed, \begin{eqnarray*} \mathscr{E}_{2} & = & \left(X_{1} \cdot X_{2}\right)^{\Delta^{(2)}_{12}} \\ & = & \left\{p_1^{\ast}c(N_{X}F|\Delta^{(2)}_{12})p_2^{\ast}c(N_{X}F|\Delta^{(2)}_{12}) \cap s(\Delta^{(2)}_{12},F^{\times 2}) \right\}_{N-2} \\ & = & \left\{c(N_{X}F)^{2} \cap c(N_{X}F)^{-1} \iota^{\ast}\nu^{\ast}c(T_{S})^{-1} \cap [X] \right\}_{N-2} \\ & = & \left\{c(N_{X}F)\iota^{\ast}\nu^{\ast}c(T_{S})^{-1} \cap [X] \right\}_{N-2} \\ & = & c_{1} \cap [X]. \end{eqnarray*} Now, the triple point formula \cite[Theorem 5.9]{Klei1} can be manipulated as follows (all maps have codimension 1): \begin{eqnarray*} m_{3} & = & f^{\ast}f_{\ast}m_2 - 2c_1 \cap m_2 + 2c_2 \cap m_1 \\ & = & f^{\ast}f_{\ast}(f^{\ast}f_{\ast}[X] - c_{1} \cap [X]) - 2c_{1} \cap (f^{\ast}f_{\ast}[X] - c_{1} \cap [X]) + 2c_{2} \cap [X] \\ & = & p_{1\ast} (p_{1}^{\ast}[X] \cdot p_{2}^{\ast}[X] \cdot p_{3}^{\ast}[X]) - 3c_{1} \cap f^{\ast}f_{\ast}[X] + 2c_{1}^{2} \cap [X] + 2c_{2} \cap [X]. \end{eqnarray*} We now rewrite the term $c_1^{2} \cap [X]:$ Let $c(f)(t) := 1 + \sum c_i t^i$ denote the polynomial $c_{t}(N_{X}F)\iota^{\ast}\nu^{\ast}c_{t}(T_{S})^{-1}$ in $t.$ Then $c_{1}$ is the coefficient of $t$ in $c(f)(t).$ On the other hand, the equivalence $\mathscr{E}_{3}$ is defined as \begin{equation} \left\{c(N_{X}F)^2\iota^{\ast}\nu^{\ast}c(T_{S})^{-2} \cap [X] \right\}_{N-3}, \end{equation} which corresponds to capping the coefficient of $t^2$ in $c(f)(t)^2,$ namely $c_1^2 + 2c_2,$ with $[X].$ Thus, \begin{equation} 2c_{1}^{2} \cap [X] = 2\mathscr{E}_{3} - 4c_{2} \cap [X]. \end{equation} \begin{definition} We denote by $C_3$ the integer \begin{displaymath} C_{3} := -\int_{Y} f_{\ast} (c_{2} \cap [X]) = -\int_{Y} f_{\ast} \left\{c(N_{X}F)\iota^{\ast}\nu^{\ast}c(T_{S})^{-1} \cap [X] \right\}_{N-3}.
\end{displaymath} \hfill $\blacksquare$ \end{definition} Recall that the third complete Bell polynomial is defined as $P_{3}(x_1,x_2,x_3) := x_1^3 + 3x_1x_2 + x_3.$ We therefore see that \begin{equation} \int_{Y}f_{\ast}m_{3} = Q_{1}^{3} - 3Q_{1}Q_{2} + 2Q_{3} + 2C_{3} = P_{3}(Q_{1},-Q_{2},2(Q_{3}+C_{3})). \end{equation} Note that, comparing with the original expression $Q_{1}^{3} - 3Q_{1}Q_{2} + 2Q_{3}$ predicted by inclusion-exclusion (cf. Example \ref{ex:case_three}), we recover a ``correction term.'' \begin{definition} We denote by $C_{4}$ the integer \begin{eqnarray*} C_{4} & := & -\int_{Y} f_{\ast} \left(\frac{3}{2} \left\{c(N_{X}F)^2(\iota^{\ast}\nu^{\ast}c(T_{S})^{-1})^2 \cap [X] \right\}_{N-4}\right) \\ & & + \int_{Y} f_{\ast}\left( 2\left\{c(N_{X}F)\iota^{\ast}\nu^{\ast}c(T_{S})^{-1} \cap [X] \right\}_{N-4}\right). \end{eqnarray*} \hfill $\blacksquare$ \end{definition} Recall that $P_{4}(x_{1},x_{2},x_{3},x_{4}) := x_1^4 + 6x_1^2x_2 + 4x_1x_3 + 3x_2^2 + x_4$ is the fourth Bell polynomial. Kleiman's 4-point formula \cite[Theorem 5.10]{Klei1} gives, by expanding the terms $m_2$ and $m_3,$ \begin{eqnarray*} m_{4} & = & f^{\ast}f_{\ast}m_{3} - 3c_{1} \cap m_{3} + 6c_{2} \cap m_{2} - 6(c_{1}c_{2} + 2c_{3}) \cap m_{1} \\ & = & (f^{\ast}f_{\ast})^{3}[X] - 6c_{1} \cap (f^{\ast}f_{\ast})^{2}[X] + 8(c_{1}^{2}+c_{2}) \cap f^{\ast}f_{\ast}[X] \\ & & + 3c_{1} \cap f^{\ast}f_{\ast}(c_{1} \cap [X]) - 6(3c_{1}c_{2} + 2c_{3} +c_{1}^{3}) \cap [X]. 
\end{eqnarray*} Now, $\mathscr{E}_{4}$ is defined as \begin{equation} \left\{c(N_{X}F)^3\iota^{\ast}\nu^{\ast}c(T_{S})^{-3} \cap [X]\right\}_{N-4}, \end{equation} which corresponds to capping the coefficient of $t^3$ in $c(f)(t)^3,$ which is $c_{1}^3 + 6c_1c_2 + 3c_3,$ with $[X].$ Also, considering the terms appearing in the definition of $C_4,$ we have \begin{equation} \left\{c(N_{X}F)^2(\iota^{\ast}\nu^{\ast}c(T_{S})^{-1})^2 \cap [X] \right\}_{N-4} \end{equation} which corresponds to taking the coefficient of $t^3$ in $c(f)(t)^2,$ namely $2(c_1c_2 + c_3),$ and capping with $[X].$ Finally, \begin{equation} \left\{c(N_{X}F)(\iota^{\ast}\nu^{\ast}c(T_{S})^{-1}) \cap [X] \right\}_{N-4} \end{equation} corresponds to capping the coefficient of $t^3$ in $c(f)(t),$ namely $c_3,$ with $[X].$ Hence, \begin{eqnarray*} & & \int_{Y} -6 f_{\ast}((3c_{1}c_{2} + 2c_{3} +c_{1}^{3}) \cap [X])\\ & = & \int_{Y} -6 f_{\ast}((c_{1}^{3} + 6c_1c_2 + 3c_3 - 3/2 (2c_1c_2 + 2c_3) +2c_3) \cap [X]) \\ & = & -6(Q_{4}+C_{4}), \end{eqnarray*} and we see that \begin{equation} \int_{Y}f_{\ast}m_{4} = P_{4}(Q_1, -Q_2, 2(Q_3+C_3), -6(Q_4+C_4)). \end{equation} \begin{remark}\label{rem:multiple_bell} There are two interesting observations to be made: First, we see that by combining certain terms in Kleiman's $r$-point formulas, we can express these formulas using Bell polynomials. Second, the ``correction terms'' $C_i,$ which a priori occur because we are trying to do inclusion-exclusion using objects (Segre classes) which do not behave well in this regard, are defined using the same classes which define the $Q_i,$ but considering parts of different dimensions. 
To state this more clearly, we introduce the class \begin{equation} \boxed{M_{r}(S,\mathscr{L}) := c(N_{X}F)^{r-1}(\iota^{\ast}\nu^{\ast}c(T_{S})^{-1})^{r-1} \cap [X]} \end{equation} for each $r \geq 2.$ Then $Q_{r}$ is obtained from the component of $M_{r}(S,\mathscr{L})$ of dimension $N-r,$ while we have \begin{eqnarray*} C_{3} & = & -\int_{Y} f_{\ast}\left\{M_{2}(S,\mathscr{L})\right\}_{N-3}, \\ C_{4} & = & -\int_{Y} f_{\ast} \left(3/2\left\{M_{3}(S,\mathscr{L})\right\}_{N-4} - 2\left\{M_{2}(S,\mathscr{L})\right\}_{N-4}\right). \end{eqnarray*} \end{remark} We see this as evidence supporting the following conjecture (recall that $P_r$ denotes the $r$th complete Bell polynomial in $r$ variables): \begin{conjecture}\label{conj:linearity_corr} For $r \geq 1,$ there is a $\mathbb{Q}$-linear combination $C_{r}$ of the integers \begin{equation} \int_{Y} f_{\ast} \{M_{i}(S,\mathscr{L})\}_{N-r}, \end{equation} for $2 \leq i \leq r-1,$ with $C_1 = C_2 = 0,$ such that \begin{equation} \int_{Y} f_{\ast}m_{r} = P_{r}(Q_1 + C_1, -(Q_2 + C_2), \ldots, (-1)^{r-1}(r-1)!(Q_r+C_r)). \end{equation} \end{conjecture} Our next aim is to compute the equivalence terms $Q_n$ in the case of the projective plane; this simplification allows for a clearer presentation. It is not difficult to see that more generally, both the equivalence terms $Q_n$ and (at least for $n \leq 4$) the correction terms $C_n$ are linear combinations of the four Chern numbers of $(S,\mathscr{L}),$ and the general closed expressions for the $Q_n$ can be obtained following the same steps as below, although the computations are slightly more involved.
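Since the passage from Kleiman's multiple point formulas to the Bell-polynomial form is pure algebra in the symbols $Q_i$ and $C_i$, it can be sanity-checked by substituting arbitrary integers. The sketch below (variable names are ours; the expanded form of the $r=4$ pushdown is our own bookkeeping of the displayed terms, not quoted from the text) compares both sides of the $r=3$ and $r=4$ identities:

```python
from random import randint

def P3(x1, x2, x3):
    """Third complete Bell polynomial."""
    return x1**3 + 3*x1*x2 + x3

def P4(x1, x2, x3, x4):
    """Fourth complete Bell polynomial."""
    return x1**4 + 6*x1**2*x2 + 4*x1*x3 + 3*x2**2 + x4

for _ in range(100):
    Q1, Q2, Q3, Q4, C3, C4 = (randint(-50, 50) for _ in range(6))
    # r = 3: the term-by-term pushdown computed above ...
    m3 = Q1**3 - 3*Q1*Q2 + 2*Q3 + 2*C3
    assert m3 == P3(Q1, -Q2, 2*(Q3 + C3))
    # r = 4: pushing down the expansion of m_4 term by term ...
    m4 = Q1**4 - 6*Q1**2*Q2 + 8*Q1*(Q3 + C3) + 3*Q2**2 - 6*(Q4 + C4)
    assert m4 == P4(Q1, -Q2, 2*(Q3 + C3), -6*(Q4 + C4))
```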
Let $S := \mathbb{P}^{2}$ and $\mathscr{L} := \mathscr{O}_{\mathbb{P}^{2}}(d).$ By Lemma \ref{prop:isom}, we know that $X$ is regularly embedded in $F$ with normal bundle \begin{equation}\label{eqn:principal} N_{X}F \cong \iota^{\ast}\mathscr{P}^{1}_{F/Y}\Bigl(\mathscr{O}_{\mathbb{P}^{2}}(d) \boxtimes \mathscr{O}_{Y}(1)\Bigr), \end{equation} and $[X] = c_3(N_{X}F).$ For a regular embedding $X \hookrightarrow Y$ we have $s(X,Y) = c(N_{X}Y)^{-1} \cap [X]$ by \cite[Section 4.2]{Ful}. Now, the embedding of $\Delta^{(n)}_{12\ldots n}$ in $F^{\times n}$ splits as $$\Delta^{(n)}_{12\ldots n} \hookrightarrow F \stackrel{\delta_{n}}{\hookrightarrow} F^{\times n},$$ where $\delta_n$ is the diagonal embedding. Hence \begin{eqnarray*} Q_{n} & = & \int_{Y} f_{\ast} (X_{1} \cdot \ldots \cdot X_{n})^{\Delta^{(n)}_{1\ldots n}} \\ & = & \int_{Y} f_{\ast} \left\{\prod_{i=1}^{n} c(p_{i}^{\ast}N_{X}F|\Delta^{(n)}_{1\ldots n}) \cap c(N_{\Delta^{(n)}_{1\ldots n} }F^{\times n})^{-1} \cap [\Delta^{(n)}_{1\ldots n} ]\right\}_{N-n} \\ & = & \int_{Y} f_{\ast} \left\{c(N_{X}F)^{n} c(N_{X}F)^{-1}c(N_{F}F^{\times n})^{-1} \cap [X] \right\}_{N-n} \\ & = & \int_{Y} f_{\ast} \left\{c(N_{X}F)^{n-1}c\left(T_{F/Y}^{\oplus(n-1)}\right)^{-1} \cap [X] \right\}_{N-n} \\ & = & \int_{Y} f_{\ast} \left\{c(N_{X}F)^{n-1}c(T_{F/Y})^{-(n-1)} \cap [X] \right\}_{N-n}. \end{eqnarray*} Let $l$ denote the class of a hyperplane on $\mathbb{P}^{2},$ and $H$ the class of a hyperplane on $Y = |\mathscr{L}| = \mathbb{P}^{N}.$ So $l^{3}=0$ and $H^{N+1} = 0.$ It is well-known that $c(T_{\mathbb{P}^{2}})^{-1} = 1-3l+6l^2.$ Hence the computation of $Q_{n}$ reduces to finding the coefficient of $H^{n}l^{2}$ in the polynomial \begin{equation}\label{eqn:poly} M_{n}(l,H,d) := \bigl(1 + H + (d-1)l \bigr)^{3(n-1)}(1-3l+6l^{2})^{n-1}\bigl(H+(d-1)l\bigr)^{3}. 
\end{equation} For this, we first extract the coefficient of $H^{n};$ this is a polynomial in $l$ and $d,$ from which we extract the coefficient of $l^{2}.$ We have: \[ \begin{dcases} \Bigl(1 + H+ (d-1)l \Bigr)^{3(n-1)} = \sum_{k=0}^{3n-3} {3n-3 \choose k}H^{k}\Bigl(1+(d-1)l\Bigr)^{3(n-1)-k}; \\ \Bigl(H+(d-1)l\Bigr)^{3} = H^{3} + 3H^{2}(d-1)l + 3H(d-1)^{2}l^{2}, \end{dcases} \] \noindent since $l^{3} = 0.$ Therefore, the coefficient of $H^{n}$ is easily shown to be \begin{displaymath} (1-3l+6l^{2})^{n-1}(1+(d-1)l)^{2n-2}(x_{n}l^{2} + y_{n}l + z_{n}), \end{displaymath} where \[ \begin{dcases} x_{n} := 3(d-1)^{2} {3n-3 \choose n-1} + 3(d-1)^{2} {3n-3 \choose n-2} + {3n-3 \choose n-3}(d-1)^{2}, \\ y_{n} := 3(d-1){3n-3 \choose n-2} + 2(d-1){3n-3 \choose n-3}, \\ z_{n} := {3n-3 \choose n-3}. \end{dcases} \] To find the coefficient of $l^{2}$ in this expression, expand \begin{displaymath} (1-3l+6l^{2})^{n-1} = \sum_{k=0}^{2} {n-1 \choose k}3^{k}l^{k}(2l-1)^{k}, \end{displaymath} with the convention that ${n \choose k} =0$ if $k > n.$ This is equal to \begin{displaymath} \alpha_{n} := 1 -3(n-1)l+\left(6(n-1)+9 {n-1 \choose 2}\right)l^{2}. \end{displaymath} On the other hand, we get \begin{displaymath} \beta_{n} := (1+(d-1)l)^{2n-2} = 1 + (2n-2)(d-1)l + {2n-2 \choose 2}(d-1)^{2}l^{2}. \end{displaymath} So we are looking for the coefficient of $l^{2}$ in the expression $\alpha_{n}\beta_{n}(x_{n}l^{2}+y_{n}l+z_{n}),$ which is \begin{eqnarray*} & & \left(6(n-1)+9{n-1 \choose 2}\right)z_{n} + {2n-2 \choose 2}(d-1)^{2}z_{n} + x_{n} \\ & - & 3(n-1)(2n-2)(d-1)z_{n} -3(n-1)y_{n} + (2n-2)(d-1)y_{n}. 
\end{eqnarray*} To conclude, we have the following theorem: \begin{theorem}\label{thm:equiv_diag} In the case of $\mathbb{P}^{2},$ the degree $Q_{n}$ of the equivalence of the small diagonal $\Delta_{12\ldots n}$ for the intersection product $X_{1} \cdot \ldots \cdot X_{n}$ is a quadratic polynomial in $d,$ namely \begin{equation}\label{eqn:equiv_diag} Q_{n} = f_{n}d^{2} + g_{n}d + h_{n}, \end{equation} where (after some simplifications): \[ \begin{dcases} f_{n} := 3{3n-3 \choose n-1} + 3{3n-3 \choose n-2}(2n-1) + n{3n-3 \choose n-3}(2n-1), \\ g_{n} := -2n{3n-3 \choose n-3}(5n-4) - 3{3n-3 \choose n-2}(7n-5) - 6{3n-3 \choose n-1}, \\ h_{n} := {3n-3 \choose n-3}\left(\frac{25}{2}n^{2}-\frac{29}{2}n+3\right) + 3{3n-3 \choose n-2}(5n-4) + 3{3n-3 \choose n-1}. \end{dcases} \] \end{theorem} \begin{table} \centering \begin{tabular}{|l||l|l|} \hline $n$ & $Q_{n}$ & $C_{n}$ \\ \hline 1 & $3d^2-6d+3$ & 0 \\ 2 & $18d^2-45d+27$ & 0 \\ 3 & $150d^2-444d+315$ & $-(30d^2-96d+72)$ \\ 4 & $1260d^2 -4140d + 3285$ & $-(420d^2 - 1425d + 1158)$\\ \hline \end{tabular} \caption{Equivalence and correction terms for $1 \leq n \leq 4.$} \label{table:equivalences} \end{table} \begin{remark}\label{rem:combinations} Above, we saw that the ``correction terms'' $C_{i}$ were linear combinations of terms which arose from the same polynomials $M_{n}(l,H,d),$ but extracting different coefficients. Of course, one can obtain closed formulas for these coefficients, proceeding the same way as above. For $1 \leq n \leq 4,$ the concrete expressions for $Q_{n}$ and $C_{n}$ are provided in Table \ref{table:equivalences}.
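Since extracting the coefficient of $H^{n}l^{2}$ from $M_{n}(l,H,d)$ is purely mechanical, the entries of Table \ref{table:equivalences} can be double-checked by brute force: multiply out $M_{n}$ in exact integer arithmetic with the truncation $l^{3} = 0$ and read off the relevant coefficient. A sketch in Python (function names are ours):

```python
from collections import defaultdict

def pmul(p, q):
    """Multiply polynomials in H, l, d (dicts (h, l, k) -> int), with l^3 = 0."""
    r = defaultdict(int)
    for (h1, l1, k1), c1 in p.items():
        for (h2, l2, k2), c2 in q.items():
            if l1 + l2 < 3:
                r[(h1 + h2, l1 + l2, k1 + k2)] += c1 * c2
    return dict(r)

def ppow(p, n):
    r = {(0, 0, 0): 1}
    for _ in range(n):
        r = pmul(r, p)
    return r

def Q_coeffs(n):
    """Coefficient of H^n l^2 in M_n(l, H, d), as a dict {power of d: coefficient}."""
    A = {(0, 0, 0): 1, (1, 0, 0): 1, (0, 1, 1): 1, (0, 1, 0): -1}  # 1 + H + (d-1)l
    B = {(0, 0, 0): 1, (0, 1, 0): -3, (0, 2, 0): 6}                # 1 - 3l + 6l^2
    C = {(1, 0, 0): 1, (0, 1, 1): 1, (0, 1, 0): -1}                # H + (d-1)l
    M = pmul(pmul(ppow(A, 3 * n - 3), ppow(B, n - 1)), ppow(C, 3))
    return {k: c for (h, l, k), c in M.items() if h == n and l == 2 and c}

# Q_1 = 3d^2 - 6d + 3 = 3(d-1)^2, the classical degree of the discriminant.
assert Q_coeffs(1) == {2: 3, 1: -6, 0: 3}
assert Q_coeffs(2) == {2: 18, 1: -45, 0: 27}
assert Q_coeffs(3) == {2: 150, 1: -444, 0: 315}
assert Q_coeffs(4) == {2: 1260, 1: -4140, 0: 3285}
```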
\end{remark} \section{On the residual term}\label{sec:residual} Recall that, up to a factor $(-1)^{i-1}(i-1)!,$ $a_{i}(S,\mathscr{L})$ was defined as the degree of the pushdown through $\gamma_1$ of $\iota_{\ast}B^{(i)}_{1\ldots i} \in A^{i}(S \times Y),$ where $\iota$ denotes the inclusion $X \hookrightarrow F.$ In the previous section, we treated the contribution from the small diagonal $\Delta^{(i)}_{12\ldots i},$ while neglecting the contribution from embedded components, i.e., distinguished varieties having support inside this diagonal. Thus, the remaining question, which we explore in this section, is how the embedded components (the ``residual'' locus) contribute to $a_i.$ Assume that $\mathscr{L}$ is $r$-very ample, so that there is no interference from, for instance, non-reduced curves (cf. Theorem \ref{thm:shape}). The multiplicative structure imposed by the lattice of polydiagonals applies to the embedded components as well, so it suffices to study the embedded components with support on the small diagonal $\Delta^{(r)}_{12\ldots r}.$ We wish to show that the components supported on the small diagonal contribute linearly in the four Chern numbers of $(S,\mathscr{L});$ this is achieved, with the exception of one conjectural result (Conjecture \ref{conj:dep}). The geometric interpretation of the contribution is neither immediate nor easy, but is discussed towards the end of the section for low values of $r.$ Let $\Delta^{(r)}_{X}$ be the small diagonal in $X^{\times r}$ and $\Delta^{(r)}_{F}$ the small diagonal in $F^{\times r}.$ The arguments themselves are purely technical in nature. We proceed as follows: Let $V_{r}$ denote the blowup of $F^{\times r}$ along the small diagonal $\Delta^{(r)}_{F},$ and let $D_{r}$ be the exceptional divisor.
We denote by $\widetilde{X_{i}}$ the strict transform of $X_{i}$ under the morphism $\pi_{r}: V_{r} \rightarrow F^{\times r}.$ Consider the subschemes $W_{r}$ and $W_{r}(X)$ of $V_{r}$ whose sheaves of ideals are \begin{eqnarray*} \mathscr{I}_{W_{r}} & := & \mathscr{I}_{D_{r}} \cdot \left(\mathscr{I}_{D_{r}} + \sum_{i=1}^{r} \mathscr{I}_{\widetilde{X_{i}}}\right) \\ \mathscr{I}_{W_{r}(X)} & := & \mathscr{I}_{\pi_{r}^{-1}(\Delta^{(r)}_{X})} \cdot \left(\mathscr{I}_{D_{r}} + \sum_{i=1}^{r} \mathscr{I}_{\widetilde{X_{i}}}\right). \end{eqnarray*} Then $W_{r}(X)$ is regularly embedded in $W_{r}$ with normal bundle \begin{equation}\label{eqn:normal_bundle} N_{W_{r}(X)}W_{r} \cong \eta_{r}^{\ast} N_{\Delta^{(r)}_{X}}\Delta^{(r)}_{F}, \end{equation} with $\eta_{r}$ the restriction of $\pi_{r}$ to the small diagonal of $X^{\times r}.$ We may consider the residual scheme $\textnormal{Res}_{r}$ of the divisor $D_{r}$ in $W_{r}.$ Then, according to \cite[Proposition 9.2]{Ful}, we have for all $m \geq 0,$ \begin{equation}\label{eqn:relate_segre} s(W_{r},V_{r})_{m} = s(D_{r},V_{r})_{m} + \mathscr{R}(r)_{m} \in A_{\ast}(D_{r}), \end{equation} where we have defined \begin{equation}\label{eqn:define_res} \mathscr{R}(r)_{m} := \sum_{j=0}^{N+2r-m}{N+2r-m \choose j}[-D_{r}]^{j}s(\textnormal{Res}_{r},V_{r})_{m+j}. 
\end{equation} It follows that the contribution to $X_{1} \cdot \ldots \cdot X_{r}$ from the small diagonal \textit{with} embedded components is \footnotesize \begin{eqnarray*} \eta_{r\ast} (X_{1} \cdot \ldots \cdot X_{r})^{W_{r}(X)} & = & \eta_{r\ast}\left\{\prod_{i=1}^{r} c(\eta_{r}^{\ast}N_{i}) \cap s(W_{r}(X),V_{r})\right\}_{N-r} \\ & = & \eta_{r\ast}\left\{\prod_{i=1}^{r} c(\eta_{r}^{\ast}N_{i})c(\eta_{r}^{\ast}N_{\Delta^{(r)}_{X}}\Delta^{(r)}_{F})^{-1} \cap s(W_{r},V_{r})\right\}_{N-r} \\ & = & (X_{1} \cdot \ldots \cdot X_{r})^{\Delta^{(r)}_{X}} + \eta_{r\ast} \sum_{m \geq 0} \left\{c\left(\eta_{r}^{\ast}N_{\Delta^{(r)}_{X}}\Delta^{(r)}_{F}\right)^{r-1} \cap \mathscr{R}(r)_{m}\right\}_{N-r} \end{eqnarray*} \normalsize where $N_{i}$ denotes the restriction to the small diagonal of the normal bundle of $X_{i}$ in $F^{\times r}.$ The last equality follows from Eq. (\ref{eqn:relate_segre}). The following theorem, due to Keel, expresses the Chow ring of a blow-up. Let $V$ be a variety and let $i: U \hookrightarrow V$ be a regularly embedded subvariety of codimension $d.$ Denote by $N$ the normal bundle of $U$ in $V.$ Let $\pi: \widetilde{V} \rightarrow V$ be the blow-up of $V$ along $U$ and denote by $\widetilde{U}$ the exceptional divisor. Define $g$ and $j$ by the commutative diagram: \[ \xymatrix { \widetilde{U} \ar^{j}[r] \ar_{g}[d] & \widetilde{V} \ar^{\pi}[d] \\ U \ar^{i}[r] & V } \] Let $P(t)$ be any polynomial whose constant term is $[U] \in A^{\ast}(V)$ and whose restriction to $A^{\ast}(U)$ is the Chern polynomial of the normal bundle $N,$ that is, \begin{equation}\label{eqn:chern_poly} i^{\ast}P(t) = t^{d} + t^{d-1}c_{1}(N) + \ldots + c_{d-1}(N)t + c_{d}(N). \end{equation} \begin{theorem}\label{thm:chow_blowup} \emph{(\cite{Keel}, Theorem 1 of Appendix.)} Suppose the map of bivariant rings $i^{\ast}: A^{\ast}(V) \rightarrow A^{\ast}(U)$ is surjective. 
Then $A^{\ast}(\widetilde{V})$ is isomorphic to \begin{equation}\label{eqn:chow_iso} \frac{A^{\ast}(V)[t]}{(P(t),t \cdot \textnormal{ker}(i^{\ast}))}. \end{equation} This isomorphism is induced by \begin{displaymath} \pi^{\ast}: A^{\ast}(V) \rightarrow A^{\ast}(\widetilde{V}) \end{displaymath} and by sending $-t$ to the class of the exceptional divisor. \end{theorem} We use this theorem to describe the Chow ring of the blow-up $V_{r}:$ \begin{corollary}\label{cor:chow_conf} The Chow ring of $V_{r}$ is \begin{equation}\label{eqn:conf_iso} A^{\ast}(V_{r}) = A^{\ast}(F^{\times r})[D_r]/I_{r}, \end{equation} where $I_{r}$ is the ideal generated by the following elements: \begin{enumerate} \item all $[D_r] \cdot (p_{i}^{\ast}\alpha - p_{j}^{\ast}\alpha)$ for $\alpha \in A^{\ast}(F^{\times r});$ \item $J_r \cdot [D_r],$ where $J_r$ is the kernel of the restriction map $\delta_{r}^{\ast}: A^{\ast}(F^{\times r}) \rightarrow A^{\ast}(\Delta^{(r)}_{F});$ \item $P_{r}(-[D_r]),$ where $P_{r}(t) := t^{2r-2} + \sum_{i=1}^{2r-2}\nu^{\ast}c_{i}(T_{S}^{\oplus (r-1)})t^{2r-2-i},$ $\nu$ being the projection from $F = S \times Y$ to the surface $S.$ \end{enumerate} \end{corollary} \begin{proof} This follows easily from Theorem \ref{thm:chow_blowup}, and using the fact that, in the Grothendieck ring, \begin{equation} N_{\Delta^{(r)}_{F}}F^{\times r} = T_{F/Y}^{\oplus (r-1)} = (\nu^{\ast}T_{S})^{\oplus (r-1)}. \end{equation} \end{proof} Next, we describe the Chow ring of the divisor $D_{r}:$ \begin{proposition}\label{prop:chow_div} For all $r \geq 2,$ the Chow ring $A^{\ast}(D_{r})$ is \begin{equation}\label{eqn:iso_div} A^{\ast}(D_{r}) = A^{\ast}(V_r)/K_r, \end{equation} where $K_r$ is the ideal generated by all $p_{i}^{\ast}\alpha - p_{j}^{\ast}\alpha$ for $i,j \in [r]$ and $\alpha \in A^{\ast}(F).$ \end{proposition} \begin{proof} This follows from \cite[Corollary 7b]{FM}. 
\end{proof} The residual scheme $\textnormal{Res}_{r}$ is a subscheme of $V_{r}$ whose sheaf of ideals is \begin{equation}\label{eqn:res_sheaf} \mathscr{I}^{Res}_{r} := \mathscr{I}_{D_{r}} + \sum_{i=1}^{r} \mathscr{I}_{\widetilde{X_{i}}}, \end{equation} i.e., it is the scheme-theoretic intersection $\textnormal{Res}_{r} = D_{r} \cap \bigcap_{i=1}^{r} \widetilde{X_{i}}.$ We introduce some notations: Let $L := c_{1}(\mathscr{L})$ and $K:= c_{1}(\mathscr{K}_{S}),$ which are classes in $A^{1}(S).$ The second Chern class of $S$ is denoted by $x \in A^{2}(S).$ Let $L,K$ and $x$ also denote their own pullbacks, through $\nu,$ to $F.$ Finally, let $H$ be the class of a hyperplane in $Y = \mathbb{P}^{N},$ and its pullback to $F.$ We consider $L,K,H$ to be weighted variables of degree 1, while $x$ is considered to have degree 2. \begin{conjecture}\label{conj:dep} The Segre class of $\textnormal{Res}_{r}$ in $V_{r},$ expressed in the Chow ring of $D_{r},$ is a polynomial in $L, K, H, [D_{r}]$ and $x.$ \end{conjecture} \begin{remark} By Proposition \ref{prop:isom}, $X$ is the zero scheme of a section $z'$ of the vector bundle $\mathscr{P}^{1}_{F/Y}(\widetilde{\mathscr{L}}),$ and is regularly embedded in $F$ of codimension 3 (and the Chern class of its normal bundle is a polynomial in $L,K,H$ and $x$). It follows that $X_{i}$ is the zero scheme of $p_{i}^{\ast}z',$ a section of $p_{i}^{\ast}\mathscr{P}^{1}_{F/Y}(\widetilde{\mathscr{L}}),$ and the strict transform $\widetilde{X_i}$ is the zero scheme of the induced section of \begin{displaymath} \pi^{\ast}p_i^{\ast}\mathscr{P}^{1}_{F/Y}(\widetilde{\mathscr{L}}) \otimes \mathscr{O}_{V_{r}}(-D_{r}). 
\end{displaymath} Hence, it seems plausible that the push-forward to $D_r$ of the Segre class of $\bigcap_{i=1}^{r} (\widetilde{X_i} \cap D_r)$ in $V_r$ is a function only of the pullbacks of $L,K,H,x$ through the projections $p_j,$ and $[D_r].$ Now, by Proposition \ref{prop:chow_div}, it follows that $p_i^{\ast}L = p_j^{\ast}L$ in $A^{\ast}(D_r),$ for all $i,j,$ and similarly for $K,H$ and $x.$ Hence, the push-forward of this Segre class to $D_r$ should be a polynomial in $L,K,H,x$ and $[D_r]$ only. \end{remark} The (push-forward of the) class $\eta_{r\ast}(X_{1} \cdot \ldots \cdot X_{r})^{W_{r}(X)}$ lives in $A^{\ast}(\Delta^{(r)}_{F}) \cong A^{\ast}(F),$ hence we make the following definition: \begin{definition}\label{def:res_contrib} For each $r \geq 2,$ define \begin{equation}\label{eqn:res_contrib} R_{r} := \sum_{m \geq 0}\int_{Y} f_{\ast} \eta_{r\ast} \left\{c(\eta_{r}^{\ast}N_{\Delta^{(r)}_{X}}\Delta^{(r)}_{F})^{r-1} \cap \mathscr{R}(r)_{m}\right\}_{N-r}. \end{equation} \hfill $\blacksquare$ \end{definition} This is the (degree of) the contribution from embedded components supported on $\Delta^{(r)}_{X}.$ The following statement is then a consequence of Conjecture \ref{conj:dep}: \begin{conj-prop}\label{conj-prop:linearity} There exists a linear polynomial $T^{\text Res}_{r}$ in four variables and with integer coefficients, such that \begin{equation}\label{eqn:polynomiality} R_{r} = T^{\text Res}_{r}(\partial,k,s,x), \end{equation} where $\partial := \mathscr{L}^{2}, k := \mathscr{LK}_{S},s:=\mathscr{K}_{S}^{2},x:=c_{2}(S)$ are the four Chern numbers of the pair $(S,\mathscr{L}).$ \end{conj-prop} \begin{proof} We are interested in the $(N-r)$-dimensional part of the class \footnotesize \begin{eqnarray*} \sigma_{r} & := & \eta_{r\ast} \sum_{m \geq 0}c\left(\eta_{r}^{\ast}N_{\Delta^{(r)}_{X}}\Delta^{(r)}_{F}\right)^{r-1} \cap \mathscr{R}(r)_{m} = \sum_{m \geq 0} c\left(N_{\Delta^{(r)}_{X}}\Delta^{(r)}_{F}\right)^{r-1} \cap \eta_{r\ast}\mathscr{R}(r)_{m} \\ & 
= & \sum_{m \geq 0}\sum_{j=0}^{N+2r-m} (-1)^{j} {N+2r-m \choose j} c\left(N_{\Delta^{(r)}_{X}}\Delta^{(r)}_{F}\right)^{r-1} \cap \eta_{r\ast}\left([D_{r}]^{j} \cdot s_{m+j} \right) \in A_{\ast}(\Delta^{(r)}_{F}), \end{eqnarray*} \normalsize where $s_{m+j}$ is the component of dimension $m+j$ of the Segre class $s(\textnormal{Res}_{r},V_{r}).$ By Conjecture \ref{conj:dep} and the fact that the exceptional divisor $D_{r}$ satisfies a polynomial equation involving $L,K,H$ and $x$ (cf. Corollary \ref{cor:chow_conf}), $\sigma_{r}$ is a polynomial in these four classes. Since the dimension of $\Delta^{(r)}_{F}$ is $N+2,$ the part of the class $\sigma_{r}$ of dimension $N-r$ is the part of this polynomial of total degree $N+2 - (N-r) = r+2.$ Pushing down to $Y$ and multiplying with $H^{N-r}$ kills everything but the part involving $H^{r},$ and we are left with a quadratic polynomial in $L,K$ and $x$ when $x$ is considered to have degree 2, i.e., a linear polynomial in $\partial,k,s$ and $x$ (when $x$ is considered to have degree 1). \end{proof} To summarize our results at this point, we have the following decomposition of $a_i(S,\mathscr{L}):$ \begin{equation} a_{i}(S,\mathscr{L}) = (-1)^{i-1}(i-1)!(Q_{i} + C_{i} + R_{i}). \end{equation} The equivalence term $Q_i$ can be computed and given a closed formula, and is a linear combination (with coefficients which are integers) of the Chern numbers of $(S,\mathscr{L}).$ The correction term $C_i$ can a priori also be computed and shown to have the same behaviour (and for $1 \leq i \leq 4,$ this is a theorem by the previous section). The residual term $R_i$ is a linear combination of the four Chern numbers $\partial,k,s,x,$ provided Conjecture \ref{conj:dep} holds. Thus, we have to a large extent identified the $a_i(S,\mathscr{L}).$ Note that, as proposed in \cite[Theorem 2.1]{Qvi}, one can also use the G\"ottsche--Yau--Zaslow formula (cf. 
Conjecture \ref{conj:gen_got}) together with some power series manipulations to show that each $a_{i}(S,\mathscr{L})$ must have the desired behaviour, namely that for each $i \geq 1,$ the integer $a_{i}(S,\mathscr{L})$ defined above is the value taken on $(\partial,k,s,x)$ by a universal, linear polynomial in four variables with integer coefficients. It is convenient to denote these polynomials by $a_{i}(\partial,k,s,x).$ Hence, there exist sequences of integers $\{D_{i}\}_{i \geq 1}, \{E_{i}\}_{i \geq 1}, \{F_{i}\}_{i \geq 1}$ and $\{G_{i}\}_{i \geq 1}$ such that \begin{equation} a_{i}(\partial,k,s,x) = (-1)^{i-1}(i-1)!(D_{i}\partial + E_{i} k +F_{i}s + G_{i}x). \end{equation} One can even compute the polynomials $a_{i}(\partial,k,s,x)$ from the G\"ottsche--Yau--Zaslow formula, although this depends on knowing the coefficients of the power series $B_{1}(q)$ and $B_{2}(q),$ which are still not well understood. G\"ottsche computed these power series up to degree 28, cf. \cite[Remark 2.5]{Got}, a computation which depends on the fact (recently proven by Kleiman--Shende in \cite{KS}) that plane $r$-nodal curves of degree $d$ are enumerated by universal polynomials when $r \leq 2d-2.$ The algorithm for extracting the $a_i$ from the G\"ottsche--Yau--Zaslow formula is presented in \cite[Algorithm 2.1]{Qvi}; its output is collected in Table \ref{table:polys} for $1 \leq i \leq 15.$ The polynomials $\widetilde{a}_{i}(\partial, k, s, x)$ are obtained by dividing $a_{i}(\partial, k, s, x)$ by $(i-1)!.$ \begin{table} \centering \begin{sideways} \scriptsize \begin{tabular}{|l|l|} \hline $a_{1} =$ & $3\partial + 2k + x$ \\ \hline $a_{2} =$ & -- 42$\partial$ -- 39k -- 6s -- 7x\\ \hline $a_{3} =$ & 1380$\partial$ + 1576k + 376s + 138x\\ \hline $a_{4} =$ & --72360$\partial$ --95670k -- 28842s --3888x \\ \hline $a_{5} =$ & 5225472$\partial$ + 7725168k + 2723400s + 84384x\\ \hline $a_{6} =$ & -- 481239360$\partial$ -- 778065120k -- 308078520s + 7918560x\\ \hline $a_{7} =$ & 
53917151040$\partial$ + 93895251840k + 40747613760s -- 2465471520x\\ \hline $a_{8} =$ & -- 7118400139200$\partial$ -- 13206119880240k -- 6179605765200s + 516524964480x \\ \hline $a_{9} =$ & 1082298739737600$\partial$ + 2121324101971200k + 1057994510106240s -- 105531591674880x \\ \hline $a_{10}=$ & -- 186244876934645760$\partial$ -- 383178257123397120k -- 201938068481143680s + 22522077486397440x\\ \hline $a_{11}=$ & 35785074342095769600$\partial$ + 76882882686451430400k + 42529950621208512000s -- 5120189378609356800x\\ \hline $a_{12}=$ & -- 7593954156671416934400$\partial$ -- 16965814444711292160000k -- 9799242960045675628800s + 1246637955659688345600x \\ \hline $a_{13}=$ & 1764002599954269954048000$\partial$ + 4083791314361072077209600k + 2452287375661994231961600s -- 325131495890223904358400x\\ \hline $a_{14}=$ & --445196702136181894778880000$\partial$ -- 1064857909823340069685248000k -- 662444750461765046378803200s + 90666752530924449021542400x \\ \hline $a_{15}=$ & 121304301227469541054089216000$\partial$ + 299017798634897453079185817600k + 192137539658526071385289113600s -- 26963216698297962471175987200x \\ \hline \hline $\widetilde{a}_{1} =$ & 3$\partial$ + 2k + x \\ \hline $\widetilde{a}_{2} =$ & --42$\partial$ -- 39k -- 6s -- 7x\\ \hline $\widetilde{a}_{3} =$ & 690$\partial$ + 788k + 188s + 69x\\ \hline $\widetilde{a}_{4} =$ & --12060$\partial$ -- 15945k -- 4807s -- 648x \\ \hline $\widetilde{a}_{5} =$ & 217728$\partial$ + 321882k + 113475s + 3516x\\ \hline $\widetilde{a}_{6} =$ & -- 4010328$\partial$ -- 6483876k -- 2567321s + 65988x\\ \hline $\widetilde{a}_{7} =$ & 74884932$\partial$ + 130410072k + 56593908s -- 3424266x\\ \hline $\widetilde{a}_{8} =$ & --1412380980$\partial$ -- 2620261881k -- 1226112255s + 102485112x \\ \hline $\widetilde{a}_{9} =$ & 26842726680$\partial$ + 52612204910k + 26239943207s -- 2617350984x \\ \hline $\widetilde{a}_{10}=$ & -- 513240952752$\partial$ -- 1055936555124k -- 556487181661s + 62064807888x\\ \hline $\widetilde{a}_{11}=$ & 
9861407170992$\partial$ + 21186861410508k + 11720114258490s -- 1410986931936x\\ \hline $\widetilde{a}_{12}=$ & --190244562607008$\partial$ -- 425029422316200k -- 245491696730341s + 31230909182592x \\ \hline $\widetilde{a}_{13}=$ & 3682665360521280$\partial$ + 8525631885908256k + 5119580760611226s -- 678769122880224x \\ \hline $\widetilde{a}_{14}=$ & --71494333556133600$\partial$ -- 171005998538392560k -- 106382292871378404s + 14560213534363728x\\ \hline $\widetilde{a}_{15}=$ & 1391450779290676680$\partial$ + 3429957097334083248k + 2203960837196658328s -- 309288199242633956x\\ \hline \hline \end{tabular} \end{sideways} \caption{The polynomials $a_{i}(\partial,k,s,x).$} \label{table:polys} \end{table} \normalsize Now, inverting the argument, both $B_{1}(q)$ and $B_{2}(q)$ can be deduced from the $a_i.$ Applying the G\"ottsche--Yau--Zaslow formula for an algebraic surface $S$ with $\chi(\mathscr{O}_{S})=0$ (and therefore with $x=-s$), and with $\mathscr{L}$ trivial, we get \begin{equation} \sum_{r \geq 0} Z_{r}(0,0,s,-s)(DG_{2}(\tau))^{r} = B_{1}(q)^{s}. \end{equation} Assume $B_{1}(q) = \sum_{r=0}^{\infty}b^{(1)}_{r}q^{r},$ and $\log B_{1}(q) = \sum_{r=1}^{\infty} c^{(1)}_{r}q^{r}.$ Then, by the definition of Bell polynomials (cf. Eq. (\ref{eqn:formal_id})), \begin{equation} b^{(1)}_{r} = \frac{P_{r}(1!c_{1}^{(1)},\ldots, r!c_{r}^{(1)})}{r!}, \end{equation} so the $b^{(1)}_{r}$ can be deduced from the $c^{(1)}_{r}.$ Let $y_{r}(n)$ denote the coefficient of $q^{n}$ in $(DG_{2}(\tau))^{r}.$ Writing \begin{displaymath} a_{i} = (-1)^{i-1}(i-1)!(D_{i}\partial + E_{i}k + F_{i}s + G_{i}x) = (-1)^{i-1}(i-1)!(F_{i}-G_{i})s, \end{displaymath} we get the equality \begin{equation} \sum_{r=1}^{\infty} \frac{(-1)^{r-1}(F_{r}-G_{r})}{r} \sum_{n=1}^{\infty} y_{r}(n)q^{n} = \sum_{n=1}^{\infty} c^{(1)}_{n}q^{n}, \end{equation} hence \begin{equation} c^{(1)}_{n} = \sum_{r= 1}^{\infty} y_{r}(n)\frac{(-1)^{r-1}(F_r-G_r)}{r}. 
\end{equation} Thus, $B_{1}(q)$ can be deduced from the $a_{i},$ and a similar argument holds for $B_{2}(q).$ This motivates a further study of the $a_i;$ in particular, we include what seems to be an interesting observation. Recall that for each $n \geq 1,$ we can write \begin{equation} a_{n}(\partial, k, s, x) = (-1)^{n-1}(n-1)!\left(D_{n}\partial + E_{n}k + F_{n}s + G_{n}x\right) \end{equation} for integers $D_{n}, E_{n}, F_{n}, G_{n}.$ Define sequences $D := \{D_{n+1}/D_{n}\}_{n \geq 1},$ $E := \{E_{n+1}/E_{n}\}_{n \geq 1},$ etc. The first terms of these sequences are collected in Table \ref{table:quotients}. In light of these values, we propose the following conjecture: \begin{conjecture}\label{conj:division} The four sequences $D,E,F$ and $G$ defined above are convergent. \end{conjecture} Provided convergence can be proved, it would be interesting to at least know whether all four sequences converge towards the same number (which, it would seem, is approximately equal to 20, at least for $D,E$ and $F$). 
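As a sanity check on the data behind this conjecture, the first rows of Table \ref{table:quotients} can be recomputed directly from the absolute values of the coefficients of $\widetilde{a}_{1},\ldots,\widetilde{a}_{5}$ in Table \ref{table:polys}; a short Python sketch:

```python
# |coefficients| (D_n, E_n, F_n, G_n) of the polynomials ~a_1, ..., ~a_5,
# transcribed from the table of polynomials
ta = [
    (3, 2, 0, 1),
    (42, 39, 6, 7),
    (690, 788, 188, 69),
    (12060, 15945, 4807, 648),
    (217728, 321882, 113475, 3516),
]

ratios = [
    tuple(round(ta[n + 1][j] / ta[n][j], 2) if ta[n][j] else None
          for j in range(4))
    for n in range(4)
]

# first four rows of the table of quotients (F_2/F_1 is undefined, F_1 = 0)
assert ratios[0] == (14.0, 19.5, None, 7.0)
assert ratios[1] == (16.43, 20.21, 31.33, 9.86)
assert ratios[2] == (17.48, 20.23, 25.57, 9.39)
assert ratios[3] == (18.05, 20.19, 23.61, 5.43)
```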
\begin{table} \centering \begin{tabular}{|l||l|l|l|l|} \hline $n$ & $D_{n+1}/D_{n}$ & $E_{n+1}/E_{n}$ & $F_{n+1}/F_{n}$ & $G_{n+1}/G_{n}$ \\ \hline\hline 1 & 14 & 19.5 & --- & 7 \\ \hline 2 & 16.43 & 20.21 & 31.33 & 9.86 \\ \hline 3 & 17.48 & 20.23 & 25.57 & 9.39 \\ \hline 4 & 18.05 & 20.19 & 23.61 & 5.43 \\ \hline 5 & 18.42 & 20.14 & 22.62 & 18.77 \\ \hline 6 & 18.67 & 20.11 & 22.04 & 51.89 \\ \hline 7 & 18.86 & 20.09 & 21.67 & 29.93 \\ \hline 8 & 19.01 & 20.08 & 21.40 & 25.54 \\ \hline 9 & 19.12 & 20.07 & 21.21 & 23.71 \\ \hline 10 & 19.21 & 20.06 & 21.06 & 22.73 \\ \hline 11 & 19.29 & 20.06 & 20.95 & 22.13 \\ \hline 12 & 19.36 & 20.06 & 20.85 & 21.73 \\ \hline 13 & 19.41 & 20.06 & 20.78 & 21.45 \\ \hline 14 & 19.46 & 20.06 & 20.72 & 21.24 \\ \hline \end{tabular} \caption{Sequences $D_{n+1}/D_{n}, E_{n+1}/E_{n}, F_{n+1}/F_{n}, G_{n+1}/G_{n}.$} \label{table:quotients} \end{table} \vspace{5mm} We now relate the polynomials $a_i$ to Kazarian's Thom polynomials, studied in \cite{Kaz}. In \cite{Kaz}, Kazarian studies, in a topological setting, Thom polynomials for multisingularities of a map of manifolds $f: M \rightarrow N.$ In particular, he considers the situation where $f$ is the map from $X,$ the critical locus inside $F = S \times |\mathscr{L}|,$ to $Y = |\mathscr{L}|.$ For each type of multisingularity $\underline{\alpha}$ of small codimension, he introduces and computes an associated integral, linear polynomial in the four Chern numbers of $(S,\mathscr{L}),$ which he denotes by $S_{\underline{\alpha}}.$ \begin{theorem}\label{thm:gen_shape} \emph{(\cite{Kaz}, Theorem 10.1.)} For each type $\underline{\alpha} = (\alpha_{1},\ldots,\alpha_{r})$ of multisingularity, the number of curves on $S$ lying in a sufficiently generic linear system $|\mathscr{L}|$ and passing through $N - \textnormal{codim }\underline{\alpha}$ points in general position (where $N$ is the dimension of $|\mathscr{L}|$) is given by \begin{equation}\label{eqn:gen_shape} 
N_{\underline{\alpha}}(S,\mathscr{L}) = \frac{1}{\# \textnormal{Aut}(\underline{\alpha})} \sum_{J_{1} \sqcup \ldots \sqcup J_{l} = [r]} \prod_{i=1}^{l} S_{\underline{\alpha}_{J_i}}. \end{equation} \end{theorem} In particular, we recover the expression of node polynomials as Bell polynomials. Indeed, Theorem \ref{thm:gen_shape} implies that \begin{eqnarray*} N_{r}(S,\mathscr{L}) & = & \frac{1}{\# \textnormal{Aut}(A_{1}^{r})} \sum_{J_{1} \sqcup \ldots \sqcup J_{l} = [r]} \prod_{i=1}^{l} S_{A_{1}^{|J_i|}} \\ & = & \frac{1}{r!}\sum_{l=1}^{r} \sum_{j_{1} + \ldots + j_{r-l+1} = r} e_{j_{1},\ldots, j_{r-l+1}} \prod_{i=1}^{r-l+1} S_{A_{1}^{j_{i}}}, \end{eqnarray*} with $e_{j_{1},\ldots, j_{r-l+1}}$ the number of ways to partition a set of $r$ elements into $l$ blocks of which $j_1$ have 1 element, $j_2$ have 2 elements, etc. But this is exactly the definition of Bell polynomials, so we get \begin{equation} N_{r}(S,\mathscr{L}) = \frac{1}{r!}P_{r}(S_{A_{1}}, \ldots, S_{A_{1}^{r}}). \end{equation} Thus, Kazarian's polynomial $S_{A_{1}^{i}}$ corresponds to the polynomial $a_{i}$ of Kleiman--Piene, introduced in \cite{KP1}. We defined $a_{i}$ as the degree of the pushdown to $Y$ of the contribution to $X_{1} \cdot \ldots \cdot X_{i}$ coming from all distinguished varieties with support in the small diagonal $\Delta^{(i)}_{X}.$ We will now summarize the geometric interpretation of the polynomials $a_{i}.$ In \cite{LiTz}, Li and Tzeng prove algebraically the existence of enumerative polynomials for curves with singularity type $\underline{\alpha}.$ However, the form promised by Theorem \ref{thm:gen_shape} is not, a priori, clear from the point of view of algebraic geometry and intersection theory. For the sake of the discussion, we'll assume that such a form is valid in the algebro-geometric setting. 
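As a concrete check of this Bell-polynomial shape, the classical counts of $r$-nodal plane quartics through $14-r$ general points ($27$, $225$, $675$ and $666$ for $r=1,\ldots,4$) can be recovered from the polynomials $a_{i}$ of Table \ref{table:polys}, specialised to $(\mathbb{P}^{2},\mathscr{O}_{\mathbb{P}^{2}}(4)),$ i.e., $(\partial,k,s,x) = (d^{2},-3d,9,3)$ with $d=4.$ A Python sketch, using the recurrence $P_{n+1} = \sum_{k=0}^{n}{n \choose k}P_{n-k}\,x_{k+1}$ for complete Bell polynomials:

```python
from math import comb, factorial

def complete_bell(xs):
    """P_0, ..., P_r evaluated at (x_1, ..., x_r)."""
    P = [1]
    for n in range(len(xs)):
        P.append(sum(comb(n, k) * P[n - k] * xs[k] for k in range(n + 1)))
    return P

# a_1, ..., a_4 as linear forms in (partial, k, s, x), from the table above
A = [
    (3, 2, 0, 1),
    (-42, -39, -6, -7),
    (1380, 1576, 376, 138),
    (-72360, -95670, -28842, -3888),
]

d = 4                               # plane quartics
chern = (d * d, -3 * d, 9, 3)       # (L^2, L.K, K^2, c_2) for (P^2, O(d))
a = [sum(c * v for c, v in zip(row, chern)) for row in A]

P = complete_bell(a)
N_r = [P[r] // factorial(r) for r in range(1, 5)]
assert N_r == [27, 225, 675, 666]   # classical counts of r-nodal quartics
```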
Recall that $f: X \rightarrow Y$ is the composition of the embedding $\iota: X \hookrightarrow F$ and the projection $F = S \times Y \rightarrow Y.$ If $\mathscr{L}$ is sufficiently ample on $S,$ we conjecture that \begin{equation}\label{eqn:expr_nodal} N_{r}(S,\mathscr{L}) = \frac{1}{r!} \int_{Y} f_{\ast} m_{r} - \sum_{\underline{\alpha} \in \Gamma_{r}^{\circ}} N_{\underline{\alpha}}(S,\mathscr{L}), \end{equation} where $m_{r}$ is the $r$-point cycle class of $f$ and $\Gamma_{r}^{\circ}$ is the set of all multisingularity types of codimension $r,$ $A_{1}^{r}$ excepted. Indeed, $m_{r}$ enumerates the $r$-fold points of $f,$ which includes curves with other codimension $r$ multisingularities than $r$ nodes. Using Remark \ref{rem:multiple_bell}, we can rewrite this conjectural equality as \begin{displaymath} N_{r}(S,\mathscr{L}) = \frac{1}{r!}P_{r}\left(Q_{1} + C_{1}, \ldots, (-1)^{r-1}(r-1)!(Q_{r} + C_{r})\right) - \sum_{\substack{\underline{\alpha} \in \Gamma_{r}^{\circ} \\ J_{1} \sqcup \ldots \sqcup J_{l} = [l(\alpha)]}} \frac{\prod_{i=1}^{l} S_{\underline{\alpha}_{J_i}}}{\# \textnormal{Aut}(\underline{\alpha})}, \end{displaymath} at least for $r \leq 4.$ Since this should be equal to $\frac{1}{r!}P_{r}(a_{1},\ldots,a_{r}),$ it follows, by comparing the linear terms on each side, that \begin{equation} \boxed{a_{i} = (-1)^{i-1}(i-1)!(Q_{i} + C_{i}) - \sum_{\underline{\alpha} \in \Gamma_{i}^{\circ}}\frac{i!}{\# \textnormal{Aut}(\underline{\alpha})}S_{\underline{\alpha}}.} \end{equation} We have used the convention $C_1 = C_2 = 0.$ For completeness, we include Kazarian's polynomials $S_{\underline{\alpha}}$ for all $\underline{\alpha}$ with codimension $\leq 4$ in Table \ref{table:kazarian}. 
\begin{table} \centering \begin{tabular}{|l||l|l|} \hline & $\underline{\alpha}$ & $S_{\underline{\alpha}}$ \\ \hline $\textnormal{cod}(\underline{\alpha}) = 1$ & $A_{1}$ & $3 \partial + 2k + x$ \\ \hline $\textnormal{cod}(\underline{\alpha}) = 2$ & $A_{2}$ & $12 \partial + 12k + 2s + 2x$ \\ & $A_{1}^2$ & $ -42 \partial - 39 k - 6 s - 7 x$ \\ \hline $\textnormal{cod}(\underline{\alpha}) = 3$ & $A_3$ & $50 \partial + 64 k + 17 s + 5 x$ \\ & $A_{1}A_{2}$ & $-240 \partial - 288 k - 72 s - 24 x$ \\ & $A_{1}^{3}$ & $1380 \partial + 1576 k + 376 s + 138 x$ \\ \hline $\textnormal{cod}(\underline{\alpha}) = 4$ & $A_4$ & $180 \partial + 280 k + 100 s$ \\ & $D_4$ & $15\partial + 20 k + 5 s + 5 x$ \\ & $A_{1}A_{3}$ & $-1260\partial - 1820 k - 596 s - 60 x$ \\ & $A_{2}^{2}$ & $-1260 \partial - 1800 k - 588 s - 48 x$ \\ & $A_{1}^{2}A_{2}$ & $9000 \partial + 12360 k + 3864 s + 456 x$\\ & $A_{1}^{4}$ & $-72360 \partial - 95670 k - 28842 s - 3888 x$\\ \hline \end{tabular} \caption{The polynomials $S_{\alpha}$ for $\textnormal{codim}(\alpha) \leq 4.$} \label{table:kazarian} \end{table} We therefore see that $a_{i}$ accumulates diverse ``corrections.'' The term $$(-1)^{i-1}(i-1)!(Q_{i}+C_{i})$$ handles the contribution of the small diagonal to the intersection product $X_{1} \cdot \ldots \cdot X_{i},$ while the remaining term handles curves with higher singularities appearing in the correct codimension $i.$ \begin{example}\label{ex:enumerations} For instance, we have \begin{eqnarray*} a_{2} & = & -(Q_{2}+2S_{A_{2}}) \\ & = & -42\partial - 39k - 6s - 7x, \\ a_{3} & = & 2(Q_{3}+C_{3}) - 6(S_{A_{1}A_{2}}+S_{A_{3}}) \\ & = & 1380\partial + 1576k + 376s + 138x; \\ a_{4} & = & -6(Q_4+C_4) - 24(S_{A_{1}A_{3}}+1/2S_{A_{1}^{2}A_{2}}+1/2S_{A_{2}^{2}}+S_{A_{4}}+S_{D_{4}}) \\ & = & -72360\partial - 95670k - 28842s - 3888x, \end{eqnarray*} where we have used the numerical expressions for the $S_{\underline{\alpha}}$ provided in \cite{Kaz} and reproduced in Table \ref{table:kazarian}. 
Thus, the conjectural equality presented in Eq. (\ref{eqn:expr_nodal}) is true up to at least $r=4.$ Terms such as $S_{A_{1}A_{2}}$ also have concrete interpretations. Assume one wants to compute the number of curves in $|\mathscr{L}|$ having one node and one cusp, and passing through $N-3$ points in general position on $S.$ The configuration space of choice for this computation is $F^{\times 2}.$ Let $C \subset X$ denote the locus of curves with a marked singularity which is a cusp \textit{or worse}. We are, a priori, interested in the intersection product $p_{1}^{\ast}[C] \cdot p_{2}^{\ast}[X],$ but there is an excess contribution from the diagonal $\Delta_{C} \cong C,$ as well as an embedded component related to tacnodal curves. In the case of $(\mathbb{P}^{2}, \mathscr{O}_{\mathbb{P}^{2}}(d)),$ we can compute explicitly the excess contribution, using results from \cite{Alu2}. Indeed, according to \cite[Lemma 1.4]{Alu2}, we have \begin{equation} c(N_{C}X) = 1 + 2(d-3)l + 2H, \end{equation} where $l$ denotes the class of a hyperplane in $\mathbb{P}^{2}$ and $H$ denotes the class of a hyperplane in the $\mathbb{P}^{N}$ of curves of degree $d.$ So we get \begin{eqnarray*} c(N_{X}F) & = & (1+(d-1)l + H)^{3} \\ c(N_{C}F) & = & (1 + 2(d-3)l + 2H)(1+(d-1)l + H)^{3}. \end{eqnarray*} Hence, since $c(N_{F}F^{\times 2})^{-1} = 1-3l+6l^2,$ the equivalence $E_{A_{1}A_{2}}$ of $\Delta_{C}$ for $p_{1}^{\ast}[C] \cdot p_{2}^{\ast}[X]$ is the coefficient of $l^{2}H^{3}$ in \begin{equation} (1+(d-1)l + H)^{3} \cdot (1-3l+6l^2) \cap (2(d-3)l+2H)((d-1)l + H)^{3}. \end{equation} A quick computation, using for instance \verb+Maple+, shows that this is equal to $60d^{2}-192d+144.$ Since $S_{A_{1}A_{2}} = -240d^{2}+864d-720$ and $S_{A_{3}} = 50d^{2}-192d+168,$ we see that \begin{equation} S_{A_{1}A_{2}} = -3(1/2E_{A_{1}A_{2}}+S_{A_{3}}), \end{equation} and the number of curves with a cusp and a node is given by \begin{equation} N_{A_{1}A_{2}} = S_{A_{1}}S_{A_{2}}+S_{A_{1}A_{2}}. 
\end{equation} \hfill $\blacksquare$ \end{example}
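The numerical identities of Example \ref{ex:enumerations}, together with the relation $S_{A_{1}A_{2}} = -3(\frac{1}{2}E_{A_{1}A_{2}}+S_{A_{3}}),$ are easy to verify by machine: every polynomial involved is quadratic in $d,$ so checking a handful of degrees suffices. A Python sketch, using the specialisation $\partial = d^{2}, k = -3d, s = 9, x = 3$ for $(\mathbb{P}^{2},\mathscr{O}_{\mathbb{P}^{2}}(d))$ and the data of Tables \ref{table:equivalences} and \ref{table:kazarian}:

```python
def on_P2(coeffs, d):
    """Evaluate a linear form in (partial, k, s, x) on (P^2, O(d))."""
    c_par, c_k, c_s, c_x = coeffs
    return c_par * d * d + c_k * (-3 * d) + c_s * 9 + c_x * 3

# Kazarian's polynomials S_alpha, transcribed from the table above
S = {
    "A2":     (12, 12, 2, 2),
    "A3":     (50, 64, 17, 5),
    "A4":     (180, 280, 100, 0),
    "D4":     (15, 20, 5, 5),
    "A1A2":   (-240, -288, -72, -24),
    "A1A3":   (-1260, -1820, -596, -60),
    "A2A2":   (-1260, -1800, -588, -48),
    "A1A1A2": (9000, 12360, 3864, 456),
}

for d in range(4, 9):
    Q2 = 18*d*d - 45*d + 27
    Q3, C3 = 150*d*d - 444*d + 315, -(30*d*d - 96*d + 72)
    Q4, C4 = 1260*d*d - 4140*d + 3285, -(420*d*d - 1425*d + 1158)
    a2 = on_P2((-42, -39, -6, -7), d)
    a3 = on_P2((1380, 1576, 376, 138), d)
    a4 = on_P2((-72360, -95670, -28842, -3888), d)
    assert a2 == -(Q2 + 2 * on_P2(S["A2"], d))
    assert a3 == 2 * (Q3 + C3) - 6 * (on_P2(S["A1A2"], d) + on_P2(S["A3"], d))
    assert a4 == -6 * (Q4 + C4) - 24 * (
        on_P2(S["A1A3"], d)
        + on_P2(S["A1A1A2"], d) // 2
        + on_P2(S["A2A2"], d) // 2
        + on_P2(S["A4"], d)
        + on_P2(S["D4"], d)
    )
    # the excess-intersection computation: E_{A1A2} = 60d^2 - 192d + 144
    E = 60*d*d - 192*d + 144
    assert on_P2(S["A1A2"], d) == -3 * (E // 2 + on_P2(S["A3"], d))
```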
\section{The classical proofs: a variational argument}\label{sec:classical} In this section, we follow \cite{watanabeBloch}. We consider interacting electrons on a discrete ring $\Pi={\mathbb Z}/L{\mathbb Z}$. The charge operator at site $x$ is given by $q_x = a_x^{*} a_x$, the charge in an interval is $Q_{[a,b]} = \sum_{x=a}^b q_x$ and the total charge is $Q_\Pi = \sum_{x\in \Pi} q_x$. The two essential assumptions on the system's Hamiltonian $H$ are first its {locality} $H = \sum_{x\in\Pi} h_{x,x+1}$, and second the {charge conservation} \begin{equation*} [H, Q_\Pi] = 0. \end{equation*} Of course, the nearest neighbour assumption is there for simplicity; the argument below is easily adapted to any finite range Hamiltonian. Together, the two assumptions imply that for any $x\in \Pi$, $h_{x,x+1}$ can be chosen to individually satisfy \begin{equation}\label{1d Charge conservation} [h_{x,x+1},Q_\Pi] = 0. \end{equation} Indeed, suppose that $H = \sum_{x\in \Pi}\tilde h_{x,x+1}$, then by charge conservation $H = \sum_{x\in \Pi} h_{x,x+1}$, where \begin{equation*} h_{x,x+1} = \frac{1}{2\pi}\int_0^{2\pi}\ep{\mathrm{i} \theta Q_\Pi} \tilde h_{x,x+1}\ep{-\mathrm{i} \theta Q_\Pi}d\theta \end{equation*} is still supported on $\{x,x+1\}$ and for which \begin{equation*} [ h_{x,x+1},Q_\Pi] = \frac{\mathrm{i}}{2\pi}\left.\ep{\mathrm{i} \theta Q_\Pi} \tilde h_{x,x+1}\ep{-\mathrm{i} \theta Q_\Pi}\right\vert_{\theta=0}^{\theta=2\pi} = 0 \end{equation*} since $Q_\Pi$ has integer spectrum. For later use, we write the term $h_{x,x+1}$ as a polynomial, $p_{x,x+1}$, in the local creation and annihilation operators, $h_{x,x+1} = p_{x,x+1}(a_x, a^*_x, a_{x+1}, a^*_{x+1})$. Condition (\ref{1d Charge conservation}) is equivalent to \begin{equation} \label{1.2} p_{x,x+1}(\ep{\mathrm{i} \varphi} a_x, \ep{-\mathrm{i} \varphi}a^*_x, \ep{\mathrm{i}\varphi} a_{x+1}, \ep{-\mathrm{i} \varphi} a^*_{x+1}) = p_{x,x+1}(a_x, a^*_x, a_{x+1}, a^*_{x+1}). 
\end{equation} The polynomial can depend on $x$, but all coefficients are assumed to be bounded by a constant $C$. Note that here and below, such constants are always independent of~$L$. Local gauge transformations are determined by a function $\theta: \Pi\to{\mathbb R}$, and implemented by the corresponding unitary \begin{equation*} U_\theta = \ep{\mathrm{i} ( \theta,q)},\qquad ( \theta,q)=\sum_{x\in \Pi}\theta_x q_x. \end{equation*} The gauge transformed $H_\theta = U_\theta^{*} H U_\theta$ satisfies the relation \begin{equation} \label{in_between} \left(f,\nabla H_\theta\right) = \sum_{x\in \Pi} f_x \mathrm{i} [H_\theta,q_x], \end{equation} for any function $f: \Pi\to{\mathbb R}$. We can write $H_\theta$ explicitly as \begin{equation} \label{eq:Hta} H_\theta = \sum_{x\in \Pi} p_{x,x+1} (\ep{\mathrm{i}\theta_x}a_x, \ep{-\mathrm{i}\theta_x}a_x^{*}, \ep{\mathrm{i}\theta_{x+1}}a_{x+1}, \ep{-\mathrm{i}\theta_{x+1}}a_{x+1}^{*}). \end{equation} By locality and charge conservation, the operator $\mathrm{i}[H,Q_{[a,b]}]$ is the difference of two currents, one along the edge $\langle a-1,a\rangle$, the other one along $\langle b,b+1\rangle$: \begin{equation}\label{1d currents} \mathrm{i}[H,Q_{[a,b]}] = J_{\langle a-1,a\rangle} - J_{\langle b,b+1\rangle}. \end{equation} Using (\ref{in_between}), we have $J_{\langle a-1,a \rangle} - J_{\langle b,b+1\rangle} = (\chi_{[a,b]},\nabla H_\theta)\vert_{\theta=0} = \partial_s H_{s \chi_{[a,b]}}\vert_{s=0}$, where $\chi_{[a,b]}$ is the characteristic function of the interval $[a,b]$. Comparing with (\ref{eq:Hta}) it follows that $$ J_{\langle x-1,x\rangle } = \left.\partial_s p_{x-1,x} (a_{x-1}, a_{x-1}^{*}, \ep{\mathrm{i} s}a_{x}, \ep{-\mathrm{i} s}a_{x}^{*}) \right\vert_{s=0}. 
$$ Accordingly, the current density is given by \begin{equation} \label{eq:j} j = \frac{1}{L}\sum_{x\in \Pi} J_{\langle x-1,x\rangle} = \frac{1}{L} \left.\partial_s \tilde{H}_s\right\vert _{s=0}, \end{equation} where $$ \tilde{H}_s = \sum_{x \in \Pi} p_{x-1,x} (a_{x-1}, a_{x-1}^{*}, \ep{\mathrm{i} s}a_{x}, \ep{-\mathrm{i} s}a_{x}^{*}). $$ In general, the `twist' Hamiltonian $\tilde{H}_s$ is not gauge equivalent to $H$. However, $$ \tilde{H}_{\frac{2 \pi}{L}} = H_{\varphi} $$ where $\varphi$ is the gauge transformation \begin{equation*} \varphi_x = 2\pi\frac{x}{L},\quad x\in \Pi. \end{equation*} Indeed, (\ref{1.2}) implies that $$ H_\varphi = \sum_{x \in \Pi} p_{x-1,x} (a_{x-1}, a_{x-1}^{*}, \ep{\mathrm{i} (\varphi_x - \varphi_{x-1})}a_{x}, \ep{-\mathrm{i} (\varphi_x - \varphi_{x-1})}a_{x}^{*}), $$ which is equal to $\tilde{H}_{\frac{2 \pi}{L}}$ for the particular choice of $\varphi$ (note that the twisting is correct, in particular, for the edge $\langle L, 1 \rangle $). Let now $\Omega$ be a (not necessarily unique) ground state of $H$. In this one-dimensional setting of a ring geometry, Bloch's theorem reads: There is a constant $C>0$ such that \begin{equation}\label{1dBloch} \vert \langle \Omega,j\Omega\rangle\vert \leq \frac{C}{L}. \end{equation} To prove the claim, we expand $\langle \Omega, (\tilde H_s - H) \Omega\rangle$ to first order. Using (\ref{eq:j}) we have, \begin{equation*} \langle \Omega, (\tilde H_s - H) \Omega\rangle = s L \langle \Omega,j\Omega\rangle + \langle \Omega,R_s\Omega\rangle, \end{equation*} where the rest term is given by \begin{equation*} R_s = \frac{s^2}{2} (\left.\partial^2_{s,s}\tilde{H}_s\right\vert_{s=t}) \end{equation*} for some $t\in(0,s)$. Hence there exists a constant $C$ such that \begin{equation}\label{RestEstimate} \Vert R_s \Vert \leq s^2 C L. 
\end{equation} With this, the energy of the gauge transformed state $\Omega_\varphi = U_{\varphi}\Omega$ can be compared with the ground state energy \begin{equation*} 0\leq \langle \Omega_\varphi, H\Omega_\varphi\rangle - \langle \Omega, H\Omega\rangle = \langle \Omega, (\tilde{H}_\frac{2 \pi}{L}-H)\Omega\rangle = 2\pi \langle \Omega, j\Omega\rangle + \langle \Omega, R_\frac{2 \pi}{L} \Omega\rangle. \end{equation*} Now, the argument can be repeated with $\varphi\to-\varphi$, yielding the inequality involving $-j$. Together, one concludes that \begin{equation*} -\langle \Omega, R_\frac{2 \pi}{L} \Omega\rangle \leq 2\pi \langle\Omega, j\Omega\rangle \leq \langle \Omega, R_\frac{-2 \pi}{L} \Omega\rangle, \end{equation*} which yields the claim~(\ref{1dBloch}) by~(\ref{RestEstimate}). Let us first point to the strength of the argument, namely the very limited assumptions made along the way. It is valid for systems having degenerate ground states. It does not make any assumptions about the spectral gap above the ground state energy. Another useful aspect of this variational argument built on a unitary is that it extends to thermal equilibrium states. Indeed, the free energy of a density matrix $\rho$ being \begin{equation*} F(\rho) = \mathrm{tr}(\rho H) - \beta^{-1} S(\rho), \end{equation*} the variation $\rho\mapsto \rho_\varphi=U_\varphi \rho U_\varphi^{*} $ leaves the entropy constant while the energy difference is given as above. Hence, if $\rho$ is an equilibrium state, namely a minimizer of $F$, we conclude that \begin{equation*} 0\leq F(\rho_{\pm\varphi}) - F(\rho) = \pm 2\pi \mathrm{tr}(\rho j) + \mathrm{tr}(\rho R_{\pm\frac{2 \pi}{L}}) \end{equation*} and the norm estimate~(\ref{RestEstimate}) again implies that $\vert \mathrm{tr}(\rho j)\vert \leq CL^{-1}$ for equilibrium states at finite temperature. On the other hand, let us consider a quasi one-dimensional ring of width $W$, imposing periodic boundary conditions in the transverse direction.
Replacing in the discussion above the site $x$ by the full slab $[x]$ of width $W$, we obtain that \begin{equation}\label{2dFail} \vert \langle \Omega,J_W\Omega\rangle \vert \leq C\frac{W}{L}, \end{equation} where $J_W$ is the current density per slab. In particular, this is too weak to prove the vanishing of the current across a full `cut' of a two-dimensional system where $W/L\to r>0$ as $L\to\infty$. This illustrates that the argument above does not extend to higher dimensions. We shall show in the following sections how this limitation can be overcome by assuming a spectral gap above the ground state energy, while still keeping a possible (finite) ground state degeneracy. \section{Absence of currents in gapped systems} For simplicity, we phrase the result in the geometric setting of a two-dimensional torus. Let $\Lambda = ({\mathbb Z}/L{\mathbb Z})^2$ be the discrete torus, with vertices denoted $x = (x_1,x_2)\in \Lambda$. It is equipped with a metric $d(\cdot,\cdot)$, which we take as the graph distance. The Hilbert space of the system is \begin{equation*} {\mathcal H} = {\mathcal F}_-({\mathbb C}^{\vert \Lambda \vert}), \end{equation*} where ${\mathcal F}_-({\mathbb C}^{\vert \Lambda\vert})$ is the antisymmetric Fock space of $\vert \Lambda\vert$ degrees of freedom. The observables are even elements of the CAR algebra, namely linear combinations of even monomials in the fermionic creation and annihilation operators. We denote by $Q_X = \sum_{x\in X} q_x$ the charge in a set $X$; in particular $Q_\Lambda$ is the total charge. Let $H = \sum_{X \subset \Lambda} h_X$ be a local Hamiltonian having finite range, by which we mean that \begin{equation*} h_X = 0\quad \text{whenever}\quad\mathrm{diam}(X)\geq R, \end{equation*} and which is charge conserving, \begin{equation*} [h_X,Q_\Lambda] =0 \end{equation*} for all $X$, see~(\ref{1d Charge conservation}).
The spectrum of $H$ is assumed to have a gap, namely \begin{equation}\label{gap} \sigma(H) \subset \Sigma \cup \Sigma_+ \end{equation} where $\mathrm{dist} (\Sigma , \Sigma_+) = \gamma>0$ uniformly in $L$. Let $P$ be the spectral projector \begin{equation*} P = \chi_{\Sigma}(H) \end{equation*} associated with $\Sigma$, which we assume to have a constant rank $p = \mathrm{rk}(P)$ for all $L$ large enough. The case we have in mind is a projection on low-lying states, and we call $P$ the ground state projection. Let $\Gamma = \{x: 0 \leq x_1\leq L/2\}$ be the half-torus, and let $\partial_\pm$ be strips of width $R$ around the boundary of $\Gamma$, i.e. $\partial_- = \{x : |x_1| \leq R \}$. We denote $ Q = Q_\Gamma $ the operator of charge in the half-torus. Charge conservation and the fact that $H$ has finite range imply that \begin{equation}\label{MB currents} \mathrm{i} [H,Q] = \mathrm{i}[H_-,Q] + \mathrm{i}[H_+, Q] = J_- - J_+, \end{equation} where $H_\pm = \sum_{X \subset \partial_\pm} h_X$ and hence $J_\pm$ is supported in $\partial_\pm$ respectively. Since the distance between $\partial_-$ and $\partial_+$ is proportional to $L$, it is possible and will be useful to consider in $H_\pm$ all terms supported in wider strips $S_\pm$ that are still a distance of order $L$ apart, but have themselves a width of order $L$. Of course, this does not modify the operators $J_\pm$ at all. We consider the total current through a fiducial line at $x_1=0$, namely we put $J = J_-$. By $\stackrel{\scriptscriptstyle L}{=}$ we denote an equality up to ${\mathcal O}(L^{-\infty})$ terms (in the topology of the norm in operator equations). \begin{thm} \label{Gap_Bloch} In the setting above, $$ \mathrm{tr}(P J) \stackrel{\scriptscriptstyle L}{=} 0. $$ In particular, the average current in the state $P$ vanishes in the large volume limit.
\end{thm} In the proof, we will use operators $K_\pm$, introduced for the present purpose in~\cite{MBIndex}, that encode charge fluctuations in the state $P$ on the boundaries $\partial_\pm$. Specifically, there exist operators $K_\pm$ such that \begin{enumerate} \item $\Vert K_\pm\Vert \leq C L$, \item $[K_\pm, A_X] = {\mathcal O}(\mathrm{dist}(X, \partial_\pm)^{-\infty})$, where $\Vert A_X\Vert = 1$ and $\mathrm{supp}(A_X) = X$ with $\vert X\vert \leq C$, \item $\overline Q := Q - (K_- - K_+)$ leaves the ground state space invariant, namely \begin{equation}\label{LCF} [\overline Q,P] = 0. \end{equation} \end{enumerate} Note that (i,ii) imply that $K_\pm$ are supported in $\partial_\pm$, up to tails having a fast decay. Explicitly, let $K$ be defined by \begin{equation} \label{HastingsGenerator} K:= \int_{-\infty}^{+\infty} W(t) \ep{\mathrm{i} t H} \mathrm{i}[H,Q] \ep{-\mathrm{i} t H} \,dt = \widehat W(-\mathrm{ad}_H)(\mathrm{i}\,\mathrm{ad}_H(Q)), \end{equation} with $W$ a real-valued, bounded, integrable function satisfying $W(t)={\mathcal O}(|t|^{-\infty})$ and $\widehat{W}(\omega)=-\frac{1}{\mathrm{i}\omega}$ for all $|\omega|\geq \gamma$, with $\gamma$ the spectral gap. These properties imply that $[K,P]=[Q,P]$. By the Lieb-Robinson bound, we conclude that the splitting~(\ref{MB currents}) lifts to $K=K_- + K_+$. \begin{proof}[Proof of Theorem~\ref{Gap_Bloch}] By the support properties of $K_\pm$ and $J$, we have $$ J = \mathrm{i}[H,K_-] + \mathrm{i}[H_-, \overline{Q}] + {\mathcal O}(L^{-\infty}), $$ where we used that both $[H_-,K_+]$ and $[(H-H_-),K_-]$ are ${\mathcal O}(L^{-\infty})$. With~(\ref{LCF}), we conclude that $$ PJP = \mathrm{i}[H,PK_-P] + \mathrm{i}[PH_-P, \overline{Q}] + P{\mathcal O}(L^{-\infty})P, $$ and hence $$ \mathrm{tr}(PJ) = {\mathcal O}(L^{-\infty}), $$ by cyclicity of the trace. \end{proof} Let us make a few remarks about the result. 
First of all, in the present higher dimensional setting, this shows that the total current across the fiducial line $\{x_1 = 0\}$ vanishes in the large volume limit and very fast indeed, namely \begin{equation*} \left\vert \langle J \rangle_P \right\vert \leq \frac{C_k}{L^k} \end{equation*} for all $k\in{\mathbb N}$, where $\langle J \rangle_P = p^{-1} \mathrm{tr}(PJ)$. This should be compared with~(\ref{2dFail}). The cost of this improvement is the additional spectral gap assumption, which we have seen to be a fundamental ingredient of the proof. Secondly, in the case of $p = \mathrm{rk}(P) >1$, the vanishing may in principle be due to cancellations within the ground state space. One additional assumption ensuring that this is not the case is that of local topological order in the ground state space, namely that \begin{equation*} PAP - \langle A\rangle_P P\stackrel{\scriptscriptstyle L}{=} 0 \end{equation*} for any local observable $A$. It implies in particular that both $PH_-P$ and $PK_-P$ are proportional to $P$, since both $H_-$ and $K_-$ are sums of local terms. Hence the second line of the proof immediately gives \begin{equation*} PJP \stackrel{\scriptscriptstyle L}{=} 0. \end{equation*} We also point out that the ${\mathcal O}(L^{-\infty})$ smallness of the current is truly a ground state property, so that the above result indeed does not extend to thermal equilibrium states. Operators $K_\pm$ are used in the above proof as a tool to zoom to one of the boundaries $\partial_\pm$. The technique was introduced in \cite{MBIndex} in the context of many-body index theory. In fact, Bloch's theorem is a consequence of this general theory, a connection we describe in the next section. \subsection{Connection to a many-body index} We briefly recall the definition of the many-body index introduced in~\cite{MBIndex} and generalized to the degenerate case in~\cite{PRBIndex}; we refer to~\cite{TOIndex} for a complete exposition.
The theory describes an index associated to a charge transported across a fiducial hyperplane. Let $U$ be a unitary on ${\mathcal H}$ that implements transport. We assume that $U$ is generated by a possibly time-dependent Hamiltonian $G(s)$ for $s\in[0,1]$, which need not be the generator of the physical time evolution. However, $G(s)$ is assumed to be local and charge conserving, namely \begin{equation*} G(s) = \sum_{X\subset\Lambda} g_X(s), \end{equation*} where $g_X(s)$ is supported in $X$ and \begin{equation*} [g_X(s), Q_\Lambda] = 0, \end{equation*} for all $s$. Locality is expressed in terms of the decay of the norm of $g_X(s)$ as a function of the size of $X$, for example by assuming that \begin{equation*} \sup_{s\in[0,1]}\sup_{x\in\Lambda}\sum_{X\ni x}\frac{\Vert g_X(s)\Vert}{\xi(\mathrm{diam}(X))} < C \end{equation*} uniformly in $L$, where $\xi:[0,\infty)\to (0,\infty)$ is an $L$-independent, rapidly decaying function: $\xi(r) = {\mathcal O}(r^{-\infty})$. With this, $U = U(1)$ is the solution of the Schr\"odinger equation \begin{equation*} \mathrm{i} \dot U(s) = G(s) U(s),\qquad U(0) = 1. \end{equation*} Its adjoint action on the observables satisfies a Lieb-Robinson bound, see for example~\cite{AmandaQL}, and hence \begin{equation*} U^{*} {\mathcal A}_X U\subset{\mathcal A}_X \end{equation*} for any set $X$, where ${\mathcal A}_X$ is the set of observables supported in $X$, up to corrections whose norms vanish fast with the distance to $X$. Secondly, $U$ conserves charge in the sense that \begin{equation*} U^{*} Q_X U - Q_X \in {\mathcal A}_{\partial X}, \end{equation*} where $\partial X = \{x: d(x,X)\leq 1\text{ and }d(x,X^c)\leq 1 \}$ is the boundary of $X$.
In particular, the operator of net charge transported into the half-torus has the form \begin{equation}\label{UCC} U^{*} Q U - Q \stackrel{\scriptscriptstyle L}{=} T_- - T_+,\qquad T_\pm\in{\mathcal A}_{\partial_\pm} \end{equation} where $\partial_\pm$ are the two disjoint parts of the boundary of $\Gamma$. This of course is to be related to~(\ref{1d currents}) in the first section. Just as it was there, this specific form follows from the assumed locality and charge conservation, as \begin{align} U^{*} QU-Q &= \mathrm{i}\int_0^1 U^* (s) [G(s), Q] U(s) \,\mathrm{d} s \nonumber \\ & \stackrel{\scriptscriptstyle L}{=} \mathrm{i}\int_0^1 U^* (s) [G_{-}(s) , Q] U(s) \,\mathrm{d} s + \mathrm{i}\int_0^1 U^* (s) [G_{+}(s) , Q] U(s) \,\mathrm{d} s \label{Tminus} \end{align} identifies $T_\pm$. Indeed, by charge conservation the local expansion \begin{equation*} [G(s), Q] = \sum_{X\subset\Lambda}[g_X(s),Q] \end{equation*} asymptotically splits into the two contributions $X\cap \partial_-\neq\emptyset$ and $X\cap \partial_+\neq\emptyset$, where each one belongs to ${\mathcal A}_{\partial_-}$, respectively ${\mathcal A}_{\partial_+}$, since the sets $X$ whose diameter grows with $L$ (in particular those that span both $\partial_\pm$) have vanishing contributions for large $L$. The Lieb-Robinson bound yields the claim~(\ref{UCC}). The final hypothesis of the theorem relates $P$ and $U$: the range of $P$ is asymptotically invariant under $U$, namely \begin{equation*} [U,P] \stackrel{\scriptscriptstyle L}{=} 0. \end{equation*} \begin{thm}\label{thm: Index} Under the assumptions above, \begin{equation*} \mathrm{dist} (\mathrm{tr}(P T_-), {\mathbb Z}) \stackrel{\scriptscriptstyle L}{=} 0. \end{equation*} \end{thm} In other words, the expected charge transport across the fiducial line (which in the present two-dimensional setting has length $L$) is an integer multiple of $1/p$ for large $L$ to almost exponential precision.
Although we shall not delve into the proof, we point out a few basic ideas. First of all, the gap, together with the Lieb-Robinson bound, implies that $P$ satisfies a clustering property, \begin{equation}\label{clustering} P ABP - PAPBP = \min\{\vert X\vert,\vert Y \vert\}{\mathcal O}(d(X,Y)^{-\infty}) \end{equation} for any $A\in {\mathcal A}_X,B\in{\mathcal A}_Y$, see~\cite{HastingsClustering,BrunoClustering}. While $[\overline Q,P] = 0$ implies that charge fluctuations between the ground state space and its orthogonal complement vanish, which is in stark contrast with the fluctuations of the charge in the half-space $Q$, the two `charges' have the same expected transport since \begin{equation*} \mathrm{tr}(P (U^{*} \overline Q U - \overline Q)_- ) \stackrel{\scriptscriptstyle L}{=} \mathrm{tr}(PT_-) - \mathrm{tr} (P(U^{*} K_- U - K_-)) \stackrel{\scriptscriptstyle L}{=} \mathrm{tr}(PT_-) \end{equation*} by the support property of $K_-$ in the first equality and cyclicity of the trace with $[U,P]\eqL0$ in the second one. The proof proceeds by computing the full counting statistics~\cite{KlichFCS,avron2008fredholm} of $\overline Q$ through the fiducial line $\partial_-$, which is associated with the operator $Z_-(\phi)$ defined by the factorization \begin{equation*} U^{*} \ep{\mathrm{i}\phi \overline Q} U \ep{-\mathrm{i}\phi \overline Q} = Z_-(\phi) Z_+(\phi). \end{equation*} This equality further allows us to point to the use of clustering in the proof. Indeed, the unitary operator $Z(\phi)$ given by the left-hand side leaves the ground state space invariant by assumption on $U$ and construction of $\overline Q$. But clustering implies that \begin{equation*} PZP \stackrel{\scriptscriptstyle L}{=} PZ_-PZ_+P \end{equation*} so that \begin{equation*} 1\geq \Vert PZ_-P\Vert \geq \Vert PZP\Vert\stackrel{\scriptscriptstyle L}{=} 1 \end{equation*} which proves that $[Z_-(\phi),P]\eqL0$, namely that the ground state space is an invariant space of $Z_-(\phi)$, too.
With the index in hand, Bloch's theorem is an elementary corollary of Theorem~\ref{thm: Index}. For any $t\in[0,1]$, we pick \begin{equation*} U = \ep{-\mathrm{i} t H} \end{equation*} which is local, conserves charge and commutes with $P$. Now, the operators on the r.h.s.~of~(\ref{MB currents}) are naturally identified with the currents across the lines $\partial_\pm$, see~(\ref{1d currents}). The charge transported in the interval $[0,t]$ is explicit, see~(\ref{Tminus}), \begin{equation*} \mathrm{tr} (P T_-(t)) =\mathrm{i}\int_0^t \mathrm{tr} (P U^* (s) [H_- , Q] U(s)) \,\mathrm{d} s = t\mathrm{tr} (P J) \end{equation*} since $U(s) P U(s)^{*} = P$ for any $s\in[0,1]$. All assumptions of Theorem~\ref{thm: Index} apply, so that \begin{equation*} \mathrm{dist} (t \mathrm{tr} (P J),{\mathbb Z}) \stackrel{\scriptscriptstyle L}{=} 0, \end{equation*} and since this is valid for all $t\in[0,1]$, we conclude that \begin{equation*} \langle J\rangle_P \stackrel{\scriptscriptstyle L}{=} 0, \end{equation*} which is Bloch's theorem again. \section{Currents in mesoscopic rings} A recurring question associated with Bloch's theorem is its apparent contradiction with the existence of superconducting currents. A short answer is that persistent currents in superconducting rings are a mesoscopic phenomenon. We are, however, not aware of a concrete microscopic model to demonstrate this point. On the other hand, the related persistent currents in mesoscopic metallic rings \cite{Buttiker1983} are modelled by a free Laplacian on a ring pierced by a magnetic flux. In this model, the current can be explicitly calculated. Apart from showing that it indeed vanishes in the large volume limit, the example also illustrates that the gap condition in Theorem~\ref{Gap_Bloch} is necessary: As we will see, the model is gapless and the current is of order $L^{-1}$ instead of the ${\mathcal O}(L^{-\infty})$ that would be guaranteed by the theorem, had the model had a gap.
We describe a lattice version of the model \cite{AFG}. A single-particle Hamiltonian associated to an electron hopping on a ring $\Pi={\mathbb Z}/L{\mathbb Z}$ threaded by a flux $\phi \in [0, 2 \pi)$ is $$ H = - \ep{\mathrm{i} \phi/L} T - \ep{-\mathrm{i} \phi/L} T^*, $$ where \begin{equation*} (T \psi)(x) = \psi(x-1), \end{equation*} and it acts on the Hilbert space $l^2(\Pi)$. The normalized eigenstates of $H$ are given by $$ \psi_k(x) = \frac{1}{\sqrt L} \ep{2\pi \mathrm{i} x k/L},\qquad k\in{\mathbb Z}/L{\mathbb Z}, $$ with eigenvalues \begin{equation*} H \psi_k = -2 \cos \left( \frac{\phi - 2 \pi k}{L} \right) \psi_k. \end{equation*} The spectrum of the Hamiltonian has no gap that remains open in the large volume limit. We denote by $\{| x \rangle ,\, x=0, \dots , L-1\}$ the standard position eigenbasis and note that $T = \sum_{x\in\Pi}\vert x+1\rangle\langle x\vert $. The charge in the interval $[a,b]$ is given by $Q_{[a,b]} = \sum_{x=a}^b |x \rangle \langle x|$, namely it is the multiplication operator by the indicator function of the interval $[a,b]$. We have $$ \mathrm{i}[H, Q_{[a,b]}] = J_{\langle a-1,a \rangle} - J_{\langle b,b+1 \rangle}, $$ where \begin{equation*} J_{\langle x-1,x \rangle} = \mathrm{i} \ep{\mathrm{i} \phi/L} |x \rangle \langle x-1| - \mathrm{i} \ep{-\mathrm{i} \phi/L} | x-1 \rangle \langle x|. \end{equation*} The current per edge is $$ j = \frac{1}{L} \sum_{x=0}^{L-1} J_{\langle x-1, x \rangle} = \frac{\mathrm{i}}{L}\ep{\mathrm{i} \phi/L} T - \frac{\mathrm{i}}{L} \ep{-\mathrm{i} \phi/L} T^*. $$ Note that $j = -\partial_\phi H$, and hence we get $$ \langle \psi_k , j \psi_k \rangle = \frac{2}{L} \sin \left( \frac{2 \pi k-\phi}{L} \right). $$ By translation invariance, the expectation values of $j$ and $J_{\langle x-1,x \rangle}$ are the same in any eigenstate of $H$. The ground state of $N$ non-interacting electrons is given by a Fermi projection $P_F$ on the $N$ lowest energy levels of $H$.
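As a quick numerical sanity check (our own addition, not part of the original analysis), the closed-form eigenvalues and current expectations above can be verified by constructing the $L\times L$ matrices directly; a minimal Python sketch, assuming NumPy is available:

```python
import numpy as np

def ring_operators(L, phi):
    """Hopping Hamiltonian H and current-per-edge operator j = -dH/dphi
    for one electron on a ring of L sites threaded by a flux phi."""
    T = np.roll(np.eye(L), 1, axis=0)  # T = sum_x |x+1><x|, i.e. (T psi)(x) = psi(x-1)
    H = -np.exp(1j * phi / L) * T - np.exp(-1j * phi / L) * T.conj().T
    j = (1j / L) * np.exp(1j * phi / L) * T - (1j / L) * np.exp(-1j * phi / L) * T.conj().T
    return H, j

L, phi = 12, 0.7
H, j = ring_operators(L, phi)
for k in range(L):
    psi = np.exp(2j * np.pi * k * np.arange(L) / L) / np.sqrt(L)
    # compare with the closed-form eigenvalue and current expectation
    assert np.isclose(psi.conj() @ H @ psi, -2 * np.cos((phi - 2 * np.pi * k) / L))
    assert np.isclose(psi.conj() @ j @ psi, (2 / L) * np.sin((2 * np.pi * k - phi) / L))
```

The assertions hold for every mode $k$, confirming the two formulas term by term.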
For $\phi \in (0, \pi)$ and $N = 2m+1$ (the even case being similar), this corresponds to the eigenvectors $\psi_k$ with $k$ in the interval $[-m,m]$. The current expectation value is then $$ \mathrm{tr} ( P_F j) = \sum_{k=-m}^m \frac{2}{L} \sin \left( \frac{ 2 \pi k -\phi}{L} \right). $$ The sum can be explicitly computed, and in the limit $L \to \infty$ with $N/L \to \rho <1$ we get $$ \mathrm{tr} ( P_F j) = -\frac{1}{L} \frac{2 \phi}{\pi} \sin(\pi \rho) + {\mathcal O}(L^{-2}). $$ We conclude that the current is indeed of order $1/L$. It vanishes in the large volume limit, but only polynomially in $L$, showing that the gap assumption in Theorem~\ref{Gap_Bloch} is necessary. Note that for $\phi \neq 0$, time-reversal invariance is broken. In the time-reversal invariant situation $\phi = 0$, the current vanishes identically. \section*{Acknowledgements} \noindent The results presented in this note owe very much to our collaboration with W.~De Roeck. We thank H.~Watanabe for an email correspondence related to this work. The work of S.B. was supported by NSERC of Canada. M.F. was supported in part by the NSF under grant DMS-1907435.
\section{Related Work} Causation has received substantial research interest in many areas. In computer science, Pearl \cite{didelez2001judea} and Rosenbaum~\cite{rosenbaum2002observational} laid the foundation for causal inference, upon which several fields (cognitive science, econometrics, epidemiology, philosophy and statistics) have built their respective methodologies \cite{freedman1997association,bielby1977structural, robins2000marginal}. At the center of causation is a causal model. Arguably, one of the earliest and most popular models is the Rubin-Neyman causal model \cite{sekhon2008neyman}. Under this model, $X$ causes $Y$ if $X$ occurs before $Y$ and, without $X$, $Y$ would have been different. Besides the Rubin-Neyman model, there are several other causal models, including Granger causality \cite{granger1988some} for time series, Bayes networks \cite{jensen1996introduction}, structural equation modeling \cite{bielby1977structural}, causal graphical models \cite{elwert2013graphical}, and more generally, probabilistic graphical models \cite{murphy2002dynamic}. In our work, we use the potential outcome framework from the Rubin-Neyman model and we use causal graphical models to identify and correct for biases. Causal graphical models are tools to visualize causal relationships among variables. Nodes of the causal graph are variables and edges are causal relationships. Most methods assume that the causal graph structure is given a priori; however, methods have been proposed for discovering the structure of the causal graph \cite{heckerman1995bayesian, heckerman1997bayesian}. In our work, the structure is partially given: we know the relationships among groups of variables; however, we have to assign each variable to the correct group based on data. Knowing the correct graph structure is important, because substructures in the graph are suggestive of sources of bias. To correct for biases, we are looking for specific substructures.
For example, causal chains can be sources of overcorrection bias and "V"-shaped structures can be indicative of confounding or endogenous selection bias \cite{robins2000marginal}. Many other interesting substructures have been studied \cite{cooper1997simple,silverstein2000scalable,mani2012theoretical}. In our work, we consider three fundamental such structures: direct causal effect, indirect causal effect and confounding. Of these, confounding is the most severe and has received the most research interest. Numerous methods exist to handle confounding, including propensity score matching (PSM) \cite{austin2011introduction}, marginal structural models \cite{robins2000marginal} and g-estimation \cite{bielby1977structural}. The latter two extend PSM for various situations, for example, for time-varying interventions \cite{robins2000marginal}. Propensity score matching is used to estimate the effect of an intervention on an outcome. The propensity score is the propensity (probability) of a patient receiving the intervention given their baseline characteristics, and the propensity score is used to create a new population that is free of confounding. Many PSM techniques exist and they typically differ in how they use the propensity score to create this new population \cite{lunceford2004stratification,austin2015estimating, rosenbaum1983central,austin2015moving}. Applications of causal modeling are not exclusive to the social and life sciences. In data mining, Lambert et al. \cite{lambert2007more} investigated the causal effect of new features on click-through rates and Chan et al. \cite{chan2010evaluating} used doubly robust estimation techniques to determine the efficacy of display advertisements. Even extending association rule mining to causal rule mining has been attempted before \cite{li2013mining, holland1988differential,li2015observational}. Li et al.
\cite{li2013mining} used odds ratios to identify causal patterns and later extended their technique \cite{li2015observational} to handle large data sets. Their technique, however, is not rooted in a causal model and hence offers no protection against computing systematically biased estimates. In their proposed causal decision trees \cite{li2015causal}, they used the potential outcome framework, but still did not address correction for various biases, including confounding. \section{Simple Causal Rule Mining in Irregular Time-Series Data} \subsection{Introduction} According to the Centers for Disease Control and Prevention, the incidence of sepsis or septicemia doubled from 2000 through 2008, and hospitalizations increased by 70\% for these diagnoses [1]. In addition, severe sepsis and shock have higher mortality rates than other sepsis diagnoses, accounting for an estimated mortality between 18\% and 40\%. During the first 30 days of hospitalization, mortality can range from 10\% to 50\% depending on the patient's risk factors. Patients with severe sepsis or septic shock are sicker, have longer hospital stays, are more frequently discharged to other short-term hospitals or long-term care institutions, and represent the most expensive hospital condition treated in 2011 [2]. The use of evidence-based practice (EBP) guidelines, such as the Surviving Sepsis Campaign (SSC), could lead to an earlier diagnosis, and consequently, earlier treatment. However, these guidelines have not been widely incorporated into clinical practice. The SSC is a compilation of international recommendations for the management of severe sepsis and shock. Many of these recommendations are interventions to prevent further system deterioration during and after diagnosis. Even when the presence of sepsis or progression to sepsis is suspected early in the course of treatment, timely implementation of adequate treatment management and guideline compliance are still a challenge.
Therefore, the effectiveness of the guideline in preventing clinical complications for this population is still unclear to clinicians and researchers alike. The majority of studies have focused on early detection and prevention of sepsis, and little is known about the compliance rate with the SSC and the impact of compliance on the prevention of sepsis-related complications. Further, the measurement of adherence to individual SSC recommendations rather than the entire SSC is, to our knowledge, limited. The majority of studies have used traditional randomized controlled trials with analytic techniques such as regression modeling to adjust for risk factors known from previous research. Data-driven methodologies, such as data mining techniques and machine learning, have the potential to identify new insights from electronic health records (EHRs) that can strengthen existing EBP guidelines. The national mandate for all health professionals to implement interoperable EHRs by 2015 provides an opportunity for the reuse of potentially large amounts of EHR data to address new research questions that explore patterns of patient characteristics, evidence-based guideline interventions, and improvement in health. Furthermore, expanding the range of variables documented in EHRs to include team-based assessment and intervention data can increase our understanding of the compliance with EBP guidelines and the influence of these guidelines on patient outcomes. In the absence of such data elements, adherence to guidelines can only be inferred; it cannot be directly observed. In this section, we present a methodology for using EHR data to estimate the compliance with the SSC guideline recommendations and also estimate the effect of the individual recommendations in the guideline on the prevention of in-hospital mortality and sepsis-related complications in patients with severe sepsis and septic shock.
\subsection{Methods} Data from the EHR of a health system in the Midwest was transferred to a clinical data repository (CDR) at the University of Minnesota, which is funded through a Clinical Translational Science Award. After IRB approval, de-identified data for all adult patients hospitalized between 1/1/09 and 12/31/11 with a severe sepsis or shock diagnosis was obtained for this study. \subsubsection{Data and cohort selection} The sample included 186 adult patients age 18 years or older with an ICD-9 diagnosis code of severe sepsis or shock (995.92 and 785.5*) identified from billing data. Since 785.* codes corresponding to shock can capture patients without sepsis, patients without severe sepsis or septic shock and patients who did not receive antibiotics were excluded. These exclusions aimed to capture only those patients who had severe sepsis and septic shock, and were treated for that clinical condition. The final sample consisted of 177 patients. \subsubsection{Variables of interest} Fifteen predictor variables (baseline characteristics) were collected. These include sociodemographics and health disparities data: age, gender, race, ethnicity, and payer (Medicaid represents low income); laboratory results: lactate and white blood cell count (WBC); vital signs: heart rate (HR), respiratory rate (RR), temperature (Temp), mean arterial blood pressure (MAP); and diagnoses for respiratory, cardiovascular, cerebrovascular, and kidney-related comorbid conditions. ICD-9 codes for comorbid conditions were selected according to evidence in the literature. Comorbidities were aggregated from the patient's prior problem list to detect preexisting (upon admission) respiratory, cardiovascular, cerebrovascular, and kidney problems. Each category was treated as yes/no if any of the ICD-9 codes in that category were present.
The outcomes of interest were in-hospital mortality and the development of new complications (respiratory, cardiovascular, cerebrovascular, and kidney) during the hospital encounter. New complications were determined as the presence of ICD-9 codes on the patient's billing data that did not exist at the time of the admission. \subsubsection{Study design} This study aimed to analyze compliance with the SSC guideline recommendations in patients with severe sepsis or septic shock. Therefore, the baseline (TimeZero) was defined as the onset of sepsis and the patients were under observation until discharge. Unfortunately, the timestamp for the diagnoses dates back to the time of admission; hence the onset of sepsis needs to be estimated. The onset time for sepsis was defined as the earliest time during a hospital encounter when the patient meets at least two of the following six criteria: MAP < 65, HR > 100, RR > 20, temperature < 95 or > 100.94, WBC < 4 or > 12, and lactate > 2.0. The onset time was established based on current clinical practice and the literature on sepsis [5]. At the earliest time when two or more of the aforementioned conditions were met, a TimeZero flag was set at the time of the first occurrence of that abnormality, and the timing of SSC compliance commenced. \subsubsection{Guideline compliance} SSC guideline recommendations were translated into a readily computable set of rules. These rules have conditions related to an observation (e.g. MAP < 65 mmHg) and an intervention to administer (e.g. give vasopressors) if the patient meets the condition of the rule. The SSC guideline was transformed into 15 rules in a computational format, one for each recommendation in the SSC guideline recommendations, and each rule was evaluated for each patient (see Figure 1). After each rule is an abbreviated name subsequently used in this paper. \begin{figure}[ht!]
\centering \includegraphics[width=70mm, height=100mm]{image1.png} \caption{SSC rules for measuring guideline compliance \label{causa}} \end{figure} We call the treatment of a patient compliant (exposed) with respect to a specific recommendation if the patient meets the condition of the corresponding rule any time after TimeZero and the required intervention was administered; the treatment is non-compliant (unexposed) if the patient meets the condition of the corresponding rule after TimeZero, but the intervention was not administered (any time after TimeZero); and the recommendation is not applicable to a treatment if the patient does not meet the condition of the corresponding rule. In estimating compliance (as a metric) with a specific recommendation, we simply measure the fraction of compliant encounters among those to which the recommendation is applicable. In this phase of the study, the time when a recommendation was administered was not incorporated in the analysis. We also estimate the effect of the recommendation on the outcomes. We call a patient exposed to a recommendation if the recommendation is applicable to the patient and the corresponding intervention was administered to the patient. We call a patient unexposed to a recommendation if the recommendation is applicable but was not applied (the treatment was non-compliant). The incidence fraction in exposed patients with respect to an outcome is the fraction of patients with the outcome among the exposed patients. The incidence fraction of the unexposed patients can be defined analogously. We define the effect of the recommendation on an outcome as the difference in the incidence fractions between the unexposed and exposed patients. The recommendation is beneficial (protective against an outcome) if the effect is positive, namely, the incidence fraction in the unexposed patients is higher than that in the exposed patients.
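For illustration only, the onset-time rule and the compliance and effect definitions above can be transcribed almost literally into Python. This is our own sketch, not code from the study; the record layout (fields such as `conditions_met` and `interventions_given`) is hypothetical, and `time_zero` uses a simplified reading of the rule, returning the time of the first observation at which two criteria are met:

```python
def meets_onset_criteria(obs):
    """Count how many of the six sepsis-onset criteria an observation meets."""
    flags = [
        obs['MAP'] < 65,
        obs['HR'] > 100,
        obs['RR'] > 20,
        obs['Temp'] < 95 or obs['Temp'] > 100.94,
        obs['WBC'] < 4 or obs['WBC'] > 12,
        obs['Lactate'] > 2.0,
    ]
    return sum(flags)

def time_zero(observations):
    """Earliest time at which >= 2 criteria are met (observations sorted by time);
    returns None if the patient never meets the onset definition."""
    for t, obs in observations:
        if meets_onset_criteria(obs) >= 2:
            return t
    return None

def classify(patient, rule):
    """'compliant', 'non-compliant', or 'not-applicable' for one rule, where the
    patient's condition and intervention events are those recorded after TimeZero."""
    condition, intervention = rule
    if condition not in patient['conditions_met']:
        return 'not-applicable'
    return 'compliant' if intervention in patient['interventions_given'] else 'non-compliant'

def effect(patients, rule, outcome):
    """Incidence fraction among the unexposed minus that among the exposed;
    a positive value indicates a protective recommendation."""
    exposed = [p for p in patients if classify(p, rule) == 'compliant']
    unexposed = [p for p in patients if classify(p, rule) == 'non-compliant']
    frac = lambda group: sum(p['outcomes'][outcome] for p in group) / len(group)
    return frac(unexposed) - frac(exposed)
```

Patients classified as not applicable contribute neither to the compliance metric's denominator nor to the effect estimate, mirroring the definitions in the text.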
\subsubsection{Data quality}
Included variables were assessed for data quality regarding accuracy and completeness based on the literature and domain knowledge. Constraints were determined for plausible values, e.g., a CVP reading could not be greater than 50. Values outside of these constraints were recoded as missing values. Any observation that took place before the estimated onset of sepsis (TimeZero) was considered a baseline observation. Simple mean imputation was the method of choice for imputing missing values. Imputation was necessary for lactate (7.7\%), temperature (3\%), and WBC (3\%). There was no missing data for the other variables or for the outcomes of interest. Central venous pressure was not included as a baseline characteristic due to the high number of missing values (54\%).

\subsubsection{Propensity score matching}
Patients who received SSC recommendations may be in worse health than patients who did not receive SSC recommendations. For example, patients whose lactate was measured may have more apparent (and possibly advanced) sepsis than patients whose lactate was not measured. To compensate for such disparities, propensity score matching (PSM) was employed. The goal of PSM is to balance the data set in terms of the covariates between patients exposed and unexposed to the SSC guideline recommendations. This is achieved by matching exposed patients with unexposed patients on their propensity (probability) of receiving the recommendations. This ensures that at TimeZero, pairs of patients, one exposed and one unexposed, are in the same state of health and differ only in their exposure to the recommendation. PSM is a popular technique for estimating treatment effects. To compute the propensity of patients to receive treatment, a logistic regression model was used, where the dependent variable is exposure to the recommendation and the independent variables are the covariates.
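A minimal sketch of the matching step, assuming propensity scores have already been obtained from the logistic regression described above (the patient identifiers, scores, and caliper value of 0.1 below are hypothetical illustrations, not the study's actual code):

```python
def caliper_match(exposed, unexposed, caliper=0.1):
    """Greedy 1:1 nearest-neighbour matching on propensity scores.

    exposed / unexposed: lists of (patient_id, propensity_score) pairs.
    Two scores match only if they differ by no more than the caliper.
    """
    pairs = []
    available = dict(unexposed)  # id -> score, candidates not yet matched
    for pid, score in exposed:
        if not available:
            break
        # Nearest remaining unexposed candidate by propensity score.
        best = min(available, key=lambda u: abs(available[u] - score))
        if abs(available[best] - score) <= caliper:
            pairs.append((pid, best))
            del available[best]  # each unexposed patient is used at most once
    return pairs

exposed = [("e1", 0.30), ("e2", 0.62), ("e3", 0.90)]
unexposed = [("u1", 0.28), ("u2", 0.55), ("u3", 0.40)]
pairs = caliper_match(exposed, unexposed, caliper=0.1)
# e1 (0.30) -> u1 (0.28); e2 (0.62) -> u2 (0.55);
# e3 (0.90) finds no candidate within the caliper and stays unmatched.
```

Leaving the third exposed patient unmatched is the intended behaviour of caliper matching: patients without a comparable counterpart are excluded from the matched population.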
The linear prediction (propensity score) of this model was computed for every patient. A new (matched) population was created from pairs of exposed and unexposed patients with matching propensity scores. Two scores match if they differ by no more than a certain caliper (0.1 in our study). The effect of the recommendation was estimated by comparing the incidence fractions among the exposed and unexposed patients in the matched population.

\subsubsection{PSM nested inside bootstrapping simulation}
In order to incorporate the effect of additional sources of variability, arising due to estimation in the propensity score model and variability in the propensity score matched sample, 500 bootstrap samples were drawn from the original sample. In each of these bootstrap iterations, the propensity score model was re-estimated, the above caliper matching technique was applied, and the effect of the recommendation was computed with respect to all outcomes. In recent years, bootstrap simulation has been widely employed in conjunction with PSM to better handle bias and confounding variables. For each recommendation and outcome, the 500 bootstrap iterations result in 500 estimates of the effect (of the recommendation on the outcome), approximating the sampling distribution of the effect.

\subsection{Results}
Table 1 shows the baseline characteristics of the study population. Results are reported as total counts for categorical variables, and means with inter-quartile (25\% to 75\%) ranges for continuous variables. As shown in Table 1, the majority of patients were male, Caucasian, and had Medicaid as the payer. Before the onset of sepsis, cardiovascular comorbidities (56.4\%) were common, and the mean HR (101.3) was slightly above normal, as were lactate (2.8) and WBC (15.8). The mean length of stay for the sample was 15 days, ranging from less than 24 hours to 6 months.
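The bootstrap procedure described in the preceding subsection can be sketched as follows; for brevity the per-sample estimator below compares raw incidence fractions, whereas the actual pipeline would refit the propensity model and re-match within each bootstrap sample (all data below are hypothetical):

```python
import random

def bootstrap_effects(patients, estimate_effect, n_boot=500, seed=42):
    """Approximate the sampling distribution of a recommendation's effect
    by re-estimating it on n_boot resamples drawn with replacement."""
    rng = random.Random(seed)
    effects = []
    for _ in range(n_boot):
        # Resample the cohort with replacement; in the full pipeline the
        # propensity model would be refit and matching redone on `sample`.
        sample = [rng.choice(patients) for _ in patients]
        effects.append(estimate_effect(sample))
    effects.sort()
    # 95% percentile interval from the sorted bootstrap estimates.
    return effects, (effects[int(0.025 * n_boot)], effects[int(0.975 * n_boot) - 1])

def effect(sample):
    """Toy estimator: incidence fraction in unexposed minus exposed."""
    exposed = [p for p in sample if p["exposed"]]
    unexposed = [p for p in sample if not p["exposed"]]
    if not exposed or not unexposed:
        return 0.0
    return (sum(p["outcome"] for p in unexposed) / len(unexposed)
            - sum(p["outcome"] for p in exposed) / len(exposed))

cohort = ([{"exposed": True, "outcome": 1}] * 2 + [{"exposed": True, "outcome": 0}] * 8
          + [{"exposed": False, "outcome": 1}] * 5 + [{"exposed": False, "outcome": 0}] * 5)
dist, ci95 = bootstrap_effects(cohort, effect)
```

The 500 sorted estimates in `dist` correspond to one boxplot in Figure 2, and `ci95` to one row of the confidence-interval table.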
TimeZero was within the first 24 hours of admission, and patients at that time were primarily (86.4\%) in the emergency department.
\begin{table}[ht] \centering \small
\begin{tabular}{lr} \hline
Feature & Mean \\ \hline
Total Number of Patients & 177 \\
Average Age & 61 \\
Gender (Male) & 102 \\
Race (Caucasian) & 97 \\
Ethnicity (Latino) & 11 \\
Payer (Medicaid) & 102 \\
White blood cell count (WBC) & 15.8 \\
Lactate & 2.8 \\
Mean blood pressure (MAP) & 73.9 \\
Temperature & 98.4 \\
Heart Rate & 101.3 \\
Respiratory Rate & 20.6 \\
Cardiovascular & 100 \\
Cerebrovascular & 66 \\
Respiratory & 69 \\
Kidney & 62 \\ \hline
\end{tabular}
\caption{Demographic statistics of the patient population}\label{tbl:amstats}
\end{table}
In Figure 2, the effects of various rule-complication pairs are depicted. An effect is defined as the difference in the mean rate of progression to complications between the exposed and unexposed groups. Since we used bootstrap simulation, for each rule-complication pair, 500 replications were performed, resulting in a sampling distribution for the effect. The sampling distribution for each rule-complication pair is presented as a boxplot. The boxplots represent the statistic measured, i.e., in this study, the difference in the incidence of an outcome between the unexposed and exposed populations. When this statistic is 0, the recommendation has no effect. If the statistic is greater than 0, the recommendation is protective for that specific outcome; if it is below 0, the recommendation may even increase the risk of the outcome.
\begin{figure}[ht!] \centering \includegraphics[width=90mm, height=130mm]{image3.png}
\caption{Box-plots of the mean difference between groups (unexposed $-$ exposed) with respect to the guideline recommendations, for each of the outcomes of interest.
\label{fig:effects}}
\end{figure}
The panes (groups of boxplots) correspond to the complications and the boxes within each pane correspond to the recommendations (rules). For example, the effect of the Ventilator rule (Recommendation 15: patients in respiratory distress should be put on a ventilator) on mortality (Death) is shown in the rightmost box (Ventilator) in the bottom-most pane (Death). Since all effects in the boxplot are above 0, namely the number of observed deaths in the unexposed group is higher than in the exposed group, compliance with the Ventilator rule reduces the number of deaths. Therefore, the corresponding recommendation is beneficial in protecting patients from Death (mortality). In Table 3, we present the 95\% confidence intervals for various rule-outcome pairs. To further ensure the validity of the results, we examined the propensity score distribution in the exposed and unexposed groups. As an example, Figure 3 illustrates the propensity score distribution for a randomly selected bootstrap iteration measuring the effect of Ventilator on Death. The horizontal axis represents the propensity score, which is the probability of receiving the intervention, and the vertical axis represents the density, namely the proportion of patients in each group with a particular propensity for being put on a ventilator. Figure 3 shows substantial overlap between the propensity scores in the exposed and unexposed groups. This overlap indicates that the matching successfully balanced the exposed and unexposed populations with respect to the Ventilator rule and the outcome Death. Other rule-complication pairs exhibit similar propensity score distributions.
\begin{figure}[ht!]
\centering \includegraphics[width=90mm, height=60mm]{image4.png}
\caption{Distribution of the propensity scores in the exposed and unexposed groups for the outcome Death when the SSC recommendation was Ventilator.\label{fig:psoverlap}}
\end{figure}

\section{Conclusion}
The overall purpose of this study was to use EHR data to determine compliance with the Surviving Sepsis Campaign (SSC) guideline and to measure its impact on inpatient mortality and sepsis complications in patients with severe sepsis and septic shock. Results showed that compliance was high ($>$ 95\%) for the MAP and CVP recommendations, with fluid resuscitation given for low readings. Other high-compliance (greater than 80\%) recommendations were: insulin given for high blood glucose and evaluating respiratory distress. The recommendations with the lowest compliance ($<$ 30\%) were: vasopressor or albumin for continuing low MAP or CVP readings. This may be due to a study design artifact, where the rule only considered interventions initiated after TimeZero (the estimated onset of sepsis) while the fluid resuscitation may have taken place earlier. Alternatively, the apparently poor compliance could also be explained by issues related to the coding of fluids: during data validation, we found that it was difficult to track fluids. Our study also demonstrates that retrospective EHR data can be used to evaluate the effect of compliance with guideline recommendations on outcomes. We found a number of SSC recommendations that were significantly protective against more than one complication: Ventilator was protective against Cardiovascular and Respiratory complications as well as Death; use of Vasopressors was protective against Respiratory complications. Other recommendations, BCulture, Antibiotic, Vasopressor, Lactate, CVP, and RespDistress, showed results less consistent with our expectations. For instance, Vasopressor, used to treat low MAP, appears to increase cerebrovascular complications.
While this finding is not statistically significant, it may be congruent with the fact that small brain vessels are very sensitive to changes in blood pressure. Low MAP can cause oxygen deprivation and, consequently, brain damage. Ventilator, Vasopressor, and BGlucose showed protective effects against Respiratory complications. The SSC guideline recommends the implementation of ventilator therapy as soon as any change in respiratory status is noticed. This intervention aims to protect the patient against further system stress, resolve hypoxia, help with perfusion across the main cardio-respiratory vessels, and decrease the release of toxins due to respiratory effort. Our study is a proof-of-concept study demonstrating that EHR data can be used to estimate the effect of guideline recommendations. However, for several combinations of recommendations and outcomes, the effect was not significant. We believe that the reason is that guidelines represent workflows and the effect of the workflow goes beyond the effects of the individual guideline recommendations. For example, by considering the recommendations outside the context of the workflow, we may ignore whether the intervention addressed the condition that triggered its administration. If low MAP triggered the administration of vasopressors, without considering the workflow, we do not know whether MAP returned to normal levels thereafter. Thus we cannot equate an adverse outcome with the failure of the guideline; it may be the result of the insufficiency of the intervention. Moving forward, we are going to model the workflows behind the guidelines and apply the same principles that we developed in this work to estimate the effect of the entire workflow. This phase of our study did not address the timing of recommendations or the time prior to TimeZero.
For this analysis, guideline compliance was considered only after TimeZero (the estimated onset), since compliance with SSC is only necessary in the presence of suspected or confirmed sepsis. There is no reason to suspect sepsis before TimeZero. However, some interventions may have started earlier, without respect to sepsis. For example, 100\% of the patients in this sample had antibiotics (potentially preventive antibiotics), but only 99 (55\%) patients received them after TimeZero. The EHR does not provide date and time for certain ICD-9 diagnoses. During a hospital stay, all new diagnoses are recorded with the admission date. We know whether a diagnosis was present on admission or not, thus we know whether it is a preexisting or new condition, but we do not know precisely when the patient developed this condition during the hospitalization. For this reason, we are unable to detect whether the SSC guideline was applied before or after a complication occurred, and thus we may underestimate the beneficial effect of some of the recommendations. For example, high levels of lactate are strongly related to hypoxia and pulmonary damage. If these patients were checked for lactate after pulmonary distress, we would consider the treatment compliant with the Lactate recommendation, but we would not know that the respiratory distress was already present at the time of the lactate measurement and we would incorrectly count it as a complication that the guideline failed to prevent.

\section{Complex Causal Rule Mining in Irregular Time-Series Data}
\subsection{Introduction}
Effective management of human health remains a major societal challenge, as evidenced by the rapid growth in the number of patients with multiple chronic conditions. Type-II Diabetes Mellitus (T2DM), one of those conditions, affects 25.6 million (11.3\%) Americans of age 20 or older and is the seventh leading cause of death in the United States \cite{centers2011national}.
Effective treatment of T2DM is frequently complicated by diseases comorbid to T2DM, such as high blood pressure, high cholesterol, and abdominal obesity. Currently, these diseases are treated in isolation, which leads to wasteful duplicate treatments and suboptimal outcomes. The recent rise in the number of patients with multiple chronic conditions necessitates comprehensive treatment of these conditions to reduce medical waste and improve outcomes. Finding optimal treatment for patients who suffer from multiple associated diseases, each of which can have multiple available treatments, is a complex problem. We could simply use techniques based on association, but a reasonable algorithm would likely find that the use of a drug is associated with some unfavorable outcome. This does not mean that the drug is harmful; in fact, in many cases it simply means that patients who take the drug are sicker than those who do not and thus have a higher chance of the unfavorable outcome. What we really wish to know is whether a treatment \emph{causes} an unfavorable outcome, as opposed to being merely associated with it. The difficulty in quantifying the effect of interventions on outcomes stems from subtle biases. Suppose we wish to quantify the effect of a cholesterol-lowering agent, statin, on diabetes. We could simply compare the proportion of diabetic patients in the subpopulation that takes statins and the subpopulation that does not, and estimate the effect of statins as the difference between the two proportions. This method would give the correct answer only if the statin-taking and non-statin-taking patients were identical in all respects that influence the diabetes outcome. We refer to this situation as treated and untreated patients being \emph{comparable}. Unfortunately, statin-taking patients are not comparable to non-statin-taking patients, because they take statins to treat high cholesterol, which in and of itself increases the risk of diabetes.
High cholesterol \emph{confounds} the effect of statins. Many different sources of bias exist; confounding is just one of them. In this manuscript, we are going to address several different sources of bias, including confounding. Techniques to address such biases in causal effect estimation exist. However, these techniques have been designed to quantify the effect of a single intervention. In trying to apply these techniques to our problem of finding optimal treatment for patients suffering from varying sets of diseases, we face two challenges. First, patients with multiple conditions will likely need a combination of drugs. Quantifying the effect of multiple concurrent interventions is semantically different from considering only a single intervention. The key concept in estimating the effect of an intervention is \emph{comparability}: to estimate the effect of an intervention, we need two groups of patients who are identical in all relevant aspects except that one group receives the intervention and the other group does not. For a single intervention, the first group typically consists of the sickest patients who still do not get treated and the second group consists of the healthiest patients who do get treatment. They are reasonably in the same state of health. However, when we go from a single intervention to multiple interventions and try to estimate their \emph{joint} effect, comparability no longer exists. A patient requiring multiple simultaneous interventions is so fundamentally different from a patient who does not need any intervention that they are not comparable. The other key challenge in finding optimal intervention sets for patients with combinatorial sets of diseases is the combinatorial search space. Even if we could trivially extend the methods for quantifying the effect of a single intervention to a set of concurrent interventions, we would have to systematically explore a combinatorially large search space.
The association rule mining framework \cite{agrawal1994fast} provides an efficient solution for exploring combinatorial search spaces; however, it only detects associative relationships. Our interest is in causal relationships. In this manuscript, we propose causal rule mining, a framework for transitioning from association rule mining towards causal inference in subpopulations. Specifically, given a set of interventions and a set of items to define subpopulations, we wish to find all subpopulations in which effective intervention combinations exist and, in each such subpopulation, we wish to find all intervention combinations such that dropping any intervention from the combination will reduce the efficacy of the treatment. We call these \emph{closed intervention sets}, which are not to be confused with closed item sets. As a concrete example, interventions can be drugs, subpopulations can be defined in terms of their diseases, and for each subpopulation (set of diseases), our algorithm would return effective drug cocktails of increasing numbers of constituent drugs. Leaving out any drug from the cocktail will reduce the efficacy of the treatment. Closed intervention sets allow us to go from estimating a single intervention to multiple interventions. To address the exploration of the combinatorial search space, we propose a novel frequency-based anti-monotonic pruning strategy enabled by the closed intervention set concept. The essence of the anti-monotonic property is that if a set $I$ of interventions does not satisfy a criterion, none of its supersets will. The proposed pruning strategy based on closed intervention sets is strictly more efficient than the traditional pruning strategy used by the Apriori algorithm \cite{agrawal1994fast}. Underneath our combinatorial exploration algorithm, we utilize the Rubin-Neyman model of causation \cite{sekhon2008neyman}.
This model sets two conditions for causation: a set $X$ of interventions causes a change in $Y$ iff $X$ happens before $Y$ and $Y$ would be different had $X$ not occurred. The unobservable outcome of what would happen had a treated patient not received treatment is a \emph{potential outcome} and needs to be estimated. We present and compare five methods for estimating these potential outcomes and describe the biases these methods can correct. Typically, the ground truth for the effect of drugs is not known. In order to assess the quality of the estimates, we conduct a simulation study utilizing five different synthetic data sets, each of which introduces a new source of bias. We evaluate the effect of the bias on the five proposed methods, underscoring the statements with rigorous proofs when possible. We also evaluate our work on a real clinical data set from Mayo Clinic. We have data for over 52,000 patients with 13 years of follow-up time. Our outcome of interest is 5-year incident T2DM and we wish to extract patterns of interventions for patients suffering from combinations of common comorbidities of T2DM. First, we evaluate our methodology in terms of computational cost, demonstrating the effectiveness of the pruning methodologies. Next, we evaluate the patterns qualitatively, using patterns involving statins. We show that our methodology extracted patterns that allow us to explain the controversial findings surrounding statins \cite{huupponen2013statins}. \myparag{Contributions.} (1) We propose a novel framework for extracting causal rules from observational data, correcting for a number of common biases. (2) We introduce the concept of closed intervention sets to extend the concept of quantifying the effect of a single intervention to a set of concurrent interventions, sidestepping the patient comparability problem.
Closed intervention sets also allow for a pruning strategy that is strictly more efficient than the traditional pruning strategy used by the Apriori algorithm \cite{agrawal1994fast}. (3) We compare five methods of estimating causal effects from observational data that are applicable to our problem and rigorously evaluate them on synthetic data to mathematically prove (when possible) why they work.

\subsection{Background: Association Rule Mining}
We first briefly review the fundamental concepts of association rule mining and extend these concepts to causal rule mining in the next section. Consider a set $\mathcal{I}$ of \textbf{items}, which are single-term predicates evaluating to `true' or `false'. For example, $\{age>55\}$ can be an item. A $k$-\textbf{itemset} is a set of $k$ items, evaluated as the conjunction (logical `and') of its constituent items. Consider a dataset $D = \{ d_1, d_2, \ldots, d_n \}$, which consists of $n$ \textbf{observations}. Each observation, denoted by $d_j$, is a set of items. An itemset $I=\{i_1,i_2,\ldots,i_k\}$ ($I\subset\mathcal{I}$) \textbf{supports} an observation $d_j$ if all items in $I$ evaluate to `true' in the observation. The \textbf{support} of $I$ is the fraction of the observations in $D$ that support $I$. An itemset is \textbf{frequent} if its support exceeds a pre-defined minimum support threshold. An association rule is a logical implication of the form $X \Rightarrow Y$, where $X$ and $Y$ are disjoint itemsets. The support of a rule is $\mathrm{support}(XY)$ and the \textbf{confidence} of the rule is \begin{displaymath} \mathrm{conf}(X\Rightarrow Y)=\frac{\mathrm{support}(XY)}{\mathrm{support}(X)}=\mathrm{P}(Y|X). \end{displaymath}

\subsubsection{Causal Rule Mining}
Given an \textbf{intervention} itemset $X$ and an \textbf{outcome} item $Y$, such that $X$ and $Y$ are disjoint, a causal rule is an implication of the form $X \rightarrow Y$, suggesting that $X$ \emph{causes} a change in $Y$.
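The support and confidence definitions reviewed above can be illustrated with a small sketch (the items and observations are hypothetical toy data):

```python
def support(itemset, dataset):
    """Fraction of observations in which every item of `itemset` is 'true'."""
    return sum(itemset <= obs for obs in dataset) / len(dataset)

def confidence(x, y, dataset):
    """conf(X => Y) = support(XY) / support(X) = P(Y | X)."""
    return support(x | y, dataset) / support(x, dataset)

# Hypothetical observations; each is the set of items that evaluate to 'true'.
dataset = [
    {"age>55", "statin", "t2dm"},
    {"age>55", "statin"},
    {"age>55"},
    {"statin", "t2dm"},
]
s = support({"statin"}, dataset)               # 3/4
c = confidence({"statin"}, {"t2dm"}, dataset)  # (2/4) / (3/4) = 2/3
```

Here Python's subset operator `<=` plays the role of "all items evaluate to true in the observation".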
Let the itemset $S$ define a \textbf{subpopulation}, consisting of all observations that support $S$. This subpopulation consists of all observations for which all items in $S$ evaluate to `true'. The \textbf{causal rule} $X \rightarrow Y|_S$ implies that the intervention $X$ has a causal effect on $Y$ in the subpopulation defined by $S$. The quantity of interest is the \textbf{causal effect}, which is the change in $Y$ in the subpopulation $S$ caused by $X$. We will formally define the metric used to quantify the causal effect shortly. \myparag{Rubin-Neyman Causal Model.} $X$ has a causal effect on $Y$ if (i) $X$ happens earlier than $Y$ and (ii) if $X$ had not happened, $Y$ would be different \cite{sekhon2008neyman}. Our study design ensures that the intervention $X$ precedes the outcome $Y$, but fulfilling the second condition requires that we estimate the outcome for the same patient both under intervention and without intervention. \mysubparag{Potential Outcomes.} Every patient in the dataset has two potential outcomes: $Y_0$ denotes their outcome had they not had the intervention $X$, and $Y_1$ denotes their outcome had they had the intervention. Typically, only one of the two potential outcomes can be observed. The observable potential outcome is the \textbf{actual} outcome (denoted by $Y$) and the unobservable potential outcome is called the \textbf{counterfactual} outcome. Using the definition of the counterfactual outcome, we can now define the metric for estimating the change in $Y$ caused by $X$. The \textbf{Average Treatment effect on the Treated} (ATT) is a widely used metric in the causal inference literature and is computed as follows: \begin{equation*} \mathrm{ATT}(X \rightarrow Y|_S) = \mathbb{E} [ Y_1 - Y_0 ]_{X=1} = \mathbb{E}[Y_1]_{X=1}-\mathbb{E}[Y_0]_{X=1}, \end{equation*} where $\mathbb{E}$ denotes the expectation and the $X=1$ in the subscript signals that we only evaluate the expectation over the treated patients $(X=1)$.
ATT aims to compute an average per-patient change caused by the intervention; $Y_0 = Y_1$ indicates that the intervention resulted in no change in outcome for the patient. \mysubparag{Biases.} Besides $X$, numerous other variables can also exert influence over $Y$, leading to biases in the estimates. To correct for these biases, we have to correctly account for these other effects. The quintessential tool for this purpose is the causal graph, depicted in Figure \ref{fig:causalGraph}. The nodes of this graph are sets of variables that play a causal role and the edges are causal effects. This is not a correlation graph (or dependence graph) because, for example, $U$ and $Z$ are dependent given $X$, yet there is no edge between them. Variables (items in $\mathcal{I}$) can exert influence on the effect of $X$ on $Y$ in three ways: they may only influence $X$, they may only influence $Y$, or they may influence both $X$ and $Y$. Accordingly, variables can be categorized into four categories: \begin{tabular}{cl} $V$& are variables that directly influence $Y$ and thus have \\ & \emph{direct effect} on $Y$; \\ $U$& are variables that only influence $Y$ through $X$ and \\ &thus have \emph{indirect effect} on $Y$;\\ $Z$& are variables that influence both $X$ and $Y$ and are \\ &called \emph{confounders}; and finally \\ $O$& are variables that do not influence either $X$ or $Y$ \\ &and hence can be safely ignored. \end{tabular} \begin{figure}[ht!] \vspace{-3mm} \centering \includegraphics[width=60mm]{causation.jpg} \caption{Rubin-Neyman Causal Model \label{fig:causalGraph}} \end{figure} Most of the causal inference literature assumes that the causal graph is known and true. In other words, we know a priori which variables fall into each of the categories $U$, $Z$, $V$ and $O$. In our case, only $X$ and $Y$ are specified and we have to infer to which category every other variable (item) belongs.
Since this inference relies on association (dependence) rather than causation, the discovered graph may contain errors: misclassifications of variables into the wrong category. For example, because of the marginal dependence between $U$ and $Y$, variables in $U$ can easily get misclassified as $Z$. Such misclassifications do not necessarily lead to biases, but they can cause loss of efficiency. \myparag{Problem Formulation.} Given a data set $D$, a set $\mathcal{S}$ of \textbf{subpopulation-defining} items, a set $\mathcal{X}$ of \textbf{intervention} items, a minimum support threshold $\theta$ and a minimum effect threshold $\eta$, we wish to find all subpopulations $S$ ($S\subset\mathcal{S}$) and all interventions $X$ ($X\subset\mathcal{X}$), with $X$ and $S$ disjoint, such that the causal rule $X\rightarrow Y|_S$ is frequent and its intervention set $X$ is closed w.r.t. our metric of causal effect, ATT. Note that the meaning of $\theta$, the minimum support threshold, is different from its meaning in the association rule mining literature. Typically, rules with support less than $\theta$ are considered uninteresting; in other cases, the threshold is simply a computational convenience; in our case, we set $\theta$ to a minimum value such that ATT is estimable for the discovered patterns. We call a causal rule \textbf{frequent} iff its support exceeds the user-specified minimum threshold $\theta$ \begin{displaymath} \mathrm{support}(X\rightarrow Y|_S)=\mathrm{support}(XYS)=\mathrm{P}(XYS) > \theta \end{displaymath} and we call an intervention set $X$ \textbf{closed} w.r.t. ATT iff \begin{displaymath} \forall x\in X,\quad |ATT(x\rightarrow Y|_{S,X\setminus x})| > \eta, \end{displaymath} where $\eta$ is the user-specified minimum causal effect threshold. In other words, an intervention set is closed in a subpopulation if every constituent intervention, given the remaining interventions, has an (absolute) effect greater than $\eta$. \mysubparag{Example.} In a medical setting, $\mathcal{X}$ may be drugs and $\mathcal{S}$ could be comorbid diseases.
Then $X$ is a drug combination that hopefully treats the set of diseases $S$. This set of drugs being \emph{closed} w.r.t. ATT means that dropping any drug from $X$ will reduce the overall efficacy of the treatment; the patient is not taking unnecessary drugs. An itemset is closed if its support is strictly higher than that of all of its super-itemsets. Analogously, an intervention set is closed if its absolute causal effect is strictly higher than that of all of its sub-itemsets.

\subsubsection{Frequent Causal Pattern Mining Algorithm}
We can now present our algorithm for causal pattern mining. At a very high level, the algorithm comprises two nested frequent-pattern enumeration loops \cite{goethals2003survey}. The outer loop enumerates subpopulation-defining itemsets $S$ using items in $\mathcal{S}$, while the inner loop enumerates intervention combinations using items in $\mathcal{X}\setminus\mathcal{S}$. More generally, $\mathcal{X}$ and $\mathcal{S}$ can overlap, but we do not consider that in this paper. Effective algorithms to this end exist \cite{han2000freespan,han2004mining}; we simply use Apriori \cite{agrawal1994fast}. Once the patterns are discovered, the ATT of the interventions is computed using one of the methods from Section \ref{sec:CausalMetrics} and the frequent, effective patterns are returned. On the surface, this approach appears very expensive; however, several novel, extremely effective pruning strategies are possible and we describe them below. \myparag{Potential Outcome Support Pruning.} Let $X$ be an intervention $k$-itemset, let $S$ be a subpopulation-defining itemset, and let $X$ and $S$ be disjoint. Further, let $X_{-i}$ be an itemset that evaluates to `true' iff all items of $X$ except the $i$th are `true' but the $i$th item is `false'. Using association rule mining terminology, all items in $X$ except the $i$th are present in the transaction.
\begin{defn}[Potential Outcome Support Pruning] We only need to consider itemsets $X$ such that \begin{displaymath} \min \{ \mathrm{support}(S,X),\ \mathrm{support}(S,X_{-1}),\ \ldots,\ \mathrm{support}(S,X_{-k}) \} > \theta. \end{displaymath} \end{defn} In order to be able to estimate the effect of $x\in X$ in the subpopulation $S$, we need to have observations with $x$ `true' and also with $x$ `false' in $S$. \begin{lemma} Potential Outcome Support Pruning is anti-monotonic. \end{lemma} \noindent\textsc{Proof:} Consider a causal rule $X\rightarrow Y|_S$. If the causal rule $X\rightarrow Y|_S$ is infrequent, then \begin{displaymath} \mathrm{support}(XS) < \theta \quad\lor\quad \exists i, \mathrm{support}(X_{-i}S) < \theta. \end{displaymath} If $X_{-i}S$ had insufficient support, then any extension of it with an intervention item $x$ will continue to have insufficient support, thus the rule $Xx\rightarrow Y|_S$ will have insufficient support. Likewise, if $XS$ had insufficient support, then any extension of it with an intervention item $x$ will also have insufficient support. \myparag{Pruning based on Causal Effect.} \begin{proposition} The effective causal rule pruning condition is anti-monotonic. \end{proposition} \textsc{Rationale:} To explain the rationale, let us return to the medical example, where $X$ is a combination of drugs forming a treatment. Assuming that the effects of drugs are additive, if a causal rule $X\rightarrow Y|_S$ is ineffective because \begin{displaymath} \exists x_i\in X,\qquad |\mathrm{ATT}(x_i\rightarrow Y|_{S,X\setminus x_i}) | < \eta, \end{displaymath} then forming a new rule $Xx_j\rightarrow Y|_S$ will also be ineffective because \begin{displaymath} |\mathrm{ATT}(x_i \rightarrow Y|_{S,x_j,X\setminus x_i}) | < \eta \end{displaymath} will continue to hold. In the presence of positive interactions among the drugs (drugs that reinforce each other's effect), this statement may not hold true.
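The Potential Outcome Support Pruning condition from the definition above can be checked directly; the sketch below (with hypothetical items and thresholds) keeps an intervention set only if the pattern and all of its leave-one-out variants are sufficiently supported:

```python
def passes_po_support_pruning(X, S, dataset, theta):
    """Check Potential Outcome Support Pruning: support(S, X) and every
    leave-one-out pattern support(S, X_{-i}) must exceed theta.
    X_{-i}: all items of X present except the i-th, which is absent."""
    n = len(dataset)

    def frac(predicate):
        return sum(predicate(obs) for obs in dataset) / n

    if frac(lambda o: S <= o and X <= o) <= theta:
        return False
    for i in X:
        rest = X - {i}
        # Observations supporting S and X_{-i}: the rest of X present, i absent.
        if frac(lambda o: S <= o and rest <= o and i not in o) <= theta:
            return False
    return True

dataset = [
    {"s", "x1", "x2"}, {"s", "x1", "x2"}, {"s", "x1"},
    {"s", "x2"}, {"s"}, {"x1", "x2"},
]
keep = passes_po_support_pruning({"x1", "x2"}, {"s"}, dataset, theta=0.1)   # True
prune = passes_po_support_pruning({"x1", "x2"}, {"s"}, dataset, theta=0.2)  # False
```

Raising the threshold prunes the set in the second call because one of the leave-one-out patterns ($X_{-1}$, supported by a single observation) drops below $\theta$.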
Besides this statistical reasoning, one can question why a patient should receive a drug that has no effect within the combination. \subsection{Causal Estimation Methods}\label{sec:CausalMetrics} ATT, our metric of interest, with respect to a single intervention $x$ in a subpopulation $S$ is defined as \begin{displaymath} \mathrm{ATT}(x\rightarrow Y|_S) =\mathbb{E}\left[Y_1 - Y_0\right]_{S,X=1}, \end{displaymath} which is the expected difference between the potential outcome under treatment $Y_1$ and the potential outcome without treatment $Y_0$ in patients with $S$ who actually received treatment. Since we consider treated patients, the potential outcome $Y_1$ can be observed, while the potential outcome $Y_0$ cannot; thus at least one of the two must be estimated. The methods we present below differ in which potential outcome they estimate and how they estimate it. For the discussion below, we consider the variables $X$, $Z$, $U$ and $V$ from the causal graph in Figure \ref{fig:causalGraph}. $X$ is a single intervention; $U$, $V$ and $Z$ can be sets of items. For regression models, we will denote the matrices defined by $U$, $V$ and $Z$ in the subpopulation $S$ as $U$, $V$ and $Z$ (the same letters as the variable sets). \myparag{Counterfactual Confidence (CC).} This is the simplest method. We simply assume that the patients who receive the intervention ($X=1$) and those who do not ($X=0$) do not differ in any important respect that would influence $Y$. Under this assumption, $Y_1$ in the treated is simply the actual outcome in the treated, and the potential outcome $Y_0$ is simply the actual outcome in the non-treated ($X=0$). Thus \begin{eqnarray*} \mathrm{ATT} &=& \mathrm{conf}((X=1) \rightarrow Y|_S) - \mathrm{conf}((X=0) \rightarrow Y|_S) \\ &=&\mathrm{P}(Y|S, X=1)-\mathrm{P}(Y|S, X=0). \end{eqnarray*} In what follows, to improve readability, we drop the $S$ subscript; all evaluations take place within the subpopulation $S$. \myparag{Direct Adjustment (DA).
} We cannot estimate $Y_0$ in the treated ($X=1$) as the actual outcome $Y$ in the untreated, because the treated and untreated populations can differ significantly in variables such as $Z$ and $V$ that influence $Y$. In Direct Adjustment, we attempt to directly remove the effect of $V$ and $Z$ by including them in a regression model. Since a regression model relates the means of the predictors to the mean of the outcome, we can remove the effect of $V$ and $Z$ by making their means 0. Let $R$ be a generalized linear regression model, predicting $Y$ via a link function $g$, \begin{displaymath} g(Y|V, Z, X) = \beta_0+\beta_VV+\beta_ZZ+\beta_XX. \end{displaymath} Then the (link-transformed) potential outcome under treatment is $g(Y_1)=\beta_0+\beta_VV+\beta_ZZ+\beta_X$ and the potential outcome without treatment is $g(Y_0)=\beta_0+\beta_VV+\beta_ZZ$. The ATT is then \begin{eqnarray*} \mathrm{ATT}&=&\mathbb{E}\left[g^{-1}(Y_1|V,Z,X=1)\right]_{X=1} -\\ &&\qquad \mathbb{E}\left[g^{-1}(Y_0|V,Z,X=0)\right]_{X=1}, \end{eqnarray*} where $g^{-1}(Y_1|V,Z,X=1)$ is the prediction for an observation with the observed $V$ and $Z$ but with $X$ set to 1. The $\mathbb{E}(\cdot)_{X=1}$ notation signifies that these expectations of the predictions are taken only over patients who actually received the treatment. The advantage of DA over CC is twofold. First, it can adjust for $Z$ and $V$ as long as the model specification is correct, namely the interaction terms that may exist among $Z$ and $V$ are specified correctly. Second, we get correct estimates even if we ignore $U$, because $U$ is conditionally independent of $Y$ given $X$. Unfortunately, this second advantage is only theoretical: we have to infer from the data whether a variable is a predictor of $Y$, and since $U$ is marginally dependent on $Y$, we will likely adjust for $U$ even though we do not need to.
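To illustrate the mechanics of DA, here is a minimal self-contained Python sketch that uses an identity link (a linear probability model standing in for a general link function $g$) fitted via the normal equations; the data layout and all names are our own assumptions, not the paper's implementation:

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def ols(rows, predictors, outcome):
    """Least-squares fit of `outcome` on an intercept plus `predictors`,
    via the normal equations (X'X) beta = X'y."""
    X = [[1.0] + [float(r[p]) for p in predictors] for r in rows]
    y = [float(r[outcome]) for r in rows]
    k = len(X[0])
    XtX = [[sum(row[a] * row[b] for row in X) for b in range(k)] for a in range(k)]
    Xty = [sum(X[i][a] * y[i] for i in range(len(X))) for a in range(k)]
    return solve(XtX, Xty)

def att_direct_adjustment(rows):
    """DA with an identity link: predict each treated patient's outcome
    with X set to 1 and to 0 (same V, Z), and average the difference.
    Without interaction terms this reduces to the coefficient of X."""
    b0, bV, bZ, bX = ols(rows, ["v", "z", "x"], "y")
    treated = [r for r in rows if r["x"] == 1]
    diffs = [(b0 + bV * r["v"] + bZ * r["z"] + bX)
             - (b0 + bV * r["v"] + bZ * r["z"]) for r in treated]
    return sum(diffs) / len(diffs)

# toy data: y is exactly linear in (v, z, x), so OLS recovers the
# generating coefficients and the true ATT of -0.15
rows = [{"v": v, "z": z, "x": x, "y": 0.25 + 0.05 * v + 0.10 * z - 0.15 * x}
        for v in (0, 1) for z in (0, 1) for x in (0, 1)]
print(round(att_direct_adjustment(rows), 4))  # -0.15
```

With a non-identity link the two predictions no longer differ by a constant, which is why the definition averages the $g^{-1}$-transformed predictions over the treated patients.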
\myparag{Counterfactual Model (CM).} In this technique, we build an explicit model for the potential outcome without treatment $Y_0$ using the patients with $X=0$. Specifically, we build a model \begin{equation*} g(Y|V,Z,X=0)=\beta_0+\beta_VV+\beta_ZZ \end{equation*} and estimate the potential outcome as \begin{equation*} g(Y_0|V,Z)=g(Y|V,Z,X=0). \end{equation*} The ATT is then \begin{equation*} \mathrm{ATT} = \mathrm{P}(Y|X=1) - \mathbb{E}\left[ g^{-1}(Y_0|V,Z)\right]_{X=1}. \end{equation*} Similarly to Direct Adjustment, the Counterfactual Model does not depend on $U$. However, in the case of the Counterfactual Model, we only consider the population with $X=0$. In this population, $U$ and $Y$ are independent, thus we will not include $U$ variables in the model. \myparag{Propensity Score Matching (PSM).} The central idea of Propensity Score Matching is to create a new population in which patients are comparable in all relevant respects, so that the expectation of the potential outcome in the untreated equals the expectation of the actual outcome in the untreated. Patients are matched based on their propensity of receiving treatment. This propensity is computed by a logistic regression model with treatment as the dependent variable, \begin{displaymath} \log\frac{\mathrm{P}(X)}{1-\mathrm{P}(X)}=\beta_0 + \beta_VV+\beta_ZZ. \end{displaymath} Patient pairs are formed such that in each pair one patient received treatment and the other did not, and their propensities for treatment differ by no more than a user-defined caliper difference $\rho$. The matched population has an equal number of treated and untreated patients and is balanced on $V$ and $Z$; thus the patients are comparable in terms of their baseline risk of $Y$. Hopefully, the only factor causing a difference in outcome is the treatment.
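The matching step can be sketched as follows; for simplicity this sketch estimates the propensity score by cell frequencies over discrete covariates instead of the logistic regression described above, and uses greedy one-to-one caliper matching (the data layout and names are our own):

```python
from collections import defaultdict

def propensity_scores(rows):
    """Propensity P(X=1 | v, z), estimated here by simple cell frequencies
    rather than logistic regression (a simplification that works for a
    small number of discrete covariates)."""
    counts = defaultdict(lambda: [0, 0])   # (v, z) -> [n_treated, n_total]
    for r in rows:
        key = (r["v"], r["z"])
        counts[key][0] += r["x"]
        counts[key][1] += 1
    return [counts[(r["v"], r["z"])][0] / counts[(r["v"], r["z"])][1] for r in rows]

def att_psm(rows, rho=0.05):
    """Greedy caliper matching: pair each treated patient with an unused
    untreated patient whose propensity differs by at most rho, then take
    the outcome difference in the matched population."""
    ps = propensity_scores(rows)
    treated = sorted((i for i, r in enumerate(rows) if r["x"] == 1), key=lambda i: ps[i])
    control = sorted((i for i, r in enumerate(rows) if r["x"] == 0), key=lambda i: ps[i])
    used, pairs = set(), []
    for i in treated:
        best = min((j for j in control if j not in used and abs(ps[i] - ps[j]) <= rho),
                   key=lambda j: abs(ps[i] - ps[j]), default=None)
        if best is not None:
            used.add(best)
            pairs.append((i, best))
    y1 = sum(rows[i]["y"] for i, _ in pairs) / len(pairs)
    y0 = sum(rows[j]["y"] for _, j in pairs) / len(pairs)
    return y1 - y0

# toy cohort: within each (v, z) cell, outcomes are deterministic, so the
# matched contrast recovers the ATT of the treated exactly (-0.1125 here)
rows = ([{"v": 0, "z": 1, "x": 1, "y": 0.10} for _ in range(3)]
        + [{"v": 0, "z": 1, "x": 0, "y": 0.25} for _ in range(3)]
        + [{"v": 0, "z": 0, "x": 1, "y": 0.05}]
        + [{"v": 0, "z": 0, "x": 0, "y": 0.05} for _ in range(3)])
print(round(att_psm(rows, rho=0.1), 4))  # -0.1125
```

A production implementation would fit the logistic model and typically match without replacement in random order; the greedy nearest-match loop here is only meant to show the caliper logic.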
For estimating the ATT, the potential outcome without treatment is estimated from the actual outcomes of the patients in the matched population who did not receive treatment: \begin{eqnarray*} ATT &=& \mathbb{E}\left[ Y_1 - Y_0 \right] \\ &=& \mathrm{P}(Y|X=1,M)-\mathrm{P}(Y|X=0,M), \end{eqnarray*} where $M$ denotes the matched population. Among the methods we consider, propensity score matching enforces the patient comparability criterion most strictly; however, it is susceptible to misspecification of the propensity regression model, which can erode the quality of the matching. \myparag{Stratified Non-Parametric (SN).} In the stratified estimation, we directly compute the expectation via stratification. The assumption is that the patients in each stratum are comparable in all relevant respects and only differ in the presence or absence of the intervention. In each stratum, we can estimate the potential outcome $Y_0$ in the treated as the actual outcome $Y$ in the untreated: \begin{eqnarray*} ATT &=& \mathbb{E}\left[Y_1-Y_0\right]_{X=1} \\ &=& \sum_l P(l|X=1) \left[ P(Y_1|l,X=1)-P(Y_0|l,X=1) \right] \\ &=& \sum_l P(l|X=1) \left[ P(Y|l,X=1)-P(Y|l,X=0) \right], \end{eqnarray*} where $l$ iterates over the combined levels of $V$ and $Z$. If we can identify the items that fall into $U$, then we can ignore them; otherwise, we should include them in the stratification as well. The stratified method makes very few assumptions and should arrive at the correct estimate as long as each of the strata is sufficiently large. The key disadvantage of the stratified method lies in stratification itself: when the number of items across which we need to stratify is too large, we may end up dividing the population into excessively many small subpopulations (strata) and become unable to estimate the causal effect in many of them, thus introducing bias into the estimate. \subsection{Results} After describing our data and study design, we present three evaluations of the proposed methodology.
The first evaluation demonstrates the computational efficiency of our pruning methodologies, isolating the effect of each pruning method: (i) Apriori support-based pruning, (ii) Potential Outcome Support Pruning, and (iii) Potential Outcome Support Pruning in conjunction with Effective Causal Rule Pruning. In the second section, we provide a qualitative evaluation, looking at patterns involving statin. We attempt to use the extracted patterns to explain the controversial findings that exist in the literature regarding the effect of statin on diabetes. Finally, in order to compare the treatment effect estimates to a ground truth, which does not exist for real drugs, we simulate a data set using proportions derived from the Mayo Clinic data set. \myparag{Data and Study Design.} In this study we utilized a large cohort of Mayo Clinic patients with data between 1999 and 2013. We included all adult patients (69,747) with research consent. The baseline of our study was set at Jan. 1, 2005. We collected lab results, medications, vital signs and status, and medication orders during a 6-year \emph{retrospective period} between 1999 and the baseline to ascertain the patients' baseline comorbidities. From this cohort, we excluded all patients with a diagnosis of diabetes before the baseline (478 patients), patients with missing fasting plasma glucose measurements (14,559 patients), patients whose lipid health could not be determined (1,023 patients), and patients with unknown hypertension status (498 patients). Our final study cohort consists of 52,139 patients who were followed until the summer of 2013. Patients were phenotyped during the retrospective period. Comorbidities of interest include Impaired Fasting Glucose (IFG), abdominal obesity, Hypertension (HTN; high blood pressure) and hyperlipidemia (HLP; high cholesterol). For each comorbidity, the phenotyping algorithm classified patients into three broad levels of severity: normal, mild and severe.
Normal patients show no sign of disease; mild patients are either untreated and out of control or are controlled using first-line therapy; severe patients require more aggressive therapy. IFG is categorized into normal and pre-diabetic, the latter indicating impaired fasting plasma glucose levels that do not yet meet the diabetes criteria. For this study, progression to T2DM within 5 years from baseline (i.e., Jan. 1, 2005) was chosen as our outcome of interest. Out of the 52,139 patients, 3,627 progressed to T2DM, 41,028 did not progress to T2DM, and the remaining 7,484 patients dropped out of the study. In Table \ref{tbl:stats} we present statistics about our patient population. \begin{table}[ht] \centering \small \begin{tabular}{lrr} \hline &\multicolumn{2}{c}{T2DM}\\ \cline{2-3} & Present & Absent \\ \hline Total Number of Patients & 3,627 & 41,028 \\ Average Age & 44.73 & 35.58 \\ Male (\%) & 51 & 41 \\ Female (\%) & 49 & 59 \\ \hline \multicolumn{3}{c}{Patient Diagnosis Status (\%)}\\ NormFG & 42 & 84 \\ PreDM & 58 & 16 \\ Normal Obesity & 29 & 59 \\ Mild Obesity & 25 & 30\\ Severe Obesity & 46 & 11\\ Normal Hypertension & 48 & 69\\ Mild Hypertension & 33 & 23\\ Severe Hypertension & 19 & 08 \\ Normal Hyperlipidemia & 12 & 29\\ Mild Hyperlipidemia & 72 & 64\\ Severe Hyperlipidemia & 16 & 07\\ \hline \multicolumn{3}{c}{Patient Medication Status (\%)}\\ Statin & 26 & 11\\ Fibrates & 03 & 01\\ Cholesterol.Other & 02 & 01\\ Acerab & 17 & 07\\ Diuret & 18 & 07\\ CCB & 08 & 04\\ BetaBlockers & 22 & 10 \\ HTN.Others & 01 & 01\\ \hline \end{tabular} \caption{Demographic statistics of the patient population}\label{tbl:stats} \end{table} \subsubsection{Pruning Efficiency} In our work, we proposed two new pruning methods. First, we have the Potential Outcome Support Pruning, which aims to eliminate patterns for which the ATT is not estimable.
Second, we have the Effective Causal Rule Pruning, where we eliminate patterns that do not improve treatment effectiveness relative to their subitemsets. In Figure \ref{causa} we present the number of patterns discovered using (i) the traditional Apriori support-based pruning, (ii) our proposed Potential Outcome Support Pruning (POSP), and (iii) POSP in conjunction with Effective Causal Rule Pruning (ECRP). \begin{figure}[ht!] \centering \includegraphics[width=60mm, height=50mm]{cau.png} \caption{Comparison of Pruning Techniques \label{causa}} \end{figure} The number of patterns discovered by POSP is strictly less than the number of patterns discovered by the Apriori pruning. POSP in conjunction with ECRP is very effective. \subsubsection{Statin} In this section, we demonstrate that the proposed causal rule mining methodology can be used to discover non-trivial patterns from the above diabetes data set. In recent years, statins, a class of cholesterol-lowering agents, have been increasingly prescribed. High cholesterol (hyperlipidemia) is linked to cardiovascular mortality, and the efficacy of statins in reducing cardiovascular mortality is well documented. However, as evidenced by a 2013 BMJ editorial \cite{huupponen2013statins} devoted to this topic, statins are surrounded by controversy. In patients with normal blood sugar levels (labeled NormFG), statins have a detrimental effect: they increase the risk of diabetes; yet in pre-diabetic patients (PreDM), they appear to have no effect. What we demonstrate below is that this phenomenon is simply disease heterogeneity. First, we describe how this problem maps to the causal rule mining problem. Our set of interventions ($\mathcal{X}$) consists of statin, and our subpopulation-defining variables ($\mathcal{S}$) consist of the various levels of HTN, HLP and IFG. Our interest is in the effect of statin ($x$) on T2DM ($Y$) in all possible subpopulations $S$, $S\subset\mathcal{S}$.
In this setup, HTN, which is associated both with hyperlipidemia (and statin use) and with T2DM, is a confounder ($Z$). Cholesterol drugs other than statins, say fibrates, are in the $U$ category: they are predictive of statin use (patients on monotherapy who take fibrates do not take statins), but they have no effect on $Y$, because their effect is already incorporated into the hyperlipidemia severity variables that define the subpopulation. Variables that influence diabetes but not statin use (say, a diabetes drug) would fall into the $V$ category. All subpopulations have variables that fall into $Z$ and $U$, and some subpopulations may also have $V$. The HLP variable in Table \ref{tbl:stats} uses statin as part of its definition, thus we constructed two new variables. The first one is HLP1, a variable at the borderline between HLP-Normal and HLP-Mild, consisting of untreated patients with mildly abnormal lab results (these fall into HLP-Normal) and patients who are diagnosed and receive a first-line treatment (they fall into HLP-Mild). Comparability is central to estimating causal effects, and these patients are comparable at baseline. Similarly, we also created another variable, HLP2, at the border of HLP-Mild and HLP-Severe, again consisting of patients who are comparable in relevant aspects of their health at baseline.
\begin{table}[ht] \centering \begin{tabular}{lrrrrr} \hline $S$ & CC & DA & CM & PSM & SN \\ \hline PreDM & 0.145 & 0.022 & 0.010 & 0.022 & 0.017 \\ NormFG & 0.060 & 0.023 & 0.034 & 0.017 & 0.029 \\ HLP1 & 0.078 & 0.019 & 0.014 & 0.010 & 0.010 \\ HLP2 & 0.021 & -0.013 & -0.010 & -0.021 & -0.015 \\ PreDM,HLP1 & 0.067 & 0.018 & 0.021 & 0.004 & 0.002 \\ PreDM,HLP2 & 0.001 & -0.038 & -0.031 & -0.048 & -0.043 \\ NormFG,HLP1 & 0.043 & 0.020 & 0.015 & 0.014 & 0.013 \\ NormFG,HLP2 & 0.017 & -0.002 & -0.002 & -0.005 & -0.004 \\ \hline \end{tabular} \caption{ATT due to statin in various subpopulations $S$ as estimated by the 5 proposed methods.}\label{tbl:statin} \end{table} Table \ref{tbl:statin} presents the ATT estimates obtained by the various methods proposed in Section \ref{sec:CausalMetrics} for some of the most relevant subpopulations. Negative ATT indicates a beneficial effect and positive ATT indicates a detrimental effect. Counterfactual confidence (CC) estimates statin to be detrimental in all subpopulations. While statins are known to have a detrimental effect in patients with normal glucose levels \cite{huupponen2013statins}, it is unlikely that statins are universally detrimental, even in patients with severe hyperlipidemia, the very disease they are supposed to treat. The results from DA, CM, PSM and SN are similar, with PSM and SN having larger effect sizes in general. The picture that emerges from these results is that patients with severe hyperlipidemia appear to benefit from statin treatment even in terms of their diabetes outcomes, while statin treatment is moderately detrimental for patients with mild hyperlipidemia. Bootstrap estimation was used to compute the statistical significance of these results. For brevity, we report the results only for PSM. The estimates are significant in the following subpopulations: NormFG, PreDM+HLP2 (p-values $<.001$) and NormFG+HLP1 (p-value .05). The true ATT in these subpopulations is not known.
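The bootstrap significance computation mentioned above can be sketched generically; the following is a plain percentile bootstrap in Python, with an illustrative counterfactual-confidence estimator and toy data (all names, data, and parameters are our own, not the paper's implementation):

```python
import random

def bootstrap_ci(rows, estimator, n_boot=500, alpha=0.05, seed=1):
    """Percentile bootstrap CI for an ATT estimator: resample patients
    with replacement, re-estimate, and take the alpha/2 and 1-alpha/2
    quantiles of the bootstrap distribution."""
    rng = random.Random(seed)
    stats = sorted(
        estimator([rows[rng.randrange(len(rows))] for _ in range(len(rows))])
        for _ in range(n_boot)
    )
    return stats[int(alpha / 2 * n_boot)], stats[int((1 - alpha / 2) * n_boot) - 1]

# toy example with the counterfactual-confidence estimator on (x, y) pairs
def cc_att(rows):
    y1 = [y for x, y in rows if x == 1]
    y0 = [y for x, y in rows if x == 0]
    return sum(y1) / len(y1) - sum(y0) / len(y0)

rows = [(1, 1)] * 150 + [(1, 0)] * 350 + [(0, 1)] * 50 + [(0, 0)] * 450
lo, hi = bootstrap_ci(rows, cc_att)
print(lo > 0)  # True: the 95% CI excludes 0, so this estimate is significant
```

Any of the five estimators can be passed in place of `cc_att`; for PSM the matching is redone inside each bootstrap replicate, which is one source of its extra variance.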
To investigate the accuracy that the various methods achieve, we use simulated data that is largely based on this example \cite{huupponen2013statins, castrostatin}. \subsubsection{Synthetic Data} In this section, we describe five experiments utilizing synthetic data sets, each of which introduces a new potential source of bias. Our objective is to illustrate the ability of the five methods from Section \ref{sec:CausalMetrics} to adjust for these biases. We compare their ATT estimates to the true ATT we used to generate the data set and discuss the reasons for their success or failure.\\ The rows of Table \ref{tbl:synthetic} correspond to the synthetic data sets in increasing order of the biases we introduced, and the columns correspond to the methods: Conf (confidence), CC (Counterfactual Confidence), DA (Direct Adjustment), CM (Counterfactual Model), PSM (Propensity Score Matching) and SN (Stratified Non-Parametric). \\ Some of these methods, namely DA, CM, PSM and SN, take the causal graph structure into account while estimating the ATT. Specifically, they require the information whether a variable is a confounder ($Z$), has a direct effect ($V$), an indirect effect ($U$), or no effect ($O$). PSM and SN are not sensitive to the inclusion of superfluous variables; such variables simply decrease the methods' efficiency. \\ In all of the data sets, we use a notation consistent with Figure \ref{fig:causalGraph}: $Z$ is the central disease with outcome $Y$; $X$ is the intervention of interest that treats $Z$; $V$ is another disease with a direct causal effect on $Y$, but $V$ is independent of $X$; and $U$ is a third disease, which can be treated with $X$, but has no impact on $Y$. All data sets contain 5000 observations. \experiment{I. Direct Causal Effect from $V$.} We assume that every patient in the cohort has disease $Z$ at the same severity. They are all comparable w.r.t. $Z$. 30\% of the patients are subject to the intervention $X$ aimed at treating $Z$, while the others are not.
Untreated patients face a 25\% chance of having $Y$, while treated patients have only a 10\% chance. Some patients, 20\% of the population, also have disease $V$, which directly affects $Y$: it increases the probability of $Y$ by 5\%. \\ In this example the true ATT is -.15, as $X$ reduces the chance of $Y$ by 15\%. Our causal graph dictates that $X$ and $V$ be marginally independent, hence this effect is homogeneous across the levels of $V$. (Otherwise $V$ would become predictive of $X$ and would become a confounder; confounding is discussed in Experiments III--V.) All methods estimated the ATT correctly, because the ATT does not depend on $V$. We can demonstrate this by stratifying on $V$ and using the marginal independence of $X$ and $V$: \begin{eqnarray*}\hspace{-4mm} ATT &=& \mathbb{E}\left[ \mathrm{P}(Y|X=1) - \mathrm{P}(Y|X=0) \right] \\ &=& \sum_{v\in V} \mathrm{P}(V=v) \left[ \mathrm{P}(Y|V=v,X=1)-\mathrm{P}(Y|V=v,X=0) \right] \\ &=& \sum_{v\in V} \left[ \mathrm{P}(Y,V=v|X=1)-\mathrm{P}(Y,V=v|X=0) \right] \\ &=& \mathrm{P}(Y|X=1) - \mathrm{P}(Y|X=0), \end{eqnarray*} where $v$ denotes the levels of $V$. The marginal independence of $X$ and $V$ is used in step three: \begin{displaymath} \mathrm{P}(Y|V,X)=\frac{\mathrm{P}(Y,V,X)}{\mathrm{P}(V,X)}=\frac{\mathrm{P}(Y,V|X)\mathrm{P}(X)}{\mathrm{P}(X,V)}=\frac{\mathrm{P}(Y,V|X)}{\mathrm{P}(V)}. \end{displaymath} \experiment{II. Indirect Causal Effect.} The setup for this experiment is the same as for the ``Direct Causal Effect'' experiment, except that we have disease $U$ instead of $V$. Just like $Z$, disease $U$ is also treated by $X$, but $U$ has no direct effect on $Y$; its effect is indirect, through $X$. $U$ is thus independent of $Y$ given $X$. The true ATT continues to be -.15. \\ Again, the ATT does not depend on $U$, hence all methods estimated it correctly. To demonstrate that the ATT does not depend on $U$, we use stratification and the conditional independence of $Y$ and $U$.
\begin{eqnarray*} ATT &=& \mathbb{E}\left[ \mathrm{P}(Y|X=1)-\mathrm{P}(Y|X=0) \right] \\ &=& \sum_{u\in U} \left[ \mathrm{P}(Y|U=u,X=1)\mathrm{P}(U=u|X=1) \right. \\ &&\qquad \left. - \mathrm{P}(Y|U=u,X=0)\mathrm{P}(U=u|X=0) \right] \\ &=& \sum_{u\in U} \left[ \mathrm{P}(Y|X=1)\mathrm{P}(U=u|X=1) \right. \\ && \qquad \left.- \mathrm{P}(Y|X=0)\mathrm{P}(U=u|X=0) \right] \\ &=& \mathrm{P}(Y|X=1)\sum_u\mathrm{P}(U=u|X=1) - \\ &&\qquad \mathrm{P}(Y|X=0)\sum_u\mathrm{P}(U=u|X=0) \\ &=& \mathrm{P}(Y|X=1)-\mathrm{P}(Y|X=0) \end{eqnarray*} \experiment{III. Confounding.} In this experiment, we consider the simplest case of confounding, involving a single disease $Z$, a single treatment $X$ and outcome $Y$. 20\% of the patients have disease $Z$ and 95\% of the diseased patients are treated with $X$, while 5\% are not. All treated patients have $Z$. 25\% of the untreated patients ($Z=1$ and $X=0$) have outcome $Y$; 10\% of the treated patients ($Z=1$ and $X=1$) have the outcome; and only 5\% of the healthy patients ($Z=0$) have it. The true ATT is -.15. \\ In the presence of confounding, the counterfactual confidence and ATT do not coincide. With $z$ denoting the levels of $Z$ and $\mathrm{P}(z)$ being a shorthand for $\mathrm{P}(Z=z)$, \begin{eqnarray*} ATT &=& \mathbb{E} \left[ \mathrm{P}(Y|X=1)-\mathrm{P}(Y|X=0)\right] \\ &=&\sum_z \mathrm{P}(z) \left[ \mathrm{P}(Y|X=1,z) - \mathrm{P}(Y|X=0,z) \right], \end{eqnarray*} while the counterfactual confidence (CC) is \begin{eqnarray*} CC &=& \mathrm{P}(Y|X=1)-\mathrm{P}(Y|X=0)\\ &=& \sum_z \left[ \mathrm{P}(Y|X=1,z)\mathrm{P}(z|X=1) \right.\\ &&\qquad \left. - \mathrm{P}(Y|X=0,z)\mathrm{P}(z|X=0) \right]. \end{eqnarray*} When $\mathrm{P}(z|X)\neq\mathrm{P}(z)$, these quantities do not coincide. However, any method that can estimate $\mathrm{P}(Y|X,Z)$ for all levels of $Z$ and $X$ will arrive at the correct ATT estimate. 
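The gap between the counterfactual confidence and the ATT in this confounding setup is easy to reproduce with a short Monte Carlo sketch (pure Python; the generating probabilities follow the description above, while the sample size and names are our own choices):

```python
import random

def simulate_confounding(n=400_000, seed=0):
    """Experiment III generator: 20% of patients have disease Z, 95% of
    them are treated, and all treated patients have Z.  Outcome rates:
    25% for untreated patients with Z, 10% for treated, 5% for healthy."""
    rng = random.Random(seed)
    rows = []
    for _ in range(n):
        z = rng.random() < 0.20
        x = z and rng.random() < 0.95
        p_y = 0.10 if x else (0.25 if z else 0.05)
        rows.append((int(z), int(x), int(rng.random() < p_y)))
    return rows

def prob_y(rows, **cond):
    """Empirical P(Y | cond), cond being constraints on z and/or x."""
    sel = [y for z, x, y in rows
           if ("z" not in cond or z == cond["z"])
           and ("x" not in cond or x == cond["x"])]
    return sum(sel) / len(sel)

rows = simulate_confounding()
cc = prob_y(rows, x=1) - prob_y(rows, x=0)
# stratified ATT with weights P(z | X=1); since every treated patient
# has z=1 here, only the z=1 stratum contributes
att = prob_y(rows, z=1, x=1) - prob_y(rows, z=1, x=0)
print(f"CC = {cc:+.3f}, stratified ATT = {att:+.3f}")
# CC comes out positive (analytically +.0475), with the wrong sign,
# while the stratified estimate is close to the true ATT of -.15
```

The analytic value of CC here is $0.10 - (0.01\cdot 0.25 + 0.80\cdot 0.05)/0.81 \approx +.0475$, consistent with the $+.047$ reported for Experiment III in the results table.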
We used logistic regression in our implementation of the Direct Adjustment method, which can estimate $\mathrm{P}(Y|X,Z)$ when $X$ and $Z$ have no interactions. Note that the causal graph admits an interaction between $X$ and $Z$; thus model misspecification can cause biases in the estimate. \experiment{IV. Confounding with Indirect Effect.} In addition to the setup of the Confounding experiment, we now also have an indirect causal effect from $U$. We have two diseases, $Z$ and $U$, each of which can be treated with $X$. 20\% of the population has $Z$ and, independently, 20\% has $U$. 25\% of the patients who have $Z$ and no treatment ($X=0$) have $Y$, while only 10\% of the treated ($X=1$) patients have it, regardless of whether the patient has $U$. (If the probability of $Y$ were affected by $U$, $U$ would be another confounder rather than have an indirect effect.) \\ $X$ has a beneficial ATT of -.15 in patients with $Z=1$ (and $X=1$) and has no effect in patients with $Z=0$ (who get $X$ because of $U$). Thus the true ATT is -.0833. \\ In this experiment, the counterfactual model was the best-performing model. The counterfactual model estimates the ATT through the definition \begin{displaymath} \mathrm{ATT} = \mathbb{E}\left[ \mathrm{P}(Y_1|X=1) - \mathrm{P}(Y_0|X=1)\right], \end{displaymath} where $Y_0$ is the potential outcome the patient would have without treatment ($X=0$) and $\mathrm{P}(Y_0|X=1)$ is the counterfactual probability of $Y$ (the probability of $Y$ had they not received $X$) in the population who actually got $X=1$. Note that the potential outcome $Y_1|X=1$ in the patients who actually got $X=1$ is the observed outcome $Y|X=1$.
With $u$ and $z$ denoting the levels of $U$ and $Z$, respectively, and $\mathrm{P}(u)$ being a shorthand for $\mathrm{P}(U=u)$, \begin{eqnarray*} ATT &=& \mathbb{E}\left[ \mathrm{P}(Y|X=1) - \mathrm{P}(Y_0|X=1)\right] \\ &=& \sum_u \sum_z \mathrm{P}(u,z) \left[ \mathrm{P}(Y|X=1,u,z)-\mathrm{P}(Y_0|X=1,u,z) \right] \\ &=& \sum_z \mathrm{P}(z) \left[ \mathrm{P}(Y|X=1,z)-\mathrm{P}(Y_0|X=1,z) \right] \\ &=& \sum_z \mathrm{P}(z) \left[ \mathrm{P}(Y|X=1,z)-\mathrm{P}(Y|X=0,z) \right], \end{eqnarray*} which coincides with the data generation mechanism, hence the estimate is correct. \\ In the derivation, step 2 holds because $U$ and $Z$ are independent given $X$, and step 3 uses the fact that the counterfactual model estimates $\mathrm{P}(Y_0|X=1,z,u)$ from the untreated patients, thus \begin{displaymath} \mathrm{P}(Y_0|X=1,z,u)=\mathrm{P}(Y|X=0,z,u)=\mathrm{P}(Y|X=0,z). \end{displaymath} \experiment{V. Confounding with Direct and Indirect Effects.} In this experiment, we have three diseases: our index disease $Z$, which is a confounder; $U$, having an indirect effect on $Y$ via $X$; and $V$, having a direct effect on $Y$. 20\% of the population has each of $Z$, $V$ and $U$ independently. 95\% of patients with $Z$ or $U$ get the intervention $X$. 25\% of the untreated patients with $Z$ get $Y$, while only 10\% of the treated patients do, regardless of whether they have $U$. Patients with $V$ face a 5\% increase in their chance of experiencing the outcome $Y$. \\ $X$ has a beneficial ATT of -.15 in patients with $Z=1$ and has no effect in patients with $Z=0$ (who get $X$ because of $U$). Whether a patient has $V$ does not influence the effect of $X$. The true ATT is thus -.0833. \\ None of the methods estimated the effect correctly, but Propensity Score Matching came closest. An analytic derivation of why it performed well is outside the scope of this paper, but in essence, its success is driven by its ability to maximally exploit the independence relationships encoded in the causal graph.
It can ignore $V$ when it constructs the propensity score model, because $X$ and $V$ are independent (when $Y$ is not given); and it can ignore $U$ and $V$ when it computes the ATT in the propensity-matched population. On the other hand, the causal graph admits interactions among $U$, $Z$ and $X$, thus a logistic regression model as the propensity score model can be subject to model misspecification. \\ The Stratified Non-Parametric method, which is essentially just a direct implementation of the definition of the ATT, underestimated the ATT by almost 25\%. The reason lies in the excessive stratification across all combinations of the levels of $U$, $V$, and $Z$. Even with just three variables, most strata did not contain sufficiently many patients (either treated or untreated) to estimate $\mathrm{P}(Y|X,u,v,z)$. In the discussion, we describe remedies to overcome this problem. \begin{table}[!ht] \begin{tabular}{lrrrrrr} \hline & Conf & CC & DA & CM & PSM & SN \\ \hline I. & +.110 & -.150 & -.150 & -.150 & NA & -.150 \\ II. & +.099 & -.150 & -.150 & -.150 & -.151 & -.149\\ III. & +.099 & +.047 & -.136 & -.136 & -.136 & -.136 \\ IV. & +.077 & +.024 & -.019 & -.083 & -.068 & -.064 \\ V. & +.072 & +.038 & -.037 & -.105 & -.074 & -.067 \\ \hline \end{tabular} \caption{The ATT estimates by the 6 methods in the five experiments. The experimental conditions, the full names of the methods and the true ATT values are specified in the text.}\label{tbl:synthetic} \end{table} \section{Discussion and Conclusion} We proposed the causal rule mining framework, which transitions pattern mining from finding patterns that are associated with an outcome towards finding patterns that cause changes in the outcome. Finding causal relationships instead of associations is absolutely critical in health care, but it also has appeal beyond health care. The numerous biases that arise in establishing causation make quantifying causal effects difficult.
We use the Neyman-Rubin causal model to define causation and use the potential outcome framework to estimate the causal effects. We correct for three kinds of potential biases: those stemming from direct causal effects, indirect causal effects and confounding. We compared five different methods for estimating the causal effect, evaluated them on real and synthetic data, and found that three of these methods gave very similar results. We have demonstrated on real clinical data that, thanks to the two new pruning methods we developed for this work, our proposed method can effectively enumerate causal patterns in a large combinatorial search space. We also demonstrated that the patterns discovered from the data were very rich, and we illustrated how the effect of statin differs across various subpopulations. The results we found are consistent with the literature but go beyond what is already known about statin's effect on the risk of diabetes. The discussion and experimental results provided in this paper offer some general guidance on when to use the different methods we described. We recommend counterfactual confidence if no confounding is suspected, as it is computationally efficient and can arrive at the correct solution even when direct and indirect effects are present. In the presence of confounding, propensity score matching gave the most accurate results, but due to the need to create a matched population, it has built-in randomness, which increases its variance. Moreover, the counterfactual model as well as the propensity score model are susceptible to model misspecification. If unknown interactions among variables are suspected, we recommend the stratified non-parametric method. With this technique, model misspecification is virtually impossible; however, its sample size requirement is high. The stratified model is suboptimal if we need to stratify across many variables.
Stratifying across many variables can fragment the population into many strata that are too small to allow us to estimate the effects correctly. If the estimates use some strata but not others, they may be biased. \section*{Acknowledgments} This study is supported by National Science Foundation (NSF) grant IIS-1344135. The contents of this document are the sole responsibility of the authors and do not necessarily represent the official views of the NSF.
\section{Introduction} The 2D Boussinesq system for vorticity of the fluid $\omega(x,t)$ and density (or temperature) $\rho(x,t)$ is given by \begin{eqnarray}\label{vortbous} \partial_t \omega +(u \cdot \nabla)\omega = \partial_{x_1} \rho;\,\,\, \,\,\, \partial_t \rho +(u \cdot \nabla)\rho = 0; \\ \nonumber u = \nabla^\perp (-\Delta)^{-1}\omega,\,\,\,\omega(x,0)=\omega_0(x),\,\,\,\rho(x,0)=\rho_0(x). \end{eqnarray} The 2D Boussinesq system models motion of buoyant incompressible fluid that takes place in atmosphere, ocean, inside Earth or stars, and in every kitchen. Global regularity of solutions is known when classical dissipation is present in at least one of the equations \cite{Chae}, \cite{HouLi}, or under a variety of more general conditions on dissipation (see e.g. \cite{CaoWu1} for more information). The regularity vs finite time blow up question for the inviscid 2D Boussinesq system \eqref{vortbous} is a well known open problem that has appeared, for example, on the ``eleven great problems of mathematical hydrodynamics" list proposed by Yudovich \cite{Yud2000}. There is also an interesting connection between \eqref{vortbous} and axi-symmetric three dimensional Euler equation: the equations are closely related and virtually identical away from the rotation axis (see e.g. \cite{MB}, page 186). There has been much numerical work on trying to find possible singular scenario for solutions of axi-symmetric 3D Euler equation with swirl or 2D Boussinesq system. Often, situations where strong growth of solutions has been observed were later determined to be regular by further numerical or analytic research. For numerical studies, see for example \cite{PumSig}, \cite{EShu}, or a review \cite{Gibbon}. Analytical tools for ruling out blow up scenario include nonlinearity depletion mechanisms discovered by Constantin, Fefferman and Majda \cite{CF1}, \cite{CFM} and later extensions in \cite{HL1}, \cite{HL2}. 
In a recent work \cite{HouLuo1}, Tom Hou and Guo Luo suggested a new scenario for possible singularity formation in the 3D Euler equation. In their scenario, the flow is axi-symmetric and confined in a rotating cylinder with the no flow condition on the boundary. The numerically observed growth of vorticity happens at the boundary of the cylinder, away from the rotation axis. So one can equivalently work with \eqref{vortbous} set on a square $D$ (corresponding to a fixed angular variable in the 3D case). Motivated by \cite{HouLuo1}, Kiselev and Sverak \cite{KS} considered a similar setting for the 2D Euler equation on a disk. The solutions of the 2D Euler equation with smooth initial data are well known to be globally regular. However, the work \cite{KS} constructs examples with double exponential growth of the vorticity gradient. This is known to be the fastest possible rate of growth, and \cite{KS} provides the first example where such growth happens. The growth in \cite{KS} also happens on the boundary, and their result confirms that the scenario of \cite{HouLuo1} is indeed an interesting candidate to consider for blow up in solutions of the 3D Euler equation or the 2D Boussinesq system. Compared to the 2D Euler case, the 2D Boussinesq system presents significant new difficulties for analysis. There are nonlinear effects coming from the coupling in \eqref{vortbous}, and possible growth in vorticity makes solutions harder to control. A simplified one-dimensional model has been suggested in \cite{HouLuo1} and analyzed in \cite{HouLuo2}. It is given by \begin{eqnarray}\label{HL1} \partial_t \omega + u \partial_x \omega = \partial_x \rho; \,\,\, \partial_t \rho + u \partial_x \rho =0; \\ \nonumber u_x = H \omega,\,\,\,\omega(x,0)=\omega_0(x),\,\,\,\rho(x,0)=\rho_0(x) \end{eqnarray} where the initial data is periodic with period two, the density function is even, the vorticity is odd with respect to $x=0$ and $x=1,$ and $H \omega$ denotes the periodic Hilbert transform of the vorticity. 
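The nonlocal Biot-Savart law $u_x = H\omega$ in \eqref{HL1} is diagonal in Fourier space, which makes the model convenient for spectral computation. The following Python sketch is our own illustration, not code from \cite{HouLuo1} or \cite{HouLuo2}; the grid size and the sign convention $H e^{ikx} = -i\,\mathrm{sgn}(k)\,e^{ikx}$ are our choices. It evaluates the periodic Hilbert transform with an FFT and checks it on a vorticity profile with the required odd symmetry:

```python
import numpy as np

def periodic_hilbert(f):
    """Periodic Hilbert transform via FFT, with the convention
    H e^{ikx} = -i sgn(k) e^{ikx} (the k = 0 mode is annihilated)."""
    fh = np.fft.fft(f)
    k = np.fft.fftfreq(f.size)      # only the sign pattern of k matters here
    return np.real(np.fft.ifft(-1j * np.sign(k) * fh))

# omega(x) = sin(pi x) on the period-2 interval [0, 2):
# odd with respect to both x = 0 and x = 1, as required in the model
n = 256
x = 2.0 * np.arange(n) / n
omega = np.sin(np.pi * x)
u_x = periodic_hilbert(omega)       # u_x = H omega; here H sin(pi x) = -cos(pi x)
assert np.allclose(u_x, -np.cos(np.pi * x), atol=1e-12)
```

Under this convention $H^2=-\mathrm{Id}$ on mean-zero functions, which gives a quick internal consistency check of any implementation.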
Local well-posedness and a number of useful estimates have been proved in \cite{HouLuo2} for the system \eqref{HL1}, and both numerical simulations and formal arguments suggesting blow up have been carried out. However, a fully rigorous proof of finite time blow up is currently not available for the system \eqref{HL1}. Our goal in this paper is to analyze a related but further simplified system that is inspired by \cite{KS}. The system is set on the interval $[0,1]$ with Dirichlet boundary conditions for $\omega$ and $\rho$: \begin{equation} \begin{cases} & \partial_t \rho(t,x) + u(t,x)\partial_x\rho(t,x)=0,\\[0.1cm] &\partial_t \omega(t,x)+u(t,x)\partial_x \omega(t,x)=\partial_x\rho(t,x),\\[0.05cm] &u(t,x)=-x\Omega(t,x),\quad\Omega(t,x)=\int_x^1\dfrac{\omega(t,y)}{y}dy,\\ &\omega(0,x)=\omega_0(x),\quad \rho(0,x)=\rho_0(x), \quad \omega_0(0)=\omega_0(1)=\rho_0(0)=\rho_0(1)=0.\\ \end{cases} \label{eq:system} \end{equation} We choose to work with Dirichlet boundary conditions for both $\omega$ and $\rho,$ which is more natural than the periodic setting for our version of the Biot-Savart law. The Biot-Savart law linking the fluid velocity to the vorticity is the main difference between \eqref{HL1} and \eqref{eq:system}. The law for the system \eqref{eq:system} is simpler, even though it is closely related to the law for the system \eqref{HL1}. This facilitates the analysis. Such a simplified Biot-Savart law is motivated by the result proved in \cite{KS}. It is shown there that under certain conditions on the initial data $\omega_0,$ the flow near the origin $O$ is hyperbolic for all times. 
Namely, apart from small exceptional sectors, the velocity $u$ near $O$ satisfies \begin{eqnarray}\label{u1} u_1(x_1,x_2,t) = - \frac{4}{\pi} x_1\int_{Q(x_1,x_2)} \frac{y_1y_2}{|y|^4} \omega(y,t)\,dy_1dy_2 + x_1 B_1(x_1,x_2,t) \\ \label{u2} u_2(x_1,x_2,t) = \frac{4}{\pi}x_2\int_{Q(x_1,x_2)} \frac{y_1y_2}{|y|^4} \omega(y,t)\,dy_1dy_2 + x_2 B_2(x_1,x_2,t), \end{eqnarray} where $x_1,x_2 \geq 0,$ $|B_{1,2}(x_1,x_2,t)| \leq C(\gamma)\|\omega\|_{L^\infty}$ and $Q(x_1,x_2) = \{ y | y \in D, \,\,\,x_1 \leq y_1,\,\,\, x_2 \leq y_2 \}.$ The first term on the right hand side of \eqref{u1}, \eqref{u2} is the main term, and it is this term that is modeled by $u(x,t) = -x \int_x^1 \omega(y,t)/y \,dy$ in \eqref{eq:system}. Thus one can expect the system \eqref{eq:system} to be a reasonable model of the true 2D Boussinesq dynamics as long as the hyperbolic flow formulas \eqref{u1}, \eqref{u2} remain valid, and in particular all the way up to the blow up time if blow up happens. This is far from clear, even though the numerical simulations of Hou and Luo \cite{HouLuo1} seem to suggest that this might be the case. In the first two sections below we will establish local well-posedness and conditional regularity results for the system \eqref{eq:system}, in particular proving an analog of the celebrated Beale-Kato-Majda criterion \cite{BKM}. Then we will prove our main result. \begin{thm}\label{mainthm} There exist $\omega_0,\rho_0 \in C_0^\infty([0,1])$ for which the solution of \eqref{eq:system} blows up in finite time. In particular, \[ \int_0^{T^*} \|\omega(t)\|_{L^\infty}\,dt \rightarrow \infty \] for some $T^*<\infty.$ \end{thm} Roughly speaking, the blow-up proof is done by tracking the evolution of $\Omega(t,x)$ along a family of characteristics originating from a sequence of points $x_1\geq x_2 \geq \dots,$ where $x_\infty := \lim_{n\to\infty} x_n>0$ satisfies $\rho_0(x_\infty)>0$. 
By obtaining a lower bound on $\Omega$ on this family of characteristics, we conclude that the characteristic originating from $x_\infty$ must touch the origin before some finite time $T$, which implies that the classical solution has to break down at (or before) time $T$. The main new effect reflected in Theorem~\ref{mainthm} is a rigorous understanding of the mechanism by which the coupling in the 2D Boussinesq system can in principle lead to blow up. The main simplifications the result utilizes are the lack of two-dimensional geometry, which makes certain monotonicity properties easier to control, as well as the reliance on the stable hyperbolic form of the fluid velocity akin to \cite{KS}. These simplifications are clearly significant, but one has to take the first step. \section{Local well-posedness} It will often be useful for us to solve the equations for $\omega$ and $\rho$ on characteristics. Denote by $\phi_t(x)$ the solution of $$\begin{cases} &\dfrac{d}{dt}\phi_t(x)=u(t,\phi_t(x)),\\ &\phi_0(x)=x.\\ \end{cases}$$ Then we have $$\rho(t,\phi_t(x))=\rho_0(x),$$ $$\omega(t,\phi_t(x))= \omega_0(x)+ \int_0^t(\partial_x\rho) (s,\phi_s(x))ds.$$ First we consider the following lemma, which says that $u$ has almost one more derivative than $\omega$. \begin{lem} Let $\omega\in C_0^\infty((0,1))$ be a smooth function that is compactly supported in $(0,1)$.\\ Then we have $\begin{cases} &\|u\|_{H^{m+1}}\leq C_m\cdot\|\omega\|_{H^{m}} \mbox{ for } m\geq 0 \mbox{ and}\\ &\|u^{(m+1)}\|_{L^\infty}\leq C_m\cdot\|\omega^{(m)}\|_{L^\infty} \mbox{ for } m\geq 1. \end{cases}$ \end{lem} \begin{proof} Observe that $\|u\|_{L^\infty}\leq \|\omega\|_{L^1}$ and $u^\prime=-\Omega+\omega$. For $p\in[1,\infty)$, we obtain $\|\Omega\|_{L^p}\leq p\cdot\|\omega\|_{L^p}$ by using the following Hardy inequality with $f(x)=\omega(x)/x:$ $$\Big(\int_0^\infty\Big(\int_x^\infty|f(y)|dy\Big)^pdx\Big)^{1/p}\leq p\Big(\int_0^\infty|f(x)|^px^pdx\Big)^{1/p}.$$ This shows that $\|u\|_{H^1}\leq C\|\omega\|_{L^2}$. 
For the $H^{m+1}$ estimate on $u$ with $m\geq1$, observe that $u^{(m+1)}(x)=\sum_{i=0}^m C_{m,i}\cdot\frac{\omega^{(m-i)}(x)}{x^i}$ for some constants $C_{m,i}$. We claim $\|\frac{\omega^{(m-i)}(x)}{x^i}\|_{L^2}\leq C \|\omega^{(m)}\|_{L^2}$. Indeed, observe that for $n\geq1$ and for smooth $f$ which is compactly supported in $(0,1)$ \begin{equation*}\begin{split} \int_0^1\Big(\frac{f(x)}{x^{n}}\Big)^2dx&=\frac{f^2(x)}{(1-2n)x^{2n-1}}\Big|_{x=0}^{x=1}+\int_0^1\frac{2ff^\prime}{(2n-1)x^{2n-1}}dx \\ &\leq \frac{2}{2n-1}\Big( \int_0^1\Big(\frac{f(x)}{x^{n}}\Big)^2dx\Big)^{1/2} \Big( \int_0^1\Big(\frac{f^\prime(x)}{x^{n-1}}\Big)^2dx\Big)^{1/2}. \end{split}\end{equation*} This gives us $ \|f(x)/x^n\|_{L^2}\leq C \|f^\prime(x)/x^{n-1}\|_{L^2}$. We iterate until we get $\|u\|_{H^{m+1}}\leq C\|\omega\|_{H^m}$. The $L^\infty$ estimate on $u^{(m+1)}$ follows from the Taylor remainder estimate $|f(x)/x^n|\leq C\|f^{(n)}\|_{L^\infty}$. \end{proof} \begin{rem} The above $L^\infty$ estimate on $u^{(m+1)}$ does not hold in the case $m=0$. Instead, we have only the pointwise estimate \begin{equation*} |u^\prime(x)|\leq \|\omega\|_{L^\infty}\cdot(1 -\ln(x)) \mbox{ for } x\in(0,1).\end{equation*} However, it will be proved for a solution $\omega$ on $[0,T)$ with finite $T$ that $\int_0^{T}\|\omega(t)\|_{L^\infty}dt<\infty$ implies $\int_0^{T}\|\partial_xu(t)\|_{L^\infty}dt<\infty$ (see Proposition \ref{bkm}). \end{rem} \begin{rem} We can weaken the condition that $\omega$ is compactly supported in $(0,1)$. For example, in order to get the $H^{m+1}$ estimate on $u$, it is enough to assume $\omega\in H^m_0((0,1))$ (where $H^m_0((0,1))$ is the completion of $(C^\infty_0\cap H^m)((0,1))$ in the topology of $H^m((0,1))$). Recall that we used the fact that $\omega$ is compactly supported in $(0,1)$ only to ensure that the boundary term $\Big(\dfrac{f(x)}{x^{n-(1/2)}}\Big)^2\Big|_{x=0}^1$ from the integration by parts vanishes. 
From the Sobolev embedding, $\omega\in H^m_0$ implies $\omega\in C^{m-1}$ and $\omega^{(i)}(0)=\omega^{(i)}(1)=0$ for $i=0,1,\dots,(m-1)$. Moreover, the embedding gives us $\omega^{(m-1)}\in C^{1/2}$ (the H\"older space), which implies $\Big|\dfrac{\omega^{(m-1)}(x)}{\sqrt{x}}\Big|\leq C \|\omega\|_{H^m}$. Assuming $\omega\in H^m_0$ thus suffices to carry out the same computation as for a compactly supported function. Similarly, for the $L^\infty$ estimate on $u^{(m+1)}$ it is enough to assume $\omega^{(m)}\in L^\infty$ and $\omega^{(i)}(0)=\omega^{(i)}(1)=0$ for $i=0,1,\dots,(m-1)$ instead of assuming that $\omega$ is compactly supported in $(0,1)$. \end{rem} \begin{prop}\label{exist} Given any initial data {$(\omega_0,\rho_0)\in H^m_0((0,1))\times H^{m+1}_0((0,1))$} with $m\geq2,$ there exists $T=T(\|\omega_0\|_{H^m}+\|\rho_0\|_{H^{m+1}})>0$ such that the system has a unique classical solution $(\omega,\rho)\in C([0,T];H_0^{m}\times H_0^{m+1})$. \end{prop} \begin{proof} Consider a function $\psi\in C^\infty(\mathbb{R})$ such that $\int \psi =1$, $\psi\geq 0$ and $\mathrm{supp}(\psi)\subset[-1,1],$ and set $\psi_\epsilon(x):= \psi(x/\epsilon)/\epsilon$ for $\epsilon>0$. First we replace the initial data $(\omega_0,\rho_0)$ with approximations compactly supported in $(0,1)$, given by $(\tilde{\omega_0},\tilde{\rho_0})(x):=(\omega_0,\rho_0)(\frac{x-2\epsilon}{1-4\epsilon})$. Then we mollify the initial data $(\tilde{\omega_0},\tilde{\rho_0})$ by convolution: $\omega^\epsilon_0:=\tilde{\omega_0}*\psi_\epsilon$ and $\rho^\epsilon_0:= \tilde{\rho_0}*\psi_\epsilon$. Note that $\omega_0^\epsilon$ and $\rho_0^\epsilon$ lie in $C^\infty$ and they are compactly supported in $[\epsilon,1-\epsilon]\subset(0,1)$.\\ Define $u_0^\epsilon(t,x):=-x\int_x^{1}\frac{\omega^\epsilon_{0}(y)}{y}dy$. 
Then consider the following iteration scheme for $n\geq1:$ \begin{equation}\label{iteration} \begin{cases} & \partial_t \rho_n^\epsilon + u_{n-1}^\epsilon\partial_x\rho_n^\epsilon=0 \mbox{ with } \rho_n^\epsilon(0) =\rho_0^\epsilon, \\ &\partial_t \omega_n^\epsilon+u_{n-1}^\epsilon\partial_x \omega_n^\epsilon=\partial_x\rho_n^\epsilon \mbox{ with } \omega_n^\epsilon(0) =\omega_0^\epsilon,\\ &u_{n}^\epsilon(t,x)=-x \int_x^{1}\frac{\omega_{n}^\epsilon(t,y)}{y}dy. \end{cases} \end{equation} Namely, for each $n\geq1$, we can solve the characteristic equations $$\begin{cases} & \frac{d}{dt}\phi_n^\epsilon(t,x)=u_{n-1}^\epsilon(t,\phi_n^\epsilon(t,x)),\\ & \phi_n^\epsilon(0,x)=x \end{cases}$$ for $t\in[0,\infty)$ since $u_{n-1}^\epsilon\in C^\infty_{t,x}$. Then define $\rho_n^\epsilon, \omega_n^\epsilon$ for $t\in[0,\infty)$ via the characteristics so that $\rho_n^\epsilon(t,\phi_n^\epsilon(t,x))= \rho_0^\epsilon(x)$ and $\omega_n^\epsilon(t,\phi_n^\epsilon(t,x))= \omega_0^\epsilon(x)+ \int_0^t(\partial_x\rho_n^\epsilon) (s,\phi_n^\epsilon(s,x))ds$. Note that this process can be repeated, and we get $\rho_n^\epsilon,\omega_n^\epsilon\in C^\infty_{t,x}$, which are compactly supported in $(0,1)$ for each $t>0$ since $x=0$ and $x=1$ are stationary points of the flow.\\ Let $m\geq2$. Simple energy estimates give us that, for any $n\geq1$, $$ \frac{d}{dt}\Big( \|\omega_n^\epsilon(t)\|^2_{H^m} +\|\rho_n^\epsilon(t)\|^2_{H^{m+1}} \Big)\leq C\Big(\|u_{n-1}^\epsilon(t)\|_{H^{m+1}}+1\Big)\Big( \|\omega_n^\epsilon(t)\|^2_{H^m} +\|\rho_n^\epsilon(t) \|^2_{H^{m+1}} \Big). $$ Since $\rho_n^\epsilon(t),\omega_n^\epsilon(t)$ are compactly supported in $(0,1)$, we have $\|u_{n-1}^\epsilon(t)\|_{H^{m+1}}\leq C \|\omega_{n-1}^\epsilon(t)\|_{H^{m}}$ by the previous lemma. 
As a result, we obtain $\begin{cases}&\frac{d}{dt}f_n^\epsilon(t)\leq C\sqrt{ f_{n-1}^\epsilon(t)}f_n^\epsilon(t),\\& f_n^\epsilon(0)=f^\epsilon_0\end{cases}$ where $f_n^\epsilon(t):= \|\omega_n^\epsilon(t)\|^2_{H^m} +\|\rho_n^\epsilon(t)\|^2_{H^{m+1}} +1$ and $f^\epsilon_0:= \|\omega_0^\epsilon\|^2_{H^m} +\|\rho_0^\epsilon\|^2_{H^{m+1}} +1$. After a straightforward monotonicity argument, this implies \begin{equation}\label{unif_esti_n_e} f_n^\epsilon(t)\leq 1/((f^\epsilon_0)^{-1/2}-Ct)^2,\quad\mbox{for }n\geq 1 \mbox{ and for } 0\leq t< (C\sqrt{f^\epsilon_0})^{-1}. \end{equation} Denote $f_0:= \|\omega_0\|^2_{H^m} +\|\rho_0\|^2_{H^{m+1}} +1$. Take $T$ between $0$ and $(C\sqrt{f_0})^{-1}$. Thanks to the fact that $f^\epsilon_0$ converges to $f_0$ as $\epsilon\rightarrow 0$, we know $T<(C\sqrt{f^\epsilon_0})^{-1}$ for sufficiently small $\epsilon>0$. Then, for small $\epsilon>0$, we get \begin{equation}\label{bounded}\sup_{t\in[0,T]}\Big( \|\omega_n^\epsilon(t)\|_{H^m}+\|\rho_n^\epsilon(t)\|_{H^{m+1}} \Big)<\infty\end{equation} and, by using the structure of \eqref{iteration}, \begin{equation}\label{time_derivative_bounded}\sup_{t\in[0,T]}\Big( \|\partial_t\omega_n^\epsilon(t)\|_{H^{m-1}}+\|\partial_t\rho_n^\epsilon(t)\|_{H^m} \Big)<\infty.\end{equation} Note that the above estimates are uniform in $n\geq 1$. Then the existence of a solution $(\omega^\epsilon,\rho^\epsilon)\in C([0,T];H_0^{m}\times H_0^{m+1})$ to \eqref{eq:system} corresponding to the mollified initial data $(\omega^\epsilon_0,\rho^\epsilon_0)$ follows from a standard argument (e.g. see \cite{MB}).\\ We briefly sketch this argument here. First, there exists a weak-$*$ limit $(\omega^\epsilon,\rho^\epsilon)\in L^\infty(0,T;H_0^{m}\times H_0^{m+1}),$ which follows from \eqref{bounded} by the Banach-Alaoglu theorem. 
Then, by using \eqref{bounded} and \eqref{time_derivative_bounded}, we can show strong convergence $(\omega_n^\epsilon,\rho_n^\epsilon)\rightarrow (\omega^\epsilon,\rho^\epsilon)$ in $C([0,T];H^{m-\delta} \times H^{m+1-\delta})$ for all real $\delta>0$. Recall that we assumed $m\geq2$. Thus, from Sobolev's inequality, all terms in \eqref{eq:system} become continuous (pointwise). Moreover, \eqref{iteration} converges pointwise to \eqref{eq:system}. This shows that $(\omega^\epsilon,\rho^\epsilon)$ is a classical solution to \eqref{eq:system}. Since $H^{-(m-\delta)}\times H^{-(m+1-\delta)}$ is dense in $H^{-m}\times H^{-(m+1)}$, our solution $(\omega^\epsilon,\rho^\epsilon)$ is weakly continuous in the time variable as an $H_0^{m}\times H_0^{m+1}$ valued function. Lastly, thanks to the weak continuity in time and the estimate \eqref{unif_esti_n_e}, we can show $(\omega^\epsilon,\rho^\epsilon)\in C([0,T];H_0^{m} \times H_0^{m+1})$ by showing that both $\|\omega^\epsilon(t)\|_{H^m}$ and $\|\rho^\epsilon(t)\|_{H^{m+1}}$ are continuous in the time variable $t\in[0,T]$. In addition, we have \begin{equation}\label{unif_esti_e} f^\epsilon(t)\leq 1/((f^\epsilon_0)^{-1/2}-Ct)^2,\quad \mbox{ for } 0\leq t\leq T. \end{equation} To find a solution for the original initial data $(\omega_0,\rho_0)$, recall that $T$ does not depend on $\epsilon$, the estimate \eqref{unif_esti_e} is uniform in $\epsilon>0$, and $f^\epsilon_0$ converges to $f_0$ as $\epsilon\rightarrow 0$. Then we repeat the above procedure as $\epsilon\rightarrow 0$ in order to get a solution $(\omega,\rho)\in C([0,T];H_0^{m}\times H_0^{m+1})$ to \eqref{eq:system} corresponding to $(\omega_0,\rho_0)$ with the same estimate \begin{equation*} \Big( \|\omega(t)\|^2_{H^m} +\|\rho(t)\|^2_{H^{m+1}} +1\Big) \leq 1/((f_0)^{-1/2}-Ct)^2\quad \mbox{ for } 0\leq t\leq T. \end{equation*} Its uniqueness in the space $C([0,T];H_0^{m}\times H_0^{m+1})$ is easy to show (e.g. see \cite{ChaeNam}). 
\end{proof} \section{Beale-Kato-Majda type criteria} \begin{prop}\label{bkm} Let {$(\omega,\rho)\in C([0,T);H^{m}_0\times H_0^{m+1})$} be the unique solution provided by Proposition~\ref{exist} for initial data $(\omega_0,\rho_0)\in H^{m}_0\times H^{m+1}_0$ with $m\geq2$. Then for any finite $T^*\leq T$, the following are equivalent:\\ \noindent (1). $\sup_{t\in[0,T^*]}\Big( \|\omega(t)\|_{H^m}+ \|\rho(t)\|_{H^{m+1}} \Big)<\infty$.\\ (2). $\int_0^{T^*}\|\partial_xu(t)\|_{L^\infty}dt<\infty$.\\ (3). $\int_0^{T^*}\|\omega(t)\|_{L^\infty}dt<\infty$.\\ (4). $\int_0^{T^*}\|\partial_x\rho(t)\|_{L^\infty}dt<\infty$.\\ \end{prop} \begin{rem} It is well known that for the full 2D inviscid Boussinesq system, either $\int_0^{T^*}\|\nabla u(t)\|_{L^\infty}dt<\infty$ or $\int_0^{T^*}\|\nabla \rho(t)\|_{L^\infty}dt<\infty$ implies (1) (see e.g. \cite{ChaeNam}, \cite{CaoWu1}). Whether (3) implies (1) for the 2D inviscid Boussinesq system is an interesting open question. \end{rem} \begin{proof} The implication $(1)\Rightarrow (2),(3)$ and $(4)$ is obvious from Sobolev's inequality. The direction $(2)\Rightarrow (1)$ follows from a standard energy estimate. 
Indeed, if we denote $M:=\int_0^{T^*}\|\partial_xu(t)\|_{L^\infty}dt<\infty$, then we get for any $t\in[0,T^*]$, \begin{equation*}\begin{split} &\|\partial_x\rho(t)\|^2_{L^2}\leq e^{CM} \|\partial_x\rho_0\|^2_{L^2},\\ &\|\omega(t)\|^2_{L^2}\leq e^{CM}(1+T^*) (\|\omega_0\|^2_{L^2}+\|\partial_x\rho_0\|^2_{L^2}),\\ &\|\partial_x\rho(t)\|_{L^\infty}\leq e^{M} \|\partial_x\rho_0\|_{L^\infty}, \mbox{ and}\\ &\|\omega(t)\|_{L^\infty}\leq \|\omega_0\|_{L^\infty} +e^{M}T^* \|\partial_x\rho_0\|_{L^\infty}.\\ \end{split}\end{equation*} Then straightforward estimates lead us to \begin{equation*}\begin{split} & \|\omega^\prime(t)\|_{L^2}+\|\rho^{\prime\prime}(t)\|_{L^2}\leq C_{M,T^*, \|\omega_0\|_{H^1}, \|\rho_0\|_{H^{2}}}\quad\mbox{and}\\ & \|\omega^\prime(t)\|_{L^\infty}+\|\rho^{\prime\prime}(t)\|_{L^\infty} \leq C_{M,T^*, \|\omega_0\|_{W^{1,\infty}}, \|\rho_0\|_{W^{2,\infty}}},\\ \end{split}\end{equation*} where $W^{n,p}$ is the usual Sobolev space. We repeat this procedure until we get \begin{equation*} \|\omega^{(m)}(t)\|_{L^2}+\|\rho^{(m+1)}(t)\|_{L^2}\leq C_{M,T^*, \|\omega_0\|_{H^m}, \|\rho_0\|_{H^{m+1}} }.\end{equation*} For (4)$\Rightarrow$(3), we use the characteristic representation for $\omega$: $$\omega(t,\phi_t(x))= \omega_0(x)+ \int_0^t(\partial_x\rho) (s,\phi_s(x))ds.$$ For the direction $(3)\Rightarrow(2)$, we denote $M:=\int_0^{T^*}\|\omega(t)\|_{L^\infty}dt<\infty$. Then we make an $L^\infty$-estimate for $\partial_x u$ in the following way.\\ 1. From $|u(t,x)|\leq \|\omega(t)\|_{L^\infty}\cdot x \cdot (-\ln(x))$, we see that $\psi(t):=-\ln\phi_t(x)$ satisfies $\psi^\prime(t)\leq \|\omega(t)\|_{L^\infty}\psi(t)$, so Gr\"onwall's inequality gives $\phi_t(x)\geq x^{\exp(\int_0^{t}\|\omega(s)\|_{L^\infty}ds)}\geq x^{\exp{(M)}}$ for $t\leq T^*$. We also get $\phi_{-t}(x)\leq x^{\exp(-M)}$. 2. From $\partial_x u= -\Omega +\omega$, we get \begin{equation*}\begin{split} |(\partial_x u)(t,\phi_t(x))|&\leq |\omega(t,\phi_t(x))|+|\Omega(t,\phi_t(x))| \leq \|\omega(t)\|_{L^\infty}(1+(-\ln(\phi_t(x))))\\ &\leq \|\omega(t)\|_{L^\infty}(1+e^M(-\ln(x) )). \end{split}\end{equation*} 3. 
From $ \partial_t (\partial_x\rho) + u\partial_x(\partial_x\rho) =-(\partial_xu)(\partial_x\rho)$, we obtain \begin{equation*}\begin{split} |(\partial_x \rho)(t,\phi_t(x))|&\leq |(\partial_x \rho_0)(x)| +\int_0^t|(\partial_x u)(s,\phi_s(x))|\cdot|(\partial_x \rho)(s,\phi_s(x))|ds. \end{split}\end{equation*} This implies \begin{equation*}\begin{split} |(\partial_x \rho)(t,\phi_t(x))|&\leq |(\partial_x \rho_0)(x)| \exp\Big(\int_0^t|(\partial_x u)(s,\phi_s(x))|ds\Big)\\ &\leq |(\partial_x \rho_0)(x)| \exp\Big(\int_0^t\|\omega(s)\|_{L^\infty}(1+e^M(-\ln(x)))ds\Big)\\ &\leq |(\partial_x \rho_0)(x)| e^M \left(\frac{1}{x} \right)^{e^M\cdot M}. \end{split}\end{equation*} 4. For a moment, assume that $M$ is so small that $M\cdot e^M\leq \frac{1}{2}$. Thanks to $\omega_0(0)=\partial_x\rho_0(0)=0$, we can estimate \begin{equation*}\begin{split} |\omega(t,\phi_t(x))|&\leq |\omega_0(x)| +\int_0^t|(\partial_x \rho)(s,\phi_s(x))|ds\\ &\leq |\omega_0(x)| + |(\partial_x \rho_0)(x)| \cdot e^M\cdot \left(\frac{1}{x}\right)^{e^M\cdot M}\cdot T^*\\ &\leq \|\omega_0^\prime\|_{L^\infty}\cdot x + \|\rho_0^{\prime\prime}\|_{L^\infty} \cdot x \cdot e^M\cdot \left(\frac{1}{x}\right)^{1/2}\cdot T^*\\ &\leq C_0 \sqrt{x} e^M (T^*+1) \end{split}\end{equation*} where $C_0:=\|\omega_0^\prime\|_{L^\infty}+ \|\rho_0^{\prime\prime}\|_{L^\infty}$. So we get a decay estimate of $\omega(t,x)$ near $x=0$: \begin{equation*}\begin{split} |\omega(t,x)|&\leq C_0 \sqrt{\phi_{-t}(x)} e^M (T^*+1)\leq C_0 x^{\frac{1}{2}\exp(-M)} e^M (T^*+1). \end{split}\end{equation*} This implies the $L^\infty$ estimate of $\Omega$: \begin{equation*}\begin{split} |\Omega(t,x)|&\leq \int_0^1\frac{|\omega(t,y)|}{y}dy\leq C_0 e^M (T^*+1) \int_0^1 y^{\frac{1}{2}\exp(-M)-1}dy\leq 2C_0 e^{2M} (T^*+1). \end{split}\end{equation*} Then we use $\partial_x u= -\Omega +\omega$ to get $$\|\partial_xu(t)\|_{L^\infty}\leq \|\omega(t)\|_{L^\infty}+2C_0e^{2M}(T^*+1) \quad\mbox{ for } t\in[0,T^*].$$ 5. 
For general large $M$, we find $\sigma\in(0,T^*)$ such that $M_{\sigma}:=\int_{\sigma}^{T^*} \|\omega(s)\|_{L^\infty}ds$ is so small that $M_{\sigma}\cdot e^{M_{\sigma}}\leq \frac{1}{2}$. We carry out the same process not from $t=0$ but from $t=\sigma$ to get $$\|\partial_xu(t)\|_{L^\infty}\leq \|\omega(t)\|_{L^\infty}+2C_{\sigma}e^{2M}((T^*-\sigma)+1) \quad\mbox{ for } t\in[\sigma,T^*]$$ where $C_{\sigma}:= \sup_{t\in[0,\sigma]}\Big( \|\omega(t)\|_{H_0^2}+ \|\rho(t)\|_{H^3_0}\Big)$. Note that $C_\sigma$ is finite because $(\omega,\rho)$ lies in $C([0,T);H_0^2\times H^3_0)$ and $\sigma<T^*\leq T$.\\ Since $\|\partial_xu(t)\|_{L^\infty}\leq C\|u(t)\|_{H^2}\leq C\|\omega(t)\|_{H^1}\leq CC_\sigma$ for any $t\in[0,\sigma]$, we conclude $$\|\partial_xu(t)\|_{L^\infty}\leq \|\omega(t)\|_{L^\infty}+2C_{\sigma}e^{2M}(T^*+1) +CC_\sigma \quad\mbox{ for } t\in[0,T^*].$$ \end{proof} \section{Finite-time blow up examples} Before we construct a finite-time blow up example, let us first state a lemma concerning the growth of $\Omega$ along the characteristics $\phi_t(x)$. \begin{lem} \label{lemma:Omega_t} Along the characteristic $\phi_t(x)$, we have \begin{equation} \frac{d}{dt}\Omega(t,\phi_t(x))=\int_{\phi_t(x)}^1\frac{\omega(t,y)^2}{y}dy +\int_{\phi_t(x)}^1 \frac{\partial_x\rho(t,y)}{y}dy. \label{eq:dt_Omega} \end{equation} \end{lem} \begin{proof} Note that \begin{equation}\begin{split} \frac{d}{dt}\Omega(t,\phi_t(x))&=\partial_t\Omega(t,\phi_t(x)) +u(t,\phi_t(x))~\partial_x\Omega(t,\phi_t(x)). \label{eq:dt_Omega_1} \end{split}\end{equation} Let us compute $\partial_x \Omega$ and $\partial_t \Omega$ respectively. 
The definition of $\Omega$ directly gives that \begin{equation} \partial_x\Omega(t,x)=-\dfrac{\omega(t,x)}{x}, \label{eq:dx_Omega} \end{equation} whereas $\partial_t \Omega(t,x)$ can be computed as follows: \begin{equation}\begin{split} \partial_t\Omega(t,x)&= \int_x^1\frac{\partial_t \omega(t,y)}{y}dy= -\int_x^1\frac{u(t,y)\partial_x \omega(t,y)}{y}dy+ \int_x^1\frac{\partial_x\rho(t,y)}{y}dy\\ &=\int_x^1{\Omega(t,y)\partial_x \omega(t,y)}dy+ \int_x^1\frac{\partial_x\rho(t,y)}{y}dy\\ &=-\Omega(t,x) \omega(t,x)+\int_x^1\frac{\omega(t,y)^2}{y}dy+\int_x^1 \frac{\partial_x\rho(t,y)}{y}dy.\end{split} \label{eq:dt_Omega_2}\end{equation} In order to obtain \eqref{eq:dt_Omega}, it suffices to replace $x$ by $\phi_t(x)$ in \eqref{eq:dx_Omega} and \eqref{eq:dt_Omega_2}, and plug them into \eqref{eq:dt_Omega_1}. \end{proof} We now prove the following proposition, from which, given Proposition~\ref{bkm}, Theorem~\ref{mainthm} follows. \begin{prop} There exists a pair of smooth functions $\rho_0$ and $\omega_0$ supported in $[\frac{1}{4},\frac{3}{4}]$, such that there is no global classical solution to \eqref{eq:system} with initial data $(\rho_0, \omega_0)$. \label{prop:monotone_blowup} \end{prop} \begin{proof} \textbf{Step 1.} We construct a pair of initial data $(\rho_0, \omega_0)$ as follows. Let $\rho_0$ be smooth, nonnegative, supported in $[\frac{1}{4}, \frac{3}{4}]$, with $\max \rho_0 = \rho_0(\frac{1}{2}) = 2$, and $\rho_0(\frac{1}{3})=1$. Moreover, assume $\rho_0$ is increasing in $[\frac{1}{4}, \frac{1}{2}]$, and decreasing in $[\frac{1}{2}, \frac{3}{4}]$. Let $\omega_0$ be smooth, nonnegative, supported in $[\frac{1}{4}, \frac{1}{2}]$, with $\omega_0\equiv M$ in $[0.3, 0.45]$, where $M$ is a large constant to be determined later. Figure \ref{fig:init_2} gives a sketch of the initial data. \begin{figure}[h!] 
\begin{center} \includegraphics[scale=0.8]{init_2.pdf} \caption{\label{fig:init_2} A sketch of the initial data $(\rho_0, \omega_0)$.} \end{center} \end{figure} Towards a contradiction, we assume that there is a global classical solution. Let us first make a few observations. Note that for all $x\in(0,1)$, the characteristic $\phi_t(x)$ must be well-defined for all time, and $\rho$ is conserved along $\phi_t(x)$, i.e. $\rho(t, \phi_t(x)) = \rho_0(x)$. Moreover, for all $t\geq 0$, we have \begin{equation}\omega(t,x) \leq 0 \text{ for }x\in[\phi_t(1/2),1]. \label{eq:omega_temp} \end{equation} To see this, recall that by definition, $\rho_0$ is non-increasing in $[\frac{1}{2}, 1]$. If there is a global classical solution, then the characteristics do not cross, hence for all $t\geq 0$, we have $\partial_x\rho(t,x) \leq 0$ in $[\phi_t(1/2),1]$. We then obtain \eqref{eq:omega_temp} as a direct consequence, since the time derivative of $\omega$ along the characteristics $\phi_t(x)$ is equal to $\partial_x\rho$, while $\omega_0$ vanishes on $[1/2,1]$. Moreover, we have that $\phi_t(1/2)$ is increasing for all $t$. Note that $$\frac{d}{dt} \phi_t(1/2) = -\phi_t(1/2) \Omega(t, \phi_t(1/2)) = -\phi_t(1/2)\int_{\phi_t(1/2)}^1 \frac{\omega(t,y)}{y}dy,$$ which is always non-negative due to \eqref{eq:omega_temp}. \vspace{0.1cm} \noindent\textbf{Step 2.} Our goal is to find a point $x_\infty$ (with $\rho_0(x_\infty)>0$) and a finite time $T$, such that $\phi_T(x_\infty) = 0$. This would imply that the classical solution has to break down at (or before) time $T$. To show this, the main idea is to consider a family of characteristics originating from a sequence of points $\{x_n\}$. Let $x_1 = 1/3$ (recall that we let $\rho_0(\frac{1}{3}) =1)$. For $n>1$, find $x_n \in [\frac{1}{4},\frac{1}{2}]$, such that $\rho_0(x_n) = \frac{1}{2} + 2^{-n}$. Observe that we have $x_1>x_2>x_3>\cdots$ since $\rho_0$ is increasing in $[\frac{1}{4},\frac{1}{2}]$. Denote $x_\infty := \lim_{n\to\infty} x_n$, and it follows that $x_\infty>0$ and $\rho_0(x_\infty)=1/2$. 
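The arithmetic behind this choice of $\{x_n\}$ can be checked with a concrete profile. In the Python sketch below, the power-law profile $\rho_0(x)=2(4x-1)^{\log 2/\log 3}$ on $[\frac14,\frac12]$ is our own illustrative choice (it is continuous and increasing with $\rho_0(\frac14)=0$, $\rho_0(\frac13)=1$, $\rho_0(\frac12)=2$, though not the smooth bump of the proof); for it the equations $\rho_0(x_n)=\frac12+2^{-n}$ can be solved in closed form:

```python
import math

c = math.log(2.0) / math.log(3.0)   # exponent chosen so that rho0(1/3) = 1

def rho0(x):
    """Illustrative increasing profile on [1/4, 1/2] with rho0(1/2) = 2."""
    return 2.0 * (4.0 * x - 1.0) ** c

def x_n(n):
    """Closed-form solution of rho0(x_n) = 1/2 + 2^{-n} for this profile."""
    return 0.25 * (1.0 + (0.25 + 2.0 ** (-n - 1)) ** (1.0 / c))

xs = [x_n(n) for n in range(1, 40)]
assert abs(xs[0] - 1.0 / 3.0) < 1e-12          # x_1 = 1/3, where rho0 = 1
assert all(a > b for a, b in zip(xs, xs[1:]))  # x_1 > x_2 > x_3 > ...

x_inf = 0.25 * (1.0 + 0.25 ** (1.0 / c))       # the limit of the x_n
assert x_inf > 0.25 and abs(rho0(x_inf) - 0.5) < 1e-12
```

In particular, for any such monotone profile the limit point $x_\infty$ stays strictly above the left edge of the support, as the proof requires.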
The choice of $\{x_n\}$ is illustrated in Figure \ref{fig:init_1}. Also, we choose $M$ large enough such that $C_0 := \Omega(0,x_1)=\int_{1/3}^1 \frac{\omega_0(y)}{y} dy>20$ (e.g. $M=200$ should work). Note that at $t=0$, $\Omega(0,x)$ is decreasing in $x$ due to the non-negativity of $\omega_0$. This implies that at $t=0$, we have $\Omega(0, x_n)> 20$ for all $n\geq 1$. \begin{figure}[h!] \begin{center} \includegraphics[scale=1]{init_1.pdf} \caption{\label{fig:init_1} A sketch of the choice of $\{x_n\}$.} \end{center} \end{figure} Let us denote $\rho_n := \rho_0(x_n)$, $\Phi_n(t):=\phi_t(x_n)$, $\Omega_n(t):=\Omega(t,\Phi_n(t))$. Observe that $\frac{d}{dt}\Phi_n(t)=u(t,\Phi_n(t))=-\Phi_n(t) \Omega_n(t)$. Denoting $\psi_n(t):=-\ln\Phi_n(t)$ for $n\geq1$, we get $$\frac{d}{dt}\psi_n(t)=\Omega_n(t).$$ To see how $\Omega_n(t)$ grows in time, we apply Lemma \ref{lemma:Omega_t} to $x_n$, and use the fact that $\phi_t(1/2) \geq 1/2$ for all $t\geq 0$. This gives \begin{equation} \begin{split} \frac{d}{dt} \Omega_n(t) &\geq \int_{\Phi_n(t)}^{\phi_t(1/2)} \frac{\partial_x \rho(t,y)}{y} dy + \int_{\phi_t(1/2)}^{1} \frac{\partial_x \rho(t,y)}{y} dy\\ &\geq \int_{\Phi_n(t)}^{\phi_t(1/2)} \underbrace{\frac{\partial_x \rho(t,y)}{y}}_{\geq 0} dy + \underbrace{\frac{1}{\phi_t(1/2)}}_{\leq 2} \underbrace{\big(\rho(t,1)-\rho(t,\phi_t(1/2))\big)}_{=-2}\\ &\geq \int_{\Phi_n(t)}^{\phi_t(1/2)} \frac{\partial_x \rho(t,y)}{y} dy - 4 \quad \text{ for all }n\geq 1. \end{split} \label{eq:dt_Omega_crude} \end{equation} Recall that $\omega_0$ is chosen such that $\Omega_n(0)\geq 20$, hence \eqref{eq:dt_Omega_crude} immediately implies $\Omega_n(t) \geq 0 $ for all $n$ and all $t\in[0,5)$. Since $\frac{d}{dt} \psi_n(t) = \Omega_n(t)$, we have that $\psi_n(t)$ is increasing for $t\in[0,5)$. 
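As a numerical sanity check on the constant in this step, one can verify that the plateau height $M=200$ comfortably yields $\Omega(0,x_1)>20$. The profile below is our own illustrative choice of $\omega_0$ (any nonnegative profile supported in $[\frac14,\frac12]$ and equal to $M$ on $[0.3,0.45]$ would do); in fact the plateau alone already contributes $M\ln(0.45/(1/3))=M\ln 1.35\approx 0.3\,M$:

```python
import numpy as np

def smoothstep(t):
    """C^1 ramp from 0 to 1 on [0, 1]; an illustrative choice of cutoff."""
    t = np.clip(t, 0.0, 1.0)
    return t * t * (3.0 - 2.0 * t)

M = 200.0
y = np.linspace(1.0 / 3.0, 1.0, 200001)
# omega_0: nonnegative, supported in [1/4, 1/2], identically M on [0.3, 0.45]
omega0 = M * smoothstep((y - 0.25) / 0.05) * smoothstep((0.5 - y) / 0.05)

# Omega(0, x_1) = int_{1/3}^1 omega_0(y)/y dy, by the trapezoid rule
integrand = omega0 / y
Omega_x1 = float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(y)))
assert Omega_x1 > 20.0
```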
For $n\geq 2$, using \eqref{eq:dt_Omega_crude}, we have \begin{equation} \begin{split} \frac{d}{dt} \Omega_n(t) &\geq \int_{\Phi_n(t)}^{\Phi_{n-1}(t)} \frac{\partial_x \rho(t,y)}{y} dy - 4\\ &\geq (\rho_{n-1}-\rho_n)\frac{1}{\Phi_{n-1}(t)} - 4\\ & = 2^{-n} e^{\psi_{n-1}(t)} - 4. \end{split} \label{eq:dt_Omega_3} \end{equation} Collecting everything together, we arrive at the following system of inequalities: \begin{equation} \begin{cases}& \psi^{\prime\prime}_n(t)\geq 2^{-n} e^{\psi_{n-1}(t)}-4\\[0.2cm] & \psi^{\prime}_{n-1}(t) = \Omega_{n-1}(t) \geq 0\\[0.2cm] & \psi_n(t)\geq \psi_{n-1}(t) \geq 0 \end{cases} \quad\text{ for $n\geq 2$, $0\leq t< 5$.} \label{eq:collection_2} \end{equation} \textbf{Step 3.} Take $t_1=1$, and let $t_{n+1}=t_{n}+2^{-n}$ and $\tilde{t}_n=t_n+2^{-(n+1)}$ for $n\geq 1$. Let $T:=\lim_{n\rightarrow \infty} t_n=2$ (Note that \eqref{eq:collection_2} holds until $t=5$, hence it holds for all $t\leq T$). We will show that $a_n:= \psi_n(t_n) \to \infty$ as $n\to \infty$. Take $n\geq 2$. Since $\psi_{n-1}(t)$ is increasing in $t$ for all $t< 5$, we have \begin{equation} \psi^{\prime\prime}_n(t)\geq 2^{-n} e^{\psi_{n-1}(t_{n-1})}-4\quad \text{ for }t_{n-1}\leq t < 5. \label{eq:temp_psi} \end{equation} This implies that for $\tilde{t}_{n-1}\leq t \leq t_n$ (note that all $t_n$'s are less than 5), \begin{equation*} \begin{split} \psi^\prime_n(t)& \geq \psi^\prime_n(\tilde{t}_{n-1})-4(t_n-\tilde t_{n-1})\quad \text{(since $\psi_n''(t)\geq -4$)}\\ &\geq (2^{-n} e^{\psi_{n-1}(t_{n-1})}-4) (\tilde{t}_{n-1}-t_{n-1})+\psi^\prime_n(t_{n-1})-4(t_n-\tilde t_{n-1}) \quad\text{(using \eqref{eq:temp_psi})}\\ &\geq ( 2^{-n} e^{\psi_{n-1}(t_{n-1})}-4) 2^{-n} - 4\cdot 2^{-n}\\ &= ( 2^{-n} e^{\psi_{n-1}(t_{n-1})}-8) 2^{-n}. 
\end{split} \end{equation*} Once we have the lower bound for $\psi_n'(t)$ for $\tilde t_{n-1} \leq t \leq t_n$, we can use it to get a lower bound for $\psi_n(t_n)$ as follows: \begin{equation} \begin{split} \psi_n(t_n)&\geq ( 2^{-n} e^{\psi_{n-1}(t_{n-1})}-8) 2^{-n}\cdot(t_n-\tilde{t}_{n-1})+\psi_n(\tilde{t}_{n-1})\\ &\geq ( 2^{-n} e^{\psi_{n-1}(t_{n-1})}-8) 2^{-2n} +\psi_{n-1}(\tilde t_{n-1})\\ &\geq ( 2^{-n} e^{\psi_{n-1}(t_{n-1})}-8) 2^{-2n} +\psi_{n-1}(t_{n-1}), \end{split} \label{eq:psi_temp} \end{equation} where in the second inequality we used the fact that $\psi_n \geq \psi_{n-1}$, and in the last inequality we used that $\psi_{n-1}$ is increasing for $t\leq 5$, hence $\psi_{n-1}(\tilde t_{n-1}) \geq \psi_{n-1}( t_{n-1})$. \noindent \textbf{Step 4}. Denoting $a_n:= \psi_n(t_n)$, we obtain the following recursive relation from \eqref{eq:psi_temp}: \begin{equation*} \begin{split} a_n &\geq 2^{-2n}(2^{-n}e^{a_{n-1}}-8)+a_{n-1}\\ &\geq e^{a_{n-1}-3n}-1+a_{n-1}\quad \text{ for $n\geq 2$}. \end{split} \end{equation*} One can then use induction to show that if $a_1 \geq 9$, then $a_n\geq 3n+6$ for all $n\geq 1$, hence $a_n\to\infty$ as $n\to\infty$. Finally, it remains to check whether $a_1\geq 9$ is satisfied, i.e. whether $\psi_1(1) \geq 9$. Recall that $\psi_1(0)\geq 0$, and $\psi_1'(t) = \Omega_1(t)$, with $\Omega_1(0)\geq 20$ and $\Omega_1'(t)\geq -4$. Hence we have $\Omega_1(t)\geq 16$ for $0\leq t\leq 1$, which gives $\psi_1(1)\geq 16$, and this concludes the proof. \end{proof} {\bf Acknowledgement.} KC has been partially supported by the National Science Foundation (NSF) grant DMS-1159133. AK and YY have been partially supported by the NSF-DMS grants 1104415 and 1159133. AK acknowledges support of the Guggenheim Fellowship and thanks Guo Luo and Vladimir Sverak for useful discussions.
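As a final numerical remark on Step 4, the induction can also be checked by iterating the recursion directly. The Python sketch below is ours; the cap on the exponent is purely a floating-point safeguard and only weakens the lower bound. It iterates $b_n = e^{b_{n-1}-3n}-1+b_{n-1}$ starting from $b_1=9$ and confirms $b_n\geq 3n+6$, with $b_n$ exploding after only a few steps:

```python
import math

def lower_bound_sequence(b1=9.0, steps=6):
    """Iterate b_n = exp(b_{n-1} - 3n) - 1 + b_{n-1}, a lower bound for a_n."""
    seq = [b1]
    for n in range(2, steps + 1):
        prev = seq[-1]
        # capping the exponent avoids float overflow and only lowers the bound
        seq.append(math.exp(min(prev - 3.0 * n, 700.0)) - 1.0 + prev)
    return seq

seq = lower_bound_sequence()
for n, bn in enumerate(seq, start=1):
    assert bn >= 3 * n + 6          # the inductive claim a_n >= 3n + 6
assert seq[-1] > 1e300              # the sequence diverges very rapidly
```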
\section*{Preamble} \hspace*{1.5em}The recto of the Rhind Mathematical Papyrus (RMP) {\bf \cite{Peet,Chace,Robins}} contains the so-called Egyptian $2/D$ table. The genesis of a project such as building this table will never really be known. The project is not as impressive as the construction of a pyramid or a temple, yet it was well and truly brought to completion. It is impossible to doubt that the pyramid works were carried out by a hierarchy of teams well organized in various specialties, a perfectly organized hierarchy that included team leaders and supervisors. \\ \hspace*{1.5em}It is not hard to imagine that a similarly structured organization was also used for the $2/D$ table. This table was not an exercise in style. It is imperative to keep in mind that it cannot be the work of a single scribe, but is surely the result of long periods of trials and improvements carried out by an elite team of scribes talented at calculation. As is well known from Plato's dialogues, the idea of a small number of scholars (philosophers) appears frequently. To these people alone was reserved the right to reflect on issues such as calculation or the study of numbers. Plato knew very well that this type of elite was present in the community of scribes of ancient Egypt. He was also aware of their very advanced knowledge in these areas, though without knowing all of its secrets. There is no reason today to reject the idea of an elite team, or even of a chief scribe empowered to make the final decisions.\\ Completing the table perhaps took more than a generation \footnote{The creative flash of an inspired scholar (ancient or modern) is short. What is generally much longer is the development of the idea and the achievement of the tools (theoretical or practical) necessary for its application. Of course, once the tools are polished, their use takes little time!}, in order to provide a satisfactory finished product.
In such a product nothing should have been left to chance: everything was deliberately chosen. This is not like a school exercise, where one decomposition rather than another may be used to solve a given problem.\\ \hspace*{1.5em}Once suitable methods of calculation were found, it became possible to look at ``the preliminary draft'' in its entirety. Such an overview is necessary in order to preserve an overall coherence. Some difficulties may thus be highlighted and resolved by a minimum of general decisions, as simple as possible. The number of potential solutions turns out to be considerably lower than the unrealistic {\sl ab initio} counts published in the modern literature {\bf \cite {Gillings,BruckSalom}}, namely 22295 or around 28000. We find that it is enough to consider only $71+71$ possibilities, whose results could then be examined before making consistent decisions. This is realistic. A team spirit is well suited to making obvious the need for a classification and for successive resolutions of the difficulties encountered as the project progressed. Directives given by a leader are implied. All these ideas have put us on the track of a comprehensive approach; they run as a filigree through our analysis.\\ \section{Data from the papyrus} \hspace*{1.5em}RMP is also well known by the name of its transcriber, the scribe Ahmes. The latter copied the document around 1650 BCE. The source, now lost, could date from the XIIth dynasty, a golden age of the Middle Kingdom. The RMP recto shows a table of $2$ divided by numbers $D$ from $5$ up to $101$, decomposed into ``unit fractions''. The number $3$ may be considered as implicitly included, because its decomposition is used in the verso for some problems, and it appears elsewhere in Papyrus Kahun {\bf \cite{Imhausen}}. This fact has been commented on pertinently by Abdulaziz {\bf \cite{Abdulaziz}}.
\\ \hspace*{1.5em}For prime $D$ only (except the number $101$), we present below a reordered excerpt from the $2/D$ table using {\textcolor{red}{our favorite red numbers $m$}}, which simply show the multiplier relating a denominator to $D$. Please note that {\it they are not the red auxiliary numbers used by Ahmes}, {\sl ie} those ``decoded" by Gardner {\bf \cite{GardnerMilo}}, but are related to the latter through the divisors of the first denominator $D_1$. \\ \begin{table}[htpb] \caption{ \small REORDERED $2/D$ TABLE FOR PRIME NUMBERS $D$} \begin{center} \small \begin{tabular}{c} \begin{tabular}{ccc} $\!\!\!\!\!$\begin{tabular}{|c|r|}\hline $\scriptstyle 2/D=1/D_1+1/D_2 \;\, \sf [2-terms] $ \\ \hline \hline $2/3=1/2+1/6\,_ {\textcolor{red}{2}}$ \\ \hline $2/5=1/3+1/15\,_{\textcolor{red}{ 3}}$ \\ \hline $2/7=1/4+1/28\,_ {\textcolor{red}{4}}$ \\ \hline $2/11=1/6+1/66\,_{\textcolor{red}{ 6}}$ \\ \hline $2/23=1/12+1/276\,_{\textcolor{red}{ {12}}}$ \\ \hline \end{tabular} & $\!\!\!\!\!$\begin{tabular}{|c|r}\hline $\scriptstyle 2/D=1/D_1+1/D_2+1/D_3\;\, \sf [3-terms] $ \\ \hline \hline $2/13=1/8+1/52\,_{\textcolor{red}{ 4}}+1/104\,_{\textcolor{red}{ 8}}$ \\ \hline $2/17=1/12+1/51\,_{\textcolor{red}{ 3}}+1/68\,_{\textcolor{red}{ 4}}$ \\ \hline $2/19=1/12+1/76\,_{\textcolor{red}{ 4}}+1/114\,_{\textcolor{red}{ 6}}$ \\ \hline $2/31=1/20+1/124\,_{\textcolor{red}{ 4}}+1/155\,_{\textcolor{red}{ 5}}$ \\ \hline $2/37=1/24+1/111\,_{\textcolor{red}{ 3}}+1/296\,_{\textcolor{red}{ 8}}$ \\ \hline $2/41=1/24+1/246\,_{\textcolor{red}{ 6}}+1/328\,_{\textcolor{red}{ 8}}$ \\ \hline $2/47=1/30+1/141\,_{\textcolor{red}{ 3}}+1/470\,_{\textcolor{red}{10}}$ \\ \hline $2/53=1/30+1/318\,_{\textcolor{red}{ 6}}+1/795\,_{\textcolor{red}{15}}$ \\ \hline $2/59=1/36+1/236\,_{\textcolor{red}{ 4}}+1/531\,_{\textcolor{red}{9}}$ \\ \hline $2/67=1/40+1/335\,_{\textcolor{red}{ 5}}+1/536\,_{\textcolor{red}{8}}$ \\ \hline $2/71=1/40+1/568\,_{\textcolor{red}{ 8}}+1/710\,_{\textcolor{red}{10}}$ \\ \hline
$2/97=1/56+1/679\,_{\textcolor{red}{ 7}}+1/776\,_{\textcolor{red}{8}}$ \\ \hline \end{tabular} & $\!\!\!\!\!$\begin{tabular}{|c|r|}\hline $\scriptstyle 2/D=1/D_1+1/D_2+1/D_3+1/D_4\;\, \sf[4-terms] $ \\ \hline \hline $2/29=1/24+1/58\,_{\textcolor{red}{ 2}}+1/174\,_{\textcolor{red}{ 6}}+1/232\,_{\textcolor{red}{ 8}}$ \\ \hline $2/43=1/42+1/86\,_{\textcolor{red}{ 2}}+1/129\,_{\textcolor{red}{ 3}}+1/301\,_{\textcolor{red}{ 7}}$ \\ \hline $2/61=1/40+1/244\,_{\textcolor{red}{ 4}}+1/488\,_{\textcolor{red}{ 8}}+1/610\,_{\textcolor{red}{ 10}}$ \\ \hline $2/73=1/60+1/219\,_{\textcolor{red}{ 3}}+1/292\,_{\textcolor{red}{ 4}}+1/365\,_{\textcolor{red}{ 5}}$ \\ \hline $2/79=1/60+1/237\,_{\textcolor{red}{ 3}}+1/316\,_{\textcolor{red}{ 4}}+1/790\,_{\textcolor{red}{ 10}}$ \\ \hline $2/83=1/60+1/332\,_{\textcolor{red}{ 4}}+1/415\,_{\textcolor{red}{ 5}}+1/498\,_{\textcolor{red}{ 6}}$ \\ \hline $2/89=1/60+1/356\,_{\textcolor{red}{ 4}}+1/534\,_{\textcolor{red}{ 6}}+1/890\,_{\textcolor{red}{ 10}}$ \\ \hline \end{tabular} \end{tabular} \end{tabular} \end{center} \label{Papyrus} \end{table} \pagebreak \section{Outlines of a global approach} \label{PART 0} Actually the whole $2/D$ project can be viewed as a 3-component (or 3-phase, if you like) set. \sf FIRST: \rm discovery of a unique [2-terms] solution, if $D$ is a prime number.\\ \sf SECOND: \rm for a {\sf sub-project [composite numbers] }from $9$ up to $99$, realize that a mini-table, with just four numbers, makes it possible to derive all the composite numbers by a \sl multiplicative \rm operation \footnote{Idea already suggested by Gillings {\bf \cite{Gillings}}}. \\ Four numbers, \sf 3, 5, 7, 11 \rm are enough. For instance $99$ is reached with \sf{3}\rm $ \times \mathit {33} $ or \sf{11}\rm $ \times \mathit {9}$.
This mini-table, a kind of 'Mother-table', looks as follows: \\ \begin{table}[htp] \caption{ \sf Basic Mother-Table} \begin{center} \begin{tabular}{|l|} \hline $2/3=1/2+1/6\,_ {\textcolor{red}{2}}$ \\ [0.01in] $2/5=1/3+1/15\,_{\textcolor{red}{ 3}}$ \\ [0.01in] $2/7=1/4+1/28\,_ {\textcolor{red}{4}}$ \\ [0.01in] $\cdots\cdots\cdots\cdots\cdots\cdots\cdot$ \\ [0.01in] $2/11=1/6+1/66\,_{\textcolor{red}{ 6}}$ \\ [0.01in] \hline \end{tabular} \end{center} \label{MotherTable} \end{table} One sees the first four two-term decompositions of $2/D$. Each $D$ being prime, the table is \sf unique. \rm\\ In `theory', {\sl unless a better decision should be taken,} any fraction $2/D$ ($D$ composite) could be decomposed from this table by dividing a given row by a convenient number. Consider an example: $2/65\!=\mbox{\sf [ (row 2 )/ (number 13) ]}=\!1/39+1/195\,_{\textcolor{red}{3 }} $, which is the solution adopted in the papyrus. As a matter of fact, all decompositions for the {\sf sub-project} were given in two terms (except for {\sf 2/95}, a logical consequence of the guidelines adopted by the scribes, which we will justify properly later) \footnote{ All the Egyptian decompositions for composite numbers are analyzed in our second paper {\bf \cite{Brehamet}}} .\\ \hspace*{1.5em}As the `Mother-table' needs no value higher than $11$ for the {\sf sub-project}, we can better understand that, from $13$ onwards, it could have been decided to abandon decompositions into 2 terms. \\ {\sf THIRD:} nothing then stands in the way of starting the main part of the whole project, namely decompositions into 3 terms (or 4 if necessary) for all prime numbers from 13 up to 97.\\ The study carried out in this paper is devoted to the third phase.
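The multiplicative rule just described is easy to state in modern terms. Below is a minimal Python sketch, entirely ours and purely illustrative (the helper function and the dictionary layout are our own assumptions, not anything attested in the papyrus):

```python
from fractions import Fraction

# The four rows of the 'Mother-table': 2/p = 1/a + 1/b for p = 3, 5, 7, 11.
MOTHER = {3: (2, 6), 5: (3, 15), 7: (4, 28), 11: (6, 66)}

def decompose_composite(D):
    """Divide a mother row by k to decompose 2/D for a composite D = p*k."""
    for p, (a, b) in MOTHER.items():
        if D != p and D % p == 0:
            k = D // p
            return (a * k, b * k)          # 2/D = 1/(a k) + 1/(b k)
    raise ValueError("no factor of D in the mother table")

# Example from the papyrus: 2/65 = [(row 2)/(number 13)] = 1/39 + 1/195
D1, D2 = decompose_composite(65)
assert Fraction(1, D1) + Fraction(1, D2) == Fraction(2, 65)
```

For $D=65$ this reproduces the two-term split $1/39+1/195$ quoted above; for $D=99$ the rule picks the row of $3$ (since $99=3\times 33$) and yields $1/66+1/198$.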
\subsection{General presentation} \label{PART I} \setcounter{equation}{0} \renewcommand{\theequation }{\Roman{section}.\arabic{equation}} \hspace*{1.5em}We could have presented the problems in the Egyptian manner, as Abdulaziz did {\bf \cite{Abdulaziz}}, for example $47 \quad \overline{30}\quad \overline{141}\quad \overline{470}\quad \mbox{which means}\quad 2/47=1/30+1/141+1/470$, but we preferred a modern way, more easily understandable to us today. This does not change the spirit in which we reasoned. Consider $D$ as given; $D_1$ is an unknown value to be found. Assume now that $d_2$, $d_3$, $d_4$ are distinct divisors of $D_1$, with $d_2> d_3> d_4$. These too are unknowns to be found.\\ In order to standardize the notations, $D$ is used for {\sf D}enominators and $d$ for {\sf d}ivisors.\\ Look at the following (modern) equations that {\sf decompose the 'unity'} into $3$ or $4$ parts: \begin{equation} \mathbf{1}= \frac{D}{2D_1}+ \frac{d_2}{2D_1}+ \frac{d_3}{2D_1}. \end{equation} \begin{equation} \mathbf{1}= \frac{D}{2D_1}+ \frac{d_2}{2D_1}+ \frac{d_3}{2D_1}+ \frac{d_4}{2D_1}. \end{equation} \vspace{0.2em} From another standpoint, this can be viewed as additive operations on integers: \begin{equation} 2D_1={D}+ d_2+ d_3. \label{eq:additive3} \end{equation} \begin{equation} 2D_1={D}+ d_2+ d_3+ d_4. \label{eq:additive4} \end{equation} Since $d_2$, $d_3$, $d_4$ divide $D_1$, we are sure to find Egyptian decompositions. Indeed, dividing by $DD_1$, we always get sums of unit fractions: \begin{equation} \frac{2}{D}= \frac{1}{D_1}+ \frac{1}{(D_1/d_2)D}+ \frac{1}{(D_1/d_3)D}. \label{eq:FEgypt3} \end{equation} \begin{equation} \frac{2}{D}= \frac{1}{D_1}+ \frac{1}{(D_1/d_2)D}+ \frac{1}{(D_1/d_3)D}+ \frac{1}{(D_1/d_4)D}.
\label{eq:FEgypt4} \end{equation} This method was apparently followed {\bf \cite {GardnerMilo}} in the RMP table for prime numbers $D$ from $13$ up to $97$.\\ As can be seen, except for $D_1$, all denominators in each equation appear as multiples of $D$, namely \begin{equation} D_i=m_i D, \mbox{\hspace{0.5em}where}\hspace{0.5em}m_i=(D_1/d_i). \label{eq:Relationmd} \end{equation} Let us briefly summarize the possibilities as follows: \begin{equation} \frac{2}{D}= \frac{1}{D_1}+ \frac{1}{D_2}+ \frac{1}{D_3}. \label{eq:Egypt3} \end{equation} \begin{equation} \frac{2}{D}= \frac{1}{D_1}+ \frac{1}{D_2}+ \frac{1}{D_3}+ \frac{1}{D_4}. \label{eq:Egypt4} \end{equation} The main task consists in the determination of $D_1$ and the convenient choice of the $d_i$, from the additive equations (\ref{eq:additive3}) or (\ref{eq:additive4}). The $d_i$'s are the red auxiliary numbers used by the scribe Ahmes. \begin{equation} d_i=\frac{D_1}{m_i}. \label{eq:Ahmesdi} \end{equation} \section{[2-terms] analysis} \label{TwoTerms} \setcounter{equation}{0} \begin{equation} \frac{2}{D}= \frac{1}{D_1}+ \frac{1}{D_2}. \label{eq:Egypt2} \end{equation} Our only comment (an admiring one) is that the scribes actually found the unique right solution to the problem, namely \begin{equation} D_1=\frac{D+1}{2}\quad \mbox{and}\quad D_2=\frac{D(D+1)}{2}, \end{equation} for which indeed $\frac{1}{D_1}+\frac{1}{D_2}=\frac{2}{D+1}+\frac{2}{D(D+1)}=\frac{2}{D}$. \section{[3-terms] analysis} \label{ThreeTerms} \setcounter{equation}{0} \sf Consider now the [3-terms] cases.
Egyptians gave:\\ \rm \vspace{1em} \begin{tabular}{ccll} \begin{tabular}{|c|} \hline \tt Ahmes's selections \rm [3-terms]\\ [0.01in] \hline $2/13=1/8+1/52\,_{\textcolor{red}{ 4}}+1/104\,_{\textcolor{red}{ 8}}$ \\ [0.01in] \hline $2/17=1/12+1/51\,_{\textcolor{red}{ 3}}+1/68\,_{\textcolor{red}{ 4}}$ \\ [0.01in] \hline $2/19=1/12+1/76\,_{\textcolor{red}{ 4}}+1/114\,_{\textcolor{red}{ 6}}$ \\ [0.01in] \hline $2/31=1/20+1/124\,_{\textcolor{red}{ 4}}+1/155\,_{\textcolor{red}{ 5}}$ \\ [0.01in] \hline $2/37=1/24+1/111\,_{\textcolor{red}{ 3}}+1/296\,_{\textcolor{red}{ 8}}$ \\ [0.01in] \hline $2/41=1/24+1/246\,_{\textcolor{red}{ 6}}+1/328\,_{\textcolor{red}{ 8}}$ \\ [0.01in] \hline $2/47=1/30+1/141\,_{\textcolor{red}{ 3}}+1/470\,_{\textcolor{red}{10}}$ \\ [0.01in] \hline $2/53=1/30+1/318\,_{\textcolor{red}{ 6}}+1/795\,_{\textcolor{red}{15}}$ \\ [0.01in] \hline $2/59=1/36+1/236\,_{\textcolor{red}{ 4}}+1/531\,_{\textcolor{red}{9}}$ \\ [0.01in] \hline $2/67=1/40+1/335\,_{\textcolor{red}{ 5}}+1/536\,_{\textcolor{red}{8}}$ \\ [0.01in] \hline $2/71=1/40+1/568\,_{\textcolor{red}{ 8}}+1/710\,_{\textcolor{red}{10}}$ \\ [0.01in] \hline $2/97=1/56+1/679\,_{\textcolor{red}{ 7}}+1/776\,_{\textcolor{red}{8}}$ \\ [0.01in] \hline \end{tabular} & \begin{tabular}{c} $\Leftarrow$ \end{tabular} & \begin{tabular}{|c|} \hline \tt Unity decomposition\\ [0.01in] \hline $16 = 13 + 2 + 1_{}$ \\ [0.01in] \hline $24 = 17 + 4 + 3_{}$ \\ [0.01in] \hline $24 = 19 + 3 + 2_{}$ \\ [0.01in] \hline $40= 31 + 5 + 4_{}$ \\ [0.01in] \hline $48 = 37 + 8 + 3_{}$ \\ [0.01in] \hline $48 = 41 + 4 + 3_{}$ \\ [0.01in] \hline $60 = 47 + 10 + 3_{}$ \\ [0.01in] \hline $60 = 53 + 5 + 2_{}$ \\ [0.01in] \hline $72 = 59 + 9 + 4_{}$ \\ [0.01in] \hline $80 = 67 + 8 + 5_{}$ \\ [0.01in] \hline $80 = 71 + 5 + 4_{}$ \\ [0.01in] \hline $112 = 97 + 8 + 7_{}$ \\ [0.01in] \hline \end{tabular} & \begin{tabular}{l} . 
\end{tabular} \end{tabular} \\ \vspace{1em} \sf The task of finding $D_1$ is rather simple, once one realizes that it is enough to establish a table of odd numbers $(2n+1)_{|n\geq 1}$ written as sums of two numbers $ d_2 +d_3$, with $d_2>d_3$. This is easy to do and independent of any context. The table contains $n$ doublets \{$d_2, d_3$\} and $\sup(d_2)=2n$. One can start with the lowest values as follows: $d_3=1, d_2=2,4,6, \cdots; d_3=2, d_2=3,5,7, \cdots$ and so on.\rm\\ From Eq.(\ref{eq:additive3}), the first possible candidate for $D_1$ starts at the initial value $D_1^0=(D+1 )/2$, as in Fibonacci's studies {\bf \cite{Fibonacci}}. We can search for general solutions of the form \begin{equation} D_1^n=D_1^0 + n, \end{equation} whence \begin{equation} 2D_1^n-{D}= 2n+1 =d_2+ d_3. \label{eq:additive3bis} \end{equation} Since one of the two divisors \{$d_2,d_3$\} of $D_1$ is even, $D_1$ cannot be odd: it must be even. This was rightly stressed by Bruins {\bf \cite{Bruins}}. From the first table of doublets, a new table (of trials) is built, where this time doublets are selected if $d_2,d_3$ divide $[(D+d_2+d_3)/2]$. This provides a possible $D_1^n$. In this favorable case, $D_3$ is first calculated as $DD_1/d_3$, then $D_2$ as $DD_1/d_2$.\\ For $D$ given, the table of trials defined by the equation just below \begin{equation} \mathtt{2n+1=d_2+d_3}, \mbox{\hspace{0.5em} where $d_2$ and $d_3$ divide $D_1^n$ }, \label{eq:dividers3terms} \end{equation} is bounded by an $n_{max}$ \footnote{It can be proved that no solution can be found beyond $n =(D-3)/2$.}. For simplicity, in our tables $D_1^n$ will not be written as $D_1^n (d_2,d_3)$.\\ Even by hand, building this table takes little time. For example, decompositions into 3 terms lead to a total of only $71$ trial possibilities! Given this low number, it is conceivable to present all results according to an appropriate parameter.
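The trial procedure just described is easy to mechanize. The following Python sketch is a modern reconstruction of ours, purely illustrative (the function name and output format are arbitrary): it enumerates the doublets $\{d_2,d_3\}$ with $d_2+d_3=2n+1$, keeps those whose members both divide $D_1^n=(D+2n+1)/2$, and returns the resulting decompositions together with $\Delta_d=d_2-d_3$:

```python
from fractions import Fraction

def trials(D):
    """List all [3-terms] trial decompositions 2/D = 1/D1 + 1/D2 + 1/D3."""
    results = []
    for n in range(1, (D - 3) // 2 + 1):      # no solution beyond n = (D-3)/2
        D1 = (D + 2 * n + 1) // 2             # candidate D1^n
        for d3 in range(1, n + 1):
            d2 = 2 * n + 1 - d3               # so that d2 + d3 = 2n + 1, d2 > d3
            if D1 % d2 == 0 and D1 % d3 == 0:
                D2, D3 = D * D1 // d2, D * D1 // d3
                # sanity check with exact rational arithmetic
                assert Fraction(1, D1) + Fraction(1, D2) + Fraction(1, D3) == Fraction(2, D)
                results.append((D1, D2, D3, d2 - d3))   # last entry is Delta_d
    return results

# Ahmes's selection for 2/13 shows up among the trials with Delta_d = 1:
assert (8, 52, 104, 1) in trials(13)
```

Note that the evenness of $D_1$ need not be imposed explicitly: one of $d_2,d_3$ is always even, so an odd $D_1$ fails the divisibility test automatically.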
Once a $d_3$ is found, a good idea is to select a $d_2$ as close as possible to $d_3$. This provides a type of classification never glimpsed before, to our knowledge. Thus, a {\sf key parameter} of our paper is defined as follows: \begin{equation} \Delta_{d}= d_2 -d_3. \end{equation} {\sf Remarks: Clearly Eq. (\ref{eq:dividers3terms}) is related to Bruins's method of redistribution of the ``parts'' $d_2,d_3$ {\bf \cite{Bruins}}. However, our method is {\sl `artisanal'} and does not require knowledge of the arithmetic properties of $D_1$. Once $D$ is given, the $D_1^n$ are found by trials, without calculations. This is unlike Bruins, who sought certain forms of $D_1$ in order to then find possible values of $D$. The approach is quite different, as are the reasons justifying the Egyptian choices.}\\ \vspace{0.2em} \hspace*{2.5em}{\sf Although our conceptual formalism is different from that of Abdulaziz {\bf \cite{Abdulaziz}}, we (fortunately) found some similarities, but also elements with no counterpart in our work. A welcome unison is the following:\\ Let us consider his fractional parameter $[R]$, which is crucial for all his analyses. In our notations we find } \begin{equation} D_1 [R] = (2D_1 - D) = 2n+1 =d_2+ d_3, \end{equation} or, equivalently expressed, \begin{equation} [R] =\frac{1}{(D_1/ d_2)}+\frac{1}{(D_1/ d_3)}. \end{equation} {\sf When it is said that ``{\sl ... keeping the terms of $[R]$ less than $10$ was an essential part of determining how $2$:$n$ is to be decomposed.}'', this should be understood as $(D_1/ d_3)\leq 10$, formulated for us as the condition (\ref{eq:ConditionD1_3}) with a Top-flag $\mbox{\boldmath $\top$ }\!\!_ f^{\;\;[3]}=10$. }(See below for our Top-flag definition)\\ {\sf However, note that the `necessity' of our Top-flag comes directly from the value of $D$, without constituting a check on $D_1$; that only follows from Eq. (\ref{eq:TopFlag3}).\\ \hspace*{1.5em}In contrast, the parameter $[Q]$, defined in Ref.
{\bf \cite{Abdulaziz}} by $[Q]=1-[R]$, does not appear in our work and plays no role in our analyses. In addition, since the impact of the closeness ($\Delta_{d}$) does not seem to have been appreciated there, it is clear that our argumentation will generally be different, even if, for some 'easy' cases, we agree.} \hspace*{1.5em}In short, to produce their final table, we assume that the scribes analyzed all preliminary trial results before making their choice among the various alternatives, considered in their totality, not individually. \\ Furthermore, {\sf owing to the decimal numeration used by the ancient Egyptians}, one can easily understand that a boundary with a Top-flag $\mbox{\boldmath $\top$ }\!\!_ f^{\;\;[3]}$ \rm for the last denominator was chosen with a priority value equal to $\mathbf {10}$ (if possible, according to the results given by the trials). \\ \hspace*{1.5em}The idea of a Top-flag is far from being a {\it `deus ex machina'}. It arises naturally if we try to solve the problem of decomposition in full generality. See {\sf Appendix A} for more details. \\ The chief scribe wisely decided to impose an upper bound on all the denominators $D_3$, such that \begin{equation} D_3 \leq D \mbox{\boldmath $\top$ }\!\!_ f^{\;\;[3]} . \label{eq:TopFlag3} \end{equation} This cut-off beyond $\mbox{\boldmath $\top$ }\!\!_ f^{\;\;[3]}$ is equivalent to a mathematical condition on $D_1$: \begin{equation} D_1 \leq d_3\,\mbox{\boldmath $\top$ }\!\!_ f^{\;\;[3]} . \label{eq:ConditionD1_3} \end{equation} \hspace*{1.5em}Remark that this condition might be exploited \sf from the beginning \rm of the calculations, so as to avoid handling too large denominators $D_3$.
Simply find $d_3$ and $d_2$, then calculate $D_1$; if condition (\ref{eq:ConditionD1_3}) is not fulfilled, quit without calculating $D_3$, $D_2$, and go on to the next values of $d_3$, $d_2$, $D_1$, and so on.\\ Actually, if we follow the method of trials for finding the good choices in the order $d_3 \rightarrow d_2 \rightarrow D_1$, we are naturally led to pay attention to the closeness of $d_2$ and $d_3$, measured by $\Delta_{d}$. This suggests the idea of a classification according to increasing values of $\Delta_{d}$. \\ Since this classification seriously illuminates many solutions chosen by the scribes, it is not impossible to imagine that this `{\sl artisan method}' was actually followed. This is a plausible hypothesis, though obviously not a proof. Another advantage is that a similar classification can be applied to the decompositions into 4 terms with the same success, see Sect. \ref{FourTerms}.\\ The symbol $^{Eg}$ will be used to indicate Egyptian selections in our tables.\\ Let us now display a preliminary table of trials, see Table \ref{Tble3terms71}.\\ \begin{table} [htp] \caption{\sf Table of trials [3-terms] with increasing order of $\Delta_{d}$, only 71 possibilities!
} \begin{center} \scriptsize \begin{tabular}{|l|c||l||l||c|l|l|} \hline \multicolumn{7}{|c|}{\sf Table of trials [3-terms] with increasing order of $\Delta_{d}$ }\\ \hline $n$ & $2n+1$ & $d_2$ & $d_3$ & $\textcolor{red}{\Delta_{d}}$ & $D_1^n$ & Possible [3-terms] decompositions \\ [0.01in] \hline \hline $1$ & $3$ & $2$ & $1$ & $\mathbf{\textcolor{red}{ 1}}$& $8$ & $ \mathbf {2/13=1/8+1/52\,_{\textcolor{red}{ 4}}+1/104\,_{\textcolor{red}{ 8}}}\;\, ^{Eg}$ \\ \hline $1$ & $3$ & $2$ & $1$ & $\mathbf{\textcolor{red}{ 1}}$& $10$ & $ \mathbf {2/17\mathit{_a}=1/10+1/85\,_{\textcolor{red}{ 5}}+1/170\,_{\textcolor{red}{ 10}}}$ \\ \hline $3$ & $7$ & $4$ & $3$ & $\mathbf{\textcolor{red}{ 1}}$& $12$ & $ \mathbf {2/17\mathit{_b}=1/12+1/51\,_{\textcolor{red}{ 3}}+1/68\,_{\textcolor{red}{ 4}}}\;\, ^{Eg}$ \\ \hline $2$ & $5$ & $3$ & $2$ & $\mathbf{\textcolor{red}{ 1}}$& $12$ & $\mathbf {2/19=1/12+1/76\,_{\textcolor{red}{ 4}}+1/114\,_{\textcolor{red}{ 6}}}\;\, ^{Eg}$ \\ \hline $1$ & $3$ & $2$ & $1$ & $\mathbf{\textcolor{red}{ 1}}$& $16$ & $ \mathbf {2/29=1/16+1/232\,_{\textcolor{red}{ 8}}+1/464\,_{\textcolor{red}{ 16}} }$ \\ \hline $2$ & $5$ & $3$ & $2$ & $\mathbf{\textcolor{red}{ 1}}$& $18$ & $ \mathbf {2/31\mathit{_a}=1/18+1/186\,_{\textcolor{red}{ 6}}+1/279\,_{\textcolor{red}{ 9}}}$ \\ \hline $4$ & $9$ & $5$ & $4$ & $\mathbf{\textcolor{red}{ 1}}$& $20$ & $ \mathbf {2/31\mathit{_b}=1/20+1/124\,_{\textcolor{red}{ 4}}+1/155\,_{\textcolor{red}{ 5}}}\;\, ^{Eg}$ \\ \hline $1$ & $3$ & $2$ & $1$ & $\mathbf{\textcolor{red}{ 1}}$& $20$ & $ \mathbf {2/37=1/20+1/370\,_{\textcolor{red}{ 10}}+1/740\,_{\textcolor{red}{ 20}}}$ \\ \hline $1$ & $3$ & $2$ & $1$ & $\mathbf{\textcolor{red}{ 1}}$& $22$ & $ \mathbf {2/41\mathit{_a}=1/22+1/451\,_{\textcolor{red}{ 11}}+1/902\,_{\textcolor{red}{ 22}}}$ \\ \hline $3$ & $7$ & $4$ & $3$ & $\mathbf{\textcolor{red}{ 1}}$& $24$ & $ \mathbf {2/41\mathit{_b}=1/24+1/246\,_{\textcolor{red}{ 6}}+1/328\,_{\textcolor{red}{ 8}}}\;\, ^{Eg}$ \\ \hline $2$ & $5$ & $3$ & 
$2$ & $\mathbf{\textcolor{red}{ 1}}$& $24$ & $ \mathbf {2/43=1/24+1/344\,_{\textcolor{red}{ 8}}+1/516\,_{\textcolor{red}{ 12}} }$ \\ \hline $1$ & $3$ & $2$ & $1$ & $\mathbf{\textcolor{red}{ 1}}$& $28$ & $ \mathbf {2/53=1/28+1/742\,_{\textcolor{red}{ 14}}+1/1484\,_{\textcolor{red}{ 28}}} $ \\ \hline $1$ & $3$ & $2$ & $1$ & $\mathbf{\textcolor{red}{ 1}}$& $32$ & $ \mathbf {2/61=1/32+1/976\,_{\textcolor{red}{ 16}}+1/1952\,_{\textcolor{red}{ 32}} }$ \\ \hline $2$ & $5$ & $3$ & $2$ & $\mathbf{\textcolor{red}{ 1}}$& $36$ & $ \mathbf {2/67=1/36+1/804\,_{\textcolor{red}{ 12}}+1/1206\,_{\textcolor{red}{ 18}}} $\\ \hline $4$ & $9$ & $5$ & $4$ & $\mathbf{\textcolor{red}{ 1}}$& $40$ & $ \mathbf {2/71\mathit{_a}=1/40+1/568\,_{\textcolor{red}{ 8}}+1/710\,_{\textcolor{red}{ 10}} }\;\, ^{Eg}$ \\ \hline $6$ & $13$ & $7$ & $6$ & $\mathbf{\textcolor{red}{ 1}}$& $42$ & $ \mathbf {2/71\mathit{_b}=1/42+1/426\,_{\textcolor{red}{ 6}}+1/497\,_{\textcolor{red}{ 7}}}$ \\ \hline $1$ & $3$ & $2$ & $1$ & $\mathbf{\textcolor{red}{ 1}}$& $38$ & $ \mathbf {2/73=1/38+1/1387\,_{\textcolor{red}{ 19}}+1/2774\,_{\textcolor{red}{ 38}} }$ \\ \hline $2$ & $5$ & $3$ & $2$ & $\mathbf{\textcolor{red}{ 1}}$& $42$ & $ \mathbf {2/79=1/42+1/1106\,_{\textcolor{red}{ 14}}+1/1659\,_{\textcolor{red}{ 21}} }$ \\ \hline $1$ & $3$ & $2$ & $1$ & $\mathbf{\textcolor{red}{ 1}}$& $46$ & $ \mathbf {2/89\mathit{_a}=1/46+1/2047\,_{\textcolor{red}{ 23}}+1/4094\,_{\textcolor{red}{ 46}} }$ \\ \hline $3$ & $7$ & $4$ & $3$ & $\mathbf{\textcolor{red}{ 1}}$& $48$ & $ \mathbf {2/89\mathit{_b}=1/48+1/1068\,_{\textcolor{red}{ 12}}+1/1424\,_{\textcolor{red}{ 16}} }$ \\ \hline $1$ & $3$ & $2$ & $1$ & $\mathbf{\textcolor{red}{ 1}}$& $50$ & $ \mathbf {2/97\mathit{_a}=1/50+1/2425\,_{\textcolor{red}{ 25}}+1/4850\,_{\textcolor{red}{ 50}}} $ \\ \hline $7$ & $15$ & $8$ & $7$ & $\mathbf{\textcolor{red}{ 1}}$& $56$ & $ \mathbf {2/97\mathit{_b}=1/56+1/679\,_{\textcolor{red}{ 7}}+1/776\,_{\textcolor{red}{ 8}}}\;\, ^{Eg}$ \\ \hline \hline $3$ & $7$
& $5$ & $2$ & $\mathbf{\textcolor{red}{ 3}}$& $10$ & $ \mathbf {2/13=1/10+1/26\,_{\textcolor{red}{ 2}}+1/65\,_{\textcolor{red}{ 5}}}$ \\ \hline $2$ & $5$ & $4$ & $1$ & $\mathbf{\textcolor{red}{ 3}}$& $12$ & $ \mathbf {2/19=1/12+1/57\,_{\textcolor{red}{ 3}}+1/228\,_{\textcolor{red}{ 12}}}$ \\ \hline $2$ & $5$ & $4$ & $1$ & $\mathbf{\textcolor{red}{ 3}}$& $24$ & $ \mathbf {2/43=1/24+1/258\,_{\textcolor{red}{ 6}}+1/1032\,_{\textcolor{red}{ 24}} }$ \\ \hline $3$ & $7$ & $5$ & $2$ & $\mathbf{\textcolor{red}{ 3}}$& $30$ & $ \mathbf {2/53=1/30+1/318\,_{\textcolor{red}{ 6}}+1/795\,_{\textcolor{red}{ 15}}}\;\, ^{Eg}$ \\ \hline $2$ & $5$ & $4$ & $1$ & $\mathbf{\textcolor{red}{ 3}}$& $32$ & $ \mathbf {2/59=1/32+1/472\,_{\textcolor{red}{ 8}}+1/1888\,_{\textcolor{red}{ 32}}} $ \\ \hline $2$ & $5$ & $4$ & $1$ & $\mathbf{\textcolor{red}{ 3}}$& $36$ & $ \mathbf {2/67\mathit{_a}=1/36+1/603\,_{\textcolor{red}{ 9}}+1/2412\,_{\textcolor{red}{ 36}}} $\\ \hline $6$ & $13$ & $8$ & $5$ & $\mathbf{\textcolor{red}{ 3}}$& $40$ & $ \mathbf {2/67\mathit{_b}=1/40+1/335\,_{\textcolor{red}{ 5}}+1/536\,_{\textcolor{red}{ 8}}}\;\, ^{Eg}$ \\ \hline $3$ & $7$ & $5$ & $2$ & $\mathbf{\textcolor{red}{ 3}}$& $40$ & $ \mathbf {2/73=1/40+1/584\,_{\textcolor{red}{ 8}}+1/1460\,_{\textcolor{red}{ 20}} }$ \\ \hline $2$ & $5$ & $4$ & $1$ & $\mathbf{\textcolor{red}{ 3}}$& $44$ & $ \mathbf {2/83=1/44+1/913\,_{\textcolor{red}{ 11}}+1/3652\,_{\textcolor{red}{ 44}} }$ \\ \hline \hline $3$ & $7$ & $6$ & $1$ & $\mathbf{\textcolor{red}{ 5}}$& $12$ & $ \mathbf {2/17=1/12+1/34\,_{\textcolor{red}{ 2}}+1/204\,_{\textcolor{red}{ 12}}}$ \\ \hline $4$ & $9$ & $7$ & $2$ & $\mathbf{\textcolor{red}{ 5}}$& $14$ & $ \mathbf {2/19=1/14+1/38\,_{\textcolor{red}{ 2}}+1/133\,_{\textcolor{red}{ 7}}}$ \\ \hline $3$ & $7$ & $6$ & $1$ & $\mathbf{\textcolor{red}{ 5}}$& $18$ & $ \mathbf {2/29=1/18+1/87\,_{\textcolor{red}{ 3}}+1/522\,_{\textcolor{red}{ 18}} }$ \\ \hline $5$ & $11$ & $8$ & $3$ & $\mathbf{\textcolor{red}{ 5}}$& $24$ & $
\mathbf {2/37=1/24+1/111\,_{\textcolor{red}{ 3}}+1/296\,_{\textcolor{red}{ 8}}}\;\, ^{Eg}$ \\ \hline $3$ & $7$ & $6$ & $1$ & $\mathbf{\textcolor{red}{ 5}}$& $24$ & $ \mathbf {2/41=1/24+1/164\,_{\textcolor{red}{ 4}}+1/984\,_{\textcolor{red}{ 24}}}$ \\ \hline $4$ & $9$ & $7$ & $2$ & $\mathbf{\textcolor{red}{ 5}}$& $28$ & $ \mathbf {2/47=1/28+1/188\,_{\textcolor{red}{ 4}}+1/658\,_{\textcolor{red}{ 14}}}$ \\ \hline $3$ & $7$ & $6$ & $1$ & $\mathbf{\textcolor{red}{ 5}}$& $30$ & $ \mathbf {2/53=1/30+1/265\,_{\textcolor{red}{ 5}}+1/1590\,_{\textcolor{red}{ 30}}} $ \\ \hline $6$ & $13$ & $9$ & $4$ & $\mathbf{\textcolor{red}{ 5}}$& $36$ & $ \mathbf {2/59=1/36+1/236\,_{\textcolor{red}{ 4}}+1/531\,_{\textcolor{red}{ 9}}}\;\, ^{Eg}$ \\ \hline $3$ & $7$ & $6$ & $1$ & $\mathbf{\textcolor{red}{ 5}}$& $48$ & $ \mathbf {2/89=1/48+1/712\,_{\textcolor{red}{ 8}}+1/4272\,_{\textcolor{red}{ 48}} }$ \\ \hline \hline $4$ & $9$ & $8$ & $1$ & $\mathbf{\textcolor{red}{ 7}}$& $16$ & $ \mathbf {2/23=1/16+1/46\,_{\textcolor{red}{ 2}}+1/368\,_{\textcolor{red}{ 16}}}$ \\ \hline $6$ & $13$ & $10$ & $3$ & $\mathbf{\textcolor{red}{ 7}}$& $30$ & $ \mathbf {2/47=1/30+1/141\,_{\textcolor{red}{ 3}}+1/470\,_{\textcolor{red}{ 10}}}\;\, ^{Eg}$ \\ \hline $5$ & $11$ & $9$ & $2$ & $\mathbf{\textcolor{red}{ 7}}$& $36$ & $ \mathbf {2/61=1/36+1/244\,_{\textcolor{red}{ 4}}+1/1098\,_{\textcolor{red}{ 18}} }$ \\ \hline $4$ & $9$ & $8$ & $1$ & $\mathbf{\textcolor{red}{ 7}}$& $40$ & $ \mathbf {2/71=1/40+1/355\,_{\textcolor{red}{ 5}}+1/2840\,_{\textcolor{red}{ 40}}} $ \\ \hline $7$ & $15$ & $11$ & $4$ & $\mathbf{\textcolor{red}{ 7}}$& $44$ & $ \mathbf {2/73=1/44+1/292\,_{\textcolor{red}{ 4}}+1/803\,_{\textcolor{red}{ 11}} }$ \\ \hline $5$ & $11$ & $9$ & $2$ & $\mathbf{\textcolor{red}{ 7}}$& $54$ & $ \mathbf {2/97=1/54+1/582\,_{\textcolor{red}{ 6}}+1/2619\,_{\textcolor{red}{ 27}}} $ \\ \hline \hline $5$ & $11$ & $10$ & $1$ & $\mathbf{\textcolor{red}{ 9}}$& $20$ & $ \mathbf {2/29=1/20+1/58\,_{\textcolor{red}{ 
2}}+1/580\,_{\textcolor{red}{ 20}} }$ \\ \hline $6$ & $13$ & $11$ & $2$ & $\mathbf{\textcolor{red}{ 9}}$& $22$ & $ \mathbf {2/31=1/22+1/62\,_{\textcolor{red}{ 2}}+1/341\,_{\textcolor{red}{ 11}}}$ \\ \hline $5$ & $11$ & $10$ & $1$ & $\mathbf{\textcolor{red}{ 9}}$& $50$ & $ \mathbf {2/89=1/50+1/445\,_{\textcolor{red}{ 5}}+1/4450\,_{\textcolor{red}{ 50}} }$ \\ \hline \hline $7$ & $15$ & $13$ & $2$ & $\mathbf{\textcolor{red}{ 11}}$& $26$ & $ \mathbf {2/37=1/26+1/74\,_{\textcolor{red}{ 2}}+1/481\,_{\textcolor{red}{ 13}}}$ \\ \hline $6$ & $13$ & $12$ & $1$ & $\mathbf{\textcolor{red}{ 11}}$& $36$ & $ \mathbf {2/59=1/36+1/177\,_{\textcolor{red}{ 3}}+1/2124\,_{\textcolor{red}{ 36}}} $ \\ \hline $8$ & $17$ & $14$ & $3$ & $\mathbf{\textcolor{red}{ 11}}$& $42$ & $ \mathbf {2/67=1/42+1/201\,_{\textcolor{red}{ 3}}+1/938\,_{\textcolor{red}{ 14}}}$ \\ \hline $6$ & $13$ & $12$ & $1$ & $\mathbf{\textcolor{red}{ 11}}$& $48$ & $ \mathbf {2/83=1/48+1/332\,_{\textcolor{red}{ 4}}+1/3984\,_{\textcolor{red}{ 48}} }$ \\ \hline $7$ & $15$ & $13$ & $2$ & $\mathbf{\textcolor{red}{ 11}}$& $52$ & $ \mathbf {2/89=1/52+1/356\,_{\textcolor{red}{ 4}}+1/2314\,_{\textcolor{red}{ 26}} }$ \\ \hline \hline $7$ & $15$ & $14$ & $1$ & $\mathbf{\textcolor{red}{ 13}}$& $28$ & $ \mathbf {2/41=1/28+1/82\,_{\textcolor{red}{ 2}}+1/1148\,_{\textcolor{red}{ 28}}}$ \\ \hline $8$ & $17$ & $15$ & $2$ & $\mathbf{\textcolor{red}{ 13}}$& $30$ & $ \mathbf {2/43=1/30+1/86\,_{\textcolor{red}{ 2}}+1/645\,_{\textcolor{red}{ 15}} }$ \\ \hline $7$ & $15$ & $14$ & $1$ & $\mathbf{\textcolor{red}{ 13}}$& $56$ & $ \mathbf {2/97=1/56+1/388\,_{\textcolor{red}{ 4}}+1/5432\,_{\textcolor{red}{ 56}}} $ \\ \hline \hline $8$ & $17$ & $16$ & $1$ & $\mathbf{\textcolor{red}{ 15}}$& $32$ & $ \mathbf {2/47=1/32+1/94\,_{\textcolor{red}{ 2}}+1/1504\,_{\textcolor{red}{ 32}}} $ \\ \hline $8$ & $17$ & $16$ & $1$ & $\mathbf{\textcolor{red}{ 15}}$& $48$ & $ \mathbf {2/79=1/48+1/237\,_{\textcolor{red}{ 3}}+1/3792\,_{\textcolor{red}{ 48}} }$ \\ \hline 
\hline $9$ & $19$ & $18$ & $1$ & $\mathbf{\textcolor{red}{ 17}}$& $36$ & $ \mathbf {2/53=1/36+1/106\,_{\textcolor{red}{ 2}}+1/1908\,_{\textcolor{red}{ 36}}}$ \\ \hline $9$ & $19$ & $18$ & $1$ & $\mathbf{\textcolor{red}{ 17}}$& $54$ & $ \mathbf {2/89=1/54+1/267\,_{\textcolor{red}{ 3}}+1/4806\,_{\textcolor{red}{ 54}} }$ \\ \hline $11$ & $23$ & $20$ & $3$ & $\mathbf{\textcolor{red}{ 17}}$& $60$ & $ \mathbf {2/97=1/60+1/291\,_{\textcolor{red}{ 3}}+1/1940\,_{\textcolor{red}{ 20}}}$ \\ \hline \hline $10$ & $21$ & $20$ &$1$ & $\mathbf{\textcolor{red}{ 19}}$& $40$ & $ \mathbf {2/59=1/40+1/118\,_{\textcolor{red}{ 2}}+1/2360\,_{\textcolor{red}{ 40}}}$ \\ \hline $11$ & $23$ & $21$ & $2$ & $\mathbf{\textcolor{red}{ 19}}$& $42$ & $ \mathbf {2/61=1/42+1/122\,_{\textcolor{red}{ 2}}+1/1281\,_{\textcolor{red}{ 21}} }$ \\ \hline \hline $12$ & $25$ & $23$ & $2$ & $\mathbf{\textcolor{red}{ 21}}$& $46$ & $ \mathbf {2/67=1/46+1/134\,_{\textcolor{red}{ 2}}+1/1541\,_{\textcolor{red}{ 23}}}$ \\ \hline \hline $12$ & $25$ & $24$ & $1$ & $\mathbf{\textcolor{red}{ 23}}$& $48$ & $ \mathbf {2/71=1/48+1/142\,_{\textcolor{red}{ 2}}+1/3408\,_{\textcolor{red}{ 48}}}$ \\ \hline $13$ & $27$ & $25$ & $2$ & $\mathbf{\textcolor{red}{ 23}}$& $50$ & $ \mathbf {2/73=1/50+1/146\,_{\textcolor{red}{ 2}}+1/1825\,_{\textcolor{red}{ 25}} }$ \\ \hline \hline $14$ & $29$ & $27$ & $2$ & $\mathbf{\textcolor{red}{ 25}}$& $54$ & $ \mathbf {2/79=1/54+1/158\,_{\textcolor{red}{ 2}}+1/2133\,_{\textcolor{red}{ 27}} }$ \\ \hline \hline $14$ & $29$ & $28$ & $1$ & $\mathbf{\textcolor{red}{ 27}}$& $56$ & $ \mathbf {2/83=1/56+1/166\,_{\textcolor{red}{ 2}}+1/4648\,_{\textcolor{red}{ 56}} }$ \\ \hline \hline $15$ & $31$ & $30$ & $1$ & $\mathbf{\textcolor{red}{ 29}}$& $60$ & $ \mathbf {2/89=1/60+1/178\,_{\textcolor{red}{ 2}}+1/5340\,_{\textcolor{red}{ 60}} }$ \\ \hline \hline $17$ & $35$ & $33$ & $2$ & $\mathbf{\textcolor{red}{ 31}}$& $66$ & $ \mathbf {2/97=1/66+1/194\,_{\textcolor{red}{ 2}}+1/3201\,_{\textcolor{red}{ 33}}} $ \\
\hline \end{tabular} \end{center} \label{Tble3terms71} \end{table} \normalsize \clearpage As it is clear from Table \ref{Tble3terms71} an obvious preference for the smallest $\Delta_{d}$ seems to be well followed. \\ After cut-off by $\mbox{\boldmath $\top$ }\!\!_ f^{\;\;[3]}= 10$ Table \ref{Tble3terms71} is reduced and allows us to analyze the following options: \begin{table} [htbp] \caption{3-terms options} \begin{center} \scriptsize \begin{tabular}{|l|c||l||l||c|l|l|} \hline \multicolumn{7}{|c|}{\sf Trials [3-terms] ordered with $\Delta_{d}\nearrow$ showing where are the Egyptian options }\\ \hline $n$ & $2n+1$ & $d_2$ & $d_3$ & $\textcolor{red}{\Delta_{d}}$ & $D_1^n$ & [3-terms] decompositions $\mathbf {\textcolor{red}{ m_3\leq 10}}$\\ [0.01in] \hline \hline $1$ & $3$ & $2$ & $1$ & $\mathbf{\textcolor{red}{ 1}}$& $8$ & $ \mathbf {2/13=1/8+1/52\,_{\textcolor{red}{ 4}}+1/104\,_{\textcolor{red}{ 8}}}\;\, ^{Eg}$ \\ \hline \hline $1$ & $3$ & $2$ & $1$ & $\mathbf{\textcolor{red}{ 1}}$& $10$ & $ \mathbf {2/17\mathit{_a}=1/10+1/85\,_{\textcolor{red}{ 5}}+1/170\,_{\textcolor{red}{ 10}}}$ \\ \hline $3$ & $7$ & $4$ & $3$ & $\mathbf{\textcolor{red}{ 1}}$& $12$ & $ \mathbf {2/17\mathit{_b}=1/12+1/51\,_{\textcolor{red}{ 3}}+1/68\,_{\textcolor{red}{ 4}}}\;\, ^{Eg{\textcolor{red}{{\star }}}}$ \\ \hline \hline $2$ & $5$ & $3$ & $2$ & $\mathbf{\textcolor{red}{ 1}}$& $12$ & $\mathbf {2/19=1/12+1/76\,_{\textcolor{red}{ 4}}+1/114\,_{\textcolor{red}{ 6}}}\;\, ^{Eg}$ \\ \hline \hline $2$ & $5$ & $3$ & $2$ & $\mathbf{\textcolor{red}{ 1}}$& $18$ & $ \mathbf {2/31\mathit{_a}=1/18+1/186\,_{\textcolor{red}{ 6}}+1/279\,_{\textcolor{red}{ 9}}}$ \\ \hline $4$ & $9$ & $5$ & $4$ & $\mathbf{\textcolor{red}{ 1}}$& $20$ & $ \mathbf {2/31\mathit{_b}=1/20+1/124\,_{\textcolor{red}{ 4}}+1/155\,_{\textcolor{red}{ 5}}}\;\, ^{Eg{\textcolor{red}{{\star }}}}$ \\ \hline \hline $3$ & $7$ & $4$ & $3$ & $\mathbf{\textcolor{red}{ 1}}$& $24$ & $ \mathbf {2/41=1/24+1/246\,_{\textcolor{red}{ 
6}}+1/328\,_{\textcolor{red}{ 8}}}\;\, ^{Eg}$ \\ \hline \hline $4$ & $9$ & $5$ & $4$ & $\mathbf{\textcolor{red}{ 1}}$& $40$ & $ \mathbf {2/71\mathit{_a}=1/40+1/568\,_{\textcolor{red}{ 8}}+1/710\,_{\textcolor{red}{ 10}} }\;\, ^{Eg}$ \\ \hline $6$ & $13$ & $7$ & $6$ & $\mathbf{\textcolor{red}{ 1}}$& $42$ & $ \mathbf {2/71\mathit{_b}=1/42+1/426\,_{\textcolor{red}{ 6}}+1/497\,_{\textcolor{red}{ 7}}}$ \\ \hline \hline $7$ & $15$ & $8$ & $7$ & $\mathbf{\textcolor{red}{ 1}}$& $56$ & $ \mathbf {2/97=1/56+1/679\,_{\textcolor{red}{ 7}}+1/776\,_{\textcolor{red}{ 8}}}\;\, ^{Eg{\textcolor{red}{{\star }}}}$ \\ \hline \hline \hline $3$ & $7$ & $5$ & $2$ & $\mathbf{\textcolor{red}{ 3}}$& $10$ & $ \mathbf {2/13=1/10+1/26\,_{\textcolor{red}{ 2}}+1/65\,_{\textcolor{red}{ 5}}}$ \\ \hline \hline $6$ & $13$ & $8$ & $5$ & $\mathbf{\textcolor{red}{ 3}}$& $40$ & $ \mathbf {2/67=1/40+1/335\,_{\textcolor{red}{ 5}}+1/536\,_{\textcolor{red}{ 8}}}\;\, ^{Eg}$ \\ \hline \hline \hline $4$ & $9$ & $7$ & $2$ & $\mathbf{\textcolor{red}{ 5}}$& $14$ & $ \mathbf {2/19=1/14+1/38\,_{\textcolor{red}{ 2}}+1/133\,_{\textcolor{red}{ 7}}}$ \\ \hline \hline $5$ & $11$ & $8$ & $3$ & $\mathbf{\textcolor{red}{ 5}}$& $24$ & $ \mathbf {2/37=1/24+1/111\,_{\textcolor{red}{ 3}}+1/296\,_{\textcolor{red}{ 8}}}\;\, ^{Eg}$ \\ \hline \hline $6$ & $13$ & $9$ & $4$ & $\mathbf{\textcolor{red}{ 5}}$& $36$ & $ \mathbf {2/59=1/36+1/236\,_{\textcolor{red}{ 4}}+1/531\,_{\textcolor{red}{ 9}}}\;\, ^{Eg}$ \\ \hline \hline \hline $6$ & $13$ & $10$ & $3$ & $\mathbf{\textcolor{red}{ 7}}$& $30$ & $ \mathbf {2/47=1/30+1/141\,_{\textcolor{red}{ 3}}+1/470\,_{\textcolor{red}{ 10}}}\;\, ^{Eg}$ \\ \hline \end{tabular}\\ \end{center} \label{3TERMSOPT} \end{table}\\ This table shows rare instances where the multipliers $m_2$, $m_3$ are consecutive. This is always an interesting quality, one that does not require sophisticated mathematical justification. Such instances will be denoted by an asterisk ${\textcolor{red}{^{\star }}}$.
Two instances are found also in [4-terms] series with $m_2$, $m_3$, $m_4$, see Section \ref{FourTerms}. \\ \nopagebreak [4] Just as an indication, we display below the cases dropped out of a [3-terms] decomposition:\\ \begin{table}[htbp] \caption{\sf Fractions to be broken down into 4-terms} \begin{center} \scriptsize \begin{tabular}{|l|c||l||l||c|l|l|} \hline \multicolumn{7}{|c|}{\sf Table of trials [3-terms] for fractions to be broken down into 4-terms}\\ \hline $n$ & $2n+1$ & $d_2$ & $d_3$ & $\textcolor{red}{\Delta_{d}}$ & $D_1^n$ & Possible [3-terms] decompositions \\ \hline \hline $4$ & $9$ & $8$ & $1$ & $\mathbf{\textcolor{red}{ 7}}$& $16$ & $ \mathbf {2/23=1/16+1/46\,_{\textcolor{red}{ 2}}+1/368\,_{\textcolor{red}{ 16}}}$ \\ \hline \hline $1$ & $3$ & $2$ & $1$ & $\mathbf{\textcolor{red}{ 1}}$& $16$ & $ \mathbf {2/29=1/16+1/232\,_{\textcolor{red}{ 8}}+1/464\,_{\textcolor{red}{ 16}} }$ \\ \hline $3$ & $7$ & $6$ & $1$ & $\mathbf{\textcolor{red}{ 5}}$& $18$ & $ \mathbf {2/29=1/18+1/87\,_{\textcolor{red}{ 3}}+1/522\,_{\textcolor{red}{ 18}} }$ \\ \hline $5$ & $11$ & $10$ & $1$ & $\mathbf{\textcolor{red}{ 9}}$& $20$ & $ \mathbf {2/29=1/20+1/58\,_{\textcolor{red}{ 2}}+1/580\,_{\textcolor{red}{ 20}} }$ \\ \hline \hline $2$ & $5$ & $4$ & $1$ & $\mathbf{\textcolor{red}{ 3}}$& $24$ & $ \mathbf {2/43=1/24+1/258\,_{\textcolor{red}{ 6}}+1/1032\,_{\textcolor{red}{ 24}} }$ \\ \hline $2$ & $5$ & $3$ & $2$ & $\mathbf{\textcolor{red}{ 1}}$& $24$ & $ \mathbf {2/43=1/24+1/344\,_{\textcolor{red}{ 8}}+1/516\,_{\textcolor{red}{ 12}} }$ \\ \hline $8$ & $17$ & $15$ & $2$ & $\mathbf{\textcolor{red}{ 13}}$& $30$ & $ \mathbf {2/43=1/30+1/86\,_{\textcolor{red}{ 2}}+1/645\,_{\textcolor{red}{ 15}} }$ \\ \hline \hline $1$ & $3$ & $2$ & $1$ & $\mathbf{\textcolor{red}{ 1}}$& $32$ & $ \mathbf {2/61=1/32+1/976\,_{\textcolor{red}{ 16}}+1/1952\,_{\textcolor{red}{ 32}} }$ \\ \hline $5$ & $11$ & $9$ & $2$ & $\mathbf{\textcolor{red}{ 7}}$& $36$ & $ \mathbf {2/61=1/36+1/244\,_{\textcolor{red}{ 
4}}+1/1098\,_{\textcolor{red}{ 18}} }$ \\ \hline $11$ & $23$ & $21$ & $2$ & $\mathbf{\textcolor{red}{ 19}}$& $42$ & $ \mathbf {2/61=1/42+1/122\,_{\textcolor{red}{ 2}}+1/1281\,_{\textcolor{red}{ 21}} }$ \\ \hline \hline $1$ & $3$ & $2$ & $1$ & $\mathbf{\textcolor{red}{ 1}}$& $38$ & $ \mathbf {2/73=1/38+1/1387\,_{\textcolor{red}{ 19}}+1/2774\,_{\textcolor{red}{ 38}} }$ \\ \hline $3$ & $7$ & $5$ & $2$ & $\mathbf{\textcolor{red}{ 3}}$& $40$ & $ \mathbf {2/73=1/40+1/584\,_{\textcolor{red}{ 8}}+1/1460\,_{\textcolor{red}{ 20}} }$ \\ \hline $7$ & $15$ & $11$ & $4$ & $\mathbf{\textcolor{red}{ 7}}$& $44$ & $ \mathbf {2/73=1/44+1/292\,_{\textcolor{red}{ 4}}+1/803\,_{\textcolor{red}{ 11}} }$ \\ \hline $13$ & $27$ & $25$ & $2$ & $\mathbf{\textcolor{red}{ 23}}$& $50$ & $ \mathbf {2/73=1/50+1/146\,_{\textcolor{red}{ 2}}+1/1825\,_{\textcolor{red}{ 25}} }$ \\ \hline \hline $2$ & $5$ & $3$ & $2$ & $\mathbf{\textcolor{red}{ 1}}$& $42$ & $ \mathbf {2/79=1/42+1/1106\,_{\textcolor{red}{ 14}}+1/1659\,_{\textcolor{red}{ 21}} }$ \\ \hline $8$ & $17$ & $16$ & $1$ & $\mathbf{\textcolor{red}{ 15}}$& $48$ & $ \mathbf {2/79=1/48+1/237\,_{\textcolor{red}{ 3}}+1/3792\,_{\textcolor{red}{ 48}} }$ \\ \hline $14$ & $29$ & $27$ & $2$ & $\mathbf{\textcolor{red}{ 25}}$& $54$ & $ \mathbf {2/79=1/54+1/158\,_{\textcolor{red}{ 2}}+1/2133\,_{\textcolor{red}{ 27}} }$ \\ \hline \hline $2$ & $5$ & $4$ & $1$ & $\mathbf{\textcolor{red}{ 3}}$& $44$ & $ \mathbf {2/83=1/44+1/913\,_{\textcolor{red}{ 11}}+1/3652\,_{\textcolor{red}{ 44}} }$ \\ \hline $6$ & $13$ & $12$ & $1$ & $\mathbf{\textcolor{red}{ 11}}$& $48$ & $ \mathbf {2/83=1/48+1/332\,_{\textcolor{red}{ 4}}+1/3984\,_{\textcolor{red}{ 48}} }$ \\ \hline $14$ & $29$ & $28$ & $1$ & $\mathbf{\textcolor{red}{ 27}}$& $56$ & $ \mathbf {2/83=1/56+1/166\,_{\textcolor{red}{ 2}}+1/4648\,_{\textcolor{red}{ 56}} }$ \\ \hline \hline $1$ & $3$ & $2$ & $1$ & $\mathbf{\textcolor{red}{ 1}}$& $46$ & $ \mathbf {2/89=1/46+1/2047\,_{\textcolor{red}{ 23}}+1/4094\,_{\textcolor{red}{ 
46}} }$ \\ \hline $3$ & $7$ & $6$ & $1$ & $\mathbf{\textcolor{red}{ 5}}$& $48$ & $ \mathbf {2/89=1/48+1/712\,_{\textcolor{red}{ 8}}+1/4272\,_{\textcolor{red}{ 48}} }$ \\ \hline $3$ & $7$ & $4$ & $3$ & $\mathbf{\textcolor{red}{ 1}}$& $48$ & $ \mathbf {2/89=1/48+1/1068\,_{\textcolor{red}{ 12}}+1/1424\,_{\textcolor{red}{ 16}} }$ \\ \hline $5$ & $11$ & $10$ & $1$ & $\mathbf{\textcolor{red}{ 9}}$& $50$ & $ \mathbf {2/89=1/50+1/445\,_{\textcolor{red}{ 5}}+1/4450\,_{\textcolor{red}{ 50}} }$ \\ \hline $7$ & $15$ & $13$ & $2$ & $\mathbf{\textcolor{red}{ 11}}$& $52$ & $ \mathbf {2/89=1/52+1/356\,_{\textcolor{red}{ 4}}+1/2314\,_{\textcolor{red}{ 26}} }$ \\ \hline $9$ & $19$ & $18$ & $1$ & $\mathbf{\textcolor{red}{ 17}}$& $54$ & $ \mathbf {2/89=1/54+1/267\,_{\textcolor{red}{ 3}}+1/4806\,_{\textcolor{red}{ 54}} }$ \\ \hline $15$ & $31$ & $30$ & $1$ & $\mathbf{\textcolor{red}{ 29}}$& $60$ & $ \mathbf {2/89=1/60+1/178\,_{\textcolor{red}{ 2}}+1/5340\,_{\textcolor{red}{ 60}} }$ \\ \hline \end{tabular} \end{center} \label{Frac3become4} \end{table} \normalsize \hspace*{1.5em}Our definition of $\mbox{\boldmath $\top$ }\!\!_ f $ does not depend on an arbitrary value of $D_3$ fixed at $1000$, as often assumed in the literature. It depends only on the circumstances imposed by the current project. We now subdivide Table \ref{3TERMSOPT} into 3 sets according to the properties of each $D$: a first with only one ${\Delta_{d}}$, a second with two different ${\Delta_{d}}$ and a third with two conflicting identical ${\Delta_{d}}$.
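The trials and the subdivision just described can be reproduced mechanically. Below is a minimal Python sketch (a modern reconstruction, not the scribes' procedure); it assumes the [3-terms] trial rule implied by the table columns, namely $2D_1^n-D=2n+1=d_2+d_3$ with $d_2>d_3$ both dividing $D_1^n$, together with the cut-off $m_3=D_1/d_3\leq 10$:

```python
from fractions import Fraction

def three_term_trials(D, top_flag=10):
    """[3-terms] trials for 2/D: take D1 = (D+1)/2 + n, so that
    2*D1 - D = 2n+1 = d2 + d3 with d2 > d3 and d2, d3 dividing D1;
    then D2 = D*D1/d2 and D3 = D*D1/d3.  The cut-off keeps only the
    trials whose last multiplier m3 = D1/d3 stays <= top_flag."""
    trials = []
    for n in range(1, (D - 1) // 2):           # scan the candidates D1 < D
        D1 = (D + 1) // 2 + n
        s = 2 * n + 1                          # = d2 + d3
        for d3 in range(1, s // 2 + 1):        # d2 > d3 is then automatic
            d2 = s - d3
            if D1 % d2 == 0 and D1 % d3 == 0 and D1 // d3 <= top_flag:
                D2, D3 = D * D1 // d2, D * D1 // d3
                assert Fraction(2, D) == Fraction(1, D1) + Fraction(1, D2) + Fraction(1, D3)
                trials.append((n, d2, d3, D1, D2, D3))
    return trials

def classify(D):
    """Subdivision used in the text: a single Delta_d, different
    Delta_d, or conflicting identical Delta_d."""
    deltas = [d2 - d3 for (_, d2, d3, _, _, _) in three_term_trials(D)]
    if len(deltas) == 1:
        return "single"
    return "conflicting identical" if len(set(deltas)) == 1 else "different"
```

For instance `three_term_trials(41)` yields the single surviving trial $(n,d_2,d_3,D_1,D_2,D_3)=(3,4,3,24,246,328)$, i.e. $2/41=1/24+1/246+1/328$, and `classify` puts 2/41 in the first set, 2/13 and 2/19 in the second, 2/17 and 2/31 in the third, in agreement with the tables.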
That yields:\\ \clearpage \begin{table} [htbp] \caption{\sf A single $\Delta_{d}$ [3-terms]\rm } \begin{center} \begin{tabular}{|l|c||l||l||c|l|l|} \hline \multicolumn{6}{|c|}{\sf D with a single $\Delta_{d} $ \rm $\quad$(options: no)} & \multicolumn{1}{l|}{\sf Scribes's decision: obvious}\\ \hline $n$ & $2n+1$ & $d_2$ & $d_3$ & $\textcolor{red}{\Delta_{d}}$ & $D_1^n$ & $\qquad$ [3-terms] decomposition \\ \hline \hline $3$ & $7$ & $4$ & $3$ & $\mathbf{\textcolor{red}{ 1}}$& $24$ & $ \mathbf {2/41=1/24+1/246\,_{\textcolor{red}{ 6}}+1/328\,_{\textcolor{red}{ 8}}}\;\, ^{Eg}$ \\ \hline \hline $7$ & $15$ & $8$ & $7$ & $\mathbf{\textcolor{red}{ 1}}$& $56$ & $ \mathbf {2/97=1/56+1/679\,_{\textcolor{red}{ 7}}+1/776\,_{\textcolor{red}{ 8}}}\;\, ^{Eg{\textcolor{red}{{\star }}}}$ \\ \hline \hline \hline $6$ & $13$ & $8$ & $5$ & $\mathbf{\textcolor{red}{ 3}}$& $40$ & $ \mathbf {2/67=1/40+1/335\,_{\textcolor{red}{ 5}}+1/536\,_{\textcolor{red}{ 8}}}\;\, ^{Eg}$ \\ \hline \hline \hline $5$ & $11$ & $8$ & $3$ & $\mathbf{\textcolor{red}{ 5}}$& $24$ & $ \mathbf {2/37=1/24+1/111\,_{\textcolor{red}{ 3}}+1/296\,_{\textcolor{red}{ 8}}}\;\, ^{Eg}$ \\ \hline \hline $6$ & $13$ & $9$ & $4$ & $\mathbf{\textcolor{red}{ 5}}$& $36$ & $ \mathbf {2/59=1/36+1/236\,_{\textcolor{red}{ 4}}+1/531\,_{\textcolor{red}{ 9}}}\;\, ^{Eg}$ \\ \hline \hline \hline $6$ & $13$ & $10$ & $3$ & $\mathbf{\textcolor{red}{ 7}}$& $30$ & $ \mathbf {2/47=1/30+1/141\,_{\textcolor{red}{ 3}}+1/470\,_{\textcolor{red}{ 10}}}\;\, ^{Eg}$ \\ \hline \end{tabular}\\ \end{center} \label{1Delta3} \end{table} \normalsize \begin{table} [h] \caption{\sf Two different $\Delta_{d}$ [3-terms]\rm} \begin{center} \begin{tabular}{|l|c||l||l||c|l|l|} \hline \multicolumn{7}{|c|}{\sf D with two different $\Delta_{d}$ \rm $\quad$(options: yes) }\\ \hline \multicolumn{6}{|c|}{} & \multicolumn{1}{l|}{\sf Scribes's decision: smallest $\Delta_{d}$}\\ \hline $n$ & $2n+1$ & $d_2$ & $d_3$ & $\textcolor{red}{\Delta_{d}}$ & $D_1^n$ & $\qquad$ [3-terms] 
decompositions \\ \hline \hline $1$ & $3$ & $2$ & $1$ & $\mathbf{\textcolor{red}{ 1}}$& $8$ & $ \mathbf {2/13=1/8+1/52\,_{\textcolor{red}{ 4}}+1/104\,_{\textcolor{red}{ 8}}}\;\, ^{Eg}$ \\ \hline \hline $3$ & $7$ & $5$ & $2$ & $\mathbf{\textcolor{red}{ 3}}$& $10$ & $ \mathbf {2/13=1/10+1/26\,_{\textcolor{red}{ 2}}+1/65\,_{\textcolor{red}{ 5}}}$ \\ \hline \hline $2$ & $5$ & $3$ & $2$ & $\mathbf{\textcolor{red}{ 1}}$& $12$ & $\mathbf {2/19=1/12+1/76\,_{\textcolor{red}{ 4}}+1/114\,_{\textcolor{red}{ 6}}}\;\, ^{Eg}$ \\ \hline \hline $4$ & $9$ & $7$ & $2$ & $\mathbf{\textcolor{red}{ 5}}$& $14$ & $ \mathbf {2/19=1/14+1/38\,_{\textcolor{red}{ 2}}+1/133\,_{\textcolor{red}{ 7}}}$ \\ \hline \end{tabular}\\ \end{center} \label{2Delta3} \end{table} \begin{table} [h] \caption{\sf Two conflicting identical $\Delta_{d}$ [3-terms]\rm} \begin{center} \begin{tabular}{|l|c||l||l||c|l|l|} \hline \multicolumn{7}{|c|}{\sf D with two conflicting identical $\Delta_{d}$ \rm $\quad$(options: yes)}\\ \hline \multicolumn{6}{|c|}{} & \multicolumn{1}{l|}{\sf Scribes's decision: consecutive multipliers}\\ \hline $n$ & $2n+1$ & $d_2$ & $d_3$ & $\textcolor{red}{\Delta_{d}}$ & $D_1^n$ & $\qquad$ [3-terms] decompositions \\ \hline \hline $1$ & $3$ & $2$ & $1$ & $\mathbf{\textcolor{red}{ 1}}$& $10$ & $ \mathbf {2/17\mathit{_a}=1/10+1/85\,_{\textcolor{red}{ 5}}+1/170\,_{\textcolor{red}{ 10}}}$ \\ \hline $3$ & $7$ & $4$ & $3$ & $\mathbf{\textcolor{red}{ 1}}$& $12$ & $ \mathbf {2/17\mathit{_b}=1/12+1/51\,_{\textcolor{red}{ 3}}+1/68\,_{\textcolor{red}{ 4}}}\;\, ^{Eg{\textcolor{red}{{\star }}}}$ \\ \hline \hline $2$ & $5$ & $3$ & $2$ & $\mathbf{\textcolor{red}{ 1}}$& $18$ & $ \mathbf {2/31\mathit{_a}=1/18+1/186\,_{\textcolor{red}{ 6}}+1/279\,_{\textcolor{red}{ 9}}}$ \\ \hline $4$ & $9$ & $5$ & $4$ & $\mathbf{\textcolor{red}{ 1}}$& $20$ & $ \mathbf {2/31\mathit{_b}=1/20+1/124\,_{\textcolor{red}{ 4}}+1/155\,_{\textcolor{red}{ 5}}}\;\, ^{Eg{\textcolor{red}{{\star }}}}$ \\ \hline \end{tabular}\\ \vspace{0.5em} 
\begin{tabular}{|l|c||l||l||c|l|l|} \hline \multicolumn{6}{|c|}{} & \multicolumn{1}{l|}{\sf Scribes' decision: 2n $\leq$ 10}\\ \hline $n$ & $2n+1$ & $d_2$ & $d_3$ & $\textcolor{red}{\Delta_{d}}$ & $D_1^n$ & $\qquad$ [3-terms] decompositions \\ \hline \hline $4$ & $9$ & $5$ & $4$ & $\mathbf{\textcolor{red}{ 1}}$& $40$ & $ \mathbf {2/71\mathit{_a}=1/40+1/568\,_{\textcolor{red}{ 8}}+1/710\,_{\textcolor{red}{ 10}} }\;\, ^{Eg}$ \\ \hline $6$ & $13$ & $7$ & $6$ & $\mathbf{\textcolor{red}{ 1}}$& $42$ & $ \mathbf {2/71\mathit{_b}=1/42+1/426\,_{\textcolor{red}{ 6}}+1/497\,_{\textcolor{red}{ 7}}} {\;\,^{\textcolor{red}{{\star }}}}$ \\ \hline \end{tabular} \end{center} \label{11Delta3} \end{table} Remark: {\it in the cases where options are possible, and in these cases only,} \rm the solutions for \\ \{\sf 2/D = 2/13, 2/19, 2/17, 2/31\} \rm were chosen respectively in the set \sf \{$n= 1, 2, 3, 4$\}$_{\mathbf{|}_\mathbf{{\textcolor{red}{2n\leq 10}}}}$.\\\rm For ruling on {\sf 2/71} \rm there is no convincing arithmetical argument, so the choice could \\ have been simplicity and direct observation: once again a boundary like $2n\leq {\textcolor{red}{ 10}}$ is used for picking $n=4$. That's it. Too simple, but why not? \\ \vspace{0.5em} \hspace*{1.5em}{\sf After this natural selection by cut-off with a Top-flag $\sf \mbox{\boldmath $\top$ }\!\!_ f^{\;\;[3]}= \sf10$ and appropriate decisions,} some cases remain to be examined, especially those with $\boxed{10 < m_3 \leq 16}$, because of the singular status of {\sf 2/23,} which the scribes will retain with a decomposition into 2 terms. We display these cases below.
Of course {\sf 2/61, 2/83} are {\sl ex officio} excluded from the analysis.\\ (Anticipation is made on [4-terms] analysis and related decisions that follow, like $\sf \mbox{\boldmath $\top$ }\!\!_ f^{\;\;[4]}= \sf10$) \newpage \begin{table}[htbp] \caption{Dynamic comparison for transitions $\mathbf {3 \Rightarrow 4}$} \begin{center} \scriptsize \begin{tabular}{|c|r|} \hline {\sl Unique} [2-terms] solution \\ [0.01in] \hline $\mathbf {2/23=1/12+1/276\,_{\textcolor{red}{ {12}}}}\;\, ^{Eg}$ \\ [0.01in] \hline \end{tabular}\\ \begin{tabular}{|l|c||l||l||c|l|l|} \hline \multicolumn{7}{|c|}{\sf Selected trials [3-terms] for $\boxed{ \bf 2/23}\quad {\footnotesize \sl enigma\, ?}\quad \mathbf {(m_3=16)}$}\\ \hline $n$ & $2n+1$ & $d_2$ & $d_3$ & $\textcolor{red}{\Delta_{d}}$ & $D_1^n$ & {\it Unique} [3-terms] decomposition \\ [0.01in] \hline \hline $4$ & $9$ & $8$ & $1$ & $\mathbf{\textcolor{red}{ 7}}$& $16$ & $ \mathbf {2/23=1/16+\boxed{1/46\,_{\textcolor{red}{ 2}}}+1/368\,_{\textcolor{red}{ 16}}}$ \\ \hline \end{tabular} \scriptsize \begin{tabular}{|l|c|l||l||l||c|l|l|} \hline \multicolumn{8}{|c|}{\sf Selected trials [4-terms] $\boxed{2/23}$}\\ \hline $n$ & $2n+1$ & $d_2$ & $d_3$ & $d_4$ &$\textcolor{red}{\Delta_{d}^{'}}$ &$D_1^n$ & [4-terms] decomposition $\mathbf {\textcolor{red}{ m_4\leq 10}}$\\ [0.01in] \hline \hline $8$ & $17$ & $10$ & $5$ & $2$ & $\mathbf{\textcolor{red}{ 3}}$& $20$ & $\mathbf {2/23=1/20+\boxed{1/46\,_{\textcolor{red}{ 2}}}+1/92\,_{\textcolor{red}{ 4}}+1/230\,_{\textcolor{red}{10}}}$ \\ \hline \end{tabular} \end{center} \begin{center} \scriptsize \begin{tabular}{|l|c||l||l||c|l|l|} \hline \multicolumn{7}{|c|}{\sf Selected trials [3-terms] $\boxed{2/29}\qquad \quad \mathbf {(m_3=16)}$}\\ \hline $n$ & $2n+1$ & $d_2$ & $d_3$ & $\textcolor{red}{\Delta_{d}}$ & $D_1^n$ & Possible [3-terms] decomposition \\ [0.01in] \hline \hline $1$ & $3$ & $2$ & $1$ & $\mathbf{\textcolor{red}{ 1}}$& $16$ & $ \mathbf {2/29=1/16+\boxed{1/232\,_{\textcolor{red}{ 
8}}}+1/464\,_{\textcolor{red}{ 16}} }$ \\ [0.01in] \hline \end{tabular} \begin{tabular}{|l|c|l||l||l||c|l|l|} \hline \multicolumn{8}{|c|}{\sf Selected trials [4-terms] $\boxed{2/29}$}\\ \hline $n$ & $2n+1$ & $d_2$ & $d_3$ & $d_4$ &$\textcolor{red}{\Delta_{d}^{'}}$ &$D_1^n$ & Possible [4-terms] decompositions $\mathbf {\textcolor{red}{ m_4\leq 10}}$\\ [0.01in] \hline \hline $9$ & $19$ & $12$ & $4 $ & $3 $ & $\mathbf{\textcolor{red}{ 1}}$& $24$ & $\mathbf {2/29=1/24+1/58\,_{\textcolor{red}{ 2}}+1/174\,_{\textcolor{red}{ 6}}+ \boxed{1/232\,_{\textcolor{red}{8}}}}\;\, ^{Eg}$ \\ \hline $5$ & $11$ & $5$ & $4$ & $2$ & $\mathbf{\textcolor{red}{ 2}}$& $20$ & $ \mathbf {2/29=1/20+1/116\,_{\textcolor{red}{ 4}}+1/145\,_{\textcolor{red}{ 5}}+1/290\,_{\textcolor{red}{ 10}}}$ \\ \hline $15$ & $31$ & $15$ & $10$ & $6$ & $\mathbf{\textcolor{red}{ 4}}$& $30$ & $ \mathbf {2/29=1/30+1/58\,_{\textcolor{red}{ 2}}+1/87\,_{\textcolor{red}{ 3}}+1/145\,_{\textcolor{red}{5}}}$ \\ \hline \end{tabular} \end{center} \begin{center} \scriptsize \begin{tabular}{|l|c||l||l||c|l|l|} \hline \multicolumn{7}{|c|}{\sf Selected trials [3-terms] $\boxed{2/89}\qquad \quad \mathbf {(m_3=16)}$}\\ \hline $n$ & $2n+1$ & $d_2$ & $d_3$ & $\textcolor{red}{\Delta_{d}}$ & $D_1^n$ & Possible [3-terms] decomposition \\ [0.01in] \hline \hline $3$ & $7$ & $4$ & $3$ & $\mathbf{\textcolor{red}{ 1}}$& $48$ & $ \mathbf {2/89\mathit{_b}=1/48+1/1068\,_{\textcolor{red}{ 12}}+1/1424\,_{\textcolor{red}{ 16}} }$ \\ \hline \end{tabular} \begin{tabular}{|l|c|l||l||l||c|l|l|} \hline \multicolumn{8}{|c|}{\sf Selected trials [4-terms] $\boxed{2/89}$}\\ \hline $n$ & $2n+1$ & $d_2$ & $d_3$ & $d_4$ &$\textcolor{red}{\Delta_{d}^{'}}$ &$D_1^n$ & Possible [4-terms] decompositions $\mathbf {\textcolor{red}{ m_4\leq 10}}$\\ [0.01in] \hline \hline $15$ & $31$ & $15$ & $10$ & $6$ & $\mathbf{\textcolor{red}{ 4}}$& $60$ & $ \mathbf {2/89=1/60+1/356\,_{\textcolor{red}{ 4}}+1/534\,_{\textcolor{red}{ 6}}+1/890\,_{\textcolor{red}{10}}}\;\, ^{Eg}$ \\ 
\hline \end{tabular} \end{center} \begin{center} \scriptsize \begin{tabular}{|l|c||l||l||c|l|l|} \hline \multicolumn{7}{|c|}{\sf Selected trials [3-terms] for $\boxed{\bf 2/53}\quad {\footnotesize \sl enigma\, ?}\quad \mathbf {(m_3=15)}$ }\\ \hline $n$ & $2n+1$ & $d_2$ & $d_3$ & $\textcolor{red}{\Delta_{d}}$ & $D_1^n$ & Possible [3-terms] decomposition \\ [0.01in] \hline \hline $3$ & $7$ & $5$ & $2$ & $\mathbf{\textcolor{red}{ 3}}$& $30$ & $ \mathbf {2/53=1/30+\boxed{1/318\,_{\textcolor{red}{ 6}}}+1/795\,_{\textcolor{red}{ 15}}}\;\, ^{Eg}$ \\ \hline \end{tabular} \begin{tabular}{|l|c|l||l||l||c|l|l|} \hline \multicolumn{8}{|c|}{\sf Selected trials [4-terms] for $\boxed{2/53}$ }\\ \hline $n$ & $2n+1$ & $d_2$ & $d_3$ & $d_4$ &$\textcolor{red}{\Delta_{d}^{'}}$ &$D_1^n$ & [4-terms] decomposition $\mathbf {\textcolor{red}{ m_4\leq 10}}$\\ [0.01in] \hline \hline $9$ & $19$ & $9$ & $6 $ & $4 $ & $\mathbf{\textcolor{red}{ 2}}$& $36$ & $\mathbf {2/53=1/36+1/212\,_{\textcolor{red}{ 4}} +\boxed{1/318\,_{\textcolor{red}{ 6}}}+1/477\,_{\textcolor{red}{9}}}$ \\ \hline \end{tabular} \end{center} \begin{center} \scriptsize \begin{tabular}{|l|c||l||l||c|l|l|} \hline \multicolumn{7}{|c|}{\sf Selected trials [3-terms] $\boxed{2/43}\qquad \quad \mathbf {(m_3=12) }$ or $\; \mathbf {(m_3=15) }$}\\ \hline $n$ & $2n+1$ & $d_2$ & $d_3$ & $\textcolor{red}{\Delta_{d}}$ & $D_1^n$ & Possible [3-terms] decompositions \\ [0.01in] \hline \hline $2$ & $5$ & $3$ & $2$ & $\mathbf{\textcolor{red}{ 1}}$& $24$ & $ \mathbf {2/43=1/24+1/344\,_{\textcolor{red}{ 8}}+1/516\,_{\textcolor{red}{ 12}} }$ \\ [0.01in] \hline $8$ & $17$ & $15$ & $2$ & \barre{$\mathbf{\textcolor{red}{ 13}}$}& $30$ & \barre{$ \mathbf {2/43=1/30+\boxed{1/86\,_{\textcolor{red}{ 2}}}+1/645\,_{\textcolor{red}{ 15}} }$ }\\ [0.01in] \hline \end{tabular} \begin{tabular}{|l|c|l||l||l||c|l|l|} \hline \multicolumn{8}{|c|}{\sf Selected trials [4-terms] $\boxed{2/43}$}\\ \hline $n$ & $2n+1$ & $d_2$ & $d_3$ & $d_4$
&$\textcolor{red}{\Delta_{d}^{'}}$ &$D_1^n$ & [4-terms] decomposition $\mathbf {\textcolor{red}{ m_4\leq 10}}$\\ [0.01in] \hline \hline $20$ & $41$ & $21$ & $14$ & $6$ & $\mathbf{\textcolor{red}{ 8}}$& $42$ & $ \mathbf {2/43=1/42+\boxed{1/86\,_{\textcolor{red}{ 2}}}+1/129\,_{\textcolor{red}{ 3}}+1/301\,_{\textcolor{red}{7}}}\;\, ^{Eg}$ \\ \hline \end{tabular} \end{center} \begin{center} \scriptsize \begin{tabular}{|l|c||l||l||c|l|l|} \hline \multicolumn{7}{|c|}{\sf Selected trials [3-terms] $\boxed{2/73}\qquad \quad \mathbf {(m_3=11)}$}\\ \hline $n$ & $2n+1$ & $d_2$ & $d_3$ & $\textcolor{red}{\Delta_{d}}$ & $D_1^n$ & Possible [3-terms] decomposition \\ [0.01in] \hline \hline $7$ & $15$ & $11$ & $4$ & $\mathbf{\textcolor{red}{ 7}}$& $44$ & $ \mathbf {2/73=1/44+\boxed{1/292\,_{\textcolor{red}{ 4}}}+1/803\,_{\textcolor{red}{ 11}} }$ \\ [0.01in] \hline \end{tabular} \begin{tabular}{|l|c|l||l||l||c|l|l|} \hline \multicolumn{8}{|c|}{\sf Selected trials [4-terms] $\boxed{2/73}$}\\ \hline $n$ & $2n+1$ & $d_2$ & $d_3$ & $d_4$ &$\textcolor{red}{\Delta_{d}^{'}}$ &$D_1^n$ & [4-terms] decomposition $\mathbf {\textcolor{red}{ m_4\leq 10}}$\\ [0.01in] \hline \hline $23$ & $47$ & $20$ & $15$ & $12$ & $\mathbf{\textcolor{red}{ 3}}$& $60$ & $ \mathbf {2/73\mathit{_c}=1/60+1/219\,_{\textcolor{red}{ 3}}+\boxed{1/292\,_{\textcolor{red}{ 4}}}+1/365\,_{\textcolor{red}{5}}}\;\, ^{Eg{\textcolor{red}{{\star }}}}$ \\ \hline \end{tabular} \end{center} \end{table} \normalsize \clearpage We repeat that we are always in the logic of a construction site, with difficulties arising in different parts of the project. Problems are processed case by case and do not interfere with any previous part; otherwise everything becomes incomprehensible. An overview supervised by a chief scribe cannot be in conflict with itself. The 6 cases presented above confront us with {\sl a dynamic alternative: accept the transition from 3 to 4 fractions, or reject it}.
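This alternative can be examined mechanically. The sketch below (our verification tool, not part of the original text) takes the candidate expansions from the tables above — for 2/43 the $m_3=15$ option is the one listed — checks each of them exactly, and reports the denominators shared between the [3-terms] and [4-terms] candidates:

```python
from fractions import Fraction

# Candidate expansions of 2/D copied from the trial tables above.
candidates = {
    23: ([16, 46, 368],    [20, 46, 92, 230]),
    29: ([16, 232, 464],   [24, 58, 174, 232]),
    53: ([30, 318, 795],   [36, 212, 318, 477]),
    43: ([30, 86, 645],    [42, 86, 129, 301]),   # [3-terms] option with m3 = 15
    73: ([44, 292, 803],   [60, 219, 292, 365]),
    89: ([48, 1068, 1424], [60, 356, 534, 890]),
}

# Every candidate must be an exact decomposition of 2/D.
for D, (three, four) in candidates.items():
    for denoms in (three, four):
        assert Fraction(2, D) == sum(Fraction(1, d) for d in denoms)

# Denominators common to the [3-terms] and the [4-terms] candidate.
shared = {D: sorted(set(three) & set(four)) for D, (three, four) in candidates.items()}
```

Running this gives `shared = {23: [46], 29: [232], 53: [318], 43: [86], 73: [292], 89: []}`: a common denominator shows up in 5 cases out of 6, 2/89 being the exception.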
This exceptional situation is new in the table construction project, and so is the solution itself! It can be observed that 5 cases out of 6 have in common the fact that the same denominator appears in both the [3-terms] and [4-terms] decompositions. \\ {\sl A priori}, this fact may be seen as no improvement: the [3-terms] fraction is not better decomposed into [4-terms], unless we find a real, worthwhile gain.\\ {\sf $\boxed{2/89}$ :} the sixth case, outside the category `same denominator', is quickly ruled on and the [4-terms] decomposition is adopted. (Anyway it belonged to this table only because $m_3=16$). \\ {\sf $\boxed{2/43}$ :} once the option $m_3=15$ is dropped, due to a too-high gap $\Delta_{d}=13$, the same argument holds, and the [4-terms] decomposition is adopted. \\ {\sf $\boxed{2/73}$ :} the [4-terms] expansion provides an improvement since it leads to three consecutive multipliers \{$3,\,4,\,5$\}, thus this solution is adopted.\\ {\sf Three cases (slightly reordered) remain to be solved; they are displayed in the following table.}\\ \begin{center} \scriptsize \begin{tabular}{|c|r|} \hline {\sl Unique} [2-terms] solution \\ [0.01in] \hline $\mathbf {2/23=1/12+1/276\,_{\textcolor{red}{ {12}}}}\;\, ^{Eg}$ \\ [0.01in] \hline \end{tabular}\\ \begin{tabular}{|l|c||l||l||c|l|l|} \hline \multicolumn{7}{|c|}{\sf Selected trials [3-terms] for $\boxed{ \bf 2/23}\quad {\footnotesize \sl enigma\, ?}\quad \mathbf {(m_3=16)}$}\\ \hline $n$ & $2n+1$ & $d_2$ & $d_3$ & $\textcolor{red}{\Delta_{d}}$ & $D_1^n$ & {\it Unique} [3-terms] decomposition \\ [0.01in] \hline \hline $4$ & $9$ & $8$ & $1$ & $\mathbf{\textcolor{red}{ 7}}$& $16$ & $ \mathbf {2/23=1/16+\boxed{1/46\,_{\textcolor{red}{ 2}}}+1/368\,_{\textcolor{red}{ 16}}}$ \\ \hline \end{tabular} \scriptsize \begin{tabular}{|l|c|l||l||l||c|l|l|} \hline \multicolumn{8}{|c|}{\sf Selected trials [4-terms] $\boxed{2/23}$}\\ \hline $n$ & $2n+1$ & $d_2$ & $d_3$ & $d_4$ &$\textcolor{red}{\Delta_{d}^{'}}$ &$D_1^n$ & [4-terms] decomposition
$\mathbf {\textcolor{red}{ m_4\leq 10}}$\\ [0.01in] \hline \hline $8$ & $17$ & $10$ & $5$ & $2$ & $\mathbf{\textcolor{red}{ 3}}$& $20$ & $\mathbf {2/23=1/20+\boxed{1/46\,_{\textcolor{red}{ 2}}}+1/92\,_{\textcolor{red}{ 4}}+1/230\,_{\textcolor{red}{10}}}$ \\ \hline \end{tabular} \end{center} \begin{center} \scriptsize \begin{tabular}{|l|c||l||l||c|l|l|} \hline \multicolumn{7}{|c|}{\sf Selected trials [3-terms] for $\boxed{\bf 2/53}\quad {\footnotesize \sl enigma\, ?}\quad \mathbf {(m_3=15)}$ }\\ \hline $n$ & $2n+1$ & $d_2$ & $d_3$ & $\textcolor{red}{\Delta_{d}}$ & $D_1^n$ & Possible [3-terms] decomposition \\ [0.01in] \hline \hline $3$ & $7$ & $5$ & $2$ & $\mathbf{\textcolor{red}{ 3}}$& $30$ & $ \mathbf {2/53=1/30+\boxed{1/318\,_{\textcolor{red}{ 6}}}+1/795\,_{\textcolor{red}{ 15}}}\;\, ^{Eg}$ \\ \hline \end{tabular} \begin{tabular}{|l|c|l||l||l||c|l|l|} \hline \multicolumn{8}{|c|}{\sf Selected trials [4-terms] for $\boxed{2/53}$ }\\ \hline $n$ & $2n+1$ & $d_2$ & $d_3$ & $d_4$ &$\textcolor{red}{\Delta_{d}^{'}}$ &$D_1^n$ & [4-terms] decomposition $\mathbf {\textcolor{red}{ m_4\leq 10}}$\\ [0.01in] \hline \hline $9$ & $19$ & $9$ & $6 $ & $4 $ & $\mathbf{\textcolor{red}{ 2}}$& $36$ & $\mathbf {2/53=1/36+1/212\,_{\textcolor{red}{ 4}} +\boxed{1/318\,_{\textcolor{red}{ 6}}}+1/477\,_{\textcolor{red}{9}}}$ \\ \hline \end{tabular} \end{center} \begin{center} \scriptsize \begin{tabular}{|l|c||l||l||c|l|l|} \hline \multicolumn{7}{|c|}{\sf Selected trials [3-terms] $\boxed{2/29}\qquad \quad \mathbf {(m_3=16)}$}\\ \hline $n$ & $2n+1$ & $d_2$ & $d_3$ & $\textcolor{red}{\Delta_{d}}$ & $D_1^n$ & Possible [3-terms] decomposition \\ [0.01in] \hline \hline $1$ & $3$ & $2$ & $1$ & $\mathbf{\textcolor{red}{ 1}}$& $16$ & $ \mathbf {2/29=1/16+\boxed{1/232\,_{\textcolor{red}{ 8}}}+1/464\,_{\textcolor{red}{ 16}} }$ \\ [0.01in] \hline \end{tabular} \begin{tabular}{|l|c|l||l||l||c|l|l|} \hline \multicolumn{8}{|c|}{\sf Selected trials [4-terms] $\boxed{2/29}$}\\ \hline $n$ & $2n+1$ & $d_2$ & 
$d_3$ & $d_4$ &$\textcolor{red}{\Delta_{d}^{'}}$ &$D_1^n$ & Possible [4-terms] decompositions $\mathbf {\textcolor{red}{ m_4\leq 10}}$\\ [0.01in] \hline \hline $9$ & $19$ & $12$ & $4 $ & $3 $ & $\mathbf{\textcolor{red}{ 1}}$& $24$ & $\mathbf {2/29\mathit{_a}=1/24+1/58\,_{\textcolor{red}{ 2}}+1/174\,_{\textcolor{red}{ 6}}+ \boxed{1/232\,_{\textcolor{red}{8}}}}\;\, ^{Eg}$ \\ \hline $5$ & $11$ & $5$ & $4$ & $2$ & $\mathbf{\textcolor{red}{ 2}}$& $20$ & $ \mathbf {2/29\mathit{_b}=1/20+1/116\,_{\textcolor{red}{ 4}}+1/145\,_{\textcolor{red}{ 5}}+1/290\,_{\textcolor{red}{ 10}}}$ \\ \hline $15$ & $31$ & $15$ & $10$ & $6$ & $\mathbf{\textcolor{red}{ 4}}$& $30$ & $ \mathbf {2/29\mathit{_c}=1/30+1/58\,_{\textcolor{red}{ 2}}+1/87\,_{\textcolor{red}{ 3}}+1/145\,_{\textcolor{red}{5}}}$ \\ \hline \end{tabular} \end{center} For each fraction, the same denominator (inside a box) has a well-defined position in the [3-terms] expansion and another in the [4-terms] one. We denote these positions respectively by \sf rank$^{[3]}$ {\rm and} rank$^{[4]}$.\rm \\ Same denominators will be denoted by $\boxed{same D_i}$. The table below summarizes the situation.\\ \vspace{0.5em} \begin{center} \begin{tabular} {|c|c|c|c|l|}\hline \sf Fraction & $\boxed{same D_i}$ & \sf rank$^{[3]}$ & \sf rank$^{[4]}$ & Appreciation on ranks\\ \hline \sf 2/23 & $\boxed{46}$ & $\mathbf 2 $ & $\mathbf 2 $ & no interest \\ \hline \sf 2/53 & $\boxed{318}$ & $\mathbf 2 $ & $\mathbf 3 $ & too near \\ \hline \sf 2/29$\mathit{_a} $& $\boxed{232}$ & $\mathbf 2 $ & $\mathbf 4 $ & acceptable + smallest $\textcolor{red}{\Delta_{d}^{'}}$ \\ \hline \end{tabular} \\ \end{center} Some convenient rulings ensue, namely: \\ {\sf 2/23}: no satisfactory solution; we then come back to the unique solution in 2 terms.\\ {\sf 2/53}: maintain the [3-terms] solution; reject the [4-terms] solution. \\ {\sf 2/29$\mathit{_a} $}: adopt the [4-terms] solution.
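These rulings can be double-checked with exact rational arithmetic; a minimal sketch:

```python
from fractions import Fraction

def is_decomposition(D, denoms):
    """True iff 2/D equals the sum of the unit fractions 1/d, d in denoms."""
    return Fraction(2, D) == sum(Fraction(1, d) for d in denoms)

# The decompositions retained by the rulings above.
rulings = {
    23: (12, 276),           # come back to the [2-terms] solution
    53: (30, 318, 795),      # maintained [3-terms] solution
    29: (24, 58, 174, 232),  # adopted [4-terms] solution 2/29_a
}
assert all(is_decomposition(D, ds) for D, ds in rulings.items())
```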
\newpage \section{[4-terms] analysis} \label{FourTerms} \setcounter{equation}{0} We now consider the [4-terms] cases. The Egyptians gave:\\ \vspace{1.5em} \begin{tabular}{ccll} \begin{tabular}{|c|} \hline \tt Ahmes's selections \rm [4-terms]\\ [0.01in] \hline $2/29=1/24+1/58\,_{\textcolor{red}{ 2}}+1/174\,_{\textcolor{red}{ 6}}+1/232\,_{\textcolor{red}{ 8}}$ \\ [0.01in] \hline $2/43=1/42+1/86\,_{\textcolor{red}{ 2}}+1/129\,_{\textcolor{red}{ 3}}+1/301\,_{\textcolor{red}{ 7}}$ \\ [0.01in] \hline $2/61=1/40+1/244\,_{\textcolor{red}{ 4}}+1/488\,_{\textcolor{red}{ 8}}+1/610\,_{\textcolor{red}{ 10}}$ \\ [0.01in] \hline $2/73=1/60+1/219\,_{\textcolor{red}{ 3}}+1/292\,_{\textcolor{red}{ 4}}+1/365\,_{\textcolor{red}{ 5}}$ \\ [0.01in] \hline $2/79=1/60+1/237\,_{\textcolor{red}{ 3}}+1/316\,_{\textcolor{red}{ 4}}+1/790\,_{\textcolor{red}{ 10}}$ \\ [0.01in] \hline $2/83=1/60+1/332\,_{\textcolor{red}{ 4}}+1/415\,_{\textcolor{red}{ 5}}+1/498\,_{\textcolor{red}{ 6}}$ \\ [0.01in] \hline $2/89=1/60+1/356\,_{\textcolor{red}{ 4}}+1/534\,_{\textcolor{red}{ 6}}+1/890\,_{\textcolor{red}{ 10}}$ \\ [0.01in] \hline \end{tabular} & \begin{tabular}{c} $\Leftarrow$ \end{tabular} & \begin{tabular}{|c|} \hline \tt Unity decomposition\\ [0.01in] \hline $48 = 29 + 12 + 4 + 3_{}$ \\ [0.01in] \hline $84 = 43 + 21 + 14 + 6_{}$ \\ [0.01in] \hline $80 = 61 + 10 + 5 + 4_{}$ \\ [0.01in] \hline $120 = 73 + 20 + 15 +12_{}$ \\ [0.01in] \hline $120 = 79 + 20 + 15 +6_{}$ \\ [0.01in] \hline $120 = 83 + 15 + 12 + 10_{}$ \\ [0.01in] \hline $120 = 89 + 15 + 10 + 6_{}$ \\ [0.01in] \hline \end{tabular} & \begin{tabular}{l} . \end{tabular} \end{tabular} \\ \vspace{1.5em} {\sf The task of finding $D_1$ is rather simple once one realizes that it is enough to establish a table of odd numbers $(2n+1)_{|n\geq 3}$ written as a sum of three numbers $ d_2 +d_3+d_4$, with $d_2>d_3>d_4$. This is easy to do and independent of any context. 
The table contains ($\boldsymbol[\frac{n}{2}\boldsymbol]\boldsymbol[\frac{n+1}{2}\boldsymbol]$ \!-$1$) triplets \{$d_2, d_3,d_4$\} and $\sup(d_2)=2n$-$2$.} Square brackets here $ \boldsymbol [\;\boldsymbol]$ mean `integral part of'. {\sf One can start with the lowest values as follows: $d_4=1, d_3=2,3,4, \cdots, d_2=3,4,5, \cdots; d_4=2, d_3=3,4,5, \cdots, d_2=4,5,6, \cdots$ and so on, with the condition $d_3+d_2 \equiv d_4+1 \mod(2)$. }\\ From Eq.(\ref{eq:additive4}) the first possible candidate for $D_1$ starts at the value $D_1^0=(D+1 )/2$. We can search for general solutions of the form \begin{equation} D_1^n=D_1^0 + n, \end{equation} whence \begin{equation} 2D_1^n-{D}= 2n+1 =d_2+ d_3+d_4. \label{eq:additive4bis} \end{equation} From the first table of triplets, a new table (of trials) is built, where this time triplets are selected if $d_2,d_3,d_4$ divide $[(D+d_2+d_3+d_4)/2]$. This provides a possible $D_1^n$. In this favorable case, first $D_4$ is calculated by $DD_1/d_4$, then $D_3$ by $DD_1/d_3$, and $D_2$ by $DD_1/d_2$.\\ This table of trials, properly defined by the equation just below (including the constraints), i.e. \begin{equation} \mathtt{2n+1=d_2+d_3+d_4}, \mbox{\hspace{0.5em} where $d_2$, $d_3$ and $d_4$ divide $D_1^n$ }, \label{eq:dividers4terms} \end{equation} is obviously a bit longer to establish than for doublets. For simplicity $D_1^n$ will not be written as $D_1^n (d_2,d_3,d_4)$. For decompositions into 4 terms the total number of trials yields only $71$ possibilities! \\ Of course our remark made previously about doublets is still valid for triplets. Likewise, Abdulaziz's parameter [R] takes the form \begin{equation} [R] =\frac{1}{(D_1/ d_2)}+\frac{1}{(D_1/ d_3)}+\frac{1}{(D_1/ d_4)}. 
\end{equation} The notation used in our tables will be \begin{equation} \Delta_{d}^{'}= d_3 -d_4. \end{equation} The chief scribe wisely decided to impose an upper bound on all the denominators $D_4$, such that \rm \begin{equation} D_4 \leq D \mbox{\boldmath $\top$ }\!\!_ f^{\;\;[4]} . \end{equation} This cut-off beyond $\mbox{\boldmath $\top$ }\!\!_ f^{\;\;[4]}$ is equivalent to a mathematical condition on $D_1$: \begin{equation} D_1 \leq d_4\,\mbox{\boldmath $\top$ }\!\!_ f^{\;\;[4]} . \label{eq:ConditionD1_4} \end{equation} Here again, choosing $\mbox{\boldmath $\top$ }\!\!_ f^{\;\;[4]}=10$ is quite appropriate. Thus a general coherence is ensured throughout the project, since 11 out of 12 decompositions into 3 terms were solved with $\mbox{\boldmath $\top$ }\!\!_ f^{\;\;[3]}=10$.\\ \hspace*{1.5em}Remark that the condition (\ref{eq:ConditionD1_4}) might be exploited \sf from the beginning \rm of the calculations to avoid handling overly large denominators $D_4$: simply find $d_4$, find $d_3$, find $d_2$, calculate $D_1$; if (\ref{eq:ConditionD1_4}) is not fulfilled, then stop, do not calculate $D_4$, $D_3$, $D_2$, and go to the next values for $d_4$, $d_3$, $d_2$, $D_1$, etc. \begin{table}[htp] \caption{\sf Table of trials [4-terms] with increasing order of $\Delta_{d}^{'}$, only 71 possibilities!
} \scriptsize \begin{center} \begin{tabular}{|l|c|l||l||l||c|l|l|} \hline \multicolumn{8}{|c|}{\sf Trials [4-terms] with increasing order of $\Delta_{d}^{'}$ }\\ \hline $n$ & $2n+1$ & $d_2$ & $d_3$ & $d_4$ &$\textcolor{red}{\Delta_{d}^{'}}$ &$D_1^n$ & Possible [4-terms] decompositions \\ [0.01in] \hline \hline $9$ & $19$ & $12$ & $4 $ & $3 $ & $\mathbf{\textcolor{red}{ 1}}$& $24$ & $\mathbf {2/29=1/24+1/58\,_{\textcolor{red}{ 2}}+1/174\,_{\textcolor{red}{ 6}}+1/232\,_{\textcolor{red}{8}}}\;\, ^{Eg}$ \\ \hline $5$ & $11$ & $6$ & $3$ & $2$ & $\mathbf{\textcolor{red}{ 1}}$& $36$ & $\mathbf {2/61\mathit{_a}=1/36+1/366\,_{\textcolor{red}{ 6}}+1/732\,_{\textcolor{red}{ 12}}+1/1098\,_{\textcolor{red}{ 18}}}$ \\ \hline $9$ & $19$ & $10$ & $5$ & $4$ & $\mathbf{\textcolor{red}{ 1}}$& $40$ & $\mathbf {2/61\mathit{_b}=1/40+1/244\,_{\textcolor{red}{ 4}}+1/488\,_{\textcolor{red}{ 8}}+1/610\,_{\textcolor{red}{ 10}}}\;\, ^{Eg}$ \\ \hline $3$ & $7$ & $4$ & $2$ & $1$ & $\mathbf{\textcolor{red}{ 1}}$& $40$ & $ \mathbf {2/73\mathit{_a}=1/40+1/730\,_{\textcolor{red}{ 10}}+1/1460\,_{\textcolor{red}{ 20}}+1/2920\,_{\textcolor{red}{ 40}}}$ \\ \hline $5$ & $11$ & $6$ & $3$ & $2$ & $\mathbf{\textcolor{red}{ 1}}$& $42$ & $ \mathbf {2/73\mathit{_b}=1/42+1/511\,_{\textcolor{red}{ 7}}+1/1022\,_{\textcolor{red}{ 14}}+1/1533\,_{\textcolor{red}{ 21}}}$ \\ \hline $11$ & $23$ & $16$ & $4$ & $3$ & $\mathbf{\textcolor{red}{ 1}}$& $48$ & $ \mathbf {2/73\mathit{_c}=1/48+1/219\,_{\textcolor{red}{ 3}}+1/876\,_{\textcolor{red}{ 12}}+1/1168\,_{\textcolor{red}{ 16}}}$ \\ \hline $8$ & $17$ & $12$ & $3$ & $2$ & $\mathbf{\textcolor{red}{ 1}}$& $48$ & $ \mathbf {2/79\mathit{_a}=1/48+1/316\,_{\textcolor{red}{ 4}}+1/1264\,_{\textcolor{red}{ 16}}+1/1896\,_{\textcolor{red}{ 24}}}$ \\ \hline $20$ & $41$ & $30$ & $6$ & $5$ & $\mathbf{\textcolor{red}{ 1}}$& $60$ & $ \mathbf {2/79\mathit{_b}=1/60+1/158\,_{\textcolor{red}{ 2}}+1/790\,_{\textcolor{red}{ 10}}+1/948\,_{\textcolor{red}{ 12}}}$ \\ \hline $6$ & $13$ & $8$ & 
$3$ & $2$ & $\mathbf{\textcolor{red}{ 1}}$& $48$ & $ \mathbf {2/83\mathit{_a}=1/48+1/498\,_{\textcolor{red}{ 6}}+1/1328\,_{\textcolor{red}{ 16}}+1/1992\,_{\textcolor{red}{ 24}}}$ \\ \hline $6$ & $13$ & $6$ & $4$ & $3$ & $\mathbf{\textcolor{red}{ 1}}$& $48$ & $ \mathbf {2/83\mathit{_b}=1/48+1/664\,_{\textcolor{red}{ 8}}+1/996\,_{\textcolor{red}{ 12}}+1/1328\,_{\textcolor{red}{ 16}}}$ \\ \hline $14$ & $29$ & $14$ & $8$ & $7$ & $\mathbf{\textcolor{red}{ 1}}$& $56$ & $ \mathbf {2/83\mathit{_c}=1/56+1/332\,_{\textcolor{red}{ 4}}+1/581\,_{\textcolor{red}{ 7}}+1/664\,_{\textcolor{red}{ 8}}}$ \\ \hline $18$ & $37$ & $30$ & $4$ & $3$ & $\mathbf{\textcolor{red}{ 1}}$& $60$ & $ \mathbf {2/83\mathit{_d}=1/60+1/166\,_{\textcolor{red}{ 2}}+1/1245\,_{\textcolor{red}{ 15}}+1/1660\,_{\textcolor{red}{20}}}$ \\ \hline $3$ & $7$ & $4$ & $2$ & $1$ & $\mathbf{\textcolor{red}{ 1}}$& $48$ & $ \mathbf {2/89\mathit{_a}=1/48+1/1068\,_{\textcolor{red}{ 12}}+1/2136\,_{\textcolor{red}{ 24}}+1/4272\,_{\textcolor{red}{ 48}}}$ \\ \hline $15$ & $31$ & $20$ & $6$ & $5$ & $\mathbf{\textcolor{red}{ 1}}$& $60$ & $ \mathbf {2/89\mathit{_b}=1/60+1/267\,_{\textcolor{red}{ 3}}+1/890\,_{\textcolor{red}{ 5}}+1/1068\,_{\textcolor{red}{12}}}$ \\ \hline \hline $6$ & $13$ & $9$ & $3 $ & $1$ & $\mathbf{\textcolor{red}{ 2}}$& $18$ & $\mathbf {2/23=1/18+1/46\,_{\textcolor{red}{ 2}}+1/138\,_{\textcolor{red}{ 6}}+1/414\,_{\textcolor{red}{18}}}$ \\ \hline $5$ & $11$ & $5$ & $4$ & $2$ & $\mathbf{\textcolor{red}{ 2}}$& $20$ & $ \mathbf {2/29=1/20+1/116\,_{\textcolor{red}{ 4}}+1/145\,_{\textcolor{red}{ 5}}+1/290\,_{\textcolor{red}{ 10}}}$ \\ \hline $6$ & $13$ & $7$ & $4$ & $2$ & $\mathbf{\textcolor{red}{ 2}}$& $28$ & $ \mathbf {2/43=1/28+1/172\,_{\textcolor{red}{ 4}}+1/301\,_{\textcolor{red}{ 7}}+1/602\,_{\textcolor{red}{ 14}}}$ \\ \hline $8$ & $17$ & $13$ & $3$ & $1$ & $\mathbf{\textcolor{red}{ 2}}$& $39$ & $\mathbf {2/61=1/39+1/183\,_{\textcolor{red}{ 3}}+1/793\,_{\textcolor{red}{ 13}}+1/2379\,_{\textcolor{red}{ 39}}}$ 
\\ \hline $5$ & $11$ & $7$ & $3$ & $1$ & $\mathbf{\textcolor{red}{ 2}}$& $42$ & $ \mathbf {2/73\mathit{_a}=1/42+1/438\,_{\textcolor{red}{ 6}}+1/1022\,_{\textcolor{red}{ 14}}+1/3066\,_{\textcolor{red}{ 42}}}$ \\ \hline $8$ & $17$ & $9$ & $5$ & $3$ & $\mathbf{\textcolor{red}{ 2}}$& $45$ & $ \mathbf {2/73\mathit{_b}=1/45+1/365\,_{\textcolor{red}{ 5}}+1/657\,_{\textcolor{red}{ 9}}+1/1095\,_{\textcolor{red}{ 15}}}$ \\ \hline $18$ & $37$ & $15$ & $12$ & $10$ & $\mathbf{\textcolor{red}{ 2}}$& $60$ & $ \mathbf {2/83=1/60+1/332\,_{\textcolor{red}{ 4}}+1/415\,_{\textcolor{red}{ 5}}+1/498\,_{\textcolor{red}{6}}}\;\, ^{Eg}$ \\ \hline $18$ & $37$ & $21$ & $9$ & $7$ & $\mathbf{\textcolor{red}{ 2}}$& $63$ & $ \mathbf {2/89=1/63+1/267\,_{\textcolor{red}{ 3}}+1/623\,_{\textcolor{red}{ 7}}+1/801\,_{\textcolor{red}{9}}}$ \\ \hline \hline $8$ & $17$ & $10$ & $5$ & $2$ & $\mathbf{\textcolor{red}{ 3}}$& $20$ & $\mathbf {2/23=1/20+1/46\,_{\textcolor{red}{ 2}}+1/92\,_{\textcolor{red}{ 4}}+1/230\,_{\textcolor{red}{10}}}$ \\ \hline $8$ & $17$ & $10$ & $5$ & $2$ & $\mathbf{\textcolor{red}{ 3}}$& $30$ & $ \mathbf {2/43\mathit{_a}=1/30+1/129\,_{\textcolor{red}{ 3}}+1/258\,_{\textcolor{red}{ 6}}+1/645\,_{\textcolor{red}{15}}}$ \\ \hline $10$ & $21$ & $16$ & $4$ & $1$ & $\mathbf{\textcolor{red}{ 3}}$& $32$ & $ \mathbf {2/43\mathit{_b}=1/32+1/86\,_{\textcolor{red}{ 2}}+1/344\,_{\textcolor{red}{ 8}}+1/1376\,_{\textcolor{red}{32}}}$ \\ \hline $5$ & $11$ & $6$ & $4$ & $1$ & $\mathbf{\textcolor{red}{ 3}}$& $36$ & $\mathbf {2/61\mathit{_a}=1/36+1/366\,_{\textcolor{red}{ 6}}+1/549\,_{\textcolor{red}{ 9}}+1/2196\,_{\textcolor{red}{ 36}}}$ \\ \hline $11$ & $23$ & $14$ & $6$ & $3$ & $\mathbf{\textcolor{red}{ 3}}$& $42$ & $ \mathbf {2/61\mathit{_b}=1/42+1/183\,_{\textcolor{red}{ 3}}+1/427\,_{\textcolor{red}{ 7}}+1/854\,_{\textcolor{red}{14}}}$ \\ \hline $13$ & $27$ & $22$ & $4$ & $1$ & $\mathbf{\textcolor{red}{ 3}}$& $44$ & $ \mathbf {2/61\mathit{_c}=1/44+1/122\,_{\textcolor{red}{ 
2}}+1/671\,_{\textcolor{red}{ 11}}+1/2684\,_{\textcolor{red}{44}}}$ \\ \hline $15$ & $31$ & $26$ & $4$ & $1$ & $\mathbf{\textcolor{red}{ 3}}$& $52$ & $ \mathbf {2/73\mathit{_a}=1/52+1/146\,_{\textcolor{red}{ 2}}+1/949\,_{\textcolor{red}{ 13}}+1/3796\,_{\textcolor{red}{ 52}}}$ \\ \hline $19$ & $39$ & $28$ & $7$ & $4$ & $\mathbf{\textcolor{red}{ 3}}$& $56$ & $ \mathbf {2/73\mathit{_b}=1/56+1/146\,_{\textcolor{red}{ 2}}+1/584\,_{\textcolor{red}{ 8}}+1/1022\,_{\textcolor{red}{14}}}$ \\ \hline $23$ & $47$ & $20$ & $15$ & $12$ & $\mathbf{\textcolor{red}{ 3}}$& $60$ & $ \mathbf {2/73\mathit{_c}=1/60+1/219\,_{\textcolor{red}{ 3}}+1/292\,_{\textcolor{red}{ 4}}+1/365\,_{\textcolor{red}{5}}}\;\, ^{Eg}$ \\ \hline $8$ & $17$ & $12$ & $4$ & $1$ & $\mathbf{\textcolor{red}{ 3}}$& $48$ & $ \mathbf {2/79\mathit{_a}=1/48+1/316\,_{\textcolor{red}{ 4}}+1/948\,_{\textcolor{red}{ 12}}+1/3792\,_{\textcolor{red}{ 48}}}$ \\ \hline $8$ & $17$ & $8$ & $6$ & $3$ & $\mathbf{\textcolor{red}{ 3}}$& $48$ & $ \mathbf {2/79\mathit{_b}=1/48+1/474\,_{\textcolor{red}{ 6}}+1/632\,_{\textcolor{red}{ 8}}+1/1264\,_{\textcolor{red}{ 16}}}$ \\ \hline $16$ & $33$ & $28$ & $4$ & $1$ & $\mathbf{\textcolor{red}{ 3}}$& $56$ & $ \mathbf {2/79\mathit{_c}=1/56+1/158\,_{\textcolor{red}{ 2}}+1/1106\,_{\textcolor{red}{ 14}}+1/4424\,_{\textcolor{red}{ 56}}}$ \\ \hline $6$ & $13$ & $8$ & $4$ & $1$ & $\mathbf{\textcolor{red}{ 3}}$& $48$ & $ \mathbf {2/83\mathit{_a}=1/48+1/498\,_{\textcolor{red}{ 6}}+1/996\,_{\textcolor{red}{ 12}}+1/3984\,_{\textcolor{red}{ 48}}}$ \\ \hline $8$ & $17$ & $10$ & $5$ & $2$ & $\mathbf{\textcolor{red}{ 3}}$& $50$ & $ \mathbf {2/83\mathit{_b}=1/50+1/415\,_{\textcolor{red}{ 5}}+1/830\,_{\textcolor{red}{ 10}}+1/2075\,_{\textcolor{red}{ 25}}}$ \\ \hline $18$ & $37$ & $30$ & $5$ & $2$ & $\mathbf{\textcolor{red}{ 3}}$& $60$ & $ \mathbf {2/83\mathit{_c}=1/60+1/166\,_{\textcolor{red}{ 2}}+1/996\,_{\textcolor{red}{ 12}}+1/2490\,_{\textcolor{red}{30}}}$ \\ \hline \hline $15$ & $31$ & $15$ & $10$ & $6$ & 
$\mathbf{\textcolor{red}{ 4}}$& $30$ & $ \mathbf {2/29=1/30+1/58\,_{\textcolor{red}{ 2}}+1/87\,_{\textcolor{red}{ 3}}+1/145\,_{\textcolor{red}{5}}}$ \\ \hline $14$ & $29$ & $15$ & $9$ & $5$ & $\mathbf{\textcolor{red}{ 4}}$& $45$ & $ \mathbf {2/61=1/45+1/183\,_{\textcolor{red}{ 3}}+1/305\,_{\textcolor{red}{ 5}}+1/549\,_{\textcolor{red}{9}}}$ \\ \hline $17$ & $35$ & $27$ & $6$ & $2$ & $\mathbf{\textcolor{red}{ 4}}$& $54$ & $ \mathbf {2/73=1/54+1/146\,_{\textcolor{red}{ 2}}+1/657\,_{\textcolor{red}{ 9}}+1/1971\,_{\textcolor{red}{ 27}}}$ \\ \hline $15$ & $31$ & $15$ & $10$ & $6$ & $\mathbf{\textcolor{red}{ 4}}$& $60$ & $ \mathbf {2/89=1/60+1/356\,_{\textcolor{red}{ 4}}+1/534\,_{\textcolor{red}{ 6}}+1/890\,_{\textcolor{red}{10}}}\;\, ^{Eg}$ \\ \hline \hline $9$ & $19$ & $12$ & $6 $ & $1 $ & $\mathbf{\textcolor{red}{ 5}}$& $24$ & $\mathbf {2/29=1/24+1/58\,_{\textcolor{red}{ 2}}+1/116\,_{\textcolor{red}{ 4}}+1/696\,_{\textcolor{red}{24}}}$ \\ \hline $8$ & $17$ & $10$ & $6$ & $1$ & $\mathbf{\textcolor{red}{ 5}}$& $30$ & $ \mathbf {2/43=1/30+1/129\,_{\textcolor{red}{ 3}}+1/215\,_{\textcolor{red}{ 5}}+1/1290\,_{\textcolor{red}{30}}}$ \\ \hline $11$ & $23$ & $14$ & $7$ & $2$ & $\mathbf{\textcolor{red}{ 5}}$& $42$ & $ \mathbf {2/61\mathit{_a}=1/42+1/183\,_{\textcolor{red}{ 3}}+1/366\,_{\textcolor{red}{ 6}}+1/1281\,_{\textcolor{red}{21}}}$ \\ \hline $17$ & $35$ & $24$ & $8$ & $3$ & $\mathbf{\textcolor{red}{ 5}}$& $48$ & $ \mathbf {2/61\mathit{_b}=1/48+1/122\,_{\textcolor{red}{ 2}}+1/366\,_{\textcolor{red}{ 6}}+1/976\,_{\textcolor{red}{16}}}$ \\ \hline $11$ & $23$ & $16$ & $6$ & $1$ & $\mathbf{\textcolor{red}{ 5}}$& $48$ & $ \mathbf {2/73\mathit{_a}=1/48+1/219\,_{\textcolor{red}{ 3}}+1/584\,_{\textcolor{red}{ 8}}+1/3504\,_{\textcolor{red}{ 48}}}$ \\ \hline $11$ & $23$ & $12$ & $8$ & $3$ & $\mathbf{\textcolor{red}{ 5}}$& $48$ & $ \mathbf {2/73\mathit{_b}=1/48+1/292\,_{\textcolor{red}{ 4}}+1/438\,_{\textcolor{red}{ 6}}+1/1168\,_{\textcolor{red}{ 16}}}$ \\ \hline $12$ & $25$ & $18$ 
& $6$ & $1$ & $\mathbf{\textcolor{red}{ 5}}$& $54$ & $ \mathbf {2/83\mathit{_a}=1/54+1/249\,_{\textcolor{red}{ 3}}+1/747\,_{\textcolor{red}{ 9}}+1/4482\,_{\textcolor{red}{ 54}}}$ \\ \hline $18$ & $37$ & $30$ & $6$ & $1$ & $\mathbf{\textcolor{red}{ 5}}$& $60$ & $ \mathbf {2/83\mathit{_b}=1/60+1/166\,_{\textcolor{red}{ 2}}+1/830\,_{\textcolor{red}{ 10}}+1/4980\,_{\textcolor{red}{60}}}$ \\ \hline $11$ & $23$ & $14$ & $7$ & $2$ & $\mathbf{\textcolor{red}{ 5}}$& $56$ & $ \mathbf {2/89=1/56+1/356\,_{\textcolor{red}{ 4}}+1/712\,_{\textcolor{red}{ 8}}+1/2492\,_{\textcolor{red}{ 28}}}$ \\ \hline \hline $14$ & $29$ & $18$ & $9$ & $2$ & $\mathbf{\textcolor{red}{ 7}}$& $36$ & $ \mathbf {2/43=1/36+1/86\,_{\textcolor{red}{ 2}}+1/172\,_{\textcolor{red}{ 4}}+1/774\,_{\textcolor{red}{18}}}$ \\ \hline $9$ & $19$ & $10$ & $8$ & $1$ & $\mathbf{\textcolor{red}{ 7}}$& $40$ & $\mathbf {2/61=1/40+1/244\,_{\textcolor{red}{ 4}}+1/305\,_{\textcolor{red}{ 5}}+1/2440\,_{\textcolor{red}{ 40}}}$ \\ \hline $23$ & $47$ & $30$ & $12$ & $5$ & $\mathbf{\textcolor{red}{ 7}}$& $60$ & $ \mathbf {2/73=1/60+1/146\,_{\textcolor{red}{ 2}}+1/365\,_{\textcolor{red}{ 5}}+1/876\,_{\textcolor{red}{12}}}$ \\ \hline $14$ & $29$ & $18$ & $9$ & $2$ & $\mathbf{\textcolor{red}{ 7}}$& $54$ & $ \mathbf {2/79=1/54+1/237\,_{\textcolor{red}{ 3}}+1/474\,_{\textcolor{red}{ 6}}+1/2133\,_{\textcolor{red}{ 27}}}$ \\ \hline $18$ & $37$ & $20$ & $12$ & $5$ & $\mathbf{\textcolor{red}{ 7}}$& $60$ & $ \mathbf {2/83=1/60+1/249\,_{\textcolor{red}{ 3}}+1/415\,_{\textcolor{red}{ 5}}+1/996\,_{\textcolor{red}{12}}}$ \\ \hline $11$ & $23$ & $14$ & $8$ & $1$ & $\mathbf{\textcolor{red}{ 7}}$& $56$ & $ \mathbf {2/89=1/56+1/356\,_{\textcolor{red}{ 4}}+1/623\,_{\textcolor{red}{ 7}}+1/4984\,_{\textcolor{red}{ 56}}}$ \\ \hline \hline $20$ & $41$ & $21$ & $14$ & $6$ & $\mathbf{\textcolor{red}{ 8}}$& $42$ & $ \mathbf {2/43=1/42+1/86\,_{\textcolor{red}{ 2}}+1/129\,_{\textcolor{red}{ 3}}+1/301\,_{\textcolor{red}{7}}}\;\, ^{Eg}$ \\ \hline $15$ & $31$ 
& $15$ & $12$ & $4$ & $\mathbf{\textcolor{red}{ 8}}$& $60$ & $ \mathbf {2/89=1/60+1/356\,_{\textcolor{red}{ 4}}+1/445\,_{\textcolor{red}{ 5}}+1/1335\,_{\textcolor{red}{ 15}}}$ \\ \hline \hline $21$ & $43$ & $26$ & $13$ & $4$ & $\mathbf{\textcolor{red}{ 9}}$& $52$ & $ \mathbf {2/61=1/52+1/122\,_{\textcolor{red}{ 2}}+1/244\,_{\textcolor{red}{ 4}}+1/793\,_{\textcolor{red}{13}}}$ \\ \hline $20$ & $41$ & $30$ & $10$ & $1$ & $\mathbf{\textcolor{red}{ 9}}$& $60$ & $ \mathbf {2/79\mathit{_a}=1/60+1/158\,_{\textcolor{red}{ 2}}+1/474\,_{\textcolor{red}{ 6}}+1/4740\,_{\textcolor{red}{ 60}}}$ \\ \hline $20$ & $41$ & $20$ & $15$ & $6$ & $\mathbf{\textcolor{red}{ 9}}$& $60$ & $ \mathbf {2/79\mathit{_b}=1/60+1/237\,_{\textcolor{red}{ 3}}+1/316\,_{\textcolor{red}{ 4}}+1/790\,_{\textcolor{red}{10}}}\;\, ^{Eg}$ \\ \hline $15$ & $31$ & $20$ & $10$ & $1$ & $\mathbf{\textcolor{red}{ 9}}$& $60$ & $ \mathbf {2/89=1/60+1/267\,_{\textcolor{red}{ 3}}+1/534\,_{\textcolor{red}{ 6}}+1/5340\,_{\textcolor{red}{ 60}}}$ \\ \hline \hline $25$ & $51$ & $35$ & $14$ & $2$ & $\mathbf{\textcolor{red}{ 12}}$& $70$ & $ \mathbf {2/89=1/70+1/178\,_{\textcolor{red}{ 2}}+1/445\,_{\textcolor{red}{ 5}}+1/3115\,_{\textcolor{red}{35}}}$ \\ \hline \hline $23$ & $47$ & $30$ & $15$ & $2$ & $\mathbf{\textcolor{red}{13}}$& $60$ & $ \mathbf {2/73=1/60+1/146\,_{\textcolor{red}{ 2}}+1/292\,_{\textcolor{red}{ 4}}+1/2190\,_{\textcolor{red}{30}}}$ \\ \hline $18$ & $37$ & $20$ & $15$ & $2$ & $\mathbf{\textcolor{red}{ 13}}$& $60$ & $ \mathbf {2/83=1/60+1/249\,_{\textcolor{red}{ 3}}+1/332\,_{\textcolor{red}{ 4}}+1/2490\,_{\textcolor{red}{30}}}$ \\ \hline \hline $24$ & $49$ & $32$ & $16$ & $1$ & $\mathbf{\textcolor{red}{ 15}}$& $64$ & $ \mathbf {2/79=1/64+1/158\,_{\textcolor{red}{ 2}}+1/316\,_{\textcolor{red}{ 4}}+1/5056\,_{\textcolor{red}{ 64}}}$ \\ \hline $26$ & $53$ & $34$ & $17$ & $2$ & $\mathbf{\textcolor{red}{ 15}}$& $68$ & $ \mathbf {2/83=1/68+1/166\,_{\textcolor{red}{ 2}}+1/332\,_{\textcolor{red}{ 
4}}+1/2822\,_{\textcolor{red}{34}}}$ \\ \hline \hline $23$ & $47$ & $27$ & $18$ & $2$ & $\mathbf{\textcolor{red}{ 16}}$& $54$ & $ \mathbf {2/61=1/54+1/122\,_{\textcolor{red}{ 2}}+1/183\,_{\textcolor{red}{ 3}}+1/1647\,_{\textcolor{red}{27}}}$ \\ \hline \hline $27$ & $55$ & $36$ & $18$ & $1$ & $\mathbf{\textcolor{red}{ 17}}$& $72$ & $ \mathbf {2/89=1/72+1/178\,_{\textcolor{red}{ 2}}+1/356\,_{\textcolor{red}{ 4}}+1/6408\,_{\textcolor{red}{72}}}$ \\ \hline \hline $30$ & $61$ & $36$ & $24$ & $1$ & $\mathbf{\textcolor{red}{ 23}}$& $72$ & $ \mathbf {2/83=1/72+1/166\,_{\textcolor{red}{ 2}}+1/249\,_{\textcolor{red}{ 3}}+1/5976\,_{\textcolor{red}{72}}}$ \\ \hline \hline $33$ & $67$ & $39$ & $26$ & $2$ & $\mathbf{\textcolor{red}{ 24}}$& $78$ & $ \mathbf {2/89=1/78+1/178\,_{\textcolor{red}{ 2}}+1/267\,_{\textcolor{red}{ 3}}+1/3471\,_{\textcolor{red}{39}}}$ \\ \hline \end{tabular} \end{center} \label{Complete4Terms} \end{table} \normalsize \clearpage Table \ref{Complete4Terms} above is given only as an indication for us; certainly, it was not calculated in its entirety. {\sf 2/23} has been reported only for the record, because it was solved at the end of Sect. \ref{ThreeTerms}.\\ Drawing on their experience with the 3-terms series, the scribes applied a cut-off beyond $10$. Indeed all cases (here \sf 7\rm) support this cut-off without any exception.
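The trial construction just described (pick a triplet $d_2>d_3>d_4$ with $d_2+d_3+d_4=2n+1$, keep it when each $d_i$ divides $D_1^n=(D+2n+1)/2$, then set $D_i=DD_1/d_i$, under the cut-off $D_1\leq 10\,d_4$) is easy to brute-force today. The sketch below is only a modern checking aid, with naming of our own, and not any Egyptian procedure; it reproduces rows of the table above for a given prime $D$.

```python
from fractions import Fraction

def four_term_trials(D, top_flag=10):
    """Quadruples (D1, D2, D3, D4) with 2/D = 1/D1 + 1/D2 + 1/D3 + 1/D4,
    built from triplets d2 > d3 > d4 dividing D1, under D4 <= top_flag * D."""
    results = []
    for n in range(3, D):                      # 2n+1 = 2*D1 - D
        s = 2 * n + 1
        D1 = (D + s) // 2                      # D odd, s odd, so D + s is even
        for d4 in range(1, s // 3 + 1):
            for d3 in range(d4 + 1, (s - d4 + 1) // 2):
                d2 = s - d3 - d4               # loop bounds ensure d2 > d3 > d4
                if D1 % d2 or D1 % d3 or D1 % d4:
                    continue                   # each d_i must divide D1
                if D1 > d4 * top_flag:
                    continue                   # cut-off: D4 = D*D1/d4 <= top_flag*D
                results.append((D1, D * D1 // d2, D * D1 // d3, D * D1 // d4))
    return results

# every returned quadruple is an exact decomposition of 2/D
for q in four_term_trials(29):
    assert sum(Fraction(1, x) for x in q) == Fraction(2, 29)
```

For instance, `four_term_trials(29)` returns Ahmes's quadruple $(24, 58, 174, 232)$ among its candidates.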
Table \ref{Complete4Terms} becomes: \\ \begin{table}[h] \caption{\sf [4-terms] options} \small \begin{center} \begin{tabular}{|l|c|l||l||l||c|l|l|} \hline \multicolumn{8}{|c|}{\sf Trials [4-terms] ordered with $\Delta_{d}^{'}\nearrow$ showing where are the Egyptian options}\\ \hline $n$ & $2n+1$ & $d_2$ & $d_3$ & $d_4$ &$\textcolor{red}{\Delta_{d}^{'}}$ &$D_1^n$ & Possible [4-terms] decompositions $\mathbf {\textcolor{red}{ m_4\leq 10}}$ \\ \hline \hline $9$ & $19$ & $12$ & $4 $ & $3 $ & $\mathbf{\textcolor{red}{ 1}}$& $24$ & $\mathbf {2/29=1/24+1/58\,_{\textcolor{red}{ 2}}+1/174\,_{\textcolor{red}{ 6}}+1/232\,_{\textcolor{red}{8}}}\;\, ^{Eg}$ \\ \hline \hline $9$ & $19$ & $10$ & $5$ & $4$ & $\mathbf{\textcolor{red}{ 1}}$& $40$ & $\mathbf {2/61=1/40+1/244\,_{\textcolor{red}{ 4}}+1/488\,_{\textcolor{red}{ 8}}+1/610\,_{\textcolor{red}{ 10}}}\;\, ^{Eg}$ \\ \hline \hline $14$ & $29$ & $14$ & $8$ & $7$ & $\mathbf{\textcolor{red}{ 1}}$& $56$ & $ \mathbf {2/83=1/56+1/332\,_{\textcolor{red}{ 4}}+1/581\,_{\textcolor{red}{ 7}}+1/664\,_{\textcolor{red}{ 8}}}$ \\ \hline \hline \hline $5$ & $11$ & $5$ & $4$ & $2$ & $\mathbf{\textcolor{red}{ 2}}$& $20$ & $ \mathbf {2/29=1/20+1/116\,_{\textcolor{red}{ 4}}+1/145\,_{\textcolor{red}{ 5}}+1/290\,_{\textcolor{red}{ 10}}}$ \\ \hline \hline $18$ & $37$ & $15$ & $12$ & $10$ & $\mathbf{\textcolor{red}{ 2}}$& $60$ & $ \mathbf {2/83=1/60+1/332\,_{\textcolor{red}{ 4}}+1/415\,_{\textcolor{red}{ 5}}+1/498\,_{\textcolor{red}{6}}}\;\, ^{Eg{\textcolor{red}{{\star }}}}$ \\ \hline \hline $18$ & $37$ & $21$ & $9$ & $7$ & $\mathbf{\textcolor{red}{ 2}}$& $63$ & $ \mathbf {2/89=1/63+1/267\,_{\textcolor{red}{ 3}}+1/623\,_{\textcolor{red}{ 7}}+1/801\,_{\textcolor{red}{9}}}$ \\ \hline \hline \hline $8$ & $17$ & $10$ & $5$ & $2$ & $\mathbf{\textcolor{red}{ 3}}$& $20$ & $\mathbf {\cancel{2/23}=1/20+1/46\,_{\textcolor{red}{ 2}}+1/92\,_{\textcolor{red}{ 4}}+1/230\,_{\textcolor{red}{10}}}$ \\ \hline \hline $23$ & $47$ & $20$ & $15$ & $12$ & 
$\mathbf{\textcolor{red}{ 3}}$& $60$ & $ \mathbf {2/73=1/60+1/219\,_{\textcolor{red}{ 3}}+1/292\,_{\textcolor{red}{ 4}}+1/365\,_{\textcolor{red}{5}}}\;\, ^{Eg{\textcolor{red}{{\star }}}}$ \\ \hline \hline \hline $15$ & $31$ & $15$ & $10$ & $6$ & $\mathbf{\textcolor{red}{ 4}}$& $30$ & $ \mathbf {2/29=1/30+1/58\,_{\textcolor{red}{ 2}}+1/87\,_{\textcolor{red}{ 3}}+1/145\,_{\textcolor{red}{5}}}$ \\ \hline \hline $14$ & $29$ & $15$ & $9$ & $5$ & $\mathbf{\textcolor{red}{ 4}}$& $45$ & $ \mathbf {2/61=1/45+1/183\,_{\textcolor{red}{ 3}}+1/305\,_{\textcolor{red}{ 5}}+1/549\,_{\textcolor{red}{9}}}$ \\ \hline \hline $15$ & $31$ & $15$ & $10$ & $6$ & $\mathbf{\textcolor{red}{ 4}}$& $60$ & $ \mathbf {2/89=1/60+1/356\,_{\textcolor{red}{ 4}}+1/534\,_{\textcolor{red}{ 6}}+1/890\,_{\textcolor{red}{10}}}\;\, ^{Eg}$ \\ \hline \hline \hline $20$ & $41$ & $21$ & $14$ & $6$ & $\mathbf{\textcolor{red}{ 8}}$& $42$ & $ \mathbf {2/43=1/42+1/86\,_{\textcolor{red}{ 2}}+1/129\,_{\textcolor{red}{ 3}}+1/301\,_{\textcolor{red}{7}}}\;\, ^{Eg}$ \\ \hline \hline $20$ & $41$ & $20$ & $15$ & $6$ & $\mathbf{\textcolor{red}{ 9}}$& $60$ & $ \mathbf {2/79=1/60+1/237\,_{\textcolor{red}{ 3}}+1/316\,_{\textcolor{red}{ 4}}+1/790\,_{\textcolor{red}{10}}}\;\, ^{Eg}$ \\ \hline \end{tabular} \end{center} \label{4TERMSOPT} \end{table} We proceed in the same way as for the [3-terms] series, with slightly different subsets.
That yields:\\ \begin{table}[htbp] \caption{\sf A single or two different $\Delta_{d}^{'}$ [4-terms]\rm } \begin{center} \scriptsize \begin{tabular}{|l|c|l||l||l||c|l|l|} \hline \multicolumn{7}{|c|}{\sf D with a single $\Delta_{d}^{'} $$\quad$(options: no)} & \multicolumn{1}{l|}{\sf Scribes's decision: obvious }\\ \hline $n$ & $2n+1$ & $d_2$ & $d_3$ & $d_4$ &$\textcolor{red}{\Delta_{d}^{'}}$ &$D_1^n$ &$\qquad \quad$[4-terms] decompositions \\ \hline \hline $23$ & $47$ & $20$ & $15$ & $12$ & $\mathbf{\textcolor{red}{ 3}}$& $60$ & $ \mathbf {2/73=1/60+1/219\,_{\textcolor{red}{ 3}}+1/292\,_{\textcolor{red}{ 4}}+1/365\,_{\textcolor{red}{5}}}\;\, ^{Eg{\textcolor{red}{{\star }}}}$ \\ \hline \hline \hline $20$ & $41$ & $21$ & $14$ & $6$ & $\mathbf{\textcolor{red}{ 8}}$& $42$ & $ \mathbf {2/43=1/42+1/86\,_{\textcolor{red}{ 2}}+1/129\,_{\textcolor{red}{ 3}}+1/301\,_{\textcolor{red}{7}}}\;\, ^{Eg}$ \\ \hline \hline \hline $20$ & $41$ & $20$ & $15$ & $6$ & $\mathbf{\textcolor{red}{ 9}}$& $60$ & $ \mathbf {2/79=1/60+1/237\,_{\textcolor{red}{ 3}}+1/316\,_{\textcolor{red}{ 4}}+1/790\,_{\textcolor{red}{10}}}\;\, ^{Eg}$ \\ \hline \end{tabular}\\ \vspace{0.5em} \begin{tabular}{|l|c|l||l||l||c|l|l|} \hline \multicolumn{7}{|c|}{\sf D with two different $\Delta_{d}^{'}\quad$(options: yes)} & \multicolumn{1}{l|}{\sf Scribes's decision: smallest $\Delta_{d}^{'} $}\\ \hline $n$ & $2n+1$ & $d_2$ & $d_3$ & $d_4$ &$\textcolor{red}{\Delta_{d}^{'}}$ &$D_1^n$ & $\qquad \quad$[4-terms] decompositions \\ \hline \hline $9$ & $19$ & $12$ & $4 $ & $3 $ & $\mathbf{\textcolor{red}{ 1}}$& $24$ & $\mathbf {2/29=1/24+1/58\,_{\textcolor{red}{ 2}}+1/174\,_{\textcolor{red}{ 6}}+1/232\,_{\textcolor{red}{8}}}\;\, ^{Eg}$ \\ \hline \hline $5$ & $11$ & $5$ & $4$ & $2$ & $\mathbf{\textcolor{red}{ 2}}$& $20$ & $ \mathbf {2/29=1/20+1/116\,_{\textcolor{red}{ 4}}+1/145\,_{\textcolor{red}{ 5}}+1/290\,_{\textcolor{red}{ 10}}}$ \\ \hline \hline $15$ & $31$ & $15$ & $10$ & $6$ & $\mathbf{\textcolor{red}{ 4}}$& $30$ & $ 
\mathbf {2/29=1/30+1/58\,_{\textcolor{red}{ 2}}+1/87\,_{\textcolor{red}{ 3}}+1/145\,_{\textcolor{red}{5}}}$ \\ \hline \hline \hline $9$ & $19$ & $10$ & $5$ & $4$ & $\mathbf{\textcolor{red}{ 1}}$& $40$ & $\mathbf {2/61=1/40+1/244\,_{\textcolor{red}{ 4}}+1/488\,_{\textcolor{red}{ 8}}+1/610\,_{\textcolor{red}{ 10}}}\;\, ^{Eg}$ \\ \hline \hline $14$ & $29$ & $15$ & $9$ & $5$ & $\mathbf{\textcolor{red}{ 4}}$& $45$ & $ \mathbf {2/61=1/45+1/183\,_{\textcolor{red}{ 3}}+1/305\,_{\textcolor{red}{ 5}}+1/549\,_{\textcolor{red}{9}}}$ \\ \hline \end{tabular}\\ \vspace{0.5em} \begin{tabular}{|l|c|l||l||l||c|l|l|} \hline \multicolumn{7}{|c|}{} & \multicolumn{1}{l|}{\sf Scribes's decision: consecutive multipliers}\\ \hline $n$ & $2n+1$ & $d_2$ & $d_3$ & $d_4$ &$\textcolor{red}{\Delta_{d}^{'}}$&$D_1^n$ & $\qquad \quad$[4-terms] decompositions \\ \hline \hline $14$ & $29$ & $14$ & $8$ & $7$ & $\mathbf{\textcolor{red}{ 1}}$& $56$ & $ \mathbf {2/83=1/56+1/332\,_{\textcolor{red}{ 4}}+1/581\,_{\textcolor{red}{ 7}}+1/664\,_{\textcolor{red}{ 8}}}$ \\ \hline \hline $18$ & $37$ & $15$ & $12$ & $10$ & $\mathbf{\textcolor{red}{ 2}}$& $60$ & $ \mathbf {2/83=1/60+1/332\,_{\textcolor{red}{ 4}}+1/415\,_{\textcolor{red}{ 5}}+1/498\,_{\textcolor{red}{6}}}\;\, ^{Eg{\textcolor{red}{{\star }}}}$ \\ \hline \end{tabular}\\ \vspace{0.5em} \begin{tabular}{|l|c|l||l||l||c|l|l|} \hline \multicolumn{7}{|c|}{} & \multicolumn{1}{l|}{\sf Scribes's decision: no odd denominator $D_1$}\\ \hline $n$ & $2n+1$ & $d_2$ & $d_3$ & $d_4$ &$\textcolor{red}{\Delta_{d}^{'}}$&$D_1^n$ & $\qquad \quad$[4-terms] decompositions \\ \hline \hline $18$ & $37$ & $21$ & $9$ & $7$ & $\mathbf{\textcolor{red}{ 2}}$& $63$ & $ \mathbf {2/89=1/63+1/267\,_{\textcolor{red}{ 3}}+1/623\,_{\textcolor{red}{ 7}}+1/801\,_{\textcolor{red}{9}}}$ \\ \hline \hline $15$ & $31$ & $15$ & $10$ & $6$ & $\mathbf{\textcolor{red}{ 4}}$& $60$ & $ \mathbf {2/89=1/60+1/356\,_{\textcolor{red}{ 4}}+1/534\,_{\textcolor{red}{ 6}}+1/890\,_{\textcolor{red}{10}}}\;\, 
^{Eg}$ \\ \hline \end{tabular} \end{center} \normalsize \label{1Delta4} \end{table} We recall that any odd denominator $D_1$ could lead to a solution for [3-terms] decompositions, as checked in tables \ref {3TERMSOPT} or \ref {Frac3become4}. It occurs only 2 times in table \ref{1Delta4} [4-terms]. The first occurrence, for $2/61$, was dropped because its ${\Delta_{d}^{'}}=4$ was too high. The second concerns {\sf 2/89} (first row). Then, for the sake of unification and to avoid a singularity, the chief scribe decided to discard $D_1=63$ in this case.\\ \hspace*{1.5em}Remark that we are very far from the assumptions of Gillings {\bf \cite{Gillings}} about an Egyptian preference for even numbers over odd, regarding denominators in general. Thus the {\it `no odd precept' } had a low priority. At a similarly low rate (2 times only), it will be applied to the composite numbers $D$ {\bf \cite{Brehamet}}. \section {Conclusion} \hspace{1.5em}As we saw, the most recent analysis (2008) of the `$2/n$' table has been performed by Abdulaziz {\bf \cite{Abdulaziz}} (see his group $G_2$). It can be appreciated as a kind of mathematical anastylosis, using materials drawn from the RMP and other documentation. The ancient calculation procedure, using mainly fractions, is faithfully respected, but it leads to in-depth arithmetical analyses of each divisor of $D_1$. \\ \hspace*{1.5em}Our global approach avoided the difficulties of sophisticated arithmetical studies. This provides the advantage of quickly discarding some widespread `modern' ideas about the topic. \\ $\bullet$ No, the last denominator is not bounded by a fixed value of $1000$. It only depends on the `circumstances' related to the value of $D$. For 3 or 4 terms, a limitation like $D_h \leq 10D$ is quite suitable, except only for {\sf 2/53}, where $10$ is replaced by $15$. An observation well stressed in Ref.
{\bf \cite{Abdulaziz}}.\\ $\bullet$ No requirement is found that the denominator $D_1$ has to be the greatest when alternatives exist.\\ $\bullet$ Once and for all, a systematic predilection for even denominators does not need to be considered. Only once were we forced to discard $D_1 = 63$ (odd), when deciding on {\sf 2/89}.\\ $\bullet$ Of course, there is no theoretical formula that immediately gives the first denominator as a function of $D$. One must necessarily go through trials and a few selection criteria. The simpler the better, like the $\Delta$-classification presented in this paper. Maybe it is this classification that opens the way to a comprehensive approach? Strictly speaking, there are no algorithms in the method, just tables and pertinent observation. This is how $2/23$, $2/29$ or $2/53$ have found a logical explanation, more thorough than the arguments commonly supplied for these `singularities'. \\ \hspace*{1.5em}Finding a simple logic according to which there is no singular case was the goal of the present paper.\\ Perhaps, chronologically, the study of prime numbers was elaborated \sf before \rm that of composite numbers. This is nothing more than a hypothesis consistent with the spirit of our study. Yes, the ancient scribes were certainly able to calculate and analyze all the preliminary cases. Ultimately, our unconventional method allows one to reconstruct the table fairly easily with weak mathematical assumptions, except maybe the new idea that it is beneficial to have consecutive multipliers. \renewcommand{\theequation}{A.\arabic{equation}} \setcounter{equation}{0} \section*{\sf Appendix A: why a boundary with a Top-flag?} In this appendix, we continue to consider prime denominators $D$. For [2-terms] decompositions the concept of a Top-flag has no meaning, since the last denominator is unique. \\ Obviously, doubtless far from Egyptian concepts, there are other equations {\sf more general} than\\ Eqs.
(\ref{eq:FEgypt3}) or (\ref{eq:FEgypt4}), namely \begin{equation} \frac{2}{D}= \frac{1}{D_1}+ \frac{1}{m_2D}+ \frac{1}{m_3D}, \end{equation} \begin{equation} \frac{2}{D}= \frac{1}{D_1}+ \frac{1}{m_2D}+ \frac{1}{m_3D}+ \frac{1}{m_4D}. \end{equation} We can imagine these as issued from another kind of unity decomposition, like \begin{equation} \mathbf{1}= \frac{D}{2D_1}+ \frac{1}{2m_2}+ \frac{1}{2m_3}, \end{equation} \begin{equation} \mathbf{1}= \frac{D}{2D_1}+ \frac{1}{2m_2}+ \frac{1}{2m_3}+ \frac{1}{2m_4}. \end{equation} $D/2D_1$ remains in the lead of the equality, and $\mathbf{1}$ is a sum of terms, each with an even denominator.\\ These (modern) equations have additional solutions of no use to the scribes.\\ \it A priori \rm the solutions are infinitely many, so to avoid such tedious research (today as in the past), it is necessary to limit the highest denominator $D_h=m_h D$. How to do that? Simply by defining a kind of `Top-flag' $\mbox{\boldmath $\top$ }\!\!_ f^{\;\;[h]}$ such that \begin{equation} D_h \leq D \mbox{\boldmath $\top$ }\!\!_ f^{\;\;[h]} . \end{equation} Indeed, as soon as one decides to study a three-term decomposition or more, it should be realized that an upper boundary for the last denominator has to be fixed. If not, the number of solutions becomes infinite [countable]. Recall that $m_2 < m_3 < m_4$ and $D_2 < D_3 < D_4$. Unfortunately (or not), the author of this paper began the calculations with an even more general problem, that of solving \begin{equation} \frac{2}{D}= \sum _{i=1}^{h} \frac{1}{D_i}, \label{eq:EgyptGeneral} \end{equation} without any criterion of multiplicity involving multipliers like $m_i$ ($i>2$).\\ Certainly this was the reflex of Gillings {\bf{\cite{Gillings}}} or Bruckheimer and Salomon {\bf{\cite{BruckSalom}}}. The problem is solvable, and the solutions are obtainable by means of a small computer. After the necessary arithmetical analysis, it can be found that $(h-1)$ sets of solutions exist.
One with $(h-1)$ multipliers $m_i$, another with $(h-2)$ multipliers, and so on. No solution exists if one searches for $D_i$ ($i\geq 2$) that are not multiples of $D$.\\ Even a low-level programming language like {\sf sb} can be used instead of {\sf Fortran} to perform the computations at a very acceptable speed. We quickly realized the necessity of stopping the calculations by imposing a limitation on the last (highest) denominator $D_h$. Whence the introduction of a Top-flag.\\ Actually the Egyptian $2/D$ table shows a subset of more general solutions, because the multipliers $m_i$ have a specific form involving $D_\mathbf1$ and some of its divisors $d_i$. For example, outside this subset one can find an unexpected [4-terms] solution for 2/23 with $\mbox{\boldmath $\top$ }\!\!_ f^{\;\;[4]}= 10$, namely \begin{tabular}{|c|} \hline $2/23=1/15+1/115\,_{\textcolor{red}{ 5}}+1/138\,_{\textcolor{red}{ 6}}+1/230\,_{\textcolor{red}{ 10}}$ \\ [0.01in] \hline \end{tabular}.\\ \hspace*{1.5em}So, if we restrict ourselves to retrieving the Egyptian fractions given in the table, it naturally comes to mind to limit the highest denominator by an upper boundary: a convenient Top-flag. \\ Except for the Babylonian system in base $60$, numeration in base $10$ is rather universal, because of our two hands with {\sf 5} fingers each. It is common sense that the selection was generally $\mbox{\boldmath $\top$ }\!\!_ f^{\;\;[h]}= 10 \;(=2\times \sf 5)$, not excluding a favorable appreciation for $\mbox{\boldmath $\top$ }\!\!_ f^{\;\;[3]}= 15 \;(=3\times \sf 5)$ as for {\sf 2/53}. \end{flushleft}
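As an illustration of this Appendix (a modern aid only, with naming of our own), the general search behind Eq. (\ref{eq:EgyptGeneral}) restricted to four terms with a Top-flag can be brute-forced with exact rational arithmetic; among other solutions it recovers the out-of-subset decomposition of 2/23 quoted above.

```python
from fractions import Fraction
from itertools import combinations

def general_four_terms(D, top_flag=10):
    """Solutions 2/D = 1/D1 + 1/(m2*D) + 1/(m3*D) + 1/(m4*D)
    with m2 < m3 < m4 <= top_flag, i.e. last denominator D4 <= top_flag*D."""
    sols = []
    for D1 in range(D // 2 + 1, 2 * D + 1):    # 1/D1 must leave a positive rest
        rest = Fraction(2, D) - Fraction(1, D1)
        for m2, m3, m4 in combinations(range(2, top_flag + 1), 3):
            # exact test: 1/m2 + 1/m3 + 1/m4 must equal D * rest
            if Fraction(1, m2) + Fraction(1, m3) + Fraction(1, m4) == rest * D:
                sols.append((D1, m2 * D, m3 * D, m4 * D))
    return sols
```

For $D=23$ the list contains the unexpected $(15, 115, 138, 230)$ alongside the Egyptian-style $(20, 46, 92, 230)$.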
\section{Introduction} \hspace{\parindent} Until now, regularizations by way of the zeta function have been successful in some physical problems\cite{Kunimasa,Leeuwen}, but the Riemann hypothesis itself has remained unproved. Recently we have given evidence for it, in the setting of finite quantities, using Euler's alternating summation, which is finite even in the critical strip and seems essential for clarifying the Riemann hypothesis.\cite{Fujimoto5} Here we briefly review the essential points and give further evidence, one piece of which is again related to the finite ratio appearing in the functional equation, while another lies in the setting of the divergent quantities of the previous derivation. The definition of the Riemann zeta function is \begin{equation} \zeta(z)=\lim_{n\to\infty}\zeta_n(z),\ \zeta_n(z)\equiv\sum_{k=1}^n\frac{1}{k^z} \label{e101r} \end{equation} for $\Re z>1$ \cite{Conrey}. In this note we adopt a hat notation such as $\hat{\zeta}(z)$ for quantities that are well defined even in the critical strip $0<\Re z<1$, such as Euler's alternating series \begin{equation} \hat\zeta(z)=\frac{1}{1-2^{1-z}}\lim_{n\to\infty}\xi_n(z),\ \xi_n(z)\equiv\sum_{k=1}^n\frac{(-1)^{k-1}}{k^z}. \label{e102r} \end{equation} We often refer to a hat notation as a ``regularized'' form, because a hat expression is defined by subtracting an infinite quantity from a divergent one. In this note, we deal with Euler's alternating series for the Riemann zeta function, as in (\ref{e102r}), which is well defined even in the critical strip $0<\Re z<1$, and utilize the functional equation to indicate the hypothesis. Hereafter we are only interested in the region $\Re z\ge\frac{1}{2}$ for the Riemann zeta function, because the functional equation ensures the regularized nature of the zeta function for the other half plane $\Re z<\frac{1}{2}$.
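As a purely numerical aside (not part of the original argument), Eq. (\ref{e102r}) can be checked directly in a few lines: the alternating partial sums converge for $\Re z>0$, so dividing by $1-2^{1-z}$ yields usable values of $\hat\zeta$ both at $z=2$ and inside the critical strip. The sketch below uses the ordinate of the first non-trivial zero truncated to six decimals.

```python
def zeta_hat(z, n=100000):
    """Euler's alternating series for zeta, cf. Eq. (e102r); converges for Re z > 0."""
    eta = sum((-1) ** (k - 1) / k ** z for k in range(1, n + 1))
    return eta / (1 - 2 ** (1 - z))

print(abs(zeta_hat(2) - 1.6449340668482264))  # tiny: matches pi^2/6
print(abs(zeta_hat(0.5 + 14.134725j)))        # small: near the first zero
```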
\vskip 5mm There is a relation called the functional equation for the Riemann zeta function \begin{equation} \hat{\zeta}(z)=\hat{H}(z)\hat{\zeta}(1-z), \label{e121r} \end{equation} where $\hat{H}(z)$ is given by $\displaystyle 2\Gamma(1-z)(2\pi)^{z-1}\sin\frac{\pi z}{2}$ and is not equal to zero for $\displaystyle{\frac{1}{2}\leq\Re{z}<1}$. Hereafter we deal with $\hat{H}(z)$ as the infinite limit of $\hat{H}_n(z)$ defined by \begin{equation} \hat{H}_n(z)\equiv\frac{\hat{\zeta}_n(z)}{\hat{\zeta}_n(1-z)}, \label{e122r} \end{equation} where $\hat{\zeta}_n(z)$ is defined by \begin{equation} \hat{\zeta}_n(z)\equiv\zeta_n(z)-\frac{n^{1-z}}{1-z} \label{e123r} \end{equation} but, as we will note, care is needed when substituting a zero for $z$ in the limit of $n\to\infty$. The relation between $\zeta_n(z)$ and $\xi_n(z)$ is special because the form of the relation is preserved before and after the regularization, as \begin{eqnarray} \xi_{2n}(z)&=&\zeta_{2n}(z)-2^{1-z}\zeta_n(z), \label{e202ar}\\ \xi_{2n}(z)&=&\hat{\zeta}_{2n}(z)-2^{1-z}\hat{\zeta}_n(z), \label{e202br} \end{eqnarray} where we used the relation (\ref{e123r}); we do not use a hat notation for $\xi_{2n}(z)$ because it is already well defined in the critical strip. Adding the term $\hat{\zeta}_{2n}(z)$ to both sides of (\ref{e202br}), we get \begin{equation} \xi_{2n}(z)+\hat{\zeta}_{2n}(z)=2\hat{\zeta}_{2n}(z)-2^{1-z}\hat{\zeta}_n(z), \label{e203r} \end{equation} where the left-hand side is of order $O(n^{-(1+\Re z)})$ for $n\to\infty$, as shown in the Appendix, whereas each term on the right-hand side is of order $O(n^{-\Re z})$. 
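The functional equation can be checked numerically for real $z$ in the critical strip, where both sides can be evaluated through the alternating series (a sketch of ours; the tolerances are generous):

```python
import math

def zeta_alt(z, n=100000):
    """Riemann zeta via Euler's alternating series, valid for z > 0, z != 1."""
    s = sum((-1) ** (k - 1) / k ** z for k in range(1, n + 1))
    s += 0.5 * (-1) ** n / (n + 1) ** z   # average of S_n and S_{n+1}
    return s / (1 - 2 ** (1 - z))

def H(z):
    # Coefficient of the functional equation zeta(z) = H(z) zeta(1-z).
    return 2 * math.gamma(1 - z) * (2 * math.pi) ** (z - 1) * math.sin(math.pi * z / 2)

assert math.isclose(H(0.5), 1.0, rel_tol=1e-12)          # fixed point z = 1/2
z = 0.3
assert abs(zeta_alt(z) - H(z) * zeta_alt(1 - z)) < 1e-4  # functional equation
```

At $z=\frac12$ the coefficient reduces to $H(\tfrac12)=1$, consistent with the functional equation being trivially satisfied there.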
When we put $z=\rho$, one of the non-trivial zeros of the Riemann zeta function, in (\ref{e203r}), take the absolute values and use the property of the zeta function that $1-\rho$ is also a zero whenever $\rho$ is, we get \begin{eqnarray} 2|\hat{\zeta}_{2n}(\rho)|&=&|2^{1-\rho}||\hat{\zeta}_n(\rho)|+O(n^{-(\sigma+1)}) \label{e2041r}\\ 2|\hat{\zeta}_{2n}(1-\rho)|&=&|2^{\rho}||\hat{\zeta}_n(1-\rho)|+O(n^{\sigma-2}), \label{e205r} \end{eqnarray} where $\sigma=\Re \rho$. Combining (\ref{e2041r}) with (\ref{e205r}), we get \begin{equation} |\hat{H}_{2n}(\rho)|=|2^{1-2\rho}||\hat{H}_n(\rho)|+O(n^{-2\sigma}). \label{e206r} \end{equation} When we take the limit of $n\to\infty$ in (\ref{e206r}), the left-hand side coincides with $|\hat{H}(\rho)|$, and $\displaystyle\lim_{n\to\infty}|\hat{H}_{n}(\rho)|$ on the right-hand side converges to the same value. The reason why we have introduced the absolute values in (\ref{e2041r}) to (\ref{e206r}) is as follows. The argument of $\hat{H}_{n}(\rho)$ also depends on $n$; taking absolute values eliminates this dependence. We can therefore conclude that the factor $|2^{1-2\rho}|$ is equal to one; since $|2^{1-2\rho}|=2^{1-2\sigma}$, this means that the real part of the zero, $\Re\rho$, is identical to one half. \vskip 5mm For a while, we consider the values of the $n$-dependent functions at the non-trivial zeros. We can easily get the $n$-dependent value and the next leading order for $\hat{H}_n(z)$ as follows: \begin{eqnarray} \hat{\zeta}_n(\rho)&=&\frac{1}{2n^\rho}+O(n^{-(\sigma+1)}),\\ \hat{\zeta}_n(1-\rho)&=&\frac{1}{2n^{1-\rho}}+O(n^{\sigma-2}). \label{e200a} \end{eqnarray} So we can reach the relation \begin{equation} \hat{H}_n(\rho)\equiv\frac{\hat{\zeta}_n(\rho)}{\hat{\zeta}_n(1-\rho)} =n^{1-2\rho}+O(n^{-2\sigma}). 
\label{e200-1a} \end{equation} When we take the limit of $n\to\infty$ in (\ref{e200-1a}), the left-hand side converges to a finite value which coincides with $\hat{H}(\rho)$ up to the argument. For the right-hand side to have a finite, non-vanishing value, the real part of $1-2\rho$ must vanish. Then we can conclude that the real part of the zero $\Re\rho$ is identical to one half. \vskip 5mm For the non-regularized quantities $\zeta_n(z)$ appearing on the right-hand side of Eq.(\ref{e123r}), the functional equation for finite $n$ can also be defined as follows: \begin{equation} \zeta_n(z)\equiv H_n(z)\zeta_n(1-z), \label{e201a} \end{equation} where $\zeta_n(z)$ is defined by Eq.(\ref{e101r}) and we refer to $H_n(z)$ as the non-regularized coefficient, \begin{eqnarray} H_n(z)&=&\frac{\zeta_n(z)}{\zeta_n(1-z)}=\frac{\hat{\zeta}_n(z)+n^{1-z}/(1-z)}{\hat{\zeta}_n(1-z)+n^z/z}\\ &=&n^{1-2z}\left\{\frac{\hat{\zeta}_n(z)/n^{1-z}+1/(1-z)}{\hat{\zeta}_n(1-z)/n^z+1/z}\right\}\\ &=&n^{1-2z}\left\{\frac{\hat{\zeta}_n(z)/n^{1-z}+1/(1-z)}{\hat{H}_n(z)^{-1}\hat{\zeta}_n(z)/n^z+1/z}\right\}. \label{e202a} \end{eqnarray} Considering the limit of $n\to\infty$ for $z=\rho$, we get the relation \begin{eqnarray} H_n(\rho)&=&n^{1-2\rho}\left\{\frac{\hat{\zeta}_n(\rho)/n^{1-\rho}+1/(1-\rho)} {\hat{H}_n(\rho)^{-1}\hat{\zeta}_n(\rho)/n^\rho+1/\rho}\right\}\\ &=&\frac{\rho}{1-\rho}n^{1-2\rho}+O(n^{-2\sigma}). \label{e203} \end{eqnarray} When we deal with Euler's alternating series (\ref{e102r}) for the Riemann zeta function, we can evaluate the function even in the critical strip $0<\Re z<1$, as mentioned above. 
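The leading behaviour $\hat\zeta_n(\rho)=\frac{1}{2}n^{-\rho}+O(n^{-(\sigma+1)})$ can be observed numerically at the first non-trivial zero (a sketch of ours; the value of $\Im\rho$ is the standard tabulated one):

```python
# Check the leading behaviour of hat-zeta_n at the first non-trivial zero.
rho = complex(0.5, 14.134725141734694)   # tabulated first zero
n = 100000

zeta_n = sum(k ** -rho for k in range(1, n + 1))
zeta_hat = zeta_n - n ** (1 - rho) / (1 - rho)

# 2 n^rho * hat-zeta_n(rho) should tend to 1 with an O(1/n) correction:
assert abs(2 * n ** rho * zeta_hat - 1) < 1e-3
```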
The discussion of the ratio of $H_{2n}(\rho)$ to $H_n(\rho)$ is parallel to that for the regularized quantities\cite{Fujimoto5}, which we have briefly reviewed above. By using Eq.(\ref{e202ar}), we get \begin{equation} \lim_{n\to\infty}{\zeta}_{2n}(\rho)=2^{1-\rho}\lim_{n\to\infty}{\zeta}_n(\rho) \label{e207a} \end{equation} and, using the property that $1-\rho$ is also a zero whenever $\rho$ is, we also get \begin{equation} \lim_{n\to\infty}{\zeta}_{2n}(1-\rho)=2^{\rho}\lim_{n\to\infty}{\zeta}_n(1-\rho). \label{e208a} \end{equation} Combining Eq.(\ref{e207a}) with (\ref{e208a}), we get \begin{equation} \lim_{n\to\infty}{H}_{2n}(\rho)=2^{1-2\rho}\lim_{n\to\infty}{H}_n(\rho). \label{e209a} \end{equation} The way to reach the conclusion is then to take absolute values in (\ref{e209a}) together with (\ref{e206r}), \begin{equation} |2^{1-2\rho}|=\lim_{n\to\infty}\left|\frac{{H}_{2n}(\rho)}{{H}_n(\rho)}\right| =\lim_{n\to\infty}\left|\frac{\hat{H}_{2n}(\rho)}{\hat{H}_n(\rho)}\right| =\frac{|\hat{H}(\rho)|}{|\hat{H}(\rho)|}=1, \label{e212a} \end{equation} where we have to use the regularized quantities because we do not know a priori whether the non-regularized quantities tend to the same absolute value; this can be confirmed, and it implies that the real part of the zero $\Re\rho$ is equal to $\displaystyle\frac{1}{2}$. \vskip 5mm On the other hand, by using Eq.(\ref{e202ar}) and the property of Euler's alternating series, we can get \begin{equation} \zeta_{2n}(\rho)=2^{1-\rho}\zeta_n(\rho)+O(n^{-\sigma}), \label{e221a} \end{equation} and the $n$-dependent zeta function on each side can be reduced to its leading power of $n$ using Eqs.(\ref{e123r}) and (\ref{e201a}); the resulting relation \begin{equation} H_{2n}(\rho)(2n)^{\rho}/\rho =(2n)^{1-\rho}/(1-\rho)+O(n^{-\sigma}) \end{equation} leads us to \begin{equation} H_{2n}(\rho)=\frac{\rho}{1-\rho}(2n)^{1-2\rho}+O(n^{-2\sigma}), \label{e222a} \end{equation} which is consistent with (\ref{e203}). 
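Both relations above can be probed numerically at the first non-trivial zero (our own sketch; the imaginary part of $\rho$ is the tabulated value):

```python
# Numerical probe of the relations zeta_{2n}(rho) ~ 2^{1-rho} zeta_n(rho)
# and H_n(rho) / n^{1-2 rho} -> rho / (1 - rho) at the first zero.
rho = complex(0.5, 14.134725141734694)   # tabulated first zero

def zeta_n(z, n):
    return sum(k ** -z for k in range(1, n + 1))

n = 100000
# The difference zeta_{2n}(rho) - 2^{1-rho} zeta_n(rho) equals
# xi_{2n}(rho), which vanishes even though each side grows like n^{1/2}:
lhs, rhs = zeta_n(rho, 2 * n), 2 ** (1 - rho) * zeta_n(rho, n)
assert abs(lhs) > 1 and abs(lhs - rhs) < 0.01

# Non-regularized coefficient, rescaled by its leading power of n:
H_n = zeta_n(rho, n) / zeta_n(1 - rho, n)
assert abs(H_n / n ** (1 - 2 * rho) - rho / (1 - rho)) < 1e-3
```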
In Eq.(\ref{e222a}) we can see that even the non-regularized quantity $H_n(\rho)$ converges, in the sense that \begin{equation} \lim_{n\to\infty}\frac{H_n(\rho)}{n^{1-2\rho}}=\frac{\rho}{1-\rho}, \label{e223a} \end{equation} as long as the real part of the zero is equal to $\displaystyle\frac{1}{2}$, which is consistent with the Riemann hypothesis. We can also show the relations concerning the regularized quantities for $n\to\infty$, \begin{equation} \lim_{n\to\infty}\frac{\hat{H}_n(\rho)}{n^{1-2\rho}}=1\ \ \ {\rm and}\ \ \ \lim_{n\to\infty}\frac{\hat{\zeta}'_n(\rho)}{\hat{\zeta}'_n(1-\rho)}=-\hat{H}(\rho), \label{e224a} \end{equation} where the prime means the derivative with respect to $z$. All the order estimates above make use of the relation derived by Hardy and Littlewood\cite{Hardy}, \begin{equation} \hat{\zeta}(z)=\hat{\zeta}_n(z)+O(n^{-\Re z}) \end{equation} for $|\Im z|\le 2\pi n/C$, where $C$ is a constant greater than one, which means that $\Im z$ can be taken as large as $n$. Finally we have to mention that the regularized form for the Riemann zeta function above, in the region $\frac{1}{2}\le\Re z<1$, coincides with the analytic continuation. \vskip 5mm \newpage \renewcommand{\theequation}{\Alph{section}\arabic{equation}} \setcounter{section}{1} \setcounter{equation}{0} \section*{Appendix} \hspace{\parindent} Here we briefly show that the function of $z$ with a parameter $n$ appearing on the left-hand side of (\ref{e203r}), \begin{equation} h_{2n}(z)\equiv \xi_{2n}(z)+\hat{\zeta}_{2n}(z), \label{a001} \end{equation} converges to zero rapidly compared with $\xi_{2n}(z)$ or $\hat{\zeta}_{2n}(z)$ for $n\to\infty$ at $z=\rho$, one of the non-trivial zeroes of the Riemann zeta function. 
The definitions given in (\ref{e101r}) and (\ref{e123r}) are \begin{eqnarray} \zeta_n(z)&\equiv& \sum_{k=1}^n\frac{1}{k^z}\nonumber\\ \hat{\zeta}_n(z)&\equiv& \zeta_n(z)-\frac{n^{1-z}}{1-z}\nonumber\\ \hat{\zeta}(z)&\equiv& \lim_{n\to\infty}\hat{\zeta}_n(z).\nonumber \end{eqnarray} By using the analytical continuation of the Euler-Maclaurin sum formula, we can write \begin{equation} \hat{\zeta}(z)=\zeta_n(z)-\frac{n^{1-z}}{1-z}-\frac{1}{2n^z}+R_n(z), \label{a002} \end{equation} where $R_n(z)$ represents a residual term including the Bernoulli terms, which is of order $o(n^{-\Re z})$ for $n\to\infty$. We write this down as \begin{equation} \hat{\zeta}(z)-\hat{\zeta}_n(z)=-\frac{1}{2n^z}+R_n(z). \label{a003} \end{equation} On the other hand, the definitions for the Euler alternating series given in (\ref{e102r}) are \begin{eqnarray} \xi_n(z)&\equiv& \sum_{k=1}^n\frac{(-1)^{k-1}}{k^z}\nonumber\\ \xi(z)&\equiv& \lim_{n\to\infty}\xi_n(z).\nonumber \end{eqnarray} This immediately means that \begin{equation} \xi(z)=(1-2^{1-z})\hat{\zeta}(z), \label{a004} \end{equation} \begin{equation} \xi_{2n}(z)=\hat{\zeta}_{2n}(z)-2^{1-z}\hat{\zeta}_n(z) \label{a005} \end{equation} and \begin{equation} h_{2n}(z)= 2\hat{\zeta}_{2n}(z)-2^{1-z}\hat{\zeta}_n(z). \label{a006} \end{equation} Here we introduce the average of two consecutive partial sums, \begin{eqnarray} g_{2n}(z)&\equiv& \frac{\xi_{2n-1}(z)+\xi_{2n}(z)}{2}\nonumber\\ &=& \xi_{2n}(z)+\frac{1}{2(2n)^z} \label{a007} \end{eqnarray} and we get \begin{eqnarray} & & \hat{\zeta}(z)-h_{2n}(z)\nonumber\\ &=& \hat{\zeta}(z)-\{2\hat{\zeta}_{2n}(z)-2^{1-z}\hat{\zeta}_n(z)\}\nonumber\\ &=& \{\hat{\zeta}(z)-\hat{\zeta}_{2n}(z)\}-\{\hat{\zeta}_{2n}(z)-2^{1-z}\hat{\zeta}_n(z)\}\nonumber\\ &=& \left\{-\frac{1}{2(2n)^z}+R_{2n}(z)\right\}-\xi_{2n}(z)\nonumber\\ &=& -g_{2n}(z)+R_{2n}(z). \label{a008} \end{eqnarray} Putting $z=\rho$, one of the non-trivial zeroes, we get the relation \begin{equation} h_{2n}(\rho)=g_{2n}(\rho)-R_{2n}(\rho). 
\label{a009} \end{equation} Meanwhile we evaluate $g_{2n}(z)$ as \begin{eqnarray} g_{2n}(z)&=& \xi_{2n}(z)+\frac{1}{2(2n)^z}\nonumber\\ &=& \zeta_{2n}(z)-2^{1-z}\zeta_n(z)+\frac{1}{2(2n)^z}\nonumber\\ &=& \left\{\hat{\zeta}(z)+\frac{(2n)^{1-z}}{1-z}+\frac{1}{2(2n)^z}-R_{2n}(z)\right\}\nonumber\\ && -2^{1-z}\left\{\hat{\zeta}(z)+\frac{n^{1-z}}{1-z}+\frac{1}{2n^z}-R_n(z)\right\} +\frac{1}{2(2n)^z}\nonumber\\ &=& (1-2^{1-z})\hat{\zeta}(z)-R_{2n}(z)+2^{1-z}R_n(z)\nonumber\\ &=& \xi(z)-R_{2n}(z)+2^{1-z}R_n(z) \label{a010} \end{eqnarray} and, again putting $z=\rho$, we get another relation \begin{equation} g_{2n}(\rho)=-R_{2n}(\rho)+2^{1-\rho}R_n(\rho). \label{a011} \end{equation} By using (\ref{a009}) and (\ref{a011}), we get \begin{equation} h_{2n}(\rho)=-2R_{2n}(\rho)+2^{1-\rho}R_n(\rho). \label{a013} \end{equation} Eqs.(\ref{a011}) and (\ref{a013}) are both of order $o(n^{-1/2})$, which shows that $h_{2n}(\rho)$ converges to zero more rapidly than $\xi_{2n}(\rho)$ or $\hat{\zeta}_{2n}(\rho)$. \vskip 5mm \noindent
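The faster decay of $h_{2n}(\rho)$ relative to its two terms can be seen numerically (our own sketch; the imaginary part of $\rho$ is the tabulated value of the first zero):

```python
# Check that h_{2n} = xi_{2n} + hat-zeta_{2n} decays much faster than
# either term at the first non-trivial zero.
rho = complex(0.5, 14.134725141734694)   # tabulated first zero
n = 10000

def zeta_hat(z, m):
    return sum(k ** -z for k in range(1, m + 1)) - m ** (1 - z) / (1 - z)

def xi(z, m):
    return sum((-1) ** (k - 1) * k ** -z for k in range(1, m + 1))

zh = zeta_hat(rho, 2 * n)
h = xi(rho, 2 * n) + zh
# Each term is O(n^{-1/2}); their sum is far smaller:
assert abs(h) < 0.01 * abs(zh)
```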
\section{Introduction} Gravitation has a quite peculiar property: particles with different masses and different compositions feel it in such a way that all of them acquire the same acceleration and, given the same initial conditions, follow the same path. Such universality of response --- usually referred to as {\it universality of free fall} --- is the most fundamental characteristic of the gravitational interaction \cite{mtw}. It is unique, peculiar to gravitation: no other basic interaction of nature has it. Universality of free fall is usually identified with the weak equivalence principle, which establishes the equality between inertial and gravitational masses. In fact, in order to move along the same trajectory, the motion of different particles must be independent of their masses, which must then cancel out of the equations of motion. Since this cancellation can only happen when the inertial and gravitational masses coincide, this coincidence naturally implies universality. General relativity, Einstein's theory of gravitation, is fundamentally based on the weak equivalence principle. In fact, to comply with universality, the presence of a gravitational field is supposed to produce {\em curvature in spacetime}, the gravitational interaction being achieved by letting (spinless) particles follow the {\it geodesics} of the curved spacetime. In other words, the connection describing the gravitational interaction is assumed to have non-vanishing curvature, but vanishing torsion. And now comes the important point: a general spacetime--rooted (or Lorentz) connection has {\em two} fundamental properties: curvature {\em and}\, torsion. Why, then, should matter produce {\it only curvature}? Was Einstein wrong when he made this assumption? Does torsion play any role in gravitation? The purpose of these notes is to discuss possible answers to these questions, as well as to analyze their theoretical and experimental consistency \cite{moriond}. 
In order to do that we begin by introducing, in the next section, some fundamental concepts related to spacetime and gravitation. Then, in section 3, we briefly review the main points of general relativity, a theory in which torsion is assumed to vanish from the very beginning. Section 4 gives a discussion of teleparallel gravity, a theory in which, instead of torsion, curvature is assumed to vanish. In spite of this fundamental difference, general relativity and teleparallel gravity are found to provide completely equivalent descriptions of the gravitational interaction. According to these theories, therefore, curvature and torsion are equivalent ways of describing gravitation, and are consequently related to the same gravitational degrees of freedom. Section 5 briefly outlines Einstein--Cartan theory, which presupposes the simultaneous existence of curvature and torsion. According to it, torsion becomes important only in the presence of intrinsic spin, and new physical phenomena --- ignored by general relativity --- are predicted to exist in its presence. For spinless matter, it coincides with general relativity. In this theory, therefore, curvature and torsion are related to different degrees of freedom of gravity. Section 6 discusses new general relativity, a generalized teleparallel model with three free parameters, which should be determined by experiment. Unlike Einstein--Cartan theory, this theory does not necessarily relate torsion to intrinsic spin. Like Einstein--Cartan theory, however, it assumes torsion to be responsible for describing possible corrections to general relativity --- and consequently to the teleparallel equivalent of general relativity. Also in this case, therefore, curvature and torsion are related to different degrees of freedom of gravity. Finally, section 7 is devoted to a comparative discussion of the different interpretations of torsion. 
\section{Basic concepts} \subsection{Linear frames and tetrads} \label{subsecFrames} The geometrical setting of any theory for gravitation is the tangent bundle, a natural construction always present on spacetime \cite{livro}. At each point of spacetime --- the bundle base space --- there is always a tangent space --- the fiber --- on which a spacetime transformation group acts. In the case of the Lorentz group, the tangent space provides a representation for the group, the vector representation. The bundle formalism provides a natural extension to other representations --- tensorial or spinorial. In what follows, we are going to use the Greek alphabet $(\mu, \nu, \rho, \dots = 0,1,2,3)$ to denote indices related to spacetime, and the first half of the Latin alphabet $(a,b,c, \dots = 0,1,2,3)$ to denote indices related to the tangent space, assumed to be a Minkowski spacetime with metric \begin{equation} \eta_{ab} = \mbox{diag}(+1,-1,-1,-1). \end{equation} The second half of the Latin alphabet $(i,j,k, \dots = 1,2,3)$ will be reserved for space indices. Spacetime coordinates, therefore, will be denoted by $\{x^\mu\}$, whereas the tangent space coordinates will be denoted by $\{x^a\}$. Such coordinate systems define, on their domains of definition, local bases for vector fields, formed by the sets of gradients \begin{equation} \{\partial_\mu\} \equiv \{{\partial}/{\partial x^\mu}\} \quad \mbox{and} \quad \{\partial_a\} \equiv \{{\partial}/{\partial x^a}\}, \end{equation} as well as bases $\{dx^\mu\}$ and $\{dx^a\}$ for covector fields, or differentials. These bases are dual, in the sense that \begin{equation} dx^\mu \, ({\partial_\nu}) = \delta^\mu_\nu \quad \mbox{and} \quad dx^a \, ({\partial_b}) = \delta^a_b. \end{equation} On the respective domains of definition, any vector or covector field can be expressed in terms of these bases, which can furthermore be extended by direct product to constitute bases for general tensor fields. 
A {\em holonomic} (or coordinate) base like $\{{\partial_a}\}$, related to coordinates, is a very particular case of linear base. Any set of four linearly independent fields $\{e_{a}\}$ will form another base, and will have a dual $\{e^{a}\}$ whose members are such that $e^{a}(e_b) = \delta^a_b$. These frame fields are the general linear bases on the spacetime differentiable manifold, whose set, under conditions that make it also a differentiable manifold, constitutes the bundle of linear frames. Of course, on the common domains on which they are defined, the members of one base can be written in terms of the members of the other, that is, \begin{equation} e_a = e_a{}^\mu \, \partial_\mu, \quad e^{a} = e^{a}{}_\mu \, dx^\mu, \end{equation} and conversely. These frames, with their bundles, are constitutive parts of spacetime. They are automatically present as soon as spacetime is taken to be a differentiable manifold \cite{livro}. We are going to use the notation $\{h_{a}, h^{a}\}$ for a generic tetrad field (or simply ``tetrad''), a field of linear frames connected with the presence of a gravitational field. Consider the spacetime metric $g$ with components $g_{\mu \nu}$, in some dual holonomic base $\{d x^{\mu}\}$: \begin{equation} g = g_{\mu \nu} dx^{\mu} \otimes dx^{\nu} = g_{\mu \nu} dx^{\mu} dx^{\nu}. \label{eq:Riemetric} \end{equation} A tetrad field $h_{a} = h_{a}{}^{\mu} \, {\partial_{\mu}}$ will relate $g$ to the tangent--space metric $\eta = \eta_{a b} \, dx^a dx^b$ by \begin{equation} \eta_{a b} = g(h_{a},h_{b}) = g_{\mu \nu} \, h_{a}{}^{\mu} h_{b}{}^{\nu}. \label{eq:gtoeta} \end{equation} This means that a tetrad field is a linear frame whose members $h_{a}$ are (pseudo) orthogonal by the metric $g$. 
The components of the dual base members $h^{a} = h^{a}{}_{\nu} dx^{\nu}$ satisfy \begin{equation} h^{a}{}_{\mu} h_{a}{}^{\nu} = \delta_{\mu}^{\nu} \quad {\rm and} \quad h^{a}{}_{\mu} h_{b}{}^{\mu} = \delta^{a}_{b}, \label{eq:tetradprops1} \end{equation} so that Eq.~(\ref{eq:gtoeta}) has the converse \begin{equation} g_{\mu \nu} = \eta_{a b} \, h^{a}{}_{\mu} h^{b}{}_{\nu}. \label{eq:tettomet} \end{equation} Anholonomy --- the property of a differential form which is not the differential of anything, or of a vector field which is not a gradient --- is commonplace in many chapters of Physics. Heat and work, for instance, are typical anholonomic coordinates on the space of thermodynamic variables, and the angular velocity of a generic rigid body is a classical example of anholonomic velocity. In the context of gravitation, anholonomy is related, through the equivalence principle, to the very existence of a gravitational field \cite{ABP03}. Given a Riemannian metric as in~(\ref{eq:tettomet}), the presence or absence of a gravitational field is fixed by the anholonomic or holonomic character of the forms $h^{a} = h^{a}{}_{\nu} dx^{\nu}$. We can think of a coordinate change $\{x^a\} \leftrightarrow \{x^\mu\}$ represented by \begin{equation} dx^a = \left(\partial_{\mu} x^a\right) \, dx^\mu \quad \mathrm{and} \quad dx^\mu = \left(\partial_a x^{\mu}\right) \, dx^a. \end{equation} The 1-form $dx^a$ is holonomic, just the differential of the coordinate $x^a$, and the objects $\partial_{\mu} x^a$ are the components of the holonomic form $dx^a$ written in the base $\{dx^\mu\}$, with $\partial_a x^{\mu}$ its inverse. Thus, such a coordinate change is just a change of holonomic bases of 1-forms. For the dual base we have the relations \begin{equation} \partial_\mu = \left(\partial_{\mu} x^a\right) \, \partial_a \quad \mathrm{and} \quad \partial_a = \left(\partial_a x^{\mu}\right) \, \partial_\mu. 
\label{1} \end{equation} Take now a dual base $h^a$ such that $d h^a \ne 0$, that is, not formed by differentials. Apply the anholonomic 1-forms $h^a$ to $\partial_\mu$. The result, $h^a{}_\mu = h^a \, ({\partial_\mu})$, is the component of $h^a$ = $h^a{}_\mu dx^\mu$ along $dx^\mu$. The procedure can be inverted when the $h^a$'s are linearly independent, and defines vector fields $h_a$ = $h_a{}^\mu {\partial_\mu}$ which are not gradients. Because closed forms are locally exact, holonomy/anholonomy can be given a simple criterion: a form is holonomic {\it iff} its exterior derivative vanishes. A holonomic tetrad will always be of the form $h^{a} = dx^a$ for some coordinate set $\{x^a\}$. For such a tetrad, tensor (\ref{eq:tettomet}) would simply give the components of the Lorentz metric $\eta$ in the coordinate system $\{x^\mu\}$. An anholonomic basis $\{h_{a}\}$ satisfies the commutation table \begin{equation} [h_{a}, h_{b}] = f^{c}{}_{a b}\ h_{c}, \label{eq:comtable} \end{equation} with $f^{c}{}_{a b}$ the so-called structure coefficients, or coefficients of anholonomy. The frame $\{{\partial_{\mu}}\}$ has been presented above as holonomic precisely because its members commute with each other. The dual expression of the commutation table above is the Cartan structure equation \begin{equation} d h^{c} = - {\textstyle{\frac{1}{2}}} \, f^{c}{}_{a b}\ h^{a} \wedge h^{b} = {\textstyle{\frac{1}{2}}} \, (\partial_\mu h^c{}_\nu - \partial_\nu h^c{}_\mu)\ dx^\mu \wedge dx^\nu. \label{eq:dualcomtable} \end{equation} The structure coefficients represent the curls of the base members: \begin{equation} f^c{}_{a b} = h^c ([h_a, h_b]) = h_a{}^{\mu} h_b{}^{\nu} (\partial_\nu h^c{}_{\mu} - \partial_\mu h^c{}_{\nu} ) = h^c{}_{\mu} [h_a(h_b{}^{\mu}) - h_b(h_a{}^{\mu})]. \label{fcab} \end{equation} If $f^{c}{}_{a b}$ = $0$, then $d h^{a} = 0$ implies the local existence of functions (coordinates) $x^a$ such that $h^{a}$ = $dx^a$. 
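As a toy illustration (our own example, not from the text), the algebraic tetrad relations (\ref{eq:tetradprops1}) and (\ref{eq:tettomet}) can be verified numerically for a simple diagonal tetrad $h^a{}_\mu = \mathrm{diag}(1,a,a,a)$ associated with $g = \mathrm{diag}(1,-a^2,-a^2,-a^2)$:

```python
a = 2.0
# Minkowski tangent-space metric eta_{ab} = diag(+1, -1, -1, -1).
eta = [[0.0] * 4 for _ in range(4)]
for i in range(4):
    eta[i][i] = 1.0 if i == 0 else -1.0

h = [[0.0] * 4 for _ in range(4)]        # h[a][mu] = h^a_mu
h_inv = [[0.0] * 4 for _ in range(4)]    # h_inv[a][mu] = h_a^mu
for i in range(4):
    h[i][i] = 1.0 if i == 0 else a
    h_inv[i][i] = 1.0 if i == 0 else 1.0 / a

# g_{mu nu} = eta_{ab} h^a_mu h^b_nu  (eq. tettomet)
g = [[sum(eta[A][B] * h[A][mu] * h[B][nu]
          for A in range(4) for B in range(4))
      for nu in range(4)] for mu in range(4)]
assert g[0][0] == 1.0 and g[1][1] == -a * a

# h^a_mu h_b^mu = delta^a_b  (eq. tetradprops1)
for A in range(4):
    for B in range(4):
        d = sum(h[A][mu] * h_inv[B][mu] for mu in range(4))
        assert abs(d - (1.0 if A == B else 0.0)) < 1e-12
```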
Nothing really new: the tetrads are gradients only when their curls vanish. \subsection{Connections} In order to define derivatives with a well-defined tensor behavior (that is, which are covariant), it is essential to introduce connections $\Gamma^\lambda{}_{\mu \nu}$, which are vectors in the last index but whose non--tensorial behavior in the first two indices compensates the non--tensoriality of the ordinary derivatives. Linear connections have a great degree of intimacy with spacetime because they are defined on the bundle of linear frames, which is a constitutive part of its manifold structure. That bundle has some properties not found in the bundles related to {\it internal} gauge theories. Mainly, it exhibits soldering, which leads to the existence of torsion for every connection \cite{koba}. Linear connections --- in particular, Lorentz connections --- always have torsion, while internal gauge potentials have not. It is important to remark that, from a formal point of view, curvature and torsion are properties of a connection~\cite{koba}. Strictly speaking, there are no such things as curvature or torsion of spacetime, but only curvature or torsion of a connection. This becomes evident if we notice that many different connections are allowed to exist in the very same spacetime. Of course, when restricted to the specific case of general relativity, where the only connection at work is the Levi--Civita connection, universality of gravitation allows it to be interpreted as part of the spacetime definition. However, in the presence of different connections with different curvatures and torsions, it seems far wiser and convenient to take spacetime simply as a manifold, and connections (with their curvatures and torsions) as additional structures. A spin connection $A_\mu$ is a connection of the form \begin{equation} A_\mu = {\textstyle{\frac{1}{2}}} \, A^{ab}{}_\mu \, S_{ab}, \end{equation} with $S_{ab}=-S_{ba}$ Lorentz generators in a given representation. 
On the other hand, a tetrad field relates internal with external tensors. For example, if $V^a$ is a Lorentz vector, \begin{equation} V^\rho = h_a{}^\rho \, V^a \end{equation} will be a spacetime vector. However, in the specific case of connections, an additional {\it vacuum} term appears when transforming internal to external indices, and vice versa. In fact, a general linear connection $\Gamma^{\rho}{}_{\nu \mu}$ is related to the corresponding spin connection $A^{a}{}_{b \mu}$ through \begin{equation} \Gamma^{\rho}{}_{\nu \mu} = h_{a}{}^{\rho} \partial_{\mu} h^{a}{}_{\nu} + h_{a}{}^{\rho} A^{a}{}_{b \mu} h^{b}{}_{\nu}. \label{geco} \end{equation} The inverse relation is, consequently, \begin{equation} A^{a}{}_{b \mu} = h^{a}{}_{\nu} \partial_{\mu} h_{b}{}^{\nu} + h^{a}{}_{\nu} \Gamma^{\nu}{}_{\rho \mu} h_{b}{}^{\rho}. \label{gsc} \end{equation} Equations (\ref{geco}) and (\ref{gsc}) are simply different ways of expressing the property that the total --- that is, acting on both indices --- covariant derivative of the tetrad vanishes identically: \begin{equation} \partial_{\mu} h^{a}{}_{\nu} - \Gamma^{\rho}{}_{\nu \mu} h^{a}{}_{\rho} + A^{a}{}_{b \mu} h^{b}{}_{\nu} = 0. \label{todete} \end{equation} A connection $\Gamma^\rho{}_{\lambda\mu}$ is said to be metric compatible if \begin{equation} \label{fourm} \partial_\lambda g_{\mu\nu} - \Gamma^\rho{}_{\mu \lambda}g_{\rho\nu} - \Gamma^\rho{}_{\nu \lambda} g_{\mu \rho} = 0. \end{equation} From the tetrad point of view, by using Eqs.~(\ref{geco}) and (\ref{gsc}), this equation can be rewritten in the form \begin{equation} h_\mu (\eta_{ab}) - A^d{}_{a\mu} \, \eta_{db} - A^d{}_{b\mu} \, \eta_{ad} = 0, \end{equation} where $h_\mu = h^a{}_\mu \partial_a$. Since $h_\mu (\eta_{ab}) = 0$, we obtain \begin{equation} A_{ba\mu} = -\, A_{ab\mu}. \end{equation} The underlying content of the metric--preserving property is, therefore, that the spin connection is Lorentzian. 
The curvature and the torsion of the connection $A^{a}{}_{b \mu}$ are defined respectively by \begin{equation} R^a{}_{b \nu \mu} = \partial_{\nu} A^{a}{}_{b \mu} - \partial_{\mu} A^{a}{}_{b \nu} + A^a{}_{e \nu} A^e{}_{b \mu} - A^a{}_{e \mu} A^e{}_{b \nu} \end{equation} and \begin{equation} T^a{}_{\nu \mu} = \partial_{\nu} h^{a}{}_{\mu} - \partial_{\mu} h^{a}{}_{\nu} + A^a{}_{e \nu} h^e{}_{\mu} - A^a{}_{e \mu} h^e{}_{\nu}. \end{equation} Using the relation (\ref{gsc}), they can be expressed in a purely spacetime form: \begin{equation} \label{sixbm} R^\rho{}_{\lambda\nu\mu} \equiv h_a{}^\rho \, h^b{}_\lambda \, R^a{}_{b \nu \mu} = \partial_\nu \Gamma^\rho{}_{\lambda \mu} - \partial_\mu \Gamma^\rho{}_{\lambda \nu} + \Gamma^\rho{}_{\eta \nu} \Gamma^\eta{}_{\lambda \mu} - \Gamma^\rho{}_{\eta \mu} \Gamma^\eta{}_{\lambda \nu} \end{equation} and \begin{equation} T^\rho{}_{\nu \mu} \equiv h_a{}^\rho \, T^a{}_{\nu \mu} = \Gamma^\rho{}_{\mu\nu}-\Gamma^\rho{}_{\nu\mu}. \label{sixam} \end{equation} The connection coefficients can be conveniently decomposed according to\footnote{The magnitudes related with general relativity will be denoted with an over ``$\circ$''.} \begin{equation} \Gamma^\rho{}_{\mu\nu} = {\stackrel{\circ}{\Gamma}}{}^{\rho}{}_{\mu \nu} + K^\rho{}_{\mu\nu}, \label{prela0} \end{equation} where \begin{equation} {\stackrel{\circ}{\Gamma}}{}^{\sigma}{}_{\mu \nu} = {\textstyle \frac{1}{2}} g^{\sigma \rho} \left( \partial_{\mu} g_{\rho \nu} + \partial_{\nu} g_{\rho \mu} - \partial_{\rho} g_{\mu \nu} \right) \label{lci} \end{equation} is the torsionless Levi--Civita connection of general relativity, and \begin{equation} K^\rho{}_{\mu\nu} = {\textstyle \frac{1}{2}} \left(T_\nu{}^\rho{}_\mu + T_\mu{}^\rho{}_\nu - T^\rho{}_{\mu\nu}\right) \label{contor} \end{equation} is the contortion tensor. 
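The Levi--Civita connection (\ref{lci}) and the vanishing of the torsion (\ref{sixam}) for a symmetric connection can be checked numerically in a simple case. The sketch below (our own example) computes the Christoffel symbols of the unit 2-sphere, $g = \mathrm{diag}(1,\sin^2\theta)$, by central differences:

```python
import math

def metric(x):
    th = x[0]
    return [[1.0, 0.0], [0.0, math.sin(th) ** 2]]

def inv_metric(x):
    th = x[0]
    return [[1.0, 0.0], [0.0, 1.0 / math.sin(th) ** 2]]

def d_metric(x, lam, eps=1e-6):
    # Central-difference derivative of g_{ij} along coordinate lam.
    xp, xm = list(x), list(x)
    xp[lam] += eps; xm[lam] -= eps
    gp, gm = metric(xp), metric(xm)
    return [[(gp[i][j] - gm[i][j]) / (2 * eps) for j in range(2)] for i in range(2)]

def christoffel(x):
    # Gamma^s_{mn} = (1/2) g^{sr} (d_m g_{rn} + d_n g_{rm} - d_r g_{mn})
    dg = [d_metric(x, lam) for lam in range(2)]
    ginv = inv_metric(x)
    G = [[[0.0] * 2 for _ in range(2)] for _ in range(2)]
    for s in range(2):
        for m in range(2):
            for n in range(2):
                G[s][m][n] = 0.5 * sum(
                    ginv[s][r] * (dg[m][r][n] + dg[n][r][m] - dg[r][m][n])
                    for r in range(2))
    return G

x = [0.7, 0.3]
G = christoffel(x)
# Known closed forms: Gamma^th_{ph ph} = -sin th cos th, Gamma^ph_{th ph} = cot th.
assert abs(G[0][1][1] + math.sin(x[0]) * math.cos(x[0])) < 1e-6
assert abs(G[1][0][1] - math.cos(x[0]) / math.sin(x[0])) < 1e-6
# Symmetry in the lower indices means the torsion (eq. sixam) vanishes:
for s in range(2):
    for m in range(2):
        for n in range(2):
            assert abs(G[s][m][n] - G[s][n][m]) < 1e-9
```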
In terms of the spin connection, decomposition (\ref{prela0}) assumes the form \begin{equation} A^c{}_{a\nu} = {\stackrel{~\circ}{A}}{}^c{}_{a\nu} + K^c{}_{a\nu}, \label{rela00} \end{equation} where ${\stackrel{~\circ}{A}}{}^c{}_{a \nu}$ is the Ricci coefficient of rotation, the spin connection of general relativity. Now, since the spin connection is a tensor in the last index, we can write \begin{equation} A^a{}_{bc} = A^a{}_{b \mu} \, h_c{}^\mu. \end{equation} It can thus be easily verified that, in the anholonomic basis $h_a$, the curvature and torsion components are given respectively by \begin{equation} \label{13bm} R^a{}_{bcd} = h_c (A^a{}_{bd}) - h_d (A^a{}_{bc}) + A^a{}_{ec} \, A^e{}_{bd} - A^a{}_{ed} \, A^e{}_{bc} + f^e{}_{cd} \, A^a{}_{be} \end{equation} and \begin{equation} T^a{}_{bc} = - f^a{}_{bc} + A^a{}_{cb} - A^a{}_{bc}. \label{13am} \end{equation} Seen from this frame, therefore, torsion includes the anholonomy. Use of (\ref{13am}) for three combinations of the indices gives \begin{equation}% A^{a}{}_{b c} = - {\textstyle{\frac{1}{2}}} (f^{a}{}_{b c} + T^{a}{}_{b c} + f_{b c}{}^{a} + T_{b c}{}^{a} + f_{c b}{}^{a} + T_{c b}{}^{a}). \label{tobetaken2} \end{equation}% When torsion vanishes, as in general relativity, we obtain the usual expression of the Ricci coefficient of rotation in terms of the anholonomy: \begin{equation}% {\stackrel{~\circ}{A}}{}^{a}{}_{b c} = -\, {\textstyle{\frac{1}{2}}} (f^{a}{}_{b c} + f_{b c}{}^{a} + f_{c b}{}^{a}). \label{tobetaken3} \end{equation}% We have now all tools necessary to study the possible roles played by torsion in gravitation. We begin by reviewing, in the next section, the basics of general relativity, Einstein's theory for gravitation. \section{General relativity} Universality of both gravitational and inertial effects was one of the hints taken by Einstein in the way towards his theory. Another clue was the notion of field. 
This concept provides the best approach to interactions consistent with special relativity. All known forces are mediated by fields on spacetime. If a field is to represent gravitation, it should, by the considerations above, be a universal field, equally felt by every particle. And, of all the fields present in a spacetime, the metric appears as the most fundamental. A gravitational field should, therefore, be represented by a metric $g_{\mu \nu}$, with its absence being described by the flat Minkowski metric. The relevant connection in general relativity is the Levi--Civita connection~(\ref{lci}). It is the only Lorentz connection with vanishing torsion. It has, however, non-vanishing curvature, which is related to the presence of a gravitational field. In general relativity, therefore, torsion is chosen to vanish from the very beginning, and has no role in the description of the gravitational interaction. The minimal coupling prescription in this theory amounts to replacing the usual ordinary derivative by a covariant derivative with the Levi--Civita connection. Acting on a spacetime vector field $V^\rho$, for example, it reads \begin{equation} {\stackrel{\circ}{\nabla}}{}_\nu V^\rho = \partial_\nu V^\rho + {\stackrel{\circ}{\Gamma}}{}^\rho{}_{\mu \nu} \, V^\mu. \label{grcp} \end{equation} Acting on a Lorentz vector $V^a$, it is \begin{equation} {\stackrel{\circ}{\mathcal D}}{}_\nu V^a = \partial_\nu V^a + {\stackrel{~\circ}{A}}{}^a{}_{b \nu} \, V^b. \label{grcpbis} \end{equation} The gravitational field is described by the Einstein--Hilbert Lagrangian \begin{equation} {\stackrel{\circ}{\mathcal L}}{} = -\, \frac{\sqrt{-g}}{2 k^2} \; {\stackrel{\circ}{R}}{}, \label{eq:LagGR} \end{equation} where $g = \det(g_{\mu \nu})$, $k^2 = 8 \pi G/c^{4}$ and ${\stackrel{\circ}{R}}{} = g^{\lambda \rho} \, {\stackrel{\circ}{R}}{}^\nu{}_{\lambda \nu \rho}$ is the scalar curvature of the Levi--Civita connection. 
With a matter (source) field represented by ${\mathcal L}{}_m$, the total Lagrangian is \begin{equation} {\mathcal L}{} = {\stackrel{\circ}{\mathcal L}}{} + {\mathcal L}{}_m. \end{equation} Variation of the corresponding action integral with respect to the metric tensor $g^{\mu \nu}$ yields the Einstein equation \begin{equation} {\stackrel{\circ}{R}}{}_{\mu \nu} - {\textstyle{\frac{1}{2}}}\, g_{\mu \nu} \, {\stackrel{\circ}{R}} = k^2 \, \Theta_{\mu \nu}, \label{efe} \end{equation} where $\Theta_{\mu \nu}$ is the symmetric source energy--momentum tensor. On the other hand, the action integral of a particle of mass $m$ in a gravitational field is \begin{equation} {\mathcal S} = -\, m c \int_a^b ds, \end{equation} with $ds = ({g_{\mu \nu} \, dx^\mu dx^\nu})^{1/2}$ the coordinate--independent spacetime line element. The corresponding equation of motion is consistent with the minimal coupling prescription (\ref{grcp}), and is given by the geodesic equation \begin{equation} \frac{d u^\rho}{d s} + {{\stackrel{\circ}{\Gamma}}{}}{}^\rho{}_{\mu \nu} \; u^\mu \; u^\nu = 0. \label{eq:geodesic} \end{equation} This equation says simply that the particle four-acceleration --- its left--hand side --- vanishes. This property reveals the absence of the concept of gravitational {\em force}, a basic characteristic of the geometric description. In fact, instead of acting through a force, the presence of gravitation is supposed to produce a {\em curvature in spacetime}, the gravitational interaction being described by letting (spinless) particles follow the geodesics of the metric field. Notice that no other kind of spacetime deformation is supposed to exist. Torsion, which would be another natural spacetime deformation, is assumed to vanish from the start. This is the approach of general relativity, in which geometry replaces the concept of gravitational force, and the trajectories are determined, not by force equations, but by geodesics.
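The geodesic equation (\ref{eq:geodesic}) can also be checked numerically. The sketch below (a hand-rolled fourth-order Runge--Kutta integrator; the unit 2-sphere is again our illustrative choice) integrates a geodesic that starts on the equator moving along it: the trajectory remains on the equator, a great circle, and the norm $g_{\mu\nu}\, u^\mu u^\nu$ is conserved along the motion.

```python
import math

# Geodesic equation on the unit 2-sphere, with
#   Gamma^theta_{phi phi} = -sin(theta) cos(theta),
#   Gamma^phi_{theta phi} = Gamma^phi_{phi theta} = cos(theta)/sin(theta)
def rhs(y):
    th, ph, uth, uph = y  # (theta, phi, u^theta, u^phi)
    return [uth,
            uph,
            math.sin(th)*math.cos(th)*uph**2,            # du^theta/ds = -Gamma^th_phph (u^ph)^2
            -2.0*(math.cos(th)/math.sin(th))*uth*uph]    # du^phi/ds = -2 Gamma^ph_thph u^th u^ph

def rk4(y, h, steps):
    # classical fourth-order Runge-Kutta integration
    for _ in range(steps):
        k1 = rhs(y)
        k2 = rhs([y[i] + 0.5*h*k1[i] for i in range(4)])
        k3 = rhs([y[i] + 0.5*h*k2[i] for i in range(4)])
        k4 = rhs([y[i] + h*k3[i] for i in range(4)])
        y = [y[i] + h*(k1[i] + 2*k2[i] + 2*k3[i] + k4[i])/6.0 for i in range(4)]
    return y

# start on the equator (theta = pi/2), moving along it with unit speed
th, ph, uth, uph = rk4([math.pi/2, 0.0, 0.0, 1.0], 1.0e-3, 1000)
norm = uth**2 + math.sin(th)**2*uph**2   # g_{mu nu} u^mu u^nu
```

After unit affine parameter the particle sits at $\theta = \pi/2$, $\phi = 1$, with the velocity norm still equal to one, as expected for autoparallel motion.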
It is important to notice that only an interaction presenting the property of universality can be described by a geometrization of spacetime. It is also important to mention that, should universality fail,\footnote{Although the weak equivalence principle has passed all experimental tests and seems to be true at the classical level \cite{exp}, there is compelling evidence that it might not be valid at the quantum level \cite{vaxjo05}.} the general relativity description of gravitation would break down. \section{Teleparallel gravity} The first attempt to unify gravitation and electromagnetism was made by H.~Weyl in 1918 \cite{oft}. That proposal, though unsuccessful, introduced for the first time the notions of {\em gauge transformations} and {\em gauge invariance}, and was the seed which has grown into today's gauge theories. Ten years after the introduction of torsion by E. Cartan in 1923, a second unification attempt was made by Einstein. It was based on the concept of distant (or absolute) parallelism, also referred to as teleparallelism. The crucial idea was the introduction of a tetrad field. Since the specification of a tetrad involves sixteen components, and the gravitational field, represented by the spacetime metric, requires only ten, the six additional degrees of freedom were related by Einstein to the electromagnetic field \cite{sauer}. This attempt did not succeed either but, like Weyl's, introduced ideas that remain important to this day. In fact, teleparallel gravity can be considered today a viable theory for gravitation \cite{review}. It can be interpreted as a gauge theory for the translation group: the fundamental field is the gauge potential $B_{\mu}$, a field assuming values in the Lie algebra of the translation group, \begin{equation} B_{\mu} = B^{a}{}_{\mu} \, P_a, \label{B} \end{equation} where $P_{a} = \partial /\partial x^a$ are the translation generators.
It appears naturally as the nontrivial part of the tetrad field: \begin{equation} h^{a}{}_{\mu} = \partial_{\mu}x^{a} + B^{a}{}_{\mu}. \label{tetrada} \end{equation} The fundamental connection of teleparallel gravity is the Weit\-zen\-b\"ock connection\footnote{It should be remarked that R. Weitzenb\"ock never actually wrote down such a connection. Nevertheless, this name has been commonly used to denote a particular Cartan connection with vanishing curvature.} which, in terms of the tetrad, is written as\footnote{Quantities related to teleparallel gravity will be denoted with a ``$\bullet$'' over the symbol.} \begin{equation} {\stackrel{\bullet}{\Gamma}}{}^{\rho}{}_{\mu\nu} = h_a{}^\rho \, \partial_\nu h^a{}_\mu. \label{wcon} \end{equation} In contrast to Levi--Civita, it is a connection with non-vanishing torsion, but vanishing curvature. The Weitzenb\"ock and Levi--Civita connections are related by \begin{equation} {\stackrel{\bullet}{\Gamma}}{}^{\rho}{}_{\mu\nu} = {\stackrel{\circ}{\Gamma}}{}^{\rho}{}_{\mu\nu} + {\stackrel{\bullet}{K}}{}^{\rho}{}_{\mu\nu}, \label{rela0} \end{equation} where \begin{equation} {\stackrel{\bullet}{K}}{}^{\rho}{}_{\mu\nu} = {\textstyle{\frac{1}{2}}} \, ({\stackrel{\bullet}{T}}{}_\mu{}^\rho{}_\nu + {\stackrel{\bullet}{T}}{}_\nu{}^\rho{}_\mu - {\stackrel{\bullet}{T}}{}^{\rho}{}_{\mu\nu}) \end{equation} is the contortion tensor, with \begin{equation} {\stackrel{\bullet}{T}}{}^{\rho}{}_{\mu\nu} = {\stackrel{\bullet}{\Gamma}}{}^{\rho}{}_{\nu\mu} - {\stackrel{\bullet}{\Gamma}}{}^{\rho}{}_{\mu\nu} \end{equation} the torsion of the Weitzenb\"ock connection. The coupling prescription in teleparallel gravity is obtained by requiring consistency with the general covariance principle, an active version of the strong equivalence principle \cite{weinberg}. It follows that it is actually equivalent to that of general relativity \cite{mospe}.
Acting on a spacetime vector field $V^\rho$, for example, it is given by \begin{equation} {\stackrel{\bullet}{\nabla}}{}_\nu V^\rho = \partial_\nu V^\rho + ({\stackrel{\bullet}{\Gamma}}{}^\rho{}_{\mu \nu} - {\stackrel{\bullet}{K}}{}^\rho{}_{\mu \nu}) \, V^\mu. \label{tgcp} \end{equation} Since, as a consequence of definition (\ref{wcon}), the Weitzenb\"ock spin connection vanishes identically, \begin{equation} {\stackrel{~\bullet}{A}}{}^{a}{}_{b \mu} = h^{a}{}_{\nu} \partial_{\mu} h_{b}{}^{\nu} + h^{a}{}_{\nu} {\stackrel{\bullet}{\Gamma}}{}^{\nu}{}_{\rho \mu} h_{b}{}^{\rho} = 0, \end{equation} the corresponding covariant derivative of a Lorentz vector $V^a$ is \cite{equivcova} \begin{equation} {\stackrel{\bullet}{\mathcal D}}{}_\nu V^a = \partial_\nu V^a + (0 - {\stackrel{\bullet}{K}}{}^a{}_{b \nu}) \, V^b. \label{tgcpbis} \end{equation} Note that the covariant derivatives (\ref{tgcp}) and (\ref{tgcpbis}) are the Levi--Civita derivatives rephrased in terms of the Weitzenb\"ock connection. The Lagrangian of the teleparallel equivalent of general relativity is \begin{equation} {\stackrel{\bullet}{\mathcal L}}{} = \frac{h}{2 k^2} \; \left[\frac{1}{4} \; {\stackrel{\bullet}{T}}{}^\rho{}_{\mu \nu} \; {\stackrel{\bullet}{T}}{}_\rho{}^{\mu \nu} + \frac{1}{2} \; {\stackrel{\bullet}{T}}{}^\rho{}_{\mu \nu} \; {\stackrel{\bullet}{T}}{}^{\nu \mu} {}_\rho - {\stackrel{\bullet}{T}}{}_{\rho \mu}{}^{\rho} \; {\stackrel{\bullet}{T}}{}^{\nu \mu}{}_\nu \right], \label{lagr3} \end{equation} where $h = \det (h^a{}_\mu)$. The first term corresponds to the usual Lagrangian of gauge theories. However, owing to the presence of tetrad fields, algebra and spacetime indices can here be changed into each other, and new contractions turn out to be possible. It is exactly this possibility that gives rise to the other two terms. 
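Relation (\ref{rela0}) can be verified symbolically in a toy example. In the sketch below (sympy; the tetrad is the orthonormal polar frame of the flat plane, an illustrative choice of ours carrying no gravitational field, used only because it produces a non-vanishing Weitzenb\"ock torsion), the Weitzenb\"ock connection (\ref{wcon}), its torsion, the contortion, and the Levi--Civita connection of the induced metric are computed and compared component by component.

```python
import sympy as sp

r, ph = sp.symbols('r phi', positive=True)
x = [r, ph]
n = 2

# nonholonomic orthonormal frame of the flat plane: h^1 = dr, h^2 = r dphi
h = sp.Matrix([[1, 0], [0, r]])   # h^a_mu  (rows: a, columns: mu)
hinv = h.inv()                    # h_a^mu  (rows: mu, columns: a)
g = h.T * h                       # induced metric g_{mu nu} = diag(1, r^2)
ginv = g.inv()

# Weitzenbock connection: Gamma^rho_{mu nu} = h_a^rho d_nu h^a_mu
W = [[[sp.simplify(sum(hinv[rho, a]*sp.diff(h[a, mu], x[nu]) for a in range(n)))
       for nu in range(n)] for mu in range(n)] for rho in range(n)]

# its torsion: T^rho_{mu nu} = Gamma^rho_{nu mu} - Gamma^rho_{mu nu}
T = [[[sp.simplify(W[rho][nu][mu] - W[rho][mu][nu]) for nu in range(n)]
      for mu in range(n)] for rho in range(n)]

# Levi-Civita connection of the induced metric g
LC = [[[sp.simplify(sp.Rational(1, 2)*sum(
        ginv[rho, s]*(sp.diff(g[s, mu], x[nu]) + sp.diff(g[s, nu], x[mu])
                      - sp.diff(g[mu, nu], x[s])) for s in range(n)))
        for nu in range(n)] for mu in range(n)] for rho in range(n)]

# contortion: K^rho_{mu nu} = (T_mu^rho_nu + T_nu^rho_mu - T^rho_{mu nu})/2
def T_mixed(mu, rho, nu):   # T_mu^rho_nu = g_{mu a} g^{rho b} T^a_{b nu}
    return sum(g[mu, a]*ginv[rho, b]*T[a][b][nu] for a in range(n) for b in range(n))

K = [[[sp.simplify((T_mixed(mu, rho, nu) + T_mixed(nu, rho, mu) - T[rho][mu][nu])/2)
      for nu in range(n)] for mu in range(n)] for rho in range(n)]
```

In this example only $\Gamma^\phi{}_{\phi r} = 1/r$ survives in (\ref{wcon}), giving the torsion component $T^\phi{}_{r\phi} = 1/r$, and the sum of the Levi--Civita connection and the contortion reproduces the Weitzenb\"ock connection exactly, as relation (\ref{rela0}) demands.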
If we define the tensor \begin{equation} {\stackrel{\bullet}{S}}{}^{\rho\mu\nu} = -\, {\stackrel{\bullet}{S}}{}^{\rho\nu\mu} = \left[ {\stackrel{\bullet}{K}}{}^{\mu\nu\rho} - g^{\rho\nu}\,{\stackrel{\bullet}{T}}{}^{\sigma\mu}{}_{\sigma} + g^{\rho\mu}\,{\stackrel{\bullet}{T}}{}^{\sigma\nu}{}_{\sigma} \right], \label{S} \end{equation} the teleparallel Lagrangian (\ref{lagr3}) can be rewritten as \cite{maluf94} \begin{equation} {\stackrel{\bullet}{\mathcal L}}{} = \frac{h}{4 k^2} \; {\stackrel{\bullet}{S}}{}^{\rho\mu\nu} \, {\stackrel{\bullet}{T}}{}_{\rho\mu\nu}. \label{gala} \end{equation} Using relation (\ref{rela0}), it is easy to show that \begin{equation} {\stackrel{\bullet}{\mathcal L}}{} = {\stackrel{\circ}{\mathcal L}}{} - \partial_\mu \left(2 \, h \, k^{-2} \, {\stackrel{\bullet}{T}}{}^{\nu \mu}{}_\nu \right), \end{equation} where ${\stackrel{\circ}{\mathcal L}}{}$ is the Einstein--Hilbert Lagrangian (\ref{eq:LagGR}), and where we have used $h = \sqrt{-g}$. Up to a divergence, therefore, the teleparallel Lagrangian is equivalent to the Lagrangian of general relativity. Let us consider now \begin{equation} {\mathcal L} = {\stackrel{\bullet}{\mathcal L}}{} + {\mathcal L}_m, \end{equation} with ${\mathcal L}_m$ the Lagrangian of a general matter field. Variation of the corresponding action integral with respect to the gauge field $B^a{}_\rho$ leads to the teleparallel version of the gravitational field equation \begin{equation} \partial_\sigma(h\, {\stackrel{\bullet}{S}}{}_\mu{}^{\rho \sigma}) - k^2 \, (h\, {\stackrel{\bullet}{t}}{}_{\mu}{}^{\rho}) = k^2 \, (h\, {\Theta}_{\mu}{}^{\rho}), \label{tfe1} \end{equation} where ${\stackrel{\bullet}{t}}{}_{\mu}{}^{\rho}$ represents the energy--momentum pseudotensor of the gravitational field~\cite{gemt}, and ${\Theta}_{\mu}{}^{\rho}$ is the symmetric source energy--momentum tensor. 
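As a numerical sanity check of the rewriting (\ref{gala}), the sketch below (numpy; the Minkowski metric and our reading of $K^{\mu\nu\rho}$ as the fully raised contortion, $K^{\mu\nu\rho} = \frac{1}{2}(T^{\nu\mu\rho} + T^{\rho\mu\nu} - T^{\mu\nu\rho})$, are assumptions of this sketch) builds the tensor (\ref{S}) from a random tensor with the symmetries of torsion and verifies both its antisymmetry and that $\frac{1}{4}\,S^{\rho\mu\nu} T_{\rho\mu\nu}$ equals one half of the bracket in (\ref{lagr3}), exactly compensating the different prefactors $h/4k^2$ and $h/2k^2$ of the two Lagrangians.

```python
import numpy as np

rng = np.random.default_rng(0)
eta = np.diag([1.0, -1.0, -1.0, -1.0])   # Minkowski metric, equal to its own inverse

# random tensor with the symmetry of torsion: antisymmetric in the last two indices
R = rng.normal(size=(4, 4, 4))
T_d = R - R.transpose(0, 2, 1)                              # T_{rho mu nu}
T_u = np.einsum('ra,mb,nc,abc->rmn', eta, eta, eta, T_d)    # T^{rho mu nu}

# fully raised contortion: K^{mu nu rho} = (T^{nu mu rho} + T^{rho mu nu} - T^{mu nu rho})/2
K = 0.5*(np.einsum('nmr->mnr', T_u) + np.einsum('rmn->mnr', T_u) - T_u)

# superpotential (S): S^{rho mu nu} = K^{mu nu rho} - g^{rho nu} T^{sig mu}_sig + g^{rho mu} T^{sig nu}_sig
w = np.einsum('sma,as->m', T_u, eta)                        # T^{sigma mu}_{sigma}
S = (np.einsum('mnr->rmn', K)
     - np.einsum('rn,m->rmn', eta, w)
     + np.einsum('rm,n->rmn', eta, w))

# the three quadratic torsion invariants appearing in (lagr3)
A = np.einsum('rmn,rmn->', T_u, T_d)   # T^{rho mu nu} T_{rho mu nu}
B = np.einsum('rmn,nmr->', T_u, T_d)   # T^{rho mu nu} T_{nu mu rho}
C = np.einsum('m,m->', np.einsum('rma,ar->m', T_d, eta), w)  # T_{rho mu}^rho T^{nu mu}_nu
```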
Using relation (\ref{rela0}), the left-hand side of the field equation (\ref{tfe1}) can be shown to be \begin{equation} \partial_\sigma(h {\stackrel{\bullet}{S}}{}_\mu{}^{\rho \sigma}) - k^2 \, (h {\stackrel{\bullet}{t}}{}_{\mu}{}^{\rho}) = h \left({\stackrel{\circ}{R}}_\mu{}^{\rho} - {\textstyle{\frac{1}{2}}} \, \delta_\mu{}^{\rho} \; {\stackrel{\circ}{R}} \right). \label{ident} \end{equation} This means that, as expected due to the equivalence between the cor\-re\-sponding Lagrangians, the teleparallel field equation (\ref{tfe1}) is equivalent to Einstein field equation (\ref{efe}). Observe that the symmetric energy--momentum tensor appears as the source in both theories: as the source of curvature in general relativity, and as the source of torsion in teleparallel gravity. In teleparallel gravity, the action describing a particle of mass $m$ in a gravitational field $B^a{}_\mu$ is given by \begin{equation} {\mathcal S} = -\, m \, c \int_{a}^{b} \left[u_a \, dx^a + B^{a}{}_{\mu} \, u_{a} \, dx^{\mu} \right], \label{acaopuni0} \end{equation} where $u^a = h^a{}_\mu \, u^\mu$ is the anholonomic particle four--velocity. The first term represents the action of a free particle, and the second the coupling of the particle's mass with the gravitational field. The corresponding equation of motion is consistent with the coupling prescription (\ref{tgcp}), and is given by the {\it force equation}~\cite{paper1} \begin{equation} \frac{d u_\mu}{d s} - {\stackrel{\bullet}{\Gamma}}{}^\theta{}_{\mu \nu} \; u_\theta \; u^\nu = {\stackrel{\bullet}{T}}{}^\theta{}_{\mu \nu} \; u_\theta \, u^\nu, \label{forceq} \end{equation} with torsion playing the role of {\it gravitational force}. It is similar to the Lorentz force e\-qua\-tion of electrodynamics, a property related to the fact that teleparallel gravity is, like Maxwell's theory, a gauge theory. 
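That the force equation (\ref{forceq}) reproduces geodesic motion can be illustrated numerically. In the sketch below (plain Python; the tetrad is the orthonormal polar frame of the flat plane, our trivial-gravity example chosen only for its non-vanishing Weitzenb\"ock torsion), integrating (\ref{forceq}) from a point on the unit circle with purely tangential velocity recovers a straight line, the flat-space geodesic.

```python
import math

# Orthonormal polar frame of the flat plane, h^1 = dr, h^2 = r dphi:
# only Gamma^phi_{phi r} = 1/r survives in (wcon), with torsion
# T^phi_{r phi} = -T^phi_{phi r} = 1/r.  For the covariant velocity (u_r, u_phi)
# the force equation (forceq) then reduces to
#   du_r/ds   = T^phi_{r phi} u_phi u^phi = u_phi^2 / r^3,
#   du_phi/ds = (1/r) u_phi u_r - (1/r) u_phi u_r = 0  (connection and force cancel).
def rhs(y):
    r, ph, ur, uph = y
    return [ur,            # dr/ds   = u^r   = u_r
            uph/r**2,      # dphi/ds = u^phi = u_phi / r^2
            uph**2/r**3,
            0.0]

def rk4(y, h, steps):
    # classical fourth-order Runge-Kutta integration
    for _ in range(steps):
        k1 = rhs(y)
        k2 = rhs([y[i] + 0.5*h*k1[i] for i in range(4)])
        k3 = rhs([y[i] + 0.5*h*k2[i] for i in range(4)])
        k4 = rhs([y[i] + h*k3[i] for i in range(4)])
        y = [y[i] + h*(k1[i] + 2*k2[i] + 2*k3[i] + k4[i])/6.0 for i in range(4)]
    return y

# start at (r, phi) = (1, 0) moving in the y-direction: the geodesic is the line x = 1
r, ph, ur, uph = rk4([1.0, 0.0, 0.0, 1.0], 1.0e-3, 1000)
```

After unit parameter the integrated trajectory reaches $r = \sqrt{2}$, $\phi = \pi/4$, i.e.\ the Cartesian point $(1, 1)$ on the straight line $x = 1$, with $u_\phi$ conserved; the cancellation in $du_\phi/ds$ is the numerical counterpart of the statement that torsion and the Weitzenb\"ock connection combine to reproduce the Levi--Civita contribution.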
By using relation (\ref{rela0}), the force equation (\ref{forceq}) can be rewritten in terms of the Levi--Civita connection, in which case it reduces to the geodesic equation (\ref{eq:geodesic}). The force equation (\ref{forceq}) of teleparallel gravity and the geodesic equation (\ref{eq:geodesic}) of general relativity describe, therefore, the same physical trajectory. This means that the gravitational interaction has two {\em equivalent} descriptions: one in terms of curvature, and another in terms of torsion. Although equivalent, these two descriptions differ conceptually. In general relativity, a theory based on the weak equivalence principle, curvature is used to {\it geometrize} the gravitational interaction. In teleparallel gravity, on the other hand, torsion accounts for gravitation, not by geometrizing the interaction, but by acting as a {\it force}. As in Maxwell's theory, there are no geodesics in teleparallel gravity, only force equations. An important property of teleparallel gravity is that, due to its gauge structure, it does not require the weak equivalence principle to describe the gravitational interaction \cite{wep}. To understand this point, let us consider a particle with inertial mass $m_i$ and gravitational mass $m_g$. In this case, the teleparallel action is written as \begin{equation} {\mathcal S} = - \, m_i \, c \int_{a}^{b} \left[u_a \, d{x}^a + \frac{m_g}{m_i} \, B^{a}{}_{\mu} \, u_{a} \, dx^{\mu} \right]. \label{acaopuni} \end{equation} Observe the similarity with the action \begin{equation} {\mathcal S} = - \, m_i \, c \int_{a}^{b} \left[u_a \, d{x}^a + \frac{q}{m_i} \, A_{a} \, dx^{a} \right], \label{acaopuni2} \end{equation} which describes a particle with mass $m_i$ and electric charge $q$ in an electromagnetic field $A_a$. We see from these expressions that the electric charge $q$ plays the same role as the gravitational mass $m_g$.
Variation of (\ref{acaopuni}) yields \begin{equation} P^\rho{}_\mu \left(\partial_\rho {x}^a + \frac{m_g}{m_i} \, {B}^a{}_\rho \right) \frac{d u_a}{d s} = \frac{m_g}{m_i} \; {\stackrel{\bullet}{T}}{}^a{}_{\mu \rho} \; u_a \, u^\rho, \label{eqmot3bis} \end{equation} with $P^\rho{}_\mu = \delta^\rho_\mu - u^\rho \, u_\mu$ a projection tensor. This is the equation of motion for particles with $m_g \neq m_i$ in the presence of gravitation. For $m_i = m_g$, it reduces to the teleparallel force equation (\ref{forceq}), which in turn is equivalent to the geodesic equation (\ref{eq:geodesic}) of general relativity. It is, however, impossible to obtain this kind of equation in the context of general relativity, which is simply not defined in the absence of universality. In other words, whereas the geometrical description of general relativity breaks down, the gauge description of teleparallel gravity remains valid in the absence of universality.\footnote{Unlike general relativity, both teleparallel and Newtonian gravity are able to manage without the weak equivalence principle. Furthermore, since these two theories describe the gravitational interaction by a force equation, the Newtonian limit is found to follow much more naturally from teleparallel gravity than from general relativity.} This is a very important issue because, even though the equivalence principle has passed many experimental tests, there are still controversies related to its validity \cite{synge}, mainly at the quantum level \cite{quantu}. One may wonder why gravitation has two equivalent descriptions. This duplicity is related precisely to that peculiar property of gravitation, universality. As remarked above, gravitation can be described in terms of a gauge theory --- just teleparallel gravity. Universality of free fall, on the other hand, makes possible a second, geometrized description, based on the weak equivalence principle --- just general relativity.
As the sole universal interaction, gravitation is the only one that also allows a geometrical interpretation, and hence two alternative descriptions. From this point of view, curvature and torsion are simply alternative ways of describing the gravitational field \cite{aap}, and consequently related to the same degrees of freedom. The gravitational interaction can thus be described {\em alternatively} in terms of curvature, as is usually done in general relativity, or in terms of torsion, in which case we have the so-called teleparallel gravity. Accordingly, we can say that, from the point of view of teleparallel gravity, Einstein was right when he did not include torsion in general relativity. \section{Einstein--Cartan theory} The main idea behind the Einstein--Cartan construction \cite{ect} is that, at the microscopic level, matter is represented by elementary particles, which in turn are characterized by mass (that is, energy and momentum) and spin. If one adopts the same {\it geometrical spirit of general relativity}, not only mass but also spin should be a source of gravitation at this level. According to this scheme, energy--momentum should appear as the source of curvature, whereas spin should appear as the source of torsion. The relevant connection of this theory is a general Cartan connection $\Gamma^\rho{}_{\mu \nu}$, presenting both curvature and torsion. Similarly to general relativity, the Lagrangian of the gravitational field in Einstein--Cartan theory is \begin{equation} {\mathcal L}_{EC} = -\, \frac{\sqrt{-g}}{2 k^2} \; R. \end{equation} Observe that, although it formally coincides with the Einstein--Hilbert Lagrangian, the scalar curvature refers now to the general Cartan connection.
Considering then the Lagrangian \begin{equation} {\mathcal L} = {\mathcal L}_{EC} + {\mathcal L}_m, \end{equation} with ${\mathcal L}_m$ the Lagrangian of a general matter field, the ensuing field equations are obtained through variations with respect to the metric $g^{\mu \nu}$ and to the torsion $T_\rho{}^{\mu \nu}$. The results are \begin{equation} R_{\mu \nu} - {\textstyle{\frac{1}{2}}}\, g_{\mu \nu} R = k^2 \, \theta_{\mu \nu} \end{equation} and \begin{equation} T^\rho{}_{\mu \nu} = k^2 \left( S^\rho{}_{\mu \nu} + {\textstyle{\frac{1}{2}}}\, \delta^\rho{}_\mu \, S^\alpha{}_{\alpha \nu} - {\textstyle{\frac{1}{2}}}\, \delta^\rho{}_\nu \, S^\alpha{}_{\alpha \mu} \right), \end{equation} where $\theta_{\mu \nu}$ is the {\it canonical} energy--momentum tensor of the source, which is related to the symmetric energy--momentum tensor $\Theta_{\mu \nu}$ through the Belinfante--Rosenfeld procedure \cite{br}, and $S^\rho{}_{\mu \nu}$ is the spin tensor. An emblematic property of Einstein--Cartan theory is that the field equation for torsion is an algebraic equation, and consequently torsion is a non-propagating field. In spite of this peculiarity, this theory can be considered as a paradigm of more general gravitational models --- like gauge theories for the Poincar\'e \cite{kibble} and the affine groups \cite{hcmn} --- in the sense that all these models presuppose new physics associated with torsion. In other words, curvature and torsion in these theories represent independent gravitational degrees of freedom. We can then say that, from the point of view of the Einstein--Cartan theory, as well as of the more general gauge theories for gravitation, Einstein made a mistake by neglecting torsion. The coupling prescription in Einstein--Cartan theory is usually assumed to be given by the covariant derivative in terms of the connection $\Gamma^\rho{}_{\mu \nu}$.
Acting on a spacetime vector field $V^\rho$, for example, it reads \begin{equation} \nabla_\nu V^\rho = \partial_\nu V^\rho + \Gamma^\rho{}_{\mu \nu} \, V^\mu, \label{eccp} \end{equation} whereas acting on a Lorentz vector $V^a$, it is \begin{equation} {\mathcal D}_\nu V^a = \partial_\nu V^a + A^a{}_{b \nu} \, V^b. \end{equation} Now, in this theory, the equation of motion of particles is usually obtained by considering the generalized matter energy--momentum covariant conservation law, integrating over a space-like section of the world tube of the particle, and expanding the gravitational field in power series~\cite{Pap51}. For spinning particles, in addition to the usual Papapetrou coupling between the particle's spin and the Riemann tensor, there will appear in the equation of motion a coupling between spin and torsion. For spinless particles, it reduces to the geodesic equation (\ref{eq:geodesic}). Unlike in general relativity and teleparallel gravity, therefore, where the equations of motion of spinless particles are obtained by replacing the ordinary differential by the corresponding covariant differential, the equation of motion for such particles in Einstein--Cartan theory does not follow from the minimal coupling prescription. To a certain extent, and considering the crucial role played by the minimal coupling prescription in the description of the fundamental interactions, this point can be considered a drawback of the Einstein--Cartan model. Furthermore, the coupling prescription (\ref{eccp}) presents some additional problems: it is not consistent \cite{ap0} with the general covariance principle \cite{weinberg} --- an active version of the usual (passive) strong equivalence principle --- and when applied to describe the interaction of the electromagnetic field with gravitation, it violates the U(1) gauge invariance of Maxwell's theory.
\section{New general relativity} As already remarked, the teleparallel structure was used by Einstein in his unsuccessful attempt to unify gravitation and electromagnetism. In the sixties, M{\o}ller~\cite{moller} revived the idea of teleparallelism, this time with the sole purpose of describing gravitation. Afterwards, Pellegrini \& Plebanski~\cite{pelle} found a Lagrangian formulation for teleparallel gravity, a problem that was reconsidered later by M{\o}ller~\cite{moller2}. In 1967, Hayashi \& Nakano~\cite{haya} formulated a gauge model for the translation group. A few years later, Hayashi~\cite{hay77} pointed out the connection between that theory and teleparallelism, and an attempt to unify these two developments was made by Hayashi \& Shirafuji~\cite{hs79} in 1979. In this approach, general relativity --- or its teleparallel equivalent --- is supplemented with a generalized teleparallel gravity, a theory that involves only torsion, and presents three free parameters, to be determined by experiment. Like in the teleparallel equivalent of general relativity, the relevant connection of new general relativity is the Weitzenb\"ock connection (\ref{wcon}). The coupling prescription, however, is assumed to be given by a covariant derivative in terms of the Weitzenb\"ock connection: \begin{equation} {\stackrel{\bullet}{D}}{}_\nu V^\rho = \partial_\nu V^\rho + {\stackrel{\bullet}{\Gamma}}{}^\rho{}_{\mu \nu} \, V^\mu. \label{ngrcp} \end{equation} Since the Weitzenb\"ock spin connection vanishes identically, ${\stackrel{~\bullet}{A}}{}^a{}_{b \nu} = 0$, the corresponding covariant derivative of a Lorentz vector $V^a$ will coincide with an ordinary derivative \cite{hs79}: \begin{equation} {\stackrel{\bullet}{D}}{}_\nu V^a = \partial_\nu V^a.
\end{equation} Since, as in Einstein--Cartan theory, the equation of motion of spinless particles in new general relativity is the geodesic equation (\ref{eq:geodesic}), here too there is an inconsistency between the coupling prescription and the equation of motion of spinless particles. The Lagrangian of the gravitational field in new general relativity has the form \begin{equation} {\mathcal L}_{ngr} = \frac{h}{2 k^2} \; \left[a_1 \, {\stackrel{\bullet}{T}}{}^\rho{}_{\mu \nu} \; {\stackrel{\bullet}{T}}{}_\rho{}^{\mu \nu} + a_2 \, {\stackrel{\bullet}{T}}{}^\rho{}_{\mu \nu} \; {\stackrel{\bullet}{T}}{}^{\nu \mu} {}_\rho + a_3 \, {\stackrel{\bullet}{T}}{}_{\rho \mu}{}^{\rho} \; {\stackrel{\bullet}{T}}{}^{\nu \mu}{}_\nu \right], \label{ngrlag0} \end{equation} with $a_1, a_2, a_3$ arbitrary coefficients. Now, as is well known, torsion can be decomposed into irreducible components under the global Lorentz group \cite{hb73}: \begin{equation} {\stackrel{\bullet}{T}}{}_{\lambda \mu \nu} = \textstyle{\frac{2}{3}} \left(t_{\lambda \mu \nu} - t_{\lambda \nu \mu} \right) + \frac{1}{3} \left(g_{\lambda \mu} v_\nu - g_{\lambda \nu} v_\mu \right) + \epsilon_{\lambda \mu \nu \rho} \, a^\rho. \label{deco} \end{equation} In this expression, $v_\mu$ and $a^\rho$ are the vector and axial parts of torsion, defined respectively by \begin{equation} v_{\mu} = {\stackrel{\bullet}{T}}{}^{\nu}{}_{\nu \mu} \quad \mbox{and} \quad a^{\mu} = \textstyle{\frac{1}{6}} \epsilon^{\mu\nu\rho\sigma} \, {\stackrel{\bullet}{T}}{}_{\nu\rho\sigma}, \label{pt3} \end{equation} whereas $t_{\lambda \mu \nu}$ is the purely tensor part, given by \begin{equation} t_{\lambda \mu \nu} = \textstyle{\frac{1}{2}} \left({\stackrel{\bullet}{T}}{}_{\lambda \mu \nu} + {\stackrel{\bullet}{T}}{}_{\mu\lambda \nu} \right) + \frac{1}{6} \left(g_{\nu \lambda} v_\mu + g_{\nu \mu} v_\lambda \right) - \frac{1}{3} g_{\lambda \mu} \, v_\nu.
\label{pt1} \end{equation} In terms of these components, the above Lagrangian reads \begin{equation} {\mathcal L}_{ngr} = \frac{h}{2 k^2} \; \left[b_1 \, t^\rho{}_{\mu \nu} \, t_\rho{}^{\mu \nu} + b_2 \, v^\mu \, v_\mu + b_3 \, a^\mu \, a_\mu \right], \label{ngrlag1} \end{equation} with $b_1, b_2, b_3$ new arbitrary coefficients. Considering then the identity \begin{equation} \textstyle \frac{2}{3} \, t^\rho{}_{\mu \nu} \, t_\rho{}^{\mu \nu} + \frac{2}{3} \, v^\mu \, v_\mu - \frac{3}{2} \, a^\mu \, a_\mu = {\stackrel{\circ}{R}}{}, \label{iden3} \end{equation} it can be rewritten in the form \begin{equation} {\mathcal L}_{ngr} = \frac{h}{2 k^2} \; \left[ {\stackrel{\circ}{R}}{} + c_1 \, t^\rho{}_{\mu \nu} \, t_\rho{}^{\mu \nu} + c_2 \, v^\mu \, v_\mu + c_3 \, a^\mu \, a_\mu \right], \label{ngrlag2} \end{equation} with \begin{equation} \textstyle c_1 = b_1 - \frac{2}{3}, \quad c_2 = b_2 - \frac{2}{3}, \quad c_3 = b_3 + \frac{3}{2}. \end{equation} According to this theory, therefore, torsion is assumed to produce deviations from the predictions of general relativity --- or equivalently, from the predictions of the teleparallel equivalent of general relativity. This means that, similarly to Einstein--Cartan theory, torsion represents additional degrees of freedom of gravity. Also from the point of view of new general relativity, therefore, Einstein made a mistake by neglecting torsion. It should be remarked that solar system experiments severely restrict the existence of non-vanishing $c_1$ and $c_2$. Furthermore, as already shown in the literature \cite{op}, the Schwarzschild solution exists only for the case with $c_1 = c_2 = c_3 = 0$. In principle, therefore, we can say that new general relativity lacks experimental support. There has, however, recently been a proposal to look for possible effects produced by a non--vanishing $c_3$ using the Gravity Probe B data \cite{mao}.
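The irreducible decomposition (\ref{deco}), with the definitions (\ref{pt3}) and (\ref{pt1}), can be checked numerically. The sketch below (numpy; the Minkowski metric and the convention $\epsilon_{0123} = +1$, with the contravariant symbol obtained by raising the indices with the metric, are our assumptions) reconstructs a random torsion-like tensor from its vector, axial, and purely tensor parts.

```python
import numpy as np
from itertools import permutations

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # Minkowski metric, equal to its own inverse

# Levi-Civita symbol with eps_{0123} = +1; contravariant version raised with the metric
def perm_sign(p):
    s = 1
    for i in range(4):
        for j in range(i + 1, 4):
            if p[i] > p[j]:
                s = -s
    return s

eps = np.zeros((4, 4, 4, 4))
for p in permutations(range(4)):
    eps[p] = perm_sign(p)
eps_up = np.einsum('ma,nb,rc,sd,abcd->mnrs', eta, eta, eta, eta, eps)

# random tensor with the symmetry of torsion (antisymmetric in the last two indices)
rng = np.random.default_rng(0)
R = rng.normal(size=(4, 4, 4))
T = R - R.transpose(0, 2, 1)               # T_{lam mu nu}

# irreducible parts according to (pt3) and (pt1)
v = np.einsum('na,anm->m', eta, T)                     # v_mu  = T^nu_{nu mu}
a = np.einsum('mnrs,nrs->m', eps_up, T)/6.0            # a^mu  = eps^{mu nu rho sig} T_{nu rho sig}/6
t = (0.5*(T + T.transpose(1, 0, 2))
     + (np.einsum('nl,m->lmn', eta, v) + np.einsum('nm,l->lmn', eta, v))/6.0
     - np.einsum('lm,n->lmn', eta, v)/3.0)             # t_{lam mu nu}

# reassemble the torsion according to (deco)
T_rec = ((2.0/3.0)*(t - t.transpose(0, 2, 1))
         + (np.einsum('lm,n->lmn', eta, v) - np.einsum('ln,m->lmn', eta, v))/3.0
         + np.einsum('lmnr,r->lmn', eps, a))
```

Note that the Lorentzian signature matters here: raising the indices of the Levi--Civita symbol brings in a factor $-1$ from the determinant of the metric, without which the axial term would come back with the wrong sign.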
The idea behind this proposal lies in the fact that the axial torsion $a^\mu$, which represents the gravitomagnetic component of the gravitational field \cite{apz}, is responsible for producing the Lense--Thirring effect, which is one of the effects Gravity Probe B was intended to detect. \section{Final remarks} In general relativity, curvature represents the gravitational field. In teleparallel gravity, it is torsion that represents the gravitational field. In spite of this fundamental difference, the two theories are found to yield equivalent descriptions of the gravitational interaction. An immediate implication of this equivalence is that curvature and torsion are simply alternative ways of describing the gravitational field, and are consequently related to the same degrees of freedom. This is corroborated by the fact that the symmetric matter energy-momentum tensor appears as the source in both theories: as the source of curvature in general relativity, and as the source of torsion in teleparallel gravity. Now, more general gravity theories, like Einstein-Cartan, gauge theories for the Poincar\'e and more general groups, as well as new general relativity, consider curvature and torsion as representing independent degrees of freedom. In these theories, therefore, torsion describes additional degrees of freedom and, in consequence, new physical phenomena should be expected. These differences give rise to a conceptual question concerning the actual role played by torsion. The two points of view are physically conflicting: if one is correct, the other is necessarily wrong. Which of them is right? In principle, experience should give the answer, but this is not so simple --- there seems to be no model--independent way to look for torsion. For example, owing to the Einstein--Cartan theory, there is a widespread belief that torsion has an intimate association with spin, and is consequently important only at the microscopic level.
Most searches rely on this point of view \cite{exptor}, though a recent proposal \cite{mao} looks for effects as predicted by new general relativity. It should be remarked that, due to the weakness of the gravitational interaction, there are no available data on the gravitational coupling of the fundamental particles. Concerning macroscopic physics, no one has ever reported new gravitational phenomena near a neutron star, for example, where the effects of torsion would be relevant according to Einstein--Cartan theory. Actually, there are no experimental signs of torsion in the sense predicted by Einstein--Cartan, gauge theories for the Poincar\'e and more general groups, and new general relativity. On the other hand, according to teleparallel gravity, torsion has already been detected: it is responsible for all known gravitational effects, including the physics of the solar system, which can be reinterpreted in terms of a force equation, with torsion playing the role of gravitational force. We could then say that the existing experimental data favor the teleparallel point of view, and consequently also general relativity. From the conceptual point of view, all alternative models --- Einstein--Cartan, gauge theories for Poincar\'e and more general groups, as well as new general relativity --- present consistency problems. For example, even though the coupling prescription of these models can comply with the {\it passive} strong equivalence principle, they are not consistent \cite{ap0} with the {\em active} version of the strong equivalence principle, also known as the general covariance principle \cite{weinberg}. Another relevant problem is that, when used to describe the interaction of the electromagnetic field with gravitation, the coupling prescriptions of these models violate the U(1) gauge invariance of Maxwell's theory. This problem is usually circumvented by {\it postulating} that the electromagnetic field does not couple to torsion \cite{postulate}.
This ostrich-like ``solution'' is, however, far from reasonable. On the other hand, the teleparallel interpretation for torsion presents several conceptual advantages in relation to the other theories: it is consistent with both active and passive versions of the strong equivalence principle \cite{mospe} and describes the interplay of electromagnetic and gravitational fields without violating electromagnetic gauge invariance \cite{vector}. In spite of the conceptual soundness of the teleparallel approach, we prefer to say once more that a definitive answer can only be achieved by experiments. \section*{Acknowledgments} The authors would like to thank FAPESP, CNPq and CAPES for partial financial support.
\section{Introduction} A vital field in 3D vision and robotics is 3D object reconstruction. By compressing input data into simpler representations such as meshgrids or sets of simpler objects, we can understand and interact with objects and their surroundings more easily, solving problems like collision avoidance~\cite{smith2020incorporating} or grasp planning~\cite{goldfeder2007grasp},~\cite{vezzani2017grasping}. A common type of reconstruction is to represent a given (3D) object with a set of simpler shape primitives, often referred to as \textit{geons}. A popular approach in geon reconstruction is to estimate a fixed amount of geons and, concurrently, predict which of those should be kept in the final geon set, thus retaining the variability in geon numbers~\cite{tulsiani2017learning},~\cite{paschalidou2019superquadrics}. In our previous work, for example, we used a MaskRCNN model to first segment the input depth image and then reconstruct the parts using a specific type of geons, called superquadrics~\cite{sircelj2020segmentation}. Following the overall idea of our prior work, we introduce a novel method to reconstruct 3D objects with a set of superquadrics (SQ) in this paper. However, in this novel approach we base the procedure on a hierarchical tree decomposition algorithm. Instead of using a model that returns a fixed amount of geons at once, we introduce a hierarchical decomposition model, that incrementally splits the input object into more and more SQ representations, as illustrated in~\Cref{fig:tree_examp}. While a similar method was introduced by Paschalidou \textit{et al.} in~\cite{paschalidou2020learning}, we propose a splitting method based on the superquadrics characteristic, alleviating the model from determining how the object should be hierarchically split. We evaluate the proposed approach on the ShapeNet dataset. Two models with different reconstruction capabilities are trained for the experiments. 
The first, facilitating reconstructions with a maximum of $4$ superquadrics, is trained and tested on the full ShapeNet dataset. The second, enabling reconstructions with up to $8$ superquadrics, is assessed on the pistol ShapeNet subset. Results are presented in terms of IoU scores and with visual examples, and point to the feasibility of the proposed solution. \begin{figure}[t!] \begin{center} \includegraphics[width=0.85\linewidth]{images/sq_pair_tree2.png} \caption{Visualization of the hierarchical decomposition with superquadrics. Each step splits the object into a pair of superquadrics ($a$ and $b$). The split is driven by previous superquadrics as shown by the arrows. The tree levels are indexed with $d$, whereas the SQ pairs in each level are indexed with $i$.\vspace{-3mm}} \label{fig:tree_examp} \end{center} \end{figure} \section{Method} \subsection{Superquadrics} To reconstruct the 3D shapes we use superquadrics, which are geometric shapes that can describe objects such as spheres, ellipsoids, cylinders and rectangular cuboids. A common way to describe the surface of a superquadric is with the implicit function $F(x, y, z) = 1$. Here $F$ is called the \textit{inside-outside function} and is defined as \begin{equation} \label{eq:sq_function} F(x, y, z) = \left( \left( \frac{x}{a_1} \right)^\frac{2}{\varepsilon_2} + \left( \frac{y}{a_2} \right)^\frac{2}{\varepsilon_2} \right)^\frac{\varepsilon_2}{\varepsilon_1} + \left( \frac{z}{a_3} \right)^\frac{2}{\varepsilon_1}, \end{equation} where $a_1, a_2, a_3$ define the size of the superquadric and $\varepsilon_1, \varepsilon_2$ its shape. We can also move the superquadric in space by offsetting the $x, y, z$ coordinates by $t_1, t_2, t_3$, respectively, and by rotating the coordinates using the quaternion notation $q_0, q_1, q_2, q_3$.
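As a concrete illustration, the posed \textit{inside-outside} function can be sketched in NumPy as follows (a sketch only; the scalar-first quaternion convention and the function names are our assumptions):

```python
import numpy as np

def quat_to_rot(q):
    """Rotation matrix from a unit quaternion q = (q0, q1, q2, q3), scalar first."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def inside_outside(points, size, shape, t, q):
    """Evaluate F(x, y, z) for a posed superquadric.

    points : (N, 3) array of world coordinates
    size   : (a1, a2, a3), shape : (eps1, eps2)
    t      : translation (t1, t2, t3), q : unit quaternion
    """
    a1, a2, a3 = size
    e1, e2 = shape
    # World -> local frame: undo the translation, then the rotation.
    local = (np.asarray(points, float) - np.asarray(t, float)) @ quat_to_rot(q)
    x, y, z = local[:, 0], local[:, 1], local[:, 2]
    # Absolute values keep the fractional exponents well defined for negative coordinates.
    return ((np.abs(x / a1) ** (2 / e2) + np.abs(y / a2) ** (2 / e2)) ** (e2 / e1)
            + np.abs(z / a3) ** (2 / e1))
```

For a unit sphere ($a_i = 1$, $\varepsilon_{1,2} = 1$) this reduces to $F = x^2 + y^2 + z^2$, so the $F<1$, $F=1$, $F>1$ classification can be checked directly.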
To abbreviate the notation we write the spatial coordinates using vector notation $\bm{x} = [x, y, z]$, and all superquadric parameters as $$\bm\lambda = [a_1, a_2, a_3, \varepsilon_1, \varepsilon_2, t_1, t_2, t_3, q_0, q_1, q_2, q_3].$$ A useful property of the \textit{inside-outside function}, which will be used further on, is that it indicates whether a point lies inside or outside the superquadric (hence the name). Points with $F(\bm{x}) < 1$ lie inside the object, points with $F(\bm{x}) > 1$ lie outside, and, as already suggested, points with $F(\bm{x}) = 1$ lie on its surface. For numerical stability we instead compare $F(\bm{x})^{\varepsilon_1}$ values~\cite{jaklic2000segmentation}; the spatial inequalities remain the same. \begin{figure}[!t] \begin{center} \includegraphics[width=0.88\linewidth]{images/model_architecture2.png} \caption{Recursive model architecture.}\vspace{-4mm} \label{fig:model_arch} \end{center} \end{figure} \subsection{Model architecture} Similarly to Paschalidou \textit{et al.}~\cite{paschalidou2020learning} we use a hierarchical decomposition procedure to reconstruct a complex 3D object using a set of superquadrics. This allows the decomposition to predict different numbers of superquadrics, increasing their number for fine detail and decreasing it for larger parts of the object. The procedure works by first extracting a feature vector from the input depth image, which we refer to as a \textit{depth feature}. This is concatenated with a \textit{hierarchical feature} and passed into a recursive neural network which predicts two new \textit{hierarchical features}, each encoding the data needed to predict a new superquadric. This prediction is done with an \textit{SQ predictor} network which predicts the parameters $\bm{\lambda}$ described previously. Each of the two \textit{hierarchical features} can again be concatenated with the depth feature vector and passed into the recursive neural network, thus splitting the initial superquadric feature into two.
In the first step, the hierarchical feature can simply be a vector of zeros. The model architecture corresponding to the outlined idea is shown in \Cref{fig:model_arch}. Such a model produces a superquadric-pair binary tree, where each node contains two sets of parameters and links to two children, which again contain two sets of parameters each. We use the following notation to reference the superquadric parameters in the tree: \begin{equation} \textrm{SQ\_pair}_{d, i} = [\bm{\lambda}_a, \bm{\lambda}_b], \label{eq:sq-pair-tree} \end{equation} where $d$ is the depth of the node and $i$ is the index of the node in the tree. This notation is also shown in the tree example in \Cref{fig:tree_examp}. \subsection{Training} For training, we use the natural property of the \textit{inside-outside} function, whose value relative to $1$ separates the inside from the outside of the superquadric. Similarly to \cite{paschalidou2020learning}, we define the occupancy function $g(\bm x; \bm\lambda)$, which translates this property into a binary classifier of the inside space \begin{equation} g(\bm x; \bm\lambda) = \sigma(s * (1 - F^{\,\varepsilon_1}(\bm{x}; \bm\lambda))), \end{equation} where $\sigma$ is the sigmoid function, and $s$ is a scaling parameter that defines the slant of the values around the surface of the superquadric -- for more information see~\cite{paschalidou2020learning}. Since we want to use this occupancy property of the superquadrics, we generate the training dataset by sampling points in space and annotating them with $1$ if they lie inside the 3D object and $0$ if they lie outside. The root node of the superquadric pair tree, Eq.~\eqref{eq:sq-pair-tree}, is trained on these sets of initial ground-truth points and labels, denoted as $\mathcal{P}$ and $\mathcal{L}_{1,1}$ respectively.
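A minimal sketch of this occupancy classifier (axis-aligned superquadric for brevity; the default sharpness $s$ is a placeholder, not the value used in training):

```python
import numpy as np

def f_e1(points, size, shape):
    """F^{eps1}(x) for an axis-aligned superquadric (pose omitted for brevity)."""
    a1, a2, a3 = size
    e1, e2 = shape
    x, y, z = np.asarray(points, float).T
    F = ((np.abs(x / a1) ** (2 / e2) + np.abs(y / a2) ** (2 / e2)) ** (e2 / e1)
         + np.abs(z / a3) ** (2 / e1))
    return F ** e1  # the numerically stabler quantity compared against 1

def occupancy(points, size, shape, s=10.0):
    """g(x; lambda) = sigmoid(s * (1 - F^{eps1})): ~1 inside, ~0 outside, 0.5 on the surface."""
    return 1.0 / (1.0 + np.exp(-s * (1.0 - f_e1(points, size, shape))))
```

Minimizing a binary cross entropy between $g$ and the sampled inside/outside labels then pulls the superquadric towards the occupied space.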
\begin{figure}[!t] \begin{center} \includegraphics[width=\linewidth]{images/space_split_comp2.png} \caption{Rows: top - computations on the $f=F^{\varepsilon_1}$ function, bottom - computations on the radial Euclidean distance function ($f=d$). Columns: first - values of the $f$ function around the first superquadric, shown in red, second - values of $f$ around the second superquadric, shown in blue, third - space split according to the $f$ function. We can see that $F$ and $F^{\varepsilon_1}$ split space better inside of the superquadrics, while the radial distance function splits space more evenly outside of the superquadrics.}\vspace{-5mm} \label{fig:sq_split} \end{center} \end{figure} The child SQ pairs should only cover the space in proximity to the space occupied by their parent superquadric, and should not be fitted to the full inside of the 3D model, i.e.\ $\mathcal{L}_{1,1}$. Paschalidou \textit{et al.}~\cite{paschalidou2020learning} solve this problem by also predicting the centroid of the part covered by a predicted superquadric. The space is then split by matching every point to its closest centroid (see Figure 4 in~\cite{paschalidou2020learning}). In this paper, we instead use the mathematical properties of superquadrics to split the space. More specifically, we found that it is best to split the space inside and outside of the superquadric pairs differently. The space inside of the superquadrics is split using the \textit{inside-outside} function, by finding the maximum of the \textit{inside-outside} function, i.e., \begin{equation} \argmax_{i = a,b} F^{\varepsilon_1}(\bm x; \bm\lambda_i). \end{equation} Conversely, the space outside of the superquadric pair is split using the radial Euclidean distance \begin{equation} d(\bm x) = |\bm x|\, \bigl| 1 - F^{-\frac{\varepsilon_1}{2}}(\bm x) \bigr|, \end{equation} taking the superquadric with the minimum value \begin{equation} \argmin_{i = a,b} d_i(\bm x). \end{equation} This distance function is used outside because it behaves more like a true distance function far away from a superquadric. For more information on the radial distance function, see Jaklič \textit{et al.}~\cite{jaklic2000segmentation}. The space-splitting properties of the \textit{inside-outside} function and the \textit{radial Euclidean} distance function are illustrated in \Cref{fig:sq_split}. The proposed splitting method is simpler than the one used in~\cite{paschalidou2020learning}, since no additional values need to be predicted, and it exploits the space-splitting capabilities of the superquadrics, thus splitting the space more naturally and in accordance with their geometry. \begin{figure}[!t] \begin{center} \includegraphics[width=0.9\linewidth]{images/space_split_vis.pdf} \caption{Left: Airplane with its insides noted dark $\mathcal{L}_{1,1}$, center top: outline of SQ 1 with its assigned space after split, center bottom: outline of SQ 2 with its assigned space, right: two outcomes of \Cref{eq:space_spilt}, top is $\mathcal{L}_{2,1}$, bottom $\mathcal{L}_{2,2}$.} \label{fig:obj_inside_outside} \end{center}\vspace{-4mm} \end{figure} Since our model produces a hierarchy of superquadrics, we must also construct a hierarchy of labels $\mathcal{L}_{d,i}$ on which we compute the losses. These labels are computed recursively according to how the space is split using the superquadric pairs. A point inside the object is considered inside for an SQ-pair child only if the split space of its parent superquadric also contains it (see \Cref{fig:obj_inside_outside}).
This translates to a logical AND operation between the parent's ground-truth occupancy and the space split of the parent \begin{equation} \begin{split} l_{d, i} =& \; l_{parent\_node(d, i)} \;\; \wedge \\ & \; split(\bm x, \bm\lambda_{parent\_sq(d, i)}, \bm\lambda_{uncle\_sq(d, i)}), \end{split} \label{eq:space_spilt} \end{equation} where $l_{d, i} \in \mathcal{L}_{d, i}$ is the label belonging to point $\bm x \in \mathcal{X}$, $parent\_node(d, i)$ denotes the parent SQ pair node, $parent\_sq(d, i)$ is the parent superquadric ($a$ or $b$) and $uncle\_sq(d, i)$ is the other superquadric from the parent pair node. For example, for node $(d, i) = (2, 2)$: \begin{equation*} \begin{split} parent\_node(2, 2) &= (1, 1)\\ parent\_sq(2, 2) &= (1, 1, a)\\ uncle\_sq(2, 2) &= (1, 1, b). \end{split} \end{equation*} \begin{table}[!t] \caption{IoU scores of ModelS (full ShapeNet).} \label{tab:models_iou} \smallskip \small \begin{center} \begin{tabular}{ | r | c | c | } \hline \textbf{SQ tree level} & $1$ & $2$\\ \hline \textbf{IoU} & $56.7\%$ & $58.8\%$ \\ \hline \end{tabular}\vspace{-5mm} \end{center} \end{table} \input{two_column_table} \begin{table}[h] \caption{IoU scores of ModelP (pistols).} \label{tab:modelp_iou} \smallskip \small \begin{center} \begin{tabular}{ | r | c | c | c |} \hline \textbf{SQ tree level} & $1$ & $2$ & $3$ \\ \hline \textbf{IoU} & $63.5\%$ & $65.6\%$ & $64.7\%$ \\ \hline \end{tabular}\vspace{-4mm} \end{center} \end{table} Finally, the occupancy loss is calculated using the occupancy values for each pair tree node and the ground truth occupancy values $\mathcal{L}_{d,i}$ \begin{equation} L = \sum_{d=1} \sum_{i=1}^{2^{d-1}} \sum_{\substack{\bm x \in \mathcal{X}\\l_{d,i} \in \mathcal{L}_{d,i}}} L_{BCE}(\max_{sq = a,b} g(\bm x; \bm\lambda_{sq}), l_{d, i}), \end{equation} where $L_{BCE}$ is the binary cross entropy loss.
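The split rule and the recursive label construction can be sketched as follows (a simplified sketch with translated sphere-like superquadrics, $\varepsilon_1 = \varepsilon_2 = 1$; treating a point as inside when it lies inside at least one of the two superquadrics is our reading of the rule, and the function names are ours):

```python
import numpy as np

def f_e1(points, size, t):
    """F^{eps1} for a translated sphere-like superquadric (eps1 = eps2 = 1)."""
    p = np.asarray(points, float) - np.asarray(t, float)
    return np.sum((p / np.asarray(size, float)) ** 2, axis=1)

def radial_distance(points, size, t):
    """Radial Euclidean distance d(x) = |x| * |1 - F^{-eps1/2}| in the SQ's local frame."""
    p = np.asarray(points, float) - np.asarray(t, float)
    fe1 = np.maximum(f_e1(points, size, t), 1e-12)  # guard the centre; d is only used outside
    return np.linalg.norm(p, axis=1) * np.abs(1.0 - fe1 ** -0.5)

def split_to_a(points, sq_a, sq_b):
    """True where a point is assigned to superquadric a, False where it goes to b.

    Inside points are assigned via the inside-outside function (argmax rule),
    outside points via the smaller radial distance (argmin rule).
    """
    fa, fb = f_e1(points, *sq_a), f_e1(points, *sq_b)
    inside = np.minimum(fa, fb) < 1.0
    da, db = radial_distance(points, *sq_a), radial_distance(points, *sq_b)
    return np.where(inside, fa >= fb, da <= db)

def child_labels(points, parent_labels, sq_parent, sq_uncle):
    """Eq. (space split): l_{d,i} = l_parent AND split(x, lambda_parent, lambda_uncle)."""
    return np.logical_and(parent_labels, split_to_a(points, sq_parent, sq_uncle))
```

The resulting child labels $l_{d,i}$ are exactly what the binary cross entropy in the occupancy loss is evaluated against.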
This loss is similar to the \textit{part-reconstruction loss} term from Paschalidou \textit{et al.}~\cite{paschalidou2020learning}, but differs in that we look at how accurately an SQ pair matches the designated space of its parent superquadric. \section{Results} In this section, we present the results for two trained hierarchical decomposition models. We use the ShapeNet dataset~\cite{shapenet2015} for training, along with the NVIDIA Kaolin v0.1 library~\cite{KaolinLibrary}, which is used to compute signed distance function values of the 3D objects and to infer the ground truth inside-outside space. The first model (abbreviated ModelP) is trained on the Pistol subset of ShapeNet with the maximum depth of the superquadric pair tree set to $3$ levels. The second model (ModelS) is trained on the full ShapeNet dataset with the maximum depth set to $2$ levels. Training with more levels proved challenging and resulted in degenerate SQs, often small and out of bounds. \textbf{Performance Indicator.} After a train-validation-test split, both models are evaluated on the test subset using the intersection-over-union metric (IoU) \begin{equation} \label{eq:iou} IoU(\Lambda_d, \mathcal{L}_{1,1}) = \frac{\sum_{\substack{\bm x \in \mathcal{X}\\l \in \mathcal{L}_{1,1}}} \hat l_{d}(\bm x) \wedge l} {\sum_{\substack{\bm x \in \mathcal{X}\\l \in \mathcal{L}_{1,1}}} \hat l_{d}(\bm x) \vee l }, \end{equation} where $\Lambda_d$ denotes all predicted superquadric parameters in the tree at layer $d$, $\Lambda_d = \{\lambda_{d,i} | i \in \left[ 1, 2^{d-1} \right]\}$, and $\hat l_{d}(\bm x)$ denotes the predicted inside-outside label for point $\bm x$. The latter is set to inside, $\hat l_{d}(\bm x) = 1$, if $g(\bm x; \bm\lambda_{d,i}) > 0.5$ for any superquadric in layer $d$; otherwise the point is outside of the reconstruction.
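Evaluated on sampled points, Eq.~(\ref{eq:iou}) amounts to the following computation (a sketch; the array layout is our choice):

```python
import numpy as np

def predicted_inside(g_per_sq):
    """A point is inside the reconstruction if any SQ in the layer gives g > 0.5.

    g_per_sq : (num_sqs, num_points) array of occupancy values.
    """
    return (np.asarray(g_per_sq) > 0.5).any(axis=0)

def iou(pred, gt):
    """Intersection over union of predicted and ground-truth inside labels."""
    pred, gt = np.asarray(pred, bool), np.asarray(gt, bool)
    return np.logical_and(pred, gt).sum() / np.logical_or(pred, gt).sum()
```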
\textbf{Model Comparison.} The IoU results of ModelS on the full ShapeNet dataset are shown in~\Cref{tab:models_iou} and for separate subsets of ShapeNet in~\Cref{tab:iou_shapenet_sep}. The IoU results of ModelP for the pistol subset are reported in~\Cref{tab:modelp_iou}. Since the full ShapeNet dataset contains much more variability, ModelS scores a lower IoU over the full ShapeNet dataset than ModelP does over the pistol subset. An interesting observation is that ModelS still performs better on the pistol subset than ModelP, even though the latter was trained exclusively on pistols. This suggests that training on more complex and diverse sets can improve results on specific subsets. However, training on the full ShapeNet did prove more difficult. For instance, we managed to train ModelS for only up to $2$ tree levels, while we managed to train ModelP to predict trees up to $3$ levels deep. \textbf{Object-specific Results.} Regarding the specific subsets of ShapeNet, the model performed best on simpler object classes, e.g., \textit{dishwasher} or \textit{microwave}, which have a box-like shape and are thus easier to reconstruct using superquadrics. The poorest results are on concave objects, such as \textit{bowl} or \textit{bathtub}, since only two levels of SQ pairs do not provide enough capacity to cover the object. \textbf{Qualitative Evaluation.} In~\Cref{fig:shapenet_rec_vis} we show a few reconstruction examples of ModelS. As we can see, the largest issue is the low number of predicted superquadrics, since ModelS has been trained for only $2$ levels and can therefore reproduce an object with at most $4$ superquadrics. While simpler objects like the two cars are well reconstructed, more complex objects prove impossible to reconstruct with $4$ superquadrics. For example, both tables have four legs, which get covered by two rectangular superquadrics in the left case, and by a cylindrical one on the right.
The same problem can be observed on both of the planes, where the wings and the engines prove much too complex for only $4$ superquadrics. \begin{figure}[!t] \begin{center} \includegraphics[width=0.8\linewidth]{images/shapenet_reconstructions.pdf} \vspace{-6mm} \caption{Reconstruction examples of ModelS on full ShapeNet. The 1st and 3rd columns show the original object. The 2nd and 4th show the reconstruction from the second SQ tree layer.} \label{fig:shapenet_rec_vis} \end{center}\vspace{-6mm} \end{figure} In~\Cref{fig:pistols_rec_vis} we show a few ModelP reconstructions, from worst to best, using the $2^{nd}$ and $3^{rd}$ tree levels. Here, we observe that the worst-performing objects were those pistol models that are rare in the pistol subset, like the $1^{st}$ and $2^{nd}$ pistol. The most common type of handgun, in the last two rows, performed best in terms of IoU. \begin{figure}[!htb] \begin{center} \includegraphics[width=0.85\linewidth]{images/pistol_vis.png}\vspace{-3mm} \caption{Reconstruction examples of ModelP on the pistol subset. First column - original object, second - reconstruction from the second SQ tree level, third - reconstruction from the third SQ tree level. The reconstructions are sorted from worst to best with the reconstruction IoU annotated above.} \label{fig:pistols_rec_vis} \end{center}\vspace{-6mm} \end{figure} \section{Conclusion} The proposed hierarchical decomposition has proven successful at decomposing the targeted input objects, even from single point-of-view depth images, which give a highly occluded representation of the 3D object. We showed that, unlike in Paschalidou \textit{et al.}~\cite{paschalidou2020learning}, we do not need to predict centroids to split the space into two; it is sufficient to use the already predicted superquadrics for space splitting. These results are supported by fairly good IoU scores and convincing visual reconstructions. There is, however, still much to improve in the model.
The largest issue we faced was the maximum feasible depth of the SQ pair tree, where we only reached a depth of $3$ levels, i.e., $8$ predicted superquadrics. \footnotesize \bibliographystyle{ieee}
\section{Supplementary Information} For the analysis, we have used the data of Ref.~\cite{TrenkwalderNATPHYS2016}, see the inset of Fig.~\ref{Fig2}b, obtained with tunneling rate $\Omega = 40$ Hz, atom number $N \approx 4500$, and temperature $\approx$ 10\,nK. \subsection{Data analysis of $\chi_{\rm mom}${}} According to Eq.~(\ref{chimom}), \begin{equation}\label{eq:chimom-def} \chi_{\rm mom}(\lambda) = \frac{1}{(\Delta \hat{O})^2} \bigg( \frac{d \big\langle \hat{O} \big\rangle}{d \lambda} \bigg)^2 \equiv a_0^2 \left(\frac{1}{\sigma}\frac{d |z|}{d a_s}\right)^2 \ , \end{equation} with $\lambda \equiv a_s/a_0$, where $a_s$ is the (s-wave) scattering length and $a_0$ the Bohr radius, and the observable $\langle \hat O \rangle \equiv |z|$, with $|z| = \frac{|N_L-N_R|}{N_R+N_L}$ the splitting of the number of atoms in the left and right well, $N_L$ and $N_R$ respectively. The fluctuation of the observable, $\Delta \hat O \equiv \sigma$, is obtained from a fit of two Gaussians to the measured distribution of observed $z$ values (for details see Ref.~\cite{TrenkwalderNATPHYS2016}). The splitting $|z|$ is half of the distance between the two Gaussians and $\sigma$ is the width of each of the Gaussians. Sample distributions and fitted Gaussians are plotted in Fig.~\ref{Fig2}a and the resulting $|z|$ is shown in Fig.~\ref{Fig2}b. Applying Eq.~\eqref{eq:chimom-def} directly to the data gives large errors, since it requires the derivative of discrete and noisy data. To avoid this, and to improve the quality of the result without the need for excessive smoothing of the (limited) data, we simulate the experiment and analysis using the double-Gaussian fit of the observed distribution as the sample distribution. For each of the roughly 3000 simulations we calculate $\chi_{\rm mom}${} with Eq.~\eqref{eq:chimom-def}, using the derivative of nearest neighbors in $a_s$, and collect the results in histograms for each scattering length $a_s$, see Fig.~\ref{FigSupp1}.
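The central step of this procedure, Eq.~\eqref{eq:chimom-def} evaluated with nearest-neighbor derivatives on the discrete data, can be sketched as follows (a sketch; with $a_s$ expressed in units of $a_0$, the prefactor $a_0^2$ is absorbed into the derivative):

```python
import numpy as np

def chi_mom(a_s, z, sigma):
    """chi_mom = a0^2 * ((d|z|/da_s) / sigma)^2, per scattering length.

    a_s   : scattering lengths in units of the Bohr radius a0 (so a0^2 drops out)
    z     : measured splitting |z| at each a_s
    sigma : width of the fitted Gaussians at each a_s
    The derivative is taken with nearest neighbors (central differences).
    """
    a_s, z, sigma = (np.asarray(v, float) for v in (a_s, z, sigma))
    dz_da = np.gradient(z, a_s)  # nearest-neighbor differences in the interior
    return (dz_da / sigma) ** 2
```

Repeating this on samples drawn from the fitted double-Gaussian distributions and histogramming the outcome yields the error estimates described above.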
We find that the histograms are best fitted with a Gaussian (red) on an exponential background centered at $\chi_{\rm mom}${} = 0. For the data point at $a_s = -2.76\,a_0$, the fit with the background gives $\chi_{\rm mom}$ $\approx 0$ but with a very large error; when we fit without the background, we still obtain a result close to zero but with a much smaller error. We fit all the histograms in linear scale, since this emphasizes the large count rates of $\chi_{\rm mom}$. However, fitting in logarithmic scale does not change the result much, except for the data point at $a_s = -1.88\, a_0$, see the second row of Fig.~\ref{FigSupp1}. This data point is difficult to assign, but we believe that the fit in linear scale (left in the figure) describes the histogram at large count rates better than the fit done in logarithmic scale (right in the figure). The results shown in Fig.~\ref{Fig2}d report the center of the fitted Gaussian as the resulting $\chi_{\rm mom}${} and the width of the Gaussian as the error. \\ \subsection{Data analysis of $\chi_{\rm cl}${}} The classical fidelity susceptibility $\chi_{\rm cl}${} is defined in Eq.~(\ref{chicl}) from the fidelity $\mathcal{F}_{\rm cl} (\lambda,\epsilon) = \sum_{\mu} \sqrt{P(\mu \vert \lambda) P(\mu \vert \lambda+\epsilon) }$ between neighbouring probability distributions $P(\mu \vert \lambda)$ and $P(\mu \vert \lambda+\epsilon)$, with $\lambda \equiv \frac{a_s}{a_0}$ and $\epsilon \rightarrow 0$. Expanding $\mathcal{F}_{\rm cl} (\lambda,\epsilon)$ to second order in $\epsilon$ gives: \begin{equation} \mathcal{F}_{\rm cl}(\lambda,\epsilon) = 1 - \frac{\chi_{cl}}{8} \epsilon^2 + O(\epsilon^3). \end{equation} To estimate $\chi_{\rm cl}${} from our data, we take for each scattering length $a^i_s$ the distribution $P(\mu \vert \lambda_i) \equiv h_i(\mu)$ from the original data $\mu \equiv z$, with a fixed bin size $\delta z = 0.05$. Each distribution is normalized such that $\sum_z h_i(z) = 1$.
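The histogram overlap and the second-order relation above translate directly into an estimator for $\chi_{\rm cl}${}; a minimal sketch (function names and the one-parameter least-squares form of the fit are ours):

```python
import numpy as np

def fidelity(h_i, h_j):
    """F_cl = sum_z sqrt(h_i(z) * h_j(z)) for two normalized histograms."""
    return float(np.sum(np.sqrt(np.asarray(h_i, float) * np.asarray(h_j, float))))

def chi_cl_fit(eps, fid):
    """Fit 1 - F_cl(eps) = (chi_cl / 8) * eps^2 through neighboring points.

    eps : offsets eps_ij = (a_j - a_i) / a0 to the neighboring scattering lengths
    fid : the corresponding overlaps F_cl(lambda_i, eps_ij)
    The single free parameter chi_cl is obtained by least squares.
    """
    eps = np.asarray(eps, float)
    y = 1.0 - np.asarray(fid, float)
    return 8.0 * np.sum(y * eps**2) / np.sum(eps**4)
```

For two identical histograms the overlap is exactly $1$, as required.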
We approximate the overlap $\mathcal{F}_{\rm cl}(\lambda,\epsilon)$ by the overlap of two neighboring distributions: \begin{equation} \begin{array}{ll} & \mathcal{F}_{\rm cl}(\lambda_i,\epsilon_{ij}) \approx \sum_z \sqrt{h_i(z) h_j(z)} \\ & \epsilon_{ij} = \lambda_{j}-\lambda_{i} \equiv \frac{a_j - a_i}{a_0}\ . \end{array} \end{equation} For each $\lambda_i$ we take the nearest neighboring points $\{x_j, y_j\} = \{\epsilon_{ij},1-\mathcal{F}_{\rm cl,ij}\}$ with $j = i \pm 1$ and fit a parabola through them, where the only free parameter, $\chi_{{\rm cl},i}$, is obtained from the fit. Note that for $\chi_{\rm mom}${} we have fitted the original distribution with a double Gaussian, which would not be needed for a direct analysis of $\chi_{\rm cl}${}. However, to improve the quality of the result and to obtain errors, we again proceed as for the analysis of $\chi_{\rm mom}${}: we generate 3000 realizations of the experiment using the double Gaussian fit of the original data as the distribution of the samples, calculate $\chi_{\rm cl}${} for each sample, and collect the results in histograms, see Fig.~\ref{FigSupp2}. Each resulting histogram is very well fit by a single Gaussian (red) without background. The results shown in Fig.~\ref{Fig2}c report the center of the Gaussian as the resulting $\chi_{\rm cl}${} and the Gaussian $\sigma$ as the error. \begin{widetext} \begin{figure}[ht] \includegraphics[width=\columnwidth] {FigSupp1.pdf} \caption{\label{fig:chi-mom-var}Histograms of $\chi_{\rm mom}${} (blue dots, 100 bins) generated from about 3000 simulations (number after ``\#'' in label) of random samples obtained from the distribution described by the double Gaussian fit of the original data. Each histogram is fitted with a Gaussian (red) on an exponential background (green, limited to 1), except for $a_s = -2.76\,a_0$ fitted without background.
The data for $a_s = -1.88\,a_0$ is shown twice (second row) where we show the fit result when the histogram is fitted in linear scale (left), as was done for all other histograms, and for comparison the fit result when the histogram is fitted in logarithmic scale (right).} \label{FigSupp1} \end{figure} \begin{figure}[ht] \includegraphics[width=\columnwidth] {FigSupp2.pdf} \caption{\label{fig:chi-cl-var}Histograms of $\chi_{\rm cl}${} (blue and green dots, 100 bins) generated from 3000 simulations (number after ``\#'' in label) of random samples obtained from the distribution described by the double Gaussian fit of the original data. Each histogram is fitted with a simple Gaussian (red). For comparison we also show the mean value and standard deviation (blue Gaussian) and the 68\,\% confidence region (green dots).} \label{FigSupp2} \end{figure} \end{widetext} \end{document}
\section{Summary} \label{Intro} The precise determination of the mass and width of the $f_0(500)$ resonance \cite{Cap06,Kam11, PDG} prompted us \cite{us} to revisit an old idea \cite{RJC70,Ell70} that the chiral condensate $\langle \bar{q}q \rangle_{\mathrm{vac}} \not= 0$ may also be a condensate for scale transformations in the chiral $SU(3)_L\times SU(3)_R$ limit. This may occur in QCD if the heavy quarks $t,b,c$ are first decoupled and then the strong coupling% \footnotemark[1]\addtocounter{footnote}{1} \footnotetext[1]{We have $[D_\mu\,,\,D_\nu] = ig G^a_{\mu\nu}T^a$ where $D_\mu$ is the covariant derivative, $\{T^a\}$ generate the gauge group, $\alpha_s = g^2/4\pi$ is the strong coupling, and $\beta = \mu\partial\alpha_{s}/\partial\mu$ and $\gamma_{m} = \mu\partial\ln m_q/\partial\mu$ refer to a mass-independent renormalization scheme with scale $\mu$.}% $\alpha_s$ of the resulting three-flavor theory runs nonperturbatively to a fixed point $\alpha^{}_{\mathrm{IR}}$ in the infrared limit (Fig.\ \ref{fig:beta}). At that point, $\beta(\alpha^{}_{\mathrm{IR}})$ vanishes, so the gluonic term in the strong trace anomaly \cite{Mink76} \begin{equation}\label{eqn:anomaly} \theta^\mu_\mu =\frac{\beta(\alpha_{s})}{4\alpha_{s}} G^a_{\mu\nu}G^{a\mu\nu} + \bigl(1 + \gamma_{m}(\alpha_{s})\bigr)\sum_{q=u,d,s} m_{q}\bar{q}q \end{equation} is absent, which implies \begin{align} \left.\theta^\mu_\mu\right|_{\alpha_s = \alpha^{}_{\mathrm{IR}}} &= \bigl(1 + \gamma_{m}(\alpha^{}_{\mathrm{IR}})\bigr) (m_u\bar{u}u + m_d\bar{d}d + m_s\bar{s}s) \nonumber \\ &\to 0\ , \ SU(3)_L\times SU(3)_R \mbox{ limit} \label{scale}\end{align} and hence a $0^{++}$ QCD dilaton% \footnotemark[2]\addtocounter{footnote}{1} \footnotetext[2]{\label{dilaton_def}% We reserve the term \emph{dilaton} and notation $\sigma$ for a NG boson due to scale invariance being preserved by the Hamiltonian but broken by the vacuum, in some limit. 
We are \emph{not} talking about the $\sigma$-model, scalar gluonium \cite{glue}, or walking gauge theories \cite{Holdom,Aki86,Appel86,Yamawaki} where $\beta \approx 0$ near a scale-invariant vacuum \cite{CBZ,Appel,Del10} and proposals for ``dilatons'' \cite{Yamawaki,Bando,Appel10} seem unlikely \cite{HoldomTerning}.}% $\sigma$ due to quark condensation.% \footnote{\label{condensate}% In field and string theory, it is often stated that Green's functions are manifestly conformal invariant for $\beta = 0$. This assumes that, as in perturbative theories with $\beta = 0$, there are no scale condensates. If a scale condensate is present, conformal invariance becomes manifest only if \emph{all} four-momenta are spacelike and large.} The obvious candidate for this state is the $f_0(500)$, which arises from a pole on the second sheet at a complex mass with typical value~\cite{Cap06} \begin{equation} m_{f_0} = 441-i\,272 \mbox{ MeV} \label{f_0}\end{equation} and surprisingly small errors \cite{Mink10}. In all estimates of this type, the real part of $m_{f_0}$ is less than $m_K$. In Sec.~\ref{Motiv} below, we recall problems with the phenomenology of $\chi$PT$_3$ caused by the $f_0$ pole in $0^{++}$ channels, and observe that they can be avoided by treating $f_0$ as a Nambu-Goldstone (NG) boson $\sigma$ in the limit (\ref{scale}). The result is chiral-scale perturbation theory $\chi$PT$_\sigma$, where the NG sector $\{\pi,K,\eta,f_0/\sigma\}$ is clearly separated in scale from other hadrons. Section~\ref{Lagrangian} introduces the model-independent $\chi$PT$_\sigma$ Lagrangian for meson amplitudes expanded in $\alpha_s$ about $\alpha^{}_{\mathrm{IR}}$ for $m_{u,d,s} \sim 0$. It summarizes soft $\pi, K, \eta, \sigma$ meson theorems for three-flavor chiral and scale symmetry. For amplitudes where $\sigma$ plays no role, the results agree with $\chi$PT$_3$. 
Results for soft $\sigma$ amplitudes (Sec.~\ref{strong}) are similar to those found originally \cite{RJC70,Ell70} but include effects due to the gluonic term in (\ref{eqn:anomaly}). In Appendix \ref{AppA}, Weinberg's analysis of the $\chi$PT$_2$ loop expansion \cite{Wei79} is extended to include $\chi$PT$_\sigma$. \begin{figure}[t] \center\includegraphics[scale=0.75]{beta_function2} \caption{The solid line shows a three-flavor $\beta$ function (or better, a QCD version \cite{Grun82} of the Gell-Mann--Low $\Psi$ function) with an infrared fixed point $\alpha^{}_{\mathrm{IR}}$ at which $\alpha_s$ freezes \cite{pinch,Steve,freeze,Brod,lattice,DSE} but the manifest scale invariance of \cite{CBZ,Appel,Del10} is \emph{avoided}. The existence of $\alpha^{}_{\mathrm{IR}}$ for small $N_f$ values is not entirely settled. The dashed line shows the original lattice result \cite{Lusc94} for $N_f = 0$ (no quarks) where $\beta$ remains negative and becomes linear at large $\alpha_s$.} \label{fig:beta} \end{figure} Effective electromagnetic and weak operators are then added to simulate two-photon processes (Sec.~\ref{Electromag}) and nonleptonic $K$ decays (Sec.~\ref{weak_emag}). The main result is a simple explanation of the $\Delta I =1/2$ rule for kaon decays: in the \emph{lowest} order of $\chi$PT$_\sigma$, there is a dilaton pole diagram (Fig.\ \ref{fig:k_pipi}) which produces most of the $\{\pi\pi\}_{I=0}$ amplitude \begin{equation} A_0 = A_{g^{}_{8,27}\,\textrm{vertices}} + A_{\sigma\textrm{-pole}} \simeq A_{\sigma\textrm{-pole}} \end{equation} and makes it large relative to the $I=2$ amplitude $A_2$ \cite{PDG}: \begin{equation} \bigl|A_0\bigl/A_2\bigr|_\textrm{expt} \simeq 22 \,. 
\end{equation} We conclude that the ratio of the \textbf{8} and \textbf{27} contact couplings $g_8$ and $g_{27}$ is of the order \begin{equation} 1 \lesssim \bigl|g_8\bigl/g_{27}\bigr| \lesssim 5 \label{ratio}\end{equation} indicated by early calculations \cite{Feyn65,Feyn71,Gaill74,Alta74}, and \emph{not} the value $22$ found by fitting lowest order $\chi$PT$_3$ to data. In order to obtain a value for the $K_S\sigma$ coupling of Fig.~\ref{fig:k_pipi}, we compare the two-photon processes $\gamma\gamma \to \pi\pi$ (Fig.~\ref{fig:gamgam_pipi}) and $K_S \to \gamma\gamma$. Well-known features of these amplitudes are the presence of ultraviolet finite $\pi^\pm, K^\pm$ loop diagrams coupled to the external photons \cite{DAm86}, and the need for a rule \cite{Gasser84,Gasser85,Leut94} \begin{equation} A_\mu \sim \partial_\mu = O(p) \label{GLrule}\end{equation} specifying the effect of a photon or weak boson field $A_\mu$ on the chiral order of terms in loop expansions. These features are important for our analysis, and in particular, for an investigation in Sec.~\ref{Electromag} of the relation between the $\sigma\gamma\gamma$ coupling (Fig.~\ref{fig:gamgam_pipi}) and the electromagnetic trace anomaly \cite{RJC72,Ell72} \begin{gather} \widetilde{\theta}^\mu_\mu = \theta^{\mu}_{\mu} + (R\alpha/6\pi) F_{\mu\nu} F^{\mu\nu} , \notag \\ R=\left.\frac{\sigma(e^{+}e^{-}\rightarrow\mathrm{hadrons})}% {\sigma(e^{+}e^{-}\rightarrow\mu^{+}\mu^{-})}\right|_{\textrm{energy}\to\infty} \label{eqn:em_anomaly} \end{gather} at the QCD infrared fixed point $\alpha_s = \alpha^{}_\mathrm{IR}$. Here $F_{\mu\nu}$ and $\alpha$ are the electromagnetic field strength tensor and fine-structure constant, and $\widetilde{\theta}_{\mu\nu}$ is the energy-momentum tensor for QCD and QED combined. \begin{figure}[t] \center\includegraphics[scale=.65]{k_pipi} \caption{Tree diagrams in the effective theory $\chi$PT$_\sigma$ for the decay $K_S\to\pi\pi$. 
The vertex amplitudes due to \textbf{8} and \textbf{27} contact couplings $g_8$ and $g_{27}$ are dominated by the $\sigma/f_0$ pole amplitude. The magnitude of $g^{}_{K_S\sigma}$ is found by applying $\chi$PT$_\sigma$ to $K_S \to \gamma\gamma$ and $\gamma\gamma \to \pi\pi$.} \label{fig:k_pipi} \end{figure}% \begin{figure}[b] \center\includegraphics[scale=.7]{gamgam_pipi} \caption{Dilaton pole in $\gamma\gamma \to \pi\pi$. In this order of $\chi$PT$_\sigma$, diagrams with a $\pi^\pm$ or $K^\pm$ loop coupled to both photons must also be included.} \label{fig:gamgam_pipi} \end{figure}% To obtain an approximate result for the decay $\sigma \to \gamma\gamma$, the momentum $q$ carried by $\theta^\mu_\mu$ has to be extrapolated from $q^2 = 0$ (given exactly by the electromagnetic trace anomaly) to $q^2 = m^2_\sigma$. In simple cases, and when photons are absent, this amounts to $\sigma$-pole dominance of $\theta^\mu_\mu$, i.e.\ partial conservation of the dilatation current (PCDC) \cite{Carr71}, which is the direct analogue of partial conservation of the axial-vector current (PCAC) for soft-pion amplitudes. However, we find that, unlike PCAC for $\pi^0 \to \gamma\gamma$, PCDC for $\sigma \to \gamma\gamma$ is modified by meson loop diagrams coupled to photons. In effect, these ultraviolet convergent diagrams produce an infrared singularity which is an inverse \emph{power} of the light quark mass, arising in the same way as conventional Li-Pagels singularities \cite{LiPag,Pagels75}, but sufficiently singular to compete with the pole term. In Appendix \ref{AppB}, we show that, for a fixed number of external operators coupled purely to the NG sector, these inverse-power singularities do not upset the convergence of the chiral expansion: relative to the corresponding lowest order graph, be it tree or loop, each additional loop produces a factor $O(m_q)$ or $O(m_q\ln m_q$). 
The analysis generalizes the rule (\ref{GLrule}) for minimal gauge couplings \cite{Gasser84,Gasser85} and its extension to axial anomalies \cite{Leut94} to include (a) other nonminimal gauge couplings such as the electromagnetic trace anomaly (\ref{eqn:em_anomaly}), and (b) external Wilson operators of any kind. Appendix \ref{AppC} is a brief note about Eq.~(\ref{eqn:em_anomaly}) for QCD in the physical region $0 < \alpha_s < \alpha^{}_\mathrm{IR}$. Unlike other results in this article, our estimate \begin{equation} R^{}_{\mathrm{IR}} \approx 5 \end{equation} for the renormalized value of $R$ at the fixed point depends on the many-color limit $N_c \to \infty$. This involves the observation (Sec.~\ref{Motiv}) that for $N_c$ large, the dilaton $\sigma/f_0$ is a $q\bar{q}$ state, i.e.\ similar to $\pi, K, \eta$, but with planar-gluon corrections. Like other $q\bar{q}$ resonances, $\sigma/f_0$ has a {\it narrow} width in that limit (Sec.~\ref{strong}). \section{Motivation} \label{Motiv} It may seem odd that new conclusions about QCD can be drawn simply from approximate chiral symmetry and $0^{++}$ pole diagrams. Scalar pole dominance for reactions like $K_S \to \pi\pi$ was considered long ago \cite{Golo80, Volk88, Moro90, Pol02}; it can be easily incorporated in a chiral invariant way, and if difficulties with hyperon decays% \footnote{Accounting for nonleptonic hyperon decays will require either $\chi$PT for baryons or the weak sector of the Standard Model to be modified.} are overlooked, theory and experiment for soft $\pi,K,\eta$ amplitudes are in excellent agreement, with dispersive corrections included where necessary.
The flaw in this picture is contained in another old observation --- lowest order $\chi$PT$_3$, if not corrected, typically fails for amplitudes which involve both a $0^{++}$ channel and $O(m_K)$ extrapolations in momenta: \begin{enumerate} \item Final-state $\pi\pi$ interactions \cite{Tru84} in $K_{\ell 4}$ decays \cite{Tru81} and nonleptonic $K$ \cite{Nev70,Tru88} and $\eta$ \cite{Roi81,Gass85} decays compete with and often dominate purely chiral contributions \cite{Tru84,Tru81,Nev70,Tru88,Roi81,Gass85,Meiss91}. \item The chiral one-loop prediction for the $K_L \to \pi^0\gamma\gamma$ rate \cite{Eck87} is only 1/3 of the measured value \cite{Kam94}. \item The lowest order prediction \cite{Bij88,Don88} of a linear rise in the $\gamma\gamma \to \pi^0\pi^0$ cross section disagrees \cite{Don93} with the Crystal Ball data \cite{Xal90}. \end{enumerate} These facts became evident at a time when it was thought that $0^{++}$ resonances below $\approx$ 1 GeV did not exist,% \footnote{The $\epsilon(700)$ resonance considered in \cite{RJC70,Ell70,RJC72,Ell72} was last listed in 1974 \cite{PDG74}. Replacing it by $f_0(500)$ was proposed in 1996 \cite{Torn96}.} but it was already clear that agreement with data required the inclusion of large dispersive effects which had to be somehow ``married'' to chiral predictions \cite{Don95}. The same can be said now, except that the $f_0(500)$ pole of Eq.~(\ref{f_0}) can be identified as the source of these effects. Consequently dispersion theory for these processes, with the possible exception of $\eta \to 3\pi$ decay \cite{Col11}, is far better understood \cite{Pol02,Ynd07,Pen06,Col12,Trof12}. But that does nothing to alter the fact that the lowest order of standard chiral $SU(3)_L \times SU(3)_R$ perturbation theory $\chi$PT$_3$ fits these data so poorly. 
The lowest order amplitude ${\cal A}_\mathrm{LO}$ is the first term of an asymptotic series \begin{equation} {\cal A} = \bigl\{{\cal A}_\mathrm{LO} + {\cal A}_\mathrm{NLO} + {\cal A}_\mathrm{NNLO} + \ldots\bigr\}_{\chi\mathrm{PT}_3} \label{chiral}\end{equation} in powers of $O(m_K)$ momenta and quark masses $m_{u,d,s} = O(m_K^2)$ (with $m_{u,d}/m_s$ held fixed). If the first term is a poor fit, \emph{any} truncation of the series to make it agree with a dispersive fit to data is unsatisfactory \emph{because the series is diverging}. For example, consider the amplitude for $K_L \to \pi^0\gamma\gamma$ (item 2 above). Let the series (\ref{chiral}) be matched to data by including dispersive NLO corrections (next to lowest order) and then truncating: \begin{equation} {\cal A}_{K_L \to \pi^0\gamma\gamma} \simeq \bigl\{{\cal A}_\mathrm{LO} + {\cal A}_\mathrm{NLO}\bigr\}_{\chi\mathrm{PT}_3} \,. \end{equation} The LO prediction for the rate is too small by a factor of 3, so, depending on the relative phase of the LO and NLO terms, a fit can be achieved only for \begin{equation} \bigl|{\cal A}_\mathrm{NLO}\bigr|_{\chi\mathrm{PT}_3} \gtrsim \sqrt{2}\bigl|{\cal A}_\mathrm{LO}\bigr|_{\chi\mathrm{PT}_3} \,. \label{Klong} \end{equation} How can this be reconciled with the success \cite{Gasser85} of $\chi$PT$_3$ elsewhere? Corrections to lowest order $\chi$PT$_3$ should be $\sim$ 30\% at most: \begin{equation} \bigl|{\cal A}_\mathrm{NLO}\bigl/{\cal A}_\mathrm{LO}\bigr|_{\chi\mathrm{PT}_3} \lesssim 0.3 \mbox{ , acceptable fit.} \label{eqn:XPT3_fit} \end{equation} A standard response% \footnote{LCT thanks Professor H.~Leutwyler for a discussion of this point.} is that there are limits to the applicability of an expansion like $\chi$PT$_3$, so failures in a few cases are to be expected. In our view, there is a consistent trend of failure in $0^{++}$ channels which can and should be corrected by modifying the \emph{lowest order} of the three-flavor theory.
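The arithmetic behind the bound (\ref{Klong}) is elementary and easily checked; the sketch below (illustrative numbers only) assumes the measured rate is 3 times the LO prediction and considers two limiting choices of relative phase.

```python
import math

# Sketch of the amplitude mismatch behind Eq. (Klong).  Input: the measured
# K_L -> pi0 gamma gamma rate is ~3x the lowest-order chiral prediction.
rate_factor = 3.0

# Orthogonal LO/NLO phases (rates add in quadrature):
#   |A_LO|^2 + |A_NLO|^2 = 3|A_LO|^2  =>  |A_NLO|/|A_LO| = sqrt(2)
ratio_quadrature = math.sqrt(rate_factor - 1.0)

# Fully constructive interference (most favorable phase):
#   (|A_LO| + |A_NLO|)^2 = 3|A_LO|^2  =>  |A_NLO|/|A_LO| = sqrt(3) - 1
ratio_aligned = math.sqrt(rate_factor) - 1.0

print(ratio_quadrature, ratio_aligned)  # ~1.41 and ~0.73
```

Even the most favorable phase requires $|{\cal A}_\mathrm{NLO}/{\cal A}_\mathrm{LO}| \approx 0.73$, well above the 30\% benchmark of Eq.~(\ref{eqn:XPT3_fit}).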
This must be achieved without changing $\chi$PT$_2$, where amplitudes are expanded about the chiral $SU(2)_L \times SU(2)_R$ limit with $O(m_\pi)$ extrapolations% \footnote{For some authors, ``two-flavor theory" refers to pionic processes \emph{without} the restriction $O(m_\pi)$ on pion momenta. Then the relevant theory is $\chi$PT$_3$ or $\chi$PT$_\sigma$, not $\chi$PT$_2$. See Fig.~\ref{fig:goldstone}.} in momenta; $\chi$PT$_2$ is wholly successful, producing convergent results with small corrections, typically 5\% or at most 10\%: \begin{equation} \bigl|{\cal A}_\mathrm{NLO}\bigl/{\cal A}_\mathrm{LO}\bigr|_{\chi\mathrm{PT}_2} < 0.1 \mbox{ , observed fits.} \end{equation} Our solution is to replace $\chi$PT$_3$ by chiral-scale perturbation theory $\chi$PT$_\sigma$, whose NG sector $\{\pi,K,\eta,\sigma/f_0\}$ includes $f_0(500)$ as a dilaton $\sigma$ associated with the scale-invariant limit (\ref{scale}). In $\chi$PT$_\sigma$, the strange quark mass $m_s$ sets the scale of $m^2_{f_0}$ as well as $m^2_K$ and $m^2_\eta$ (Fig.~\ref{fig:goldstone}, bottom diagram). As a result, the rules for counting powers of $m_K$ are changed: $f_0$ pole amplitudes (NLO in $\chi$PT$_3$) are promoted to LO. That fixes the LO problem for amplitudes involving $0^{++}$ channels and $O(m_K)$ extrapolations in momenta. At the same time, $\chi$PT$_\sigma$ \emph{preserves} the LO successes of $\chi$PT$_3$ elsewhere: for reactions which do not involve $\sigma/f_0$, the predictions of $\chi$PT$_3$ and $\chi$PT$_\sigma$ are identical. The analysis relies on a clear distinction being drawn between $\chi$PT$_2$, $\chi$PT$_3$, and $\chi$PT$_\sigma$. For each amplitude $\cal A$, these three versions of $\chi$PT produce three inequivalent asymptotic expansions of the form (\ref{chiral}). The corresponding scale separations between NG sectors and other particles are shown in Fig.~\ref{fig:goldstone}. 
We use $\chi$PT$_2$ in the strict sense originally intended \begin{figure} \includegraphics[scale=0.53]{goldstone_mass2v5} \caption{Scale separations between Nambu-Goldstone (NG) sectors and other hadrons for each type of chiral perturbation theory $\chi$PT discussed in this paper. Note that scale separation in $\chi$PT$_2$ (chiral $SU(2) \times SU(2)$, top diagram) is ensured by limiting extrapolations in momenta $p,p'$ to $O(m_\pi)$ (not $O(m_K)$). In conventional three-flavor theory $\chi$PT$_3$ (middle diagram), there is {\it no scale separation}: the non-NG boson $f_0(500)$ sits in the middle of the NG sector $\{\pi, K, \eta\}$. Our three-flavor proposal $\chi$PT$_\sigma$ (bottom diagram) for $O(m_K)$ extrapolations in momenta implies a clear scale separation between the NG sector $\{\pi,K,\eta,\sigma = f_0\}$ and the non-NG sector $\{\rho,\omega, K^*\!,N,\eta',\ldots\}$.} \label{fig:goldstone} \end{figure} \cite{Wein66,DaWe69,Pagels75,Gasser83,Gasser84}: an asymptotic expansion for the limit $m_{u,d} \to 0$ with $m_s \not= 0$ and (crucially) momentum extrapolations limited to $O(m_\pi)$. There are only three NG bosons $\{\pi^+, \pi^0, \pi^-\}$, with \emph{no dilaton}: $\chi$PT$_2$ is not sensitive to the behavior of $\beta$ because of the relatively large term $m_s\bar{s}s$ in Eq.~(\ref{eqn:anomaly}) for $\theta^\mu_\mu$. Since $s$ is not treated as a light quark, the $K$ and $\eta$ mesons as well as $f_0, \rho, \omega, N, \eta' \ldots$ are excluded from the $\chi$PT$_2$ NG sector. If there is an $O(m_K)$ extrapolation in momentum, $\chi$PT$_2$ is \emph{not} sufficient. Three-flavor contributions must be included, either as large dispersive extrapolations, or with $\chi$PT$_2$ replaced by a three-flavor chiral expansion: $\chi$PT$_3$ \cite{Pagels75,GMOR68,Gasser82,Gasser85,Scherer12} or $\chi$PT$_\sigma$. 
An $O(m_K)$ extrapolation may arise because $K$ or $\eta$ is soft, or because the pion momenta in (say) $\pi\pi \to \pi\pi$ or $\gamma\gamma \to \pi\pi$ are chosen to be $O(m_K)$, or because of a kinematic constraint. A well known example is the fact that $\chi$PT$_2$ says almost nothing about $K_S \to \pi\pi$: if one pion becomes soft, the momentum difference between on-shell states $|K\rangle$ and $|\pi\rangle$ is necessarily $O(m_K)$. An example of interest in Sec.~\ref{weak_emag} is the pion-loop result \cite{DAm86} for $K_S \to \gamma\gamma$, which is not implied by $\chi$PT$_2$: a three-flavor expansion is necessary. Both $\chi$PT$_3$ and $\chi$PT$_\sigma$ involve the limit% \footnote{We require $m_s > m_{u,d}$ throughout. Double asymptotic series can be considered for either $\chi$PT$_2$ and $\chi$PT$_3$ \cite{Gasser85,Gasser07} or $\chi$PT$_2$ and $\chi$PT$_\sigma$. The unusual limit $m_s \to 0$ for fixed $m_{u,d} \not= 0$ considered in Sec.~4 of \cite{Nebreda10} does not produce any NG bosons.} \begin{equation} m_i \sim 0\ ,\ m_i/m_j \mbox{ fixed, } i,j = u,d,s. \label{3-flavor}\end{equation} In each case, amplitudes are expanded in powers and logarithms of \begin{equation} \{\mbox{momenta}\}\bigl/\chi_\mathrm{ch} \ll 1 \label{irscale}\end{equation} where the infrared mass scale $\chi_\mathrm{ch} \approx 1 \mbox{ GeV}$ is set by the chiral condensate $\langle \bar{q}q \rangle_{\mathrm{vac}}$. In $\chi$PT$_3$, $\chi_\mathrm{ch}$ is $4\pi F_\pi$ \cite{ManGeo}, where $F_\pi = 93$ MeV is the pion decay constant; a similar result will be found for $\chi$PT$_\sigma$ in Sec.~\ref{strong}. The chiral scale $\chi_\mathrm{ch}$ also sets the mass scale of particles outside the corresponding NG sectors.% \footnote{Except for glueballs, if they exist. 
In $\chi$PT$_\sigma$, they may have large masses due to gluonic scale condensates such as $\langle G^2 \rangle_\mathrm{vac}$.} For nucleons with mass $M_N$, this is evident from the Goldberger-Treiman relation \begin{equation} F_\pi g_{\pi NN} \simeq g_A M_N \,. \label{GT}\end{equation} It is essential \cite{ManGeo} to make a clear distinction between the low-energy scale $\chi_\mathrm{ch}$ and the ultraviolet QCD scale $\Lambda_\mathrm{QCD} \approx 200$ MeV associated with expansions in the asymptotically free domain \begin{equation} \{\mbox{momenta}\}\bigl/\Lambda_\mathrm{QCD} \gg 1. \label{uvscale}\end{equation} Strong gluonic fields are presumably responsible for both scales, but that does not mean that the dimensionless ratio \begin{equation} \chi_\mathrm{ch}\bigl/\Lambda_\mathrm{QCD} \approx 5 \end{equation} has to be 1. The difference between $\chi$PT$_3$ and $\chi$PT$_\sigma$ can be seen in the relation between hadronic masses and terms in Eq.~(\ref{eqn:anomaly}) for $\theta^\mu_\mu$. In $\chi$PT$_3$, there is no sense in which the gluonic trace anomaly is small. For example, the gluonic anomaly is taken to be responsible for most of the nucleon's mass: \begin{equation} M_N = \langle N|\theta^\mu_\mu |N\rangle \underset{\chi\mathrm{PT}_3}{=} \frac{\beta(\alpha_s)}{4\alpha_s}\langle N|G^2 |N\rangle + O\bigl(m_K^2\bigr) \,. \end{equation} This assumes that $f_0(500)$ pole terms can be neglected, or equivalently, given that $f_0$ is so light on the mass scale for non-NG particles set by $\chi_\mathrm{ch}$, that $f_0$ couples weakly to $G^2$ and $\bar{q}q$. As noted in Fig.~\ref{fig:goldstone}, the small $f_0$ mass implies that $\chi$PT$_3$ has no scale separation, which (as we have seen) is a problem because $f_0$ couples so strongly to other particles. 
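As a quick numerical sketch, the Goldberger-Treiman relation (\ref{GT}) can be checked with commonly quoted reference values (the inputs below are standard compilation numbers, not fitted quantities of this paper):

```python
# Check of the Goldberger-Treiman relation (GT): F_pi * g_piNN ~ g_A * M_N.
# Reference values (GeV where dimensionful): F_pi = 93 MeV, g_piNN ~ 13.1,
# g_A ~ 1.27, M_N ~ 939 MeV.
F_pi, g_piNN = 0.093, 13.1
g_A, M_N = 1.27, 0.939

lhs = F_pi * g_piNN          # ~1.22 GeV
rhs = g_A * M_N              # ~1.19 GeV
rel_dev = abs(lhs - rhs) / rhs
print(lhs, rhs, rel_dev)     # agreement at the few-percent level
```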
Contrast this with $\chi$PT$_\sigma$, where the infrared regime \begin{equation} O(m_K) \mbox{ momenta } \ll \chi_\mathrm{ch} \end{equation} emphasizes values of $\alpha_s$ close to $\alpha^{}_\mathrm{IR}$, so a combined limit \begin{equation} m_{u,d,s} \sim 0 \quad\mbox{and}\quad \alpha_s \lesssim \alpha^{}_\mathrm{IR} \end{equation} must be considered. Since $\beta(\alpha_s)$ is small, the gluonic trace anomaly is small \emph{as an operator}, but it can produce large amplitudes when coupled to dilatons. \begin{figure}[t] \center\includegraphics[scale=.7]{g_sigNN.eps} \caption{Dominant $\sigma$-pole diagram in $\chi$PT$_\sigma$ for $\langle N|\theta^\mu_\mu |N\rangle$.} \label{fig:g_sigNN} \end{figure}% Consider how $M_N$ arises in $\chi$PT$_\sigma$ (Fig.~\ref{fig:g_sigNN}). Like other pseudo-NG bosons, $\sigma$ couples to the vacuum via the divergence of its symmetry current, \begin{equation} \langle\sigma | \theta^\mu_\mu |\mbox{vac}\rangle = - m_\sigma^2 F_\sigma = O(m_\sigma^2)\,, \ m_\sigma \to 0 \label{dilaton}\end{equation} where $F_\sigma$ is the dilaton decay constant. The nucleon remains massive in the scaling limit because of its coupling $-g_{\sigma NN}\sigma\bar{N}N$ to $\sigma$ and the factor $-i/m_\sigma^2$ produced by the $\sigma$ pole at zero momentum transfer. This gives rise to the well-known analogue \cite{Carr71} \begin{equation} F_\sigma g_{\sigma NN} \simeq M_N \label{scalarGT}\end{equation} of the Goldberger-Treiman relation (\ref{GT}). In our scheme, both the gluonic anomaly and the quark mass term in Eq.~(\ref{eqn:anomaly}) for $\theta^\mu_\mu$ can contribute to $M_N$ in the chiral-scale limit (\ref{scale}). That is because we require% \footnote{In principle, we could have constructed a chiral-scale perturbation theory with $m_\sigma$ and $m_K$ as independent expansion parameters, but that would make sense only if there were a fourth light quark or different low-energy scales for chiral and scale expansions.
Fig.~\ref{fig:goldstone} provides clear confirmation that the choice $m_\sigma = O(m_K)$ is sensible.\label{one_scale}} \begin{equation} m_\sigma^2 = O(m_K^2) = O(m_{u,d,s})\,, \label{dilaton-mass} \end{equation} which allows the constants $F_{G^2}$ and $F_{\bar{q}q}$ given by \begin{align} \beta(\alpha_s)\bigl/(4\alpha_s)\langle\sigma | G^2 |\mbox{vac}\rangle &= - m_\sigma^2 F_{G^2} \,, \nonumber \\ \{1 + \gamma_m(\alpha_s)\}\sum_{q = u,d,s}m_q \langle\sigma | \bar{q}q |\mbox{vac}\rangle &= - m_\sigma^2 F_{\bar{q}q} \label{constants}\end{align} to remain finite in that limit: \begin{equation} M_N \simeq F_{G^2}g_{\sigma NN} + F_{\bar{q}q}g_{\sigma NN} \,. \end{equation} Suggestions that a resonance like $f_0(500)$ cannot be a pseudo-NG boson have no foundation. There can be no theorem to that effect because counterexamples such as our effective chiral-scale Lagrangian in Sec.~\ref{Lagrangian} are so easily constructed. It is true that in the symmetry limit where a NG boson becomes exactly massless, it has zero width, but that is because there is no phase space for it to decay into other massless particles. If phase space for strong decay is made available by explicit symmetry breaking and quantum number conservation allows it, a pseudo-NG boson will decay: \begin{equation} m_\sigma > 2 m_\pi \ \Rightarrow\ \mbox{ width } \Gamma_{\sigma \to \pi\pi} \not= 0 \,. \label{eqn:phase} \end{equation} Note that: \begin{itemize} \item Non-NG bosons need not be resonances; for example, $\eta'(960)$ is stable against strong decay. \item The resonance $f_0/\sigma$ becomes a massless NG boson \emph{only} if all three quarks $u,d,s$ become massless as $\alpha_s$ tends to $\alpha^{}_\mathrm{IR}$. In that combined limit, all particles except $\pi,K,\eta$ and $\sigma$ remain massive. Strong gluon fields set the scale of the condensate $\langle \bar{q}q \rangle_{\mathrm{vac}}$, which then sets the scale for massive particles and resonances except (possibly) glueballs. 
\item QCD at $\alpha_s = \alpha^{}_\mathrm{IR}$ resembles the physical theory (i.e.\ QCD for $0 < \alpha_s < \alpha^{}_\mathrm{IR}$) in the resonance region, but differs completely at high energies because it lacks asymptotic freedom. Instead, Green's functions scale asymptotically with nonperturbative anomalous dimensions in the ultraviolet limit. \end{itemize} Another key difference between $\chi$PT$_3$ and $\chi$PT$_\sigma$ becomes evident in the many-color limit $N_c \to \infty$ \cite{tHooft,Venez,Witten}. At issue is the quark content of the $f_0(500)$ resonance: is it a standard $q\bar{q}$ meson, or an exotic tetraquark state $q\bar{q}q\bar{q}$? In general, this is a model-dependent question; indeed the tetraquark idea was first proposed for the $0^+$ nonet in the context of the quark-bag model \cite{Jaffe}. However the large-$N_c$ limit permits conclusions which are far less model-dependent. In modern analyses of $\chi$PT$_3$, $f_0(500)$ is often considered to be a multi-particle state and so is not represented by a field in an effective Lagrangian. Instead, the $\chi$PT$_3$ expansion is unitarized, with $f_0$ identified as a resonating two-meson state produced by the unitarized structure. From that, the large-$N_c$ conclusion \cite{Pelaez11} \begin{equation} f_0 \sim \pi\pi \sim (q\bar{q})^2 \,, \ \mbox{ unitarized $\chi$PT$_3$} \end{equation} is drawn. This assumes from the outset that $f_0$ is \emph{not} a dilaton. The problem, already discussed at the beginning of this Section, is that the $\chi$PT$_3$ expansion diverges because it is dominated by these unitary ``corrections''\!. In $\chi$PT$_\sigma$, the large-$N_c$ properties of $f_0/\sigma$ are similar to those of pions, and are found by considering the two-point function of $\theta_{\mu\nu}$ instead of chiral currents. 
At large $N_c$, the spin-2 part is dominated by pure-glue states: \begin{equation} T\bigl\langle\mathrm{vac}\bigl|\theta_{\alpha\beta}\theta_{\mu\nu} \bigr|\mathrm{vac}\bigr\rangle^{}_\mathrm{spin-2} = O(N_c^2) \,. \end{equation} However, when the spin-0 part is projected out by taking the trace $\theta_\alpha^\alpha$, the quark term dominates the gluonic anomaly of Eq.~(\ref{eqn:anomaly}) at large $N_c$ because of the factor $\alpha_s \sim 1/N_c$ multiplying $G^2$. Thus we find \begin{equation} T\bigl\langle\mathrm{vac}\bigl|\theta^\alpha_\alpha\theta^\mu_\mu \bigr|\mathrm{vac}\bigr\rangle = O(N_c) \label{2pt}\end{equation} due to the quark term compared with $O(1)$ from the gluonic anomaly. Clearly, a $\sigma$ pole can be present only if $f_0/\sigma$ is a $q\bar{q}$ state. At zero momentum transfer, this pole contributes $m_\sigma^2F_\sigma^2$ to the amplitude (\ref{2pt}), from which we conclude \begin{equation} F_\sigma = O\bigl(\sqrt{N_c}\bigr) \,, \label{Fsigma} \end{equation} as for the pion decay constant $F_\pi$. We will see in Sec.~\ref{strong} that the dilaton, like other $q\bar{q}$ states, obeys the narrow width rule at large $N_c$. Sometimes pure-glue corrections in $f_0/\sigma$ are dominant. The most obvious example is the nucleon mass $M_N$, where the leading $O(N_c)$ contribution due to $q\bar{q}$ states is the numerically small two-flavor sigma term \begin{equation} \langle N|m_u\bar{u}u + m_d\bar{d}d|N \rangle \ll M_N \,. \end{equation} Therefore (as is generally agreed) most of $M_N$ comes from the $m_{u,d}$-independent term due to pure-glue exchange. In particular, the terms $\sim G^2$ and $m_s\bar{s}s$ in Eq.~(\ref{eqn:anomaly}) for $\theta^\mu_\mu$ couple to a nucleon only via pure-glue states. \section{Chiral-scale Lagrangian} \label{Lagrangian} Consider strong interactions at low energies $\alpha_s \lesssim \alpha^{}_\mathrm{IR}$ within the physical region \begin{equation} 0 < \alpha_s < \alpha^{}_{\mathrm{IR}} \,.
\label{phys}\end{equation} Let $d$ denote the scaling dimension of operators used to construct an effective chiral-scale Lagrangian. In general, there must be a scale-invariant term $\mathcal{L}_\mathrm{inv}$ with scaling dimension $d = 4$, a term $\mathcal{L}_\mathrm{mass}$ with dimension \cite{Wilson69} \begin{equation} d_\mathrm{mass} = 3 - \gamma_{m}\bigl(\alpha^{}_\mathrm{IR}\bigr)\ , \quad 1 \leqslant d_\mathrm{mass} < 4 \label{dim-mass} \end{equation} to simulate explicit breaking of chiral symmetry by the quark mass term, and a term $\mathcal{L}_\mathrm{anom}$ with dimension $d > 4$ to account for gluonic interactions responsible for the strong trace anomaly in Eq.~(\ref{eqn:anomaly}): \begin{equation} \mathcal{L}^{}_{\mbox{\small $\chi$PT$_\sigma$}} =\ :\mathcal{L}^{d=4}_\mathrm{inv} + \mathcal{L}^{d>4}_\mathrm{anom} + \mathcal{L}^{d<4}_\mathrm{mass}: \,. \label{Lagr}\end{equation} The anomalous part of $d_\mathrm{mass}$ is evaluated at $\alpha^{}_\mathrm{IR}$ because we expand in $\alpha_s$ about $\alpha^{}_\mathrm{IR}$. A proof that $\mathcal{L}_\mathrm{anom}$ has dimension $d > 4$ appears later in this Section. We restrict our analysis to the NG sector of $\chi$PT$_\sigma$ (Fig.~\ref{fig:goldstone}). Then operators in \begin{equation} \mathcal{L}^{}_{\mbox{\small $\chi$PT$_\sigma$}} = \mathcal{L}\bigl[\sigma,U,U^\dagger\bigr] \end{equation} are constructed from a QCD dilaton field $\sigma$ and the usual chiral $SU(3)$ field \begin{equation} U = U(\pi,K,\eta)\,, \ UU^\dagger = I \,. \end{equation} Scale and chiral transformations commute, so $\sigma$ is chiral invariant. The scale dimensions of $\pi,K,\eta$ and hence $U$ must be zero in order to preserve the range of field values on the coset space $SU(3)_L\times SU(3)_R/SU(3)_V$ \cite{Chiv89}. 
In Eq.~(\ref{Lagr}), both $\mathcal{L}_\mathrm{inv}$ and $\mathcal{L}_\mathrm{anom}$ are $SU(3)_L \times SU(3)_R$ invariant, while $\mathcal{L}_\mathrm{mass}$ belongs to the representation $(\mathbf{3},\bar{\mathbf{3}})\oplus(\bar{\mathbf{3}},\mathbf{3})$ associated with the $\pi,K,\eta$ (mass)$^2$ matrix $M$. In lowest order, with $M$ diagonalized, \begin{equation} M = \frac{F_\pi^2}{4}\left(\begin{matrix} m_\pi^2 & 0 & 0 \\ 0 & m_\pi^2 & 0 \\ 0 & 0 & 2m_K^2 - m_\pi^2 \end{matrix}\right) \label{mass_matrix}\end{equation} the vacuum condition for $U$ is \begin{equation} U \to I \ \mbox{ for } \ \pi,K,\eta \to 0 \,. \label{vac_cond} \end{equation} The dimension of $\mathcal{L}_\mathrm{anom}$ can be found from the scaling Ward identities (Callan-Symanzik equations) \begin{equation} \bigg\{ \mu\frac{\partial}{\partial\mu} + \beta(\alpha_s)\frac{\partial}{\partial\alpha_s} + \gamma_{m}(\alpha_s)\sum_q m_q\frac{\partial\ }{\partial m_q} \bigg\}{\cal A} = 0 \label{CSeqn} \end{equation} for renormalization-group invariant QCD amplitudes $\mathcal{A}$. The term $\beta\partial/\partial\alpha_s$ corresponds to the gluonic anomaly in Eq.~(\ref{eqn:anomaly}), so the effect of $\alpha_s\partial/\partial\alpha_s$ on $\mathcal{A}$ is to insert the QCD operator $G^2 = G_{\mu\nu}^aG^{a\mu\nu}$ at zero momentum transfer. Applying $\alpha_s\partial/\partial\alpha_s$ to Eq.~(\ref{CSeqn}), \begin{align} \biggl\{ \mu\frac{\partial\ }{\partial\mu} + &\beta(\alpha_s)\frac{\partial\ }{\partial\alpha_s} + \beta'(\alpha_s) - \beta(\alpha_s)\bigl/\alpha_s\biggr\} \alpha_s\frac{\partial{\cal A}}{\partial\alpha_s} \nonumber \\ &= -\alpha_s\frac{\partial\ }{\partial\alpha_s} \sum_q \gamma_{m}(\alpha_s) m_q\frac{\partial{\cal A_{}}}{\partial m_q} \label{del_A} \end{align} we see that the anomalous dimension function for $G^2$ is \begin{equation} \gamma^{}_{G^2}(\alpha_s) = \beta'(\alpha_s) - \beta(\alpha_s)/\alpha_s \,. 
\label{gamma} \end{equation} Hence, to lowest order in the expansion $\alpha_s \lesssim \alpha^{}_\mathrm{IR}$, ${\cal L}_\mathrm{anom}$ has a positive anomalous dimension equal to the slope of $\beta$ at the fixed point (Fig.~\ref{fig:beta}): \begin{equation} d_\mathrm{anom} = 4 + \beta'\bigl(\alpha^{}_\mathrm{IR}\bigr) > 4\,. \label{dim-anom}\end{equation} As $\alpha_s \to \alpha^{}_\mathrm{IR}$, the gluonic anomaly vanishes, so for consistency,\footnotemark[10] we must require terms in ${\cal L}_\mathrm{anom}$ to involve derivatives $\partial\partial = O(M)$ or have $O(M)$ coefficients: \begin{equation} {\cal L}_\mathrm{anom} = O(\partial^2, M) \,. \label{L_anom}\end{equation} The result is a chiral-scale perturbation expansion $\chi$PT$_\sigma$ about $\alpha^{}_\mathrm{IR}$ with QCD dilaton mass $m_\sigma = O(m_K)$. An explicit formula for the $\chi$PT$_\sigma$ Lagrangian (\ref{Lagr}) can be readily found by following the approach of Ellis \cite{Ell70,Ell71}. Let $F_\sigma$ be the coupling of $\sigma$ to the vacuum via the energy momentum tensor $\theta_{\mu\nu}$, improved \cite{CCJ70} when spin-0 fields are present: \begin{equation} \langle\sigma(q)|\theta_{\mu\nu}|\mathrm{vac}\rangle = (F_\sigma/3)\bigl( q_\mu q_\nu - g_{\mu\nu}q^2\bigr) \,. \label{b}\end{equation} When conformal symmetry is realized nonlinearly \cite{Salam}, a dilaton field $\sigma$ is needed to create connection terms $\sim \partial\sigma$ in covariant derivatives. It transforms as \begin{equation} \sigma \to \sigma - \tfrac{1}{4}F_\sigma \log \big|\det (\partial x'/\partial x) \big| \label{sigma_scale}\end{equation} under conformal transformations $x \to x'$, which corresponds to scale dimension 1 for the covariant field $e^{\sigma/F_\sigma}$.
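As a numerical illustration of Eqs.~(\ref{gamma}) and (\ref{dim-anom}): at an infrared zero of $\beta$, the $\beta/\alpha_s$ term drops out of $\gamma^{}_{G^2}$, leaving the slope $\beta'(\alpha^{}_\mathrm{IR})$. The toy $\beta$-function below is invented purely for this sketch (it is \emph{not} the QCD $\beta$-function):

```python
# Toy beta function with an infrared zero at alpha = alpha_IR (assumed form).
b, alpha_IR = 0.7, 1.0

def beta(a):
    return -b * a**2 * (1.0 - a / alpha_IR)

def beta_prime(a, h=1e-6):
    # central finite difference for the slope of beta
    return (beta(a + h) - beta(a - h)) / (2.0 * h)

def gamma_G2(a):
    # anomalous dimension of G^2, Eq. (gamma)
    return beta_prime(a) - beta(a) / a

# At the fixed point beta vanishes, so gamma_G2 reduces to beta' > 0,
# giving d_anom = 4 + beta'(alpha_IR) > 4 as in Eq. (dim-anom).
slope = beta_prime(alpha_IR)
d_anom = 4.0 + slope
print(beta(alpha_IR), slope, gamma_G2(alpha_IR), d_anom)
```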
The dimensions of $\chi$PT$_3$ Lagrangian operators such as \begin{equation} \mathcal{K}\bigl[U,U^\dagger\bigr] = \tfrac{1}{4}F_{\pi}^{2}\mathrm{Tr}(\partial_{\mu} U\partial^{\mu}U^{\dagger}) \end{equation} and the dilaton operator $\mathcal{K}_\sigma = \frac{1}{2}\partial_{\mu}\sigma\partial^{\mu}\sigma$ can then be adjusted by powers of $e^{\sigma/F_\sigma}$ to form terms in $\mathcal{L}$. In lowest order, \begin{align} &\mathcal{L}^{d=4}_\mathrm{inv,\,LO} = \bigl\{c_{1}\mathcal{K} + c_{2}\mathcal{K}_\sigma + c_{3}e^{2\sigma/F_{\sigma}}\bigr\}e^{2\sigma/F_{\sigma}} \,, \notag \\[1mm] &\mathcal{L}^{d>4}_\mathrm{anom,\,LO} \notag \\ &= \bigl\{(1-c_{1})\mathcal{K} + (1-c_{2})\mathcal{K}_\sigma + c_4 e^{2\sigma/F_{\sigma}}\bigr\}e^{(2+\beta')\sigma/F_{\sigma}} \,, \notag \\[1mm] &\mathcal{L}^{d<4}_\mathrm{mass,\,LO} = \mathrm{Tr}(MU^{\dagger}+UM^{\dagger})e^{(3-\gamma_{m})\sigma/F_{\sigma}} \,, \label{Lstr} \end{align} where $\beta'$ and $\gamma_{m}$ are the anomalous dimensions $\beta'(\alpha^{}_\mathrm{IR})$ and $\gamma_{m}(\alpha^{}_\mathrm{IR})$ of Eqs.\ (\ref{dim-anom}) and (\ref{dim-mass}). The constants $c_{1}$ and $c_{2}$ are not fixed by general arguments, while $c_3$ and $c_4$ depend on the vacuum condition chosen for the field $\sigma$. The role of $c_3$ and $c_4$ is to fix the scale of $e^{\sigma/F_\sigma}$, just as the (mass)$^2$ matrix fixes the chiral $SU(3)$ direction of $U$ (Eqs.~(\ref{mass_matrix}) and (\ref{vac_cond})). The simplest choice of field variables% \footnote{On-shell amplitudes do not depend on how the field variables are chosen \cite{Chi61,Kam61}.} is to have all NG fields $\sigma, \pi, K, \eta$ fluctuate about zero. 
For the vacuum to be stable in the $\sigma$ direction at $\sigma = 0$, Lagrangian terms linear in $\sigma$ must cancel: \begin{align} 4c_3 + (4+\beta')c_4 &= - (3-\gamma_{m})\bigl\langle\mathrm{Tr} (MU^{\dagger}+UM^{\dagger})\bigr\rangle_{\mathrm{vac}} \notag \\ &= - (3-\gamma_{m})F_\pi^2\bigl(m_K^2 + \tfrac{1}{2}m_\pi^2\bigr)\,. \label{stable}\end{align} Eqs.~(\ref{L_anom}) and (\ref{stable}) imply that both $c_3$ and $c_4$ are $O(M)$. Evidently $\chi$PT$_\sigma$ is a simple extension of the conventional three-flavor theory $\chi$PT$_3$. The $\chi$PT$_\sigma$ Lagrangian defined by Eqs.~(\ref{Lagr}) and (\ref{Lstr}) satisfies the condition \begin{equation} {\cal L}^{}_{\mbox{\small $\chi$PT$_\sigma$}} \to {\cal L}^{}_{\mbox{\small $\chi$PT$_3$}} \,, \ \sigma \to 0 \end{equation} and hence preserves the phenomenological success of \emph{lowest order} $\chi$PT$_3$ for amplitudes which do not involve the $0^{++}$ channel (Sec.~\ref{Motiv}). In next to lowest order, new chiral-scale loop diagrams involving $\sigma$ need to be checked. The $\chi$PT$_\sigma$ Lagrangian obeys the standard rule that each term ${\cal L}_d$ of dimension $d$ contributes $(d-4){\cal L}_d$ to the trace of the effective energy-momentum tensor: \begin{equation} \left.\theta^\mu_\mu\right|_\mathrm{eff} =\ :\beta'\mathcal{L}^{d>4}_\mathrm{anom} - (1+\gamma_{m})\mathcal{L}^{d<4}_\mathrm{mass}: \,. \label{eff-tr} \end{equation} Note that the critical exponent $\beta'$ normalizes the gluonic term in $\theta^\mu_\mu$. 
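The second line of Eq.~(\ref{stable}) is just the trace of the mass matrix (\ref{mass_matrix}) evaluated at the vacuum $U = I$; a short numerical check (physical meson masses used as illustrative inputs):

```python
# Verify <Tr(M U^dag + U M^dag)>_vac = F_pi^2 (m_K^2 + m_pi^2/2) for U = I
# and the diagonal mass matrix (mass_matrix).  Units: GeV.
F_pi, m_pi, m_K = 0.093, 0.138, 0.496

# Diagonal of M = (F_pi^2/4) diag(m_pi^2, m_pi^2, 2 m_K^2 - m_pi^2)
M_diag = [F_pi**2 / 4.0 * x for x in
          (m_pi**2, m_pi**2, 2.0 * m_K**2 - m_pi**2)]

trace_term = 2.0 * sum(M_diag)                    # Tr(M + M) at U = I
closed_form = F_pi**2 * (m_K**2 + 0.5 * m_pi**2)
print(trace_term, closed_form)                    # the two agree identically
```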
\section{Strong Interactions} \label{strong} In lowest order, $\cal L$ gives formulas for the $\sigma\pi\pi$ coupling \begin{equation} \mathcal{L}_{\sigma\pi\pi} = \bigl\{\bigl(2+(1-c_1)\beta'\bigr)|\partial\bm{\pi}|^2 - (3 - \gamma_{m})m_\pi^2|\bm{\pi}|^2\bigr\}\sigma/(2F_\sigma) \label{Lsigpi} \end{equation} and dilaton mass $m_\sigma$ \begin{equation} m_\sigma^2 F_\sigma^2 = F_\pi^2\bigl(m_K^2 + \tfrac{1}{2}m_\pi^2\bigr)(3 - \gamma_{m})(1 + \gamma_{m}) - \beta'(4 + \beta')c_4 \label{mass}\end{equation} which resemble pre-QCD results \cite{Ell70,RJC70,Ell71,Klein71} but have extra gluonic terms proportional to $\beta'$. For consistency with data, we must assume that the unknown coefficient $2+(1-c_1)\beta'$ in Eq.~(\ref{Lsigpi}) does not vanish accidentally. That preserves the key feature of the original work, that $\mathcal{L}_{\sigma\pi\pi}$ is mostly \emph{derivative}: for soft $\pi\pi$ scattering (energies $\sim m_\pi$), the dilaton pole amplitude is negligible because the $\sigma\pi\pi$ vertex is $O(m_\pi^2)$, while the $\sigma\pi\pi$ vertex for an on-shell dilaton \begin{equation} g_{\sigma\pi\pi} = -\bigl(2+(1-c_1)\beta'\bigr)m_\sigma^2/(2F_\sigma) + O(m_\pi^2) \label{on-shell} \end{equation} is $O(m_\sigma^2)$, consistent with $\sigma$ being the broad resonance $f_0(500)$. Comparisons with data require an estimate of $F_\sigma$, most simply from $NN$ scattering and the dilaton relation (\ref{scalarGT}). The data imply \cite{CC08} a mean value $g_{\sigma NN} \sim 9$ and hence $F_{\sigma} \sim 100\,\mathrm{MeV}$ but with an uncertainty which is either model-dependent or very large ($\approx 70\%$). That accounts for the large uncertainty in \begin{equation} 1\tfrac{1}{2} \lessapprox |2 + (1-c_{1})\beta'| \lessapprox 6 \label{inequal} \end{equation} when we compare Eq.~(\ref{on-shell}) with data \cite{Cap06}: \begin{equation} |g_{\sigma\pi\pi}| = 3.31^{+0.35}_{-0.15}\mbox{ GeV, and } m_{\sigma} \approx 441\,\mathrm{MeV}. 
\label{numbers} \end{equation} The convergence of a chiral-scale expansion can be tested by adding $\sigma$-loop diagrams to the standard $\chi$PT$_3$ analysis \cite{Gasser85}. These involve the (as yet) undetermined constants $\beta',\gamma_{m},c_{1\ldots 4}$: for example, corrections to $g_{\sigma\pi\pi}$ involve the $\sigma\sigma\sigma$ and $\sigma\sigma\pi\pi$ vertices derived from Eq.~(\ref{Lstr}). However a numerical estimate of scales associated with the expansion can be obtained using the dimensional arguments of Manohar and Georgi \cite{ManGeo}. The idea is to count powers of dimensionful quantities $F_\pi$ and (for $\chi$PT$_\sigma$) $F_\sigma$ associated with the quark condensate $\langle \bar{q}q \rangle_{\mathrm{vac}}$, and keep track of powers of $4\pi$ arising from loop integrals. To illustrate their point, Manohar and Georgi considered loop corrections to $\pi\pi$ scattering, such as the first diagram in Fig.\ \ref{fig:pipi_scatter}, for which they obtained the estimate \begin{equation} {\cal A}_\mathrm{loop}\bigl/{\cal A}_\mathrm{tree} \sim \frac{1}{16\pi^2 F_\pi^2} \times \mbox{logarithms}. \end{equation} In our scheme, we must add contributions \begin{equation} \sim \biggl\{\frac{1}{16\pi^2 F^2_\sigma}\,\mbox{ and } \frac{F_\pi^2}{16\pi^2 F_\sigma^4}\biggr\} \times \mbox{logarithms} \end{equation} from e.g.\ the second and third graphs of Fig.\ \ref{fig:pipi_scatter}. As a result, we find that there are in principle \emph{two} $\chi$PT$_\sigma$ scales \begin{equation} \chi_\pi = 4\pi F_\pi \ \mbox{ and } \ \chi_\sigma = 4\pi F_\sigma\,. \end{equation} The rough estimate of 100 MeV for $F_\sigma$ (close to $F_\pi \simeq$ 93 MeV) indicates that in effect, there is a single infrared mass scale \begin{equation} \chi_\pi \approx \chi_\sigma \approx 1 \mbox{ GeV} \end{equation} as foreshadowed in Eq.~(\ref{irscale}). 
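The numerical inputs quoted in this Section can be assembled in one place. Assuming $g_{\sigma NN} \sim 9$ and the dilaton relation (\ref{scalarGT}), one recovers $F_\sigma \sim 100$ MeV, nearly degenerate scales $\chi_\pi \approx \chi_\sigma$, and a value of $|2+(1-c_1)\beta'|$ inside the range (\ref{inequal}):

```python
import math

# Units: GeV.  Inputs as quoted in the text: g_sigmaNN ~ 9, F_pi = 93 MeV,
# and the central values |g_sigmapipi| = 3.31 GeV, m_sigma = 441 MeV.
M_N, g_sigmaNN, F_pi = 0.939, 9.0, 0.093
g_sigmapipi, m_sigma = 3.31, 0.441

F_sigma = M_N / g_sigmaNN            # dilaton relation (scalarGT): ~104 MeV
chi_pi = 4.0 * math.pi * F_pi        # ~1.17 GeV
chi_sigma = 4.0 * math.pi * F_sigma  # ~1.31 GeV

# Invert the on-shell coupling (on-shell), dropping the O(m_pi^2) term:
coeff = 2.0 * F_sigma * g_sigmapipi / m_sigma**2   # |2 + (1-c_1) beta'| ~ 3.5
print(F_sigma, chi_pi, chi_sigma, coeff)
```

The central value $\approx 3.5$ sits comfortably inside the range (\ref{inequal}); the width of that range reflects the large ($\approx 70\%$) uncertainty in $F_\sigma$.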
\begin{figure}[t] \center\includegraphics[scale=0.6]{pipi_scatter} \caption{Examples of NLO $\chi$PT$_\sigma$ graphs in the chiral-scale expansion of $\pi\pi$ scattering for $O(m_K)$ momenta. Each vertex is generated by the lowest order terms (\ref{Lstr}) in $\mathcal{L}$. Not shown are additional diagrams involving the self-energy of the $\sigma$ propagator, and internal $\sigma$ lines which connect one external $\pi$ leg to another. Similar diagrams are found for the $t$ and $u$ channels.} \label{fig:pipi_scatter} \end{figure} Numerology which ignores factors of $4\pi$ can be as misleading in $\chi$PT$_\sigma$ as in $\chi$PT$_3$. The most important example of this arises from the observation that $f_0(500)$ is almost as broad as it is heavy. Does this mean that the width of $f_0(500)$ is a \emph{lowest} order effect, i.e.\ of the same order in $m_\sigma$ as the real part of the mass? If so, would not that invalidate PCDC (partial conservation of the dilatation current), where dominance by a \emph{real} pole is assumed for the lowest order? To see that the answer is ``no'', let us estimate the $\sigma$ width $\Gamma_{\sigma\pi\pi}$ in the spirit of Manohar and Georgi. We find \begin{equation} \Gamma_{\sigma\pi\pi} \approx \frac{|g_{\sigma\pi\pi}|^2}{16\pi m_\sigma} \sim \frac{m_\sigma^3}{16\pi F_\sigma^2} \sim 250 \mbox{ MeV} \label{width}\end{equation} so $\Gamma_{\sigma\pi\pi}$ is $O(m_\sigma^3)$ and hence \emph{nonleading} relative to the mass $m_\sigma$. We are therefore justified in using just tree diagrams to generate the lowest order% \footnote{Beyond lowest order, and in degenerate cases like the $K_{L}$--$K_{S}$ mass difference, methods used to estimate corrections at the $Z^0$ peak \cite{ZOphys} and the $\rho$ resonance \cite{Scherer} may be necessary.\label{degen}} of $\chi$PT$_\sigma$, as in $\chi$PT$_2$ and $\chi$PT$_3$. (The main exception to this rule, for two-photon channels, is discussed in Sec.~\ref{Electromag} and Appendix \ref{AppB}.) 
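The estimate (\ref{width}) can be checked in one line; a sketch of the arithmetic (using the $F_\sigma \approx 93$--$100$ MeV values discussed above; this is an order-of-magnitude statement, so the exact figure depends on which inputs are inserted):

```python
import math

m_sigma = 0.441                  # GeV
for F_sigma in (0.093, 0.100):   # GeV; F_sigma itself is uncertain at the ~70% level
    # Manohar-Georgi style estimate: Gamma ~ m_sigma^3 / (16 pi F_sigma^2)
    width = m_sigma**3 / (16 * math.pi * F_sigma**2)
    print(f"F_sigma = {1e3 * F_sigma:.0f} MeV -> Gamma ~ {1e3 * width:.0f} MeV")
```

Either input gives a width of a few hundred MeV, i.e.\ $O(m_\sigma^3)$ and numerically below $m_\sigma$ itself, which is the point of Eq.~(\ref{width}).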
Pure numerology fails because $F_\sigma$ in the denominator of (\ref{width}) is an order of magnitude smaller than $\chi_{\pi,\sigma}$. In the large-$N_c$ limit, as shown in Sec.~\ref{Motiv}, the dilaton behaves as a $q\bar{q}$ state. It follows that the gluonic corrections $\sim (1-c_1)\beta'$ in Eq.~(\ref{on-shell}) for the $\sigma\pi\pi$ coupling correspond to disconnected quark diagrams, so they are nonleading \begin{equation} (1-c_1)\beta' = O\bigl(1\bigl/N_c\bigr) \end{equation} and the pre-QCD result \cite{Ell70,RJC70} \begin{equation} F_\sigma g_{\sigma\pi\pi} \approx - m_\sigma^2 \end{equation} is recovered for $N_c$ large. It follows from Eq.~(\ref{Fsigma}) that $\sigma$ decouples from $\pi\pi$ at large $N_c$: \begin{equation} g_{\sigma\pi\pi} = O\bigl(1/\sqrt{N_c}\bigr) \,. \end{equation} Hence, like other $q\bar{q}$ states, the dilaton $\sigma$ obeys the narrow width rule \begin{equation} \Gamma_{\sigma\pi\pi} = O(1/N_c) \,. \end{equation} The technique used to obtain Eq.~(\ref{Lstr}) from $\chi$PT$_3$ works equally well for higher order terms in strong interactions, and also for external operators induced by electromagnetic or weak interactions (Sects.~\ref{Electromag} and \ref{weak_emag}). In general, NLO terms in the strong interaction Lagrangian $\cal L$ are $O(\partial^4, M\partial^2, M^2)$. For example, let us construct $O(\partial^4)$ terms from the $\chi$PT$_3$ operator $(\mbox{Tr}\partial U\partial U^\dagger)^2$. It has dimension 4 already, so it appears unchanged in the scale-symmetric term \begin{equation} {\cal L}^{d=4}_{\mathrm{inv,\,NLO}} = \{\mbox{coefficient}\} (\mbox{Tr}\partial U\partial U^\dagger)^2 + \ldots \label{invNLO}\end{equation} i.e.\ without $\sigma$ field dependence. The anomalous term has dimension greater than 4, so it depends on $\sigma$: \begin{equation} {\cal L}^{d>4}_{\mathrm{anom,\,NLO}} = \{\mbox{coefficient}\}\bigl(\mbox{Tr}\partial U\partial U^\dagger\bigr)^2 e^{\beta'\sigma/F_\sigma}. 
\label{d>4NLO}\end{equation} The difference between $\chi$PT$_3$ and $\chi$PT$_\sigma$ is summarized in Fig.~\ref{fig:compare}. See Appendix \ref{AppA} for a discussion of \begin{figure}[b] \centering\includegraphics[scale=1]{chiPT_expansions} \caption{Comparison of $\chi$PT$_3$ and $\chi$PT$_\sigma$. The $f_0/\sigma$ pole terms responsible for the poor convergence of $\chi$PT$_3$ are transferred to LO in $\chi$PT$_\sigma$, where they do not upset convergence.} \label{fig:compare} \end{figure} power counting for $\chi$PT$_\sigma$ loop expansions. \section{Resonance Saturation in $\bm{\chi}$PT$_\sigma$} \label{saturate} Conventional $\chi$PT$_3$ is often supplemented by a technique \cite{Eck89} in which the coefficients of $O(\partial^4) = O(m_K^4)$ terms are estimated by saturation with particles or resonances from the non-NG sector. This scheme can be readily adapted to $\chi$PT$_\sigma$, provided that the changed role of $f_0/\sigma$ is understood. Each non-NG particle or resonance of mass $M_\mathrm{res}$ gives rise to a pole factor which carries a linear combination $p = O(m_K)$ of the external momenta. The relevant coefficient is obtained from terms $\sim p^4/M^2_\mathrm{res}$ in \emph{heavy-particle} expansions of these pole factors \begin{align} p^4\bigl/\bigl(M^2_\mathrm{res} - p^2\bigr) = p^4/M^2_\mathrm{res} + p^6/M^4_\mathrm{res} + \ldots \ \mbox{ for } M_\mathrm{res} \gg p \,. \label{heavy}\end{align} These expansions are nonchiral, i.e.\ they are not light-particle, small-momentum expansions of the type (\ref{irscale}). Evidently this technique assumes a clear scale separation between the NG and non-NG sectors. Where does the $f_0(500)$ resonance fit into this scheme? Having it contribute as a light particle in chiral expansions and a heavy particle in Eq.~(\ref{heavy}) would be double counting. In $\chi$PT$_3$, the answer is that the $f_0(500)$ does not belong to the NG sector, so it is treated as a heavy resonance.
The obvious lack of scale separation with the $K,\eta$ NG bosons (Fig.~\ref{fig:goldstone}) makes this proposal unworkable. In $\chi$PT$_\sigma$, the problem disappears because $f_0/\sigma$ is assigned to the NG sector. Its contributions are already taken into account in chiral expansions, so logically, it must be \emph{excluded} from the saturation procedure of \cite{Eck89}. That is in line with the requirement that saturation be restricted to the \emph{non-NG sector}. Scale separation of the NG and non-NG sectors works well for $\chi$PT$_\sigma$ (Fig.~\ref{fig:goldstone}), so the heavy-particle conditions $M_\mathrm{res} \gg p$ for $p = O(m_K)$ are satisfied. In practice, $\chi$PT$_\sigma$ coefficients such as those in Eqs.~(\ref{invNLO}) and (\ref{d>4NLO}) are not easily evaluated, because the analysis requires data for soft $\sigma$ as well as soft $\pi,K,\eta$ amplitudes. \section{Electromagnetic properties of mesons} \label{Electromag} In $\chi$PT$_\sigma$, the electromagnetic interactions of NG bosons are of great interest because \begin{itemize} \item The amplitudes for $K_S \to \gamma\gamma$ and $\gamma\gamma \to \pi\pi$ can be used to analyse $K \to 2\pi$ (Sec.~\ref{weak_emag}). \item The electromagnetic trace anomaly (\ref{eqn:em_anomaly}) and hence the Drell-Yan ratio can be estimated at the infrared fixed point $\alpha_s = \alpha^{}_\mathrm{IR}$. \item In $\gamma\gamma$ channels, meson loops can produce Li-Pagels singularities $\sim 1/m^2_{\pi ,K, \sigma}$ and hence amplitudes which compete with $\sigma$-pole tree diagrams. \end{itemize} Photon interactions are introduced as in $\chi$PT$_3$, with the added requirement that the chiral singlet field $\sigma$ is gauge invariant. So under local $U(1)$ transformations, we have \begin{equation} \sigma \to \sigma\,,\quad U\to e^{-i\lambda(x)Q}Ue^{i\lambda(x)Q}\,, \end{equation} where $Q = \tfrac{1}{3}\mathrm{diag}(2,-1,-1)$ is the quark-charge matrix. 
Gauge invariance can be satisfied minimally by introducing a covariant derivative for $U$, \begin{equation} D_\mu U = \partial_\mu U +ieA_\mu [Q,U]\,, \label{DelU} \end{equation} where $A_\mu$ is the photon field. However this is not sufficient: it does not change the scaling properties of the effective Lagrangian, and so cannot produce an electromagnetic trace anomaly (\ref{eqn:em_anomaly}) proportional to $F_{\mu\nu}F^{\mu\nu}$. The operator $F_{\mu\nu}F^{\mu\nu}$ has dimension 4, so we need an action which, when varied, produces a scale \emph{invariant} result. This can happen only if the scaling property is \emph{inhomogeneous}. The $\sigma$ field has a scaling property (\ref{sigma_scale}) of that type, from which it is evident that the effective Lagrangian must contain a nonminimal term of the form \begin{equation} {\cal L}_{\sigma\gamma\gamma} = \tfrac{1}{4} g_{\sigma\gamma\gamma}\sigma F_{\mu\nu}F^{\mu\nu} \,. \label{non-min} \end{equation} This is the effective vertex first considered by Schwinger \cite{Schw51} in his study of the gauge invariance of fermion triangle diagrams. Originally, the electromagnetic trace anomaly (\ref{eqn:em_anomaly}) was derived in the context of broken scale invariance (before QCD and asymptotic freedom), so the ultraviolet limit defining the Drell-Yan ratio $R$ was nonperturbative. A comparison of Eqs.~(\ref{eqn:em_anomaly}) and (\ref{non-min}) in the tree approximation, or equivalently $\sigma$-pole dominance of $\theta^\mu_\mu$ (PCDC), led to the conclusion \cite{RJC72,Ell72} that the coupling of $\sigma$ to $\gamma\gamma$ is proportional to $R$. In the current context, there are two important modifications to this argument. The first is to identify ``$R$'' correctly. 
In the physical region $0 < \alpha_s < \alpha^{}_\mathrm{IR}$, asymptotic freedom controls the ultraviolet limit and produces a perturbative answer \begin{equation} R^{}_\mathrm{UV} = \sum\{\mbox{quark charges}\}^2 = 2\,, \ N_f = N_c = 3 \label{eqn:RUV} \end{equation} for $N_f = 3$ light flavors and $N_c = 3$ colors. However, the hard gluonic operator $G^2$ in $\theta^\mu_\mu$ prevents PCDC from being used to relate low-energy amplitudes to asymptotically free quantities like $R^{}_\mathrm{UV}$. Instead, in the lowest order of $\chi$PT$_\sigma$, we use amplitudes defined at the infrared fixed point where the gluonic trace anomaly vanishes and so PCDC can be tested. At the infrared fixed point $\alpha_s = \alpha^{}_\mathrm{IR}$, there is no asymptotic freedom, so the UV limit of $e^+ e^- \to$ hadrons produces a \textit{nonperturbative} value $R_\mathrm{IR}$ which has to be determined theoretically. Thus we expect $g_{\sigma\gamma\gamma}$ to be related to $R_\mathrm{IR}$. The second modification is a surprise. In $\gamma\gamma$ channels, meson-loop integrals produce inverse Li-Pagels singularities \mbox{$\sim M^{-1}$} in the chiral limit $M \sim 0$, where $M$ is the $\pi, K, \eta$ (mass)$^2$ matrix (\ref{mass_matrix}). These infrared singularities are strong enough to allow $\pi^\pm,K^\pm$ one-loop diagrams to have the {\it same} chiral order as tree amplitudes containing the anomalous vertex in (\ref{non-min}). This means that naive PCDC ($\sigma$-pole dominance) does not work when $\gamma\gamma$ channels are present; for example, the $\sigma \to \gamma\gamma$ coupling turns out to be proportional to $(R_\mathrm{IR} - 1/2)$, not $R_\mathrm{IR}$. Similar problems are not encountered for PCAC, partly because loop corrections to PCAC are limited by the negative parity of the corresponding Nambu-Goldstone bosons. It becomes less surprising when the power-counting rule (\ref{GLrule}) for electromagnetic corrections to $\chi$PT expansions is considered. 
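The free-field value (\ref{eqn:RUV}) is just the color-summed square of the light-quark charges; a one-line check with exact fractions:

```python
from fractions import Fraction

# R_UV = N_c * sum of squared charges for the N_f = 3 light flavors
N_c = 3
charges = [Fraction(2, 3), Fraction(-1, 3), Fraction(-1, 3)]  # u, d, s
R_UV = N_c * sum(q**2 for q in charges)
print(R_UV)  # 2
```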
A standard treatment of $\chi$PT \cite{Gasser84,Gasser85} is to require that the effective Lagrangian be invariant under {\it local} chiral $SU(N_f)_L \times SU(N_f)_R$ transformations. This requirement is satisfied minimally by replacing ordinary derivatives $\partial_\mu$ acting on $U$ fields with covariant ones \begin{equation} D_\mu U = \partial_\mu U - \tfrac{i}{2}(v_\mu+a_\mu) U + \tfrac{i}{2} U(v_\mu-a_\mu) \,, \label{eqn:cov D} \end{equation} where the gauge fields $v_\mu(x)$ and $a_\mu(x)$ transform inhomogeneously under the respective vector and axial-vector subgroups of $SU(N_f)_L \times SU(N_f)_R$. In order to match the chiral counting $\partial_\mu U = O(p)$ used by Weinberg \cite{Wei79} to study pure pion processes in $\chi$PT$_2$, Gasser and Leutwyler proposed the rule \cite{Gasser84,Gasser85} \begin{equation} a_\mu \sim v_\mu = O(p)\,. \label{chiral rule} \end{equation} For electromagnetic processes, this requires the photon field $A_\mu$ obtained from \begin{equation} v_\mu = -2eQA_\mu \qquad \mbox{and} \qquad a_\mu = 0, \end{equation} to be counted as $O(p)$. As a result, one-loop meson amplitudes which couple (say) $\sigma$ to any number of external photons are of the same chiral order, namely $O(p^4)$. In $\chi$PT$_\sigma$, where the global symmetry group includes dilatations, chiral gauge invariance is not sufficient to determine the chiral order for nonminimal operators such as (\ref{non-min}). In Appendix \ref{AppB}, we generalize the Gasser-Leutwyler analysis to cover such cases. As a result: \begin{enumerate} \item Both Eq.~(\ref{chiral rule}) and the rule $A_\mu = O(p)$ remain valid. \item The operator (\ref{non-min}) gives rise to an $O(p^4)$ vertex amplitude of the same chiral order as one-loop meson graphs for $\sigma \to \gamma\gamma$. \item In the presence of photons, $\chi$PT$_\sigma$ corrections to lowest-order tree and loop diagrams still converge: each additional loop is suppressed by a factor $\sim M\ln M$ or $M$.
\end{enumerate} In this Section, we consider lowest-order amendments to PCDC for the amplitude $\langle\gamma\gamma|\widetilde{\theta}^\mu_\mu|\mathrm{vac}\rangle$. Let $\gamma_i = \gamma(\epsilon_i,k_i)$ represent a photon with polarization $\epsilon_i$ and momentum $k_i$, and let $F(s)$ be the form factor defined by \begin{equation} \langle\gamma_1,\gamma_2|\widetilde{\theta}_{\mu}^{\mu}(0)|\mathrm{vac}\rangle = (\epsilon_{1}\cdot\epsilon_{2} k_{1}\cdot k_{2} -\epsilon_{1}\cdot k_{2}\epsilon_{2}\cdot k_{1}) F(s) \,. \label{form} \end{equation} The electromagnetic trace anomaly concerns the value of this form factor at $s=0$: \begin{equation} F(0) = -\tfrac{1}{3}\pi\alpha \int d^4x\, d^4y\, x\cdot y\, T \langle J^{\beta}(x) J_{\beta}(0) \theta_{\mu}^{\mu}(y) \rangle_{\mathrm{vac}} \,. \label{form'}\end{equation} At the fixed point $\alpha_s = \alpha^{}_{\mathrm{IR}}$, we have a theory of broken scale invariance, so the conditions of the derivations in \cite{RJC72,Ell72} are satisfied. The leading short-distance behavior of both $\langle J_\alpha J_\beta \theta_{\mu\nu}\rangle_{\mathrm{vac}}$ and $\langle J_\alpha J_\beta \rangle_{\mathrm{vac}}$ is conformal, with no anomalous dimensions because $J_\alpha$ and $\theta_{\mu\nu}$ are conserved, and the soft $d < 4$ trace $\theta^\mu_\mu$ ensures convergence of Eq.~(\ref{form'}) at $x \sim y \sim 0$. Therefore, we can write down an exact anomalous Ward identity\footnote{There is a similar result for $0 < \alpha_s < \alpha^{}_{\mathrm{IR}}$ which involves $R^{}_\mathrm{UV}$ but has no practical use. See Appendix \ref{AppC}.} \begin{equation} F(0) = \frac{2R^{}_{\mathrm{IR}}\alpha}{3\pi}\,, \ \alpha_s = \alpha^{}_{\mathrm{IR}}\,.
\label{eqn:rjcform} \end{equation} The calculation of the form factor $F(s)$ in $\chi$PT$_\sigma$ involves two classes of diagrams (Fig.\ \ref{fig:anomaly_loops}): \begin{figure}[t] \center \includegraphics[scale=0.34]{anomaly_loops} \caption{Lowest order contributions to $\langle\gamma_1,\gamma_2|\widetilde{\theta}^\mu_\mu(0)|\mathrm{vac}\rangle$ in $\chi$PT$_\sigma$. Diagram (a) represents the contact term proportional to $g_{\sigma\gamma\gamma}$, while diagrams (d), (e), (h), and (i) are each accompanied by an additional crossed amplitude (not shown). Similar loop diagrams have been considered in $\chi$PT$_3$ for $K_S\to\gamma\gamma$ \cite{DAm86}, $K_L\to\pi^0\gamma\gamma$ \cite{Eck87}, and $\gamma\gamma\to\pi^0\pi^0$ \cite{Don88,Bij88}. } \label{fig:anomaly_loops} \end{figure} \begin{enumerate} \item Dilaton pole diagrams (a-e) which produce a factorized amplitude \begin{equation} F_1(s) = {\cal A}_{\sigma\gamma\gamma} \frac{i}{s - m^2_\sigma} (-m_\sigma^2 F_\sigma) \,. \label{F_1} \end{equation} Here ${\cal A}_{\sigma\gamma\gamma}$ includes a contact term $-ig_{\sigma\gamma\gamma}$ from diagram (a) and contributions from one-loop diagrams (b-e) with internal $\pi^\pm, K^\pm$ lines. \item A one-loop amplitude $F_2(s)$ from diagrams (f-i) with internal $\pi^\pm,K^\pm$ lines coupled to the vacuum via $\theta^\mu_\mu$. \end{enumerate} The $\sigma \to \gamma\gamma$ amplitude in Eq.~(\ref{F_1}) can be written \begin{equation} {\cal A}_{\sigma\gamma\gamma} = - ig_{\sigma\gamma\gamma} + \frac{i\alpha}{\pi F_{\sigma}} {\cal C} \sum_{\phi=\pi, K} m_{\phi}^{2}\Big(\frac{1+2I_{\phi}}{s}\Big) \label{eqn:Asig2gam} \end{equation} where the label $\phi = \pi^\pm \mbox{ or } K^\pm$ refers to the meson propagating around the loop in diagrams (b-e).
In Eq.~(\ref{eqn:Asig2gam}), the constant $\cal C$ is a combination of low energy coefficients \begin{equation} {\cal C} = 1 - \gamma_m - (1-c_1)\beta' \label{eqn:C} \end{equation} and $I_\phi$ is the relevant Feynman-parametric integral \begin{align} I_{\phi} = m_\phi^2\int_0^1\!\!\int_0^1 dz_{1}\,dz_{2}\,\theta(1-z_{1}-z_{2})\bigl/\bigl(z_{1}z_{2}s - m_{\phi}^2\bigr) \label{eqn:feyn par} \end{align} for on-shell photons $k_1^2 = k_2^2 = 0$. The constant $\cal C$ and integral $I_\phi$ also appear in the result for diagrams (f-i): \begin{equation} F_2(s) = \frac{\alpha}{\pi}({\cal C} - 2) \sum_{\phi=\pi, K} m_{\phi}^{2}\Big(\frac{1+2I_{\phi}}{s}\Big) \,. \end{equation} The final step is to compare the answer for \begin{equation} F(s) = F_1(s) + F_2(s) \end{equation} with the $s=0$ constraint (\ref{eqn:rjcform}). For that, we need the Taylor expansion \begin{equation} 1+2I_{\phi} = - \frac{s}{12m_{\phi}^{2}} + O(s^{2}) \,. \end{equation} Summing the $\pi^{\pm}$ and $K^{\pm}$ contributions, we have \begin{equation} \sum_{\phi=\pi, K} m_{\phi}^{2}\Big(\frac{1+2I_{\phi}}{s}\Big) = - \frac{1}{6} + O(s) \,, \label{Taylor} \end{equation} and so find that the terms involving ${\cal C}$ cancel: \begin{equation} F(s) = g_{\sigma\gamma\gamma} F_\sigma + \alpha/3\pi + O(s) \,. \end{equation} Comparison with Eq.~(\ref{eqn:rjcform}) yields the desired relation\footnote{The answer is simple because we chose a $\sigma$ field with the scaling property (\ref{sigma_scale}). Constants like $\cal C$ can appear if other definitions of $\sigma$ are used.\label{sigma_def}} \begin{equation} g_{\sigma\gamma\gamma} = \frac{2\alpha}{3\pi F_\sigma} \Big(R^{}_{\mathrm{IR}} - \tfrac{1}{2} \Big)\,. \label{eqn:gsig2gam} \end{equation} Evidently, the one-loop diagrams which produce the term $-\frac{1}{2}$ relative to $R^{}_{\mathrm{IR}}$ have the same chiral order as the tree diagram involving $g_{\sigma\gamma\gamma}$.
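Both the Taylor coefficient $-s/(12m_\phi^2)$ and the $-\tfrac{1}{6}$ in Eq.~(\ref{Taylor}) can be verified by integrating (\ref{eqn:feyn par}) numerically. A self-contained sketch (plain midpoint rule over the Feynman-parameter triangle; the grid size and the charged $\pi,K$ masses are our inputs):

```python
def one_plus_2I(s, m2, n=400):
    """1 + 2*I_phi, where I_phi = m^2 times the integral of
    dz1 dz2 / (z1*z2*s - m^2) over the triangle z1 + z2 < 1
    (midpoint rule, n^2 points; valid below threshold, s < 4*m^2)."""
    acc = 0.0
    for i in range(n):
        z1 = (i + 0.5) / n
        zmax = 1.0 - z1          # upper limit of the z2 integration
        for j in range(n):
            z2 = (j + 0.5) * zmax / n
            acc += (1.0 / n) * (zmax / n) / (z1 * z2 * s - m2)
    return 1.0 + 2.0 * m2 * acc

# Leading Taylor coefficient: (1 + 2*I_phi)/s -> -1/(12 m^2) as s -> 0
m2 = 1.0
ratio = one_plus_2I(0.02 * m2, m2) / (0.02 * m2)
print(ratio)   # close to -1/12 = -0.0833...

# pi + K sum of Eq. (Taylor): approaches -1/6 for small s
m_pi2, m_K2 = 0.1396**2, 0.4937**2   # charged-meson masses squared, GeV^2
s = 0.0005                           # GeV^2, well below both mass scales
total = sum(mm * one_plus_2I(s, mm) / s for mm in (m_pi2, m_K2))
print(total)   # close to -1/6
```

Each species contributes $-\tfrac{1}{12}$ in the $s \to 0$ limit, so the two-species sum reproduces the $-\tfrac{1}{6}$ that cancels the ${\cal C}$ dependence.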
This is an explicit demonstration of the way PCDC is modified by the inverse Li-Pagels singularities noted above for $\gamma\gamma$ channels. An estimate for $R^{}_{\mathrm{IR}}$ from Eq.~(\ref{eqn:gsig2gam}) is not straightforward because dispersive analyses of reactions such as $\gamma\gamma\to\pi\pi$ yield residues at the $f_0/\sigma$ pole proportional to the full amplitude ${\cal A}_{\sigma\gamma\gamma}(s = m^2_\sigma)$ of Eq.~(\ref{eqn:Asig2gam}), not $g_{\sigma\gamma\gamma}$. Currently, we have no independent data about the constant $\cal C$, apart from the weak constraint (\ref{inequal}) for $(1-c_1)\beta'$ and the inequality \begin{equation} -1 \leqslant 1 - \gamma_m < 2 \end{equation} from Eq.~(\ref{dim-mass}). We will argue below that numerically, these corrections are likely to be small compared with the electromagnetic trace anomaly. First, let us review what is known about $\gamma\gamma\to\pi\pi$ from dispersion theory. The residue of the $f_0(500)$ pole was first extracted from the Crystal Ball data \cite{Xal90} by Pennington \cite{Pen06} and subsequently refined in several analyses \cite{Oll08,Mao09,Mou11,Hof11}. We use a recent determination \cite{Hof11} of the radiative width \begin{equation} \Gamma_{\sigma\gamma\gamma} = 2.0 \pm 0.2 \mbox{ keV} \label{eqn:Hof} \end{equation} based on fits to pion-polarizability data \cite{Gar10}.\footnote{We do not use the alternative estimate \cite{Hof11} $\Gamma_{\sigma\gamma\gamma} = 1.7\pm 0.4$ keV because it depends on scalar meson resonance saturation for low energy constants of $\chi$PT$_2$ expansions \cite{Gass05,Gass06} and also (tracing back via App.~D.2.2 of \cite{Bell94} to \cite{Amet92}) $\chi$PT$_3$ expansions. As noted in Sec.~\ref{strong} below Eq.~(\ref{heavy}), that places $f_0$ in the non-NG sector.
It would be inconsistent for us to combine that with $\chi$PT$_\sigma$.} In lowest order $\chi$PT$_\sigma$, the relevant diagrams for the process $\sigma\to\gamma\gamma$ are those shown in (a-e) of Fig.~\ref{fig:anomaly_loops}, but with $\sigma$ treated as an asymptotic state. The narrow width approximation is valid in lowest order $\chi$PT$_\sigma$, so the magnitude of the full amplitude ${\cal A}_{\sigma\gamma\gamma}$ at $s=m_\sigma^2$ is determined by \begin{equation} \Gamma_{\sigma\gamma\gamma} = \frac{m_\sigma^3}{64\pi}|{\cal A}_{\sigma\gamma\gamma}|^2 \,. \label{eqn:Gam 2gam} \end{equation} Comparison with (\ref{eqn:Hof}) then gives \begin{equation} |{\cal A}_{\sigma\gamma\gamma}| = 0.068\pm 0.006 \ \mbox{GeV}^{-1} \label{eqn:modA} \end{equation} where the uncertainties have been added in quadrature. The presence of lowest order meson loops in $\gamma\gamma$ channels implies that numerical results for the contact term depend on how the scalar field is defined.\footnotemark[14] Consequently, care must be exercised when comparing our value with those found using $\chi$PT$_3$ or dispersion theory --- definitions of ``the contact $f_0\gamma\gamma$ coupling'' are not necessarily equivalent. For example, the small values for these couplings reported in dispersive analyses \cite{Ach07,Men08} could well be consistent with each other and with our result for the coupling ${\cal L}_{\sigma\gamma\gamma}$ of Eq.~(\ref{non-min}). In $\chi$PT$_\sigma$ we find that for $N_c$ large, it is the contact term which is the dominant contribution to ${\cal A}_{\sigma\gamma\gamma}$. This is because, relative to the single-quark loop diagrams associated with $R^{}_\mathrm{IR} = O(N_c)$, terms from $\pi^\pm,K^\pm$ loop graphs involve an additional quark loop and so are suppressed by a factor $1/N_c$. 
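The numerical chain from the radiative width (\ref{eqn:Hof}) to an estimate of $R^{}_{\mathrm{IR}}$ can be retraced explicitly; a sketch assuming $F_\sigma \approx 100$ MeV and, as argued above, contact-term dominance $|{\cal A}_{\sigma\gamma\gamma}| \approx g_{\sigma\gamma\gamma}$ at large $N_c$:

```python
import math

m_sigma = 0.441         # GeV
Gamma_2gam = 2.0e-6     # GeV, i.e. 2.0 keV from the dispersive determination
alpha = 1.0 / 137.036   # fine-structure constant
F_sigma = 0.100         # GeV; recall the ~70% uncertainty

# Invert Gamma = (m_sigma^3 / 64 pi) |A|^2 for the full amplitude
A = math.sqrt(64 * math.pi * Gamma_2gam / m_sigma**3)
print(f"|A_sigma->2gamma| = {A:.3f} GeV^-1")   # 0.068, as in Eq. (eqn:modA)

# Large-N_c: |A| ~ g = (2 alpha / 3 pi F_sigma)(R_IR - 1/2); invert for R_IR
R_IR = 0.5 + 3 * math.pi * F_sigma * A / (2 * alpha)
print(f"R_IR ~ {R_IR:.1f}")   # ~4.9, i.e. R_IR ~ 5
```

The $F_\sigma$ uncertainty propagates linearly into $R^{}_{\mathrm{IR}} - \tfrac{1}{2}$, which is why only the rough value $\approx 5$ is quoted.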
We therefore have \begin{equation} g_{\sigma\gamma\gamma} = O\bigl(\sqrt{N_c}\bigr) \quad \mbox{and} \quad {\cal C} = O(1) \end{equation} in the large-$N_c$ limit and conclude\footnote{This approximation is {\it not} required in our analysis of $K_S\to\pi\pi$ in Sec.\ \ref{weak_emag}. Indeed $g_{\sigma\gamma\gamma}$ does not appear anywhere. The key ingredient is the phenomenological estimate (\ref{eqn:modA}) for the complete amplitude ${\cal A}_{\sigma\gamma\gamma}$.} \begin{equation} {\cal A}_{\sigma\gamma\gamma} = -ig_{\sigma\gamma\gamma} + O\bigl(1/\sqrt{N_c}\bigr) \,. \end{equation} From Eq.~(\ref{eqn:modA}) and within the large uncertainty due to that in $F_\sigma$, we estimate \begin{equation} R^{}_{\mathrm{IR}} \approx 5\,. \end{equation} This result is a feature of the nonperturbative theory at \begin{figure}[tb] \center\includegraphics[scale=0.75]{beta_RIR} \caption{Drell-Yan ratios $R^{}_{\mathrm{UV}}$ and $R^{}_{\mathrm{IR}}$ associated with the proposed $\beta/\psi$ function. For $e^+ e^- \to$ hadrons at high energies with $0 < \alpha_s < \alpha^{}_\mathrm{IR}$, the strong coupling $\alpha_s$ runs to zero and the result $R^{}_{\mathrm{UV}}$ is perturbative (asymptotic freedom). However if $\alpha_s$ is at $\alpha^{}_{\mathrm{IR}}$, it cannot run, so we get a nonperturbative result $R^{}_{\mathrm{IR}}$ associated with \emph{short}-distance scaling at the infrared fixed point.} \label{fig:beta_RIR} \end{figure} $\alpha^{}_\mathrm{IR}$ (Fig.\ \ref{fig:beta_RIR}), so it has \emph{nothing} to do with asymptotic freedom or the free-field formula (\ref{eqn:RUV}). \section{Weak interactions of mesons} \label{weak_emag} The most important feature of $\chi\mathrm{PT}_{\sigma}$ is that it explains the empirical $\Delta I=1/2$ rule for nonleptonic kaon decays such as $K\rightarrow\pi\pi$. Problems explaining the data for nonleptonic kaon and hyperon decays were first recognised sixty years ago \cite{mgm54}.
They became acute with the advent of three-flavor chiral perturbation theory. For $\chi$PT$_3$ applied to kaon decays, the dilemma is: \begin{enumerate} \item A fit to data in lowest nontrivial order, i.e.\ for $O(p^2)$ amplitudes ${\cal A}_\mathrm{LO}$, would require the ratio of $\bm{8}$ to $\bm{27}$ couplings $|g_8/g_{27}|$ to be $\simeq 22$, much larger than any of the reasonable estimates in the range (\ref{ratio}). \item The main alternative is to accept Eq.~(\ref{ratio}) and argue that the dominant contribution comes from an NLO $O(p^4)$ term produced by strong final-state interactions in the $0^{++}$ channel, e.g.\ via a non-NG scalar boson $S$ \cite{Golo80, Volk88, Moro90, Pol02} for which the pole diagram in Fig.\ \ref{fig:k_pipi} is $O(p^4/m_S^2)$, with $m_S \not= 0$. Then the $\chi$PT$_3$ expansion diverges uncontrollably,\footnote{The factor 22 is 70 times larger than the limit $\sim 0.3$ prescribed by Eq.~(\ref{eqn:XPT3_fit}) for an acceptable fit.} \begin{equation} \bigl|\mathrm{NLO}\bigl/\mathrm{LO}\bigr|_{\chi\mathrm{PT}_3} \simeq 22 \label{22}\end{equation} contradicting the premise that $\chi$PT$_3$ is applicable. \end{enumerate} Let us review option 1 in more detail. In the lowest order\footnote{Our aim is to solve the $\Delta I = 1/2$ puzzle \emph{without} using NLO terms. Weak NLO terms in $\chi$PT$_3$ \cite{weakNLO}, except those depending on $f_0/\sigma$ (Sec.\ \ref{saturate}), become weak NLO $\chi$PT$_\sigma$ terms when multiplied (as in Eq.~(\ref{d>4NLO})) by suitable powers of $e^{\sigma/F_\sigma}$.
We expect these to produce small corrections to our result.} of standard $\chi$PT$_3$, the effective weak Lagrangian \begin{equation} \left.\mathcal{L}_{\mathrm{weak}}\right|_{\sigma=0} = g_{8}Q_{8} + g_{27}Q_{27} + Q_{mw} + \mathrm{h.c.}\ \label{usual}\end{equation} contains an octet operator \cite{Cro67} \begin{equation} Q_{8} = \mathcal{J}_{13}\mathcal{J}_{21} - \mathcal{J}_{23}\mathcal{J}_{11} \ , \quad \mathcal{J}_{ij} = (U\partial_{\mu}U^{\dagger})_{ij} \label{eqn:octet} \end{equation} the $U$-spin triplet component \cite{Gaill74,RJC86} of a \textbf{27} operator \begin{equation} Q_{27} = \mathcal{J}_{13}\mathcal{J}_{21} + \tfrac{3}{2} \mathcal{J}_{23}\mathcal{J}_{11} \label{eqn:27-plet} \end{equation} and a weak mass operator \cite{Bern85} \begin{equation} Q_{mw} = \mathrm{Tr} (\lambda_6 - i\lambda_7) \bigl(g_MMU^\dagger + \bar{g}_MUM^\dagger\bigr) \,. \label{eqn:weak mass} \end{equation} Although $Q_{mw}$ has isospin 1/2, it cannot be used to solve the $\Delta I = 1/2$ puzzle if dilatons are absent. When $Q_{mw}$ is combined with the strong mass term $\left.{\cal L}_\mathrm{mass}\right|_{\sigma = 0}$, it can be removed by a chiral rotation \begin{equation} U\to \widetilde{U} = RUL^\dagger \end{equation} which aligns the vacuum such that \begin{equation} \langle \widetilde{U}\rangle_\mathrm{vac} = I \ \mbox{ and } \ M = \mbox{real diagonal}. \end{equation} Therefore \cite{RJC86} $Q_{mw}$ has no effect on $\chi$PT$_3$ low-energy theorems relating $K \to \pi\pi$ and $K \to \pi$ on shell, and so the conclusion that $|g_8/g_{27}|$ is unreasonably large ($\approx$ 22) cannot be avoided. In $\chi$PT$_\sigma$, the outcome is entirely different. 
First, we adjust the operator dimensions of $Q_8,\, Q_{27},$ and $Q_{mw}$ by powers of $e^{\sigma/F_\sigma}$ \begin{align} {\cal L}_{\mathrm{weak}} &= Q_{8}\sum_n g_{8n}e^{(2 -\gamma_{8n})\sigma/F_\sigma} + g_{27}Q_{27}e^{(2-\gamma_{27})\sigma/F_\sigma} \nonumber \\ &+ Q_{mw}e^{(3-\gamma_{mw})\sigma/F_\sigma} + \mbox{h.c.}\,, \end{align} as in Eqs.~(\ref{Lstr}) and (\ref{d>4NLO}) for the strong interactions, with octet quark-gluon operators allowed to have differing dimensions at $\alpha^{}_\mathrm{IR}$. The key point is that the weak mass operator's dimension $(3-\gamma_{mw})$ bears no relation to the dimension $(3-\gamma_{m})$ of ${\cal L}_\mathrm{mass}$, so the $\sigma$ dependence of $Q_{mw} e^{(3-\gamma_{mw})\sigma/F_\sigma}$ cannot be eliminated by a chiral rotation. Instead, after aligning the vacuum, we find \begin{align} &\mathcal{L}^{\mathrm{align}}_{\mathrm{weak}} = \widetilde{Q}_{8}\sum_n g_{8n}e^{(2-\gamma_{8n})\sigma/F_\sigma} + g_{27}\widetilde{Q}_{27}e^{(2-\gamma_{27})\sigma/F_\sigma} \nonumber \\ &+ \widetilde{Q}_{mw}\bigl\{e^{(3-\gamma_{mw})\sigma/F_\sigma} - e^{(3-\gamma_{m})\sigma/F_\sigma}\bigr\} + \mathrm{h.c.}\,, \end{align} where the tilde indicates that the $\bm 8$ and $\bm 27$ operators are now functions of the rotated field $\widetilde{U}$. As a result, there is a residual interaction $\mathcal{L}_{K_S\sigma} = g^{}_{K_S\sigma}K_{S}\sigma$ which mixes $K_{S}$ and $\sigma$ in \emph{lowest} $O(p^2)$ order \begin{equation} g^{}_{K_S\sigma} = (\gamma_{m} - \gamma_{mw})\mathrm{Re}\{(2m^2_K - m^2_\pi)\bar{g}_M - m^2_\pi g_M\}F_{\pi}/2F_{\sigma} \end{equation} and produces the $\Delta I = 1/2$ amplitude $A_{\sigma\textrm{-pole}}$ of Fig.~\ref{fig:k_pipi}. At this point, we could simply choose $g_{K_S\sigma}$ to fit the rate for $K_S \to \pi\pi$, knowing that inserting the full $K_S \to \pi\pi$ amplitude into the standard loop calculation for $K_S \to \gamma\gamma$ \cite{DAm86} would give agreement with experiment.
That would leave unclear what version of chiral perturbation theory in Fig.~\ref{fig:goldstone} is being used to analyse $K_S \to \gamma\gamma$. So instead, we first apply $\chi$PT$_\sigma$ to $K_S \to \gamma\gamma$ and $\gamma\gamma\rightarrow\pi\pi$ in order to determine $g_{K_S\sigma}$, and then show that this gives a result for $K_S \to \pi\pi$ which agrees with experiment. The scalar part ${\cal A}_{K\gamma\gamma}$ of the $K_{S}\rightarrow\gamma\gamma$ amplitude \begin{equation} {\cal A}_{\mu\nu} = (g_{\mu\nu}k_1\cdot k_2 - k_{2\mu}k_{1\nu}){\cal A}_{K\gamma\gamma} \end{equation} receives three contributions at lowest order (Fig.~\ref{fig:loops}) \begin{equation} {\cal A}_{K\gamma\gamma} = {\cal A}^{\mathrm{tree}}_\sigma + {\cal A}^{\mathrm{loop}}_\sigma + {\cal A}^{\mathrm{loop}}_{\pi, K}\,. \end{equation} \begin{figure}[tb] \center\includegraphics[scale=.43]{charged_loops} \caption{\label{fig:loops} Lowest order diagrams for $K_{S}\to\gamma\gamma$ in $\chi\mathrm{PT}_{\sigma}$, including finite loop graphs \cite{DAm86}. The grey vertex contains $\pi^\pm,\,K^\pm$ loops as in the four $\chi$PT$_3$ diagrams to the right. An analogous set of diagrams contributes to $\gamma\gamma \to\pi^{0}\pi^{0}$.} \end{figure} The explicit expressions are \begin{align} &{\cal A}^{\mathrm{tree}}_\sigma + {\cal A}^{\mathrm{loop}}_\sigma = -2g^{}_{K_S\sigma}{\cal A}_{\sigma\gamma\gamma}\bigl/ \bigl(m_K^2-m_\sigma^2\bigr) \,, \nonumber \\ &{\cal A}^{\mathrm{loop}}_{\pi,K} = -2\frac{\alpha}{\pi F_\pi^3} \bigl(g_8+g_{27}\bigr) \sum_{\phi=\pi,K}(m_K^2-m_\phi^2) \Big(\frac{1+2I_\phi}{m_K^2}\Big) \,, \label{eqn:kto2gam} \end{align} where the magnitude of ${\cal A}_{\sigma\gamma\gamma}$ is determined from Eq.~(\ref{eqn:modA}) and $I_\phi$ is the integral given by Eq.~(\ref{eqn:feyn par}). 
If we neglect the $g_8$ and $g_{27}$ terms, we find \begin{equation} |g^{}_{K_S\sigma}| \approx 4.4\times 10^{3}\,\mathrm{keV}^{2} \label{gksig} \end{equation} to a precision $\lesssim 30\%$ expected for a three-flavor chiral expansion. Now consider $K_S\to\pi\pi$ (Fig.\ \ref{fig:k_pipi}). Eq.~(\ref{gksig}) and data for the $f_0$ width (Eq.~(\ref{numbers})) imply that the $\sigma$-pole diagram contributes (very roughly, given\footnotemark[12] $\sigma/f_0$'s width and near degeneracy with $K$) \begin{equation} \left|A_{\sigma\textrm{-pole}}\right| = \left|\frac{-ig^{}_{K_S\sigma}g_{\sigma\pi\pi}}{m_K^2 - m_\sigma^2}\right| \approx 0.34\,\mathrm{keV} \label{pole} \end{equation} to the full $I = 0$ amplitude\footnote{Our convention for the $K\to\pi\pi$ isospin amplitudes is that given in \cite{MRR}.} \begin{equation} {A}_{0} = \frac{\sqrt{3}}{F_\pi^3}\bigl(g_8+\tfrac{1}{6}g_{27}\bigr) \bigl(m_K^2-m_\pi^2\bigr) + A_{\sigma\textrm{-pole}} \,. \end{equation} If the $g_{8,27}$ contributions are again neglected, \begin{equation} \left|A_0\right| \simeq \left|A_{\sigma\textrm{-pole}}\right| \end{equation} we see that Eq.~(\ref{pole}) accounts for the large magnitude of $A_0$ \cite{PDG}: \begin{equation} |A_{0}|_{\mathrm{expt.}} = 0.33\,\mathrm{keV} \,. \end{equation} We conclude that the observed ratio $|A_0/A_2| \simeq 22$ is mostly due to the dilaton-pole diagram of Fig.~\ref{fig:k_pipi}, that $g_{8} = \sum_n g_{8n}$ and $g_{27}$ have roughly similar magnitudes, as simple calculations \cite{Feyn65,Feyn71,Gaill74,Alta74} indicate, and that only $g_{27}$ can be fixed precisely (from $K^{+}\rightarrow\pi^{+}\pi^{0}$). Consequently, the lowest $O(p^2)$ order of $\chi$PT$_\sigma$ solves the $\Delta I = 1/2$ problem for kaon decays. The chiral low-energy theorems which relate the on-shell\footnote{Ref.~\cite{RJC86} followed standard practice in current algebra.
It related the amplitude for $K \to \pi\pi$, where ${\cal H}_\mathrm{weak}$ carries zero 4-momentum, to $K \to \pi$ for \emph{on-shell} kaons and pions, where the relevant operator $[F_5,{\cal H}_\mathrm{weak}]$ obviously carries nonzero 4-momentum. Ref.~\cite{RJC86} is often misquoted by authors who implicitly set the momentum transfer for $K \to \pi$ equal to zero. In Eq.~(\ref{tadpole}), $|\mathrm{vac}\rangle$ refers to the unique state with translation invariance, so ${\cal H}_\mathrm{weak}$ carries momenta whose square is $m_K^2$.} $K \to 2\pi$ and $K \to \pi$ amplitudes have extra terms due to $\sigma$ poles, but the no-tadpoles theorem \cite{RJC86} is still valid: \begin{equation} \langle K | \mathcal{H}_{\mathrm{weak}} |\mathrm{vac}\rangle = O\bigl(m_s^2 - m_d^2\bigr)\,, \ K\mbox{ on shell}. \label{tadpole} \end{equation} \section{Remarks} Why must the $0^{++}$ particle be a dilaton in order to explain the $\Delta I = 1/2$ puzzle for $K$ decays? Because the property $m_\sigma \to 0$ in the chiral-scale limit (\ref{scale}) is essential. As is evident from Eq.~(\ref{22}), assuming scalar dominance by a non-NG particle contradicts the basic premise of chiral theory that at low energies, the NG sector dominates the non-NG sector. That is why none of the authors proposing scalar dominance by a non-NG particle since 1980 \cite{Golo80} claimed to have solved the puzzle or persuaded others to stop working on other proposals, such as penguin diagrams \cite{ITEP}, the large-$N_c$ limit \cite{largeN,Bur14}, or QCD sum rules \cite{QCDsum}. Our resolution of the $\Delta I = 1/2$ puzzle is distinguished by not being \emph{ad hoc}. It is part of a wider program to obtain numerically convergent three-flavor chiral expansions for amplitudes involving $0^{++}$ channels, i.e.\ where $\chi$PT$_3$ clearly fails (Sec.~\ref{Motiv}). So far, we can say only that lowest order $\chi$PT$_\sigma$ appears to be a good approximation. 
More stringent tests of convergence have yet to be developed because loop corrections involve couplings like $\sigma\sigma\pi\pi$ for which we lack data. % An important example is the shape of the $\sigma$ resonance at NLO (Fig.\ \ref{fig:width}), where the higher order corrections to (\ref{width}) have yet to be determined. This will require explicit calculations which include numerical fits for the $\sigma\sigma\sigma, \sigma\sigma\pi\pi,\ldots$ couplings. \begin{figure}[t] \centering\includegraphics[scale=0.42]{sigma_width} \caption{Next to lowest order (NLO) diagrams which contribute to the resonance structure of $f_0/\sigma$ in $\chi$PT$_\sigma$. Ultraviolet divergences arising from the loops are absorbed (in the usual way) by counterterms in the NLO Lagrangian.} \label{fig:width} \end{figure} Another test could be to invent a unitarization procedure for $\chi$PT$_\sigma$ and check whether (unlike $\chi$PT$_3$) it produces \emph{small} corrections to lowest order results. The basis of our work on approximate scale and chiral $SU(3)_L\times SU(3)_R$ symmetry in QCD should be carefully distinguished from what is postulated in analyses of walking gauge theories. As noted by Del Debbio \cite{Del10}, in such theories, ``the infrared fixed point \ldots describes the physical properties of theories which are scale invariant at large distances, where the field correlators have power-like behaviours characterized by the anomalous dimensions of the fields." That means that there is no mass gap at the fixed point: scale condensates are assumed to be absent. Our view of physics at the infrared fixed point is quite different. The Hamiltonian becomes scale invariant for massless $u,d,s$ quarks, but the vacuum is \emph{not} scale invariant because of the condensate $\langle \bar{q}q \rangle_{\mathrm{vac}} \not= 0$. It sets the scale of the mass gap for hadrons $\rho, \omega, N, \eta', \ldots$ in the non-NG sector (Sec.~\ref{Motiv} below Eq.~(\ref{eqn:phase})). 
For example, at the infrared fixed point in Fig.\ \ref{fig:beta_RIR}, $e^+e^-\to$ hadrons at low or intermediate energies has thresholds and resonances similar to QCD at similar energies. Scaling behaviour sets in only at {\it high} energies.\footnotemark[3] A result of this fundamental difference is that our hypothesis of an infrared fixed point for $N_f = 3$ is not tested by lattice investigations done in the context of walking gauge theories. Those investigations are based on criteria like Miransky scaling \cite{Miransky}, which assume that a theory cannot have an infrared fixed point if it does not display the behavior described above in the quote from Del Debbio. More generally, our view is that theoretical evidence for or against our proposal in Fig.~\ref{fig:beta} is inconclusive. Various definitions of the QCD running coupling can be readily compared in low orders of perturbation theory, but it is not at all clear which definitions are physically equivalent beyond that. The key nonperturbative requirements for a running coupling are that its dependence on the magnitude of a space-like momentum variable be monotonic and analytic. Gell-Mann and Low \cite{GML} achieved this for QED, but these properties are hard to establish for QCD running couplings. A lack of equivalence of these definitions may explain why differing results for infrared fixed points are obtained. Unfortunately, our analysis does not explain the failure of chiral theory to account for nonleptonic $|\Delta S| = 1$ hyperon decays. We have shown that octet dominance is not necessary for $K$-decays, but that makes no difference for hyperon decays: the Pati-Woo $\Delta I = 1/2$ mechanism \cite{Pati71} forbids all contributions from \textbf{27} operators. \begin{acknowledgments} We thank Ross Young, Peter Minkowski, Martin Hoferichter, and Gilberto Colangelo for valuable discussions at various stages of this work.
LCT thanks Mary~K.~Gaillard and her colleagues for their kind hospitality at the Berkeley Center for Theoretical Physics, where part of this work was completed. LCT is supported in part by the Australian-American Fulbright Commission, the Australian Research Council, the U.S. Department of Energy under Contract DE-AC02-05CH11231, the National Science Foundation under grant PHY-0457315, and the Federal Commission for Scholarships for Foreign Students (FCS). The Albert Einstein Center for Fundamental Physics at the University of Bern is supported by the ``Innovations- und Kooperationsprojekt C-13" of the ``Schweizerische Universit\"{a}tskonferenz SUK/CRUS". \end{acknowledgments} \emph{Note Added.} A similar chiral Lagrangian with a technicolor ``dilaton'' has just been published by Matsuzaki and Yamawaki \cite{Mat13}. They acknowledge our prior work \cite{us}, but say that they do not believe it to be valid for hadronic physics. The basis for this assertion is the claim (footnote 8 of \cite{Bando}) that ``light'' dilatons are forbidden by the one-loop formula $-(6\pi)^{-1}(33 - 2N_f)\alpha_s^2$ for the QCD $\beta$ function. The problems with this are that (a) the relevant limit is infrared, not ultraviolet, and (b) for $\alpha_s$ large, the one-loop formula violates the analyticity bound \cite{Kras81} $\beta \gtrsim -\alpha_s\ln\alpha_s$.
\section{Introduction} Seeking a source with autonomous vehicles is an area of growing interest with wide applications. The source could be an electromagnetic signal, acoustic signal, thermal signal, or a chemical/biological agent. Motivated by the source-seeking behavior exhibited by natural species from a microscopic level \cite{Optimotaxis} to a macroscopic level \cite{AnimalNavigation}, researchers have developed robots \cite{TDoA} and sensor networks \cite{Coop_control} that can imitate these behaviors in order to perform complex tasks such as environment monitoring, search and rescue operations, explosive detection, drug detection, sensing leakage of hazardous chemicals, pollution sensing and environmental studies. In this work, we address the problem in which a team of mobile agents, called the seekers, attempt to find the location of a source that emits an electromagnetic signal of unknown strength. The seekers can continuously sense, at their current positions, the signal strength transmitted by the source, which generally decays with distance from the source. The decay profile of the signal strength is very noisy, as shown in Figure \ref{fig:Map}, which makes many existing methods inapplicable. Based on this information, we investigate how to modify Particle Swarm Optimization (PSO) and apply it to a swarm of mobile robots to approach the source seeking problem. \begin{figure}[htb] \centering \begin{subfigure}[b]{\linewidth} \includegraphics[width=\textwidth]{./RSSI} \end{subfigure}\\% \begin{subfigure}[b]{\linewidth} \includegraphics[width=\textwidth]{./RSSI3D} \end{subfigure} \caption{Map of RSSI (Received Signal Strength Indication). The color bars on the right indicate the signal strength in dBm. In this map, the source is located in the center, where the RSSI is approximately -29 dBm.} \label{fig:Map} \end{figure} A vast amount of research has been done on source seeking with autonomous agents based on the idea of gradient descent/ascent.
Many methods employ a single agent to search for single or multiple static targets. Authors of \cite{Nehorai} propose methods of computing the gradients of the Cramer-Rao bound on the location error with respect to a sensor's coordinates and moving the sensor opposite to the corresponding gradient directions. They extend their techniques in \cite{VaporSource} to provide a motion planning strategy for a mobile sensor used for detecting low-concentration vapors. This method assumes very small sensing noise in the measurement model, and its applicability to a very noisy model is not known. The same issue is encountered by the algorithm implemented on Autonomous Underwater Vehicles to localize hydrothermal vents in \cite{AUVCircle}. The method of estimating the local gradient by taking measurements on a circle may no longer be effective on such a noisy model. In \cite{RotationBasedAngle}, the authors utilize the difference in RSSI received by a rotating directional antenna on the single wing of the Samarai MAV to obtain the gradient. However, the application of this method is restricted by the dynamics of the Samarai MAV, whose entire body, including the mono-wing, rotates at all times for stable hovering \cite{SingleWingMAV}. For other types of robots, additional structure has to be added, and extra energy has to be allocated to rotate the antenna at all times, which is neither convenient nor cost-efficient. Recently, extremum seeking \cite{Extremum_seeking} techniques have been adopted for the source seeking problem. Extremum seeking has been applied to nonholonomic vehicles in both 2-D \cite{Non-h_forward_v, Cochran2009} and 3-D \cite{3-D_seeking} environments using sinusoidal and stochastic perturbations \cite{Liu20101443}. The work in \cite{Stanković20101243} also addresses the issue of stochastic noise in measurements. It uses the methodology of stochastic approximation to deal with colored noise.
Nevertheless, in all the extremum seeking related work, only a single vehicle is used to collect measurements at different locations, which is time-consuming. In addition, the trajectories generated by extremum seeking always demand costly maneuvers. Authors of \cite{Multi_deployment} propose strategies to deploy a group of vehicles around the source while applying extremum seeking. However, their focus is on achieving a formation distribution of the vehicles in accordance with signal strength instead of taking advantage of the vehicle swarm in collecting measurements. Therefore, this work is not a favorable example of multi-agent source seeking. The advantages of robot swarms and sensor networks attract many researchers to study them and apply them to the source seeking problem. In \cite{CircularFormation, ConsensusCircularFormation} and \cite{MultiUAVMovingSource}, a team of agents implement a consensus algorithm to maintain a particular formation to track the gradient of the source. This method assumes that the formation of the swarm is maintained perfectly, which is difficult to achieve in practice. Cooperative source seeking algorithms are proposed in \cite{WZhang_Switching, Coop_control, veh_network} and \cite{Choi2012}. In \cite{WZhang_Switching}, the authors provide algorithms and experimental validation of a switching strategy for a team of agents trying to localize a source. Each agent switches from individual exploration to cooperative exploration only when an individual gradient estimate is not available. A distributed coordination algorithm based on adaptive control is proposed in \cite{Choi2012}. However, it does not take noise into consideration, which is crucial in our problem. In \cite{Coop_control} and \cite{veh_network}, cooperative control is applied to deploy agents in a way which is optimal for gradient estimation. But these two methods, together with all other methods in this paragraph, are prone to being trapped in a local minimum.
Since all of them deploy the swarm or network within a small neighborhood, they lack global information about the signal model. Authors of \cite{pappas_stochastic} and \cite{Atanasov2014} apply stochastic approximation to the problem and enable the swarm to find the source in complex and noisy environments. But the computational complexity of the method hinders its implementation on some cheaper and less capable robots. There are other source seeking methods that are non-gradient based. Some are developed by obtaining source functions. In \cite{Point_source, Source_obstacles, ElBadia2002, ElBadia2005} and \cite{Komornik2005}, researchers formulate the source seeking problem as an inverse problem. Depending on the source types, heat and wave equations are commonly used as candidate source functions. Parameters of the source functions are found by optimizing the difference between collected data and simulated data from the candidate function. The source can be located after obtaining the source function. For instance, in \cite{Source_obstacles}, the incoming directions of waves are obtained after solving the inverse problem. By tracing the wave direction rays back, the source is located at the intersections of the rays. This method cannot be applied to our problem because it requires a priori knowledge about the candidate function that governs the decay profile of the source, and some information about the source such as wavelength and frequency. In our problem, only signal strength can be measured, and the signal decay profile is unknown. For the same reason, the statistical signal-processing technique `independent component analysis' \cite{Albini2003451} and statistical methods \cite{Chemical_plume, Optimotaxis}, \cite{Following_RF} used to construct a maximum likelihood map of the source location cannot be used for our problem. In \cite{IslerTrackingFish}, researchers use directional antennas to obtain bearing measurements of tagged fish, and localize the fish by triangulation.
The method has been applied effectively in localizing invasive carp in a lake. But it is not applicable to our problem since we can only access scalar measurements instead of bearing ones. In this paper, we address this problem using Particle Swarm Optimization (PSO), which does not require any a priori knowledge of the signal model emanating from the source. PSO is a heuristic, non-gradient-based evolutionary computation technique. It was first proposed by Kennedy and Eberhart \cite{PSO}, who were inspired by the behavior of bird flocks and fish schools. Ever since then, many variations of PSO have been proposed by researchers, such as inertia weight PSO \cite{ModifiedPSO}, constriction PSO \cite{Clerc_Constriction} and neighborhood PSO \cite{NeighborPSO} in its early days, and quantum-behaved PSO \cite{QuantumPSO} and digital pheromone PSO \cite{Vijay_digital} developed recently. As a swarm optimization technique, PSO has been applied to some source seeking tasks involving mobile robots. In \cite{PSOPugh2006} and \cite{PSOPugh2007}, PSO is modified to adapt to multi-robot search. The authors discuss the limitations posed by physical robots and conduct simulations with several communication models. However, these simulations are limited to some benchmark functions rather than real world signal sources. Since real sources such as electromagnetic signals, odor and heat sources are considerably different from the benchmark functions, the results in these papers are of limited practical use. In \cite{PSOJatmiko} and \cite{PSOMarques}, the authors incorporate a potential-field-based motion planning method into PSO, and propose strategies for localizing static and moving odor sources in complex environments. While their strategies have been proven effective in simulations for localizing odor sources, we would like to focus our attention on electromagnetic sources.
We will also explore different variations of PSO and compare the performance of various parameter configurations, topology models and obstacle avoidance strategies. In essence, our work focuses on finding the most effective PSO variation to solve the electromagnetic source seeking problem, and validating it in real experiments. Authors of \cite{PSODerr} also use RF signals as the sources to be sought. But this work does not consider a more complex environment where obstacles exist, and is limited to simulations. A method for obtaining optimal PSO parameters is proposed in \cite{PSODoctor}. Essentially, it applies PSO at a lower level to search for the source, and at a higher level to search for the optimal parameter configuration. This iterative way of finding optimal parameters is computationally reasonable, but not applicable to real robots. It requires a significant number of trials at the lower level, which leads to an enormous amount of experimental data. The effort required to obtain the optimal parameter configuration in a specific scenario is extravagant for the simple goal of finding the source, especially when this optimal configuration can hardly be applied to a different scenario. \cite{HerefordPSO2007} provides some experimental results of implementing a modified PSO on real robots. The authors use a diffuse light source as the target and provide a simple strategy for dealing with obstacles. The experiments illustrate the efficacy of implementing PSO on real robots, but are constrained to a specific PSO configuration. In this work, we present extensive simulation and experimental results for various PSO variations and configurations, and provide suggestions on parameter selection. This work is based on our previous exploration \cite{PSO_Rui_AIM}, \cite{PSO_Rui_MSC} of applying PSO to the source seeking problem. The main contributions of this paper are as follows.
1) We use a non-gradient based technique for the source seeking problem due to the inherent irregularity in the signal model. 2) We incorporate physical constraints posed by robots in the implementation and evaluation of PSO. 3) Guidelines are presented for choosing proper parameters for several PSO variations. A strategy which enables PSO to be implemented experimentally in a complex environment is first presented in this paper. This paper is outlined as follows. In Section \ref{sec:background}, we provide some background information regarding the problem description and the main concept of PSO. In Section \ref{sec:variations}, we evaluate and compare three PSO variations with different parameters. In Section \ref{sec:obstacles}, we propose collision avoidance strategies for implementing PSO in environments containing obstacles. In Section \ref{sec:experiments}, we present a description of the experimental setup, and discuss the implementation results. Finally, we conclude this work in Section \ref{sec:conclusion}. \section{Background} \label{sec:background} \subsection{Problem Description} Consider a point source located on a plane that continuously transmits a signal. Based on the assumption that a static source is present in the vicinity, a group of mobile agents, called {\it{seekers}}, explore the environment to locate the source. The scenario is similar to a colony of ants trying to locate a food source. The seekers are assumed to be holonomic kinematic agents with maximum speed $v_{\max}$. The seekers have the capability to measure the strength of the signal emitted by the source at their current locations. However, the seekers have no information about the current location of the source, its signal strength or its decay profile. The objective of the seekers is to find the location of the source, which is assumed to be the location where the signal strength is maximum.
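To make the measurement setup concrete, the following minimal sketch models a seeker's scalar reading as a power-law decay corrupted by additive noise. The function name, the source power, the decay exponent and the noise level are all illustrative assumptions of ours, not measured values from our experiments.

```python
import math
import random

def rssi_measurement(seeker_pos, source_pos, p_dbm=-29.0,
                     alpha=2.0, noise_std=3.0, rng=random):
    """Noisy scalar signal-strength reading at the seeker's position.

    The mean follows a power-law decay P_A ~ P / (1 + d)^alpha, i.e. a
    drop of 10*alpha*log10(1 + d) dB over a distance d from the source.
    The additive Gaussian term stands in for multipath fading and other
    disturbances; all constants here are illustrative assumptions.
    """
    d = math.dist(seeker_pos, source_pos)
    mean_dbm = p_dbm - 10.0 * alpha * math.log10(1.0 + d)
    return mean_dbm + rng.gauss(0.0, noise_std)
```

With the noise level set to a few dB, a map sampled from this model already exhibits the many local extrema that rule out naive gradient ascent on raw readings.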
For most sources, signal intensity normally decays radially as the distance to the source increases. The decay profiles of some common sources are shown below: \begin{itemize} \item Let $P$ denote the power at which an electromagnetic source emits a signal. The decay profile of the signal intensity is given by the following equation \cite{rawnet11} \begin{equation} {P_A} = \frac{{cP}}{{(1 + d)}^\alpha }, \label{eqn:pow} \end{equation} where $P_{A}$ is the power of the signal measured at a point $A$ on the plane located at a distance $d$ from the source, $c$ and $\alpha$ are constants that depend on the physical parameters of the medium through which the signal is transmitted. \item The concentration of a chemical $c(\vec{r}, t)$ at a point $\vec{r}$ emitted from a point source located at $\vec{\rho}$ emitting vapors at a constant rate of $\mu$ Kg/s is given by \cite{VaporSource} \begin{equation} \label{eqn:vapor} c\left( {\vec r,t} \right) = \frac{\mu }{{4\pi \kappa \left| {\vec r - \vec \rho } \right|}}{\text{erfc}}\left( {\frac{{\left| {\vec r - \vec \rho } \right|}}{{2\sqrt {\kappa (t - {t_0})} }}} \right), \end{equation} where $\kappa$ is constant diffusivity in $m^2/s$. If we ignore the complementary error function ${\text{erfc}}(x) = (2/\sqrt \pi )\int_x^\infty {{e^{ - {y^2}}}dy}$ and only consider the dominant part on its left in equation (\ref{eqn:vapor}), the substance concentration is inversely proportional to the distance between $\vec{r}$ and the vapor source. \item For a spherical sound source, the acoustic intensity $I_r$ at a point with a distance of $r$ from the center in the radial direction is given by \cite{PASPWEB2010} \begin{equation*} I_r = \frac{P}{4\pi r^2} \end{equation*} \end{itemize} However, in reality the measured signal intensity is too noisy to be accurately described by these decay profiles. For instance, reflection, refraction, multi-path fading, etc. 
can influence the decay profile dramatically, making the actual profile highly different from the theoretical one. Figure \ref{fig:Map} illustrates a real RSSI (Received Signal Strength Indication) map of an RF source provided by an XBee\textregistered ZB RF module on a 5 m$\times$5 m plane. The XBee module was located at the center of this area, and the measurements were taken by another XBee module. The figure clearly illustrates the fact that the real RSSI profile has many local extrema and is non-differentiable almost everywhere, contrary to the theoretical decay profile described by (\ref{eqn:pow}) shown in Figure \ref{fig:Map_theory}. \begin{figure}[htb] \centering \begin{subfigure}[b]{0.9\linewidth} \includegraphics[width=\textwidth]{./RSSI_theory} \end{subfigure}\\% \begin{subfigure}[b]{0.9\linewidth} \includegraphics[width=\textwidth]{./RSSI_theory3D} \end{subfigure} \caption{Theoretical decay profile of an electromagnetic source} \label{fig:Map_theory} \end{figure} Therefore, an optimization method not limited by differentiability requirements, yet able to search highly multi-modal design spaces, is desired for direct RSSI measurements, as portrayed in Figure \ref{fig:Map}. Also, since mobile robots are used within the environment to seek the maximum signal source, a population based method where each population member has a one-to-one correspondence with a mobile robot is favored. Particle Swarm Optimization (PSO), as described in Section \ref{sec:Original}, has the ingredients required to address the above challenges. \subsection{Original PSO} \label{sec:Original} In this subsection, we provide a brief description of the concept of PSO. PSO is a population based search algorithm first proposed in \cite{PSO} by Kennedy and Eberhart through simulation of a simplified social model. Although PSO was originally designed to solve minimization problems, it can be used to find the maximum of a function with a simple change.
It is initialized with a number of random solutions, called \textit{particles}. Each particle is also randomly initialized with a velocity within some user designed range. Each particle evolves iteratively in the search-space trying to improve the solution in the following manner: \begin{eqnarray} v^k_{i+1} &=& v^k_i + U(0,c_1) (Pbest^k - x^k_i) \nonumber \\ & &+ U(0,c_2) (Gbest - x^k_i) \label{eqn:PSO_basic}\\ x^k_{i+1} &=& x^k_i + v^k_{i+1} \nonumber \end{eqnarray} where $x^k_{i+1}$ and $v^k_{i+1}$ represent the position and velocity of the $k$th particle in the $(i+1)$th iteration, $U(0,c_1)$ and $U(0,c_2)$ are uniformly distributed random numbers within $[0, c_1]$ and $[0, c_2]$, and $Pbest$ and $Gbest$ are the best previous position of a particle and the best previous position in the swarm, respectively. A best previous position is where a particle obtains the minimum cost in its search history. In our case, the above equations can be interpreted in the following way: Assuming $n$ seekers as $n$ particles moving in the search-space $X$, the position of the $k$th seeker in the $i$th iteration is denoted as $x^k_i \in X \subset \mathbb{R}^2$. The cost function $f: \mathbb{R}^2 \rightarrow \mathbb{R}$ incurred by each seeker is the negative of the signal strength received at its current location. The objective of the seekers is to communicate, and move in a manner so as to reach the global minimum of the cost function. We initialize the position of each seeker with a uniformly distributed vector $x^k_1$ in the search-space and its velocity $v^k_1$ within a given bound. Each seeker is assumed to have the knowledge of its own best previous position and the global best previous position, based on the assumption that each seeker has the memory to store its own previous experience and can benefit from the previous experience of all other members.
Therefore, in (\ref{eqn:PSO_basic}), the velocity $v^k_{i+1}$ consists of three terms: the effect of the seeker's previous velocity, its best known position and the global best known position. \section{PSO Modifications and Variations} \label{sec:variations} In this section, we introduce some physical constraints into PSO, compare the performance of three different PSO variations and provide guidelines on parameter selection. First, we shall consider the bounds of the search space. As in any optimization technique, boundary conditions exist in PSO. However, different actions are taken when particles violate boundary conditions: some methods discard these particles, some bounce them back, and some confine them to the boundary, etc. Discarding seekers may impair the performance, especially when the total number of seekers is limited and every seeker is of significant value to performance. Therefore, we choose to confine seekers to boundaries, namely, Constraint 1: If $x^k_{i+1} \notin X$, then $x^k_{i+1}$ is set to the boundary point of $X$ in the direction of $v^k_{i+1}$. In PSO, there is no constraint on the velocity of a particle. It is possible for a particle to fly across the entire search space in a single iteration. However, this does not apply to the seekers in our case, which are actually ground robots. It is more appropriate to treat the velocity of a particle as a step of a robot in our implementation, which decomposes the step length into speed and duration. Given a sufficiently long duration, a robot can also move in a large step which crosses the entire search space. However, this comes at the expense of a longer search time and a greater energy consumption, which is crucial for a robot with limited battery capacity. Conversely, the step length should not be so small that it degrades the performance of PSO. To balance performance and efficiency, simulations to find a proper step length are conducted below.
We denote the step length by $v_{\max}$, and check the following constraint in every iteration. Constraint 2: If $|v^k_{i+1}| > v_{\max}$, then $|v^k_{i+1}| = v_{\max}$ with the direction of $v^k_{i+1}$ unchanged. The aforementioned constraints apply to all simulations and experiments in this paper. \subsection{PSO with Inertia Weight} \label{sec:InertiaWeight} One variation of the original PSO is to introduce an inertia weight $\omega$ on the previous velocity in (\ref{eqn:PSO_basic}), which leads to the following equation to update velocity \cite{ModifiedPSO}, \begin{equation} v^k_{i+1} = \omega_i v^k_i + U(0,c_1) (Pbest^k - x^k_i)+ U(0,c_2) (Gbest - x^k_i) \label{eqn:PSO_v} \end{equation} According to Shi and Eberhart's analysis in \cite{ModifiedPSO}, the inertia weight is critical in balancing global and local search. If $\omega$ is set to zero, the seekers become ``memoryless'' about their past velocities. With the seekers' velocities determined only by the individual and global best previous positions, all seekers would converge to the global best position directly, making the search process resemble a local search. On the contrary, if $\omega$ is set to a larger number, the seekers persist more in their previous velocities, which leads them to explore a larger area. In other words, a larger inertia weight facilitates global exploration while a smaller one facilitates local exploitation to fine-tune the current search area \cite{PSOPara}. Therefore, applying a damping mechanism to $\omega$ contributes to better global exploration in the beginning stage and better local exploitation when the swarm is closer to the source. The study in \cite{kennedy2001swarm} shows that $c_1$ and $c_2$ together contribute to the oscillation behavior of the seekers. As the values of $c_1$ and $c_2$ are increased, the frequency of oscillation of the seekers' trajectories also increases. Hereafter, we set $c_1 = c_2 = 2$ as suggested in \cite{kennedy2001swarm}.
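One iteration of the inertia-weight velocity update (\ref{eqn:PSO_v}) together with Constraints 1 and 2 can be sketched as follows. The helper name, the rectangular search space, and the per-coordinate clipping (a simplification of Constraint 1's projection along the velocity direction) are our own illustrative choices.

```python
import random

def pso_step(x, v, pbest, gbest, omega, c1=2.0, c2=2.0,
             v_max=500.0, bounds=((0.0, 5000.0), (0.0, 5000.0))):
    """One inertia-weight PSO iteration for a single 2-D seeker.

    x, v, pbest, gbest are 2-D tuples in mm.  Constraint 2 caps the step
    length at v_max while keeping the direction; the position is then
    clipped to the rectangular search space (a per-coordinate
    simplification of Constraint 1).
    """
    new_v = [omega * v[j]
             + random.uniform(0.0, c1) * (pbest[j] - x[j])
             + random.uniform(0.0, c2) * (gbest[j] - x[j])
             for j in range(2)]
    speed = (new_v[0] ** 2 + new_v[1] ** 2) ** 0.5
    if speed > v_max:                       # Constraint 2: cap step length
        new_v = [v_max * vj / speed for vj in new_v]
    new_x = [min(max(x[j] + new_v[j], bounds[j][0]), bounds[j][1])
             for j in range(2)]             # Constraint 1 (simplified clip)
    return tuple(new_x), tuple(new_v)
```

In a full simulation this step is applied to every seeker per iteration, with $Pbest^k$ and $Gbest$ refreshed from the (negative-RSSI) cost before the next step.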
We start by multiplying $\omega$ by a damping coefficient $\lambda_{\omega}$ as the damping mechanism, and set $\lambda_{\omega} = 0.95$ as suggested in \cite{Vijay_digital}. Equation (\ref{eqn:omega}) is applied in every iteration after the velocity is updated: \begin{equation} \label{eqn:omega} \omega_{i+1} = \lambda_{\omega}\omega_i, \quad \text{with} \quad \lambda_{\omega} = 0.95 \end{equation} We choose the swarm size to be five in Sections \ref{sec:InertiaWeight} and \ref{sec:Constriction}, and this will be explained in Section \ref{sec:SPSO}. Six sets of simulations with different initial $\omega$ and $v_{\max}$ were conducted. In each set, we ran 1000 simulations on the real RSSI design space described by Figure \ref{fig:Map}. The cost function is defined as the negative RSSI at each point, which is to be minimized. Each simulation was terminated when $Gbest$ remained unchanged for 20 iterations. Since the signal strength at the source is -28 dBm, we compared $Gbest$ with 28 after each simulation. In addition, we counted the number of iterations $I$ and the total distance traveled by all robots $TotalD$. Additionally, the following data were also collected: \begin{itemize} \item $avgGbest$ denotes the mean of $Gbest$. \item $stdGbest$ denotes the standard deviation of $Gbest$. \item $avgI$ denotes the mean of $I$. \item $avgTotalD$ denotes the mean of $TotalD$. \end{itemize} Simulation results are shown in Table \ref{table:lamda}, where the units of $v_{\max}$ and $avgTotalD$ are mm/iteration and mm, respectively.
\begin{table}[thbp] \centering \caption{Simulation results with different $\omega$ and $v_{\max}$, and a damping coefficient $\lambda_{\omega} = 0.95$} \label{table:lamda} \begin{tabular}{|c|c|c|c|c|c|c|} \hline Set & $\omega_1$ & $v_{\max}$ & $avgGbest$ & $stdGbest$ & $avgI$ & $avgTotalD$ \\ \hline 1 & 2 & 500 & 28.1095 & 0.8095 & 45.639 & 52017 \\ \hline 2 & 3 & 500 & 28.0584 & 0.5715 & 46.613 & 66716 \\ \hline 3 & 4 & 500 & 28.0358 & 0.3631 & 47.187 & 77312 \\ \hline 4 & 5 & 500 & 28.0215 & 0.1187 & 47.855 & 84749 \\ \hline 5 & 2 & 1000 & 28.0378 & 0.3553 & 48.313 & 101966 \\ \hline 6 & 3 & 1000 & 28.0272 & 0.1507 & 50.747 & 133442 \\ \hline \end{tabular} \end{table} The first four sets illustrate the effect of increasing $\omega_1$ on the performance of the searching algorithm. As $\omega_1$ increases from 2 to 5, $avgGbest$ gets closer to 28, which means the seekers perform better in locating the source. Meanwhile, a decreasing $stdGbest$ represents growing reliability of the algorithm, which is another indicator of improved performance. This improvement can be supported by the fact that a larger $\omega$ facilitates global exploration. With a larger initial $\omega$, seekers tend to preserve their previous velocity, and explore a larger area in early iterations. Therefore, they are less likely to be trapped in a local minimum, and more likely to find the global minimum. However, the improved performance comes at the expense of higher energy consumption. Though the average number of iterations $avgI$ is not clearly related to the change of $\omega_1$, $avgTotalD$ in set 4 is about 1.6 times that of set 1. Figures \ref{fig:traj_w1} and \ref{fig:traj_w4} are good demonstrations of the reason, in which seekers are represented by different colors and the small circles represent initial positions. It is clear from the figures that trajectories with $\omega_1=4$ cycle in a larger area and converge more slowly than those with $\omega_1 = 1$.
There is thus a clear trade-off between performance and energy consumption. \begin{figure}[thbp] \centering \begin{subfigure}{0.9\linewidth} \includegraphics[width=\textwidth]{./traj_w1a} \caption{Trajectories of seekers with $\omega = 1$} \end{subfigure}\\% \begin{subfigure}{0.9\linewidth} \includegraphics[width=\textwidth]{./traj_w1b} \caption{Statistics with $\omega = 1$} \end{subfigure} \caption{Trajectories and statistics of seekers with $\omega = 1$} \label{fig:traj_w1} \end{figure} \begin{figure}[thbp] \centering \begin{subfigure}{0.9\linewidth} \includegraphics[width=\textwidth]{./traj_w4a} \caption{Trajectories of seekers with $\omega = 4$} \end{subfigure} \begin{subfigure}{0.9\linewidth} \includegraphics[width=\textwidth]{./traj_w4b} \caption{Statistics with $\omega = 4$} \end{subfigure}\\% \caption{Trajectories and statistics of seekers with $\omega = 4$} \label{fig:traj_w4} \end{figure} Sets 5 and 6 reveal the influence of $v_{\max}$. Comparing sets 5 and 6 with sets 1 and 2, we find slight improvement in $avgGbest$ and $stdGbest$ when $v_{\max}$ doubles. However, the average total distance traveled also doubles. Moreover, taking set 4 into consideration, increasing $\omega$ is apparently a better strategy than increasing $v_{\max}$ in terms of both performance and energy efficiency. \subsection{PSO with Constriction Factor} \label{sec:Constriction} Another variation similar to PSO with inertia weight that we apply to the source seeking problem in this paper is PSO with a constriction factor. Introduced by Clerc in \cite{Clerc_Constriction}, the constriction factor is used to prevent ``explosion" and ensure convergence of PSO. Equations (\ref{eqn:PSO_constriction}) and (\ref{eqn:K_constriction}) describe the basic concept of the constriction factor.
\begin{equation} \label{eqn:PSO_constriction} v^k_{i+1} = K[v^k_i + U(0,c_1) (Pbest^k - x^k_i)+ U(0,c_2) (Gbest - x^k_i)] \end{equation} \begin{equation} \label{eqn:K_constriction} K = \frac{2}{\left|2-\phi - \sqrt{\phi^2 - 4\phi} \right| },\quad \text{where} \; \phi=c_1+c_2,\; \phi >4 \end{equation} Compared to the original PSO, the entire right-hand side (RHS) of (\ref{eqn:PSO_basic}) is multiplied by a coefficient $K$, called the constriction factor. $K$ is a function of $c_1$ and $c_2$, as shown in (\ref{eqn:K_constriction}). The main idea of constriction PSO is to exploit the mathematical properties of (\ref{eqn:K_constriction}), which guarantee the convergence of the algorithm. A detailed explanation of the mechanism is given in \cite{Clerc_Constriction} and is beyond the scope of this paper. A closer look at (\ref{eqn:PSO_constriction}) reveals that it is a special case of (\ref{eqn:PSO_v}) in which the inertia weight $\omega$ is set to $K$, and $c_1$ and $c_2$ are multiplied by $K$. It is the relation between $\phi$ and $K$ that prevents the swarm from ``explosion". Therefore, according to Clerc, $v_{\max}$ is not necessary when the constriction factor is applied. However, for the practical reasons mentioned before, we keep $v_{\max}$ at a small value to improve the energy efficiency of the robots. $K$ is a decreasing function of $\phi$ with supremum 1; viewed in terms of (\ref{eqn:PSO_v}), this means the supremum of the effective $\omega$ is 1. This suggests that constriction PSO does not emphasize global exploration at the initial stage of the search. Nor does it favor local exploitation, since $K$ does not vary through the search. We also conducted six sets of simulations on the constriction PSO algorithm. In all sets, $c_1$ and $c_2$ are set to the same value, $\phi/2$, to balance the influence of individual and swarm experience. All configurations are identical to those in the previous section unless otherwise specified.
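As a sanity check, the constriction coefficient of (\ref{eqn:K_constriction}) can be computed directly; the sketch below (illustrative code, not part of our controller) reproduces the $K$ values used in the simulations, e.g.\ $\phi = 4.5$ gives $K = 0.5$ and $\phi = 4.1$ gives $K \approx 0.73$.

```python
from math import sqrt

def constriction_factor(c1, c2):
    """Clerc's constriction coefficient K for phi = c1 + c2 > 4."""
    phi = c1 + c2
    if phi <= 4:
        raise ValueError("constriction requires phi = c1 + c2 > 4")
    return 2.0 / abs(2.0 - phi - sqrt(phi * phi - 4.0 * phi))
```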
Data is collected in Table \ref{table:constriction}. \begin{table}[thbp] \centering \caption{Simulation results of constriction PSO} \label{table:constriction} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Set & $K$ & $\phi$ & $v_{\max}$ & $avgGbest$ & $stdGbest$ & $avgI$ & $avgTotalD$ \\ \hline 7 & 0.5 & 4.5 & 500 & 29.2826 & 2.4494 & 51.443 & 15837 \\ \hline 8 & 0.73 & 4.1 & 500 & 28.6432 & 1.9508 & 54.338 & 32282 \\ \hline 9 & 0.8 & 4.05 & 500 & 28.5366 & 1.6601 & 49.583 & 44542 \\ \hline 10 & 0.90 & 4.01 & 500 & 28.3358 & 1.4592 & 43.043 & 62037 \\ \hline 11 & 0.73 & 4.1 & 1000 & 28.2765 & 1.1675 & 48.535 & 52972 \\ \hline 12 & 0.8 & 4.05 & 1000 & 28.1730 & 0.9260 & 45.996 & 77154 \\ \hline \end{tabular} \end{table} Sets 7, 8, 9 and 10 show the impact of decreasing $\phi$, i.e., increasing $K$. As $K$ increases, growing emphasis is put on the previous-velocity term in (\ref{eqn:PSO_constriction}). The seekers therefore tend to explore a larger area and have a higher chance of finding the source. This improvement in performance is evident in these four sets, as both $avgGbest$ and $stdGbest$ decrease with $K$. We can also see that $avgTotalD$ grows with $K$ regardless of how $avgI$ varies. This, too, results from favoring global exploration, since the seekers ``fly" longer distances in each iteration when they emphasize exploration. In sets 11 and 12, we keep $K$ equal to that in sets 8 and 9, respectively, and only double $v_{\max}$. We see significant improvement in performance when $v_{\max}$ doubles, because the larger bound allows the entire RHS of (\ref{eqn:PSO_constriction}) to double, including the previous-velocity term, which emphasizes global exploration in another way. However, this improvement is not seen in sets 5 and 6 of PSO with inertia weight. A reasonable conjecture is that when the coefficient $\omega$ is large, the performance is dominated by $\omega$ rather than $v_{\max}$.
Observing the trajectories of the seekers reveals another feature of constriction PSO, shown in Figure \ref{fig:pso_oscillation}. This figure illustrates one typical simulation result with $K = 0.9$. In this simulation, the source is found after about 40 iterations; however, the swarm does not subsequently converge to the source as in Figures \ref{fig:traj_w1} and \ref{fig:traj_w4}. Instead, all seekers keep oscillating around the source, showing no sign of convergence. Comparing this with the sets on PSO with inertia weight, we find that the violent oscillation in Figure \ref{fig:pso_oscillation} stems from the lack of a damping mechanism in constriction PSO. With a constant coefficient on the previous velocity, the swarm is incapable of switching from favoring global exploration in the initial stage of the search to favoring local exploitation in the later stage. \begin{figure}[thbp] \centering \begin{subfigure}{0.9\linewidth} \includegraphics[width=\textwidth]{./pso_oscillation_a} \caption{Trajectories} \end{subfigure} \begin{subfigure}{0.9\linewidth} \includegraphics[width=\textwidth]{./pso_oscillation_b} \caption{Statistics} \end{subfigure} \caption{Trajectories of seekers with a constriction factor of $K = 0.9$} \label{fig:pso_oscillation} \end{figure} Based on the collected data, our preliminary judgment is that PSO with inertia weight is better suited to our application. Moreover, 1/10th of the length of the search space is a reasonable value for $v_{\max}$. As for the inertia weight, any value between 2 and 4 should produce good results. \subsection{SPSO} \label{sec:SPSO} The last PSO variation studied in this paper is Standard Particle Swarm Optimization (SPSO). It is a substantial improvement over the original PSO published in 1995, and researchers who develop their own PSO implementations commonly benchmark their methods against SPSO. The implementation of SPSO 2006 can be found in \cite{SPSO}.
In this section, we first provide a brief description of SPSO 2006 and then study three SPSO topology models. The velocity update equation in SPSO is almost the same as Equation (\ref{eqn:PSO_v}), except that $Gbest$ is replaced with $Lbest$ -- the best previous position in the neighborhood -- as shown in the following equation. \begin{equation} v^k_{i+1} = \omega v^k_i + U(0,c) (Pbest^k - x^k_i) + U(0,c) (Lbest^k - x^k_i) \label{eqn:SPSO} \end{equation} As a benchmark variation, SPSO has generally accepted values for all parameters. The swarm size is determined by $10+\lfloor 2\sqrt{D}\rfloor$, where $D$ is the dimension of the search space, so we use 12 seekers in this subsection. The other parameter values are \begin{eqnarray} \omega &=& \frac{1}{2\ln(2)}\approx 0.721 \nonumber\\ c &=& \frac{1}{2}+\ln(2) \approx 1.193 \nonumber \end{eqnarray} Please refer to \cite{SPSO2011} for a detailed description of the initialization and confinement of SPSO. A notable distinction of SPSO is the introduction of neighborhoods. A neighborhood defines the communication topology among seekers. In this subsection, we study and compare the implementation of three commonly used models on the source seeking problem. \begin{figure}[htbp] \centering \begin{subfigure}[b]{0.28\textwidth} \includegraphics[width=\textwidth]{./graph_ring} \caption{Ring topology} \label{fig:graph_ring} \end{subfigure}\\% ~ \begin{subfigure}[b]{0.3\textwidth} \includegraphics[width=\textwidth]{./graph_full} \caption{Fully connected topology} \label{fig:graph_full} \end{subfigure}\\ ~ \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=\textwidth]{./graph_random} \caption{Adaptive random topology} \label{fig:graph_random} \end{subfigure} \caption{Graphs of different topologies}\label{fig:graph} \end{figure} Figure \ref{fig:graph} presents the graphs of all three models. Figures \ref{fig:graph}(a) and \ref{fig:graph}(b) are self-explanatory.
Figure \ref{fig:graph}(c) shows the adaptive random topology model \cite{clerc2010particle} with $K = 3$. In this model, each particle informs $K$ random particles and itself of its $Pbest$, which means it informs at most $K+1$ different particles and at least one particle (itself). For instance, in Figure \ref{fig:graph_random}, particle 6 informs particle 2 and itself, and has 5 informants $\{1, 2, 3, 4, 5\}$. The $Lbest$ of a particle is defined as the best $Pbest$ among all its informants. This graph changes after every unsuccessful iteration (one with no improvement in $Gbest$). To compare the aforementioned topology models, we conducted five sets of simulations with different models. Table \ref{table:topology} collects all simulation data. \begin{table}[htbp] \centering \caption{Simulation results with different topology models } \label{table:topology} \begin{tabular}{|c|c|c|c|c|c|} \hline Set & Topology & $avgGbest$ & $stdGbest$ & $avgI$ & $avgTotalD$ \\ \hline 13 & ring & 28.000 & 1.82E-05 & 29.970 & 68380 \\ \hline 14 & fully connected & 28.000 & 6.74E-04 & 29.331 & 65475 \\ \hline 15 & $K = 3$ & 28.000 & 2.20E-04 & 29.259 & 68860 \\ \hline 16 & $K = 6$ & 28.001 & 3.16E-02 & 28.671 & 66913 \\ \hline 17 & $K = 12$ & 28.002 & 4.47E-02 & 29.127 & 65212 \\ \hline \end{tabular} \end{table} Surprisingly, there is no distinguishable difference among these models in terms of either $Gbest$ or $avgTotalD$. Consequently, we cannot draw any solid conclusion on the superiority of one model over the others. One plausible reason for this inconclusive result lies in the number of seekers: 12 seekers seem to be excessive for our implementation, making the influence of the topology model and other parameters negligible. For the same reason, we used only five seekers in the previous subsections, to isolate the influence of the parameters of interest. In future implementations, we would prefer the fully connected model for simplicity.
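The SPSO defaults and the adaptive random neighborhood described above can be sketched as follows (a minimal sketch; the helper names are ours, the paper's topology parameter $K$ appears as `k`, and a fresh call to `random_topology` corresponds to the resampling that SPSO performs after every unsuccessful iteration):

```python
import random
from math import sqrt, log, floor

def spso_parameters(dim):
    """SPSO defaults: swarm size 10 + floor(2*sqrt(D)),
    omega = 1/(2 ln 2) ~ 0.721, c = 1/2 + ln 2 ~ 1.193."""
    return {"swarm_size": 10 + floor(2 * sqrt(dim)),
            "omega": 1.0 / (2.0 * log(2.0)),
            "c": 0.5 + log(2.0)}

def random_topology(n, k):
    """Adaptive random topology: informants[j] is the set of particles that
    inform particle j. Each particle informs itself plus k random particles
    (draws may repeat, so it informs between 1 and k+1 distinct particles)."""
    informants = [{j} for j in range(n)]   # every particle informs itself
    for i in range(n):
        for _ in range(k):
            informants[random.randrange(n)].add(i)
    return informants

def local_best(j, informants, pbest_cost):
    """Index of the best (lowest-cost) Pbest among particle j's informants."""
    return min(informants[j], key=lambda i: pbest_cost[i])
```

For a 2-D search space, `spso_parameters(2)` yields a swarm size of 12, matching the number of seekers used in this subsection.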
\section{PSO in Complex Environment} \label{sec:obstacles} In the previous implementations, the source seeking task is carried out in an ideal obstacle-free environment. In the real world, however, we have to cope with obstacles as well as collisions among seekers, which can be modeled as collision avoidance in the presence of dynamic obstacles. We therefore decompose the obstacle avoidance problem into two stages that deal with static and dynamic obstacles, respectively. \subsection{Static Obstacles} Static obstacles are common in a search environment: structures and uneven terrain are all potential static obstacles for seekers. We first give a short description of two static obstacle avoidance strategies proposed in our previous work \cite{PSO_Rui_AIM}, \cite{PSO_Rui_MSC}, then integrate them into SPSO and compare their performance in simulations. Obstacles are described as simple convex or concave polygons in the search space, as shown in Figure \ref{fig:obstacle_map}. The red star in the center represents the source. Seekers are provided with information about each obstacle's position and size beforehand. The main idea of integrating obstacle avoidance into SPSO is to add a new operation mode to the seekers. They operate in the regular mode, implementing SPSO, when their trajectories do not collide with obstacles, and switch to the obstacle avoiding mode when there is a potential collision. \begin{figure}[htbp] \centering \includegraphics[width=0.9\linewidth]{./obstacle_map} \caption{Map with obstacles} \label{fig:obstacle_map} \end{figure} Strategy 1 inherits the heuristic nature of PSO. It introduces a step with a specific length and a random direction into PSO when an obstacle lies in the next step of a seeker. We set the length of this random step to be the ``diameter" of the obstacle so that the seeker has a good chance of circumventing the obstacle in one step, as shown in Figure \ref{fig:random_step}.
Here, diameter refers to the largest distance between any two points on the obstacle; let $D_j$ denote the diameter of the $j$th obstacle. Algorithm \ref{algo:Strategy1} presents the procedure of this strategy. It is executed whenever a new step is generated for a seeker by PSO; in other words, collision with any obstacle is always checked for every step from $x^k_i$ to $x^k_{i+1}$ before the step is executed. Figure \ref{fig:traj_rand} demonstrates the trajectories of 12 seekers implementing Strategy 1 in SPSO. Different seekers' trajectories are represented by different line styles; ``*" denotes the initial position of each seeker, and a red ``x" represents a potential collision with an obstacle. \begin{algorithm}[htbp] \caption{Static Obstacle Avoidance Strategy 1} \label{algo:Strategy1} \begin{algorithmic} [1] \IF {$x^k_{i+1}$ is in the $j$th obstacle} \REPEAT \STATE set $v^k_{i+1}$ to a random direction and let $|v^k_{i+1}| = D_j$ \STATE $x^k_{i+1} = x^k_i + v^k_{i+1}$ \UNTIL {$x^k_{i+1}$ is not in any obstacle} \ENDIF \STATE Proceed with the normal PSO \end{algorithmic} \end{algorithm} \begin{figure}[htb] \centering \begin{subfigure}[b]{0.8\linewidth} \includegraphics[width=\textwidth]{./random_step} \caption{Strategy 1} \label{fig:random_step} \end{subfigure}\\% ~ \begin{subfigure}[b]{\linewidth} \includegraphics[width=\textwidth]{./traj_rand} \caption{Trajectories of seekers} \label{fig:traj_rand} \end{subfigure} \caption{Static obstacle avoidance -- Strategy 1} \end{figure} Strategy 2 is a variation of the \textit{Bug 1} algorithm \cite{bug_algorithm}. Instead of knowing the position of the goal, a seeker in our case only knows the signal strength at its current position. Once a seeker switches to the obstacle avoidance mode, it starts to circumnavigate the encountered obstacle. As it circumnavigates, it measures the signal strength along its path.
After circumnavigating the entire obstacle, the seeker follows the shortest path along the boundary to the point at which it measured the largest signal strength, and then resumes regular SPSO. Although in our case it is not guaranteed that the seeker ends at the boundary point closest to the source, as in the \textit{Bug Problem}, the chosen point is highly likely to be on the side of the obstacle closer to the source, because the signal strength generally decays with distance even though the decay profile is noisy. This provides the basis for implementing the \textit{Bug 1} algorithm and prevents the seeker from returning to the same obstacle. Figure \ref{fig:bug1} illustrates the trajectory of a seeker implementing the ``Bug 1" algorithm to avoid an obstacle, and Figure \ref{fig:traj_bug} presents the trajectories of 12 seekers implementing Strategy 2 in SPSO. \begin{figure}[htb] \centering \begin{subfigure}[b]{0.8\linewidth} \includegraphics[width=\textwidth]{./bug1} \caption{Strategy 2} \label{fig:bug1} \end{subfigure}\\% ~ \begin{subfigure}[b]{\linewidth} \includegraphics[width=\textwidth]{./traj_bug} \caption{Trajectories of seekers} \label{fig:traj_bug} \end{subfigure} \caption{Static obstacle avoidance -- Strategy 2 (Bug 1 Algorithm)} \end{figure} We now provide more simulation results to compare these two obstacle avoidance strategies. We conducted four sets of simulations: sets 18 and 19 used the parameters of set 2, and sets 20 and 21 used the fully connected topology. Simulation results are collected in Table \ref{table:obstacle}.
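Both static avoidance strategies can be sketched compactly (illustrative code; `obstacle_id`, the diameter table and the boundary sampling are placeholders for the polygon tests in our implementation):

```python
import math
import random

def random_step_avoid(x, x_next, diameters, obstacle_id, max_tries=100):
    """Strategy 1 sketch: if the PSO step lands inside an obstacle, retry
    with a random-direction step of length equal to that obstacle's
    diameter D_j. `diameters[j]` holds D_j; `obstacle_id(p)` returns the
    id of the obstacle containing p, or None."""
    j = obstacle_id(x_next)
    if j is None:
        return x_next                       # regular PSO step is collision-free
    d = diameters[j]
    for _ in range(max_tries):
        theta = random.uniform(0.0, 2.0 * math.pi)
        cand = (x[0] + d * math.cos(theta), x[1] + d * math.sin(theta))
        if obstacle_id(cand) is None:
            return cand                     # random step cleared the obstacle
    return x                                # stay put if no free step was found

def bug1_best_boundary_point(boundary, rssi):
    """Strategy 2 sketch: after circumnavigating, return the boundary point
    with the strongest recorded reading and the shorter boundary direction
    back to it. `boundary` is a closed loop of sampled points starting at
    the hit point; `rssi(p)` is the (noisy) measured strength at p."""
    readings = [rssi(p) for p in boundary]
    best = max(range(len(boundary)), key=readings.__getitem__)
    forward, backward = best, len(boundary) - best
    return boundary[best], ("forward" if forward <= backward else "backward")
```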
\begin{table}[thbp] \centering \caption{Simulation results for two obstacle avoidance strategies} \label{table:obstacle} \begin{tabular}{|c|c|c|c|c|c|c|} \hline Set & Variation & Strategy & $avgGbest$ & $stdGbest$ & $avgI$ & $avgTotalD$ \\ \hline 18 & Inertia & 1 & 28.0229 & 0.3779 & 45.3947 & 68475 \\ \hline 19 & Inertia & 2 & 28.2634 & 1.2469 & 66.3155 & 66400 \\ \hline 20 & SPSO & 1 & 28 & 2.65E-04 & 33.776 & 74017 \\ \hline 21 & SPSO & 2 & 28.0826 & 0.4034 & 35.867 & 50034 \\ \hline \end{tabular} \end{table} Strategy 1 outperforms Strategy 2 in both $avgGbest$ and $stdGbest$ for both PSO variations. The very small standard deviation indicates the high reliability of Strategy 1. The reason Strategy 1 ends with a longer traveled distance is that its random step is usually larger than $v_{\max}$ because of the size of the obstacles, while in Strategy 2 seekers usually take steps shorter than $v_{\max}$ when circumnavigating obstacles. Overall, Strategy 1 is better than Strategy 2 in simulations. Moreover, the performance distinction originating from the different variations is more significant than that from the different strategies, which primarily results from the size of the swarm. \subsection{Dynamic Obstacles} In all previous simulations, seekers are assumed to be points on a plane. In practice, however, they have a finite area. This makes dynamic obstacle avoidance an inevitable issue in the application of swarm robots, since every robot acts as a dynamic obstacle to the others. To deal with this problem, we add two steps to the obstacle avoidance mode. During each iteration, after the $x^k_{i+1}$ are generated by PSO and checked or modified using the static obstacle avoidance strategy, potential collisions among seekers need to be checked. At this stage, there are two possible kinds of collisions: (1) collisions at seekers' end points; (2) collisions along seekers' trajectories.
Since the seekers are assumed to be dimensionless point particles in PSO, the algorithm needs to be modified to account for possible collisions between the robots at the end of their paths in a real scenario. Some seekers' end points may be too close together for the real robots to fit, causing collisions at these end points. To circumvent this problem, we incorporate a model that makes the seekers repel each other so as to rearrange their end points and avoid collision, as described in Algorithm \ref{algo:RepulsiveForce}. \begin{algorithm}[htbp] \caption{End point arrangement using repulsive force} \label{algo:RepulsiveForce} \begin{algorithmic} [1] \STATE $S$ is the set of seekers \STATE $R$ is the radius of a seeker \STATE $t$ is a scaling factor \WHILE {$\exists \; |x^p_{i+1} - x^q_{i+1}| < 2R, \; p,q \in S, p\neq q$} \FOR{each $k \in S$} \FOR{each $j \in S, j \neq k$} \STATE $d = x^k_{i+1}-x^j_{i+1}$ \IF{$|d| \geq 2R$} \STATE $Force(k,j) = 0$ \ELSE \STATE $Force(k,j)= d(2R-|d|)/|d|$ \ENDIF \ENDFOR \STATE $Force(k) = \sum_{j \in S, j \neq k} Force(k,j)$ \STATE $x^k_{i+1} = x^k_{i+1} + t Force(k)$ \ENDFOR \ENDWHILE \end{algorithmic} \end{algorithm} Algorithm \ref{algo:RepulsiveForce} ensures a safe distance between any two seekers and avoids end point collisions. After this, if any seeker happens to lie in the path of others, the second step is activated. In this mode, seekers move sequentially: only one seeker moves at a time while the others stay still. We treat all other seekers as rectangular obstacles and construct a reduced visibility graph \cite{choset2005principles} from the current position $x^k_i$ of the activated seeker to its next position $x^k_{i+1}$. Finally, by applying Dijkstra's algorithm \cite{Dijkstra}, we generate the shortest path from $x^k_i$ to $x^k_{i+1}$. Figure \ref{fig:visibility} presents an example of the visibility graph and the shortest path.
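The end-point rearrangement of Algorithm \ref{algo:RepulsiveForce} can be sketched as follows (a minimal sketch; the value of $t$, the iteration cap, and the skipping of exactly coincident points, which would need a random jitter in practice, are our illustrative assumptions):

```python
import math

def separate_endpoints(pts, R, t=0.5, max_iters=200):
    """Pairwise repulsion on PSO end points: seekers closer than 2R push
    each other apart until all end points are at least 2R apart.
    `pts` is a list of (x, y) end points; R is the seeker radius."""
    pts = [list(p) for p in pts]
    n = len(pts)
    for _ in range(max_iters):
        moved = False
        for k in range(n):
            fx = fy = 0.0
            for j in range(n):
                if j == k:
                    continue
                dx, dy = pts[k][0] - pts[j][0], pts[k][1] - pts[j][1]
                dist = math.hypot(dx, dy)
                if 0.0 < dist < 2 * R:
                    # push magnitude grows with the overlap 2R - dist
                    fx += dx * (2 * R - dist) / dist
                    fy += dy * (2 * R - dist) / dist
                    moved = True
            pts[k][0] += t * fx
            pts[k][1] += t * fy
        if not moved:
            break          # all pairs are at least 2R apart
    return [tuple(p) for p in pts]
```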
Due to the finite non-zero area of a seeker, the boundaries of obstacles and stationary seekers are expanded to the black dashed lines to ensure a safety zone for the activated seeker (the Minkowski sum of the obstacles with the seeker). The solid black lines delineate the visibility graph, and the red dashed line represents the shortest path between $x^k_i$ and $x^k_{i+1}$. \begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{./visibility} \caption{Visibility graph and shortest path} \label{fig:visibility} \end{figure} So far, we have proposed a complete solution for implementing PSO on real robots in a complex environment with potential collisions. In the next section, we describe the experimental setup for the implementation. \section{Experiments} \label{sec:experiments} Our testbed is built on a 5 m$\times$5 m area covered by a Vicon tracking system. This system provides accurate position information by recognizing markers on the robots, serving as an indoor GPS. The source is an XBee module hanging in the middle at a height of 20 cm above the floor; we do not place it on the floor in order to avoid potential collisions with the robots. The robots used in the experiments are small differential-drive robots, modified from the Arduino-controlled Parallax Shield-Bot. Each robot is equipped with an XBee module to measure RSSI. Figures \ref{fig:setup} and \ref{fig:setup_close} are pictures of the testbed and robots, and Figure \ref{fig:complex_environment} illustrates the complex environment with obstacles in which the experiments were conducted.
\begin{figure}[thbp] \centering \includegraphics[width=0.9\linewidth]{./experiment_set_up} \caption{Testbed} \label{fig:setup} \end{figure} \begin{figure}[thbp] \centering \includegraphics[width=0.9\linewidth]{./Boebots} \caption{Close look of robots and source} \label{fig:setup_close} \end{figure} \begin{figure}[thbp] \centering \includegraphics[width=0.9\linewidth]{./complex_environment} \caption{Experiment environment with obstacles} \label{fig:complex_environment} \end{figure} In the experiments, we built a centralized system with a computer at the center, collecting information from, and distributing it to, all robots. This is not strictly necessary, since the strategies proposed in this paper are not computationally expensive and can be implemented on robots without strong computational ability. Moreover, since the robots can also communicate with each other, the system can work effectively without a central unit provided each robot has access to its own position. Two successful experiments were recorded in the video: in each, five robots were deployed to seek the source, implementing the proposed strategies in an environment with obstacles. The parameters were chosen to be the same as in set 2 of Table \ref{table:lamda}. \section{Conclusion} \label{sec:conclusion} In this paper, we explored the application of PSO to the electromagnetic source seeking problem. We modified PSO in accordance with the physical constraints posed by the robots and the environment, and evaluated three PSO variations through simulations. We found that inertia weight PSO is best suited to our implementation and provided guidelines on parameter selection. We extended PSO from a pure computational technique to a complete solution to the source seeking problem in a complex environment. Collision avoidance techniques were discussed extensively, and a complete obstacle avoidance strategy was incorporated into PSO.
Our work was ultimately validated in experiments with real robots. In the future, we plan to explore and develop more advanced PSO variations specific to robotics applications. We would also like to extend our work to more general source seeking scenarios, where sources may have different characteristics and the obstacles in the environment cannot be simplified as polygons. Although it is unlikely that any single variation can perform effectively in all scenarios, it is possible to explore the preferences of various scenarios and provide guidance on the selection of variations and parameter configurations. \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran} \section{Introduction} Seeking a source with autonomous vehicles is an area of growing interest with wide applications. The source could be an electromagnetic signal, an acoustic signal, a thermal signal, or a chemical/biological agent. Motivated by the source-seeking behavior exhibited by natural species at scales from the microscopic \cite{Optimotaxis} to the macroscopic \cite{AnimalNavigation}, researchers have developed robots \cite{TDoA} and sensor networks \cite{Coop_control} that imitate these behaviors in order to perform complex tasks such as environment monitoring, search and rescue operations, explosive detection, drug detection, sensing leakage of hazardous chemicals, pollution sensing and environmental studies. In this work, we address the problem in which a team of mobile agents, called the seekers, attempts to find the location of a source that emits an electromagnetic signal of unknown strength. The seekers can continuously sense the signal strength transmitted by the source at their current positions; this strength generally decays with distance from the source. The decay profile of the signal strength is very noisy, as shown in Figure \ref{fig:Map}, which makes many existing methods inapplicable.
Based on this information, we investigate how to modify Particle Swarm Optimization (PSO) and apply it to a swarm of mobile robots to approach the source seeking problem. \begin{figure}[htb] \centering \begin{subfigure}[b]{\linewidth} \includegraphics[width=\textwidth]{./RSSI} \end{subfigure}\\% \begin{subfigure}[b]{\linewidth} \includegraphics[width=\textwidth]{./RSSI3D} \end{subfigure} \caption{Map of RSSI (Received Signal Strength Indication). The color bars on the right indicate the signal strength in dBm. In this map, the source is located in the center where the RSSI is approximately -29 dBm} \label{fig:Map} \end{figure} A vast amount of research has been done on source seeking with autonomous agents based on the idea of gradient descent/ascent. Many methods employ a single agent to search for single or multiple static targets. The authors of \cite{Nehorai} propose methods for computing the gradients of the Cramer-Rao bound on the location error with respect to a sensor's coordinates and moving the sensor opposite to the corresponding gradient directions. They extend their techniques in \cite{VaporSource} to provide a motion planning strategy for a mobile sensor used for detecting low-concentration vapors. This method assumes very small sensing noise in the measurement model, and its applicability to a very noisy model is not known. The same issue is encountered by the algorithm implemented on Autonomous Underwater Vehicles to localize hydrothermal vents in \cite{AUVCircle}: the method of estimating the local gradient by taking measurements on a circle may no longer be effective with such a noisy model. In \cite{RotationBasedAngle}, the authors utilize the difference in RSSI received by a rotating directional antenna on the single wing of the Samarai MAV to obtain the gradient. However, the application of this method is restricted by the dynamics of the Samarai MAV, whose entire body, including the mono-wing, rotates at all times for stable hovering \cite{SingleWingMAV}.
For other types of robots, additional structure would have to be added, and extra energy allocated to rotate the antenna at all times, which is neither convenient nor cost-efficient. Recently, extremum seeking \cite{Extremum_seeking} techniques have been adopted for the source seeking problem. Extremum seeking has been applied to nonholonomic vehicles in both 2-D \cite{Non-h_forward_v, Cochran2009} and 3-D \cite{3-D_seeking} environments using sinusoidal and stochastic perturbation \cite{Liu20101443}. The work in \cite{Stanković20101243} also addresses the issue of stochastic noise in measurements, using the methodology of stochastic approximation to deal with colored noise. Nevertheless, in all the extremum seeking related work, only a single vehicle is used to collect measurements at different locations, which is time-consuming. In addition, the trajectories generated by extremum seeking always demand costly maneuvers. The authors of \cite{Multi_deployment} propose strategies to deploy a group of vehicles around the source while applying extremum seeking. However, their focus is on achieving a formation distribution of the vehicles in accordance with signal strength rather than exploiting the vehicle swarm for collecting measurements, so this work is not a favorable example of multi-agent source seeking. The advantages of robot swarms and sensor networks have attracted many researchers to study them and apply them to the source seeking problem. In \cite{CircularFormation, ConsensusCircularFormation} and \cite{MultiUAVMovingSource}, a team of agents implements a consensus algorithm to maintain a particular formation in order to track the gradient of the source. This method assumes that the swarm formation is maintained perfectly, which is too idealized to achieve. Cooperative source seeking algorithms are proposed in \cite{WZhang_Switching, Coop_control, veh_network} and \cite{Choi2012}.
In \cite{WZhang_Switching}, the authors provide algorithms and experimental validation of a switching strategy for a team of agents trying to localize a source: each agent switches from individual exploration to cooperative exploration only when an individual gradient estimate is not available. A distributed coordination algorithm based on adaptive control is proposed in \cite{Choi2012}. However, it does not take noise into consideration, which is crucial in our problem. In \cite{Coop_control} and \cite{veh_network}, cooperative control is applied to deploy agents in a way that is optimal for gradient estimation. These two methods, together with all other methods in this paragraph, are prone to being trapped in a local minimum: since all of them deploy the swarm or network in a close neighborhood, they lack global information about the model. The authors of \cite{pappas_stochastic} and \cite{Atanasov2014} apply stochastic approximation to the problem and enable the swarm to find the source in complex and noisy environments, but the computational complexity of the method hinders its implementation on cheaper and less capable robots. There are other source seeking methods that are non-gradient based. Some are developed by obtaining source functions. In \cite{Point_source, Source_obstacles, ElBadia2002, ElBadia2005} and \cite{Komornik2005}, researchers formulate the source seeking problem as an inverse problem. Depending on the source type, heat and wave equations are commonly used as candidate source functions. The parameters of the source function are found by optimizing the difference between collected data and data simulated from the candidate function, and the source can be located once the source function is obtained. For instance, in \cite{Source_obstacles}, the incoming directions of waves are obtained by solving the inverse problem; by tracing back along the wave direction rays, the source is located at the intersection of the rays.
This method cannot be applied to our problem because it requires a priori knowledge of the candidate function that governs the decay profile of the source, as well as some information about the source such as wavelength and frequency. In our problem, only the signal strength can be measured, and the signal decay profile is unknown. For the same reason, the statistical signal-processing technique `independent component analysis' \cite{Albini2003451} and the statistical methods \cite{Chemical_plume, Optimotaxis}, \cite{Following_RF} used to construct a maximum likelihood map of the source location cannot be used for our problem. In \cite{IslerTrackingFish}, researchers use directional antennas to obtain bearing measurements of tagged fish, and localize the fish by triangulation. The method has been applied effectively to localizing invasive carp in a lake, but it is not applicable to our problem since we can only access scalar measurements instead of bearing ones. In this paper, we address this problem using Particle Swarm Optimization (PSO), which does not require any a priori knowledge of the signal model emanating from the source. PSO is a heuristic, non-gradient-based, evolutionary computation technique. It was first proposed by Kennedy and Eberhart \cite{PSO}, who were inspired by the behavior of bird flocks and fish schools. Since then, many variations of PSO have been proposed, such as inertia weight PSO \cite{ModifiedPSO}, constriction PSO \cite{Clerc_Constriction} and neighborhood PSO \cite{NeighborPSO} in its early days, and quantum-behaved PSO \cite{QuantumPSO} and digital pheromone PSO \cite{Vijay_digital} more recently. As a swarm optimization technique, PSO has been applied to source seeking tasks involving mobile robots. In \cite{PSOPugh2006} and \cite{PSOPugh2007}, PSO is modified to adapt to multi-robot search.
The authors discuss the limitations posed by physical robots and conduct simulations with several communication models. However, these simulations are limited to benchmark functions rather than real-world signal sources. Since real sources such as electromagnetic signals, odor and heat sources differ considerably from the benchmark functions, the results in these papers are of limited use for implementation. In \cite{PSOJatmiko} and \cite{PSOMarques}, the authors incorporate a potential-field-based motion planning method into PSO, and propose strategies for localizing static and moving odor sources in complex environments. While their strategies have proven effective in simulations for localizing odor sources, we focus our attention on electromagnetic sources. We also explore different variations of PSO and compare the performance of various parameter configurations, topology models and obstacle avoidance strategies. In essence, our work focuses on finding the most effective PSO variation to solve the electromagnetic source seeking problem, and on validating it in real experiments. The authors of \cite{PSODerr} also use RF signals as the sources to be sought, but this work does not consider a more complex environment where obstacles exist, and is limited to simulations. A method for obtaining optimal PSO parameters is proposed in \cite{PSODoctor}. Essentially, it applies PSO at a lower level to seek the source, and at a higher level to seek the optimal parameter configuration. This iterative way of finding optimal parameters is computationally reasonable, but not applicable to real robots: it requires a significant number of trials at the lower level, which leads to an enormous amount of experimental data. The effort required to obtain the optimal parameter configuration in a specific scenario is extravagant for the simple goal of finding the source, especially since this optimal configuration can hardly be applied to a different scenario.
\cite{HerefordPSO2007} provides some experimental results of implementing a modified PSO on real robots. The authors use a diffuse light source as the target and provide a simple strategy for dealing with obstacles. The experiments illustrate the efficacy of implementing PSO on real robots, but are constrained to a specific PSO configuration. In this work, we present extensive simulation and experimental results for various PSO variations and configurations, and provide suggestions on parameter selection. This work is based on our previous exploration \cite{PSO_Rui_AIM}, \cite{PSO_Rui_MSC} of applying PSO to the source seeking problem. The main contributions of this paper are as follows. 1) We use a non-gradient-based technique for the source seeking problem due to the inherent irregularity in the signal model. 2) We incorporate the physical constraints posed by robots in the implementation and evaluation of PSO. 3) Guidelines are presented for choosing proper parameters for several PSO variations. A strategy that enables PSO to be implemented experimentally in a complex environment is presented for the first time in this paper. This paper is outlined as follows. In Section \ref{sec:background}, we provide background information on the problem description and the main concept of PSO. In Section \ref{sec:variations}, we evaluate and compare three PSO variations with different parameters. In Section \ref{sec:obstacles}, we propose collision avoidance strategies for implementing PSO in environments containing obstacles. In Section \ref{sec:experiments}, we present a description of the experimental setup, and discuss the implementation results. Finally, we conclude this work in Section \ref{sec:conclusion}.
\section{Background} \label{sec:background}
\subsection{Problem Description}
Consider a point source located on a plane that continuously transmits/emits a signal.
Based on the assumption that a static source is present in the vicinity, a group of mobile agents, called {\it{seekers}}, explore the environment to locate the source. The scenario is similar to a colony of ants trying to locate a food source. The seekers are assumed to be holonomic kinematic agents with maximum speed $v_{\max}$. The seekers have the capability to measure the strength of the signal emitted by the source at their current locations. However, the seekers have no information about the location of the source, its signal strength or its decay profile. The objective of the seekers is to find the location of the source, which is assumed to be where the signal strength is maximum. For most sources, the signal intensity decays radially as the distance to the source increases. The decay profiles of some common sources are shown below:
\begin{itemize}
\item Let $P$ denote the power at which an electromagnetic source emits a signal. The decay profile of the signal intensity is given by the following equation \cite{rawnet11}
\begin{equation} {P_A} = \frac{{cP}}{{(1 + d)}^\alpha }, \label{eqn:pow} \end{equation}
where $P_{A}$ is the power of the signal measured at a point $A$ on the plane located at a distance $d$ from the source, and $c$ and $\alpha$ are constants that depend on the physical parameters of the medium through which the signal is transmitted.
\item The concentration of a chemical $c(\vec{r}, t)$ at a point $\vec{r}$ emitted from a point source located at $\vec{\rho}$ emitting vapors at a constant rate of $\mu$ kg/s is given by \cite{VaporSource}
\begin{equation} \label{eqn:vapor} c\left( {\vec r,t} \right) = \frac{\mu }{{4\pi \kappa \left| {\vec r - \vec \rho } \right|}}{\text{erfc}}\left( {\frac{{\left| {\vec r - \vec \rho } \right|}}{{2\sqrt {\kappa (t - {t_0})} }}} \right), \end{equation}
where $\kappa$ is the constant diffusivity in $m^2/s$.
If we ignore the complementary error function ${\text{erfc}}(x) = (2/\sqrt \pi )\int_x^\infty {{e^{ - {y^2}}}dy}$ and only consider the dominant part to its left in equation (\ref{eqn:vapor}), the substance concentration is inversely proportional to the distance between $\vec{r}$ and the vapor source.
\item For a spherical sound source, the acoustic intensity $I_r$ at a point at a distance $r$ from the center in the radial direction is given by \cite{PASPWEB2010}
\begin{equation*} I_r = \frac{P}{4\pi r^2} \end{equation*}
\end{itemize}
However, in reality the measured signal intensity is too noisy to be accurately described by these decay profiles. For instance, reflection, refraction, multi-path fading, etc., can influence the decay profile dramatically, making the actual profile substantially different from the theoretical one. Figure \ref{fig:Map} illustrates a real RSSI (Received Signal Strength Indication) map of an RF source provided by an XBee\textregistered\ ZB RF module on a 5 m$\times$5 m plane. The XBee module was located at the center of this area, and the measurements were taken by another XBee module. The figure clearly illustrates that the real RSSI profile has many local extrema and is non-differentiable almost everywhere, contrary to the theoretical decay profile described by (\ref{eqn:pow}) and shown in Figure \ref{fig:Map_theory}.
\begin{figure}[htb] \centering
\begin{subfigure}[b]{0.9\linewidth} \includegraphics[width=\textwidth]{./RSSI_theory} \end{subfigure}\\%
\begin{subfigure}[b]{0.9\linewidth} \includegraphics[width=\textwidth]{./RSSI_theory3D} \end{subfigure}
\caption{Theoretical decay profile of an electromagnetic source} \label{fig:Map_theory} \end{figure}
Therefore, an optimization method not limited by differentiability requirements, yet able to search highly multi-modal design spaces, is desired for direct RSSI measurements, as portrayed in Figure \ref{fig:Map}.
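To make this contrast concrete, the following Python sketch evaluates the ideal profile in (\ref{eqn:pow}) next to a toy noisy reading; the multiplicative fading term is an illustrative assumption, not the measured XBee behavior.

```python
import random

def theoretical_power(d, P=100.0, c=1.0, alpha=2.0):
    """Ideal received power P_A = c*P / (1 + d)^alpha, cf. the decay profile above."""
    return c * P / (1.0 + d) ** alpha

def noisy_measurement(d, fading=0.5, rng=random):
    """Toy model of a real RSSI reading: the ideal value perturbed by
    multiplicative noise (an assumption made purely for illustration)."""
    return theoretical_power(d) * (1.0 + rng.uniform(-fading, fading))

# The ideal profile is monotone in distance, so its gradient always points
# toward the source; the noisy profile is not, which is why a non-gradient,
# multi-modal-capable search method is needed.
```

With a large fading term, a reading taken farther from the source can exceed one taken closer, so a gradient estimated from two nearby measurements may point away from the source.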
Also, since mobile robots are used within the environment to seek the maximum of the signal, a population-based method in which each population member has a one-to-one correspondence with a mobile robot is favored. Particle Swarm Optimization (PSO), as described in Section \ref{sec:Original}, has the ingredients required to address the above challenges.
\subsection{Original PSO} \label{sec:Original}
In this subsection, we provide a brief description of the concept of PSO. PSO is a population-based search algorithm first proposed in \cite{PSO} by Kennedy and Eberhart through simulation of a simplified social model. Although PSO was originally designed to solve minimization problems, it can be used to find the maximum of a function with a simple change of sign. It is initialized with a number of random solutions, called \textit{particles}. Each particle is also randomly initialized with a velocity within some user-designed range. Each particle evolves iteratively in the search space, trying to improve the solution in the following manner:
\begin{eqnarray}
v^k_{i+1} &=& v^k_i + U(0,c_1) (Pbest^k - x^k_i) \nonumber \\
& &+ U(0,c_2) (Gbest - x^k_i) \label{eqn:PSO_basic}\\
x^k_{i+1} &=& x^k_i + v^k_{i+1} \nonumber
\end{eqnarray}
where $x^k_{i+1}$ and $v^k_{i+1}$ represent the position and velocity of the $k$th particle in the $(i+1)$th iteration, $U(0,c_1)$ and $U(0,c_2)$ are uniformly distributed random numbers within $[0, c_1]$ and $[0, c_2]$, and $Pbest^k$ and $Gbest$ are the best previous position of a particle and the best previous position in the swarm, respectively. A best previous position is the position where a particle obtains the minimum cost in its search history. In our case, the above equations can be interpreted in the following way: assuming the $n$ seekers are $n$ particles moving in the search space $X$, the position of the $k$th seeker in the $i$th iteration is denoted as $x^k_i \in X \subset \mathbb{R}^2$.
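As a concrete illustration (a minimal sketch, not the simulation code used in this paper), the update rule in (\ref{eqn:PSO_basic}) can be written in Python for a generic cost function:

```python
import random

def minimal_pso(cost, n=5, iters=50, c1=2.0, c2=2.0, lo=-5.0, hi=5.0, seed=0):
    """Minimal PSO: minimize `cost` over [lo, hi]^2 with n particles."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(2)] for _ in range(n)]
    vel = [[rng.uniform(-1.0, 1.0) for _ in range(2)] for _ in range(n)]
    pbest = [p[:] for p in pos]              # Pbest^k: per-particle best position
    pcost = [cost(p) for p in pos]
    g = min(range(n), key=lambda k: pcost[k])
    gbest, gcost = pbest[g][:], pcost[g]     # Gbest: swarm-wide best position
    for _ in range(iters):
        for k in range(n):
            for d in range(2):
                # velocity update: previous velocity + cognitive + social terms
                vel[k][d] += (rng.uniform(0.0, c1) * (pbest[k][d] - pos[k][d])
                              + rng.uniform(0.0, c2) * (gbest[d] - pos[k][d]))
                pos[k][d] += vel[k][d]       # position update
            c = cost(pos[k])
            if c < pcost[k]:                 # refresh the personal best ...
                pbest[k], pcost[k] = pos[k][:], c
                if c < gcost:                # ... and, if better, the global best
                    gbest, gcost = pos[k][:], c
    return gbest, gcost
```

Running this on a simple quadratic cost drives $Gbest$ monotonically downward, since the best-position records are only ever replaced by better ones.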
The cost function $f: \mathbb{R}^2 \rightarrow \mathbb{R}$ incurred by each seeker is the negative of the signal strength received at its current location. The objective of the seekers is to communicate and move in a manner so as to reach the global minimum of the cost function. We initialize the position of each seeker with a uniformly distributed vector $x^k_1$ in the search space and its velocity $v^k_1$ within a given bound. Each seeker is assumed to know its own best previous position and the global best previous position, based on the assumption that each seeker has the memory to store its own previous experience and can benefit from the previous experience of all other members. Therefore, in (\ref{eqn:PSO_basic}), the velocity $v^k_{i+1}$ consists of three terms: the effect of the seeker's previous velocity, its best known position and the global best known position.
\section{PSO Modifications and Variations} \label{sec:variations}
In this section, we introduce some physical constraints into PSO, compare the performance of three different PSO variations and provide guidelines on parameter selection. First, we consider the bounds of the search space. As in any optimization technique, boundary conditions exist in PSO. However, different actions are taken when particles violate boundary conditions: some implementations discard these particles, some bounce them back, and some confine them to the boundary. Discarding seekers may impair the performance, especially when the total number of seekers is limited and every seeker is valuable. Therefore, we choose to confine seekers to the boundary, namely, Constraint 1: If $x^k_{i+1} \notin X$, then $x^k_{i+1}$ is set to the boundary point of $X$ in the direction of $v^k_{i+1}$. In PSO, there is no constraint on the velocity of a particle. It is possible for a particle to fly across the entire search space in a single iteration.
However, this does not apply to the seekers in our case, which are actually ground robots. It is more appropriate to treat the velocity of a particle as a step of a robot in our implementation, which decomposes the step length into speed and duration. Given a sufficiently long duration, a robot can also take a large step that crosses the entire search space, but at the expense of a longer search time and greater energy consumption, which is crucial for a robot with limited battery capacity. Conversely, the step length should not be so small that it degrades the performance of PSO. To balance performance and efficiency, simulations to find a proper step length are conducted below. We denote the step length by $v_{\max}$, and check the following constraint in every iteration. Constraint 2: If $|v^k_{i+1}| > v_{\max}$, then $|v^k_{i+1}| = v_{\max}$ with the direction of $v^k_{i+1}$ unchanged. The aforementioned constraints apply to all simulations and experiments in this paper.
\subsection{PSO with Inertia Weight} \label{sec:InertiaWeight}
One variation of the original PSO is to introduce an inertia weight $\omega$ on the previous velocity in (\ref{eqn:PSO_basic}), which leads to the following velocity update equation \cite{ModifiedPSO},
\begin{equation} v^k_{i+1} = \omega_i v^k_i + U(0,c_1) (Pbest^k - x^k_i)+ U(0,c_2) (Gbest - x^k_i) \label{eqn:PSO_v} \end{equation}
According to Shi and Eberhart's analysis in \cite{ModifiedPSO}, the inertia weight is critical in balancing global and local search. If $\omega$ is set to zero, the seekers become ``memoryless'' about their past velocities. With the seekers' velocities determined only by the individual and global best previous positions, all seekers would converge directly toward the global best position, making the search process resemble a local search. On the contrary, if $\omega$ is set to a larger value, the seekers persist more in their previous velocities, which leads them to explore a larger area.
In other words, a larger inertia weight facilitates global exploration while a smaller one facilitates local exploitation to fine-tune the current search area \cite{PSOPara}. Therefore, applying a damping mechanism to $\omega$ contributes to better global exploration in the beginning stage and better local exploitation when the swarm is closer to the source. The study in \cite{kennedy2001swarm} shows that $c_1$ and $c_2$ together contribute to the oscillation behavior of the seekers: as the values of $c_1$ and $c_2$ are increased, the frequency of oscillation of the seekers' trajectories also increases. Hereafter, we set $c_1 = c_2 = 2$ as suggested in \cite{kennedy2001swarm}. As the damping mechanism, we multiply $\omega$ by a damping coefficient $\lambda_{\omega}$, set to $\lambda_{\omega} = 0.95$ as suggested in \cite{Vijay_digital}. Equation (\ref{eqn:omega}) is applied in every iteration after the velocity is updated:
\begin{equation} \label{eqn:omega} \omega_{i+1} = \lambda_{\omega}\omega_i, \quad \text{with} \quad \lambda_{\omega} = 0.95 \end{equation}
We choose the swarm size to be five in Sections \ref{sec:InertiaWeight} and \ref{sec:Constriction}; this choice will be explained in Section \ref{sec:SPSO}. Six sets of simulations with different initial $\omega$ and $v_{\max}$ were conducted. In each set, we ran 1000 simulations on the real RSSI design space described by Figure \ref{fig:Map}. The cost function is defined as the negative RSSI at each point, which needs to be minimized. Each simulation was terminated when $Gbest$ remained unchanged for 20 iterations. Since the signal strength at the source is -28 dBm, we compared $Gbest$ with 28 after each simulation. In addition, we counted the number of iterations $I$ and the total distance traveled by all robots $TotalD$. The following statistics are also collected:
\begin{itemize}
\item $avgGbest$ denotes the mean of $Gbest$.
\item $stdGbest$ denotes the standard deviation of $Gbest$.
\item $avgI$ denotes the mean of $I$.
\item $avgTotalD$ denotes the mean of $TotalD$.
\end{itemize}
Simulation results are shown in Table \ref{table:lamda}, where the units of $v_{\max}$ and $avgTotalD$ are mm/iteration and mm, respectively.
\begin{table}[thbp] \centering \caption{Simulation results with different $\omega$ and $v_{\max}$, and a damping coefficient $\lambda_{\omega} = 0.95$ } \label{table:lamda}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline
Set & $\omega_1$ & $v_{\max}$ & $avgGbest$ & $stdGbest$ & $avgI$ & $avgTotalD$ \\ \hline
1 & 2 & 500 & 28.1095 & 0.8095 & 45.639 & 52017 \\ \hline
2 & 3 & 500 & 28.0584 & 0.5715 & 46.613 & 66716 \\ \hline
3 & 4 & 500 & 28.0358 & 0.3631 & 47.187 & 77312 \\ \hline
4 & 5 & 500 & 28.0215 & 0.1187 & 47.855 & 84749 \\ \hline
5 & 2 & 1000 & 28.0378 & 0.3553 & 48.313 & 101966 \\ \hline
6 & 3 & 1000 & 28.0272 & 0.1507 & 50.747 & 133442 \\ \hline
\end{tabular} \end{table}
The first four sets illustrate the effect of increasing $\omega_1$ on the performance of the search algorithm. As $\omega_1$ increases from 2 to 5, $avgGbest$ gets closer to 28, which means the seekers perform better in locating the source. Meanwhile, the decreasing $stdGbest$ represents the growing reliability of the algorithm, which is another indicator of improved performance. This improvement is supported by the fact that a larger $\omega$ facilitates global exploration: with a larger initial $\omega$, seekers tend to preserve their previous velocity and explore a larger area in early iterations. Therefore, they are less likely to be trapped in a local minimum, and more likely to find the global minimum. However, the improved performance comes at the expense of higher energy consumption. Though the average number of iterations $avgI$ is not clearly related to the change in $\omega_1$, $avgTotalD$ in set 4 is about 2.5 times that of set 1.
Figures \ref{fig:traj_w1} and \ref{fig:traj_w4} demonstrate the reason; seekers are represented by different colors and the small circles represent initial positions. It is clear from the figures that trajectories with $\omega_1=4$ cycle over a larger area and converge more slowly than those with $\omega_1 = 1$. We can thus clearly see a trade-off between performance and energy consumption.
\begin{figure}[thbp] \centering
\begin{subfigure}{0.9\linewidth} \includegraphics[width=\textwidth]{./traj_w1a} \caption{Trajectories of seekers with $\omega = 1$} \end{subfigure}\\%
\begin{subfigure}{0.9\linewidth} \includegraphics[width=\textwidth]{./traj_w1b} \caption{Statistics with $\omega = 1$} \end{subfigure}
\caption{Trajectories of seekers with $\omega = 1$} \label{fig:traj_w1} \end{figure}
\begin{figure}[thbp] \centering
\begin{subfigure}{0.9\linewidth} \includegraphics[width=\textwidth]{./traj_w4a} \caption{Trajectories of seekers with $\omega = 4$} \end{subfigure}
\begin{subfigure}{0.9\linewidth} \includegraphics[width=\textwidth]{./traj_w4b} \caption{Statistics with $\omega = 4$} \end{subfigure}\\%
\caption{Trajectories of seekers with $\omega = 4$} \label{fig:traj_w4} \end{figure}
Sets 5 and 6 reveal the influence of $v_{\max}$. Comparing sets 5 and 6 with sets 1 and 2, we find a slight improvement in $avgGbest$ and $stdGbest$ when $v_{\max}$ doubles. However, the average total distance traveled also doubles. Moreover, if we take set 4 into consideration, it is apparent that increasing $\omega$ is a better strategy than increasing $v_{\max}$ in terms of both performance and energy efficiency.
\subsection{PSO with Constriction Factor} \label{sec:Constriction}
Another variation similar to PSO with inertia weight that is applied to the source seeking problem in this paper is PSO with a constriction factor. Introduced by Clerc in \cite{Clerc_Constriction}, the constriction factor is used to prevent ``explosion'' and ensure the convergence of PSO.
Equations (\ref{eqn:PSO_constriction}) and (\ref{eqn:K_constriction}) describe the basic concept of the constriction factor.
\begin{equation} \label{eqn:PSO_constriction} v^k_{i+1} = K[v^k_i + U(0,c_1) (Pbest^k - x^k_i)+ U(0,c_2) (Gbest - x^k_i)] \end{equation}
\begin{equation} \label{eqn:K_constriction} K = \frac{2}{\left|2-\phi - \sqrt{\phi^2 - 4\phi} \right| },\quad \text{where} \; \phi=c_1+c_2,\; \phi >4 \end{equation}
Compared to the original PSO, the entire RHS of (\ref{eqn:PSO_basic}) is multiplied by a coefficient $K$, called the constriction factor. $K$ is a function of $c_1$ and $c_2$, as shown in (\ref{eqn:K_constriction}). The main idea of constriction PSO is to take advantage of the mathematical properties of (\ref{eqn:K_constriction}), which guarantee the convergence of the algorithm. A detailed explanation of the mechanism of constriction PSO can be found in \cite{Clerc_Constriction} and is beyond the scope of this paper. A closer look at (\ref{eqn:PSO_constriction}) reveals that it is a special case of (\ref{eqn:PSO_v}) in which the inertia weight $\omega$ is set to $K$, and $c_1$ and $c_2$ are multiplied by $K$. It is the relation between $\phi$ and $K$ that prevents the swarm from ``explosion''. Therefore, according to Clerc, $v_{\max}$ is not necessary when the constriction factor is applied. However, for the application reasons mentioned before, we keep $v_{\max}$ at a smaller value to improve the energy efficiency of the robots. $K$ is a decreasing function of $\phi$ with supremum 1; in terms of (\ref{eqn:PSO_v}), the supremum of $\omega$ is 1. This suggests that constriction PSO emphasizes neither global exploration at the initial stage of the search nor local exploitation later, since $K$ does not vary throughout the search. We also conducted six sets of simulations on the constriction PSO algorithm.
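Before turning to the simulations, the mapping from $\phi$ to $K$ in (\ref{eqn:K_constriction}) can be sketched directly; the short Python function below reproduces the $(K, \phi)$ pairs used in the simulations.

```python
import math

def constriction_factor(c1, c2):
    """K = 2 / |2 - phi - sqrt(phi^2 - 4*phi)| with phi = c1 + c2 > 4."""
    phi = c1 + c2
    if phi <= 4:
        raise ValueError("constriction requires phi = c1 + c2 > 4")
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))

# With c1 = c2 = phi/2: phi = 4.5 gives K = 0.5, phi = 4.05 gives K = 0.8,
# and phi = 4.1 gives K close to 0.73.
```

Because $K$ decreases monotonically in $\phi$, choosing $\phi$ only slightly above 4 yields the larger $K$ values (closer to 1) that emphasize the previous-velocity term.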
In all sets, $c_1$ and $c_2$ are set to the same value of $\phi/2$ to balance the influence of individual and swarm experience. All configurations are identical to those in the previous section unless otherwise specified. The data are collected in Table \ref{table:constriction}.
\begin{table}[thbp] \centering \caption{Simulation results of constriction PSO} \label{table:constriction}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline
Set & $K$ & $\phi$ & $v_{\max}$ & $avgGbest$ & $stdGbest$ & $avgI$ & $avgTotalD$ \\ \hline
7 & 0.5 & 4.5 & 500 & 29.2826 & 2.4494 & 51.443 & 15837 \\ \hline
8 & 0.73 & 4.1 & 500 & 28.6432 & 1.9508 & 54.338 & 32282 \\ \hline
9 & 0.8 & 4.05 & 500 & 28.5366 & 1.6601 & 49.583 & 44542 \\ \hline
10 & 0.90 & 4.01 & 500 & 28.3358 & 1.4592 & 43.043 & 62037 \\ \hline
11 & 0.73 & 4.1 & 1000 & 28.2765 & 1.1675 & 48.535 & 52972 \\ \hline
12 & 0.8 & 4.05 & 1000 & 28.1730 & 0.9260 & 45.996 & 77154 \\ \hline
\end{tabular} \end{table}
Sets 7, 8, 9 and 10 show the impact of decreasing $\phi$, i.e., increasing $K$. As $K$ increases, growing emphasis is put on the previous-velocity term in (\ref{eqn:PSO_constriction}). Therefore, the seekers tend to explore a larger area and have a higher chance of finding the source. This improvement in performance is evident in these four sets, as both $avgGbest$ and $stdGbest$ decrease with increasing $K$. We can also see that $avgTotalD$ grows with $K$ regardless of how $avgI$ varies. This is likewise the result of favoring global exploration, since the seekers ``fly'' longer distances in each iteration when they emphasize exploration. In sets 11 and 12, we keep $K$ equal to that in sets 8 and 9, respectively, and only double $v_{\max}$. We can see a significant improvement in performance when $v_{\max}$ doubles, because doubling $v_{\max}$ allows the magnitude of the velocity in (\ref{eqn:PSO_constriction}) to double, including the previous-velocity term, which emphasizes global exploration in another way. However, this improvement is not seen in sets 5 and 6 of PSO with inertia weight.
A reasonable guess may be that when the coefficient $\omega$ is large, the performance is mainly influenced by $\omega$ rather than $v_{\max}$. Observation of the trajectories of the seekers reveals another feature of constriction PSO, which can be seen in Figure \ref{fig:pso_oscillation}. This figure illustrates one typical simulation result with $K = 0.9$. In this simulation, the source is found after about 40 iterations; however, the swarm does not converge to the source afterwards as in Figures \ref{fig:traj_w1} and \ref{fig:traj_w4}. Instead, all seekers keep oscillating around the source, showing no sign of convergence. Comparing this to the sets of PSO with inertia weight, we find that the violent oscillation in Figure \ref{fig:pso_oscillation} is rooted in the lack of a damping mechanism in constriction PSO. With a constant coefficient on the previous velocity, the swarm is incapable of switching from favoring global exploration in the initial stage of the search to favoring local exploitation in the later stage.
\begin{figure}[thbp] \centering
\begin{subfigure}{0.9\linewidth} \includegraphics[width=\textwidth]{./pso_oscillation_a} \caption{Trajectories} \end{subfigure}
\begin{subfigure}{0.9\linewidth} \includegraphics[width=\textwidth]{./pso_oscillation_b} \caption{Statistics} \end{subfigure}
\caption{Trajectories of seekers with a constriction factor of $K = 0.9$} \label{fig:pso_oscillation} \end{figure}
Based on the collected data, our preliminary judgment is that PSO with inertia weight is better suited to our application. Moreover, 1/10th of the length of the search space is a reasonable value for $v_{\max}$. As for the inertia weight, any value between 2 and 4 should produce good results.
\subsection{SPSO} \label{sec:SPSO}
The last PSO variation studied in this paper is Standard Particle Swarm Optimization (SPSO).
It is a substantial improvement over the original PSO published in 1995, and researchers who develop their own PSO implementations benchmark their methods' performance against SPSO. The implementation of SPSO 2006 can be found in \cite{SPSO}. In this section, we first provide a brief description of SPSO 2006, then study three SPSO topology models. The velocity update equation in SPSO is almost the same as Equation (\ref{eqn:PSO_v}), except that $Gbest$ is replaced with $Lbest$ -- the best previous position in the neighborhood -- as shown in the following equation.
\begin{equation} v^k_{i+1} = \omega v^k_i + U(0,c) (Pbest^k - x^k_i) + U(0,c) (Lbest^k - x^k_i) \label{eqn:SPSO} \end{equation}
As a benchmark variation, SPSO has generally accepted values for all of its parameters. The swarm size is determined by $10+\lfloor 2\sqrt{D} \rfloor$, where $D$ is the dimension of the search space, so we use 12 seekers in this subsection. The other parameter values are
\begin{eqnarray}
\omega &=& \frac{1}{2\ln(2)}\approx 0.721 \nonumber\\
c &=& \frac{1}{2}+\ln(2) \approx 1.193 \nonumber
\end{eqnarray}
Please refer to \cite{SPSO2011} for a detailed description of the initialization and confinement in SPSO. A noticeable distinction of SPSO is the introduction of neighborhoods. A neighborhood defines the communication topology among seekers. In this subsection, we study and compare the implementation of three commonly used models on the source seeking problem.
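As a sketch, the standard parameter values above and the informant selection of the adaptive random topology compared below can be written as follows; the helper names are ours, not taken from the SPSO code.

```python
import math
import random

def spso_defaults(D):
    """Standard SPSO parameter values: swarm size and coefficients."""
    n = 10 + int(2 * math.sqrt(D))          # swarm size: 12 when D = 2
    w = 1.0 / (2.0 * math.log(2.0))         # inertia weight, about 0.721
    c = 0.5 + math.log(2.0)                 # acceleration coefficient, about 1.193
    return n, w, c

def random_informants(n, K, rng=random):
    """Adaptive random topology: particle k informs itself and K randomly
    chosen particles (draws may repeat). Returns informants[j] = the set of
    particles whose Pbest is visible to particle j."""
    informants = [{j} for j in range(n)]    # every particle informs itself
    for k in range(n):
        for _ in range(K):
            informants[rng.randrange(n)].add(k)
    return informants
```

In a full SPSO loop this informant graph would be redrawn after every unsuccessful iteration, and $Lbest^k$ would be the best $Pbest$ among `informants[k]`.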
\begin{figure}[htbp] \centering
\begin{subfigure}[b]{0.28\textwidth} \includegraphics[width=\textwidth]{./graph_ring} \caption{Ring topology} \label{fig:graph_ring} \end{subfigure}\\%
~ \begin{subfigure}[b]{0.3\textwidth} \includegraphics[width=\textwidth]{./graph_full} \caption{Fully connected topology} \label{fig:graph_full} \end{subfigure}\\
~ \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=\textwidth]{./graph_random} \caption{Adaptive random topology} \label{fig:graph_random} \end{subfigure}
\caption{Graphs of different topologies}\label{fig:graph} \end{figure}
Figure \ref{fig:graph} presents the graphs of the three models. Figures \ref{fig:graph}(a) and \ref{fig:graph}(b) are self-explanatory. Figure \ref{fig:graph}(c) shows the adaptive random topology model \cite{clerc2010particle} with $K = 3$. In this model, each particle informs $K$ random particles and itself of its $Pbest$, which means it informs at most $K+1$ different particles and at least one particle (itself). For instance, in Figure \ref{fig:graph_random}, particle 6 informs particle 2 and itself, and has 5 informants $\{1, 2, 3, 4, 5\}$. The $Lbest$ of a particle is defined as the best $Pbest$ among all its informants. This graph changes after every unsuccessful iteration (one with no improvement in $Gbest$). To compare the aforementioned topology models, we conducted five sets of simulations. Table \ref{table:topology} collects all the simulation data.
\begin{table}[htbp] \centering \caption{Simulation results with different topology models } \label{table:topology}
\begin{tabular}{|c|c|c|c|c|c|} \hline
Set & Topology & $avgGbest$ & $stdGbest$ & $avgI$ & $avgTotalD$ \\ \hline
13 & ring & 28.000 & 1.82E-05 & 29.970 & 68380 \\ \hline
14 & fully connected & 28.000 & 6.74E-04 & 29.331 & 65475 \\ \hline
15 & $K = 3$ & 28.000 & 2.20E-04 & 29.259 & 68860 \\ \hline
16 & $K = 6$ & 28.001 & 3.16E-02 & 28.671 & 66913 \\ \hline
17 & $K = 12$ & 28.002 & 4.47E-02 & 29.127 & 65212 \\ \hline
\end{tabular} \end{table}
Surprisingly, there is no distinguishable difference among these models in terms of either $Gbest$ or $avgTotalD$. Consequently, we cannot draw any solid conclusion on the superiority of one model over the others. One plausible reason for this inconclusive result may lie in the number of seekers: 12 seekers appear to be more than enough for our implementation, making the influence of the topology model and other parameters negligible. For the same reason, we used only five seekers in the previous subsections in order to distinguish the influence of the parameters of interest. In future implementations, we would prefer the fully connected model for simplicity.
\section{PSO in Complex Environment} \label{sec:obstacles}
In the previous implementations, the source seeking task is carried out in an ideal obstacle-free environment. In the real world, however, we have to cope with obstacles as well as collisions among seekers, which can be modeled as collision avoidance in the presence of dynamic obstacles. Therefore, we decompose the obstacle avoidance problem into two stages to deal with static and dynamic obstacles, respectively.
\subsection{Static Obstacles}
Static obstacles are common in a search environment. Structures and uneven terrain are all potential static obstacles for seekers. We will give a short description of two static obstacle avoidance strategies proposed in our previous work \cite{PSO_Rui_AIM}, \cite{PSO_Rui_MSC}.
Then we will integrate them into SPSO and compare their performance in simulations. Obstacles are described as simple convex or concave polygons in the search space, as shown in Figure \ref{fig:obstacle_map}. The red star in the center represents the source. Seekers are provided with the information about each obstacle's position and size beforehand. The main idea of integrating obstacle avoidance into SPSO is to add a new operation mode to the seekers: they operate in the regular mode implementing SPSO when their trajectories do not collide with obstacles, and switch to the obstacle avoiding mode when there is a potential collision.
\begin{figure}[htbp] \centering \includegraphics[width=0.9\linewidth]{./obstacle_map} \caption{Map with obstacles} \label{fig:obstacle_map} \end{figure}
Strategy 1 inherits the heuristic nature of PSO. It introduces a step with a specific length and a random direction into PSO when an obstacle lies in the next step of a seeker. We set the length of this random step to be the ``diameter'' of the obstacle so that the seeker has a good chance of circumventing the obstacle in one step, as shown in Figure \ref{fig:random_step}. Here, the diameter refers to the largest distance between any two points on the obstacle. Let $D_j$ denote the diameter of the $j$th obstacle. Algorithm \ref{algo:Strategy1} presents the procedure of this strategy. It is executed whenever a new step is generated for a seeker by PSO; in other words, collision with any obstacle is always checked for every step from $x^k_i$ to $x^k_{i+1}$ before it is executed. Figure \ref{fig:traj_rand} demonstrates the trajectories of 12 seekers implementing Strategy 1 in SPSO. Different seekers' trajectories are represented by different line styles. ``*'' denotes the initial position of each seeker, and a red ``x'' represents a potential collision with an obstacle.
\begin{algorithm}[htbp] \caption{Static Obstacle Avoidance Strategy 1} \label{algo:Strategy1} \begin{algorithmic} [1] \IF {$x^k_{i+1}$ is in the $j$th obstacle} \REPEAT \STATE set $v^k_{i+1}$ to a random direction and let $|v^k_{i+1}| = D_j$ \STATE $x^k_{i+1} = x^k_i + v^k_{i+1}$ \UNTIL {$x^k_{i+1}$ is not in any obstacle} \ENDIF \STATE Proceed with the normal PSO \end{algorithmic} \end{algorithm} \begin{figure}[htb] \centering \begin{subfigure}[b]{0.8\linewidth} \includegraphics[width=\textwidth]{./random_step} \caption{Strategy 1} \label{fig:random_step} \end{subfigure}\\% ~ \begin{subfigure}[b]{\linewidth} \includegraphics[width=\textwidth]{./traj_rand} \caption{Trajectories of seekers} \label{fig:traj_rand} \end{subfigure} \caption{Static obstacle avoidance -- Strategy 1} \end{figure} Strategy 2 is a variation of the \textit{Bug 1} algorithm \cite{bug_algorithm}. Instead of knowing the position of the goal, a seeker in our case only knows the signal strength at its current position. Once a seeker switches to the obstacle avoidance mode, it starts to circumnavigate the encountered obstacle, measuring the signal strength along its path. After circumnavigating the entire obstacle, the seeker follows the shortest path along the boundary to the point at which it measured the largest signal strength, and then resumes regular SPSO. Although in our case it is not guaranteed that the seeker ends at the point on the obstacle's boundary closest to the source, as in the \textit{Bug Problem}, it is highly likely to end on the side of the obstacle closer to the source, because the source signal strength generally decays with distance, even though it is quite noisy and does not strictly follow a decay profile. This provides the basis for implementing the \textit{Bug 1} algorithm and prevents the seeker from going back to the same obstacle.
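As an illustration, the core of Strategy 2 can be sketched in a few lines of Python. This is a simplified sketch, not our on-robot implementation: the obstacle boundary is assumed to be sampled at roughly evenly spaced points, \texttt{measure} stands for the (noisy) signal strength reading, and the shorter return arc is chosen by point count.

```python
import math

def circumnavigate(boundary, measure):
    """Strategy 2 sketch (Bug 1 variant): traverse the whole obstacle
    boundary once, recording the signal strength at each sampled point,
    then return the index of the strongest point and the shorter of the
    two boundary arcs (by point count) leading back to it."""
    strengths = [measure(p) for p in boundary]
    best = max(range(len(boundary)), key=strengths.__getitem__)
    n = len(boundary)
    # after the full loop the seeker is back at index 0
    if best <= n - best:                      # forward arc is shorter
        path = [boundary[i] for i in range(1, best + 1)]
    else:                                     # return backwards around the loop
        path = [boundary[i % n] for i in range(n - 1, best - 1, -1)]
    return best, path
```

With a noise-free signal the seeker returns along the shorter arc to the boundary point nearest the source; with a noisy \texttt{measure}, the selected point is only likely, not guaranteed, to lie on the source-facing side, as discussed above.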
Figure \ref{fig:bug1} illustrates the trajectory of a seeker implementing the \textit{Bug 1} algorithm to avoid an obstacle, and Figure \ref{fig:traj_bug} presents the trajectories of 12 seekers implementing Strategy 2 in SPSO. \begin{figure}[htb] \centering \begin{subfigure}[b]{0.8\linewidth} \includegraphics[width=\textwidth]{./bug1} \caption{Strategy 2} \label{fig:bug1} \end{subfigure}\\% ~ \begin{subfigure}[b]{\linewidth} \includegraphics[width=\textwidth]{./traj_bug} \caption{Trajectories of seekers} \label{fig:traj_bug} \end{subfigure} \caption{Static obstacle avoidance -- Strategy 2 (Bug 1 Algorithm)} \end{figure} We now provide more simulation results to compare the two obstacle avoidance strategies. We conducted four sets of simulations: sets 18 and 19 used the parameters of set 2, and sets 20 and 21 used the fully connected topology. The simulation results are collected in Table \ref{table:obstacle}. \begin{table}[thbp] \centering \caption{Simulation results for two obstacle avoidance strategies} \label{table:obstacle} \begin{tabular}{|c|c|c|c|c|c|c|} \hline Set & Variation & Strategy & $avgGbest$ & $stdGbest$ & $avgI$ & $avgTotalD$ \\ \hline 18 & Inertia & 1 & 28.0229 & 0.3779 & 45.3947 & 68475 \\ \hline 19 & Inertia & 2 & 28.2634 & 1.2469 & 66.3155 & 66400 \\ \hline 20 & SPSO & 1 & 28 & 2.65E-04 & 33.776 & 74017 \\ \hline 21 & SPSO & 2 & 28.0826 & 0.4034 & 35.867 & 50034 \\ \hline \end{tabular} \end{table} Strategy 1 outperforms Strategy 2 in both $avgGbest$ and $stdGbest$ for both PSO variations, and its very small standard deviation indicates high reliability. The reason Strategy 1 ends with a longer traveled distance is that its random step is usually larger than $v_{\max}$ because of the size of the obstacles, whereas in Strategy 2 seekers usually take steps shorter than $v_{\max}$ when circumnavigating obstacles. Overall, Strategy 1 performs better than Strategy 2 in the simulations.
Moreover, the performance distinction originating from the different PSO variations is more significant than that from the different strategies, which primarily results from the size of the swarm. \subsection{Dynamic Obstacles} In all previous simulations, seekers are assumed to be points on a plane. In practice, however, they have a finite area. This makes dynamic obstacle avoidance an inevitable issue in the application of swarm robots, since every robot acts as a dynamic obstacle to the others. To deal with this problem, we add two steps to the obstacle avoidance mode. During each iteration, after the $x^k_{i+1}$ are generated by PSO and checked or modified using the static obstacle avoidance strategy, potential collisions among seekers need to be checked. In this stage, there are two possible kinds of collisions: (1) collisions at the seekers' end points, and (2) collisions along the seekers' trajectories. Since the seekers are assumed to be dimensionless point particles in PSO, the algorithm needs to be modified to take into account possible collisions between the robots at the end of their paths in a real scenario. Some seekers may end up too close together for the physical robots to fit, causing collisions at these end points. To circumvent this problem, we incorporate a model in which the seekers repel each other so as to rearrange their end points and avoid collision. This is described in Algorithm \ref{algo:RepulsiveForce}.
\begin{algorithm}[htbp] \caption{End point arrangement using repulsive force} \label{algo:RepulsiveForce} \begin{algorithmic} [1] \STATE $S$ is the set of seekers \STATE $R$ is the radius of a seeker \STATE $t$ is a scaling factor \WHILE {$\exists \; |x^p_{i+1} - x^q_{i+1}| < 2R, \; p,q \in S, p\neq q$} \FOR{each $k \in S$} \FOR{each $j \in S, j \neq k$} \STATE $d = x^k_{i+1}-x^j_{i+1}$ \IF{$|d| \geq 2R$} \STATE $Force(k,j) = 0$ \ELSE \STATE $Force(k,j)= d(2R-|d|)/|d|$ \ENDIF \ENDFOR \STATE $Force(k) = \sum_{j \in S, j \neq k} Force(k,j)$ \STATE $x^k_{i+1} = x^k_{i+1} + t Force(k)$ \ENDFOR \ENDWHILE \end{algorithmic} \end{algorithm} Algorithm \ref{algo:RepulsiveForce} ensures a safe distance between any two seekers and avoids end point collisions. After this, if any seeker happens to lie in the path of others, the second step is activated. In this mode, seekers move sequentially: only one seeker moves at a time while the others stay still. We treat all other seekers as rectangular obstacles. We construct a reduced visibility graph \cite{choset2005principles} from the current position $x^k_i$ of the activated seeker to its next position $x^k_{i+1}$. Finally, by applying Dijkstra's algorithm \cite{Dijkstra}, we generate the shortest path from $x^k_i$ to $x^k_{i+1}$. Figure \ref{fig:visibility} presents an example of the visibility graph and the shortest path. Due to the finite, non-zero area of a seeker, the boundaries of obstacles and stationary seekers are expanded to the black dashed lines to ensure a safety zone for the activated seeker (the Minkowski sum of the obstacles with the seeker). The solid black lines delineate the visibility graph, and the red dashed line represents the shortest path between $x^k_i$ and $x^k_{i+1}$.
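A minimal Python sketch of the end-point rearrangement of Algorithm~\ref{algo:RepulsiveForce} is given below. The names and the iteration cap are illustrative, while the pairwise force $d(2R-|d|)/|d|$ follows the pseudocode.

```python
import math

def rearrange_endpoints(points, R, t=0.5, max_iter=100):
    """End-point rearrangement by pairwise repulsion (sketch of the
    repulsive force algorithm): any two seekers closer than 2R push each
    other apart with force d(2R - |d|)/|d| until all pairwise distances
    are safe. `t` scales the displacement applied per iteration."""
    pts = [list(p) for p in points]
    n = len(pts)
    for _ in range(max_iter):
        overlap = False
        forces = [[0.0, 0.0] for _ in range(n)]
        for k in range(n):
            for j in range(n):
                if j == k:
                    continue
                dx = pts[k][0] - pts[j][0]
                dy = pts[k][1] - pts[j][1]
                d = math.hypot(dx, dy)
                if d < 2.0 * R:
                    # repel along the unit vector d/|d|, scaled by the overlap
                    scale = (2.0 * R - d) / max(d, 1e-9)
                    forces[k][0] += dx * scale
                    forces[k][1] += dy * scale
                    overlap = True
        if not overlap:
            break
        for k in range(n):
            pts[k][0] += t * forces[k][0]
            pts[k][1] += t * forces[k][1]
    return [tuple(p) for p in pts]
```

Seekers that are already at a safe distance are left untouched, since no pairwise force is generated for them.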
\begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{./visibility} \caption{Visibility graph and shortest path} \label{fig:visibility} \end{figure} So far, we have proposed a complete solution for implementing PSO on real robots in a complex environment where potential collisions exist. In the next section, we describe the experimental setup. \section{Experiments} \label{sec:experiments} Our testbed is built on a 5 m$\times$5 m area covered by a Vicon tracking system. This system provides accurate position information by recognizing markers on the robots, serving as an indoor GPS. The source is an XBee module hanging in the middle of the area at a height of 20 cm above the floor; we do not place it on the floor in order to avoid potential collisions with the robots. The robots used in the experiments are small differential-drive robots modified from the Parallax Shield-Bot and controlled by Arduino. Each robot is equipped with an XBee module to measure RSSI. Figures \ref{fig:setup} and \ref{fig:setup_close} are pictures of the testbed and robots. Figure \ref{fig:complex_environment} illustrates the complex environment with obstacles in which the experiments were conducted. \begin{figure}[thbp] \centering \includegraphics[width=0.9\linewidth]{./experiment_set_up} \caption{Testbed} \label{fig:setup} \end{figure} \begin{figure}[thbp] \centering \includegraphics[width=0.9\linewidth]{./Boebots} \caption{Close look of robots and source} \label{fig:setup_close} \end{figure} \begin{figure}[thbp] \centering \includegraphics[width=0.9\linewidth]{./complex_environment} \caption{Experiment environment with obstacles} \label{fig:complex_environment} \end{figure} In the experiments, we built a centralized system with a computer acting as the center, collecting information from and distributing it to all robots.
This is not necessary, since the strategies proposed in this paper are not computationally expensive and can be implemented on these robots without strong computational capability. Moreover, since the robots can also communicate with each other, the system can work effectively without a central unit if the robots have access to their own positions. Two successful experiments were recorded in the video. In these experiments, five robots were deployed to seek the source using the proposed strategies in an environment with obstacles. The parameters were chosen to be the same as those of set 2 in Table \ref{table:lamda}. \section{Conclusion} \label{sec:conclusion} In this paper, we explored the application of PSO to the electromagnetic source seeking problem. We modified PSO in accordance with the physical constraints posed by the robots and the environment. Three PSO variations were evaluated through simulations. We found that the inertia weight PSO is best suited to our implementation, and we provided guidelines on parameter selection in PSO. We extended PSO from a pure computational technique to a complete solution to the source seeking problem in complex environments. Collision avoidance techniques were discussed extensively, and a complete obstacle avoidance strategy was incorporated into PSO. Our work was eventually validated in experiments using real robots. In the future, we plan to explore and develop more advanced PSO variations that are specific to robotics applications. We would also like to extend our work to more general source seeking scenarios, where sources may have different features and the obstacles in the environment cannot be simplified as polygons. Although it is unlikely that any single variation can perform effectively in all kinds of scenarios, it is possible to explore the preferences of various scenarios and provide guidance in the selection of variations and parameter configurations. \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran}
\section{Introduction} \label{sec:intro} Protoplanetary disks, as the birthplaces of planetary systems, always exhibit turbulent motions \cite{Lesur2022}. Several mechanisms are currently discussed as main contributors: hydrodynamical instabilities, such as the vertical shear instability \cite{Goldreich1967, Fricke1968, Flock2017}, the convective overstability \cite{Lyra2014, Klahr2014}, and the zombie vortex instability \cite{Marcus2015, Marcus2016}, as well as magneto-hydrodynamical instabilities like the magnetorotational instability \cite{Balbus1991, Balbus1996, Balbus1998}. Turbulence regulates the angular momentum transport that sustains gas accretion onto the central star \cite{Shakura73a,Pringle1981}, influences the evolution of dust grains in disks \cite{Birnstiel2016}, and plays an important role in controlling the dynamics of embedded planets \cite{Kley2012}. Hence, a detailed understanding of disk evolution and planet formation requires knowledge of the strength of turbulent motions. Placing constraints on the turbulence level is also important for interpreting observational data with numerical simulations. In recent years, high-resolution images at infrared and (sub-)millimeter wavelengths have shown that gaps and rings are frequently observed in planet-forming disks \cite{Avenhaus2018, Long2018, Andrews2018}. These interesting substructures are often thought to be created by planet-disk interaction \cite{Dipierro2018,Zhang2018,Liu2019}. The description of the underlying physics relies heavily on (magneto-)hydrodynamical simulations, in which turbulence strongly affects the resulting depth and number of gaps \cite{Pinilla2012a,Flock2015,Rosotti2016,Bertrang2018,Dong2018}. As a consequence, the inferred properties (e.g., mass and location) and number of the ``unseen'' (proto)planets depend on the input strength of turbulence in the simulation.
However, measuring turbulence with gas line observations is very challenging: on the one hand, it demands data at high spatial and spectral resolution, and on the other hand, thermal motion usually dominates the broadening of lines, leading to substantial difficulties in separating its contribution from the measured total line width \cite{Teague2016}. Therefore, measurements of turbulence via gas line data are so far limited to a small number of disks, revealing low turbulent velocities typically below $5\%\,{\sim}\,10\%$ of the local sound speed ($c_s$) \cite{Guilloteau2012,Flaherty2015,Flaherty2017,Teague2018a,Flaherty2020}. An exception is the DM Tau disk, where the measured turbulent velocity approaches $0.25\,{\sim}\,0.33\,c_s$ \cite{Flaherty2020}. Turbulence also affects the motion of the dust, both in the radial direction and in the vertical one. Dullemond \& Penzlin \cite{Dullemond2018b} suggested that the dependence of turbulence on the dust-to-gas mass ratio, together with the radial drift of dust particles, could be the origin of the ring structures commonly found in protoplanetary disks. By comparing the width of the millimeter continuum emission ring with the pressure scale height of the disk, Dullemond et al. \cite{Dullemond2018} found strong evidence of dust trapping operating in all the rings analyzed in their sample, and put constraints on the quantity $\alpha_{\rm turb}/{\rm St}$, where $\alpha_{\rm turb}$ is the turbulence parameter, and $\rm{St}$ is the Stokes number of the dust particles. Vertical stirring induced by turbulent motions counteracts the settling of dust grains. Theoretically speaking, millimeter continuum emission is dominated by millimeter-sized dust particles that are located near the midplane of the disk. However, material residing in the adjacent rings, located above the midplane, would hide the gap due to beam smearing.
How severe this smoothing effect is depends on the scale height of the millimeter-sized dust grains \cite{Pinte2016}. In more turbulent disks, dust grains are more vertically extended, leading to a more substantial reduction of the gap depth. Recently, Doi \& Kataoka \cite{Doi2021} discussed the feasibility of analyzing the intensity variation as a function of azimuth along the rings to estimate the degree of dust settling. When the disk is optically thin and viewed at an oblique inclination, the optical depth $\tau$ along the line of sight differs between the major and minor axes. Such a difference in $\tau$ produces a peak and a dip in the brightness profile at the azimuthal angles of the major and minor axis, respectively. The ratio between the brightness peak and dip depends on the millimeter dust scale height. The authors fit the azimuthal brightness profiles of two rings in the HD\,163296 disk, and constrained the gas-to-dust scale height ratio and therefore the turbulence level. In their analysis, the disk is assumed to be vertically isothermal with a fixed midplane temperature profile. How such a simplification affects the result, particularly for rings with a large millimeter dust scale height (i.e., high turbulence regions), needs to be investigated. In this work, we take the HD\,163296 disk as an example to investigate in detail the link between millimeter gap contrasts and the strength of turbulence, and we highlight some features and degeneracies that can be encountered. Sect.~\ref{sec:obs} gives an introduction to the HD\,163296 disk. The modeling assumptions are presented in Sect.~\ref{sec:modeling}, while the process of dedicated fitting to the ALMA image is described in Sect.~\ref{sec:fitalma}. We discuss our results in Sect.~\ref{sec:discussion}. The paper ends with a summary in Sect.~\ref{sec:summary}.
\begin{figure*}[!t] \centering \includegraphics[width=0.85\textwidth]{sbfit_v4.eps} \caption{A comparison between models and the DSHARP observation of the HD\,163296 disk. Panel (a): fiducial image generated by the DSHARP team. The beam is shown as the black ellipse in the bottom left corner. The dashed lines indicate the semi-major (to the northwest) and minor axes of the disk. The values of the azimuth ($\phi$) for the disk major and minor axes are given as a reference for the coordinate system. Panel (b): the simulated image of the best-fit model (i.e., model \texttt{I4}). Panels (c)-(f): a comparison of brightness profiles between observation and different models. The red dots refer to data points, whereas the green, brown, black and blue lines represent model \texttt{I1}, \texttt{I2}, \texttt{I3} and \texttt{I4}, respectively. Model parameters can be found in Table~\ref{tab:paras}, and model gap contrasts are compared with the observation in Table~\ref{tab:gapcont}. Note that the four models overlap well with each other in panel (c).} \label{fig:imgres} \end{figure*} \section{Circumstellar disk of HD\,163296} \label{sec:obs} HD\,163296 is a Herbig Ae star (A1 spectral type) located at a distance of $D\,{=}\,101\,{\pm}\,2\,\rm{pc}$ \cite{gaia2018}. Its mass ($M_{\star}$) and age are $1.9\,M_{\odot}$ and $10.4\,\rm{Myr}$, respectively \cite{Setterholm2018}. It has a luminosity of $L_{\star}\,{=}\,17\,L_{\odot}$ and an effective temperature of $T_{\rm eff}\,{=}\,9250\,\rm{K}$ \cite{Fairlamb2015}. Spatially resolved observations in both the infrared and millimeter regimes have revealed ring structures in the disk around HD\,163296 \cite{Grady2000,Wisniewski2008,Muro-Arena2018,Isella2016,Notsu2019}. Analysis of interferometric data taken with the Very Large Telescope Interferometer instruments PIONIER and MATISSE revealed brightness asymmetries in the near-infrared emission, which may originate from a vortex near the inner rim ($R\,{\sim}\,0.4\,\rm{AU}$) of the disk \cite{Lazareff2017,Varga2021}.
As one of the 20 targets selected in the Disk Substructures at High Angular Resolution Program (DSHARP), HD\,163296 was observed with the Atacama Large Millimeter/submillimeter Array (ALMA) in Band 6 at an unprecedented spatial resolution of $4.8\,{\times}\,3.8\,\rm{AU}$ \cite{Andrews2018}. The rms noise of the fiducial ALMA image generated by the DSHARP team is $\sigma_{\rm rms}\,{=}\,23\,\mu{\rm Jy/beam}$. The continuum image shows a few pairs of concentric rings/gaps, see panel (a) in Figure~\ref{fig:imgres}. The D48 and D86 gaps are located at a radial distance of 48 and 86\,AU, with a width of 20 and 16\,AU, respectively. The B67 and B100 rings are centered at a radial distance of 67 and 100\,AU, with a width of 16 and 12\,AU, respectively \cite{Huang2018}. We extracted the surface brightness along the disk major and minor axes, given the position angle (PA) of $133.33^{\circ}$. Along a PA of $99^{\circ}$, there is a crescent-like structure centered at a radial distance of $55\,\rm{AU}$ \cite{Isella2018}, which is probably caused by a Jupiter mass planet \cite{Rodenkirch2021}. Such an asymmetry contaminates the measurement of the gap contrast. Hence, we only considered the data on the semi-major axis to the northwest. On the minor axis, however, an average of both sides of the disk was performed to improve the signal-to-noise ratio. To apply the methodology introduced by Doi \& Kataoka \cite{Doi2021}, we also extracted the azimuthal brightness profiles on the B67 and B100 rings. The reference for the azimuthal coordinate ($\phi$) is given in panel (a) of Figure~\ref{fig:imgres}. The extracted brightnesses are shown with red dots in panels (c)-(f) of Figure~\ref{fig:imgres}. It should be noted that the mechanism responsible for generating the crescent-like structure also likely causes azimuthal perturbations to the B67 ring, which may be one of the reasons why the brightness profile shows non-axisymmetric features. 
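For reference, extracting radial and azimuthal profiles along the major and minor axes and on the rings relies on deprojecting sky-plane offsets into the disk plane. A minimal sketch under the standard thin-disk assumption is given below; the rotation and sign conventions are illustrative, while the default PA and inclination are those adopted for HD\,163296.

```python
import math

def sky_to_disk(dx, dy, pa_deg=133.33, inc_deg=46.7):
    """Deproject sky-plane offsets from the star (dx east-west, dy
    north-south, in the same units) into disk-plane radius and azimuth
    phi [deg]. Thin-disk sketch: rotate so the major axis lies along x,
    then stretch the apparent minor axis by 1/cos(i)."""
    pa = math.radians(pa_deg)
    inc = math.radians(inc_deg)
    x_maj = dx * math.sin(pa) + dy * math.cos(pa)   # along the major axis
    y_min = -dx * math.cos(pa) + dy * math.sin(pa)  # along the minor axis
    y_dep = y_min / math.cos(inc)                   # undo the projection
    r = math.hypot(x_maj, y_dep)
    phi = math.degrees(math.atan2(y_dep, x_maj)) % 360.0
    return r, phi
```

In this convention, points at $\phi\,{=}\,90^{\circ}$ and $270^{\circ}$ lie along the minor axis, where the dips on the rings are measured.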
The width between two adjacent points (i.e., 1.5\,AU) is about one third of the ALMA beam, which means that the brightness is first averaged over such a bin size and then extracted. The errors for each of the data points on the major axis and on the B67 and B100 rings are all set to $23\,\mu{\rm Jy/beam}$, whereas on the minor axis they are calculated to be $\frac{23}{\sqrt{2}}\,\mu{\rm Jy/beam}$ due to the averaging of both sides of the disk. \begin{table*}[!t] \caption{Gap contrasts of the HD\,163296 disk.} \centering \footnotesize \linespread{1.2}\selectfont \begin{tabular}{lcccccccc} \hline \multirow{2}{*}{} & \multicolumn{2}{c}{Major axis} & \multicolumn{2}{c}{Minor axis} & \multicolumn{2}{c}{B67 ring} & \multicolumn{2}{c}{B100 ring} \\ \cline {2-3} \cpartlineleft{4,1em}\cline {5-5} \cpartlineleft{6,1em}\cline {7-7} \cpartlineleft{8,1em}\cline {9-9} & D48 & D86 & D48 & D86 & $\phi=90^{\circ}$ & $\phi=270^{\circ}$ & $\phi=90^{\circ}$ & $\phi=270^{\circ}$ \\ \hline ALMA data & $0.98\,{\pm}\,0.03$ & $0.96\,{\pm}\,0.05$ & $0.94\,{\pm}\,0.02$ & $0.82\,{\pm}\,0.04$ & $0.22\,{\pm}\,0.03$ & $0.21\,{\pm}\,0.03$ & $0.00\,{\pm}\,0.07$ & $0.15\,{\pm}\,0.07$ \\ Model \texttt{I1} & 0.97 & 0.97 & 0.88 & 0.34 & 0.24 & 0.21 & 0.42 & 0.41 \\ Model \texttt{I2} & 0.97 & 0.97 & 0.94 & 0.80 & 0.13 & 0.11 & 0.20 & 0.18 \\ Model \texttt{I3} & 0.97 & 0.97 & 0.94 & 0.87 & 0.11 & 0.10 & 0.11 & 0.11 \\ Model \texttt{I4} & 0.97 & 0.97 & 0.92 & 0.81 & 0.21 & 0.18 & 0.10 & 0.10 \\ \hline \end{tabular} \linespread{1.0}\selectfont \label{tab:gapcont} \end{table*} The gap contrast is defined as $1\,{-}\,I_{\rm min}/I_{\rm max}$, where $I_{\rm min}$ is the minimum brightness within the gap, and $I_{\rm max}$ is the maximum brightness of its immediately exterior ring. The brightness profile of the B67 ring displays two dips, at $\phi\,{=}\,90^{\circ}$ and $270^{\circ}$, which resemble gaps. For simplicity of description, we also refer to them as ``gaps'' hereafter in this work.
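The contrast definition above can be sketched as a small helper (illustrative names):

```python
def gap_contrast(profile, gap_idx, ring_idx):
    """Gap contrast 1 - I_min/I_max: I_min is the minimum brightness over
    the gap indices, I_max the maximum over the exterior-ring indices."""
    i_min = min(profile[i] for i in gap_idx)
    i_max = max(profile[i] for i in ring_idx)
    return 1.0 - i_min / i_max
```

The same expression applies to the azimuthal ``gaps'' on the rings, with the dip and reference intensities taken at the corresponding azimuths.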
The contrasts are defined as $1\,{-}\,I_{\phi{=}90^{\circ}}/I_{\phi{=}180^{\circ}}$ and $1\,{-}\,I_{\phi{=}270^{\circ}}/I_{\phi{=}180^{\circ}}$. On the B100 ring, the profile is quite flat on the western side, and shows only one ``gap'' at $\phi\,{=}\,270^{\circ}$. In addition to the chi-square ($\chi^2$) metric, the observed gap contrasts summarized in Table~\ref{tab:gapcont} are the key characteristics used to evaluate the quality of the fit of our models. The difference between the gap contrasts measured along the disk major and minor axes is due to the projection effect. Because the disk is geometrically thick and tilted to an inclination of $46.7^{\circ}$, the width of the gap varies with azimuthal angle and is smallest along the minor axis, leading to the lowest gap contrast. \section{Full radiative transfer modeling} \label{sec:modeling} The key of our work is to constrain the scale height of the millimeter-sized dust grains by fitting the contrasts of the gaps with self-consistent radiative transfer models, and then to link the scale height to the strength of turbulence. In fact, the HD\,163296 disk has more gaps, i.e., D10 and D145. However, they are either not fully spatially resolved or show evidence of being multiple gaps \cite{Huang2018}. We will not discuss them in detail in this paper, although our modeling methodology automatically captures both features. The radiative transfer models are parameterized in the framework of the \texttt{RADMC-3D} code\footnote{http://www.ita.uni-heidelberg.de/~dullemond/software/radmc-3d/.} \cite{radmc3d2012}. We assume that the disk is passively heated by stellar irradiation. The stellar spectrum is taken from the \texttt{Kurucz} database \cite{Kurucz1994}, assuming a surface gravity of ${\rm log}\,g\,{=}\,3.5$ and solar metallicity. The remaining model assumptions concern the density distribution and dust opacities, and are described below.
\subsection{Dust density distribution} \label{sec:moddens} We consider a disk that extends from an inner radius of $R_{\rm in}\,{=}\,0.4\,\rm{AU}$ to an outer radius of $R_{\rm out}\,{=}\,169\,\rm{AU}$ \cite{Huang2018}. The model has two distinct dust grain populations, i.e., a small grain population (SGP) and a large grain population (LGP). The temperature structure of the disk is mainly governed by the SGP, whereas the LGP dominates the millimeter continuum emission. We fixed the mass fraction of the LGP to $f_{\rm LGP}\,{=}\,0.85$, a value that has been commonly used in previous modeling works of protoplanetary disks \cite{andrews2011,Liu2022}. The SGP is assumed to be well mixed with the underlying gas distribution. Therefore, its scale height is set to the gas scale height ($H_{\rm gas}$), which is solved for under the condition of vertical hydrostatic equilibrium. Large dust grains are expected to settle towards the midplane \cite{Dubrulle1995,Dullemond2004}. We characterize the degree of dust settling with the parameter $\Lambda$, and the scale height of the LGP is given by $H_{\rm gas}/{\Lambda}$. The volume density of the dust grains is parameterized as \begin{equation} \rho_{\rm{SGP}}(R,z)\,{=}\,\frac{(1-f_{\rm LGP})\,\Sigma_{\rm d}(R)}{\sqrt{2\pi}\,H_{\rm gas}}\,\exp\left[-\frac{1}{2}\left(\frac{z}{H_{\rm gas}}\right)^2\right], \\ \label{eqn:sgp} \end{equation} \begin{equation} \rho_{\rm{LGP}}(R,z)\,{=}\,\frac{f_{\rm LGP}\,\Sigma_{\rm d}(R)}{\sqrt{2\pi}\,H_{\rm gas}/{\Lambda}}\,\exp\left[-\frac{1}{2}\left(\frac{z}{H_{\rm gas}/{\Lambda}}\right)^2\right], \\ \label{eqn:lgp} \end{equation} where $\Sigma_{\rm d}(R)$ is the dust surface density, and $R$ is the distance from the central star measured in the disk midplane. Studies in the literature usually adopt analytic forms for $\Sigma_{\rm d}(R)$, e.g., a power law or a power law with an exponential taper.
However, such simple expressions have been demonstrated to be insufficient to capture the fine-scaled features revealed by high-resolution ALMA observations \cite{Pinte2016,Liu2017}. Instead, we build the surface density by iteratively fitting the surface brightnesses at the ALMA wavelength, where the optical depth is generally low, see Sect.~\ref{sec:surdens}. \subsection{Dust properties} \label{sec:dustopac} For the dust composition, we made use of the recipe of the DiscAnalysis (\texttt{DIANA}) project \cite{Woitke2016}. The dust grains consist of 60\% silicate ($\rm{Mg_{0.7}Fe_{0.3}SiO_{3}}$) \cite{dorschner1995}, 15\% amorphous carbon (BE$-$sample) \cite{Zubko1996}, and 25\% porosity. These percentages are volume fractions of each component, which are used to derive the effective refractive indices of the dust ensemble by applying the Bruggeman mixing rule \cite{Bruggeman1935}. We used a distribution of hollow spheres with a maximum hollow volume ratio of 0.8 \cite{Min2005}. The mean solid density of the dust ensemble, $\rho_{\rm grain}\,{=}\,2.1\,\rm{g\,cm}^{-3}$, is estimated from an average of the silicate density ($3.01\,\rm{g\,cm}^{-3}$) and the carbon density ($1.8\,\rm{g\,cm}^{-3}$), taking the volume fractions as weighting factors. The distribution of grain sizes ($a$) follows a power law ${\rm d}n(a)\,{\propto}\,{a^{-3.5}} {\rm d}a$ with a minimum ($a_{\rm{min}}$) and maximum size ($a_{\rm{max}}$). For the SGP, $a_{\rm{min}}$ and $a_{\rm{max}}$ are fixed to $0.01\,\mu{\rm m}$ and $2\,\mu{\rm m}$, respectively. For the LGP, $a_{\rm{min}}$ is set to $2\,\mu{\rm m}$. Regarding $a_{\rm{max}}$, we will set it based on models that can reproduce the observed millimeter spectral slope, see Sect.~\ref{sec:sedmodel}.
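For reference, Eqs.~\ref{eqn:sgp} and \ref{eqn:lgp} can be evaluated directly once $H_{\rm gas}$ is specified. The sketch below, in cgs units, uses the hydrostatic expression $H_{\rm gas}\,{=}\,\sqrt{kT(R)R^3/(GM_{\star}\mu m_p)}$ together with the midplane temperature profile adopted later as the initial guess in the fitting procedure (Sect.~\ref{sec:surdens}); function names are illustrative, and the actual models solve for the vertical structure self-consistently rather than with a fixed $T(R)$.

```python
import math

# cgs constants
G_GRAV = 6.674e-8      # gravitational constant [cm^3 g^-1 s^-2]
K_B    = 1.381e-16     # Boltzmann constant [erg K^-1]
M_P    = 1.673e-24     # proton mass [g]
M_SUN  = 1.989e33      # solar mass [g]
AU     = 1.496e13      # astronomical unit [cm]

def gas_scale_height(R_au, m_star=1.9, mu=2.3):
    """Hydrostatic gas scale height H = sqrt(k T R^3 / (G M_star mu m_p)),
    in cm, using the midplane temperature T(R) = 18.7 (R/400 AU)^(-0.14)
    as a fixed illustrative profile."""
    T = 18.7 * (R_au / 400.0) ** (-0.14)
    R = R_au * AU
    return math.sqrt(K_B * T * R**3 / (G_GRAV * m_star * M_SUN * mu * M_P))

def dust_densities(z, R_au, sigma_d, f_lgp=0.85, lam=5.0):
    """Vertical dust densities of the SGP and LGP (Eqs. 1 and 2):
    Gaussians with scale heights H_gas and H_gas/Lambda, respectively."""
    h_s = gas_scale_height(R_au)
    h_l = h_s / lam
    rho_sgp = (1.0 - f_lgp) * sigma_d / (math.sqrt(2.0 * math.pi) * h_s) \
        * math.exp(-0.5 * (z / h_s) ** 2)
    rho_lgp = f_lgp * sigma_d / (math.sqrt(2.0 * math.pi) * h_l) \
        * math.exp(-0.5 * (z / h_l) ** 2)
    return rho_sgp, rho_lgp
```

Integrating each Gaussian over $z$ recovers $(1-f_{\rm LGP})\,\Sigma_{\rm d}$ and $f_{\rm LGP}\,\Sigma_{\rm d}$, as required by the normalization of Eqs.~\ref{eqn:sgp} and \ref{eqn:lgp}.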
\subsection{Building the dust surface density} \label{sec:surdens} Previous studies have shown that surface density profiles in simple analytic expressions (e.g., a smooth power law with density drops at the gap locations) have difficulty capturing the detailed features revealed by ALMA \cite{Liu2017,Muro-Arena2018}. Using an iterative procedure, we built the surface densities by reproducing the millimeter surface brightnesses along the disk major axis, which features the maximum spatial resolution. This approach was introduced by Pinte et al. \cite{Pinte2016}, and several works by other teams have demonstrated its success \cite{Muro-Arena2018,Liu2019}. The iterative process consists of the following steps. \begin{itemize} \item[a)] We take a starting surface density profile $\Sigma_{\rm d}(R)\,{=}\,\Sigma_{0}\left(R/R_{\rm c}\right)^{-\gamma}{\rm exp}\left[-(R/R_{\rm c})^{2-\gamma}\right]$ with $R_{\rm{c}}\,{=}\,90\,\rm{AU}$ and $\gamma\,{=}\,0.1$ \cite{Isella2016}. For the starting point, we do not introduce any gap; using other forms does not have a significant impact on the final result. \item[b)] With an initial guess for $H_{\rm gas}$, the dust density distribution is given by Eqs.~\ref{eqn:sgp} and \ref{eqn:lgp}. Radiative transfer modeling is performed to obtain the dust temperature. Then, the dust density structure is solved for, assuming that the disk is in vertical hydrostatic equilibrium. We run the radiative transfer modeling with the new dust density distribution to get the new dust temperature. The iteration between dust temperature and density goes back and forth, and convergence is achieved after ${\sim}\,5$ iterations.
For the initial choice of $H_{\rm gas}$, we assume $H_{\rm gas}\,{=}\,\sqrt{kT(R)R^3/\left(GM_{\star}{\mu}m_{p}\right)}$, where $G$ is the gravitational constant, $k$ is Boltzmann's constant, $m_{p}$ is the proton mass, $\mu\,{=}\,2.3$ is the mean molecular weight, and $T(R)=18.7(R/400\,\rm{AU})^{-0.14}$ is the midplane temperature given by Dullemond et al. \cite{Dullemond2020}. The black solid line in Figure~\ref{fig:s2hgas} shows the initial $H_{\rm gas}$. This step is time consuming, because a smooth temperature structure is required to obtain the solution for the corresponding dust density. Thus, we use a total number of $3\,{\times}\,10^7$ photons in the simulation. \item[c)] From step b), the gas scale height ($H_{\rm gas}$) is derived self-consistently. We then simulate a model image at 1.25\,mm, which is convolved with the ALMA beam with a size and position angle of $0.048^{\prime\prime}\times0.038^{\prime\prime}$ and $82^{\circ}$, respectively. \item[d)] We extract the model surface brightness along the disk major axis to the northwest, identical to what was done on the ALMA image. \item[e)] A ratio as a function of radius, $\zeta(R)$, is obtained by dividing the observed brightness profile by the model brightness profile. \item[f)] The surface density used as input for the model is scaled by the point-by-point ratio $\zeta(R)$, and the process goes back to step b). \end{itemize} The iteration for $\Sigma_{\rm d}$ typically converges after about 25 loops, when the change in the model brightness profile is less than 5\% at all radii. \begin{figure}[H] \centering \includegraphics[width=0.48\textwidth]{SED_fit.eps} \caption{SED of the HD\,163296 disk. Red dots indicate photometric data taken from the literature. The black and blue lines show two models with the maximum grain size being $a_{\rm max}\,{=}\,1\,\rm{mm}$ and 1\,cm, respectively. The grey dashed line denotes the photospheric spectrum.
The spectral indices measured at wavelengths $\lambda\,{\ge}\,1\,\rm{mm}$ are given for both the models and observation.} \label{fig:bestsed} \end{figure} \begin{table*}[!t] \centering \footnotesize \begin{threeparttable} \caption{Overview of parameter values for different models.} \label{tab:paras} \doublerulesep 0.1pt \tabcolsep 7pt \linespread{1.2}\selectfont \begin{tabular}{lcccccccl} \toprule Parameter & Fixed/free & Model \texttt{S1} & Model \texttt{S2} & Model \texttt{I1} & Model \texttt{I2} & Model \texttt{I3} & Model \texttt{I4} & Note \\ \hline $T_{\rm eff}$\,[K] & Fixed & \multicolumn{6}{c}{9250} & Effective temperature \\ $L_{\star}\,[L_{\odot}]$ & Fixed & \multicolumn{6}{c}{17} & Stellar luminosity \\ $D$\,[pc] & Fixed & \multicolumn{6}{c}{101} & Distance \\ $i\,[^{\circ}]$ & Fixed & \multicolumn{6}{c}{46.7} & Disk inclination \\ ${\rm PA\,[^{\circ}]}$ & Fixed & \multicolumn{6}{c}{133.33} & Position angle \\ $R_{\rm in}$\,[AU] & Fixed & \multicolumn{6}{c}{0.4} & Disk inner radius \\ $R_{\rm out}$\,[AU] & Fixed & \multicolumn{6}{c}{169} & Disk outer radius \\ $f_{\rm LGP}$ & Fixed & \multicolumn{6}{c}{0.85} & Mass fraction of the LGP \\ $a_{\rm min.SGP}$\,[$\mu{\rm m}$]& Fixed & \multicolumn{6}{c}{0.01} & Minimum grain size for the SGP \\ $a_{\rm max.SGP}$\,[$\mu{\rm m}$]& Fixed & \multicolumn{6}{c}{2} & Maximum grain size for the SGP \\ $a_{\rm min.LGP}$\,[$\mu{\rm m}$]& Fixed & \multicolumn{6}{c}{2} & Minimum grain size for the LGP \\ $a_{\rm max.LGP}$\,[cm] & Free & 0.1 & 1.0 & 1.0 & 1.0 & 1.0 & 1.0 & Maximum grain size for the LGP \\ $\Sigma_{\rm d}\,\rm{[g\,cm^{-2}]}$ & Free & Figure~\ref{fig:s1s2surdens} & Figure~\ref{fig:s1s2surdens} & Figure~\ref{fig:surdensout} & Figure~\ref{fig:surdensout} & Figure~\ref{fig:surdensout} & Figure~\ref{fig:surdensout} & Dust surface density \\ $M_{\rm dust}\,[10^{-4}\,M_{\odot}]$ \tnote{(a)} & $-$ & 1.2 & 2.4 & 2.3 & 2.4 & 2.4 & 2.5 & Total dust mass \\ $\Lambda$ & Free & 5.0 & 5.0 & 1.0 & 2.6 & 10.6 & $-$ & 
$\Lambda$ for the entire disk, see Sect.~\ref{sec:conhratio} \\ $\Lambda{1}$ & Free & $-$ & $-$ & $-$ & $-$ & $-$ & $3.0_{-0.8}^{+0.3}$ & $\Lambda$ for ${R\,{<}\,59\,\rm{AU}}$, see Sect.~\ref{sec:varhratio} \\ $\Lambda{2}$ & Free & $-$ & $-$ & $-$ & $-$ & $-$ & $1.2_{-0.1}^{+0.1}$ & $\Lambda$ for ${59\,{\le}\,R\,{<}\,78\,\rm{AU}}$, see Sect.~\ref{sec:varhratio} \\ $\Lambda{3}$ & Free & $-$ & $-$ & $-$ & $-$ & $-$ & $1.9_{-0.1}^{+15.9}$ & $\Lambda$ for ${78\,{\le}\,R\,{<}\,94\,\rm{AU}}$, see Sect.~\ref{sec:varhratio} \\ $\Lambda{4}$ & Free & $-$ & $-$ & $-$ & $-$ & $-$ & $16.3_{-9.8}^{+3.7}$ & $\Lambda$ for ${R\,{\ge}\,94\,\rm{AU}}$, see Sect.~\ref{sec:varhratio} \\ \hline $\chi_{\rm tot}^2$ & $-$ & $-$ & $-$ & 975 & 460 & 478 & 242 & Chi-square of the model, see Eq.~\ref{eqn:chitot} \\ \bottomrule \end{tabular} \begin{tablenotes} \item[(a)] The total dust mass $M_{\rm dust}$ is obtained by integrating the surface density $\Sigma_{\rm d}$ that is constructed in the fitting procedure. Hence, $M_{\rm dust}$ is not a direct fitting parameter. \end{tablenotes} \end{threeparttable} \end{table*} \subsection{Setting $a_{\rm{max}}$ for the LGP based on SED modeling} \label{sec:sedmodel} Our model has three free parameters/quantities: the dust surface density ($\Sigma_{\rm d}$), the ratio of gas-to-dust scale height ($\Lambda$) and the maximum grain size ($a_{\rm max}$) for the LGP. Note that the total dust mass ($M_{\rm dust}$) is not a free parameter, because it follows from integrating $\Sigma_{\rm d}$ over the disk. A population of large dust grains flattens the spectral index at millimeter wavelengths \cite{Ricci2010,Testi2014}. We collected photometric data from various catalogs and individual studies \cite{Mannings1994,Oudmaijer2001,cutri2003,Isella2007,ishihara2010,Sandell2011,cutri2013,Pascual2016,Tripathi2017,Andrews2018,Guidi2022}. The observed spectral energy distribution (SED) is shown as red dots in Figure~\ref{fig:bestsed}.
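As a reminder of how this grain-growth diagnostic works: for optically thin emission in the Rayleigh-Jeans tail, the flux follows $F_{\nu}\,{\propto}\,\nu^{\alpha}$ and the opacity slope is $\beta\,{=}\,\alpha\,{-}\,2$. A minimal sketch, with illustrative flux values that are not the actual HD\,163296 photometry:

```python
import math

# Illustrative check of the grain-growth diagnostic: for optically thin
# Rayleigh-Jeans emission, F_nu ~ nu^alpha and the opacity slope is
# beta = alpha - 2. The fluxes below are made up for illustration;
# they are NOT the actual HD 163296 photometry.
nu1, nu2 = 240e9, 100e9           # Hz, roughly 1.25 mm and 3 mm
f1 = 0.70                         # Jy at nu1 (illustrative)
f2 = f1 / (nu1 / nu2) ** 2.7      # constructed so that alpha_mm = 2.7

alpha_mm = math.log(f1 / f2) / math.log(nu1 / nu2)
beta = alpha_mm - 2.0             # millimeter opacity slope

print(alpha_mm, beta)             # 2.7 and 0.7; ISM-like dust has beta ~ 1.7
```

A slope $\beta$ well below the interstellar value of ${\sim}\,1.7$ is the signature of grain growth exploited in the fitting below.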
The spectral index measured at wavelengths $\lambda\,{\ge}\,1\,\rm{mm}$ is $\alpha_{\rm mm.obs}\,{=}\,2.7\,{\pm}\,0.06$. Assuming that the emission is optically thin and in the Rayleigh-Jeans tail, this translates into a millimeter slope of the dust absorption coefficient $\beta\,{=}\,\alpha_{\rm mm.obs}{-}\,2\,{=}\,0.7$. The $\beta$ value for interstellar medium dust is ${\sim}\,1.7$ \cite{Li2001}. A lower $\beta$ in the HD\,163296 disk suggests that dust grains have grown up to millimeter and even centimeter sizes. To quantify the extent of grain growth in the HD\,163296 disk, we build a grid of SED models in which the ratio of gas-to-dust scale height is fixed to $\Lambda\,{=}\,5$, a typical value used in the literature \cite{andrews2011,Liu2022}. In Sect.~\ref{sec:fitalma}, we will conduct an extensive parameter study on $\Lambda$ through a dedicated fitting to the ALMA image. However, this parameter is not expected to have a significant impact on the constraint on $a_{\rm max}$ as long as the optical depth is not large. We sample 16 values of $a_{\rm max}$ that are logarithmically distributed from $10\,\mu{\rm m}$ to 1\,cm. The iteration procedure for $\Sigma_{\rm d}$, as laid out in Sect.~\ref{sec:surdens}, is performed separately for each of the 16 models. As a result, 16 model SEDs are simulated. The model with $a_{\rm max}\,{=}\,1\,\rm{cm}$ (Model \texttt{S2}) best matches the observation, see Figure~\ref{fig:bestsed}. Its converged surface density is shown in Figure~\ref{fig:s1s2surdens}, and Table~\ref{tab:paras} gives an overview of the model parameters. For the subsequent fitting to the ALMA data, we fixed $a_{\rm max}\,{=}\,1\,\rm{cm}$ for the LGP, leaving $\Sigma_{\rm d}$ and $\Lambda$ as the only two free parameters. The discrepancies in the mid- and far-infrared fluxes between model and observation are due to the presence of a puffed-up inner rim.
This type of rim is a natural outcome when solving the disk structure in vertical hydrostatic equilibrium, particularly for Herbig disks \cite{Dullemond2007}. The blue solid line in Figure~\ref{fig:s2hgas} shows the gas scale height of Model \texttt{S2}. The overall geometry of the disk is flared. Disk regions just behind the inner rim are not exposed to the stellar light, leading to a reduced mid-infrared excess. Beyond a certain radial distance, the disk emerges from the shadow cast by the inner rim. The surface layer of these outer regions directly absorbs stellar photons, and hence produces more far-infrared emission than the observed level. One can fully parameterize the scale height with analytic forms, e.g., a power law, and fit the infrared SED to constrain the geometry \cite{Harvey2012}. However, there are some degeneracies between the geometric parameters in SED models. Moreover, modeling the SED is not able to constrain the scale height of millimeter dust grains, which is the key quantity of this work. Therefore, we do not attempt further fine tuning of the SED fitting, and keep the number of free parameters as small as possible. \section{Fitting the DSHARP ALMA image} \label{sec:fitalma} In this section, we fit the surface brightnesses along the major and minor axes of the disk, and on the B67 and B100 rings to constrain $\Lambda$. Our strategy proceeds from the simple assumption of a radially constant $\Lambda$ to a more complex scenario in which $\Lambda$ varies with $R$. The contrasts of gaps, as presented in Table~\ref{tab:gapcont}, are sensitive to the degree of dust settling. Therefore, to quantify the quality of fit, we first check whether or not the gap contrasts of the model are consistent with the observation. Then, we calculate the $\chi^2$ along the major axis ($\chi_{\rm major}^2$) and minor axis ($\chi_{\rm minor}^2$), and on the B67 ($\chi_{\rm B67}^2$) and B100 ring ($\chi_{\rm B100}^2$).
To exclude the effect of the crescent-like substructure along ${\rm PA}\,{\sim}\,99^{\circ}$, data points between $\phi\,{=}\,{-}45^{\circ}$ and $45^{\circ}$ are not taken into account when calculating $\chi_{\rm B67}^2$ and $\chi_{\rm B100}^2$. The goodness of fit is evaluated according to \begin{equation} \chi_{\rm tot}^2\,{=}\,g_{1}\,\chi_{\rm major}^2+g_{2}\,\chi_{\rm minor}^2+g_{3}\,\chi_{\rm B67}^2+g_{4}\,\chi_{\rm B100}^2. \label{eqn:chitot} \end{equation} Four factors, i.e., $g_{1}$, $g_{2}$, $g_{3}$ and $g_{4}$, are introduced to balance the weightings. First, we calculate the factors as \begin{equation} g_{i} = \frac{\sum_{j=1}^{4}N_{j}}{N_{i}}, \end{equation} where $N_{i}$ is the number of data points taken into account in the calculation of the $\chi^2$ terms for the major and minor axes, and the B67 and B100 rings, respectively. Then, a normalization is performed to ensure that the sum of $g_{i}$ equals unity. \subsection{Constant $\Lambda$ in the radial direction} \label{sec:conhratio} We first adopt the simplest assumption in which the ratio of gas-to-dust scale height does not change with radius ($R$). We sample 20 values for $\Lambda$, which are logarithmically distributed between 1 and 20. The case of $\Lambda\,{=}\,1$ means that millimeter dust grains are well coupled with the gas. Strongly settled models feature large values of $\Lambda$. The iteration process for $\Sigma_{d}$ is performed from scratch for each of these 20 models, ensuring that all the models are fully independent and self-consistent. None of the 20 models can reproduce all of the gap contrasts within the uncertainties simultaneously. Panels (c)-(f) of Figure~\ref{fig:imgres} show a comparison of the brightnesses between the observation and three representative models with $\Lambda\,{=}\,1.0$ (model \texttt{I1}), 2.6 (model \texttt{I2}) and 10.6 (model \texttt{I3}), respectively. Model \texttt{I2} has the lowest $\chi_{\rm tot}^2\,{=}\,460$ among the 20 samples.
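For concreteness, the weighting scheme defined above can be sketched as follows; the data-point counts are placeholders, not the actual numbers used in the fit:

```python
# Sketch of the chi^2 weighting scheme: g_i is inversely proportional to
# the number of data points N_i of each term, then normalized to sum to 1.
# The counts below are placeholders, not the actual numbers of the paper.
N = {"major": 120, "minor": 120, "B67": 90, "B100": 110}

total = sum(N.values())
g = {k: total / n for k, n in N.items()}      # g_i = (sum_j N_j) / N_i
norm = sum(g.values())
g = {k: v / norm for k, v in g.items()}       # normalize so sum(g_i) = 1

# chi2_tot = sum(g[k] * chi2[k] for k in N)   # with the four chi^2 terms
```

The inverse-count weighting prevents the term with the most data points (e.g., the major axis) from dominating $\chi_{\rm tot}^2$.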
Figure~\ref{fig:surdensout} shows the reconstructed surface densities, whereas the gap contrasts extracted from the models are given in Table~\ref{tab:gapcont}. Along the disk major axis, the three models reproduce the data with similar quality, see panel (c) of Figure~\ref{fig:imgres}, and the model gap contrasts in Table~\ref{tab:gapcont}. The ALMA beam dilutes the ring emission and contributes to the adjacent gap emission. In vertically thicker (smaller $\Lambda$) disks, dust grains are located at greater heights above the midplane where the temperature is high. In this case, the ring emission is stronger, and its contribution to the gap emission is higher, which reduces the gap contrast since the intrinsic emission from the gap is low. In addition to the millimeter dust scale height, the depth of surface density drops is another quantity influencing the gap contrast. A comparison between model \texttt{I1} and model \texttt{I3} indicates that deeper surface density drops in more turbulent disks can produce gap contrasts on the disk major axis similar to those generated by shallower surface density drops in more quiescent disks. This means that fitting the data on the major axis alone cannot break the degeneracy. \begin{figure}[H] \centering \includegraphics[width=0.48\textwidth]{models_surface_density.eps} \caption{Dust surface densities (on the left Y axis) reconstructed from the iterative fitting process, and optical depth at a wavelength of 1.25\,mm (on the right Y axis) for models \texttt{I1}, \texttt{I2}, \texttt{I3} and \texttt{I4}. The dashed line shows the starting surface density used in the fitting loop: $\Sigma(R)\,{=}\,\Sigma_{0}\,(R/R_{\rm c})^{-\gamma}\,{\rm exp}[-(R/R_{\rm c})^{2-\gamma}]$ with $R_{\rm{c}}\,{=}\,90\,\rm{AU}$ and $\gamma\,{=}\,0.1$, see Sect.~\ref{sec:surdens}.} \label{fig:surdensout} \end{figure} Along the disk minor axis, the gap contrast changes with $\Lambda$ due to the effect of projection.
Panel (d) of Figure~\ref{fig:imgres} shows that models with a higher degree of dust settling produce more distinct rings and deeper gaps, and vice versa. This is consistent with the findings reported by Pinte et al. \cite{Pinte2016}. Neither the D48 nor the D86 gap can be explained by model \texttt{I1}. Though both models \texttt{I2} and \texttt{I3} are consistent with the data of the D48 gap, only the former reproduces the D86 gap within the uncertainty, see Table~\ref{tab:gapcont}. The gas-to-dust scale height ratio $\Lambda$ has a strong impact on the brightness variation along the B67 and B100 rings. The well-mixed disk (model \texttt{I1}) shows two pronounced dips at $\phi\,{=}\,90^{\circ}$ and $\phi\,{=}\,270^{\circ}$, due to the difference in the optical depth ($\tau$) along the line of sight between $\phi\,{=}\,0^{\circ}$ (or $180^{\circ}$, major axis) and $\phi\,{=}\,90^{\circ}$ (or $270^{\circ}$, minor axis) \cite{Doi2021}. Such a difference in $\tau$ decreases with increasing $\Lambda$. Consequently, the contrasts of ``gaps'' on the rings are reduced in more settled disks, see for instance model \texttt{I3}. Panels (e) and (f) of Figure~\ref{fig:imgres} suggest that the degree of dust settling is different between B67 and B100. While B67 is close to a well-mixed situation, B100 favors a scenario in which large dust grains are strongly concentrated in the midplane. \subsection{Varying $\Lambda$ in the radial direction} \label{sec:varhratio} Although the experiment assuming a constant $\Lambda$ does not return a satisfactory solution, it provides clues to improve the model. The fitting results imply that the degree of dust settling changes with $R$.
Therefore, we parameterize the ratio of gas-to-dust scale height with a piecewise function \begin{equation} \Lambda = \left\{ \begin{array}{rcl} \Lambda{1} & : & {R\,{<}\,59\,\rm{AU}} \\ \Lambda{2} & : & {59\,\rm{AU}\,{\le}\,R\,{<}\,78\,\rm{AU}} \\ \Lambda{3} & : & {78\,\rm{AU}\,{\le}\,R\,{<}\,94\,\rm{AU}} \\ \Lambda{4} & : & {R\,{\ge}\,94\,\rm{AU}}. \\ \end{array} \right. \end{equation} The boundaries of the four radial bins are chosen according to the locations and widths of the gaps and rings, see Sect.~\ref{sec:obs}. We did not explore these borders in the fitting process. Using a piecewise form may introduce artifacts at the bin boundaries. Nevertheless, how the gas-to-dust scale height ratio smoothly varies from one radial bin to another is difficult to investigate, because it requires observational data at extremely high spatial resolutions that fully resolve the transition region between two adjacent bins. In the new model configuration, the ratios $\Lambda{2}$ and $\Lambda{4}$ are expected to play the dominant role in controlling the gap contrasts of the B67 and B100 rings, respectively. The gap contrasts of D48 and D86 are mainly influenced by a combination of $\Lambda{1}$ and $\Lambda{2}$, and a combination of $\Lambda{3}$ and $\Lambda{4}$, respectively. This is because the definition of contrasts of gaps on the major/minor axis is related to the brightnesses both in the gap and in its exterior ring, see Sect.~\ref{sec:obs}. \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{chi2_tot.eps} \caption{The $\chi_{\rm tot}^2$ distribution as a function of the gas-to-dust scale height ratios $\Lambda{1}$, $\Lambda{2}$, $\Lambda{3}$ and $\Lambda{4}$. The dots overlaid with a red cross refer to models that cannot reproduce all of the observed gap contrasts tabulated in Table~\ref{tab:gapcont}.
Note that the $\chi_{\rm tot}^2\,{-}\,\Lambda{3}$ profile is flat, and most of the considered values for $\Lambda{3}$ are able to generate all of the observed gap contrasts. Therefore, $\Lambda{3}$ is basically not constrained.} \label{fig:chi2tot} \end{figure*} The parameter space becomes $\left\{\Lambda{1},\,\Lambda{2},\,\Lambda{3},\,\Lambda{4},\,\Sigma_{d}\right\}$. To maintain self-consistency and independence, the time-consuming process for iterating $\Sigma_{d}$ has to be conducted for each of the sampled sets $\left\{\Lambda{1},\,\Lambda{2},\,\Lambda{3},\,\Lambda{4}\right\}$. Therefore, it is impractical to perform the parameter study using the Markov Chain Monte Carlo approach. Instead, the grid-search method is used to finish the task. We first search for the optimum combination of $\Lambda{1}$ and $\Lambda{2}$, and then for that of $\Lambda{3}$ and $\Lambda{4}$. We sample 20 values for $\Lambda{1}$, which are logarithmically spaced from 1 to 20. Before the parameter study, we ran a number of simulation tests, and found that models with $\Lambda{2}$ deviating even slightly from ${\sim}\,1.2$ are not able to generate gap contrasts of B67 comparable to the observation. Hence, to reduce the computational time while remaining conservative, we consider 10 points for $\Lambda{2}$ logarithmically spaced from 1 to 4. At this stage, $\Lambda{3}$ and $\Lambda{4}$ are fixed to 2.6, i.e., the value of model \texttt{I2}. We run the iteration procedure for $\Sigma_{d}$ from scratch for each of the 200 different combinations of $\Lambda{1}$ and $\Lambda{2}$, and obtain 200 models. Then, we fix $\Lambda{1}$ and $\Lambda{2}$ to the values of the model with the lowest $\chi_{\rm tot}^2$. The exploration for $\Lambda{3}$ and $\Lambda{4}$ is similar. Both parameters take the same grid points as those for $\Lambda{1}$, and therefore form 400 different combinations.
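The piecewise parameterization and the two-stage grid sampling described above can be sketched as follows; the default $\Lambda$ values are the best-fit numbers of model \texttt{I4}, used here only for illustration:

```python
import numpy as np

# Sketch of the piecewise gas-to-dust scale height ratio and the grid
# sampling described above. The default Lambda values are the best-fit
# numbers of model I4, used purely for illustration.
def scale_height_ratio(R_au, lam=(3.0, 1.2, 1.9, 16.3)):
    """Return Lambda(R) for the four radial bins (R in AU)."""
    if R_au < 59.0:
        return lam[0]
    elif R_au < 78.0:
        return lam[1]
    elif R_au < 94.0:
        return lam[2]
    return lam[3]

# Grids of the two-stage search: 20 log-spaced points from 1 to 20 for
# Lambda1 (and later Lambda3, Lambda4), 10 log-spaced points from 1 to 4
# for Lambda2.
lam1_grid = np.geomspace(1.0, 20.0, 20)
lam2_grid = np.geomspace(1.0, 4.0, 10)
n_stage1 = lam1_grid.size * lam2_grid.size   # 200 combinations
```

Each of the 200 (and later 400) combinations triggers a full surface-density iteration, which is why a Markov Chain Monte Carlo exploration is impractical here.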
The final best-fit model (model \texttt{I4}) features $\Lambda{1}\,{=}\,3.0$, $\Lambda{2}\,{=}\,1.2$, $\Lambda{3}\,{=}\,1.9$, $\Lambda{4}\,{=}\,16.3$, and $\chi_{\rm tot}^2\,{=}\,245$. Its dust surface density and millimeter optical depth are shown with the blue line in Figure~\ref{fig:surdensout}. The model image and brightness profiles are compared with the observation in Figure~\ref{fig:imgres}. The gap contrasts and model parameters are summarized in Tables~\ref{tab:gapcont} and \ref{tab:paras}, respectively. The best-fit model is able to explain all of the gap contrasts. We separately vary the gas-to-dust scale height ratio in each radial bin from its best-fit value with a step width of 0.1, and investigate how well the parameters are constrained. The variations of $\chi_{\rm tot}^2$ are shown in Figure~\ref{fig:chi2tot}. The dots overlaid with a red cross refer to models that cannot reproduce all of the observed gap contrasts within their errors. Therefore, we exclude them from the estimation of parameter uncertainties, which are deduced from the models with $\chi_{\rm tot}^2$ less than 1.05 times the minimum $\chi_{\rm tot}^2$. For instance, all the models with $\Lambda1\,{\lesssim}\,2.2$ produce lower contrasts (i.e., ${<}\,0.92$) for the D48 gap measured on the disk minor axis than the observed value ($0.94\,{\pm}\,0.02$). Therefore, they are considered to be invalid although some of them have better $\chi^2_{\rm tot}$ than that of the best-fit model. The profiles of $\chi_{\rm tot}^2\,{-}\,\Lambda{1}$, $\chi_{\rm tot}^2\,{-}\,\Lambda{2}$ and $\chi_{\rm tot}^2\,{-}\,\Lambda{4}$ show a clear minimum, indicating that the gas-to-dust scale height ratios in the D48, B67 and B100 regions are well constrained. Their validity ranges are estimated to be [2.2, 3.3], [1.1, 1.3], and ${\ge}\,6.5$, respectively.
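The uncertainty estimation described above, selecting contrast-consistent models below the 1.05 $\chi^2$ threshold and quoting the spanned parameter range, can be sketched as follows; the arrays are illustrative only, not the actual model grid:

```python
import numpy as np

# Sketch of the uncertainty estimation: among models that reproduce all
# observed gap contrasts, keep those with chi2 < 1.05 * chi2_min and
# quote the spanned parameter range. The arrays are illustrative only.
lam = np.array([1.5, 2.2, 2.6, 3.0, 3.3, 4.0])
chi2 = np.array([250.0, 248.0, 244.0, 242.0, 251.0, 300.0])
reproduces_contrasts = np.array([False, True, True, True, True, False])

chi2_min = chi2[reproduces_contrasts].min()
valid = reproduces_contrasts & (chi2 < 1.05 * chi2_min)
lo, hi = lam[valid].min(), lam[valid].max()
print(lo, hi)   # validity range of this (illustrative) parameter
```

Note that models failing the gap-contrast test are excluded even when their $\chi^2$ is competitive, mirroring the treatment of the $\Lambda1\,{\lesssim}\,2.2$ models above.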
The distribution of $\chi_{\rm tot}^2$ as a function of $\Lambda{3}$ is quite flat, and all the $\Lambda{3}$ values in the considered range can reproduce the data well. Hence, $\Lambda{3}$ is basically unconstrained. \section{Discussion} \label{sec:discussion} Using self-consistent radiative transfer models, we have placed constraints on the degree of dust settling by fitting the gap contrasts of the D48, B67, D86 and B100 features. Our results suggest a radially varying gas-to-dust scale height ratio in the HD\,163296 disk. In this section, we compare our results with literature studies, and link the derived gas-to-dust scale height ratio to the turbulence strength in the HD\,163296 disk. \subsection{Comparison of $\Lambda$ between different works} Ohashi et al. \cite{Ohashi2019} found that the dust scale height is the key parameter for reproducing the azimuthal variation of the polarization pattern in the gaps. By analyzing the ALMA data of the 0.87\,mm dust polarization from the HD\,163296 disk, they constrained the dust scale height to be less than one-third of the gas scale height for the D48 gap, and to be two-thirds of the gas scale height for the D86 gap. Recently, Doi \& Kataoka \cite{Doi2021} showed that the azimuthal variation of the continuum along rings is sensitive to the degree of dust settling. Assuming that the disk is vertically isothermal with a fixed power-law temperature, they fit the DSHARP continuum data of the B67 and B100 rings, and inferred the ratio of gas-to-dust scale height to be 1.1 and ${>}\,9.5$ for the B67 and B100 ring, respectively. Figure~\ref{fig:hdustcompare} (upper panel) shows a comparison of $\Lambda$ between different works. The blue solid line refers to our best fit, whereas brown dots and green dots mark the results by Ohashi et al. \cite{Ohashi2019} and Doi \& Kataoka \cite{Doi2021}, respectively. As can be seen, our results are overall consistent with these literature values.
Going one step further, however, our analysis provides constraints on $\Lambda$ both for the ring and gap regions in the framework of self-consistent radiative transfer simulations. The black dashed line in the upper panel of Figure~\ref{fig:hdustcompare} shows the dust scale height. In the inner ($R\,{<}\,60\,\rm{AU}$) or outermost ($R\,{>}\,94\,\rm{AU}$) regions, the millimeter dust disk is quite thin, with scale heights less than ${\sim}\,2\,\rm{AU}$. Disk regions in the vicinity of B67 have a millimeter dust scale height of ${\sim}\,4\,\rm{AU}$. Disks viewed at high inclination have the specific advantage that the vertical extent of the emission layers can be directly constrained by spatially resolved images. Villenave et al. \cite{Villenave2020} presented ALMA continuum observations of 12 edge-on disks, at an angular resolution of ${\sim}\,0.1^{\prime\prime}$. A comparison between a set of radiative transfer models and the data indicates that at least three disks in their sample are consistent with a millimeter dust scale height of a few AU. Our inferred dust scale height for the HD\,163296 disk, inclined at $46.7^{\circ}$, is comparable with those of the observed edge-on disks. \begin{figure}[H] \centering \includegraphics[width=0.48\textwidth]{hratio_result_v2.eps} \caption{{\it Upper panel:} comparison of $\Lambda$ (on the right Y axis) between model \texttt{I4} (blue solid line) and literature studies. The $\Lambda$ values by Ohashi et al. \cite{Ohashi2019} and Doi \& Kataoka \cite{Doi2021} are indicated with brown dots and green dots, respectively. The black dashed curve shows the millimeter dust scale height (on the left Y axis) of model \texttt{I4}.
{\it Bottom panel:} the $\alpha_{\rm turb}/{\rm St}$ ratio (blue solid line, on the right Y axis) and $\alpha_{\rm turb}$ (black dashed line, on the left Y axis) of model \texttt{I4}.} \label{fig:hdustcompare} \end{figure} \subsection{Comparison of $\alpha_{\rm turb}/\rm{St}$ and $\alpha_{\rm turb}$ between different works} Assuming an equilibrium between dust settling and vertical stirring by turbulent motions, the dust scale height and gas scale height follow the relation \cite{Youdin2007,Birnstiel2010} \begin{equation} H_{\rm dust} = H_{\rm gas}\left(1+\frac{\rm St}{\alpha_{\rm turb}} \frac{\rm 1+2\,St}{\rm 1+St}\right)^{-1/2}, \label{eqn:dusth} \end{equation} where the Stokes number St is given by \begin{equation} {\rm St}\,{=}\,\frac{\rho_{\rm grain}\bar{a}}{\Sigma_{\rm g}(R)}\frac{\pi}{2}. \end{equation} The gas surface density $\Sigma_{\rm g}(R)\,{=}\,\Sigma_{0}\,(R/R_{\rm c})^{-\gamma}\,{\rm exp}[-(R/R_{\rm c})^{2-\gamma}]$, with $\Sigma_{0}\,{=}\,8.8\,\rm{g\,cm^{-2}}$, $R_{\rm{c}}\,{=}\,165\,\rm{AU}$ and $\gamma\,{=}\,0.8$, is constrained by high-resolution observations of multiple CO lines \cite{Zhang2021}. Considering a grain size distribution like the one prescribed for the LGP, $\bar{a}$ stands for the representative size of the dust grains that dominate the continuum emission at 1.25\,mm. We check how the mass absorption coefficient $\kappa_{\rm abs}$ at 1.25\,mm changes with $a$, and find that it peaks at $a\,{\sim}\,0.2\,\rm{mm}$. This value is close to the number given by $\lambda/2\pi$. Therefore, in our calculation of St, we took $\bar{a}\,{=}\,0.2\,\rm{mm}$. \begin{table}[H] \caption{$\alpha_{\rm turb}/{\rm St}$ for the B67 and B100 ring from different studies.} \centering \linespread{1.2}\selectfont \begin{tabular}{lcc} \hline Reference & B67 ring & B100 ring \\ \hline Dullemond et al. \cite{Dullemond2018} & 0.33 & $0.13\,{\sim}\,0.77$ \\ Rosotti et al.
\cite{Rosotti2020} & 0.23 & 0.04 \\ Doi \& Kataoka \cite{Doi2021} & ${>}\,2.4$ & ${<}\,0.011$ \\ This work & $2.3_{-0.9}^{+2.5}$ & $0.0038_{-0.0013}^{+0.02}$ \\ \hline \end{tabular} \linespread{1.0}\selectfont \label{tab:alpha} \end{table} The St value varies from ${\sim}\,10^{-5}$ in the inner disk to ${\sim}\,10^{-2}$ in the outer regions. Because St is much less than unity, Eq.~\ref{eqn:dusth} can be simplified as $H_{\rm dust}\,{=}\,H_{\rm gas}\left(1+\frac{\rm St}{\alpha_{\rm turb}}\right)^{-1/2}$. Therefore, the constrained $\Lambda$ directly translates into a ratio of $\alpha_{\rm turb}/\rm{St}$, which is shown with the blue solid line in the bottom panel of Figure~\ref{fig:hdustcompare}. Based on different methodologies, other groups have derived $\alpha_{\rm turb}/\rm{St}$ values for the B67 and B100 rings. For instance, Rosotti et al. \cite{Rosotti2020} determined $\alpha_{\rm turb}/\rm{St}$ by measuring the deviation from Keplerian rotation of the gas in the proximity of the continuum peaks. Under the assumption that dust rings are caused by dust trapping in radial pressure bumps, Dullemond et al. \cite{Dullemond2018} constrained $\alpha_{\rm turb}/{\rm St}$ by analyzing the widths of the dust rings. In Doi \& Kataoka \cite{Doi2021}, the $\alpha_{\rm turb}/\rm{St}$ value was inferred by investigating the azimuthal intensity variation along dust rings. Table~\ref{tab:alpha} summarizes the reported values together with our best-fit result. As can be seen, our result is in good agreement with the values derived by Doi \& Kataoka \cite{Doi2021}. This is not surprising because the idea of constraining $\alpha_{\rm turb}/\rm{St}$ is the same. However, our methodology is more realistic, and data points not only on the rings but also along the major/minor axes are simultaneously taken into account in the analysis. We note that the best-fit $\alpha_{\rm turb}/{\rm St}$ for B67 is about one order of magnitude larger than those obtained in Dullemond et al.
and Rosotti et al. There are several possibilities to explain such a difference. First, our methodology is sensitive to the strength of turbulent motions in the vertical direction, while the constraints by Dullemond et al. are more related to the radial diffusion of dust grains. Second, the B67 ring has a neighboring crescent, implying that the ring itself may not be perfectly axisymmetric, thus undermining the axisymmetry assumption of our modeling procedure. Third, if the gaps are indeed opened by planets \cite{Pinte2018,Teague2018b,Teague2021}, the B67 ring can be substantially stirred by meridional gas flows. Numerical simulations have shown that massive planets can stir sub-millimeter-sized dust grains up to ${\sim}\,70\%$ of the gas scale height at the gap edges \cite{Bi2021,Binkert2021}. For the B100 ring, we obtain a lower $\alpha_{\rm turb}/{\rm St}$ than that inferred by Dullemond et al. A lower turbulence in the vertical direction than in the radial direction can be explained under several physical scenarios, such as dust feedback on turbulence \cite{Xu2022}, disk self-gravity \cite{Baehr2021}, and radial (pseudo-)diffusion \cite{Hu2021}. The black dashed line in the bottom panel of Figure~\ref{fig:hdustcompare} shows the derived turbulence strength. Except for the B67 ring, the disk has a turbulence level of $\alpha_{\rm turb}\,{<}\,3\times10^{-3}$. Theoretical works have shown that pure hydrodynamic mechanisms or the magnetorotational instability suppressed by nonideal magnetohydrodynamic effects can generate similar turbulence levels in protoplanetary disks \cite{Bai2011,Bai2015,Flock2017,Cui2020,Cui2021}. In the B67 ring, the turbulence is strong, with $\alpha_{\rm turb}\,{\sim}\,1.2\,{\times}\,10^{-2}$. Several studies have tried to measure turbulence in the HD\,163296 disk through detailed analysis of gas line observations. Boneberg et al.
\cite{Boneberg2016} found that models with $\alpha_{\rm turb}\,{=}\,(0.1\,{-}\,6.3)\,{\times}\,10^{-3}$ match well with the ${\rm C^{18}O}\,J\,{=}\,2{-}1$ line profile within 90\,AU of the disk. Based on CO isotopologue and DCO$^{+}$ line observations, Flaherty et al. \cite{Flaherty2015,Flaherty2017} derived the gas turbulence velocity in the disk, which is less than a few percent of the sound speed, corresponding to $\alpha_{\rm turb}\,{\lesssim}\,3\,{\times}\,10^{-3}$. Our inferred value of $\alpha_{\rm turb}$, except for the B67 ring, is consistent with the results set by gas observations. The value of $\alpha_{\rm turb}$ for B67 from our modeling is larger than the upper limit in either Boneberg et al. or Flaherty et al. The discrepancy may be explained by two reasons. First, as demonstrated by our analysis, $\Lambda$, and therefore $\alpha_{\rm turb}$, may vary in the radial direction. The spatial resolution of the gas line observations in Boneberg et al. and Flaherty et al. is ${\sim}\,0.5^{\prime\prime}$, which is 10 times coarser than that of the DSHARP data. Consequently, their constraints on $\alpha_{\rm turb}$ represent a mean level of turbulence over a much broader range of radius than ours. Due to the beam smearing, the low turbulence outside B67 results in a small $\alpha_{\rm turb}$ probed by the gas lines. Second, the turbulence strength we measure describes the role of dust stirring in the vertical direction. This may be different from the turbulence of gas motions. Recent numerical simulations of dust evolution have started to use different values of $\alpha_{\rm turb}$ for gas evolution, radial diffusion and vertical stirring \cite{Pinilla2021}. Isella et al. \cite{Isella2016} presented Band 6 ALMA observations of HD\,163296 with a lower angular resolution than the DSHARP data, revealing three dust gaps at 60, 100, and 160\,AU in the continuum as well as CO depletion in the middle and outer dust gaps. Liu et al.
\cite{Liu2018} investigated these gaps by performing 2D global hydrodynamic simulations of planet-disk interaction, and found that three half-Jovian-mass planets in a disk with an effective viscosity varying with radius can explain most of the observational features. Within $R\,{=}\,100\,\rm{AU}$, their model has a turbulence level of $\alpha_{\rm turb}\,{<}\,3\,{\times}\,10^{-4}$, which is weaker than ours. Such an inconsistency can be explained by the difference in the quality of the data used in the analysis. As shown in the left column of Figure 3 in Liu et al. \cite{Liu2018}, the best-fit $\alpha_{\rm turb}$ is sensitive to how well the dust surface densities in the gap region are constrained. In the ALMA observation used by Liu et al. \cite{Liu2018}, the beam size is ${\sim}\,0.2^{\prime\prime}$ and the widths of the inner two gaps are narrower than ${\sim}\,0.27^{\prime\prime}$, indicating that the gaps are not fully resolved. In contrast, our constraints are placed using the DSHARP data with four times better spatial resolution and sensitivity. \subsection{The effect of model assumptions on the results} The direct constraint from our radiative transfer analysis is on the gas-to-dust scale height ratio $\Lambda$. The scenario of dust settling that links $\Lambda$ and $\alpha_{\rm turb}$ is given by Eq.~\ref{eqn:dusth}, and the relation is based on numerical simulations performed by Dubrulle et al. \cite{Dubrulle1995} and Youdin \& Lithwick \cite{Youdin2007}. Models with more realistic physics of dust growth, sedimentation and radial mixing may alter the connection between dust and gas scale heights, and therefore change the result. To calculate the Stokes number characterizing the coupling between gas and dust, one needs to know the gas surface density. In our calculation, we take the result from Zhang et al. \cite{Zhang2021}, who modeled the high-resolution ALMA data of CO and its isotopologue lines.
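Under these assumptions, the chain from $\Lambda$ to $\alpha_{\rm turb}$ (Eq.~\ref{eqn:dusth} in the limit ${\rm St}\,{\ll}\,1$, so that $\Lambda^2\,{=}\,1+{\rm St}/\alpha_{\rm turb}$) can be sketched as follows; the grain internal density is an assumed DSHARP-like value, not quoted in the text:

```python
import numpy as np

# Sketch of the Lambda -> alpha_turb/St conversion. With St << 1,
# Lambda^2 = 1 + St/alpha_turb. Sigma_g follows the Zhang et al. profile
# quoted in the text; rho_grain = 1.675 g/cm^3 is an ASSUMED DSHARP-like
# grain internal density, not given in this section.
def sigma_gas(R_au, sigma0=8.8, Rc=165.0, gamma=0.8):
    """Gas surface density in g/cm^2."""
    x = R_au / Rc
    return sigma0 * x**(-gamma) * np.exp(-x**(2.0 - gamma))

def stokes(R_au, a_bar=0.02, rho_grain=1.675):
    """Stokes number; a_bar in cm (0.02 cm = 0.2 mm)."""
    return np.pi * rho_grain * a_bar / (2.0 * sigma_gas(R_au))

def alpha_over_st(lam):
    """alpha_turb/St from the gas-to-dust scale height ratio Lambda."""
    return 1.0 / (lam**2 - 1.0)

# Best-fit ratios for the B67 (Lambda = 1.2) and B100 (Lambda = 16.3) rings:
print(alpha_over_st(1.2))    # ~2.3, as quoted in Table tab:alpha
print(alpha_over_st(16.3))   # ~0.0038, as quoted in Table tab:alpha
alpha_b67 = stokes(67.0) * alpha_over_st(1.2)   # turbulence level at B67
```

With the assumed grain density, the turbulence level at B67 comes out at the ${\sim}\,10^{-2}$ level, consistent with the value discussed above.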
How well the CO molecular lines probe the underlying total gas surface density remains uncertain. This uncertainty does not affect our constraints on $\Lambda$ from the continuum radiative transfer modeling, but it introduces uncertainties when inferring $\alpha_{\rm turb}$ from $\Lambda$, see Eq.~\ref{eqn:dusth}. \section{Summary} \label{sec:summary} Constraining the strength of turbulence plays a key role in building up our knowledge of disk evolution and planet formation. It is also crucial for running numerical models to interpret high-resolution ALMA observations. In this work, we took the HD\,163296 disk as an example, and investigated in detail the millimeter gap contrast as a probe of the turbulence level. With self-consistent radiative transfer modeling, we fit the gap contrasts measured for the D48, B67, D86 and B100 substructures that are spatially resolved by the DSHARP observation. We constrained the gas-to-dust scale height ratio $\Lambda$ to be $3.0_{-0.8}^{+0.3}$, $1.2_{-0.1}^{+0.1}$ and ${\ge}\,6.5$ for the D48, B67 and B100 regions, respectively. Our results show that the degree of dust settling varies with radius in the HD\,163296 disk. The $\Lambda$ value for the D86 region is unconstrained due to the degeneracy between $\Lambda$ and the depth of surface density drops. Based on the constrained gas-to-dust scale height ratio $\Lambda$, we estimate $\alpha_{\rm turb}/\rm{St}$ to be $2.3_{-0.9}^{+2.5}$ and $0.0038_{-0.0013}^{+0.02}$ for the B67 and B100 rings, respectively. These values are well consistent with those reported by Doi \& Kataoka \cite{Doi2021}, but differ from the numbers inferred by Dullemond et al. \cite{Dullemond2018} and Rosotti et al. \cite{Rosotti2020}. The discrepancy may be due to the fact that our modeling is sensitive to the turbulence responsible for the vertical stirring of dust grains, while the literature studies more likely reflect the turbulence responsible for the radial diffusion of dust grains or the turbulent motion of gas species.
We calculate the turbulence level to be $\alpha_{\rm turb}\,{<}\,3\times10^{-3}$ for the D48 and B100 regions, which agrees well with the upper limits set by Boneberg et al. \cite{Boneberg2016} and Flaherty et al. \cite{Flaherty2017} from analyzing the width of gas lines. According to our analysis, the B67 ring has a strong turbulence strength of $\alpha_{\rm turb}\,{\sim}\,1.2\,{\times}\,10^{-2}$. Future multi-wavelength continuum observations with spatial resolution comparable to the DSHARP data are required to better constrain the degree of dust settling, and therefore the scale height of dust grains of different sizes. Higher resolution observations of multiple gas lines are pivotal to directly measure the turbulent motions, and to confirm whether the strong turbulence in the local region of B67 inferred from our analysis is also seen with gas tracers. \Acknowledgements{We thank the anonymous referees for their constructive comments that greatly improved the manuscript. YL acknowledges the financial support of the Natural Science Foundation of China (Grant No. 11973090), and the science research grants from the China Manned Space Project with No. CMS-CSST-2021-B06. GHMB and MF acknowledge funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement No. 757957). GR acknowledges support from the Netherlands Organisation for Scientific Research (NWO, program number 016.Veni.192.233) and from an STFC Ernest Rutherford Fellowship (grant number ST/T003855/1). We thank Tilman Birnstiel, Guo Chen, Ke Zhang and Richard Teague for insightful discussions. We acknowledge the DSHARP team for making the calibrated CASA measurement sets, fiducial images, and the scripts used for calibration and image cleaning available to the public. ALMA is a partnership of ESO (representing its member states), NSF and NINS, together with NRC, MOST and ASIAA, and KASI, in cooperation with the Republic of Chile.
The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ.} \InterestConflict{The authors declare that they have no conflict of interest.} \bibliographystyle{scichina}
\section{Introduction} \label{intro} Heterogeneous computing, where devices with different architectures and performance characteristics, such as CPUs, GPUs and accelerators, are combined, is growing in popularity. These systems promise increased performance combined with reduced power consumption, but their increasing complexity and diversity result in programmability issues, such as poor \emph{performance portability}. Standards like OpenCL offer functional portability for heterogeneous systems. However, performance portability, that is, making code achieve good performance when executed unaltered on different devices, remains problematic. This is particularly true for highly different devices, such as a CPU and a GPU, but also for two GPUs from the same vendor \cite{FALCH2}. Auto-tuning can potentially improve the situation. In this setting, auto-tuning involves automatically evaluating different candidate implementations and selecting the best one for a given device. Thus, empirical data is used to find the best code, rather than relying on potentially faulty and incomplete programmer intuition or compiler machine models. Performance portability is improved, since porting to a new device simply requires the auto-tuning to be re-done. While a high number of candidate implementations can lead to prohibitive search times, analytical performance models, or techniques based on machine learning, as proposed in our previous work \cite{FALCH2}, can be used to speed up the search. For this kind of auto-tuning to be truly automatic, it must include the generation of candidate implementations. Writing these manually is tedious, time consuming, and error prone. In some cases, simply changing the value of performance critical variables, such as the work-group size in OpenCL, is enough to generate what effectively becomes different implementations. However, many important optimizations require more substantial code changes.
For instance, choosing between image and local memory in OpenCL effectively requires two versions of the source code. We propose using high level languages and source-to-source compilers to automatically generate candidate implementations with potentially widely different source code, thus enabling an end-to-end auto-tuning approach to overcome the heterogeneous performance portability problem. In this paper, we present the ImageCL language and its source-to-source compiler. ImageCL programs resemble simplified versions of OpenCL kernels. Algorithms written in ImageCL are analysed by our compiler, and different optimizations are applied to generate a large number of different candidate implementations, covering an extensive optimization space. We then use the machine learning based auto-tuner from our previous work \cite{FALCH2} to pick a good implementation for a given device. An algorithm can thus be written once, and our auto-tuning compiler tool-chain can be used to generate high performing implementations for any device supporting OpenCL, thereby improving performance portability. Image processing is an increasingly important domain, with applications ranging from medicine and seismology to Photoshop. ImageCL has been designed to work with the FAST \cite{SMISTAD2} framework. FAST allows the user to connect filters to form image processing pipelines, which can be executed on heterogeneous systems. ImageCL can be used to write a single such filter, which can be retuned to achieve high performance if scheduled on different devices. While ImageCL can be considered a domain specific language for image processing, it retains the generality of OpenCL. This paper is structured as follows: The next section provides background information, while Section~\ref{related} reviews related work. Section~\ref{compiler} presents a high level overview of how our approach can be used to achieve performance portability.
In Section~\ref{language}, the ImageCL language and the implementation of the source-to-source compiler are described. Results are presented in Section~\ref{results}, and discussed in Section~\ref{discussion}. Finally, Section~\ref{conclusion} concludes and outlines possible future work. \section{Background} \label{background} In this section, we provide background information on heterogeneous computing, OpenCL, and the FAST framework, which our language has been designed to work with. \subsection{GPU Computing and OpenCL} As several of our optimizations target GPU specific features, we review the GPU architecture here. For details, see e.g. \cite{SMISTAD}. Modern GPUs are built up of large numbers of processing elements, which are combined into compute units\footnote{Here we use OpenCL terminology. Nvidia uses the terms CUDA cores and streaming multiprocessors, AMD stream processors and compute units.}. The processing elements of a compute unit work in a SIMD fashion, executing instructions in lock step. Discrete GPUs have large, relatively slow, DRAM memories (separate from the system's main memory) known as global memory. On newer GPUs, global memory is often cached. In addition, they have fast, on-chip, scratch-pad memory, which can be used as a user managed cache. Furthermore, they have texture memory, which is cached and optimized for access patterns with 2D and 3D spatial locality, as well as constant memory, which is read-only and designed for high performance when accessed by many threads concurrently. OpenCL is emerging as a standard for heterogeneous computing, and is supported by many major hardware vendors. OpenCL code is divided into host code and kernels. The host code sets up and launches kernels on a device, such as a GPU or the same CPU the host code is running on. Kernels are executed in parallel by multiple threads known as work-items, which are organized into work-groups.
On a GPU, work-groups are mapped to compute units and work-items to processing elements, while on a CPU they are mapped to the CPU cores. OpenCL has several logical memory spaces: local memory (mapped to the fast on-chip memory on GPUs), image memory (mapped to the GPU texture memory), and constant memory (mapped to the hardware constant memory on GPUs). On the CPU, these memory spaces are all typically mapped to main memory. \subsection{Image Processing and FAST} \label{fast} FAST \cite{SMISTAD2} is a recent framework that allows the user to create image processing applications by connecting together pre-implemented filters to form a pipeline. Each filter takes one or more images as input, and produces one or more images as output. The filters are written in OpenCL for GPUs or C++ for CPUs, and can in principle provide multiple implementations for different devices. If executed on a system with multiple devices such as GPUs and CPUs, each filter in the pipeline can be scheduled to run on any of the available devices, with memory transfers handled automatically, thus taking full advantage of the heterogeneous system to achieve good performance. FAST makes it easy to write heterogeneous image processing applications from existing filters, but writing these filters is challenging due to the performance portability problem. Each filter may be executed on different devices depending upon the machine it runs on and the pipeline it is a part of, and must therefore often provide multiple implementations tuned for different devices to ensure optimal performance on all of them. \section{Related work} \label{related} Auto-tuning is an established technique, used successfully in high performance libraries like FFTW \cite{FFTW} for FFTs and ATLAS \cite{ATLAS} for linear algebra, as well as for bit-reversal \cite{ELSTER}.
Methods to reduce the search effort of auto-tuning, such as analytical models \cite{YOTOV} or machine learning \cite{BERGSTRA}, have been developed. Poor OpenCL performance portability has been the subject of many works. Zhang et al. \cite{ZHANG} identified important tuning parameters greatly affecting performance. Pennycook et al. \cite{PENNYCOOK} attempted to find application settings that would achieve good performance across different devices. Auto-tuning approaches have also been proposed in \cite{FALCH2,NUGTEREN}, but required the OpenCL code to be manually parameterized. Directive based approaches, including OpenMP 4.0 and OpenACC, take the level of abstraction even higher, and allow users to annotate C code with directives to offload parts to accelerators or GPUs. In contrast, our work is focused on making it simpler to write the code for a single kernel, using an implicitly data parallel language, rather than offloading and parallelizing serial CPU code. Furthermore, our approach is better suited for integration with frameworks such as FAST, which only requires the kernels to be written. High-level and domain-specific languages (DSLs) have a long history \cite{DEURSEN}. While primarily designed to ease programming, the domain specific knowledge they embody can also be used to generate optimized code. For example, the Delite \cite{SUJEETH} framework has been used to develop performance oriented DSLs for multi-core CPUs and GPUs, such as OptiML for machine learning and OptiGraph for graph processing. High performance DSLs for image processing have also been proposed, and many of these works resemble our own. Halide \cite{RAGAN2013} is a DSL embedded in C++, particularly targeting graphs of stencil operations for image processing. Halide separates the algorithm, specified in a purely functional manner, from the \emph{schedule}, which specifies how the calculations should be carried out, including tiling, parallelization, and vectorization.
Optimization is done by changing the schedule, without modifying the algorithm or affecting its correctness. Schedules can be hand-tuned, or auto-tuned using stochastic search. GPUs can be targeted, but important GPU optimizations, such as using specific memories, are hard or impossible to express. HIPACC \cite{MEMBARTH2016} is another DSL for image processing, also embedded in C++, but with a more traditional, imperative approach than Halide, and a larger focus on single filters rather than pipelines. A source-to-source compiler can generate code for different back-ends, including OpenCL, CUDA and C++. Domain specific knowledge, as well as data gathered from analysing the input algorithm, is combined with an architecture model to generate optimized code. Optimizations that can be applied include memory layout, use of the memory hierarchy, thread coarsening, and efficient handling of boundary conditions. A heuristic is used to determine work-group sizes. PolyMage \cite{MULLAPUDI}, like Halide, focuses on complete image processing pipelines, with a functional style. It uses a model driven, optimizing compiler that only targets multi-core CPUs. There is also a body of work on transforming naive, simplified CUDA or OpenCL kernels into optimized kernels using optimizations related to e.g. the memory hierarchy, memory coalescing, data sharing and thread coarsening \cite{UENG, LIN, YANG}. While bearing resemblance to our work, none of these have features specifically suited for image processing, combine all the optimizations we apply, or rely on auto-tuning. Combining code generation or source-to-source compilers with auto-tuners has also been explored. Khan et al. \cite{KHAN} used a script based auto-tuning compiler to translate serial C loop nests to CUDA. Du et al. \cite{DU} proposed combining a code generator for linear algebra kernels with an auto-tuner to achieve performance portability.
The PATUS \cite{CHRISTEN} framework can generate and auto-tune code for stencil computations for heterogeneous hardware, using a separate specification for the stencil and the computation strategy, similar to Halide. It lacks the general purpose capabilities of our work, and does not support all our optimizations. \section{ImageCL and Auto-Tuning} \label{compiler} Starting with a description of the algorithm in ImageCL, our source-to-source compiler generates multiple candidate implementations in OpenCL, each with a different set of optimizations applied. The auto-tuner then picks the best implementation for a given device. One can thus easily generate multiple, high-performing versions for different devices from a single source, achieving greater performance portability. This is illustrated in Figure~\ref{overview}. \begin{figure}[htbp] \centering \includegraphics[width=0.45\textwidth]{overview.pdf} \caption{Overview of how the ImageCL source-to-source compiler would work with an auto-tuner.} \label{overview} \end{figure} A more detailed view of how the source-to-source compiler works together with the auto-tuner to find the best implementation for a given device can be found in Figure~\ref{overview_detail}. Initially, the ImageCL code is analyzed to find the potential optimizations, that is, the tuning parameters and their possible values. Next, the auto-tuner explores the parameter space by selecting particular values for the tuning parameters, generating the corresponding OpenCL code with the source-to-source compiler, compiling it with the device compiler, and executing and timing it on the relevant device. The procedure is repeated, using some search method, until the auto-tuner arrives at what it believes to be the best parameter values. Finally, the source-to-source compiler uses these parameter values to generate the final implementation.
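The explore-generate-time loop described above can be sketched as follows. This is an illustrative Python sketch, not part of the actual tool-chain: all names are hypothetical, and a synthetic cost function stands in for the real generate-compile-execute-and-time step.

```python
import itertools

def measure(wg_x, wg_y, coarsen):
    # Stand-in for generating, compiling, executing and timing one
    # candidate implementation; a synthetic cost model is used here,
    # mildly favouring work-group sizes that are multiples of 64.
    waste = 1.0 if (wg_x * wg_y) % 64 == 0 else 1.5
    return waste * (100.0 / (wg_x * wg_y)) + 0.1 * coarsen

def tune(wg_sizes=(4, 8, 16, 32), coarsen_factors=(1, 2, 4)):
    # Exhaustive search over a small space; the real auto-tuner instead
    # samples configurations and prunes the space with a learned
    # performance model before timing the most promising candidates.
    return min(itertools.product(wg_sizes, wg_sizes, coarsen_factors),
               key=lambda cfg: measure(*cfg))
```

With this synthetic model, `tune()` settles on a 32x32 work-group with no coarsening; with real measurements, the same loop would return the device-specific optimum.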
\begin{figure}[htbp] \centering \includegraphics[width=0.45\textwidth]{overview_detailed_alt.pdf} \caption{Detailed view of the source-to-source compiler - auto-tuner interface.} \label{overview_detail} \end{figure} An analytical performance model or expert programmer knowledge could be used to determine parameter values directly. However, we believe the complexity and rapid development of the hardware makes auto-tuning a more robust option, and therefore intend our source-to-source compiler to be used with an auto-tuner. While any general purpose auto-tuning framework can be used, our compiler was designed with the auto-tuner of our previous work \cite{FALCH2} in mind. Our auto-tuner uses a machine learning based performance model to find promising parameter configurations, to speed up the search. In particular, the auto-tuner will execute the code of several randomly selected parameter configurations and record the execution times. This data is then used to build an artificial neural network performance model, which can predict the execution time of unseen configurations. The model is then used to predict the execution time of all possible configurations, which can be done even for large search spaces, since model evaluation is cheap. In a second step, some of the configurations with the best predicted execution times are executed, and the configuration with the best actual execution time among these is returned by the auto-tuner. We generate OpenCL since it is supported by multiple vendors and targets a broad range of devices. Compared to generating device specific assembly, it reduces the engineering effort, and allows us to leverage optimizations performed by the device OpenCL compilers. \section{The ImageCL Language} \label{language} The ImageCL programming language is designed to make it easy to write image processing kernels for heterogeneous hardware, and to be used together with the FAST framework.
An example of the language is shown in Listing~\ref{blur}, which implements a simple 3x3 box filter for blurring. While having features specifically designed for image processing, it can also be viewed as a simplified form of OpenCL rather than an image processing DSL. With the exception of some restrictions outlined below, ImageCL programs can contain arbitrary code with a syntax identical to OpenCL C. It can therefore be used to write general purpose programs, and can work in a standalone fashion without FAST. \begin{lstlisting}[caption=Box filter in ImageCL, label=blur, frame=single] #pragma imcl grid(input) void blur(Image<float> in, Image<float> out){ float sum = 0.0; for(int i = -1; i < 2; i++){ for(int j = -1; j < 2; j++){ sum += in[idx + i][idy + j]; } } out[idx][idy] = sum/9.0; } \end{lstlisting} ImageCL is based on the same programming models as OpenCL, but makes two major simplifications, as well as a number of other changes. Firstly, in OpenCL, the programmer specifies a two-level thread hierarchy. This concept is abstracted away in ImageCL, and replaced with a flat thread space. In particular, the user specifies an \texttt{Image} (described below), and a grid of logical threads, with the same size and dimensionality as the \texttt{Image}, is created. The kernel is the work performed by one such logical thread, and it is intended to work on its corresponding pixel, although this is not a requirement. The built-in variables \texttt{idx} and \texttt{idy} store the index of the thread, and can be used to index the thread-grid defining \texttt{Image} to find the pixel of the thread. If \texttt{Image}s are not used, the size of the logical thread grid can be specified manually. Secondly, OpenCL has a complex memory hierarchy, with multiple different logical memory spaces. These are typically mapped to different hardware memory spaces with different performance characteristics, as described in Section~\ref{background}.
This memory hierarchy is also abstracted away in ImageCL: the programmer only deals with a single flat address space. In addition, ImageCL includes the \texttt{Image} data type, which is intended to store an image, supports 2D/3D indexing, as shown in Listing~\ref{blur}, and can be templated with the pixel type. One can also specify different boundary conditions, making it possible to read outside an \texttt{Image} with well defined results, a situation which frequently arises in stencil algorithms. We currently support constant and clamped boundary conditions, illustrated in Figure~\ref{border}. \texttt{Image} comes in addition to, and does not replace, the data types supported by OpenCL, such as regular arrays. \begin{figure}[h] \centering \begin{subfigure}[htpb]{0.3\textwidth} \includegraphics[width=\textwidth]{border_clamped.pdf} \caption{Clamped: values outside are set to that of the closest pixel inside the image.} \label{border_clamped} \end{subfigure} \quad \begin{subfigure}[htpb]{0.3\textwidth} \includegraphics[width=\textwidth]{border_const.pdf} \caption{Constant: values outside the image are set to some constant, e.g. 0.} \label{border_const} \end{subfigure} \caption{Different boundary conditions.} \label{border} \end{figure} The ImageCL language also includes a small number of compiler directives. The most important is the already described \texttt{grid} directive, which can be used to determine which \texttt{Image} to base the thread-grid on, as shown in Listing~\ref{blur}, or to set the size of the grid directly when no \texttt{Image}s are used. Other directives specify boundary conditions, upper bounds on the sizes of arrays, and force optimizations on or off. The present features of ImageCL are rich enough to express a wide range of parallel image processing algorithms. However, some features, required for more complex algorithms, are planned for future versions.
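The semantics of the two boundary conditions can be stated precisely with a small scalar sketch (Python for brevity; the compiler of course emits OpenCL, and these helper names are hypothetical):

```python
def read_clamped(img, x, y):
    # Clamped: out-of-range indices are clamped to the nearest valid
    # pixel, so reads outside the image return the closest border pixel.
    h, w = len(img), len(img[0])
    return img[min(max(y, 0), h - 1)][min(max(x, 0), w - 1)]

def read_constant(img, x, y, value=0):
    # Constant: reads outside the image return a fixed value, e.g. 0.
    h, w = len(img), len(img[0])
    if 0 <= x < w and 0 <= y < h:
        return img[y][x]
    return value
```

For example, reading at (-1, 0) returns the (0, 0) pixel under the clamped condition, but the constant value under the constant condition.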
Specifically, there are presently no synchronization or communication primitives (global barriers can still be achieved by returning control to the host). Furthermore, the kernel must be written as a single function. \subsection{Implementation} Our ImageCL source-to-source compiler is implemented using the ROSE \cite{QUINLAN11} compiler framework. Before being given to the compiler proper, a few typedefs and declarations are added to the source code (e.g. declaring \texttt{idx} and \texttt{idy}, declaring an \texttt{Image} class), necessary to make it valid C++, which ROSE can handle directly. ROSE can also generate OpenCL. Our compiler can either analyze the input code to find possible tuning parameters, or read values for the tuning parameters and apply the relevant transformations to generate OpenCL. The analysis examines the structure of the abstract syntax tree (AST) and performs various data-flow analyses. The transformations are applied by modifying the AST. Since the ImageCL programming model is so close to that of OpenCL, generating naive, unoptimized OpenCL is straightforward. It involves replacing \texttt{idx} and \texttt{idy} with thread index calculations, converting \texttt{Image}s to 1D arrays as well as updating their index calculations, and adding code to implement the boundary conditions. Finally, OpenCL keywords like \texttt{\_\_kernel} and \texttt{\_\_global} must be added. In addition to the kernel code itself, we also generate host code to launch the kernel. We can either generate host code which can be used as a filter in FAST, or as a standalone function, callable from any C/C++ application. \subsection{Tuning Parameters} Our tuning parameters are summarized in Table~\ref{params}, and will be described here. \begin{table}[h] \caption{Tuning parameters.} \centering \begin{tabularx}{0.95\textwidth}{|l|X|} \hline \textbf{Parameter} & \textbf{Description} \\ \hline Work-group size & The size of a work-group in each dimension.
\\\hline Thread coarsening & The number of logical threads processed by each real thread in each dimension. \\\hline Image memory & Whether or not to use image memory. One parameter for each applicable array. \\ \hline Constant memory & Whether or not to use constant memory. One parameter for each applicable array. \\ \hline Local memory & Whether or not to use local memory. One parameter for each applicable array. \\ \hline Thread mapping & Whether to use blocked or interleaved thread mapping. \\ \hline Loop unrolling & Loop unroll factor for each applicable loop. \\ \hline \end{tabularx} \label{params} \end{table} Some of the optimizations, in particular the local memory optimization, are motivated by computational patterns frequently occurring in image processing, such as stencil computations. They will therefore be most applicable, and have the largest impact, for such applications. Furthermore, optimizations involving the OpenCL memory hierarchy may have little or no effect on CPUs, where this hierarchy is placed entirely in main memory. A capable auto-tuner will be able to handle parameters that have no effect. \subsubsection{Work-Group Size} As described above, ImageCL has a flat thread space, which must be mapped to the two level thread hierarchy in OpenCL. The size and shape of the work-groups can have a significant impact on performance \cite{FALCH2}, and are therefore added as tuning parameters. We add a parameter for the size in each dimension; the dimensionality is the same as that of the \texttt{Image} upon which the thread-grid is based. \subsubsection{Thread Coarsening} ImageCL has one logical thread for each pixel of the thread-grid defining \texttt{Image}. While good performance on GPUs requires thousands of threads to keep the device busy, using one thread per pixel can lead to millions of threads, many more than required.
It may therefore be beneficial to perform \emph{thread coarsening} \cite{MAGNI13}, that is, to let each thread perform more work, while reducing the total number of threads. In ImageCL, logical threads can be merged so that each real thread (that is, each OpenCL work-item) processes a block of pixels. The size of this block in each dimension then becomes a tuning parameter. Not only the amount of work, but also the shape of the block matters, as it can affect the memory access pattern. Presently, thread coarsening is implemented by wrapping the kernel in for-loops, thus executing it multiple times. \subsubsection{Thread Mapping} When a single real thread (i.e., OpenCL work-item) is used to process multiple logical threads, as described above, there are several ways to distribute the logical threads. Because the logical threads typically work on their corresponding pixel of the thread-grid defining \texttt{Image}, this mapping can impact memory access patterns, and thereby performance. Since the logical threads are organized in an \emph{n}-dimensional grid, a contiguous block of logical threads can be assigned to each real thread, as illustrated in Figure~\ref{mem_access_blocking}. While this might give good memory locality, since the pixels of the logical threads are close, it results in poor \emph{coalescing} on GPUs. Memory transactions made by different threads on GPUs can be merged, or coalesced, into fewer transactions if the accesses are close together. It might therefore be better to interleave the logical threads processed, as shown in Figure~\ref{mem_access_interleaved}. Then, if the real threads access the pixels of their logical threads sequentially, they will access a contiguous block of pixels, leading to coalesced loads. Because of this potential performance impact, we add whether to use blocked or interleaved thread assignment as a tuning parameter. The implementation of this parameter simply requires different indexing calculations.
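For a single dimension, the two thread mappings amount to the following index calculations (an illustrative Python sketch with hypothetical names, not the emitted OpenCL):

```python
def logical_ids(real_id, coarsen, n_real, interleaved):
    # Logical thread indices handled by one real thread in one dimension.
    # Blocked: a contiguous run of `coarsen` pixels per real thread.
    # Interleaved: a stride of n_real, so at each step neighbouring real
    # threads touch adjacent pixels, giving coalesced loads on GPUs.
    if interleaved:
        return [real_id + k * n_real for k in range(coarsen)]
    return [real_id * coarsen + k for k in range(coarsen)]
```

With a coarsening factor of 4 and 8 real threads, real threads 0 and 1 process [0, 1, 2, 3] and [4, 5, 6, 7] under blocking, but [0, 8, 16, 24] and [1, 9, 17, 25] when interleaved, so their k-th accesses are adjacent in memory.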
\begin{figure}[h] \begin{subfigure}[htpb]{0.3\textwidth} \includegraphics[width=\textwidth]{mem_access_1.pdf} \caption{Blocking} \label{mem_access_blocking} \end{subfigure} \hfill \begin{subfigure}[htpb]{0.3\textwidth} \includegraphics[width=\textwidth]{mem_access_2.pdf} \caption{Interleaved} \label{mem_access_interleaved} \end{subfigure} \centering \begin{subfigure}[htpb]{0.3\textwidth} \includegraphics[width=\textwidth]{mem_access_3.pdf} \caption{Interleaved in work-group} \label{mem_access_interleaved_wg} \end{subfigure} \caption{Thread mappings. Cells correspond to logical threads, colors to real threads, and patterns to different work-groups.} \end{figure} The blocking scheme works well combined with the local memory optimization described below, since the pixels processed by a work-group still form one contiguous block. This is not the case for the interleaved scheme. When the local memory optimization is used, the interleaving is therefore performed within each work-group, as illustrated in Figure~\ref{mem_access_interleaved_wg}. \subsubsection{Memory Spaces} In ImageCL, there is only a single address space, by default mapped to OpenCL global memory. However, using other OpenCL memory spaces can often affect performance, as described in Section~\ref{background}. We therefore include whether to place data in other memory spaces as tuning parameters. The data under consideration are either \texttt{Image}s or general arrays. In the following, we will refer to both as arrays, unless the distinction is necessary. \paragraph{Image Memory} can be used in either a read-only or write-only manner. In ImageCL, we disallow aliasing. We can therefore determine if an array is only read from, or only written to, by looking at every reference to the array and determining whether it is a read or a write.
To place an array in image memory, we simply change the type declaration, replace the references with the relevant image memory read or write functions, and change the host memory allocation. \paragraph{Constant Memory} can only be used in a read-only manner, and has a limited size. To determine if an array can be placed in constant memory, we therefore check if it is only read from, as described above, and whether its size is below some threshold. If the size of the array cannot be determined at compile time, but is known to always be sufficiently small, a compiler directive can be used to specify this. To place an array in constant memory, we simply add the relevant address space qualifier. \paragraph{Local Memory} can be used if an \texttt{Image} is only read from and each thread reads from a fixed-size neighbourhood, or stencil, around its pixel, the size of which can be determined at compile time. Using local memory in this case can increase performance because the areas read by neighbouring threads will overlap. Loading the data once into local memory, and then having different threads access it there multiple times, may therefore be beneficial. To determine the size of the stencil, that is, the area around its central pixel a thread reads from, we find all the relevant \texttt{Image} references, and make sure they have the form \texttt{image[idx + c1][idy + c2]}. We then use constant propagation to determine the values of \texttt{c1} and \texttt{c2}. Often, \texttt{c1} and \texttt{c2} are not constants, but depend on the iteration variable of for-loops with a fixed range, as in Listing~\ref{blur}. In such cases, we use a modified version of constant propagation where we allow each variable to take on a small set of constant values. If the values of \texttt{c1} or \texttt{c2} cannot be determined at compile time, the analysis fails, and local memory is not used.
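The offset analysis just described, and the size of the local memory tile it implies for a work-group, can be sketched as follows (illustrative Python with hypothetical names; the actual analysis operates on the AST):

```python
import math

def stencil_bbox(offsets):
    # offsets: (c1, c2) pairs recovered by constant propagation from
    # references of the form image[idx + c1][idy + c2].
    xs = [c1 for c1, _ in offsets]
    ys = [c2 for _, c2 in offsets]
    return (min(xs), max(xs)), (min(ys), max(ys))

def local_tile(wg, bbox):
    # Local-memory tile a work-group must stage: its own pixels plus the
    # bounding box of the stencil extended around them.
    (x_lo, x_hi), (y_lo, y_hi) = bbox
    return (wg[0] - x_lo + x_hi, wg[1] - y_lo + y_hi)

def loads_per_thread(wg, bbox):
    # Cooperative loads each work-item must issue to fill the tile.
    tile = local_tile(wg, bbox)
    return math.ceil(math.prod(tile) / math.prod(wg))
```

For the 3x3 box filter of Listing~\ref{blur} and a 16x16 work-group, the bounding box is ((-1, 1), (-1, 1)), the tile is 18x18 pixels, and each thread performs at most two cooperative loads.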
If the local memory optimization is applied, each work-group initially cooperatively loads the required part of the array into local memory. The required part is the area covered when moving the center of the bounding box of the stencil over all the pixels belonging to the threads of the work-group, as illustrated in Figure~\ref{local_mem}. We use the bounding box for simplicity, although this may cause unnecessary loads. We then replace each load from global memory with a load from local memory, updating the indexing appropriately. \begin{figure}[htbp] \centering \includegraphics[width=0.55\textwidth]{local_mem.pdf} \caption{Local memory transformation. The light blue area shows the pixels of a work-group, while the green/hatched pixels show the stencil of a single thread. The dark blue area shows the additional data that must be loaded into local memory.} \label{local_mem} \end{figure} In general, parts of the stencil might fall outside the \texttt{Image} being read from, and it might also be the case that the \texttt{Image} read from is smaller than the thread-grid. Since we restrict this transformation to \texttt{Image}s, their boundary conditions ensure that well defined values can be returned in these cases. Since we only consider cases where \texttt{idx} and \texttt{idy} are not multiplied, divided, taken modulo, etc., there is a well defined mapping from the logical threads of the thread-grid to the pixels of an array of any size, ensuring that the real threads of a work-group work on a contiguous area, which can easily be computed. The read-only requirement is needed to ensure correctness, since we presently do not have any synchronization primitives. \subsubsection{Loop Unrolling} Loop unrolling involves replacing the loop body with multiple copies of itself, while adjusting the number of iterations accordingly. As this is well known to impact performance, we add the loop unrolling factor as a tuning parameter.
\section{Results} \label{results} To evaluate ImageCL, we implemented three image processing benchmarks, and compared performance with other state-of-the-art solutions. To evaluate performance portability, we tested on a range of different hardware devices. \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{combined.pdf} \caption{Slowdown compared to ImageCL, computed independently for each benchmark/device combination.} \label{exectime} \end{figure*} The benchmarks are separable convolution, non-separable convolution and Harris corner detection. Convolution was chosen because it is extensively used in image processing, e.g. for the ubiquitous Gaussian blurring. It also serves as a proxy for other stencil computations. Harris corner detection was chosen as an example of a more complex algorithm. For evaluation, we used three different GPUs, an Nvidia GeForce GTX 960, an Nvidia Tesla K40 and an AMD Radeon HD 7970, as well as an Intel i7 4771 CPU. We compared our performance against Halide \cite{RAGAN2013}, HIPACC \cite{MEMBARTH2016} and OpenCV \cite{OPENCV}. Halide and HIPACC, described in Section~\ref{background}, are domain specific languages for image processing, and can generate high performance CPU and GPU implementations. OpenCV is a widely used library for image processing. It is highly optimized, and contains implementations for both CPUs and GPUs. We used OpenCV 3.0, the 2015/10/22 release of Halide and HIPACC 0.8.1. HIPACC allows the target device and architecture to be specified, so that appropriate optimizations can be applied. The version of HIPACC we used does not support the AMD 7970. We therefore used the latest generation of AMD devices it supports instead. While Halide claims that its code can be auto-tuned, no auto-tuner or auto-tuning interface is distributed with the source code. We therefore performed extensive manual tuning.
As described in Section~\ref{related}, Halide code is divided into a functional description of the algorithm, and a schedule, describing parallelization, tiling, vectorization, etc. The manual tuning was therefore carried out by systematically trying out different possible Halide schedules for each device/benchmark combination. The ImageCL implementations were auto-tuned with the machine learning based auto-tuner from our previous work \cite{FALCH2}, which is described in Section~\ref{compiler}. Both HIPACC and Halide can generate either OpenCL or CUDA when targeting Nvidia devices. For HIPACC, we used the CUDA version, and for Halide the OpenCL version, since these performed best. Similarly, HIPACC can generate OpenCL and C++ code for the CPU. Since the OpenCL versions performed better, we report those results. Both Halide and HIPACC can generate specialised code if the values of the filters are known at code generation time. We evaluated both options: for the separable convolution, the filter values are known at code generation time, while for the non-separable convolution they are only known at run time. The execution times reported do not include CPU-GPU memory transfers, preparing inputs, etc., as this will be the same for all alternatives. Due to time constraints, we only compare against OpenCV for the Harris corner detection. Figure~\ref{exectime} shows the results. For separable convolution, we used a 4096x4096 image with pixels of type \texttt{float}, a 5x5 filter, and constant boundary condition. ImageCL was faster than the alternatives on the GPUs, achieving speedups between 1.06 and 2.25, with the sole exception of Halide on the GTX 960, which was 9.1\% faster than ImageCL. On the CPU, ImageCL was 1.05 and 1.11 times slower than Halide and OpenCV, respectively, but was 1.23 times faster than HIPACC. For non-separable convolution, we used an 8192x8192 image with pixels of type \texttt{unsigned char}, a 5x5 filter, and clamped boundary condition.
ImageCL was faster than the alternatives on the GPUs, achieving speedups between 1.17 and 2.82, with the sole exception of OpenCV on the AMD 7970, which was 43.4\% faster than ImageCL. On the CPU, ImageCL performed worst, 1.06 times slower than OpenCV and 4.24 times slower than Halide. For Harris corner detection, we used a 5120x5120 image with pixels of type \texttt{float}, and a block size of 2x2. ImageCL significantly outperformed OpenCV on the AMD 7970, K40 and Intel i7, achieving speedups of 3.15, 2.11 and 4.57 respectively. On the GTX 960, the performance was more similar, with ImageCL achieving a speedup of 1.08. Table~\ref{bestconfigs} shows the tuning parameters and the values found by the auto-tuner for the separable convolution. Tables~\ref{bestconfigs_nonsep}, \ref{bestconfigs_sobel} and \ref{bestconfigs_harris} show the same data for the non-separable convolution, and the Sobel and Harris kernels of the Harris corner detection, respectively. \begin{table}[h] \centering \caption{Configurations found by auto-tuner for the row (R) and column (C) kernels of the separable convolution.} \label{bestconfigs} \begin{tabular}{|l|l|l|l|l|l|l|l|l|} \hline \textbf{Device} &\multicolumn{2}{l|}{\textbf{AMD 7970}} & \multicolumn{2}{l|}{\textbf{GTX 960}} & \multicolumn{2}{l|}{\textbf{K40}} & \multicolumn{2}{l|}{\textbf{Intel i7}} \\ \hline \textbf{Kernel} & R & C & R & C & R & C & R & C \\ \hline Px./thread X & 4 & 2 & 1 & 1 & 2 & 2 & 128 & 32 \\ \hline Px./thread Y & 1 & 2 & 1 & 2 & 1 & 2 & 1 & 1 \\ \hline Work-group X & 64 & 16 & 16 & 64 & 16 & 16 & 8 & 16 \\ \hline Work-group Y & 4 & 16 & 16 & 4 & 16 & 16 & 1 & 2 \\ \hline Interleaved & 1 & 1 & 0 & 0 & 0 & 0 & 1 & 1 \\ \hline Image mem. & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 0 \\ \hline Local mem. & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ \hline Constant mem.
& 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ \hline Unroll loop 1 & 0 & 1 & 0 & 0 & 1 & 0 & 1 & 0 \\ \hline \end{tabular} \end{table} \begin{table}[h] \centering \caption{Configurations found by auto-tuner for the non-separable convolution.} \label{bestconfigs_nonsep} \begin{tabular}{|l|l|l|l|l|} \hline \textbf{Device} & \textbf{AMD 7970} & \textbf{GTX 960} & \textbf{K40} & \textbf{Intel i7} \\ \hline Px/thread X & 4 & 4 & 4 & 256 \\ \hline Px/thread Y & 16 & 4 & 8 & 2 \\ \hline Work-group X & 64 & 8 & 32 & 2 \\ \hline Work-group Y & 4 & 32 & 4 & 8 \\ \hline Interleaved & 0 & 1 & 0 & 1 \\ \hline Image mem & 0 & 0 & 1 & 0 \\ \hline Local mem & 0 & 1 & 0 & 0 \\ \hline Constant mem & 1 & 1 & 1 & 1 \\ \hline Unroll loop 1 & 1 & 1 & 1 & 1 \\ \hline Unroll loop 2 & 0 & 1 & 1 & 1 \\ \hline \end{tabular} \end{table} \begin{table}[h] \centering \caption{Configurations found by auto-tuner for the Sobel kernel of the Harris corner detection.} \label{bestconfigs_sobel} \begin{tabular}{|l|l|l|l|l|} \hline \textbf{Device} & \textbf{AMD 7970} & \textbf{GTX 960} & \textbf{K40} & \textbf{Intel i7} \\ \hline Px/thread X & 1 & 4 & 1 & 32 \\ \hline Px/thread Y & 1 & 2 & 4 & 4 \\ \hline Work-group X & 128 & 32 & 32 & 64 \\ \hline Work-group Y & 1 & 2 & 4 & 1 \\ \hline Interleaved & 0 & 1 & 0 & 0 \\ \hline Image mem & 0 & 0 & 1 & 0 \\ \hline Local mem & 0 & 1 & 0 & 0 \\ \hline \end{tabular} \end{table} \begin{table}[h] \centering \caption{Configurations found by auto-tuner for the Harris kernel of the Harris corner detection.} \label{bestconfigs_harris} \begin{tabular}{|l|l|l|l|l|} \hline \textbf{Device} & \textbf{AMD 7970} & \textbf{GTX 960} & \textbf{K40} & \textbf{Intel i7} \\ \hline Px/thread X & 1 & 1 & 32 & 8 \\ \hline Px/thread Y & 1 & 2 & 2 & 2 \\ \hline Work-group X & 32 & 128 & 64 & 32 \\ \hline Work-group Y & 8 & 2 & 16 & 2 \\ \hline Interleaved & 0 & 0 & 1 & 0 \\ \hline Image mem dx & 1 & 1 & 1 & 0 \\ \hline Image mem dy & 1 & 1 & 1 & 0 \\ \hline Local mem dx & 1 & 1 & 0 & 0 \\ \hline 
Local mem dy & 1 & 1 & 0 & 0 \\ \hline Loop 1 & 1 & 0 & 0 & 1 \\ \hline Loop 2 & 0 & 1 & 0 & 0 \\ \hline \end{tabular} \end{table} \section{Discussion} \label{discussion} ImageCL performs comparatively well because it is able to apply a wide range of optimizations, and use auto-tuning to pick the correct optimizations for a given device and benchmark. For example, the good performance compared to Halide on the K40 is caused in part by ImageCL using image memory, an optimization Halide does not expose, and which therefore cannot be applied even if the programmer suspects it might help. The varying and sometimes poor performance of OpenCV illustrates the problem of performance portability. It has separate implementations for the CPU and GPUs, a solution that requires extra work and scales poorly. For the GPUs, it is increasingly difficult to write a single implementation that performs well on all of them. ImageCL's use of auto-tuning represents a more robust and scalable solution. However, some optimizations cannot be applied in the current version of ImageCL. Halide achieves good performance, in particular on the GTX 960, by merging the two separable convolution kernels, caching the intermediate result in local memory. Due to the lack of synchronization and communication primitives, this optimization cannot currently be applied in ImageCL. We suspect that the poor results for non-separable convolution on the CPU are caused by a lack of vectorization. Our compiler currently does not perform vectorization, relying instead on the vectorization capabilities of the OpenCL runtime, which performed better for the separable convolution and corner detection than for the non-separable convolution. The poor performance for non-separable convolution on the CPU is also caused by the implementation of the clamped boundary condition. If the constant boundary condition is used, the execution time is reduced by a factor of 2.
As such, it is not a fundamental flaw of our approach, and using auto-tuning to pick different implementations of the boundary condition for different devices could overcome this issue. Despite good results, the time required for auto-tuning, even using our machine learning based auto-tuner \cite{FALCH2}, can be a significant drawback. For the cases presented, the auto-tuner executed around 1700 valid candidate implementations during its search for each device/benchmark combination. This required around 2 hours in total. For each candidate implementation, running the actual kernel takes milliseconds; additional time is required to prepare inputs and transfer data to and from the GPU. More significantly, each candidate implementation must be compiled by our compiler and the OpenCL compiler, which can take 1--3 seconds. In comparison, OpenCV and HIPACC have no tuning overhead, while the manual tuning for Halide required several hours. Because of this overhead, this kind of auto-tuning will be most useful for code that is executed many times, like library functions. The good results achieved here, as well as in our previous work \cite{FALCH2}, give us some confidence in our auto-tuner's abilities, but they do not give any guarantees about how close the solution found is to the globally best one, which can only be found by time-consuming exhaustive search. Auto-tuning might be more challenging for larger applications with more complex search spaces. \section{Conclusion and Future Work} \label{conclusion} The increasing popularity of heterogeneous computing has made performance portability, that is, having code execute with good performance when moved between different devices, harder to achieve. In this paper, we have introduced the ImageCL language and its source-to-source compiler. ImageCL is a simplified form of OpenCL with features for image processing, and is translated to optimized OpenCL.
Our source-to-source compiler can apply various optimizations to generate multiple candidate implementations. An auto-tuner can then pick the best implementation for a given device. We have evaluated the performance of ImageCL with three image processing benchmarks, and are able to outperform state-of-the-art solutions in several cases, achieving speedups of up to 4.57x. Our proposed solution therefore shows promise as a way to come closer to the goal of heterogeneous performance portability. Future work may include adding additional features, such as synchronization and communication primitives, and increased support for vectorization. Another possibility is developing a multi-device or distributed version using MPI. Finally, we intend to evaluate ImageCL with more diverse benchmarks, on a broader range of hardware, including CPUs and accelerators. \section{Acknowledgements} The authors would like to thank NTNU IME and NTNU MedTech for their support of the Medical Computing \& Visualization project, Nvidia's GPU Research Center program and NTNU for hardware donations to our NTNU/IDI HPC-Lab, and Drs. Malik M. Z. Khan and Erik Smistad for their helpful discussions. The authors would also like to thank Dr. Keshav Pingali and ICES at the University of Texas at Austin for hosting us during the work for this article. \bibliographystyle{plain}
\section{Introduction} Since muons are over 200 times more massive than electrons, they could be circularly accelerated to \tera\eV\ energies without the synchrotron-radiation energy losses that electrons suffer. At the same time, since, unlike hadrons, they are fundamental particles with no constituent partons, their collision energies can be known to comparatively low uncertainties. Muon colliders would therefore open new frontiers of investigation in high energy particle physics, allowing precision measurements to be made at \tera\eV\ energies. A wide range of physics can be studied at a \ensuremath{\muon^{+}}\ensuremath{\muon^{-}}\ collider \cite{Ankenbrandt:1999as,Barger:jk}: A sub-\tera\eV\ collider can scan for the $s$-channel resonance of the Higgs boson~\cite{Bar95} and the thresholds for the production of pairs of light beyond-standard-model particles \cite{Barger:rz,Lykken:qv,PhysRevLett.80.5489} with \mega\eV\ precision. A \tera\eV\ collider can search for the heavy particles of a new physics~\cite{Lykken:qv}. The high flux of muons at the front of the collider would allow for high-precision muon physics studies, such as searches for rare decays~($\ensuremath{\upmu}\rightarrow~e\gamma$,~$\ensuremath{\upmu}\rightarrow e$). In addition, high energy muons can be used for high $Q^2$ deep inelastic scattering with protons~\cite{schellman:166,cheung1998}. One of the greatest challenges to constructing a muon collider is the cooling of a beam of muons on a timescale comparable to the lifetime of the muon. Simulation of a muon collider front end utilizing frictional cooling~\cite{Abramowicz:2004} indicated that such a scheme is a viable option for producing high luminosity \ensuremath{\muon^{+}}\ and \ensuremath{\muon^{-}}\ beams. The Muon Collider group of the Max Planck Institute for Physics (\textsc{mpp}), Munich, is commissioning the Frictional Cooling Demonstration~(\textsc{fcd}) experiment to verify the working principles behind the scheme.
\section{Frictional Cooling} \begin{figure}[!b] \includegraphics[width=\textwidth]{dedx_teqp_bw.eps} \caption{Stopping power (solid curve) of helium on \ensuremath{\muon^{+}}. The horizontal dashed line shows the accelerating power of the restoring electric field for a particle of unit charge.\label{fig:dedx}} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[width=\columnwidth]{GasCellFull.eps} \caption{Scale diagram of the \textsc{fcd}\ cooling cell.\label{fig:fcdCell}} \end{center} \end{figure} Frictional cooling involves the balancing of energy loss to a moderator with energy gain from an electric field to bring a beam of charged particles to an equilibrium energy and reduce dispersion. It requires the beam be in an energy region where the stopping power of the moderator medium---the energy loss per unit path length normalized by the medium density, \ensuremath{(1/\rho)\,\IdTds}---increases with increasing kinetic energy $T$. There are two energy regions where this requirement is met~(\parfig~\ref{fig:dedx}). Ionization cooling schemes utilize particle beams in the high energy region~\cite{Neu83}. Frictional cooling utilizes particle beams in the low energy region. Applying an electric field to restore energy loss creates two equilibrium energies: a stable one at an energy \sub{T}{eq}\ below the ionization peak of the stopping power curve, where the energy~loss per unit~path~length $\Idiff{T}{s}\propto\sqrt{T}$, and an unstable one at an energy $\sub{T}{eq}'$ above the peak, where \Idiff{T}{s}\ decreases with increasing kinetic energy. Particles with kinetic energies below \sub{T}{eq}\ accelerate; those with kinetic energies between \sub{T}{eq}\ and $\sub{T}{eq}'$ decelerate. The coolable energy region is defined by $T<\sub{T}{eq}'$. Additionally, restoring lost energy only in the longitudinal direction provides transverse cooling.
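The balance condition above can be illustrated numerically. The sketch below uses a toy stopping-power curve (an assumed functional form chosen only to reproduce the qualitative shape of the curve in the figure, not real helium data) and scans for the energies at which the restoring field balances the energy loss.

```python
import numpy as np

# Toy stopping-power model: rises roughly as sqrt(T) at low energy, peaks,
# then falls off, mimicking the qualitative shape of the stopping-power
# curve.  The form and parameters are illustrative assumptions.
def stopping_power(T, A=1.0, T_peak=80.0):
    return A * np.sqrt(T) / (1.0 + (T / T_peak) ** 1.5)

def equilibria(E_restore, T_grid):
    """Energies where the accelerating power of the field equals the energy
    loss per unit path length.  Stable where the curve rises (below the
    ionization peak), unstable where it falls."""
    S = stopping_power(T_grid)
    f = S - E_restore
    found = []
    for i in range(len(T_grid) - 1):
        if f[i] * f[i + 1] < 0:                 # sign change: an equilibrium
            T_eq = 0.5 * (T_grid[i] + T_grid[i + 1])
            rising = S[i + 1] > S[i]
            found.append((T_eq, "stable" if rising else "unstable"))
    return found

T = np.linspace(0.1, 2000.0, 20001)
eqs = equilibria(0.5 * stopping_power(T).max(), T)
for T_eq, kind in eqs:
    print(f"T_eq ~ {T_eq:.1f} ({kind})")        # one stable, one unstable
```

A field line crossing the curve twice yields exactly two equilibria, the lower one stable and the upper one unstable, matching the discussion above.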
For a chosen equilibrium energy, the electric field strength required to balance the energy loss scales directly with the density of the moderator. The density of the moderator must therefore be low, to keep the electric field strength within a feasible range. Helium and hydrogen gases are good moderators because they have low densities and suppress the capture of electrons by the cooled particles~\cite{PhysRevLett.33.568} and the capture of the particles by the medium atoms~\cite{Coh02} \comment{plus proton ref}. \section{Frictional Cooling Demonstration Experiment\label{sec:construction}} The \textsc{fcd}\ experiment at the \textsc{mpp}\ is designed to verify the working principles behind frictional cooling and the modeling of frictional cooling used in the simulation. The dependence of \sub{T}{eq}\ on moderator density and electric field strength can be measured and compared to Monte Carlo simulations. \begin{figure}[t] \begin{center} \includegraphics[width=\columnwidth]{HVUpgrade.eps} \caption{Photo of the experimental setup: visible are the gas cell (center) with the source on the left-hand side and detector on the right-hand side; the connection to \textsc{hv}\ (left); electronics feedthroughs (right) leading to the electronics (bottom right); and the gas feedthroughs (top right). \label{fig:fcdphoto}} \end{center} \end{figure} The experiment consists of a gas cell mounted inside an accelerating grid that provides the restoring electric field~(\parfig~\ref{fig:fcdCell}). A proton source and an open silicon drift detector~(\textsc{sdd}) are mounted inside the gas cell. The whole construction is then placed inside a vacuum tank (\parfig~\ref{fig:fcdphoto}). The accelerating grid is constructed from twenty-one metal \comment{stainless-steel or tungsten} rings, \unit{3}{\milli\meter} thick, spaced~\unit{5}{\milli\meter} apart, connected in series with \unit{64}{\mega\ohm} resistors between rings.
The rings enclose a cylindrical space~\unit{33}{\milli\meter} in diameter and~\unit{100}{\milli\meter} long from the center of the first ring to the center of the last ring. The first ring is connected to a power supply capable of providing voltages up to~\unit{100}{\kilo\volt}; the last ring is grounded. The central axis of the grid defines the $z$ direction, with~$z\,=\,\unit{0}{\milli\meter}$ at the center of the high-voltage (\textsc{hv}) ring and~$z\,=\,\unit{100}{\milli\meter}$ at the center of the ground ring. The grid creates a nearly uniform electric field along $z$ with strengths up to \unit{1}{\mega\volt\per\meter} (see \parsec~\ref{sec:sim:efield}). The gas cell is a cylinder made of \textsc{peek}, with an outer radius of \unit{31}{\milli\meter} and inner radius of \unit{27}{\milli\meter}. It is centered inside the accelerating grid. The end of the cell nearest the \textsc{hv}\ ring is sealed but for a small hole on the $z$ axis through which the proton source is mounted. The end of the cell nearest the ground ring is sealed by a grounded metal flange that holds the \textsc{sdd}\ and provides gas input and output feedthroughs. \begin{figure}[t] \begin{center} \includegraphics[width=0.45\columnwidth]{sourceConstructed.eps} \includegraphics[width=0.45\columnwidth]{ProtonSourceUpgrade.eps} \caption{Photo of the proton source assembled (left) and disassembled (right). The gold-colored disc is the americium. The cylindrical cap holds the mylar foil.\label{fig:protonsourcephoto}} \end{center} \end{figure} The proton source (\parfig~\ref{fig:protonsourcephoto}) consists of an open alpha source covered with a Mylar foil. The alpha particles free protons from the Mylar (see \parsec~\ref{sec:sim:source}). The source is embedded in the top of a lollipop made of \textsc{peek}. A cap that fastens to the lollipop head holds the Mylar foil in place. Foils of various thicknesses can be swapped into the construction easily and quickly. 
The lollipop stick screws into the head at one end and into a cylindrical platform on the opposite end. This platform is \unit{25}{\milli\meter} in diameter. The platform with the attached lollipop fastens to the \textsc{hv}\ end of the gas cell forming a gas-tight seal. \begin{figure}[t] \begin{minipage}[t]{0.45\textwidth} \centering \includegraphics[height=5.5cm]{SDDPhoto2.eps} \caption{Photo of the \textsc{sdd}. The inner black circle is the back surface of the active region.\label{fig:sddphoto}} \end{minipage} \begin{minipage}[t]{0.05\textwidth} \ \end{minipage} \begin{minipage}[t]{0.5\textwidth} \centering \includegraphics[height=5.5cm]{DetectorInFlangeCropped2.eps} \caption{Photo of the \textsc{sdd}\ mounted into the gas feedthrough flange.\label{fig:sddflangephoto}} \end{minipage} \end{figure} The \textsc{sdd}\ (\parfig~\ref{fig:sddphoto}) mounts through the gas feedthrough flange (\parfig~\ref{fig:sddflangephoto}) to a \textsc{peek}\ holder; when fastened tightly, this mounting provides a gas-tight seal. The \textsc{peek}\ holder also acts as an electronics feedthrough for the \textsc{sdd}. This construction allows for a quick exchange of the \textsc{sdd}. \subsection{Electric Field \label{sec:sim:efield}} \begin{figure}[t] \includegraphics[width=\columnwidth]{BLACK_plastic_map_field_BW.eps} \caption{Map of the potential, for a voltage of \unit{1}{\volt} applied to the first ring, in the $z$--$r$ plane (left) and the potential (black; left axis) and longitudinal electric field strength (dashed; right axis) on the $z$ axis (right) created by the accelerating grid (with plastic source holder).\label{fig:efield}} \end{figure} A map of the electric field created by the accelerating grid is needed for the full simulation of the \textsc{fcd}\ experiment (\parsec~\ref{sec:CoolSim}) as well as for characterizing the detector's response to protons (\parsec~\ref{sec:protons}). 
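The map is computed by relaxing Laplace's equation on a grid. A generic two-dimensional sketch of successive overrelaxation follows (a uniform Cartesian grid with Dirichlet boundaries; the actual calculation uses the \textsc{fcd}'s cylindrical geometry and electrode potentials as boundary values).

```python
import numpy as np

# Generic successive overrelaxation (SOR) for Laplace's equation on a uniform
# 2-D grid.  A sketch of the method only, not the actual FCD field solver.
def sor_laplace(phi, fixed, omega=1.8, tol=1e-6, max_sweeps=20000):
    """phi: potential grid (modified in place); fixed: mask of Dirichlet nodes."""
    nx, ny = phi.shape
    for _ in range(max_sweeps):
        max_change = 0.0
        for i in range(1, nx - 1):
            for j in range(1, ny - 1):
                if fixed[i, j]:
                    continue
                # Over-relax towards the average of the four neighbours.
                residual = 0.25 * (phi[i + 1, j] + phi[i - 1, j]
                                   + phi[i, j + 1] + phi[i, j - 1]) - phi[i, j]
                phi[i, j] += omega * residual
                max_change = max(max_change, abs(omega * residual))
        if max_change < tol:
            break
    return phi

# Parallel-plate check: 1 V on the left edge, 0 V on the right edge, and a
# linear ramp imposed on top and bottom; the solution is the ramp everywhere.
n = 16
phi = np.tile(np.linspace(1.0, 0.0, n), (n, 1))   # ramp on the boundary rows
phi[1:-1, 1:-1] = 0.0                             # zero initial guess inside
fixed = np.zeros((n, n), dtype=bool)
fixed[0, :] = fixed[-1, :] = fixed[:, 0] = fixed[:, -1] = True
sor_laplace(phi, fixed)
print(round(float(phi[n // 2, n // 2]), 4))       # ~0.4667, the ramp value
```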
We use a successive overrelaxation algorithm to calculate the electric field (\parfig~\ref{fig:efield}). This calculation revealed that the potential is at its maximum not at $z=\unit{0}{\milli\meter}$, but instead at $z=\unit{9}{\milli\meter}$. Therefore the surface of the proton source must be placed at $z>\unit{9}{\milli\meter}$. The electric field is strongest and also nearest to uniform at $z>\unit{20}{\milli\meter}$. In the simulation of the \textsc{fcd}\ cell and in the experimental setup for the measurement of proton spectra, the source surface is therefore placed at $z=\unit{20}{\milli\meter}$. \subsection{Proton Source \label{sec:sim:source}} \begin{figure}[t] \includegraphics[width=\columnwidth]{ProtonSourceFull2.eps} \caption{Schematic of the proton production mechanism \label{fig:prosrc}} \end{figure} \begin{table}[b] \begin{center} \begin{tabular}{l *{3}{r}} \hline \hline \rule{0pt}{1em}Energy (\mega\eV) & 5.388 & 5.422 & 5.485 \\ Branching Ratio (\%) & 1.0 & 13.0 & 84.5 \\ \hline \hline \end{tabular} \end{center} \caption{Energies and branching ratios (BR) for alpha particles emitted by \ensuremath{^{241}{\rm Am}}\ with branching ratio greater than 0.4\%\label{tab:AmAlphas}} \end{table} The proton source contains a \unit{74}{\kilo\becquerel} \ensuremath{^{241}{\rm Am}}\ alpha source covered by a thin Mylar foil. The americium emits alpha particles with energies approximately \unit{5}{\mega\eV} (\partab~\ref{tab:AmAlphas}). As they pass through the Mylar, they break carbon--hydrogen bonds, freeing hydrogen nuclei from the Mylar molecule~(\parfig~\ref{fig:prosrc}). When these bonds are broken near the outer surface of the foil, the electric field can accelerate the resultant protons out of the foil before they are recaptured. 
\begin{figure}[t] \includegraphics[width=\columnwidth]{HeHCrossSection_GreenMcNeal.eps} \caption{Experimentally measured cross section for the ionization of molecular hydrogen by He$^{2+}$ \cite{Tawara1985} (points) and a fit adapted from \cite{Gre71} (line) \label{fig:HeHCS}} \end{figure} The number of bonds an \ensuremath{\upalpha}\ breaks per unit distance is {\sub{n}{p}(E) = \sub{\sigma}{ion}(E)\cdot\sub{\rho}{H}}, where \sub{\sigma}{ion} is the cross section for ionization of molecular hydrogen by He$^{2+}$ and \sub{\rho}{H} is the concentration of hydrogen in Mylar, \unit{34.35}{\nano\meter\rpcubed}. The measured cross section for ionization of molecular hydrogen, \begin{displaymath} \textrm{He}^{2+} + \textrm{H}_{2} \to \textrm{He}^{n} + \textrm{p},\nonumber \end{displaymath} where He$^{n}$ is any charge state of helium, was reported in \cite{Tawara1985}. To describe the cross section (\parfig~\ref{fig:HeHCS}), we fit to the data a semi-empirical formula for the cross section of hydrogen on helium found in \cite{Gre71}, \begin{displaymath} \sigma = \sigma_0 \cdot a_1 \, \left(\frac{E'}{E_R}\right)^{\displaystyle a_2}{\Bigg /} \left(1+\left(\frac{E'}{a_3}\right)^{\displaystyle a_2+a_4} + \left(\frac{E'}{a_5}\right)^{\displaystyle a_2+a_6}\right), \nonumber \end{displaymath} where $E'=E-E_t$ is the energy of the \ensuremath{\upalpha}\ minus the threshold energy of the process, $E_t$; $E_R$ is the Rydberg energy multiplied by $\sub{m}{He}/\sub{m}{e}$. The $a_i$ are the fit parameters, and $\sigma_0$ is a scaling factor equal to \unit{\power{10}{-16}}{\centi\meter\squared}. The americium is in the form of a disc \unit{2.5}{\milli\meter} in diameter, which is embedded in a plastic lollipop-shaped holder. It is completely open on its exposed side. A cap fits over the lollipop to hold the Mylar foil in place at a distance of \unit{2.45}{\milli\meter} from the source. The cap has a circular opening \unit{3.5}{\milli\meter} in diameter centered over the source.
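The semi-empirical formula above can be transcribed directly into code. In this sketch the parameter values $a_1$--$a_6$ and the threshold $E_t$ are illustrative placeholders, not the values fitted to the measured data.

```python
# Direct transcription of the semi-empirical (Green-type) cross-section
# formula.  The parameters a1..a6 and the threshold E_t are illustrative
# placeholders, NOT the values fitted to the measured data.
SIGMA_0 = 1e-16                                   # cm^2, scaling factor
E_R = 13.6e-6 * (4.002602 / 5.485799e-4)          # MeV: Rydberg * m_He / m_e

def sigma_ion(E, E_t=0.01, a=(10.0, 1.0, 1.0, 1.0, 10.0, 2.0)):
    """Ionization cross section (cm^2) for an alpha of kinetic energy E (MeV)."""
    E_prime = E - E_t                             # energy above the threshold
    if E_prime <= 0.0:
        return 0.0
    a1, a2, a3, a4, a5, a6 = a
    numerator = a1 * (E_prime / E_R) ** a2
    denominator = (1.0 + (E_prime / a3) ** (a2 + a4)
                   + (E_prime / a5) ** (a2 + a6))
    return SIGMA_0 * numerator / denominator

# Bonds broken per unit path length: n_p = sigma_ion * rho_H,
# with rho_H = 34.35 nm^-3 converted to cm^-3.
RHO_H_PER_CM3 = 34.35 * 1e21
n_p_per_cm = sigma_ion(5.485) * RHO_H_PER_CM3     # for a 5.485 MeV alpha
```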
\begin{figure}[t] \begin{center} \includegraphics[width=.45\columnwidth]{protonSourceSpatialDist_fx.eps} \includegraphics[width=.45\columnwidth]{protonSourceRate_fx.eps} \caption{Radial distribution of p/\ensuremath{\upalpha}\ (left) and total proton production rate for a \unit{74}{\kilo\becquerel} alpha source (right) as functions of Mylar foil thickness. Both rates count only protons produced in the last \unit{1}{\nano\meter} of foil.\label{fig:prodist}} \end{center} \end{figure} We simulated this proton source in Geant4~\cite{Geant4}. The americium emits alpha particles isotropically. We tracked those alpha particles that passed through the foil and recorded their trajectories and energy losses in the Mylar foil. We calculated the number of C--H bonds broken by an \ensuremath{\upalpha}\ at a point along its trajectory using its Geant4-calculated energy. This was used to calculate the radial distribution (in a plane parallel to the surface of the foil) of the number of C--H bonds broken within the last \unit{1}{\nano\meter} as a function of thickness of the foil traversed. We used \unit{1}{\nano\meter} as an estimate of the depth from which a proton can escape out of the foil because this is roughly the size of the Mylar monomer. Figure~\ref{fig:prodist} (left) displays the results for thicknesses of the foil from \unit{1}{\micro\meter} to \unit{33}{\micro\meter} in \unit{1}{\micro\meter} steps; this is the precision on the thickness at which such foils are manufactured. We also calculated the total proton production (again in the last \unit{1}{\nano\meter}) as a function of thickness of the Mylar foil (\parfig~\ref{fig:prodist}, right). The shape of the rate--thickness relationship is similar to that of the Bragg peak for \alphaparticle 's\ in Mylar; however, it is broadened by the distribution of the incidence angles of the \alphaparticle 's.
The rate is maximum in the region of \unit{23}{\micro\meter} and is zero above \unit{30}{\micro\meter}, since the \alphaparticle 's\ are stopped in the foil before reaching the surface. Around the rate-maximizing thickness, the proton production is spatially uniform, and at larger thicknesses, the protons are produced mainly at the center of the foil surface (\parfig~\ref{fig:prodist}). Though at the larger thicknesses the total rate is lower, when detector acceptance is taken into account (see \parsec~\ref{sec:detAcc}), the centralized proton production may be beneficial. The square-shaped markers in the right plot of figure~\ref{fig:prodist} indicate the five thicknesses available for the experiment: 12, 19, 24, 30, and \unit{33}{\micro\meter}. By sandwiching foils together, we are also able to test a thickness of \unit{31}{\micro\meter}. \subsection{Silicon Drift Detector} \begin{figure}[t] \begin{center} \includegraphics[width=0.5\columnwidth]{SDDSchematic.eps} \caption{Schematic of the \textsc{sdd}\ from \cite{Sim07}, showing the electrode structure used to deplete the detector bulk and set up the charge-collection field, and the \textsc{fet}.\label{fig:sddschematic}} \end{center} \end{figure} The \textsc{sdd}, designed and constructed by the \textsc{mpp}'s Semiconductor Laboratory, measures particle energies from a few hundred eV to approximately \unit{150}{\kilo\eV} \comment{find \textsc{hll} ref, and fix numbers}. The detector is constructed on an n-type silicon wafer \unit{450}{\micro\meter} thick. The exposed (back) surface of the detector is covered uniformly with a 30-\nano\meter-thick aluminum electrode. The opposite surface is implanted with concentric rings of p-type silicon (\parfig~\ref{fig:sddschematic}). A negative potential (on the order of \unit{-100}{\volt}) on the aluminum depletes the silicon. The p-type rings are placed at voltages that produce a well-shaped potential inside the silicon.
Ionizing particles produce a number of electron-hole pairs in the silicon in proportion to the amount of energy they deposit along their trajectory; the electrons then drift to the center of the p-doped surface of the detector, where a field-effect transistor produces a signal. \begin{figure}[t] \includegraphics[width=.5\columnwidth]{sddSignal_AfterPreamp_full.eps} \includegraphics[width=.5\columnwidth]{sddSignal_AfterPreamp.eps} \caption{Signal from \textsc{sdd}\ after initial amplification. The plot on the right shows the rising edge of the signal, shown in full on the left. \label{fig:preampSig}} \end{figure} The voltages for depletion of the silicon and setting up of the potential well are regulated by electronics manufactured by PNSensor~\cite{pnsensor}. The detector outputs a saw-tooth-shaped voltage pulse that after initial amplification by these electronics has a rise time of 30~to~40~\nano\second\ (\parfig~\ref{fig:preampSig}) and an amplitude~$\sub{V}{det}~\!\approx~\!(\unit{1.2}{\milli\volt\per\kilo\eV})\;\sub{T}{dep}$, where \sub{T}{dep}\ is the energy the particle deposits in the active layers of the detector. \subsection{Electronic Readout} \begin{figure}[t] \begin{minipage}[t]{0.5\textwidth} \centering \includegraphics[width=\columnwidth]{fitsig.eps} \end{minipage} \begin{minipage}[t]{0.5\textwidth} \centering \includegraphics[width=\textwidth]{verysatsig.eps} \end{minipage} \begin{minipage}[t]{0.47\textwidth} \caption{Shaped and digitized signal from the \textsc{sdd}\ with pedestal and peak fits.\label{fig:shapedSig}\label{fig:fittedsig}} \end{minipage} \begin{minipage}[t]{0.03\textwidth} \ \end{minipage} \begin{minipage}[t]{0.5\textwidth} \caption{Saturated signal from the \textsc{sdd}.\label{fig:satSig}} \end{minipage} \end{figure} A shaping amplifier with a shaping time of~\unit{0.25}{\micro\second} converts this quick signal to a fin-shaped pulse~3~to~4~\micro\second\ wide~(\parfig~\ref{fig:shapedSig}). 
The shaper preserves the linearity of the signal amplitude's dependence on~\sub{T}{dep}. The signal is split in two: one part is used for triggering; the other is digitized and saved for offline analysis. A 12-bit \textsc{adc}\ from National Instruments~\cite{natInst}, interfaced with a computer via LabView~\cite{LabView}, digitizes the signal with a sampling rate of \unit{10}{\mega\hertz}. Before entering the \textsc{adc}, the signal is delayed \unit{4.75}{\micro\second} with respect to the trigger signal, so that the baseline voltage before the signal is recorded as well. The \textsc{adc}\ records the event for \unit{10}{\micro\second} (100 samples). A window discriminator, consisting of two trailing-edge constant fraction discriminators (\textsc{cfd}), one setting a low threshold, the other a high threshold, produces the trigger signal for the \textsc{adc}\ to begin sampling. The low-threshold \textsc{cfd}\ filters out low-amplitude voltage fluctuations---that is, noise. Since the \textsc{adc}\ has a maximum triggering rate of \unit{30}{\hertz}, when needed, the high-threshold \textsc{cfd}\ was used to filter out unimportant signals that had large amplitudes, namely those from \ensuremath{\upalpha}\ particles. A particle that deposits too much energy in the detector produces a charge-saturated signal~(\parfig~\ref{fig:satSig}). The signal is distorted and the energy of the particle cannot be reconstructed. These signals themselves are filtered out by the window discriminator; however, the saturation often produces secondary signal peaks in the trailing edge of the original signal. These peaks are large enough to pass the low-threshold \textsc{cfd}\ but small enough for the high-threshold \textsc{cfd}\ to not veto them. To prevent these signals from swamping the \textsc{adc}, a gate generator can produce a veto signal from the high-threshold \textsc{cfd}'s trigger pulse. 
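As a minimal sketch, the decision logic of the two-threshold window discriminator described above can be modelled in software; the threshold values here are illustrative, not the settings used in the experiment:

```python
def window_trigger(peak_mV, low_mV=5.0, high_mV=500.0):
    """Simplified model of the window discriminator: trigger only on
    signals above the low (noise) threshold that are not vetoed by the
    high threshold (large alpha-particle or saturated pulses).
    Thresholds are illustrative values."""
    return low_mV < peak_mV <= high_mV

assert window_trigger(50.0)         # typical proton or x-ray pulse
assert not window_trigger(2.0)      # low-amplitude noise is rejected
assert not window_trigger(2000.0)   # large alpha pulse is vetoed
```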
The length of the gate can be set within a large range of times from less than~\unit{100}{\nano\second} to greater than~\unit{11}{\second}. \subsection{Offline Analysis} The amplitude of a signal above the baseline linearly corresponds to the energy deposited in the detector by the incoming particle. The shape of the signal around its peak is approximately gaussian. However, since outside this region the signal is not perfectly gaussian, we cannot fit the whole signal with a gaussian shape plus a pedestal to get both the amplitude and the baseline. Instead, we fit the peak and the samples before the signal separately~(\parfig~\ref{fig:fittedsig}). Fitting the first approximately \unit{2}{\micro\second} with a constant pedestal plus a gaussian function that describes the start of the signal gives the baseline value. Fitting a symmetric \unit{2}{\micro\second} window around the peak with a gaussian function gives the signal amplitude. \begin{figure}[t] \includegraphics[width=\columnwidth]{delta2ndf.eps} \caption{$\chi^{2}/$NDF distributions as a function of event energy for a background run.\label{fig:chi2cut}} \end{figure} A visual investigation of the pulses revealed two different classes of signal: one that contained one peak, which the functional form reproduced well; and one that contained multiple peaks, which the functional form did not reproduce well. The poorly fitted pulses were clearly due to noise. Cutting on $\log(\delta^{2})<-4$~(\parfig~\ref{fig:chi2cut}), defined by \begin{displaymath} \delta^2 = h^{-2} \sum_i [V_{i}-f(t_{i})]^{2}, \end{displaymath} rejects such noise, where $V_i$ and $t_i$ are the voltage and time of the $i$th voltage sample, $f(t_{i})$ is the fitted voltage at time $t_i$, and $h$ is the maximum voltage of the event.
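The pulse characterization and the $\delta^2$ noise cut can be sketched as follows. This is a simplified stand-in for the actual analysis (moment-based gaussian parameters instead of the two least-squares fits, and synthetic pulses), meant only to show how the statistic separates single-peak signals from multi-peak noise:

```python
import numpy as np

def analyze_pulse(t, v, n_pedestal=20):
    """Estimate baseline, amplitude, and the delta^2 statistic of a
    shaped pulse.  Simplified sketch: baseline from the pre-signal
    samples, gaussian parameters from simple moments."""
    baseline = v[:n_pedestal].mean()          # pedestal region
    s = v - baseline
    i_peak = int(np.argmax(s))
    amplitude = s[i_peak]
    # width estimate from the full width at half maximum
    above = np.where(s > 0.5 * amplitude)[0]
    sigma = (t[above[-1]] - t[above[0]]) / 2.355
    model = baseline + amplitude * np.exp(-0.5 * ((t - t[i_peak]) / sigma) ** 2)
    delta2 = np.sum((v - model) ** 2) / v.max() ** 2
    return baseline, amplitude, delta2

# 10 us of samples at 10 MHz, as recorded by the ADC
t = np.linspace(0.0, 10.0, 100)
clean = 0.1 + 1.0 * np.exp(-0.5 * ((t - 5.0) / 0.5) ** 2)       # one peak
double = 0.1 + 0.5 * (np.exp(-0.5 * ((t - 3.0) / 0.5) ** 2)
                      + np.exp(-0.5 * ((t - 7.0) / 0.5) ** 2))  # noise-like

_, _, d_clean = analyze_pulse(t, clean)
_, _, d_double = analyze_pulse(t, double)
assert d_clean < d_double   # the delta^2 cut separates the two classes
```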
\subsection{Detector Calibration \label{sec:detcal}} \begin{figure}[t] \centering \includegraphics[width=\textwidth]{calib.eps} \caption{\ensuremath{^{241}{\rm Am}}\ (top left) and \ensuremath{^{55}{\rm Fe}}\ (top right) \textsc{x}-ray\ spectra, and calibration fit (bottom).\label{fig:detcal}} \end{figure} To calibrate the detector and readout system, we record the spectra of the known \textsc{x}-ray\ sources \ensuremath{^{55}{\rm Fe}}\ and \ensuremath{^{241}{\rm Am}}. The two sources provide \xray s\ at many energies between \unit{5}{\kilo\eV} and \unit{60}{\kilo\eV}, allowing us to learn the proportionality of the detector's signal amplitude to~\sub{T}{dep}~(\parfig~\ref{fig:detcal}) and to check the linearity of the detector response. \begin{figure}[t] \centering \includegraphics[angle=-90,width=.45\columnwidth]{Fe55Old_lin.eps} \includegraphics[angle=-90,width=.45\columnwidth]{Fe55New_lin.eps} \caption{\ensuremath{^{55}{\rm Fe}}\ spectra taken with the \textsc{sdd}\ at room temperature (left) and cooled to \unit{-20}{\celsius} (right).\label{fig:detres}} \end{figure} Since \xray s\ deposit their energy locally and uniformly inside the silicon, these sources also allow us to measure the energy resolution of the detector and our read-out system without influences from any dead layer structure. The resolution, here defined as the \textsc{fwhm}\ of an energy peak, is greatly affected by the temperature of the detector~(\parfig~\ref{fig:detres}). At room temperature, the detector has resolutions at the Mn K$\upalpha$\ line of \ensuremath{^{55}{\rm Fe}}~(\unit{5.9}{\kilo\eV}) greater than~\unit{400}{\electronvolt}. When cooled to temperatures below~\unit{0}{\celsius}, the detector reaches resolutions below~\unit{150}{\electronvolt}. \subsection{Gas Purity \label{sec:gaspurity}} The frictional cooling effect and the operation of the open \textsc{sdd}\ require that the moderating gas be uncontaminated.
The detector has a dead layer of \unit{30}{\nano\meter} of aluminum on top of the silicon that forms its active volume. As well, in the first approximately \unit{200}{\nano\meter} of the silicon, the detector has a charge collection efficiency below 100\%. In the energy range of interest, $T\lessapprox\unit{30}{\kilo\eV}$, protons deposit a significant amount of energy in the aluminum and deposit their remaining energy largely in the region of reduced collection efficiency. Therefore, the \textsc{sdd}\ measures only 35\% to 65\% of the proton's energy (\parsec~\ref{sec:protons}). Impurities in the gas environment can build up on the detector's surface, which is the coldest surface in the experiment setup, increasing the amount of material that protons must traverse before entering the active layers of the detector. Even a thin layer of built-up gas impurities significantly reduces the amount of energy measured and, due to straggling in the trajectories of protons through the dead layers, greatly reduces the detector's energy resolution. The main sources of impurities are outgassing of molecules from the plastic pieces in the experiment construction and contamination of the gas from impurities either in the gas source or entering along the gas transfer line. The effects of outgassing can be greatly reduced by pumping the gas cell down to a low pressure (\unit{\power{10}{-6}}{\milli\bbar}) for several days. During this time, the plastic pieces expel foreign molecules, leaving the environment cleaner for data-taking runs. The boil-off from cryogenic liquids is used as an ultrapure gas source. The gas transfer lines are tightly sealed, capable of holding pressures down to \unit{\power{10}{-9}}{\milli\bbar}, to prevent air leaking into the gas. As well, the transfer lines are entirely constructed from stainless steel, so no outgassing can contaminate the gas on the way from the source to the cell. 
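The speed at which impurities accumulate on the cold detector surface can be estimated from kinetic theory: the impingement rate per unit area is $\Phi = P/\sqrt{2\pi m k_B T}$, so at the quoted pump-down pressure a monolayer forms within seconds. A rough sketch, in which the surface site density and sticking coefficient are order-of-magnitude assumptions:

```python
import math

def monolayer_time_s(p_mbar, T_K=300.0, m_amu=28.0,
                     n_site_per_m2=1e19, sticking=1.0):
    """Rough time to deposit one monolayer of contaminant on a surface.
    Impingement rate Phi = P / sqrt(2 pi m k_B T) from kinetic theory;
    site density and sticking coefficient are illustrative values."""
    k_B = 1.380649e-23                  # J/K
    m = m_amu * 1.66053906660e-27       # molecular mass in kg
    p_Pa = p_mbar * 100.0
    phi = p_Pa / math.sqrt(2.0 * math.pi * m * k_B * T_K)
    return n_site_per_m2 / (sticking * phi)

# At the 1e-6 mbar pump-down pressure a monolayer forms in seconds,
# which is why the cell is pumped for days before data taking.
t_mono = monolayer_time_s(1e-6)
```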
The construction of an impurity trap, improvements in the gas line, and ultrapure helium sources will be implemented for running the full system. \subsection{Electric Breakdown} Due to the large electric field strengths it provides, the accelerating grid must be kept in high vacuum ($P<\unit{\power{10}{-4}}{\milli\bbar}$) in order to prevent breakdown of the electric field between grid rings. Even at high vacuum, breakdowns can occur if the lines carrying the high voltage come too near to lower voltage rings or grounded pieces. Such breakdowns were observed in early versions of the experiment construction in which the high voltage was led to the \textsc{hv}\ ring by an insulated wire that passed along the length of the accelerating grid. Electrical discharges originated at the \textsc{hv}\ ring and traveled down the surface of the wire's insulation to the grounded detector flange, when high voltages as low as \unit{15}{\kilo\volt} were applied to the \textsc{hv}\ ring. To prevent this from occurring, the construction was altered to bring high voltages to the \textsc{hv}\ ring from the opposite side (\parfig~\ref{fig:fcdphoto}). The gas seal at the source side of the cell must be very tight since it is at the \textsc{hv}\ end of the grid. A small leak can stream gas over the rings, creating a path for the breakdown between rings. It can also provide a path for charge to flow in a breakdown from inside the gas cell to the \textsc{hv}\ ring. Both breakdown mechanisms were observed in an early version of the experiment construction in which the proton source was mounted directly through the gas cell wall. This construction did not create a tight enough gas seal, and breakdowns were observed at gas pressures above several \unit{\power{10}{-1}}{\milli\bbar}, when high voltages as low as \unit{10}{\kilo\volt} were applied to the \textsc{hv}\ ring. To prevent gas leaks causing such breakdowns, we constructed the source-holder platform described above.
\subsubsection{Breakdown Inside the Gas Cell} Breakdown of the electric field also occurs entirely inside the gas cell. This can be particularly dangerous since a breakdown that strikes the detector can destroy it or the detector electronics. The frequency and strength of these breakdowns depends greatly on the source-holder platform. We constructed two platforms: one made of \textsc{peek}\ that does not alter the electric field of the accelerating grid; and one made of stainless steel that can be connected to the \textsc{hv}\ ring or left unconnected (floating), and alters the shape of the electric field at the \textsc{hv}\ end. When the metal platform is electrically connected to the \textsc{hv}\ ring, such a large current is drawn at any pressure that the high voltage supply is incapable of providing enough current to hold a \textsc{hv}\ above \unit{1}{\kilo\volt}. However, when the platform is left floating, higher voltages can be reached before a breakdown from the platform to a point inside the gas cell occurs. \begin{figure}[t] \centering \includegraphics[width=\textwidth]{paschen_without_source.eps} \caption{Threshold voltage for breakdown of the electric field inside the gas cell for helium, nitrogen, and argon with a floating metal source-holder platform.\label{fig:paschen}} \end{figure} A photodiode, mounted onto an empty \textsc{sdd}\ housing installed inside the gas cell in place of the detector, measured the light from discharges in the gas, allowing for measurement of the frequency of electric breakdown. We observed that discharges of a harmless size occurred frequently (\unit{\gtrapprox1}{\hertz}), but did not drain enough current to alter the voltage. However, above a voltage threshold that depends on the pressure of the gas in the cell, we observed large discharges occurring with higher frequency. These discharges prevented the high voltage supply from keeping a constant voltage.
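The pressure dependence of these breakdown thresholds can be compared against the Townsend (Paschen) relation used below to describe the data. A minimal sketch of the threshold formula; the coefficients $A$, $B$, and $\gamma$ here are illustrative numbers, not fitted values for any of the gases tested:

```python
import math

def breakdown_voltage(p_mbar, d_cm, A, B, gamma, V0=0.0):
    """Townsend breakdown threshold with an offset:
    V_br = B*p*d / (ln(A*p*d) - ln(ln(1 + 1/gamma))) + V0,
    with A and B the Townsend coefficients (per mbar*cm here) and gamma
    the secondary-emission coefficient.  All coefficients illustrative."""
    pd = p_mbar * d_cm
    denom = math.log(A * pd) - math.log(math.log(1.0 + 1.0 / gamma))
    if denom <= 0.0:
        return float('inf')   # left of the Paschen minimum: no breakdown
    return B * pd / denom + V0

# Threshold rises with p*d on the high-pressure branch of the curve
v_low = breakdown_voltage(5.0, 1.0, A=10.0, B=250.0, gamma=0.01)
v_high = breakdown_voltage(10.0, 1.0, A=10.0, B=250.0, gamma=0.01)
```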
The dependence of the threshold voltage \sub{V}{br} on gas pressure $p$ (\parfig~\ref{fig:paschen}) matches the predictions from the Townsend discharge theory \cite{pej2002} augmented by an offset voltage $V_{0}$, \begin{equation} \sub{V}{br} = \frac{Bpd}{\ln(Apd)-\ln(\ln(1+\gamma^{-1}))} + V_{0}, \end{equation} where $d$ is the distance over which the discharge takes place, $A$ and $B$ are the Townsend coefficients, and $\gamma$ is the secondary emission coefficient. Coefficients $A$, $B$, and $\gamma$ are different for each gas. Fitting the Townsend theory to the voltage-threshold data with $d$ and $V_0$ as the only free parameters indicates a discharge distance on the order of \centi\meter, the same order of size as the gas cell. Tests conducted with the \textsc{peek}\ platform and americium source indicate that charge freed from the gas by the americium alphas builds up on the platform until a breakdown occurs from the platform to the \textsc{hv}\ ring. These breakdowns create large discharges and are fatal to the detector and detector electronics. However, these breakdowns do not occur when the seal between the \textsc{peek}\ platform and the end of the gas cell is made very gas tight, closing the path for charge to take during a breakdown. In the gas-tight cell, electric fields of strengths up to \unit{650}{\kilo\volt\per\meter} have been reached without breakdown at gas pressures from \unit{\power{10}{-7}}{\milli\bbar} to \unit{1.25}{\bbar}. \section{FCD Cell Simulation\label{sec:CoolSim}} We simulated the frictional cooling process in the \textsc{fcd}\ cell to provide expectations for proton energy spectra under different configurations. For this simulation as well as full frictional cooling schemes~\cite{greenwald:293,Bao201028}, we developed software based on Geant4, called CoolSim~\cite{CoolSim}. It implements the low-energy packages of Geant4 optimized for the tracking of protons and muons through matter. 
As well, we have added new processes to the Geant4 framework for the simulation of charge exchange processes at low energies in gaseous materials \cite{greenwald:293}. \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{fcd_T_z_blue_edited.eps} \caption{Simulated kinetic energy distributions of protons (shaded) and the kinetic energy of a single proton (line) as a function of $z$ in the \textsc{fcd}\ cooling cell filled with helium gas at \unit{40}{\milli\bbar} and an electric field strength of \unit{0.4}{\mega\volt\per\meter}.\label{fig:evz}} \end{figure} The cell simulated was exactly as described in \textsec~\ref{sec:construction}, with the electric field shape as calculated in \textsec~\ref{sec:sim:efield}. The protons were simulated as originating from a point source located on the $z$ axis at the center of the 5th ring ($z=\unit{20}{\milli\meter}$). In the following discussion all data are taken from runs in which ten thousand protons were simulated for each of the combinations of nine electric field strengths, evenly spaced between \unit{0.1}{\mega\volt\per\meter} and \unit{0.5}{\mega\volt\per\meter}, and nine helium gas densities, logarithmically spaced between \unit{1}{\milli\bbar} and \unit{700}{\milli\bbar}. In each simulation run, protons start at rest and accelerate through the gas in the positive $z$ direction, approaching the equilibrium energy (\parfig~\ref{fig:evz}). \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{E_P_acc_col.eps} \caption{Simulated detector acceptance as a function of electric field strength and helium gas density.\label{fig:acceptance}} \end{figure} \label{sec:detAcc} As they accelerate, they interact with the helium gas, scattering away from the $z$ axis and decreasing acceptance in the \textsc{sdd}~(\parfig~\ref{fig:acceptance}), which has a radius of \unit{1.78}{\milli\meter}. 
The mean free path for scattering decreases with increasing helium gas pressure, causing more protons to scatter away from the \textsc{sdd}\ at higher pressures. However, the stronger electric fields refocus some of those scattered protons towards the $z$ axis. The lowest acceptances are expected for the high-pressure--weak-field region of the parameter space. The scattering can be seen in the highlighted trajectory of figure~\ref{fig:evz}: after scattering into a direction opposed to the electric field, the proton decelerates, turns around, and then reaccelerates. This produces the abrupt kinetic energy fluctuation seen in the figure. \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{E_P_T_col.eps} \caption{Simulated mean kinetic energy of protons at the \textsc{sdd}\ as a function of electric field strength and helium gas density.\label{fig:E_P_T}} \end{figure} For each combination of electric field strength and gas pressure the mean of the kinetic energy distribution at the \textsc{sdd}\ ($z=\unit{10}{\centi\meter}$, $r\leq\unit{1.78}{\milli\meter}$) is calculated (\parfig~\ref{fig:E_P_T}). For a fixed electric field strength, raising the gas pressure increases the energy loss to the helium, decreasing the mean energy at the detector. For a fixed gas pressure, raising the electric field strength increases the restorative energy gain, increasing the mean energy. Both behaviors are as expected from \textfig~\ref{fig:dedx}. \section{Measurements} Several measurements were made using the experimental setup to calibrate the detectors~(\parsec~\ref{sec:detcal}) and measure the effect of their dead layers, as well as to measure the \textsc{x}-ray\ background, and verify the production of protons. All of the following measurements were made with the proton source, with a $23$-$\micro\meter$-thick Mylar foil, mounted in the accelerating grid at $z=\unit{20}{\milli\meter}$ and the detector cooled to approximately \unit{15}{\celsius}.
Background and gasless proton measurements were made without the gas cell in place. The total data taking rate was approximately \unit{10}{\hertz}. The \textsc{x}-ray\ rate was approximately \unit{1.5}{\hertz}; the proton rate was approximately \unit{2}{\hertz}; and the remaining rate was due to saturated pulses from \mega\eV\ alpha particles. \subsection{Background} \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{bg.eps} \caption{Background spectra with \ensuremath{^{241}{\rm Am}}\ source present and the gas cell evacuated.\label{fig:background}} \end{figure} \begin{table}[t] \begin{center} \begin{tabular}{r @{.} l r @{.} l r r @{.} l} \hline \hline \multicolumn{2}{l}{\rule{0pt}{1em}Energy} & \multicolumn{2}{c}{\ BR} & \multicolumn{1}{r@{\hspace{.6em}}}{D} & \multicolumn{2}{c}{BR$\,\cdot\,$D} \\ \multicolumn{2}{l}{(\kilo\eV)} & \multicolumn{2}{c}{\ (\%)} & \multicolumn{1}{r@{\hspace{.25em}}}{\ (\%)} & \multicolumn{2}{l}{(\%)} \\ \hline 11&87 & 0&66 & 92 & 0&61 \\ 13&76 & 1&07 & 81 & 0&87 \\ 13&95 & 9&6 & 79 & 7&6 \\ 15&86 & 0&15 & 62 & 0&10 \\ 16&11 & 0&18 & 61 & 0&11 \\ 16&82 & 2&5 & 58 & 1&4 \\ 17&06 & 1&5 & 56 & 0&8 \\ 17&50 & 0&65 & 54 & 0&35 \\ 17&99 & 1&37 & 51 & 0&70 \\ 20&78 & 1&39 & 36 & 0&50 \\ 21&10 & 0&65 & 35 & 0&23 \\ 21&34 & 0&59 & 35 & 0&20 \\ 21&49 & 0&29 & 34 & 0&10 \\ 26&34 & 2&40 & 23 & 0&56 \\ 59&54 & 35&90 & 3 & 1&21 \\ \hline \hline \end{tabular} \end{center} \caption{Energies, branching ratios (BR), detectability (D), and BR$\,\cdot\,$D for \xray s\ emitted by \ensuremath{^{241}{\rm Am}}\ with branching ratio greater than 0.2\%.\label{tab:AmXrays}} \end{table} The \ensuremath{^{241}{\rm Am}}\ in the proton source emits \xray s\ in the energy range of interest for the proton measurements ($E\lesssim\unit{30}{\kilo\eV}$). The observed rate of \xray s\ is comparable to that of protons, so the background energy spectrum must be measured for subtraction from the proton energy spectra.
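The last column of table~\ref{tab:AmXrays} is simply the product of branching ratio and detectability. A quick sketch, with values transcribed from the table; since the tabulated detectabilities are rounded to integer percent, the recomputed products can differ from the table's last column in the final digit:

```python
# Energy (keV) -> (branching ratio %, detectability %) for three of the
# prominent Am-241 x-ray lines, transcribed from the table.
lines_keV = {13.95: (9.6, 79), 26.34: (2.40, 23), 59.54: (35.90, 3)}

def detectable_br(br_percent, d_percent):
    """Detectable yield per decay: branching ratio times detectability."""
    return br_percent * d_percent / 100.0   # still in percent

for energy in sorted(lines_keV):
    br, d = lines_keV[energy]
    print(f"{energy:6.2f} keV: BR*D = {detectable_br(br, d):.2f}%")
```

Despite its dominant branching ratio, the 59.54 keV line contributes less to the detected background than the 13.95 keV line, because the detector is nearly transparent at that energy.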
The background spectrum~(\parfig~\ref{fig:background}) contains several peaks from the \ensuremath{^{241}{\rm Am}}\ spectrum~(\partab~\ref{tab:AmXrays}) as well as low-amplitude noise. To reduce the low-amplitude noise rate, a voltage threshold corresponding to an energy threshold of \unit{1}{\kilo\eV} is used in data recording. The probability that the \xray s\ interact in the \textsc{sdd}, which is \unit{450}{\micro\meter} thick, rapidly decreases with increasing energy in the range of interest. Table~\ref{tab:AmXrays} lists the detectability (D), defined as the percentage of \xray s\ interacting in the sensitive volume of the detector, and detectable branching ratio (D$\,\cdot\,$BR) for the prominent \textsc{x}-ray\ lines. \subsection{Proton Observations \label{sec:protons}} \begin{figure}[t] \includegraphics[width=\columnwidth]{backgroundsubtraction.eps} \caption{Energy spectrum for an $E=\unit{230}{\kV\per\meter}$ run: full spectrum (line), background-removed spectrum (thick line), and background (highlighted).\label{fig:bgsubtraction}} \end{figure} Energy spectra were measured with electric field strengths evenly spaced from \unit{70}{\kV\per\meter} to \unit{300}{\kV\per\meter} in \unit{10}{\kV\per\meter} steps. The background spectrum ($E=\unit{0}{\kV\per\meter}$) and proton spectra ($E>\unit{0}{\kV\per\meter}$) were analyzed together to discover the overall background rate and the signal rate above that background for each spectrum. Figure~\ref{fig:bgsubtraction} shows an example spectrum, for $E=\unit{230}{\kV\per\meter}$. The background spectrum as calculated from all the measured spectra is shown for comparison. The spectrum has a prominent proton peak centered at approximately \unit{11}{\kilo\eV}, which is lower than the \unit{18.4}{\kilo\eV} expected from the calculation of the electric field. The discrepancy is due to energy deposition in the dead and partially-inactive layers of the detector.
The peak has a \textsc{fwhm}\ of approximately \unit{2}{\kilo\eV}. This is larger than the \textsc{x}-ray\ energy resolution at the same energy, but is as expected from the distribution of proton energy loss in the dead layers. The proton peak also has a tail to lower energies. This is due to protons striking the outer edges of the detector surface where they encounter larger dead layers. The increase in the low-energy noise above the background rate may be due to fluorescence of the silicon and aluminum of the detector, which produces peaks in this range. \begin{figure}[t] \includegraphics[width=\columnwidth]{proruns_even.eps} \caption{Proton energy spectra, with background removed, for five representative strengths of the electric field.\label{fig:protonspec}} \end{figure} Figure~\ref{fig:protonspec} shows five of the proton spectra along with the overall background. The peak centers are evenly spaced in accordance with the field strengths at which they were measured. As well, the proton rate remained nearly constant with changing electric field strength. \begin{figure}[!t] \includegraphics[width=\columnwidth]{sddresponseCombined_fx.eps} \caption{\textsc{sdd}-measured energy as a function of expected proton energy (top) and the ratio of measured to expected energy as a function of expected energy (bottom). \comment{combine into one plot}\label{fig:detresponse}} \end{figure} The upper plot of figure~\ref{fig:detresponse} shows the peak centers of the spectra (\sub{T}{meas}, obtained by fitting with a gaussian distribution) as a function of the expected proton energy (\sub{T}{exp}), which is obtained from the numerical calculation of the electric field. The detector's dead layers are of a thickness on the order of 100s of \nano\meter, which is also the order of size of the penetration depth of \kilo\eV\ protons.
The higher-energy protons travel further into the detector, depositing a larger ratio of their energy in fully active layers of the detector (\parfig~\ref{fig:detresponse}, bottom). The lowest energy run in \textfig~\ref{fig:detresponse}, for which $E=\unit{70}{\kV\per\meter}$, has $\sub{T}{meas}=\unit{0}{\kilo\eV}$ because after depositing energy in the dead layers, the protons did not have enough energy left to be measurable above the low-energy threshold, which vetoes electronic noise in the detector readout system. Any further build-up of a dead layer increases the minimum energy protons must have in order to be detectable. \section{Conclusion} The \textsc{fcd}\ experiment at the Max Planck Institute for Physics, Munich, has been commissioned to study the working principle behind frictional cooling. The experiment construction is complete and all parts have been commissioned: The accelerating grid can maintain electric field strengths without breakdown up to \unit{900}{\kV\per\meter} in an evacuated gas cell and up to \unit{650}{\kV\per\meter} in a pressurized gas cell with pressures up to \unit{1.25}{\bbar}. The detector can measure energy with good resolutions and can be reliably operated in the strong field of the accelerating grid. The gas system is capable of maintaining a specified pressure for several hours. Proton spectra have been measured, demonstrating that the source functions. The next step in the \textsc{fcd}\ experiment is the taking of data with the gas cell filled. \section*{Acknowledgements} Design and construction of the experiment apparatus was accomplished with help from Karlheinz Ackermann and G\"unter Winkelmuller. Many of the data-taking electronics components were designed and built by Si Tran. Much help in operating the \textsc{sdd} s and their control electronics was given by Adrian Niculae and Atakan Simsek from PNSensor.
Commissioning and data taking were accomplished with the help of Christian Blume, Raphael Galea, Andrada Ianus, Brodie Mackenzie, Alois Kabelschacht, and Franz Stelzer. \input{fcd_arXiv.bbl} \end{document}
\section{Introduction} \label{sec:intro} The Goldreich-Schubert-Fricke (GSF) instability (\citealt{GoldreichSchubert1967}, hereafter GS67, and \citealt{Fricke1968}) is an axisymmetric hydrodynamic instability that occurs in a shearing background when thermal diffusion from the displaced fluid elements counterbalances the stabilising effect of a positive entropy gradient. Since the advent of helioseismology and asteroseismology and their recent observational achievements, it has gained attention in the literature as one of the effects that are thought to contribute to the transfer of angular momentum (AM) inside stars. Stellar evolution codes (e.g. MESAstar, see \citealt{Paxton2013}) incorporate the GSF instability together with other hydrodynamic and magnetic processes, though not in the form of fundamental dynamics. These effects are modelled as a diffusive term in the equation governing the evolution of the AM distribution. The combined effect of these processes is still unable to reproduce the correct AM flux for subgiants and early red giants (RGs) \citep{Cantiello2014}, at least in the current implementation. Codes are becoming more comprehensive, and other effects, including wave transport, may also soon be included. This approach holds promise, but the physics of the mechanisms that transport AM is often complicated. Increasingly constraining observations from asteroseismology motivate us to revisit these mechanisms and to re-examine with critical attention the approximations used in their modelling. In this paper in particular, we revisit the GSF instability. The circulation induced by the development of the GSF instability is also thought to play a role in the chemical evolution of stars on the asymptotic giant branch (AGB), see e.g. \citet{Herwig2003}, \citet{Siess2004}.
These stars are known to be the main producers of elements heavier than iron, but theoretical models continue to struggle to reproduce the large spread in the heavy-element distribution observed among various AGB stars. \citet{Piersanti2013} attribute this phenomenon to rotational effects, and in particular to the mixing induced by the onset of the GSF instability at the top of the radiative zone. Revisiting the physics of the GSF instability will have an impact on our understanding of the chemical evolution of evolved stars. The manner in which the onset of the GSF instability is thought to cause a sort of zonal flow, which we will refer to as GSF circulation, is detailed in \citet{JamesKahn1970,JamesKahn1971} and is the basis of the diffusion-like approximation. The authors argue that the GSF instability is self-limiting, and provide an estimate for the AM flux in the fluid, after it has adjusted to suppress the growth of the unstable modes. This situation may be analogous to thermohaline convection (i.e. convection driven by destabilising compositional gradients) in low-mass red giants: numerical simulations in the low Prandtl number case have found that a linearly unstable fluid generates only small fingers, which only have a modest effect in mixing the material in the star \citep{Denissenkov2011}. The results by James \& Kahn were then taken up by \citet{EndalSofia1978} in the first evolutionary study of rotating stars with time-dependent redistribution of angular momentum. Since then, the formula for the AM flux due to the GSF circulation has become a standard part of many evolutionary codes. As of this writing, there are no simulations that either confirm or contradict the existence of this circulation. Whether it truly exists and its effectiveness in transporting angular momentum in a star are largely unknown. One of the approximations made by James and Kahn is that viscosity is ignored.
However, an important but subtle feature of the GSF instability is that it can be suppressed by the presence of a very small kinematic viscosity $\nu$, even if the Prandtl number Pr = $\nu / \chi$, where $\chi$ is the thermal diffusion coefficient, is much smaller than unity \citep{Acheson1978, KnoblochSpruit1982, Menou2004}. In a recent paper, we showed that it is likely that the GSF instability is suppressed in the upper radiative zone of the Sun, if the shear is comparable in value to that inferred from helioseismology \citep[hereafter CB16]{CaleoBalbus2016}. In this paper, we will explore in greater detail the conditions under which the GSF instability occurs in differentially rotating stars. We find that when realistic values of $\nu$ are considered, the instability is triggered only in regions of very strong shear, and we determine the minimum shear required to trigger the instability in the radiative zone of the Sun and of three RGs at various evolutionary stages. Recently, a similar study was conducted by \citet{Hirschi2010}. These authors have shown that the GSF instability would be suppressed by a turbulence-induced viscosity $\nu_{\text{turb}}$ in the radiative interior of a massive star. Here we consider instead a physical viscosity, i.e. the molecular viscosity and radiative viscosity. Our understanding of these processes is on a firmer footing than those underlying $\nu_{\text{turb}}$. In a particularly simple case, we also investigate what happens when two of the assumptions behind the work by Goldreich and Schubert are relaxed. We consider a generic GSF-unstable environment, and then include (a) a small deviation from axisymmetry in the form of a finite azimuthal component $k_\phi$ of the wave vector, and (b) a small background magnetic field. This paper is organised as follows. Section 2 revisits the basics of the GSF instability.
Section 3 presents a numerical study of the dispersion relation by GS67 with realistic values of the kinematic viscosity in a variety of stellar environments. Section 4 presents our investigation of the effect of a small deviation from axisymmetry or the presence of a small background magnetic field on an otherwise unstable environment. Section 5 summarizes our results. \section{The Goldreich-Schubert-Fricke instability} \label{sec:basicideas} We make use of both standard spherical coordinates $(r, \phi, \theta)$ and cylindrical coordinates $(R, \phi, z)$. Throughout this paper, we consider a background angular velocity field which is azimuthally symmetric but otherwise arbitrary: $\Omega = \Omega (r, \theta)$ or $\Omega = \Omega(R, z)$. In their seminal paper, Goldreich and Schubert performed a WKB analysis of modes with wavelength $\lambda \ll R$ allowing for finite thermal diffusion and kinematic viscosity in the medium. The perturbations depend on space and time as: \begin{equation} \exp[q \Omega t +\text{i}(k_R R + k_z z)] , \end{equation} where $t$ is time, $q$ is a (complex-valued) wave frequency, and $\bb k = (k_R, 0, k_z)$ is the wave vector. The system is then unstable if the dispersion relation has solutions with a positive real part of $q$.
For ease of reference, we reproduce here in a compact form the dispersion relation describing the evolution of the modes: \begin{equation} \label{GS0} q^3 + A(\bb k) q^2 + B(\bb k) q + C(\bb k) = 0 , \end{equation} where \begin{equation} \label{GS1} A(\bb k) = \frac{k^2}{\Omega} \Big( 2\nu + \frac{1}{\gamma} \chi \Big), \end{equation} \begin{equation} \label{GS2} \begin{aligned} B(\bb k) = & - \Big( \frac{k_z}{k} \Big)^2 \Big[ \frac{1}{\gamma \Omega^2 \rho} (\widetilde D P) (\widetilde D \sigma) + \frac{2}{\Omega R} \widetilde D l \Big] + & \\ & + \frac{2}{\gamma} \Big( \frac{\chi k^2}{\Omega} \Big) \Big( \frac{\nu k^2}{\Omega} \Big) , \end{aligned} \end{equation} \begin{equation} \label{GS3} \begin{aligned} C(\bb k) = & - \Big( \frac{k_z}{k} \Big)^2 \Big( \frac{\nu k^2}{\Omega} \Big) \Big[ \frac{1}{\gamma \Omega^2 \rho} (\widetilde D P) (\widetilde D \sigma) \Big] - & \\ & - \Big( \frac{k_z}{k} \Big)^2 \Big( \frac{\chi k^2}{\Omega} \Big) \Big[ \frac{2}{\gamma \Omega R} \widetilde D l \Big] + & \\ & + \frac{1}{\gamma} \Big( \frac{\chi k^2}{\Omega} \Big) \Big(\frac{\nu k^2}{\Omega} \Big)^2 . \end{aligned} \end{equation} In these equations, $\rho$ is the density of the fluid, $P$ its pressure, $T$ its temperature, $\nu$ the kinematic viscosity, $\sigma = \log{(P \rho^{-\gamma})}$ the entropy variable, $\gamma$ the adiabatic index, $\chi$ the heat conductivity, $l = \Omega R^2$ the angular momentum per unit mass, and $k^2 = k_R^2 + k_z^2$. We introduced the differential operator \citep{Balbus1995}: \begin{equation} \widetilde D = \frac{k_R}{k_z} \frac{\partial}{\partial z} - \frac{\partial}{\partial R} . \end{equation} Inspection of equation \eqref{GS0} immediately shows that there is at least one real positive solution for $q$ if $C(\bb k) < 0$ for any $\bb k$. A necessary stability criterion is therefore $C(\bb k) > 0$. Goldreich and Schubert focused on the small Prandtl number case: Pr = $\nu / \chi \rightarrow 0$. 
In this case, the stability condition reduces to $\widetilde D l < 0$, or in other words: \begin{equation} \frac{k_R}{k_z} \frac{\partial l}{\partial z} - \frac{\partial l}{\partial R} < 0. \end{equation} Since the ratio $k_R / k_z$ can assume any value (either positive or negative), this condition can hold only if \begin{equation} \label{GScond} \frac{\partial l}{\partial R} > 0 \qquad \text{and} \qquad \frac{\partial \Omega}{\partial z} = 0 , \end{equation} i.e.\ the angular momentum must increase outward and the angular velocity must be constant on cylinders. This is the criterion of Goldreich and Schubert, and it provides a very strong constraint: any rotation pattern with $\partial_z \Omega \ne 0$ would be subject to an exponentially growing instability. A naive application of equation \eqref{GS0} with $\nu = 0$ to the upper radiative zone of the Sun gives a growth time-scale of less than 10 years (CB16). Models with $\Omega$ fixed at a given spherical distance from the centre, $\Omega = \Omega(r)$ (``shellular'' rotation, see \citealt{Meynet1997}), are often used in modern stellar codes; all these structures would be unstable unless the star rotates as a rigid body. As we have mentioned in the introduction, however, it was soon realised that the instability may be self-limiting and its consequences less dramatic. In the radiative zone of the Sun, Pr $= \nu/\chi \sim 10^{-6} - 10^{-5}$, so it might appear at first sight that one could neglect the viscosity when studying the onset of the instability. However, as reported above and elsewhere, the ratio of the terms that multiply $\nu$ in the first line of equation \eqref{GS3} to those that multiply $\chi$ in the second line is large. This ratio is about $10^5$ in the radiative zone of the Sun, so that the neglect of the viscous terms is not a good approximation.
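To make the competition between the viscous and thermal terms concrete, the sign of $C(\bb k)$ can be evaluated numerically. The following sketch does this for shellular rotation at 45\ensuremath{^\circ} latitude; all background values ($\chi$, $\nu$, $N^2$, the shear) are order-of-magnitude assumptions for the upper solar radiative zone, not the interpolated stellar-model data used later in the paper.

```python
import numpy as np

# Sketch: test the instability criterion C(k) < 0 (last coefficient of the
# GS67 cubic) for shellular rotation Omega(r) at 45 degrees latitude.
# All background values are rough assumptions for the upper solar
# radiative zone.
Omega = 2.7e-6        # angular velocity [rad/s]
R = z = 3.4e10        # cylindrical coordinates at r ~ 0.7 R_sun, 45 deg [cm]
r = np.hypot(R, z)    # spherical radius [cm]
gamma = 5.0 / 3.0
chi = 1.0e7           # thermal diffusion coefficient [cm^2/s] (assumed)
nu_sun = 15.0         # kinematic viscosity [cm^2/s] (assumed, Pr ~ 1e-6)
N2 = 1.0e-6           # buoyancy frequency squared [1/s^2] (assumed)
s = -0.2              # shear dln(Omega)/dln(r) (assumed)

dOmega_dr = s * Omega / r
dl_dR = 2.0 * Omega * R + R**2 * dOmega_dr * (R / r)   # l = Omega R^2
dl_dz = R**2 * dOmega_dr * (z / r)

def C_coefficient(kR, kz, nu):
    """Coefficient C(k) of eq. (GS3); a real unstable root exists iff C < 0."""
    k2 = kR**2 + kz**2
    mu2 = kz**2 / k2
    # For spherically symmetric P and sigma,
    # (1/gamma rho Omega^2)(DP)(Dsigma) = -(N^2/Omega^2)[(kR z - kz R)/(kz r)]^2,
    # which is always <= 0 (stabilising stratification):
    strat = (N2 / Omega**2) * ((kR * z - kz * R) / (kz * r))**2
    Dl = (kR / kz) * dl_dz - dl_dR
    return (mu2 * (nu * k2 / Omega) * strat
            - mu2 * (chi * k2 / Omega) * (2.0 / (gamma * Omega * R)) * Dl
            + (1.0 / gamma) * (chi * k2 / Omega) * (nu * k2 / Omega)**2)

def gsf_unstable(nu, kz=2.0 * np.pi / 1.0e8):
    """Scan wavevector orientations kR/kz and report whether any C(k) < 0."""
    ratios = np.linspace(-60.0, 60.0, 4801)
    return any(C_coefficient(a * kz, kz, nu) < 0.0 for a in ratios)

print(gsf_unstable(0.0))     # inviscid: unstable whenever dOmega/dz != 0
print(gsf_unstable(nu_sun))  # with a realistic viscosity the scan finds C > 0
```

With $\nu = 0$ the scan recovers the Goldreich--Schubert result (any $\partial_z \Omega \ne 0$ destabilises some wavevector orientation), while the assumed solar viscosity suppresses the instability at this modest shear, reflecting the $\sim 10^5$ ratio of the two bracketed terms quoted above.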
\section{The GSF instability in various stellar models} \label{sec:numericalresults} We discuss here the onset of the GSF instability in stellar environments, retaining the viscosity in equation \eqref{GS3}. For our background states, we have computed evolutionary tracks for the Sun and for a set of RGs. We selected the models of a 1.3 M$_\odot$ star in three different evolutionary states on the RGB (red giant branch), corresponding to a subgiant, an early RG, and a more evolved RG, following the example of \citet{Belkacem2015b}. \subsection{Stellar models} \label{subsec:stellarmodels} We have generated a set of stellar models with the evolutionary code \texttt{PROSECCO} (see \citealt{Tognelli2012}). The main, rather standard, input physics adopted for the computations is detailed in \citet{Tognelli2015b} and references therein. The code generates spherically symmetric stellar models in hydrostatic equilibrium. Convection in superadiabatic regions is treated by the Mixing Length scheme \citep{MixingLength1958}. The models are computed adopting a solar-calibrated value of the mixing length parameter, namely $\alpha_\text{ML} = 1.76$. We adopted a mild overshooting parameter $\lambda_\text{ov}=0.2$ for $M\ge 1.2$ M$_\odot$. All the models are calculated for [Fe/H]=$+0.0$, which translates into an initial helium abundance $Y=0.274$ and a total metallicity $Z=0.013$. Figure \ref{figHR} shows the evolutionary track of the 1.3 M$_\odot$ star in the effective temperature ($T_{\text{eff}}$)--luminosity ($L$) plane, and the position of the three selected models in the HR diagram. The main properties of our models are summarised in table \ref{tab:models}. The last column of the table shows the radius of the radiative region of the star as a fraction of the total radius. In each case, the generation of energy via nuclear fusion, be it in the core (main sequence) or in a shell (RGs), occurs entirely in a zone of radiative transport.
\begin{figure} \centering \includegraphics[width=0.475\textwidth, clip=true, trim=0cm 0cm 0cm 0cm]{pic1_HRDiagram.png} \caption{\label{figHR}Evolutionary track of a 1.3 M$_\odot$ star in the HR diagram. The location of the three selected models is marked in the figure. $T_{\text{eff}}$ is expressed in K.} \end{figure} \begin{table} \centering \begin{tabular}{ c c c c c c } \hline \hline Model & $t$ (Myr) & $R/R_\odot$ & $L/L_\odot$ & $T_{\text{eff}}$ (K) & $R_{\text{rad}} / R$ \\ \hline Sun & 4570 & 1.00 & 1.00 & 5776 & 0.72 \\ RG1 & 3610 & 2.52 & 6.56 & 5822 & 0.70 \\ RG2 & 3730 & 2.88 & 5.18 & 5133 & 0.42 \\ RG3 & 3995 & 7.16 & 23.4 & 4744 & 0.08 \\ \hline \end{tabular} \caption{\label{tab:models}Main characteristics of the selected models: age, radius, luminosity, effective temperature, and location of the radiative-convective boundary.} \end{table} \subsection{Diffusive processes} \label{sec:diff} Whether or not the GSF instability occurs depends on the microscopic diffusion coefficients of the stellar material. The heat flux in the radiative region of a star is of course due to diffusive radiative transfer. Adopting the same notation as GS67, the thermal conductivity for this process is: \begin{equation} \label{chi} \chi = \frac{16 (\gamma - 1) \bar m \sigma_{\text{SB}} T^3}{3 k_\text{B} \kappa \rho^2}, \end{equation} where $\sigma_{\text{SB}}$ is the Stefan-Boltzmann constant, $k_\text{B}$ is the Boltzmann constant, $\gamma$ is the adiabatic index, $\bar m$ is the average particle mass, and $\kappa$ is the opacity. Two processes contribute to the viscosity in the star: the diffusion of particles and the diffusion of photons. We follow the convention of referring to the first as ``molecular'' viscosity (though the particles are of course highly ionised atoms) and to the second as radiative viscosity.
The molecular viscosity is given by \citep{Spitzer1962}: \begin{equation} \label{numol} \nu_{\text{dyn}} \cong 2.2 \times 10^{-15} \frac{T^{5/2}}{\log(\Lambda) \rho} \text{ cm$^2$ s$^{-1}$}, \end{equation} where $\log(\Lambda)$ is the Coulomb logarithm. Values of $\log(\Lambda)$ for given physical properties of the plasma are tabulated by \citet{Spitzer1962}. We adopt $\log(\Lambda) = 4$, an acceptable approximation throughout the radiative zone of the Sun and the radiative zones of the RGs. The radiative viscosity is given by (GS67; see also \citealt{Thomas1930}): \begin{equation} \label{nurad} \nu_{\text{rad}} = \frac{16 \sigma_{\text{SB}} T^4}{15 c^2 \kappa \rho}, \end{equation} where $c$ is the speed of light. Inspection of equations \eqref{chi} and \eqref{nurad} shows that the ratio between $\nu_{\text{rad}}$ and $\chi$ is of order: \begin{equation} \label{chiovernurad} \frac{\nu_{\text{rad}}}{\chi} \sim \Big(\frac{c_\text{s}}{c}\Big)^2 , \end{equation} where $c_\text{s}$ is the isothermal sound speed. The molecular viscosity $\nu_{\text{dyn}}$ is usually more important than $\nu_{\text{rad}}$ in stars. A notable exception, which is relevant to this paper, is the core of RGs. Equation \eqref{chiovernurad} provides a lower limit for the Prandtl number, but in most cases it is not a good approximation for it. The thermal conductivity and kinematic viscosity in the Sun are shown in figures \ref{figChiSun} and \ref{figNuSun}; the same quantities are shown for the model RG1 in figures \ref{figChiRG} and \ref{figNuRG}. The figures for the models RG2 and RG3 are similar to those for RG1 and are not reported here. The Prandtl number in the solar radiative zone lies everywhere between $1 \times 10^{-6}$ (the value near the outer boundary) and $2 \times 10^{-5}$ (core); in the RG1 model it is between $1 \times 10^{-7}$ and $5 \times 10^{-7}$. The noticeable spikes in figure \ref{figChiRG} arise from the details of the opacity function.
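As an illustration, the three diffusion coefficients of equations \eqref{chi}, \eqref{numol} and \eqref{nurad} can be evaluated for rough conditions near the base of the solar convective zone. The values of $T$, $\rho$, $\kappa$ and $\bar m$ below are assumed round numbers, not output of the \texttt{PROSECCO} models.

```python
# Sketch: evaluate chi (eq. chi), nu_dyn (eq. numol) and nu_rad (eq. nurad)
# in cgs units, for assumed conditions near the base of the solar convective
# zone (illustrative values, not taken from the stellar models of the paper).
sigma_SB = 5.6704e-5   # Stefan-Boltzmann constant [erg cm^-2 s^-1 K^-4]
k_B = 1.3807e-16       # Boltzmann constant [erg/K]
c = 2.9979e10          # speed of light [cm/s]
gamma = 5.0 / 3.0

T = 2.3e6        # temperature [K]          (assumed)
rho = 0.2        # density [g/cm^3]         (assumed)
kappa = 20.0     # opacity [cm^2/g]         (assumed)
m_bar = 1.0e-24  # mean particle mass [g]   (assumed)
logLambda = 4.0  # Coulomb logarithm, as adopted in the text

chi = 16.0 * (gamma - 1.0) * m_bar * sigma_SB * T**3 / (3.0 * k_B * kappa * rho**2)
nu_dyn = 2.2e-15 * T**2.5 / (logLambda * rho)
nu_rad = 16.0 * sigma_SB * T**4 / (15.0 * c**2 * kappa * rho)
Pr = (nu_dyn + nu_rad) / chi

print(f"chi    = {chi:.2e} cm^2/s")
print(f"nu_dyn = {nu_dyn:.2e} cm^2/s")
print(f"nu_rad = {nu_rad:.2e} cm^2/s")
print(f"Pr     = {Pr:.1e}")
```

For these assumed conditions one obtains $\chi$ of order $10^7$ cm$^2$ s$^{-1}$, $\nu_{\text{dyn}}$ of order $10$ cm$^2$ s$^{-1}$ dominating over $\nu_{\text{rad}}$, and Pr of order $10^{-6}$, consistent with the range quoted above for the solar radiative zone.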
\begin{figure} \centering \includegraphics[width=0.475\textwidth, clip=true, trim=0cm 0cm 0cm 0cm]{picSunHeatDiffusivity.png} \caption{\label{figChiSun}Thermal conductivity $\chi$ in the radiative zone of the Sun.} \end{figure} \begin{figure} \centering \includegraphics[width=0.475\textwidth, clip=true, trim=0cm 0cm 0cm 0cm]{picSunViscosity.png} \caption{\label{figNuSun}Kinematic viscosity $\nu$ in the radiative zone of the Sun.} \end{figure} \begin{figure} \centering \includegraphics[width=0.475\textwidth, clip=true, trim=0cm 0cm 0cm 0cm]{picRG1HeatDiffusivity.png} \caption{\label{figChiRG}Thermal conductivity $\chi$ in the RG1 model. The vertical grey line shows the boundary between the radiative and convective zone.} \end{figure} \begin{figure} \centering \includegraphics[width=0.475\textwidth, clip=true, trim=0cm 0cm 0cm 0cm]{picRG1Viscosity.png} \caption{\label{figNuRG}Kinematic viscosity $\nu$ in the interior of the RG1 model. The vertical grey line shows the boundary between the radiative and convective zone.} \end{figure} \subsection{Shear required to trigger the GSF instability} \label{sec:numresults} We wish to determine the minimum shear required to trigger the GSF instability in a rotating star. In the case at hand, this may be done rather simply by evaluating the sign of the last term of the dispersion relation by Goldreich \& Schubert, given in equation \eqref{GS3}. We apply this to the stability problem of the radiative zones of the Sun and of the RG models we have earlier described. At a given location, the result will depend on the structural variables of the star (e.g. $\rho, T, \chi, \nu$), the angular velocity $\Omega$, and the shear. In the $\nu = 0$ case, of course, any shear in the $\hat z$ direction would suffice to induce instability. In what follows, it is convenient to express the shear in dimensionless form. 
In cylindrical coordinates, this amounts to calculating $\partial \log \Omega / \partial \log R$ and $\partial \log \Omega / \partial \log z$. However, rotation in spherical shells is often assumed to be a good approximation in the radiative zone of a star, so that only the $r$-derivative $\partial \log \Omega / \partial \log r$ is required. While the observations from asteroseismology provide estimates of $\Omega$, the shear is much less precisely determined. The data on the radiative zone of the Sun are consistent with uniform rotation, with shear present only near the upper boundary, adjacent to the convective zone\footnote{In this paper, when we refer to the ``shear'' in the text, we mean its absolute value. In all cases of interest to us, $\partial \log \Omega / \partial \log r$ is negative.}. A straightforward interpolation of a recent set of helioseismology data gives, at $r = 0.70$ R$_\odot$ \citep{Caleo2015}: \begin{equation} \label{shearSun} \frac{\partial \log \Omega}{\partial \log R} = - 0.11 , \qquad \frac{\partial \log \Omega}{\partial \log z} = - 0.24 . \end{equation} The data are even less constraining for the RGs. \citet{Deheuvels2014} identified two subgiants for which the observations are significantly better reproduced by a curve with a discontinuity in $\Omega$ at a location near the H-burning shell, rather than by a smooth model. The smooth curve would correspond to a shear (inferred from their figure 10) of order $\partial \log \Omega / \partial \log r \sim -1$. The shear near the discontinuity would obviously be larger than this, but it is not currently possible to estimate it with accuracy. The structural quantities for our models have been interpolated from the \texttt{PROSECCO} models. For all models, we adopt an angular velocity value which is both a good approximation for the radiative zone of the Sun, and a reasonable average value for the radiative zone of a typical RG: $\Omega = 2.7 \times 10^{-6}$ rad s$^{-1}$.
At about 20 radial locations in the radiative zone of each star, at low (20\ensuremath{^\circ}), mid (45\ensuremath{^\circ}), and high (70\ensuremath{^\circ}) latitude, we solve equation \eqref{GS0} for a range of values of $k_R, k_z$, for the four sign combinations of $k_R$ and $k_z$. In this way, we determine the minimum value of $\partial \log \Omega / \partial \log r$ for which the inequality $C(\bb k) < 0$ admits solutions. We consider wave vector components in the range: \begin{equation} \label{krange} k_{R}, k_{z}: \ \pm \frac{2 \pi}{10^{-2} R_\odot} \rightarrow \pm \frac{2 \pi}{10^{-14} R_\odot}, \end{equation} and limit our search to values of the shear in the interval: \begin{equation} \label{shearlimits} 0.1 < \Big| \frac{\partial \log \Omega}{\partial \log r} \Big| < 10 . \end{equation} We show our results in figure \ref{figShearSun} for the Sun and figures \ref{figShearRG1} - \ref{figShearRG3} for the RGs. The main feature implied by these figures is that in all cases the onset of the GSF instability in the deep radiative interior is only possible for very strong shear, while it may occur more easily near the outer edge of the radiative zone. In the Sun, a shear of order unity is required even at the upper boundary of the radiative zone. This is consistent with the already noted result that the GSF instability does not occur in the upper radiative zone for values of the shear given by equation \eqref{shearSun} (CB16). On the other hand, a small shear $\partial \log \Omega / \partial \log r \sim 0.1$ may be sufficient to induce instability in the upper radiative zone of the RG models, or in the very inner part of the nuclear core of the Sun. \begin{figure} \centering \includegraphics[width=0.475\textwidth, clip=true, trim=0cm 0cm 0cm 0cm]{picSunShear.png} \caption{\label{figShearSun}Minimum shear required for the onset of the GSF instability in the Sun, as a function of the distance from the centre of the star.
The vertical grey line shows the boundary between the radiative and convective zone. The thin black line is a linear interpolation of the data points. The calculation is limited to the values of equation \eqref{shearlimits}. The horizontal grey line shows the upper value $\partial \log \Omega / \partial \log r = 10$.} \end{figure} \begin{figure} \centering \includegraphics[width=0.475\textwidth, clip=true, trim=0cm 0cm 0cm 0cm]{picRG1Shear.png} \caption{\label{figShearRG1}Minimum shear required for the onset of the GSF instability in the RG1 model. The vertical grey line shows the boundary between the radiative and convective zone.} \end{figure} \begin{figure} \centering \includegraphics[width=0.475\textwidth, clip=true, trim=0cm 0cm 0cm 0cm]{picRG2Shear.png} \caption{\label{figShearRG2}Minimum shear required for the onset of the GSF instability in the RG2 model. The vertical grey line shows the boundary between the radiative and convective zone.} \end{figure} \begin{figure} \centering \includegraphics[width=0.475\textwidth, clip=true, trim=0cm 0cm 0cm 0cm]{picRG3Shear.png} \caption{\label{figShearRG3}Minimum shear required for the onset of the GSF instability in the RG3 model. The vertical grey line shows the boundary between the radiative and convective zone.} \end{figure} \section{Non-axisymmetric perturbations and introduction of a background magnetic field} \label{sec:appendixGS} The GSF instability is traditionally studied in the axisymmetric case. The evolution of non-axisymmetric displacements, i.e.\ displacements with a finite azimuthal component of the wave vector $k_\phi$, is a much more complex problem. In this case, the perturbed equations cannot be reduced to a local dispersion relation. This result is well known to fluid dynamicists and need not be elaborated upon here. We refer the reader to CB16 for a discussion of this issue and further references, and adopt here the same notation as in that paper.
There we derived the set of non-autonomous ordinary differential equations which describe the evolution of the perturbed quantities in the non-axisymmetric case, see equations (51) - (55). We also make use of the same standard, non-rotating background solar model as in that paper \citep{BahcallSerenelliBasu2005}. Non-axisymmetric perturbations are potentially important because they can be more unstable in a rotating system than axisymmetric disturbances. This happens, for example, in rotating convectively unstable systems in which a geostrophic balance eliminates rotational stabilisation, but only for non-axisymmetric modes \citep{BalbusSchaan2012}. We therefore discuss here an idealised case in which the GSF instability is not fully suppressed by the viscosity, and solve those equations to determine what happens when the assumptions of axisymmetry and non-magnetised background are relaxed, exploring the richness and complexity of the general, triple-diffusive problem. For this purpose, we consider an environment with the same properties as the upper radiative zone of the Sun (including the rotation), but an enhanced radiative thermal diffusion coefficient: $\xi_{\text{rad}} = 10^3 \xi_{\text{rad} \odot}$. For clarity, we note that the coefficient $\xi_{\text{rad}}$ of CB16 is related to the coefficient $\chi$ of the current paper by $\xi_\text{rad} = (\gamma - 1) \chi$. \subsection{Axisymmetric perturbations} The axisymmetric case is treated with the techniques of the present paper. We have solved equation \eqref{GS0} in the upper radiative zone of the Sun at $r = 0.70$ R$_\odot$, $\theta = 45 \ensuremath{^\circ}$, with the modification $\xi_{\text{rad}} = 10^3 \xi_{\text{rad}\odot}$. There are unstable modes only when $k_R$ and $k_z$ have opposite signs, and the results for $k_R > 0, k_z < 0$ are the same as those for $k_R < 0, k_z > 0$.
Only modes in a relatively narrow region of the $k_R < 0, k_z > 0$ quadrant are unstable: we show this region and the growth time-scale of the instability in figure \ref{figGS3}. The shortest growth time-scale is found to be given by $\log_{10}(T_{\text{gr}} / s) \approx 6.7$, i.e. $T_{\text{gr}} \approx 5 \times 10^6$ s, with a strong dependence of $T_{\text{gr}}$ on the position in the $k_R - k_z$ plane. \begin{figure} \centering \includegraphics[width=0.5\textwidth, clip=true, trim=0cm 0cm 0cm 0cm]{picGS3.png} \caption{\label{figGS3}Region of the $k_R < 0, k_z > 0$ plane where the GSF instability occurs in the Sun and growth time-scale $T_{\text{gr}}$ for the axisymmetric displacements, for $r = 0.7 R_\odot$, $\theta = 45 \ensuremath{^\circ}$, with $\xi_{\text{rad}} = 10^3 \xi_{\text{rad} \odot}$ and $\nu = \nu_\odot$. The numbers next to the iso-contours correspond to the value of $\log_{10}(T_{\text{gr}})$ with $T_{\text{gr}}$ expressed in seconds.} \end{figure} \subsection{Non-axisymmetric perturbations} As noted above, the evolution of perturbations with finite $k_\phi$ is not described by a plane wave dispersion relation, but rather determined by solving equations (51) - (55) of CB16. However, a qualitative understanding of their behaviour can be obtained by noting that perturbations with a finite but small $k_\phi$ will still show a predominantly axisymmetric behaviour, albeit with values of $k_R$ and $k_z$ that are not constant in time. The classical dispersion relation by GS67 is therefore still of some use in understanding how these displacements behave, provided that the time dependence of $k_R$ and $k_z$ is correctly included in it. 
At large times, all perturbations formally evolve towards a quasi-axisymmetric state with a wave vector of the form (see CB16, equation (16)): \begin{equation} \label{eventualkRkz} k_R (t) \rightarrow - A k_\phi t \frac{\partial \Omega}{\partial R}, \qquad k_z (t) \rightarrow - A k_\phi t \frac{\partial \Omega}{\partial z} , \end{equation} for some value of $A$. Equivalently, they evolve towards a state with \begin{equation} \label{eventualkRkzratio} \frac{k_R}{k_z} = \frac{\partial \Omega / \partial R}{\partial \Omega / \partial z} . \end{equation} We explored the stability of axisymmetric perturbations that adhere to the constraint \eqref{eventualkRkzratio} in the range of amplitudes $|\bb k| = 10^{-9} - 10^{-1}$ cm$^{-1}$ and co-latitudes $\theta = 10 \ensuremath{^\circ} - 80 \ensuremath{^\circ}$. We found no unstable modes: \emph{all the non-axisymmetric modes are eventually stable}. The next natural step of this analysis is to study the transient phase, to assess the presence of large initial growths that occur before the perturbation moves out of the unstable region of the $k_R - k_z$ plane. The following empirical argument allows us to estimate a threshold on $|k_\phi|$ below which the perturbation can grow by a large factor. We may visualize the position of the perturbation as a point in figure \ref{figGS3}, which moves over the course of time. The perturbation will have a large growth phase if it remains in an unstable region of the plane for a time that is much longer than the growth time-scale in such a region. The size of the unstable region of the plane is bounded by $|\Delta k| \lesssim 10^{-4}$ cm$^{-1}$. The position of the perturbation in the region changes with constant wavenumber velocity: \begin{equation} \label{kRkzvelocities} \dot k_R = - k_\phi R \frac{\partial \Omega}{\partial R}, \qquad \dot k_z = - k_\phi R \frac{\partial \Omega}{\partial z} .
\end{equation} The condition that the perturbation remains in the region of near-maximum growth for a time much longer than the growth time-scale, say $\Delta t = 10 T_{\text{gr}}$, gives the constraint: \begin{equation} |k_\phi| < \frac{\Delta k}{R |\bb \nabla \Omega|} \frac{1}{\Delta t} \lesssim 10^{-6} \text{ cm}^{-1} . \end{equation} In fact, a detailed exploration of the modes in the unstable region shows that only modes with $|k_\phi| \lesssim 10^{-7}$ cm$^{-1}$ are able to grow by many orders of magnitude. Since the most unstable region of the plane resides at $k_R > 10^{-5}$ cm$^{-1}$, only displacements that are strongly axisymmetric are found to be unstable enough for the GSF instability to have the time required to affect them. It is possible to identify perturbations that grow by a large factor before becoming stable. This may be understood on the basis of the axisymmetric theory. We show in figure \ref{figGS4} the evolution of $\delta v_R (t) / \delta v_R(0)$ for a perturbation with initial wave vector components $k_{R0} = - 10^{-4.5}$ cm$^{-1}$, $k_{\phi} = 10^{-7}$ cm$^{-1}$, and $k_{z0} = 10^{-7}$ cm$^{-1}$. As the wave vector of the perturbation changes, it eventually moves out of the unstable region of the $k_R - k_z$ plane. The growth is reversed when the perturbation emerges from the unstable region. We show in figure \ref{figGS5} the path of the perturbation in the $k_R - k_z$ plane. Finally, we show in figure \ref{figGS6} the growth rate $T_{\text{gr}}^{-1}$ for the time interval in which the perturbation is in the unstable region. Non-axisymmetric disturbances are not intrinsically unstable on their own; they show growth only to the extent that the instantaneous poloidal wavenumbers would be unstable in an axisymmetric calculation.
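The threshold estimate above can be reproduced numerically. The sketch below assumes the solar shear components of equation \eqref{shearSun} at 45\ensuremath{^\circ} latitude, together with the growth time-scale and unstable-region size quoted in the text; these inputs are order-of-magnitude values, so only the order of the result is meaningful.

```python
import numpy as np

# Sketch of the empirical |k_phi| threshold: the wavenumber drift of
# eq. (kRkzvelocities) must keep the perturbation inside the unstable region
# (size Delta_k) for Delta_t = 10 T_gr. Solar values at r ~ 0.7 R_sun and
# 45 degrees latitude are assumed; shear components follow eq. (shearSun).
Omega = 2.7e-6          # angular velocity [rad/s]
R = z = 3.4e10          # cylindrical coordinates [cm]
dlnO_dlnR, dlnO_dlnz = -0.11, -0.24
dOmega_dR = dlnO_dlnR * Omega / R
dOmega_dz = dlnO_dlnz * Omega / z

T_gr = 5.0e6            # shortest axisymmetric growth time-scale [s]
Delta_t = 10.0 * T_gr   # required residence time in the unstable region [s]
Delta_k = 1.0e-4        # size of the unstable region [1/cm]

grad_Omega = np.hypot(dOmega_dR, dOmega_dz)
k_phi_max = Delta_k / (R * grad_Omega * Delta_t)
print(f"|k_phi| threshold ~ {k_phi_max:.1e} cm^-1")

# Drift speed of (k_R, k_z) for a given k_phi, and the resulting crossing time:
k_phi = 1.0e-7                      # [1/cm], the value used in figure figGS4
kdot = k_phi * R * grad_Omega       # |(kdot_R, kdot_z)| [cm^-1 s^-1]
t_cross = Delta_k / kdot            # time spent in the unstable region [s]
print(f"t_cross / T_gr ~ {t_cross / T_gr:.0f}")
```

The threshold comes out at a few $\times 10^{-6}$ cm$^{-1}$, consistent with the $\lesssim 10^{-6}$ cm$^{-1}$ estimate above, and a mode with $k_\phi = 10^{-7}$ cm$^{-1}$ indeed spends many growth times inside the unstable region.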
\begin{figure} \centering \includegraphics[width=0.5\textwidth, clip=true, trim=0cm 0cm 0cm 0cm]{picGS4.png} \caption{\label{figGS4}Evolution of $\delta v_R(t)$ for a perturbation with $k_{R0} = - 10^{-4.5}$ cm$^{-1}$, $k_{\phi} = 10^{-7}$ cm$^{-1}$, and $k_{z0} = 10^{-7}$ cm$^{-1}$. The unit on the temporal axis is expressed in terms of $\Omega^{-1} = 3.77 \times 10^5$ s, corresponding to about 4.4 days.} \end{figure} \begin{figure} \centering \includegraphics[width=0.5\textwidth, clip=true, trim=0cm 0cm 0cm 0cm]{picGS5.png} \caption{\label{figGS5}Path tracked by the perturbation of figure \ref{figGS4} in the $k_R - k_z$ plane. The initial position of the perturbation is marked by the red dot in the unstable region of the plane, while the final position is marked by the other red dot. The position of the wave vector does not proceed linearly with time on the segment, due to the logarithmic scale adopted for the axes.} \end{figure} \begin{figure} \centering \includegraphics[width=0.5\textwidth, clip=true, trim=0cm 0cm 0cm 0cm]{picGS6.png} \caption{\label{figGS6}Growth rate $T_{\text{gr}}^{-1}$ for the displacement of figure \ref{figGS4}, reported for as long as it stays in the unstable region of the $k_R - k_z$ plane. The growth of the perturbation is halted when the growth rate reaches 0. The unit on the temporal axis is as in figure \ref{figGS4}.} \end{figure} \subsection{The stabilizing effect of $\bb B$} \label{sec:GSmag} Finally, we discuss the effect of a finite background magnetic field $\bb B$. $\bb B$ appears in equations (51) - (55) of CB16 only via the constant term $\bb k \bcdot \bb v_A$. It is convenient to compare $\bb k \bcdot \bb v_A$ and $\Omega$. As expected, we found that the behaviour of the perturbation in figures \ref{figGS4} - \ref{figGS6} is unchanged for values of $\bb k \bcdot \bb v_A \ll \Omega$. 
However, when $\bb k \bcdot \bb v_A$ is increased beyond $0.1 \, \Omega$, we find that the growth is rapidly inhibited, disappearing for $\bb k \bcdot \bb v_A \approx \Omega$. We show in figure \ref{figGS7} the evolution of $\delta v_R (t) / \delta v_R(0)$ for the same perturbation in the case $\bb k \bcdot \bb v_A = \Omega$. \begin{figure} \centering \includegraphics[width=0.5\textwidth, clip=true, trim=0cm 0cm 0cm 0cm]{picGS7.png} \caption{\label{figGS7}Evolution of $\delta v_R(t)$ for the perturbation with $k_{R0} = - 10^{-4.5}$ cm$^{-1}$, $k_{\phi} = 10^{-7}$ cm$^{-1}$, and $k_{z0} = 10^{-7}$ cm$^{-1}$ in the magnetic case $\bb k \bcdot \bb v_A = \Omega$. The units are as in figure \ref{figGS4}.} \end{figure} A qualitative understanding of the role of $\bb B$ is as follows. Once the effect of the background magnetic field becomes comparable to that of the rotation, the diffusivity term that contributes most to the damping is the resistivity $\eta$, rather than the viscosity. We have adopted the same value of $\eta$ as in CB16, $\eta \sim 6 \times 10^2$ cm$^2$ s$^{-1}$, so that $\eta$ is at least one order of magnitude larger than $\nu$ in the problem at hand. This makes the damping more efficient in the magnetised case. Interestingly, the condition $\bb k \bcdot \bb v_A \ge \Omega$ for the wave vector of the perturbation in figures \ref{figGS4} - \ref{figGS6} gives $\bb v_A \gtrsim 0.1$ cm s$^{-1}$, a value many orders of magnitude smaller than the sound speed in the medium and the rotational velocity. The corresponding magnetic field is $|\bb B| \gtrsim 0.2$ G. Even if the radiative diffusion coefficient were much higher than in the Sun, a very small magnetic field would be sufficient to suppress the GSF instability. \section{Conclusions} The GSF instability is often considered to be one of the sources of angular momentum transport in stellar interiors.
It is thought to play a (modest) role in determining the time evolution of the angular velocity $\Omega(r, \theta)$ in the star, and to affect the mixing of chemical elements in the upper radiative zone of stars on the AGB. This instability is typically incorporated in stellar evolution codes in a diffusion-like approximation that assumes an inviscid background. We have shown here that this approximation is not supported by a detailed analysis of the linear stability problem: when realistic values of the kinematic viscosity are accounted for, the GSF instability is suppressed in the bulk of the radiative zone of both the Sun and RGs at various evolutionary stages. Finally, in a specific case of an environment which would normally be GSF-unstable, we have investigated the effect of a small deviation from axisymmetry and the presence of a small background magnetic field. Both these effects appear to have a stabilising influence. \bibliographystyle{mn2e}
\section{Introduction}\label{sec:introduction} While to us as human beings the ground on which we walk may appear ``rock-solid'', the surface of our planet is actually in constant, albeit very slow, motion. Continental plates move at a rate of centimetres per year. This movement is driven by enormous forces acting deep below our feet. Convective processes in the Earth's mantle help the planet rid itself of excess energy that is either left over from the time of its formation or generated by continued radioactive decay. The mantle is a layer of Earth starting from below the crust at roughly \SI[group-separator = {,},group-minimum-digits = 4]{60}{km} and extending down to the core-mantle boundary at a depth of about \SI[group-separator = {,},group-minimum-digits = 4]{3000}{km}. On geologic time-scales the rocks inside the mantle behave like a highly viscous fluid. A single overturn of the material in the mantle takes about 100 million years. A detailed understanding of these processes is of fundamental interest to geophysics, as they are the driving force behind phenomena such as plate tectonics, the building of mountains and oceans, volcanism, and earthquakes. As the mantle is not accessible for direct measurements, studies of its convection rely mostly on simulation and form an active research topic in \gls*{cfd}. The requirements on spatial and temporal resolution render the solution of the underlying system of \glspl*{pde} a grand challenge in computational science \cite{Burstedde:2013:GJI,Bauer:2020:SPPEXA}. The combination of extremely viscous material, characteristic length scale, and creeping flow of the Earth's mantle results in a Reynolds number on the order of $10^{-15}$ \cite{Ricard:2007:Treatise}, and the Stokes equations are suitable to model momentum and mass balance. Conservation of energy can be described by an equation of advection--diffusion type for the temperature.
In a buoyancy-driven flow the dimensionless Rayleigh number Ra describes the vigor of convection. For the Earth's mantle, Ra is estimated to lie between $10^7$ and $10^8$ \cite{Ricard:2007:Treatise}. In that range, temperature transport is mainly driven by fluid flow (advection) and much less by diffusive effects. In this paper we are interested in the numerical treatment of this kind of equation in the advection-dominated regime. While the temperature equation of mantle convection forms our focal point, such transport problems also appear, of course, in many other applications in \gls*{cfd} \cite{Chen:2006:SIAM,Morton:2019:CRC}. Although the quantity of interest varies, the main characteristics of the underlying equation remain the same. Typical transported variables include, for example, chemical species concentration, material markers, or isotope ratios. The solution of the advection-diffusion equation is known to be challenging in the advection-dominated regime, for instance due to stability issues at high gradients or even discontinuities in the solution \cite{Quarteroni:2008:Springer,Elman:2014:OUP}. Well-known and established methods for the numerical treatment of advection-diffusion equations include the \gls*{supg} method \cite{Brooks:1982:CMAME}, where, for stability reasons, artificial diffusion is introduced into the solution. A more recent approach in the same direction is the entropy viscosity method, see e.g.~\cite{Kronbichler:2012:GJI} and references therein. \Gls*{afc} approaches the problem by modification of the equations at the algebraic level \cite{Kuzmin:2012:Springer}. A comparison of \gls*{supg}, \gls*{afc} and other finite-element based methods for advection-dominated transport is presented in \cite{John:2008:CMAME}. High-order, discontinuous Galerkin discretizations \cite{Cockburn:1998:SINUM,Reinarz:2020:CPC} are attractive as they are naturally well-suited to represent discontinuous solutions.
However, the selection of adequate slope limiters and the large number of unknowns that are introduced may be problematic. A fundamentally different approach to the discretization of advection-diffusion equations is provided by so-called \emph{Lagrangian} or \emph{characteristic} methods. Instead of employing a fixed, \emph{Eulerian} grid, the advected property is captured by particles or volumes that move along the characteristics of the velocity field. Usually, Eulerian and Lagrangian discretization approaches are combined by means of a splitting technique, where the advective term is treated by a Lagrangian, and the diffusive term by an Eulerian discretization. Solutions need to be interpolated between these two domains. These approaches are also called \glspl*{elm}. Two prominent implementations of this category are the \gls*{mmoc} \cite{Douglas:1982:SINUM,Allievi:2000:IJNMF,Malevsky:1991:PFA,ElAmrani:2008:IJCM} (also referred to as the characteristic Galerkin method or Lagrange-Galerkin method) and the \gls*{ellam} \cite{Celia:1990:AWR,Russell:2002:AWR}. The \gls*{mmoc} is based on backtracking particles along the characteristics, where the transported quantity for the next time step is evaluated. This method permits large time steps, is free from parameterization, and is conceptually easy to understand. Being particle-based, it requires frequent evaluation (or interpolation) of the solution function away from the grid nodes. In general, the \gls*{mmoc} is not perfectly energy-conserving. A scheme to enforce global energy conservation is developed in \cite{Douglas:1999:NUMA}. Numerical analysis of the accuracy and stability of the \gls*{mmoc} is found in \cite{Dawson:1989:SINUM,Bermudez:2006pt1:SINUM}.
Note that by following characteristics backwards in time, the \gls*{mmoc} is conceptually different from the particle/marker-in-cell techniques often employed in geodynamical flow simulations for advecting quantities like chemical composition or water content \cite{Gassmoeller:2019:GJI}. It also avoids some of their pitfalls, such as the question of particle concentration per cell. The only investigation of \gls*{mmoc}-based methods for geodynamical flows appears to be \cite{Malevsky:1991:PFA}. \gls*{ellam} may provide local energy conservation by propagating volumes instead of particles. This class of methods has similar advantages to the \gls*{mmoc}, but the integration over elements that are not aligned with the grid may be difficult, in particular in parallel implementations, and thus it can be computationally expensive. In this article, our focus is on an \gls*{elm} based on the \gls*{mmoc} that is suited for massively parallel simulations on state-of-the-art supercomputers. The parallel algorithms and data structures used in our implementation build upon the concept of \gls*{hhg} \cite{Bergen:2004:NLAA,Bauer:2020:SPPEXA}, addressing extremely scalable, matrix-free geometric multigrid solvers on block-structured grids. With mantle convection models as the target application, a prototype application has demonstrated scalability of Stokes solvers for systems with more than $10^{13}$ unknowns \cite{Gmeiner:2016:JoCS}. New matrix-free methods \cite{Bauer:2017:ANM,Bauer:2018:SISC}, performance and scalability \cite{Gmeiner:2015:SISC,Gmeiner:2016:JoCS,Kohl:2020:arXiv}, and applications to geophysical problems \cite{Bauer:2019:JoCS} have been studied, mainly focusing on the solution of the Stokes system. The \gls*{elm} proposed in this article is developed to exploit and extend the excellent scalability of the \gls*{hhg}-based solvers for time-dependent mantle-convection problems.
Parallel implementations of \glspl*{elm} have been designed for various applications, including research on sea ice \cite{Samake:2017:JCP}, the Navier-Stokes equations \cite{Ouro:2019:CAF,Tavelli:2019:IJNMF}, and natural convection \cite{Busto:2020:CAF}. In the latter, a target application similar to this work is considered on unstructured meshes, and an \gls*{elm} is used both for the advection terms in the energy equation and for the discretization of the Navier-Stokes system itself. However, only moderate scalability with up to \numINT{1000} parallel processes was demonstrated. To quantitatively and accurately predict the convection patterns of Earth's mantle, however, extreme-scale parallel simulations are necessary, as for instance a global spatial resolution of $\sim$\SI{1.7}{km} results in linear systems with more than a trillion ($10^{12}$) \glspl*{dof} \cite{Bauer:2019:JoCS}. Such problems require methods that can efficiently exploit the resources of today's peta- and future exascale supercomputers. With the proposed method, we demonstrate the scalability of \gls*{elm}-based time-dependent simulations for more than a hundred thousand parallel processes.
\paragraph{Contribution} In this paper we will (a) present a particle-based, massively parallel method for the advection-diffusion equation based on the \gls*{mmoc} that is applicable to curved geometries and largely independent of the underlying grid data structures and spatial discretization, (b) embed the method into a block-structured finite-element framework based on \gls*{hhg}, (c) quantify the accuracy and energy conservation of our approach through multiple two- and three-dimen\-sio\-nal benchmarks with different spatial finite-element discretizations, discontinuous solutions, pure advection, curved domains, large time steps, \gls*{cfl} numbers $ > 1$, and coupled buoyancy-driven flow, and (d) demonstrate the extreme scalability of the approach on up to \numINT{147456} parallel processes and more than \num{5.2e10} particles, as well as an application to a simplified mantle convection setup. \paragraph{Reproducibility} All presented algorithms and benchmarks are implemented in the open-source software framework \gls*{hyteg}\footnote{\url{https://i10git.cs.fau.de/hyteg/hyteg}} \cite{Kohl:2019:IJPEDS,Kohl:2020:arXiv,HyTeG:2021:SW}, ensuring reproducibility of the results.
\subsection*{Governing equations} We consider the numerical approximation of the advection-diffusion equation on a bounded domain $\Omega \subset \mathbb{R}^d,\ d \in \{2,3\}$, and time interval $[0, T], T \in \mathbb{R}^+$ \begin{align}\label{eq:advection-diffusion-pde} \frac{\partial}{\partial t} c + \mathbf{u} \cdot \nabla c - \kappa \Delta c = q, &\quad (\mathbf{x}, t) \in \Omega \times [0, T] \end{align} where $c = c(\mathbf{x}, t)$ represents the advected, scalar quantity (the temperature in the case of our target application), $\mathbf{u} = \mathbf{u}(\mathbf{x}, t)$ a given divergence-free velocity field, \mbox{i.\,e.}\xspace satisfying \begin{align} \nabla \cdot \mathbf{u} = 0, \quad (\mathbf{x}, t) \in \Omega \times [0, T]\enspace, \end{align} $q = q(\mathbf{x}, t)$ the given rate of internal heat production, and $\kappa \geq 0$ a diffusivity parameter. Initial, Dirichlet, and (homogeneous) Neumann boundary conditions for the temperature $c$ are given by \begin{equation} c(\mathbf{x}, 0) = c_0(\mathbf{x}),\ \mathbf{x} \in \Omega, \quad c(\mathbf{x}, t) = c_{\Gamma}(\mathbf{x}, t),\ \mathbf{x} \in \partial \Omega_D, \quad \frac{\partial c}{\partial \mathbf{n}}(\mathbf{x}, t) = 0,\ \mathbf{x} \in \partial \Omega_N \end{equation} for $t \in [0, T]$, boundary $\partial \Omega = \partial \Omega_D \cup \partial \Omega_N$, and outward normal $\mathbf{n}$. For the sake of simplicity, we require that the velocity field has no inflow into the domain. In typical applications, the advective term $\mathbf{u} \cdot \nabla c$ strongly dominates over the diffusive term $\kappa \Delta c$. Depending on the formulation and non-dimensionalization of the model, this translates to either $\kappa \ll 1$ or large velocity magnitudes. The advection-diffusion equation can be coupled to the Stokes equations for viscous flows using the Boussinesq approximation for natural convection, as will be described in \cref{sec:coupled-flow}.
\section{Eulerian-Lagrangian method} In this section we describe the parallel algorithms and data structures of the \gls*{mmoc}-based method for the advection-diffusion equation \cref{eq:advection-diffusion-pde}. \subsection{Hierarchical hybrid grids}\label{sec:domain} We base the construction of the computational mesh on the concept of \gls*{hhg} \cite{Bergen:2004:NLAA,Bauer:2020:SPPEXA}. To this end, we define a coarse unstructured mesh $\mathcal{T}_0$ of tetrahedral (or triangular) elements that partitions the domain $\Omega$. In a second step, each coarse grid element is uniformly refined according to \cite{Bey:1995:Tetrahedral}. This results in a hierarchy of block-structured meshes $\mathcal{T} = \{\mathcal{T}_\ell,\, \ell = 0, ..., L\}$ and offers crucial performance advantages for matrix-free multigrid methods, as demonstrated especially for the Stokes system \cite{Kohl:2020:arXiv,Bauer:2020:SPPEXA,Bauer:2017:ANM,Bauer:2018:SISC}. If the problem domain $\Omega$ is polyhedral, we can define a set of coarse grid elements whose union equals $\Omega$. However, in this article we also consider the more general case in which $\Omega$ coincides with the image of a polyhedral domain under a \emph{blending function} $\Phi$. In particular, we are interested in domains with curved boundaries, such as the thick spherical shell that is used to represent Earth's mantle in geophysical models \cite{Bauer:2019:JoCS,Rudi:2015:SC}. We require $\Phi$ to be a homeomorphism and its inverse to be known explicitly. To construct the grid hierarchy for this second case, we start from an approximation of the \emph{physical domain} $\Omega_\text{phy} := \Omega$ by a polyhedral, \emph{computational domain} $\Omega_\text{comp}$ (\mbox{i.\,e.}\xspace $\Phi(\Omega_\text{comp}) = \Omega_\text{phy}$). This polyhedral domain is then refined as outlined above, yielding a mesh hierarchy $\mathcal{T} = \{\mathcal{T}_\ell,\, \ell = 0, ..., L\}$.
Finally, by applying our blending function to each mesh $\mathcal{T}_\ell$ we obtain a hierarchy $\widetilde{\mathcal{T}} := \{\Phi(\mathcal{T}_\ell),\, \ell = 0, \ldots, L\}$ for $\Omega_\text{phy}$. Obviously, applying this algorithm to a polyhedral physical domain $\Omega_\text{phy}$ corresponds to the special case $\Phi = \mathrm{Id}$ with $\Omega_\text{phy} = \Omega_\text{comp}$. \Cref{fig:annulus-domain} shows an example in which the computational domain is mapped onto an annulus. The left figure shows the initial, unrefined, unstructured computational mesh $\mathcal{T}_0$, and the right figure the corresponding physical mesh $\Phi(\mathcal{T}_3)$ after three refinement iterations. \begin{figure}[!ht] \footnotesize \centering \begin{subfigure}[t]{0.48\textwidth} \centering \includegraphics[width=0.8\textwidth]{figures/annulus_comp_level_0.png} \caption{$\mathcal{T}_0$} \label{fig:annulus-domain-comp} \end{subfigure} \hfill \begin{subfigure}[t]{0.48\textwidth} \centering \includegraphics[width=0.8\textwidth]{figures/annulus_phy_level_2.png} \caption{$\Phi(\mathcal{T}_3)$} \label{fig:annulus-domain-phy} \end{subfigure} \footnotesize \caption{Partitioning of an annular domain: (\protect\subref{fig:annulus-domain-comp}) unstructured, initial computational mesh before refinement, (\protect\subref{fig:annulus-domain-phy}) refined mesh mapped to the physical domain.} \label{fig:annulus-domain} \end{figure} Efficient and scalable, matrix-free solvers for scalar elliptic PDE problems and Stokes flow on curved domains in conjunction with \gls*{hhg} have been presented in \cite{Bauer:2018:SISC,Bauer:2017:ANM}. \subsection{Discretization of the advection-diffusion equation}\label{sec:discretization} The essence of the \gls*{mmoc} is the elimination of the advective term $\mathbf{u} \cdot \nabla c$ from \cref{eq:advection-diffusion-pde}.
For this, we define the so-called \emph{characteristics} $\mathbf{X} : \Omega \times [0, T]^2 \rightarrow \mathbb{R}^d$ of the velocity field $\mathbf{u}$ as the solutions of \begin{equation}\label{eq:characteristic-curves} \begin{aligned} \frac{d}{dt} \mathbf{X}(\mathbf{x}, s, t) &= \mathbf{u}(\mathbf{X}(\mathbf{x}, s, t), t), \quad t \in (0, T) \\ \mathbf{X}(\mathbf{x}, s, s) &= \mathbf{x} \end{aligned} \end{equation} for fixed $(\mathbf{x}, s) \in \Omega \times [0, T]$. For two points in time $t_0, t_1 \in [0, T]$ with $t_0 < t_1$, the point $\mathbf{X}(\mathbf{x}, t_1, t_0)$ can be interpreted as the \emph{departure point} at time $t_0$ of a particle that reaches the point $\mathbf{x}$ at time $t_1$. Such a departure point is, thus, given by \begin{align}\label{eq:departure-point} \mathbf{X}(\mathbf{x}, t_1, t_0) = \mathbf{x} - \int_{t_0}^{t_1} \mathbf{u}(\mathbf{X}(\mathbf{x}, t_1, t), t) \, dt. \end{align} We now define, for a fixed time $s \in [0, T]$, \begin{align} \hat{c}(\mathbf{x}, t) := c(\mathbf{X}(\mathbf{x}, s, t), t) \end{align} and calculate, using the chain rule and \cref{eq:characteristic-curves}, \begin{align}\label{eq:material-derivative} \frac{\partial}{\partial t} \hat{c} (\mathbf{x}, t) = \left( \frac{\partial}{\partial t} c + \mathbf{u} \cdot \nabla c \right) (\mathbf{X}(\mathbf{x}, s, t), t). \end{align} At time $t = s$ we can replace the advective term in \cref{eq:advection-diffusion-pde}, since \begin{align}\label{eq:material-derivative-t-eq-s} \frac{\partial}{\partial t} \hat{c} (\mathbf{x}, s) = \left( \frac{\partial}{\partial t} c + \mathbf{u} \cdot \nabla c \right) (\mathbf{x}, s), \end{align} and reformulate the PDE as \begin{align}\label{eq:advection-diffusion-pde-reformulated} \frac{\partial}{\partial t} \hat{c} - \kappa \Delta c = q. \end{align} Next, we semi-discretize \cref{eq:advection-diffusion-pde-reformulated} in time.
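The backward integration in \cref{eq:departure-point} can be illustrated by a minimal, self-contained Python sketch (illustrative only; the function names are ours and this is not part of the \gls*{hyteg} implementation). For a rigid-body rotation the characteristics are circles, so the numerically computed departure point can be compared against the exact one:

```python
import math

def velocity(x):
    # Rigid-body rotation about the origin with angular velocity 1;
    # its characteristics are circles, so departure points are known exactly.
    return (-x[1], x[0])

def departure_point(x, tau, steps=1):
    """Approximate X(x, t_{n+1}, t_n) by integrating dX/dt = u(X)
    backwards in time with the classical 4th-order Runge-Kutta scheme."""
    h = -tau / steps                      # negative step: backwards in time
    y = (x[0], x[1])
    for _ in range(steps):
        k1 = velocity(y)
        k2 = velocity((y[0] + 0.5 * h * k1[0], y[1] + 0.5 * h * k1[1]))
        k3 = velocity((y[0] + 0.5 * h * k2[0], y[1] + 0.5 * h * k2[1]))
        k4 = velocity((y[0] + h * k3[0], y[1] + h * k3[1]))
        y = (y[0] + h / 6.0 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
             y[1] + h / 6.0 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))
    return y

# The exact departure point is obtained by rotating x clockwise by tau.
tau = 0.1
p = departure_point((1.0, 0.0), tau)
exact = (math.cos(tau), -math.sin(tau))
```

Since the velocity field here is steady, no interpolation in time is needed; the general, time-dependent case is discussed in the implementation section below.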
To this end, we divide $[0, T]$ into $N$ intervals $[t_n, t_{n+1}]$, $n \in \{0, \dots, N-1\}$ with step size $\tau_n = t_{n+1} - t_n$. We then set $\mathbf{x} = \mathbf{X}(\mathbf{x}, t_{n+1}, t_{n+1})$ (or $s = t_{n+1}$ in \cref{eq:material-derivative}) and approximate the time derivative via a difference quotient \begin{align}\label{eq:mmoc-approximation} \frac{\partial}{\partial t}\hat{c}(\mathbf{x}, t_{n+1}) \approx \frac{1}{\tau_n} \Big[ \hat{c}(\mathbf{x}, t_{n+1}) - \hat{c}(\mathbf{x}, t_{n}) \Big] = \frac{1}{\tau_n} \Big[ c(\mathbf{x}, t_{n+1}) - c(\mathbf{X}(\mathbf{x}, t_{n+1}, t_{n}), t_{n}) \Big]. \end{align} We perform the spatial discretization of the temperature and velocity fields using the standard Galerkin finite element method subject to the \gls*{hhg} grid hierarchy described in \cref{sec:domain}. We therefore introduce the spaces of piecewise polynomial functions \begin{align} \mathcal{S}_\ell^{m} := \{ v \in \mathcal{C}^0(\Omega) : \restr{v}{T} \in \mathcal{P}_m(T),\ \forall \ T \in \mathcal{T}_\ell \}, \quad \ell \in \{0, ..., L\}, \ m \geq 1. \end{align} Here, $\mathcal{P}_m(T)$ denotes the space of polynomials of degree $m$ on the element $T$. Let $V_h := \mathcal{S}_{L}^{m} \cap \mathcal{H}^1_0(\Omega)$ be a finite dimensional subspace of $\mathcal{H}^1_0(\Omega)$ with piecewise polynomial basis functions that vanish on the boundary. In particular, we employ the standard sets of Lagrange basis functions $P_m$ for polynomial degree $m$ \cite{Elman:2014:OUP}. Furthermore, given a function $c_{\Gamma} := c_{\Gamma}(\mathbf{x}, t)$ that defines suitable Dirichlet boundary conditions, let $V^D_h := \mathcal{S}_{L}^{m} \cap \mathcal{H}^1_D(\Omega)$ with $\mathcal{H}^1_D := \{ v_h \in \mathcal{H}^1(\Omega) : v_h = c_{\Gamma} \text{ on } \partial \Omega_D \}$. We apply the $\Theta$-method for the time discretization of the diffusive term \cite{Quarteroni:2008:Springer}.
The finite dimensional version of the weak formulation of \cref{eq:advection-diffusion-pde-reformulated} then reads: given $\hat{c}_h^{n} = \hat{c}_h^{n}(\mathbf{x}) \in V^D_h$, find $c_h^{n+1} = c_h^{n+1}(\mathbf{x}) \in V^D_h$ so that \begin{equation}\label{eq:finite-dimensional-galerkin-approximation} \begin{aligned} \frac{1}{\tau_n} (c_h^{n+1} - \hat{c}_h^{n}, v_h) &+ \Theta \kappa (\nabla c_h^{n+1}, \nabla v_h) + (1 - \Theta) \kappa (\nabla \hat{c}_h^{n}, \nabla v_h) \\ &= (\Theta q(t_{n+1}) + (1 - \Theta) q(t_n), v_h), \quad \text{for all } v_h \in V_h \end{aligned} \end{equation} and $c_h^{0} = c_{0,h}$. $(\cdot, \cdot)$ denotes the inner product in $L^2(\Omega)$ and $\Theta \in [0, 1]$. This corresponds to an implicit Euler or Crank-Nicolson scheme for the diffusive term, for $\Theta = 1$ or $\Theta = 0.5$, respectively. For the formulation of the bilinear and linear forms in the case of a blended domain, \mbox{i.\,e.}\xspace $\Phi \neq \mathrm{Id}$, we refer to \cite{Bauer:2018:SISC,Gordon:1973:NUMA}. Associating $c_h^{n+1}$, $\hat{c}_h^n$, $(q(t_n), v_h)$, and $(q(t_{n+1}), v_h)$ with coefficient vectors $\underline{\mathbf{c}}^{n+1}$, $\underline{\mathbf{\hat{c}}}^{n}$, $\underline{\mathbf{q}}^{n}$, and $\underline{\mathbf{q}}^{n+1}$, we formulate \cref{eq:finite-dimensional-galerkin-approximation} as the linear system \begin{equation}\label{eq:linear-system} \begin{aligned} (M + \tau_n \Theta \kappa A) \underline{\mathbf{c}}^{n+1} ={} (M - \tau_n(1 - \Theta) \kappa A)\underline{\mathbf{\hat{c}}}^{n} + \tau_n (\Theta \underline{\mathbf{q}}^{n+1} + (1-\Theta) \underline{\mathbf{q}}^{n}) \end{aligned} \end{equation} that has to be solved in each time step. $M$ represents the finite element mass matrix, and $A$ the stiffness matrix. The matrix $E := (M + \tau_n \Theta \kappa A)$ is symmetric and positive definite, which allows for an efficient solution of the system.
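A single step of \cref{eq:linear-system} can be made concrete in a minimal 1D Python sketch (purely illustrative and not the \gls*{hyteg} implementation): for $\mathbb{P}_1$ elements on a uniform grid over the unit interval with homogeneous Dirichlet boundary conditions and $q = 0$, both $M$ and $A$ are tridiagonal, so the symmetric positive definite system can be solved directly with the Thomas algorithm.

```python
import math

def theta_step(c_hat, h, tau, kappa, theta):
    """Advance one step of (M + tau*theta*kappa*A) c^{n+1}
    = (M - tau*(1-theta)*kappa*A) c_hat for 1D P1 elements on a uniform
    grid with homogeneous Dirichlet BCs and q = 0."""
    n = len(c_hat)
    e_d = 4.0 * h / 6.0 + tau * theta * kappa * (2.0 / h)        # diagonal of E
    e_o = h / 6.0 - tau * theta * kappa / h                      # off-diagonal of E
    b_d = 4.0 * h / 6.0 - tau * (1.0 - theta) * kappa * (2.0 / h)
    b_o = h / 6.0 + tau * (1.0 - theta) * kappa / h
    # right-hand side r = (M - tau*(1-theta)*kappa*A) * c_hat
    r = [b_d * c_hat[i]
         + b_o * ((c_hat[i - 1] if i > 0 else 0.0)
                  + (c_hat[i + 1] if i < n - 1 else 0.0)) for i in range(n)]
    # Thomas algorithm: forward elimination, then back substitution
    cp, rp = [0.0] * n, [0.0] * n
    cp[0], rp[0] = e_o / e_d, r[0] / e_d
    for i in range(1, n):
        denom = e_d - e_o * cp[i - 1]
        cp[i] = e_o / denom
        rp[i] = (r[i] - e_o * rp[i - 1]) / denom
    c = [0.0] * n
    c[-1] = rp[-1]
    for i in range(n - 2, -1, -1):
        c[i] = rp[i] - cp[i] * c[i + 1]
    return c

# One Crank-Nicolson step (theta = 0.5) for c_hat = sin(pi x): the result
# should decay approximately by the analytic factor exp(-pi^2 * tau).
h, tau, kappa, theta = 0.01, 1e-3, 1.0, 0.5
x = [(i + 1) * h for i in range(99)]
c_new = theta_step([math.sin(math.pi * xi) for xi in x], h, tau, kappa, theta)
ref = [math.exp(-math.pi ** 2 * tau) * math.sin(math.pi * xi) for xi in x]
```

In higher dimensions and on the block-structured grids used here, the same system is of course not tridiagonal, and iterative solvers are used instead.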
Especially for small time steps, $E$ tends to be more diagonally dominant than the stiffness matrix $A$ and is therefore well suited for treatment with conjugate gradient and multigrid solvers \cite{Trottenberg:2001:GreyBook}. It remains to determine an approximation for $\underline{\mathbf{\hat{c}}}^{n}$, which requires the evaluation of $\hat{c}^n_h(\mathbf{x}) = c^n_h(\mathbf{X}(\mathbf{x}, t_{n+1}, t_{n}))$. The advected temperature is obtained by calculation of the departure point $\mathbf{X}(\mathbf{x}, t_{n+1}, t_{n})$ via the integral in \cref{eq:departure-point}. Due to the initial condition and the continuous Galerkin discretization, $c^n_h(\mathbf{x})$ can be evaluated for all $\mathbf{x} \in \Omega$. In general, the integral in \cref{eq:departure-point} cannot be evaluated analytically but has to be approximated numerically. Here, we apply standard, explicit \gls*{rk} schemes that repeatedly evaluate the velocity field $\mathbf{u}$. For the general case of time-dependent and time-discrete velocity fields, evaluation at time $t^* \in (t_n, t_{n+1})$ requires interpolation. In this case, we employ linear interpolation in time. Spatially, we represent the velocity field $\mathbf{u}$ also in one of the continuous finite element spaces $\mathcal{S}_\ell^{m}$ resulting in a well-defined approximation $\mathbf{u}_h$. Details on the numerical integration and evaluation are presented in \cref{sec:implementation}. \Cref{alg:ad} summarizes the time-stepping scheme for the advection-diffusion equation. To determine a suitable time-step size, we employ a \gls*{cfl} condition via a constant $\text{CFL}_\text{max}$, the length of the shortest edge of the mesh $h_\text{min}$, and the maximum velocity magnitude at time-step $n$, \mbox{i.\,e.}\xspace $\max_{\mathbf{x}\in\Omega}|\mathbf{u}_h(\mathbf{x}, t_n)|$. 
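The time-step rule can be written as a one-line helper (an illustrative sketch with hypothetical names, not the \gls*{hyteg} API):

```python
def cfl_time_step(cfl_max, h_min, u_max):
    """tau_n = CFL_max * h_min / max|u_h(., t_n)|: with this choice a
    particle travels at most CFL_max cells per time step."""
    return cfl_max * h_min / u_max

# Unlike explicit Eulerian schemes, the MMOC remains stable for CFL_max > 1,
# so comparatively large time steps are admissible.
tau = cfl_time_step(2.0, 0.01, 0.5)
```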
\begin{algorithm} \footnotesize \algloopdefx{Repeat}[1]{\textbf{repeat} #1 \textbf{times}} \begin{algorithmic}[1] \Procedure{AD}{$c_h^n, \mathbf{u}_h$} \State $\tau_n = \text{CFL}_\text{max} \cdot h_\text{min} / \max_{\mathbf{x}\in\Omega}|\mathbf{u}_h(\mathbf{x}, t_n)|$ \Comment{determine time-step size} \State $\hat{\mathbf{x}} = \mathbf{X}(\mathbf{x}, t_{n+1}, t_{n})$ \Comment{calculate departure points (see \cref{sec:implementation})}\label{alg:ad:departure-points} \State $\hat{c}_h^{n}(\mathbf{x}) = c_h^{n}(\hat{\mathbf{x}})$ \Comment{advection}\label{alg:ad:advection} \State solve \cref{eq:linear-system} to advance from $\hat{c}_h^{n}$ to $c_h^{n+1}$ \Comment{diffusion} \State \textbf{return} $c_h^{n+1}$ \EndProcedure \end{algorithmic} \caption{\footnotesize Time-stepping scheme, advection-diffusion.} \label{alg:ad} \end{algorithm} \subsection{Parallel implementation}\label{sec:implementation} In this section, we describe the parallel implementation of the \gls*{mmoc} on \gls*{hhg}. In particular, we discuss the execution of the Lagrangian step, \mbox{i.\,e.}\xspace the calculation of $\hat{c}^n_h$ , and the implementation in the \gls*{hyteg} finite element framework. This corresponds to lines~\ref{alg:ad:departure-points}, and~\ref{alg:ad:advection} in \cref{alg:ad}. \subsubsection{Particle tracing}\label{sec:particle-tracing} We employ \emph{tracer particles} that are created at the \glspl*{dof} of $c_h$ at time $t_{n+1}$ and are transported backwards along the velocity trajectories, until they reach the departure points at time $t_n$. Usually, for standard Lagrange finite element discretizations, the \glspl*{dof} are set to coincide with the grid vertices for a $\mathbb{P}_1$\xspace discretization, and with the vertices and edge-midpoints for a quadratic $\mathbb{P}_2$\xspace discretization. However, the method is not restricted to such a choice, and discretizations with a different \gls*{dof}-layout such as finite-volumes may also be realized. 
The values of $\hat{c}^n_h$ at the \glspl*{dof} are then determined by evaluation of $c^n_h$ at the departure points. Given the continuous Galerkin approximation $c^n_h$ of $c$ on the \gls*{hhg} structure, we split the approximation of $\hat{c}^n_h$ into three steps: (i) particle creation, (ii) particle integration, and (iii) temperature evaluation. In the following, we discuss the grid and particle data structures as well as steps (i)--(iii). \paragraph{Grid data structure} For each element of the unstructured coarse grid, a \emph{macro-primitive} (macro-faces in 2D, macro-cells in 3D) data structure is created. The macro-primitives are then uniformly refined. The \gls*{hhg} concept introduces \emph{interface primitives} for each interface between two coarse grid elements. The interface primitives are also refined uniformly. As an example, in 2D, two neighboring \emph{macro-face} primitives are interfaced by a \emph{macro-edge} primitive, and two adjacent macro-edges are interfaced by a \emph{macro-vertex}. This allows for a unique assignment of each individual \gls*{dof} to a single primitive data structure. Each primitive is assigned a globally unique ID and, in a parallel setting, is assigned to one of the parallel processes. For distributed memory architectures, communication is implemented via MPI. The coarse grid and all mesh-related metadata are distributed without global data structures, allowing for parallel runs on hundreds of thousands of parallel processes \cite{Gmeiner:2016:JoCS,Kohl:2020:arXiv}. More details on the \gls*{hhg} data structures can be found in \cite{Bergen:2004:NLAA,Kohl:2019:IJPEDS,Kohl:2020:arXiv}. \paragraph{Particle data structure and synchronization} The tracer particles are realized by the \gls*{mesapd} \cite{Eibl:2018:PARCO,Eibl:2019:arXiv}, which implements particle data structures for massively parallel particle simulations.
It allows each particle to be equipped with arbitrary properties that are transported together with the particle through a distributed domain. The individual subdomains correspond to the volume primitives defined by the unstructured coarse grid. Particles that leave the subdomain of a process are communicated via MPI. Similar to the \gls*{hhg} structure, the parallel particle data structures are distributed by design to allow for massively parallel simulations. After the position of a particle is updated, a synchronization step follows that assigns each particle uniquely to a single neighboring volume primitive. The target primitive is determined solely by the previous owner process of the particle, which prevents race conditions. Detailed information on the parallel data structures and communication is found in \cite{Eibl:2019:arXiv}. \begin{figure} \footnotesize \centering \begin{subfigure}[t]{0.40\textwidth} \centering \includegraphics[width=\textwidth]{figures/rk_stepping_0.pdf} \end{subfigure} \hspace{12pt} \begin{subfigure}[t]{0.40\textwidth} \centering \includegraphics[width=\textwidth]{figures/rk_stepping_1.pdf} \end{subfigure} \caption{Illustration of the parallel particle integration and temperature evaluation (steps (ii) and (iii)) on two neighboring volume primitives.
In this example, a 2-stage RK method is employed: (a) initial particle position, evaluation of $\tilde{\mathbf{u}}^1_h(\mathbf{y}_i^1)$ (tangent to velocity field at that point, illustrated by dotted blue line), (b) setting particle position to $\mathbf{y}^2_i$, (c) evaluation of $\tilde{\mathbf{u}}^2_h(\mathbf{y}_i^2)$, (d) calculation of $\mathbf{X}(\mathbf{x}_i, t_{n+1}, t_n)$ using $\tilde{\mathbf{u}}^1_h(\mathbf{y}_i^1)$ and $\tilde{\mathbf{u}}^2_h(\mathbf{y}_i^2)$ according to the RK method, and particle communication to the neighboring volume primitive, (e) evaluation of $c^n_h(\mathbf{X}(\mathbf{x}_i, t_{n+1}, t_n))$, (f) communication of $c^n_h(\mathbf{X}(\mathbf{x}_i, t_{n+1}, t_n))$ back to initial \gls*{dof}.} \label{fig:particle-integration} \end{figure} \paragraph{Step (i): particle creation} For each \gls*{dof} of the Eulerian grid, a particle is created. The particles are initialized with the corresponding macro-primitive ID, \gls*{dof}-index, and the process ID, so that their corresponding \gls*{dof} can be backtracked in a distributed setting. Particles are also initialized on interface primitives, since they are responsible for \glspl*{dof} at the interfaces of the volume primitives. The initial position of a particle corresponds to $\mathbf{x}_i = \mathbf{X}(\mathbf{x}_i, t_{n+1}, t_{n+1})$, where $\mathbf{x}_i$ is the location of a \gls*{dof} with index $i$ on $\Omega_\text{phy}$. A following synchronization step assigns all particles that were created on an interface primitive to a single volume primitive. It is of no particular importance which volume primitive is chosen. \paragraph{Step (ii): particle integration} This step performs the backward transport along the velocity field using an explicit \gls*{rk} integrator with $S$ stages. This corresponds to the computation of $\mathbf{X}(\mathbf{x}_i, t_{n+1}, t_n)$ according to \cref{eq:departure-point} using numerical integration. 
The RK integration requires the evaluation of the velocity field at a time $\tilde{t}^{s} \in [t_n, t_{n+1}]$ and position $\mathbf{y}_i^s$, with $\mathbf{y}_i^1 = \mathbf{x}_i$, in each stage $s \in \{1, \dots, S\}$. Before each RK stage, the position of a particle is set to the position $\mathbf{y}_i^s$ where the velocity field needs to be evaluated (see \cref{fig:particle-integration} step (b)). Immediately after that, a synchronization step follows so that all particles are available on the process that owns the volume primitive containing $\mathbf{y}_i^s$. We assume that the velocity field is known at the discrete time steps $t_n$ and $t_{n+1}$. Both fields $\mathbf{u}^n_h$ and $\mathbf{u}^{n+1}_h$ are evaluated and we perform linear interpolation in time. This means we approximate \begin{align}\label{eq:linear-interpolation} \mathbf{u}(\mathbf{y}_i^s, \tilde{t}^s) \approx \tilde{\mathbf{u}}^s_h(\mathbf{y}_i^s) := \left( \frac{t_{n+1} - \tilde{t}^s}{t_{n+1} - t_n} \right) \mathbf{u}^{n}_h(\mathbf{y}_i^s) + \left( \frac{\tilde{t}^s - t_n}{t_{n+1} - t_n} \right) \mathbf{u}^{n+1}_h(\mathbf{y}_i^s) \end{align} (see \cref{fig:particle-integration} steps (a) and (c)). For scenarios where the velocity depends on the temperature field, we refer to \cref{sec:coupled-flow}, where we discuss buoyancy-driven flows. The intermediate result $\tilde{\mathbf{u}}^s_h(\mathbf{y}_i^s)$ is stored in the particle data structure before the next stage is executed. After the last stage, all intermediate results and weights of the RK method are combined to calculate the final departure point $\mathbf{X}(\mathbf{x}_i, t_{n+1}, t_n)$ (see \cref{fig:particle-integration} step (d)). \paragraph{Step (iii): temperature evaluation} In this last step, the temperature field $c^n_h$ is evaluated at $\mathbf{X}(\mathbf{x}_i, t_{n+1}, t_n)$ (see \cref{fig:particle-integration} step (e)). This gives $\hat{c}^n_h(\mathbf{x}_i)$ at the initial position $\mathbf{x}_i$ of the particle.
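The interpolation in \cref{eq:linear-interpolation} amounts to a convex combination of the two discrete velocity fields; a minimal Python sketch (hypothetical names, with the fields given here as already-evaluated vectors):

```python
def interpolate_velocity(u_n, u_np1, t_n, t_np1, t):
    """Linearly interpolate between the discrete velocity fields u^n and
    u^{n+1} at an RK stage time t in [t_n, t_{n+1}]."""
    w = (t_np1 - t) / (t_np1 - t_n)        # weight of the older field u^n
    return tuple(w * a + (1.0 - w) * b for a, b in zip(u_n, u_np1))

# At t = 0.25 within [0, 1], the result lies a quarter of the way
# from u^n = (0, 0) towards u^{n+1} = (1, 2).
u_tilde = interpolate_velocity((0.0, 0.0), (1.0, 2.0), 0.0, 1.0, 0.25)
```

At $t = t_n$ the weight $w$ equals one and the interpolant reduces to $\mathbf{u}^n_h$, consistent with \cref{eq:linear-interpolation}.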
Since the initial position was a \gls*{dof}, we set the corresponding coefficient $\underline{\mathbf{\hat{c}}}^n_i = c^n_h(\mathbf{X}(\mathbf{x}_i, t_{n+1}, t_n))$. If $\mathbf{x}_i$ is located on a different volume primitive than $\mathbf{X}(\mathbf{x}_i, t_{n+1}, t_n)$, $\underline{\mathbf{\hat{c}}}^n_i$ is communicated (see \cref{fig:particle-integration} step (f)). \subsubsection{Field evaluation} The evaluation of $\mathbf{u}_h^n(\mathbf{z})$ or $c_h^n(\mathbf{z}),\ \mathbf{z} \in \Omega$, involves locating the underlying geometric element and computing the sum of the shape functions evaluated at $\mathbf{z}$, weighted by the corresponding \gls*{dof} values. In general, as described in \cref{sec:domain}, $\Omega = \Omega_\text{phy}$ may be non-polyhedral, \mbox{i.\,e.}\xspace $\Phi \neq \mathrm{Id}$. We therefore map $\mathbf{z}$ to the computational domain $\Omega_\text{comp}$ and set $\mathbf{z}_\text{comp} := \Phi^{-1}(\mathbf{z})$. Since we require $\Phi$ to be a homeomorphism, we know that $\mathbf{z}_\text{comp} \in T \subset \Omega_\text{comp} \Leftrightarrow \mathbf{z} \in \Phi(T) \subset \Omega_\text{phy}$. We split the search-locate algorithm on the computational domain into two steps. In a first step, the enclosing volume primitive that contains $\mathbf{z}_\text{comp}$ is determined by searching in the direct neighborhood of the volume primitive that previously contained the corresponding particle. Then, we search for the containing element $T \subset \Omega_\text{comp}$ of $\mathbf{z}_\text{comp}$ in the uniformly refined volume primitive. Since we employ block-structured \gls*{hhg}, the element $T$ is found at $\mathcal{O}(1)$ cost. Finally, the value of the finite element function is computed in the standard way, by applying the pull-back mapping from $T$ to the reference element.
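The two ingredients, the point-in-element test and the weighted sum of shape functions, can be sketched for a single 2D $\mathbb{P}_1$ element via barycentric coordinates (a minimal, hypothetical stand-in for the actual search-locate procedure described above):

```python
def barycentric(tri, z):
    """Barycentric coordinates of point z w.r.t. triangle tri = (p1, p2, p3)."""
    (x1, y1), (x2, y2), (x3, y3) = tri
    det = (x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)
    l2 = ((z[0] - x1) * (y3 - y1) - (x3 - x1) * (z[1] - y1)) / det
    l3 = ((x2 - x1) * (z[1] - y1) - (z[0] - x1) * (y2 - y1)) / det
    return (1.0 - l2 - l3, l2, l3)

def contains(tri, z, eps=1e-12):
    """Point-in-element test: all barycentric coordinates non-negative."""
    return all(l >= -eps for l in barycentric(tri, z))

def eval_p1(tri, dofs, z):
    """Evaluate a P1 function with vertex values dofs at z: the barycentric
    coordinates are exactly the P1 shape functions on the element."""
    return sum(l * d for l, d in zip(barycentric(tri, z), dofs))

tri = ((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))
dofs = (1.0, 3.0, 4.0)        # nodal values of the linear function 1 + 2x + 3y
val = eval_p1(tri, dofs, (0.25, 0.25))
```

Since a $\mathbb{P}_1$ function is linear on each element, the evaluation reproduces $1 + 2x + 3y$ exactly at interior points.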
\subsubsection{Look-back distance}\label{sec:re-init} The field evaluation in step (iii) implicitly corresponds to an interpolation of the advected temperature field $\hat{c}^n_h$ into the space $\mathcal{S}_\ell^{m}$. While the discretized original field at time $t_n$ satisfies $c^n_h\in\mathcal{S}_\ell^{m}$, this will typically not be the case for $\hat{c}^n_h$. Consequently, this step introduces an interpolation error. If the field $\hat{c}^n_h$ used to update $c^{n+1}_h$ is computed from $c^n_h$, the latter already involves $n$ previous interpolations, whose errors might accumulate. However, in the purely advective case ($\kappa=0$ and $q = 0$) this issue can be mitigated or even completely avoided. To do so, one can simply follow the particle trajectory back in time over more than a single temperature time step $\tau$, \mbox{i.\,e.}\xspace instead of integrating from $t_{n+1}$ back to $t_n$ we select an earlier time $t_{n+1-b}$. We will refer to the integer $b$ as the \emph{look-back distance}, as $t_{n+1-b}$ is the time at which the temperature is evaluated. Of course, this approach requires that the temperature field at time $t_{n+1-b}$, as well as the intermediate velocity fields required by the ODE solver, are still available. By selecting $b=n+1$ one can derive $c^{n+1}_h$ from the initial temperature $c_{0,h}$ itself. However, the look-back distance then grows with the simulation, a fact that we will mark by using the notation $b=\infty$. This extreme approach preserves the accurate representation of the initial temperature in the Lagrangian domain and leads to very accurate solutions, as we will see in the following benchmarks. There, we will employ different look-back distances $b$ to demonstrate that the interpolation between the Lagrangian and Eulerian representations is the primary source of approximation error. A similar discussion of the accumulation of the interpolation error is found in \cite{Malevsky:1991:PFA}.
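The effect of the look-back distance can be demonstrated in a deliberately simplified 1D experiment (our own illustration, using linear grid interpolation instead of finite elements): a sine profile is advected with constant speed such that the departure points fall between grid nodes. Re-interpolating after every step ($b = 1$) accumulates interpolation error, whereas a single interpolation from the initial field ($b = \infty$) does not.

```python
import math

N = 64
h = 1.0 / N
grid = [i * h for i in range(N)]
f0 = [math.sin(2.0 * math.pi * x) for x in grid]

def interp_periodic(f, x):
    """Linear interpolation of the periodic grid function f at position x."""
    s = (x % 1.0) / h
    i = int(s)
    w = s - i
    return (1.0 - w) * f[i % N] + w * f[(i + 1) % N]

shift = 0.3 * h        # per-step displacement: departure points between nodes
steps = 201

# b = 1: re-interpolate onto the grid after every single step
f = list(f0)
for _ in range(steps):
    f = [interp_periodic(f, x - shift) for x in grid]

# b = "infinity": a single interpolation from the initial field
g = [interp_periodic(f0, x - steps * shift) for x in grid]

exact = [math.sin(2.0 * math.pi * (x - steps * shift)) for x in grid]
err_b1 = max(abs(a - b) for a, b in zip(f, exact))
err_binf = max(abs(a - b) for a, b in zip(g, exact))
```

In this setting the repeated interpolation visibly damps the amplitude of the profile, while the single interpolation error stays at the level of one interpolation step; the finite element analogue of this behavior is what the look-back strategy exploits.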
\section{Numerical verification}\label{sec:benchmarks} In the following subsections we assess the accuracy of our implementation through numerical benchmarks. \subsection{Test setup} In all benchmarks, we employ either linear ($\mathbb{P}_1$\xspace) or quadratic ($\mathbb{P}_2$\xspace) Lagrangian finite element discretizations for the temperature and velocity, and block-structured triangular and tetrahedral meshes for two- and three-dimensional domains, respectively. For the particle integration, we use the standard fourth-order \gls*{rk} integrator (often referred to as RK4). We note that the implementation supports any explicit \gls*{rk} integrator. To assess the quality of our scheme, we employ the following norms and metrics: let $\tilde{c}_h$ be the computed solution, $c_h$ the point-wise interpolated exact solution, and $e_h = c_h - \tilde{c}_h$ the error. The corresponding coefficient vectors are denoted as $\underline{\mathbf{c}}$, $\underline{\mathbf{\tilde{c}}}$, and $\underline{\mathbf{e}}$. A discrete version of the $\mathcal{H}^0$-norm of the error is then defined as \begin{align} \norm{e_h}_{\mathcal{H}^0} := \left( \underline{\mathbf{e}}^\top M \underline{\mathbf{e}} \right)^\frac{1}{2} \end{align} where $M$ is the finite element mass matrix. We define $\text{var}(t_n)$ as in \cite{John:2008:CMAME}, and $E_\text{peak}(t_n)$ as \begin{align} \text{var}(t_n) := \max_{j} \underline{\mathbf{\tilde{c}}}^n_j - \min_{j} \underline{\mathbf{\tilde{c}}}^n_j, \quad E_\text{peak}(t_n) := \frac{\max_{j} \underline{\mathbf{\tilde{c}}}^n_j}{\max_{j} \underline{\mathbf{c}}^n_j} - 1 \end{align} to indicate the amount of spurious oscillations, and to detect whether peaks of the solution are preserved.
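As a concrete sketch (not the actual implementation), these metrics can be computed directly from the coefficient vectors together with an assembled mass matrix $M$:

```python
import numpy as np

def error_metrics(c_tilde, c_exact, M):
    # Discrete H^0 norm of the error e = c_exact - c_tilde.
    e = c_exact - c_tilde
    h0 = np.sqrt(e @ (M @ e))
    # Oscillation indicator: spread of the computed coefficients.
    var = c_tilde.max() - c_tilde.min()
    # Relative error of the solution peak.
    e_peak = c_tilde.max() / c_exact.max() - 1.0
    return h0, var, e_peak
```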
To quantify the energy conservation of our implementation, we indicate a relative energy difference $\Delta m(t_n)$ compared to the initial solution by \begin{align} \Delta m(t_n) := \frac{m(t_n)}{m(t_0)} - 1, \quad m(t_n) := \underline{\mathbf{1}}^\top M \underline{\mathbf{\tilde{c}}}^n, \quad \underline{\mathbf{1}} := (1, \dots, 1)^\top. \end{align} \subsection{Circular advection} First, we consider a two-dimensional body rotation problem as employed in \cite{Zalesak:1979:JCP,LeVeque:1996:SINUM,John:2008:CMAME}. In particular, the setup is the same as in \cite{John:2008:CMAME} to compare the numerical results. Let $\Omega = (0, 1)^2$ be the domain where the initial temperature is imposed by three bodies as shown in \cref{fig:benchmark-01-2d-initial}. All bodies are defined on circles with radius $r_0 = 0.15$; the initial condition is zero outside of these circles. We define $\mathbf{x} = (x_1, x_2)$, $\bar{\mathbf{x}} = (\bar{x}_1, \bar{x}_2)$, $r(\mathbf{x}) := \norm{\mathbf{x} - \bar{\mathbf{x}}}_2 / r_0$, and the initial condition $c_0 = c_0^\text{slotted} + c_0^\text{cone} + c_0^\text{hill}$ by \begin{align} c_0^\text{slotted}(\mathbf{x}) &= \begin{cases} 1 \quad & \text{if } r(\mathbf{x}) \leq 1 \text{ and } \left( |x_1 - \bar{x}_1| \geq 0.025 \text{ or } x_2 \geq 0.85 \right) \\ 0 \quad &\text{otherwise} \end{cases} & \bar{\mathbf{x}} = (0.5, 0.75),\\ c_0^\text{cone}(\mathbf{x}) &= \begin{cases} 1 - r(\mathbf{x}) \quad & \text{if } r(\mathbf{x}) \leq 1 \\ 0 \quad & \text{otherwise} \end{cases} & \bar{\mathbf{x}} = (0.5, 0.25),\\ c_0^\text{hill}(\mathbf{x}) &= \begin{cases} \frac{1}{4} ( 1 + \cos(\pi r(\mathbf{x})) ) \quad & \text{if } r(\mathbf{x}) \leq 1 \\ 0 \quad & \text{otherwise} \end{cases} & \bar{\mathbf{x}} = (0.25, 0.5). \end{align} The bodies are rotating counter-clockwise along the constant velocity field $\mathbf{u} = (0.5 - x_2, x_1 - 0.5)^\top$.
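For concreteness, the rotating velocity field and one of the three bodies can be written down as follows (a sketch; the slotted cylinder and the cosine hill are defined analogously):

```python
import numpy as np

def rotation_velocity(x1, x2):
    # Rigid-body rotation about the domain center (0.5, 0.5).
    return np.array([0.5 - x2, x1 - 0.5])

def c0_cone(x1, x2, r0=0.15, center=(0.5, 0.25)):
    # Cone body: decays linearly from 1 at the center to 0 at radius r0,
    # and is zero outside the circle.
    r = np.hypot(x1 - center[0], x2 - center[1]) / r0
    return np.where(r <= 1.0, 1.0 - r, 0.0)
```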
Since we consider pure advection ($\kappa=0$, $q=0$), at $t = 2\pi$, the bodies have finished a full revolution and the resulting temperature field should be equal to the initial condition. The time step size $\tau$ is constant. In \cref{tab:benchmark-01-2d}, the different versions of the \gls*{mmoc} are compared to the linear and non-linear \gls*{fct} methods that performed best in \cite{John:2008:CMAME}. We observe the strong influence of the look-back distance $b$ on the solution, as visualized in the plots of the computed solutions in \cref{fig:benchmark-01-2d}. \begin{table}[!ht] \centering \footnotesize \caption{Comparison of different parameterizations of the \gls*{mmoc} to the best performing methods from the study in \cite{John:2008:CMAME}. The FEM-FCT employs $\mathbb{P}_1$\xspace finite elements and a grid spacing of $h = 1/128$. We run our \gls*{mmoc} implementation with both $\mathbb{P}_1$\xspace and $\mathbb{P}_2$\xspace elements, with grid spacings of $h = 1/128$ and $h = 1/64$, respectively. The mesh size $h$ refers to square cells, each of which is divided into two triangles. Therefore, all grids result in 16641 \glspl*{dof}, including boundary nodes. The time-step size is $\tau \approx \num{1e-03}$ for all settings (which corresponds to 6283 time steps for one revolution).} \label{tab:benchmark-01-2d} \resizebox{\columnwidth}{!}{% \begin{tabular}{lrrr} \toprule method & $\norm{e_h}_{\mathcal{H}^0}$ & $\text{var}(2\pi)$ & $\Delta m(2\pi)$ \\ \midrule MMOC $\mathbb{P}_1$\xspace ($b = 1$) & \num{1.74e-01} & \num{0.5913} & \num{-4.73e-02} \\ MMOC $\mathbb{P}_1$\xspace ($b = 10$) & \num{1.65e-01} & \num{0.6296} & \num{ 4.76e-03} \\ MMOC $\mathbb{P}_1$\xspace ($b = 100$) & \num{8.60e-02} & \num{0.9847} & \num{ 2.67e-04} \\ MMOC $\mathbb{P}_1$\xspace ($b = 1000$) & \num{3.85e-02} & \num{1.0000} & \num{-5.52e-04} \\ MMOC $\mathbb{P}_1$\xspace ($b = \infty$) & \num{1.38e-13} & \num{1.0000} & \num{ 2.22e-16} \\ FEM-FCT n.-l.
\cite{John:2008:CMAME} & \num{1.44e-01} & \num{1.0010} & no data \\ FEM-FCT \cite{John:2008:CMAME} & \num{1.92e-01} & \num{1.0069} & no data \\ \bottomrule \end{tabular} \hspace{3pt} \begin{tabular}{lrrr} \toprule method & $\norm{e_h}_{\mathcal{H}^0}$ & $\text{var}(2\pi)$ & $\Delta m(2\pi)$ \\ \midrule MMOC $\mathbb{P}_2$\xspace ($b = 1$) & \num{1.09e-01} & \num{1.2773} & \num{-2.19e-02} \\ MMOC $\mathbb{P}_2$\xspace ($b = 10$) & \num{9.71e-02} & \num{1.2943} & \num{-1.39e-02} \\ MMOC $\mathbb{P}_2$\xspace ($b = 100$) & \num{5.29e-02} & \num{1.3049} & \num{-5.56e-03} \\ MMOC $\mathbb{P}_2$\xspace ($b = 1000$) & \num{3.03e-02} & \num{1.3185} & \num{-1.09e-03} \\ MMOC $\mathbb{P}_2$\xspace ($b = \infty$) & \num{1.68e-13} & \num{1.0000} & \num{-6.79e-14} \\ \bottomrule \end{tabular} } \end{table} \begin{figure}[!ht] \footnotesize \centering \begin{subfigure}[t]{0.32\textwidth} \includegraphics[width=\textwidth]{figures/circular_advection_p1_level7_initial.png} \caption{interpolated solution} \label{fig:benchmark-01-2d-initial} \end{subfigure} \hfill \begin{subfigure}[t]{0.32\textwidth} \includegraphics[width=\textwidth]{figures/circular_advection_p1_level7_solution_r_1000.png} \caption{computed solution, $b = 1000$} \end{subfigure} \hfill \begin{subfigure}[t]{0.32\textwidth} \includegraphics[width=\textwidth]{figures/circular_advection_p1_level7_solution_r_inf.png} \caption{computed solution, $b = \infty$} \end{subfigure} \caption{Interpolated and computed solutions of the body rotation problem for the setup as in \cref{tab:benchmark-01-2d} with $\mathbb{P}_1$\xspace discretization for different look-back distances $b$. The plots show the influence of the look-back distance on the quality of the computed solution.} \label{fig:benchmark-01-2d} \end{figure} Optimal results are achieved with infinite look-back distance ($b=\infty$). This suggests that the interpolation between the Eulerian and Lagrangian representations is the primary source of error and energy drift.
We note that in a massively parallel setting, occasional interpolation to the Eulerian domain may be desired to reduce the communication overhead during the temperature evaluation (step (iii) in \cref{sec:particle-tracing}). \begin{remark}[Choice of space-discretization, oscillations]\label{remark:space-discretization} The amount of spurious oscillations denoted by $\text{var}(2\pi)$ in \cref{tab:benchmark-01-2d} with $\mathbb{P}_2$\xspace elements and $b=\infty$ is partly misleading. While there are no oscillations at time $t = 2\pi$, some oscillations appear at the discontinuity around the slotted cylinder for $0 < t < 2\pi$. For the linear ($\mathbb{P}_1$\xspace) space discretization, there are no oscillations over the entire time interval. Typically, continuous Lagrange finite elements of higher order tend to produce over- and undershoots at discontinuities. However, we note that this is due to the space-discretization and not to the presented time-discretization, \mbox{i.\,e.}\xspace the \gls*{mmoc}. An advantage of the \gls*{mmoc} is that it can be applied to any space-discretization, as long as the solution can be evaluated locally. In the presence of discontinuities in the solution, discontinuous Galerkin space-discretizations could be considered in combination with the \gls*{mmoc}. \end{remark} To demonstrate the contribution of the discontinuity around the slotted cylinder to the error, we show in \cref{tab:benchmark-01-2d-gaussian-only} results for the smooth initial condition and solution $c_0 = c_0^{\text{hill}}$. In this run, the error and the energy discrepancy are much smaller than for the results with a discontinuous solution in \cref{tab:benchmark-01-2d}, especially for $b < \infty$.
\begin{figure}[!ht] \centering \begin{minipage}{0.49\textwidth} \centering \resizebox{\textwidth}{!}{% \begin{tabular}{crrcr} \toprule $\tau$ & $b$ & \multicolumn{1}{c}{$\norm{e_h}_{\mathcal{H}^0}$} & $\text{var}(2\pi)$ & \multicolumn{1}{c}{$\Delta m(2\pi)$} \\ \midrule \num{1.01e-01} & 1 & \num{3.36e-04} & \num{0.5019} & \num{-6.63e-05} \\ \num{1.01e-01} & 10 & \num{5.32e-05} & \num{0.5008} & \num{ 3.83e-05} \\ \num{1.01e-01} & $\infty$ & \num{9.62e-07} & \num{0.5000} & \num{ 9.35e-07} \\ \midrule \num{1.00e-02} & 1 & \num{4.33e-03} & \num{0.5054} & \num{-4.46e-06} \\ \num{1.00e-02} & 10 & \num{3.43e-04} & \num{0.5020} & \num{-5.33e-05} \\ \num{1.00e-02} & 100 & \num{4.87e-05} & \num{0.5008} & \num{ 2.28e-05} \\ \num{1.00e-02} & $\infty$ & \num{9.13e-11} & \num{0.5000} & \num{ 9.12e-12} \\ \midrule \num{1.00e-03} & 1 & \num{6.33e-03} & \num{0.5066} & \num{-2.05e-05} \\ \num{1.00e-03} & 10 & \num{4.33e-03} & \num{0.5054} & \num{-4.38e-06} \\ \num{1.00e-03} & 100 & \num{3.40e-04} & \num{0.5020} & \num{-4.80e-05} \\ \num{1.00e-03} & 1000 & \num{4.96e-05} & \num{0.5009} & \num{ 3.27e-05} \\ \num{1.00e-03} & $\infty$ & \num{9.87e-15} & \num{0.5000} & \num{ 4.88e-15} \\ \bottomrule \end{tabular} } \captionof{table}{Results for the circular advection benchmark with $c_0 = c_0^{\text{hill}}$ and a $\mathbb{P}_2$\xspace space discretization.} \label{tab:benchmark-01-2d-gaussian-only} \end{minipage}\hfill \begin{minipage}{0.49\textwidth} \centering \resizebox*{\textwidth}{!}{ \input{figures/bm_01_rotation_time_step_conv.pgf}} \caption{Time-step size study for the circular advection benchmark with $c_0 = c_0^\text{slotted} + c_0^\text{cone} + c_0^\text{hill}$ and $b=\infty$.} \label{fig:benchmark-01-time-convergence} \end{minipage} \end{figure} \Cref{tab:benchmark-01-2d-gaussian-only} additionally lists results of simulations with time-steps that are increased by a factor of 10 and 100. 
The measured errors demonstrate that the Lagrangian approach yields promising stability and accuracy even for comparatively large time-steps. For $b < \infty$, the resulting errors are mostly caused by the interpolation between the Eulerian and the Lagrangian representation. If we compare runs where $\tau \cdot b = \text{const}$, we obtain almost identical errors (\mbox{e.\,g.}\xspace $\tau = \num{1.01e-01}$ and $b = 1$ compared to $\tau = \num{1.00e-02}$ and $b = 10$ in \cref{tab:benchmark-01-2d-gaussian-only}). In those runs, the number of time-steps in which the solution is interpolated is equal. Despite a significant time-step size reduction, the interpolation error dominates. In the case of $b = \infty$, no temperature interpolation is performed throughout the simulation. Therefore, the increased accuracy of the \gls*{rk} integrator directly affects the error in the solution when the time-step size is reduced. In \cref{fig:benchmark-01-time-convergence} we plot the $\mathcal{H}^0$ error of the solution of the original benchmark problem ($c_0 = c_0^\text{slotted} + c_0^\text{cone} + c_0^\text{hill}$, \mbox{i.\,e.}\xspace with discontinuous solution) discretized with $\mathbb{P}_1$\xspace finite elements for different time-step sizes and $b=\infty$. For the largest time-step size in \cref{fig:benchmark-01-time-convergence} ($\tau \approx 0.065$) and a maximum absolute velocity of $\approx 0.7$, this results in a \gls*{cfl} number of roughly $3$. \subsection{Swirling advection} Next, we move to a three-dimensional setting with a time-dependent velocity field. The benchmark is taken from \cite{LeVeque:1996:SINUM}. Let $\Omega = (0, 1)^3$ and $t \in [0, T]$ with $T = 1.5$.
The initial condition $c_0$ and the velocity field $\mathbf{u}(\mathbf{x}, t)$ are defined by \begin{align} c_0(\mathbf{x}) := \begin{cases} 1 \quad \text{if } x_1 < 0.5 \\ 0 \quad \text{otherwise} \end{cases}, \quad \mathbf{u}(\mathbf{x}, t) := \begin{pmatrix} 2 \sin^2(\pi x_1) \sin(2 \pi x_2) \sin(2 \pi x_3) g(t) \\ - \sin(2 \pi x_1) \sin^2(\pi x_2) \sin(2 \pi x_3) g(t) \\ - \sin(2 \pi x_1) \sin(2 \pi x_2) \sin^2(\pi x_3) g(t) \end{pmatrix}, \end{align} with $g(t) := \cos(\pi t / T)$. The temperature field undergoes a deformation which reverses at $t = T/2$ and should return to the initial solution at $t = T$. Again, we consider pure advection ($\kappa=0$, $q=0$). The results for the \gls*{mmoc} with $b=\infty$ at $T=1.5$ are listed in \cref{tab:benchmark-02}. The $\mathcal{H}^0$-errors are small, and no spurious oscillations are detected for any of the chosen time-step and grid sizes. \begin{table}[!ht] \centering \footnotesize \resizebox{0.7\textwidth}{!}{% \begin{tabular}{rccccr} \toprule \multicolumn{1}{c}{\glspl*{dof}} & $h_\text{min}$ & $\tau$ & $\norm{e_h}_{\mathcal{H}^0}$ & $\text{var}(1.5)$ & \multicolumn{1}{c}{$\Delta m(1.5)$} \\ \midrule \numINT{ 35937} & \num{3.12e-02} & \num{1.00e-01} & \num{8.67e-04} & \num{1.0000} & \num{-9.75e-05} \\ \numINT{ 35937} & \num{3.12e-02} & \num{5.00e-02} & \num{5.48e-05} & \num{1.0000} & \num{-2.37e-06} \\ \numINT{ 35937} & \num{3.12e-02} & \num{2.50e-02} & \num{5.11e-06} & \num{1.0000} & \num{-1.37e-08} \\ \numINT{2146689} & \num{7.81e-03} & \num{1.00e-01} & \num{1.26e-03} & \num{1.0000} & \num{-2.77e-05} \\ \numINT{2146689} & \num{7.81e-03} & \num{5.00e-02} & \num{5.01e-05} & \num{1.0000} & \num{-1.11e-06} \\ \numINT{2146689} & \num{7.81e-03} & \num{2.50e-02} & \num{2.57e-06} & \num{1.0000} & \num{-2.18e-08} \\ \bottomrule \end{tabular}} \caption{Results for application of the \gls*{mmoc} with $b=\infty$ and $\mathbb{P}_1$\xspace finite elements in space to the swirling flow benchmark in 3D.} \label{tab:benchmark-02}
\end{table} \Cref{fig:benchmark-02} shows the computed solution at $x_3 = \num{0.425}$ for $t=T/2$, and $t=T$ with $h_\text{min}=\num{7.81e-03}$ (refinement level 7), and $\tau = \num{5.00e-02}$. At $t=T$, the initial temperature field is restored without visible artifacts or numerical diffusion. The slice is chosen to coincide with the slice shown in \cite[figure 11.2]{LeVeque:1996:SINUM}. \begin{figure}[!ht] \footnotesize \centering \begin{subfigure}[t]{0.32\textwidth} \includegraphics[width=\textwidth]{figures/swirling_advection_p1_level7_numts_30_ts_0.png} \caption{interpolated initial condition} \end{subfigure} \hfill \begin{subfigure}[t]{0.32\textwidth} \includegraphics[width=\textwidth]{figures/swirling_advection_p1_level7_numts_30_ts_15.png} \caption{$t = T/2$ (computed solution)} \end{subfigure} \hfill \begin{subfigure}[t]{0.32\textwidth} \includegraphics[width=\textwidth]{figures/swirling_advection_p1_level7_numts_30_ts_30.png} \caption{$t = T$ (computed solution)} \end{subfigure} \caption{Elevated slice of the computed solution at $x_3 = \num{0.425}$, as in \cite[figure 11.2]{LeVeque:1996:SINUM}. Parameters: $\mathbb{P}_1$\xspace discretization, $b=\infty$, $h_\text{min}=\num{7.81e-03}$ (refinement level 7), $\tau = \num{5.00e-02}$. The discontinuities are preserved without any oscillations or numerical diffusion.} \label{fig:benchmark-02} \end{figure} \subsection{Advection-diffusion on blended geometry}\label{sec:benchmark-advection-diffusion} Finally we apply the \gls*{mmoc} to a problem with a diffusion coefficient $\kappa > 0$, without internal heating ($q=0$), on a blended geometry. Since $\kappa > 0$, we need to solve the linear system \cref{eq:linear-system} in each time-step. Thus the solution must be interpolated to the Eulerian grid in each time-step and we must limit the look-back distance to $b = 1$. 
The physical domain $\Omega_\text{phy}$ is an annulus defined by $\Omega_\text{phy} = \{ \mathbf{x} \in \mathbb{R}^2 : r_\text{min} \leq \norm{\mathbf{x}}_2 \leq r_\text{max} \}$ with $(r_\text{min}, r_\text{max}) = (0.5, 1.5)$. The computational domain $\Omega_\text{comp}$ approximates the annulus with a coarse triangular mesh that is uniformly refined and projected onto $\Omega_\text{phy}$, see \cref{fig:annulus-domain}. The benchmark is inspired by the unsteady advection-diffusion benchmark in \cite{Kuzmin:2004:CMAME}. A circular velocity field $\mathbf{u}(\mathbf{x}) = (-x_2, x_1)$ transports a gradually smeared Gaussian hill around the annulus. The time-dependent position of the hill is given by an initial position $(\bar{x}_1, \bar{x}_2) = (0, 1)$ and $\hat{\mathbf{x}}(t) = (\hat{x}_1(t), \hat{x}_2(t))$ with $\hat{x}_1(t) = \bar{x}_1 \cos(t) - \bar{x}_2 \sin(t)$, and $\hat{x}_2(t) = \bar{x}_1 \sin(t) + \bar{x}_2 \cos(t)$. The analytical solution $c$ is defined by \begin{align} c(\mathbf{x}, t) := \frac{1}{4 \pi t \kappa} \exp\left(-\frac{r(\mathbf{x}, t)^2}{4t\kappa}\right), \quad \kappa > 0,\ t > 0\enspace, \end{align} where $r(\mathbf{x}, t) := \norm{\mathbf{x} - \hat{\mathbf{x}}}_2$. At time $t = 0$, $c$ becomes a Dirac delta function, which is why we start the simulation at a later time. To better compare the proposed method for different $\kappa$, we parameterize the initial time $t_0$ via $t_0(\kappa) = \frac{2 \pi \times 10^{-3}}{\kappa}$. The moving hill therefore has a different initial position for each choice of diffusion coefficient, but its initial shape is identical for all choices of $\kappa$. The computed solution is evaluated after a full revolution, at time $T = t_0(\kappa) + 2\pi$. We employ a $\mathbb{P}_2$\xspace space-discretization in all runs. Results after one revolution are listed in \cref{tab:benchmark-04}.
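The choice of $t_0(\kappa)$ can be checked with a small sketch of the analytical solution: the product $4 t_0 \kappa$, which sets the squared width of the hill, is independent of $\kappa$, so the initial profiles coincide.

```python
import numpy as np

def gaussian_hill(x, t, kappa, x_hat):
    # 2D heat kernel centred at the advected position x_hat(t).
    r2 = float(np.sum((np.asarray(x) - np.asarray(x_hat)) ** 2))
    return np.exp(-r2 / (4.0 * t * kappa)) / (4.0 * np.pi * t * kappa)

def t0(kappa):
    # Initial time chosen such that 4 * t0 * kappa is constant,
    # i.e. the initial hill shape is the same for every diffusivity.
    return 2.0e-3 * np.pi / kappa
```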
\begin{table}[!ht] \centering \footnotesize \resizebox{\columnwidth}{!}{% \begin{tabular}{rr|rr|rr|rr} \multicolumn{2}{c}{} & \multicolumn{2}{c}{$\kappa = \num{1e-3}$} & \multicolumn{2}{c}{$\kappa = \num{1e-5}$} & \multicolumn{2}{c}{$\kappa = \num{1e-7}$} \\ \midrule \glspl*{dof} & $h_\text{min}$ & $\norm{e_h}_{\mathcal{H}^0}$ & $E_\text{peak}(2\pi)$ & $\norm{e_h}_{\mathcal{H}^0}$ & $E_\text{peak}(2\pi)$ & $\norm{e_h}_{\mathcal{H}^0}$ & $E_\text{peak}(2\pi)$ \\ \midrule \num{ 12480} & \num{3.12e-02} & \num{1.48e-02} & \num{-1.30e-03} & \num{8.32e-02} & \num{-2.02e-02} & \num{8.51e-02} & \num{-1.98e-02} \\ \num{ 49536} & \num{1.56e-02} & \num{3.86e-03} & \num{ 3.45e-03} & \num{7.38e-03} & \num{-1.90e-03} & \num{7.84e-03} & \num{-1.56e-03} \\ \num{197376} & \num{7.81e-03} & \num{4.32e-03} & \num{ 3.98e-03} & \num{6.30e-04} & \num{-1.25e-04} & \num{6.92e-04} & \num{-1.01e-04} \\ \bottomrule \end{tabular}} \caption{Results after one revolution of the Gaussian hill, with time-step size $\tau \approx \num{1e-1}$, implicit Euler time-integration for the parabolic part ($\Theta = 1$ in \cref{eq:finite-dimensional-galerkin-approximation}), three different diffusion coefficients $\kappa$, and different refinement levels.} \label{tab:benchmark-04} \end{table} For a comparatively large time-step size (\gls*{cfl} numbers between 4 and 20) and varying diffusivity, we observe satisfactory results for sufficiently small mesh sizes. \section{Coupled flow}\label{sec:coupled-flow} Finally, we apply our scheme to a buoyancy-driven, coupled flow problem. Due to the negligible Reynolds number in mantle convection models, the Stokes equations are used to model the creeping flow of the medium.
We consider the incompressible formulation for the Boussinesq approximation \cite{Ricard:2007:Treatise} \begin{equation}\label{eq:stokes-pde} \begin{aligned} - \nabla \cdot \sigma = \mathbf{F}(c)&, \quad \nabla \cdot \mathbf{u} = 0 \\ \sigma(\mathbf{u}, p) = 2 \mu \epsilon(\mathbf{u}) - pI&, \quad \epsilon(\mathbf{u}) = \frac{1}{2}\left(\nabla \mathbf{u} + (\nabla \mathbf{u})^\top\right), \end{aligned} \end{equation} where $\mu$ is a viscosity field, $p$ the pressure, $\mathbf{F}(c) := \text{Ra}\,c\,\mathbf{g}$ a temperature-dependent forcing term, $\text{Ra}$ the Rayleigh number, $\mathbf{g}$ the normalized gravitational acceleration, and $\sigma$ the Cauchy stress tensor associated with an incompressible, highly viscous Newtonian fluid. The \glspl*{pde} \cref{eq:advection-diffusion-pde} and \cref{eq:stokes-pde} are coupled through both the velocity, which is the solution of the Stokes system and drives the advection of the temperature, and the temperature, which enters the Stokes equation through the forcing term. In contrast to the benchmarks in \cref{sec:benchmarks}, the convective vigor in the following setups is controlled solely by the Rayleigh number $\text{Ra}$, \mbox{i.\,e.}\xspace we set $\kappa = 1$ in \cref{eq:advection-diffusion-pde}. When $\text{Ra}$ is large, so is the \gls*{rhs} of \cref{eq:stokes-pde}, and the velocity that enters \cref{eq:advection-diffusion-pde} has a large magnitude, resulting in advection-dominated transport. Note that since $\kappa > 0$, the look-back distance is set to $b = 1$ in the following benchmarks. The advection-diffusion equation \cref{eq:advection-diffusion-pde} is constrained by the Stokes equation \cref{eq:stokes-pde} at all times, and a non-linear system must be solved at each time-step, which is a computationally expensive challenge for large-scale simulations.
In practice, \cref{eq:advection-diffusion-pde} is thus usually \emph{decoupled} from the constraints on the velocity $\mathbf{u}$, so that the systems can be solved in an alternating fashion \cite{Kronbichler:2012:GJI,Waluga:2016:JCP}. The solution of the Stokes system in each time-step dominates the computational cost of this scheme and is therefore crucial to performance. We employ an efficient, monolithic matrix-free geometric multigrid solver as described in \cite{Kohl:2020:arXiv}, and large \gls*{cfl} numbers to reduce the number of required solves. The Stokes system is discretized with a mixed $\mathbb{P}_2$\xspace-$\mathbb{P}_1$\xspace finite element approximation. For a more in-depth discussion of efficient matrix-free geometric multigrid solvers on \gls*{hhg}, we refer to \cite{Kohl:2020:arXiv,Bauer:2020:SPPEXA,Bauer:2017:ANM,Bauer:2019:JoCS,Gmeiner:2016:JoCS,Gmeiner:2015:SISC}. In \cref{sec:pc}, we outline a predictor-corrector scheme (see \cref{alg:pc}) to approximate the solution of the non-linear, coupled system, and apply the method to two benchmark problems. \subsection{Strang-splitting} We employ a tighter coupling of the advection and diffusion steps via a Strang-splitting approach \cite{Strang:1968:SINUM}. Instead of an alternating application of the advection and diffusion steps, the diffusion step is split, and the advection step is framed by two fractional diffusion steps with reduced time-step size, giving a scheme with three stages. The algorithm is listed in \cref{alg:ad-splitting}.
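Structurally, one Strang-split step can be sketched as follows, with the two fractional diffusion solves and the semi-Lagrangian advection passed in as callables. This is a sketch of the splitting pattern only, not of the actual solver interfaces.

```python
def strang_step(c, tau, diffuse, advect):
    # First fractional diffusion step with half the time-step size.
    c = diffuse(c, 0.5 * tau)
    # Full semi-Lagrangian advection step.
    c = advect(c, tau)
    # Second fractional diffusion step.
    c = diffuse(c, 0.5 * tau)
    return c
```

For sub-steps that commute (e.g. constant-coefficient advection and diffusion of a single Fourier mode), the split step reproduces the exact propagator; in general, the symmetric arrangement yields second-order accuracy in $\tau$.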
\begin{algorithm} \footnotesize \begin{algorithmic}[1] \Procedure{ADS}{$c_h^n, \mathbf{u}_h$} \State $\tau_n = \text{CFL}_\text{max} \cdot h_\text{min} / \max_{\mathbf{x}\in\Omega}|\mathbf{u}_h(\mathbf{x}, t_n)|$ \Comment{determine time-step size} \State solve \cref{eq:linear-system} with $\tau_n^* = \tau_n/2$ to advance from $c_h^{n}$ to $c_h^{n+(1/3)}$ \Comment{diffusion} \State $\hat{\mathbf{x}} = \mathbf{X}(\mathbf{x}, t_{n+1}, t_{n})$ \Comment{calculate departure points (see \cref{sec:implementation})} \State $c_h^{n+(2/3)}(\mathbf{x}) = c_h^{n+(1/3)}(\hat{\mathbf{x}})$ \Comment{advection} \State solve \cref{eq:linear-system} with $\tau_n^* = \tau_n/2$ to advance from $c_h^{n+(2/3)}$ to $c_h^{n+1}$ \Comment{diffusion} \State \textbf{return} $c_h^{n+1}$ \EndProcedure \end{algorithmic} \caption{\footnotesize Time-stepping scheme, advection-diffusion, with Strang-splitting} \label{alg:ad-splitting} \end{algorithm} The splitting procedure noticeably increases the accuracy of the method in the benchmarks of this section; however, we did not observe relevant differences when applying it to the advection-diffusion benchmark in \cref{sec:benchmark-advection-diffusion}. \subsection{A predictor-corrector scheme}\label{sec:pc} To resolve the non-linear coupling of the advection-diffusion equation \cref{eq:advection-diffusion-pde} and the Stokes problem \cref{eq:stokes-pde}, we apply a predictor-corrector method \cite{vandenBerg:1993:GJI}, as outlined in \cref{alg:pc}. \begin{algorithm} \footnotesize \begin{algorithmic}[1] \Procedure{PC}{} \State solve \cref{eq:stokes-pde} for $\mathbf{u}_h^0$ \Comment{initial velocity field} \For{$n \in \{0, 1, \dots\}$} \State $\tilde{\mathbf{u}}_h(\mathbf{x}, t) \gets \mathbf{u}_h^n(\mathbf{x})$ \Comment{time-invariant velocity field at time-step $n$ for temp.
predictor} \State $c_h^\text{pr} \gets$ \Call{ADS}{$c_h^n, \tilde{\mathbf{u}}_h$} \Comment{predict temperature} \State solve \cref{eq:stokes-pde} for $\mathbf{u}_h^\text{pr}$ with RHS $\mathbf{F}(c_h^{\text{pr}})$ \Comment{predict velocity} \State $\tilde{\mathbf{u}}_h(\mathbf{x}, t) \gets \text{lerp}(\mathbf{u}_h^n, \mathbf{u}_h^\text{pr}, t)(\mathbf{x})$ \Comment{linear interpolation in $t$ between $\mathbf{u}_h^n$ and $\mathbf{u}_h^\text{pr}$} \State $c_h^{n+1} \gets$ \Call{ADS}{$c_h^n, \tilde{\mathbf{u}}_h$} \Comment{correct temperature} \State solve \cref{eq:stokes-pde} for $\mathbf{u}_h^{n+1}$ with RHS $\mathbf{F}(c_h^{n+1})$ \Comment{correct velocity} \EndFor \EndProcedure \end{algorithmic} \caption{\footnotesize Predictor-corrector scheme to couple \cref{eq:advection-diffusion-pde} and \cref{eq:stokes-pde}. In each time-step both \glspl*{pde} are solved twice. For the advection-diffusion step, \cref{alg:ad-splitting} is employed.} \label{alg:pc} \end{algorithm} For the temperature prediction step, we approximate the velocity field with the time-invariant state at $t = t_n$, \mbox{i.\,e.}\xspace the interpolation \cref{eq:linear-interpolation} yields $\mathbf{u}_h^n$ for all $t \in [t_n, t_{n+1}]$. A prediction $\mathbf{u}_h^\text{pr}$ for the velocity is then computed using the predicted temperature field for the \gls*{rhs} force term of \cref{eq:stokes-pde}. The correction step is then executed, employing the interpolation in \cref{eq:linear-interpolation} between $\mathbf{u}_h^n$ at $t = t_n$, and $\mathbf{u}_h^\text{pr}$ at $t = t_{n+1}$. Finally a new velocity solution is computed using the corrected temperature field. \subsection{Time-dependent convection benchmark}\label{sec:blankenbach} To verify our implementation, we consider a classical benchmark from Blankenbach et al.~\cite{Blankenbach:1989:GJI} (case 3) that was also investigated e.g.~in \cite{Vynnytska:2013:C+G}. 
The test considers time-dependent convection with constant viscosity ($\mu = 1$ in \cref{eq:stokes-pde}) and internal heating ($q = 1$ in \cref{eq:advection-diffusion-pde}) in a two-dimensional, rectangular domain $\Omega = [0, L] \times [0, H]$, $L = 1.5, H = 1$. The top, bottom, and side boundaries are denoted as $\Gamma_t$, $\Gamma_b$, and $\Gamma_s$. $\mathbf{n}$ and $\mathbf{t}$ denote the outward normal and tangential vectors, respectively. For the velocity, free-slip conditions are prescribed at the vertical boundaries ($\mathbf{u} \cdot \mathbf{n} = \sigma \mathbf{n} \cdot \mathbf{t} = 0$ for $\mathbf{x} \in \Gamma_s$), and no-slip conditions at the horizontal boundaries ($\mathbf{u} = 0$ for $\mathbf{x} \in \Gamma_t \cup \Gamma_b$). For the temperature, zero Dirichlet boundary conditions are prescribed at the top boundary ($c = 0$ for $\mathbf{x} \in \Gamma_t$), and homogeneous Neumann conditions otherwise ($\partial_\mathbf{n} c = 0$ for $\mathbf{x} \in \Gamma_b \cup \Gamma_s$). We employ the initial condition $c_0(\mathbf{x}) = 0.5 \left(1-x_2^2\right) + 0.01 \cos(\pi x_1 / L) \sin(\pi x_2 / H)$ given in \cite{Vynnytska:2013:C+G}. The benchmark solution is expected to exhibit a characteristic, periodic development of downwelling plumes and is quantified via the local extrema of the root-mean-square velocity $\mathbf{u}_\text{rms}$ and the Nusselt number $\text{Nu}$, defined as \begin{align}\label{eq:rms} \mathbf{u}_\text{rms} = \left( \frac{1}{|\Omega|} \int_\Omega \norm{\mathbf{u}}^2 \,dx \right)^{1/2}, \quad \text{Nu} = - \frac{\int_0^L \partial_{x_2} c(x_1, H) \,dx_1}{\int_0^L c(x_1, 0) \,dx_1} \enspace. \end{align} At low Rayleigh numbers, every plume shows the same behavior. With increasing $\text{Ra}$ the periodicity is characterized by every $n$-th plume behaving identically, resulting in a $Pn$-cycle.
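For reference, $\mathbf{u}_\text{rms}$ reduces to a plain mean of $\norm{\mathbf{u}}^2$ when approximated with equal-weight samples, since the domain volume cancels. The following sketch assumes uniformly distributed sample points rather than the finite element quadrature used in the actual computation.

```python
import numpy as np

def u_rms(u_samples):
    # u_samples: (N, d) array of velocity vectors at uniformly
    # distributed points; (1/|Omega|) * integral of |u|^2 becomes
    # the mean of the squared magnitudes.
    return np.sqrt(np.mean(np.sum(u_samples**2, axis=1)))
```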
In particular, the benchmark suggests that the convective motion transitions from a $P2$- to a $P4$-cycle between $\text{Ra} = \numINT{216000}$ and $\text{Ra} = \numINT{218000}$. We partition a $Pn$-cycle into $n$ time intervals, denoted as stages $S0, \dots, S(n-1)$. Each stage of a cycle comprises a local maximum of $\mathbf{u}_\text{rms}$ and $\text{Nu}$, followed by a local minimum. We apply the time-stepping scheme \cref{alg:pc} on two meshes of $24 \times 16$ and $48 \times 32$ squares (each divided into two triangles), and for two \gls*{cfl} numbers (\num{0.5} and \num{1}), running the simulation from $t = 0$ to $t = 3$. For $t \in [2.5, 3]$, the described repetitive cyclic motion of the plumes is observed. The computed solution is compared to the reference values in \cite[table 9]{Blankenbach:1989:GJI} for $\text{Ra} = \numINT{216000}$, and \cite[table 8a, Code Ha, $96 \times 64$]{Blankenbach:1989:GJI} for $\text{Ra} = \numINT{218000}$. We selected the latter reference from the various codes compared in \cite{Blankenbach:1989:GJI} as the presumably most accurate implementation, and note that no analytical solution is known. The relative errors (compared to the reference) of the minima and maxima of $\text{Nu}$ and $\mathbf{u}_\text{rms}$ are calculated. For $\text{Ra} = \numINT{216000}$, all extrema coincide with the reference up to a relative error of less than \percent{0.4} for both meshes and \gls*{cfl} numbers. For $\text{Ra} = \numINT{218000}$, a maximum relative error of less than \percent{1} for all extrema is reached for the finer mesh with $48 \times 32$ squares. We conclude that the computed results agree well with those reported in \cite{Blankenbach:1989:GJI,Vynnytska:2013:C+G}. The characteristic trends of $\mathbf{u}_\text{rms}$ and $\text{Nu}$ for both scenarios, with mesh size $48 \times 32$ and $\text{CFL} = 1.0$, are plotted in \cref{fig:blankenbach-plots}.
\begin{figure}[!ht] \footnotesize \centering \begin{subfigure}[t]{0.48\textwidth} \resizebox*{\textwidth}{!}{% \input{figures/ra_216000_cfl_1p0_level_3_nx_6.db.pgf} } \caption{Ra = \numINT{216000}} \end{subfigure} \hfill \begin{subfigure}[t]{0.48\textwidth} \resizebox*{\textwidth}{!}{% \input{figures/ra_218000_cfl_1p0_level_3_nx_6.db.pgf} } \caption{Ra = \numINT{218000}} \end{subfigure} \caption{$\mathbf{u}_\text{rms}$ and $\text{Nu}$ plotted over a $P2$-cycle for $\text{Ra} = \numINT{216000}$, and a $P4$-cycle for $\text{Ra} = \numINT{218000}$ (mesh: $48 \times 32$, CFL = 1).} \label{fig:blankenbach-plots} \end{figure} \subsection{Mantle convection on a spherical shell}\label{sec:mc} As a demonstrator for the applicability to large scale applications, we employ the coupled method to simulate isoviscous convection with $\text{Ra} = 10^8$ and no internal heating ($q = 0$). The domain approximates Earth's mantle by the spherical shell $\Omega = \{ \mathbf{x} \in \mathbb{R}^3 : r_\text{min} \leq \norm{\mathbf{x}}_2 \leq r_\text{max} \}$ with $r_\text{min} = 0.5$ and $r_\text{max} = 1$. The computational grid is composed of \numINT{19200} tetrahedral macro-cells, which are refined $4$ times and projected onto the sphere, resulting in more than \num{3.2e8} unknowns for the Stokes equation, and \num{1.0e8} \glspl*{dof} (and therefore particles) for the advection-diffusion equation, solved for in every time step. The initial and Dirichlet boundary conditions for the temperature are prescribed by $c_0(r) = \exp\left(-10 \frac{r - r_\text{min}}{r_\text{max} - r_\text{min}}\right)$ where $r$ is the distance to the origin. For the velocity, we set no-slip boundary conditions at all boundaries. We apply the predictor-corrector scheme in \cref{alg:pc} with Strang-splitting, and simulate \numINT{3000} time-steps with a \gls*{cfl}-number of $1$. 
The Stokes system is solved with a monolithic geometric multigrid solver that employs an inexact Uzawa smoother with weighted Jacobi relaxation \cite{Kohl:2020:arXiv}. Its excellent performance and scalability to linear systems with more than a trillion ($10^{12}$) unknowns is discussed in \cite{Gmeiner:2015:SISC,Gmeiner:2016:JoCS,Kohl:2020:arXiv}. For the diffusive term, \mbox{i.\,e.}\xspace the solution of the linear system \cref{eq:linear-system}, we employ a standard conjugate gradient iteration, which turns out to be sufficient. The simulation is performed on 400 nodes (\numINT{19200} processes) of \mbox{SuperMUC-NG}\xspace in roughly 16 hours. In \cref{fig:mc-details-parameters} we list a summary of the benchmark parameters. \Cref{fig:mc-details-stackplot} shows a stacked bar chart of the fractional run time of the relevant components of the predictor-corrector scheme. On average, the computation of a single time-step takes about \numINT{19} seconds. About \percent{85} of the total run time is spent on the solution of the Stokes system. Almost half of that time (roughly \percent{41} of the total run time) is spent on communication during the Jacobi relaxation. Especially during iterations on the coarser grids, communication time strongly dominates the time spent in the compute kernels. Strategies to further improve the performance of the coarse grid solver are presented in \cite{Buttari:2020:Block}.
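For reference, the conjugate gradient iteration used for the diffusive term is the textbook algorithm; a dense-matrix sketch is given below (the actual solver operates matrix-free on the \gls*{hyteg} data structures, so this is purely illustrative):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Textbook CG for a symmetric positive definite system A x = b."""
    x = np.zeros_like(b)
    r = b - A @ x          # initial residual
    p = r.copy()           # initial search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```

For symmetric positive definite systems such as the discretized diffusion operator, CG converges in at most as many iterations as there are unknowns, and usually far fewer.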
\begin{figure}[!ht] \footnotesize \centering \subcaptionbox{parameter summary\label{fig:mc-details-parameters}}{% \centering \resizebox*{0.48\textwidth}{!}{ \begin{tabular}{ll} \multicolumn{2}{c}{Mantle convection benchmark parameters} \\ \midrule machine & \mbox{SuperMUC-NG}\xspace \\ nodes & \numINT{400} \\ cores & \numINT{19200} \\ \addlinespace numerical scheme & predictor-corrector (see \cref{alg:pc}) \\ \addlinespace Stokes & \\ \hspace{6pt} discretization & $\mathbb{P}_2$\xspace-$\mathbb{P}_1$\xspace (Taylor-Hood) \\ \hspace{6pt} solver & \makecell[tl]{monolithic geometric multigrid (GMG)} \\ \hspace{6pt} \glspl*{dof} & \num{3.2e8} \\ \addlinespace temperature & \\ \hspace{6pt} discretization & $\mathbb{P}_2$\xspace finite elements + \gls*{mmoc} \\ \hspace{6pt} advection scheme & \gls*{mmoc} \\ \hspace{6pt} solver diffusion & conjugate gradient (CG) \\ \hspace{6pt} \glspl*{dof} & \num{1.0e8} \\ \addlinespace Rayleigh number & $10^8$ \\ \gls*{cfl} & \numINT{1} \\ avg. run time / ts & $\approx 19$s (incl. pred. + corr., I/O)\\ \end{tabular} }} \hfill \subcaptionbox{run time of components of \cref{alg:pc}\label{fig:mc-details-stackplot}}{ \includegraphics[trim=65 1 1 40,clip,width=0.48\textwidth]{figures/run_time_bar_chart.pdf} } \caption{Mantle convection benchmark: (\protect\subref{fig:mc-details-parameters}) Summary of parameters. (\protect\subref{fig:mc-details-stackplot}) Stacked bar chart of average fractional run time of components of \cref{alg:pc}. The percentage in parentheses indicates the fractional run time with respect to the overall run time of a predictor-corrector step.} \label{fig:mc-details} \end{figure} In \cref{fig:mc}, the contour surfaces of the temperature at $c_\text{cont} = 0.15$ are shown at time steps \# \numINT{200}, and \numINT{3000}. 
\begin{figure}[!ht] \footnotesize \centering \begin{subfigure}[t]{0.48\textwidth} \includegraphics[width=\textwidth]{figures/mantle_convection/mc_new_ts0200_contour_0.15.png} \caption{time-step 200} \end{subfigure} \hfill \begin{subfigure}[t]{0.48\textwidth} \includegraphics[width=\textwidth]{figures/mantle_convection/mc_new_ts3000_contour_0.15.png} \caption{time-step 3000} \end{subfigure} \caption{Contour plot at $c = 0.15$ of the temperature solution on the spherical shell for $\text{Ra} = 10^8$, colored by velocity magnitude.} \label{fig:mc} \end{figure} Thin, chaotically rising plumes are observed, as expected at such large Rayleigh numbers. \section{Parallel performance}\label{sec:performance} Scalability of the HHG data structures in \gls*{hyteg} and of the particle dynamics framework \gls*{mesapd} has separately been demonstrated on some of the world's largest supercomputers \cite{Kohl:2020:arXiv,Eibl:2018:PARCO}. It remains to assess the parallel performance of our \gls*{mmoc} implementation, in which both software architectures are coupled. For the scalability benchmark, we set up an elongated, three-dimensional cuboid domain, in which a smooth initial temperature field is initialized and transported along a constant velocity field, resembling flow through a pipe. This setup allows for straightforward parameterization of the domain size and the number of coarse grid primitives. We employ $\mathbb{P}_2$\xspace finite elements for the space-discretization and perform a single time-step, consisting of particle creation, particle integration, and temperature evaluation, including synchronization (steps (i)--(iii) in \cref{sec:particle-tracing}). All runs in this section were performed on \mbox{SuperMUC-NG}\xspace, ranked 15\textsuperscript{th} in the Top500\footnote{\url{https://www.top500.org/}} list (Nov 2020). The system is composed of \numINT{6336} so-called thin-nodes, \numINT{3072} of which we had access to at the time of writing.
Two Intel\textsuperscript{\textregistered} Skylake Xeon\textsuperscript{\textregistered} Platinum 8174 CPUs are installed on each node, amounting to 48 cores per node and \numINT{147456} cores in total on the accessible \numINT{3072} nodes. Each node is equipped with $96$\,GB of main memory. \paragraph{Strong scaling} We conduct a strong scaling experiment on a grid that consists of \numINT{3072} tetrahedral coarse grid elements, each of which is refined 4 times, resulting in $\approx \num{1.7e7}$ \glspl*{dof} in total. Keeping the grid fixed, we increase the number of processes, so that the number of particles per process decreases. As a baseline for the parallel performance we consider a single-node run. We plot the parallel performance and the number of updated particles per second in \cref{fig:scaling-strong}. For the largest setting with 64 nodes (\numINT{3072} processes) we obtain a parallel efficiency of roughly \percent{36} for one macro-cell and $\approx \numINT{5500}$ particles per process. \paragraph{Weak scaling} Additionally, we perform a weak-scaling experiment, in which the number of \glspl*{dof} per process is kept constant. In this setting, each process is assigned a single tetrahedral macro-cell and we refine the initial grid 5 times. This results in about \num{3.55e5} \glspl*{dof} per process. Starting from a single node, again used as the baseline for parallel efficiency, we scale up to the available \numINT{147456} processes of \mbox{SuperMUC-NG}\xspace. In the largest scenario, this amounts to more than \num{5.2e10} \glspl*{dof} in total for the discretization of the solution of the advection-diffusion equation \cref{eq:advection-diffusion-pde}. All runs maintain an excellent parallel efficiency of more than \percent{92}.
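The reported parallel efficiencies follow the usual definition relative to the single-node baseline; as a one-line sketch (the numbers in the test below are illustrative, not measured values):

```python
def parallel_efficiency(t_base, p_base, t, p):
    """Efficiency of a run with p processes taking time t, relative to a
    baseline run with p_base processes taking time t_base:
    E = (t_base * p_base) / (t * p)."""
    return (t_base * p_base) / (t * p)
```

Ideal strong scaling (halving the run time when doubling the process count) yields $E = 1$; values below one quantify the loss due to communication and load imbalance.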
\begin{figure}[!ht] \footnotesize \centering \begin{subfigure}[t]{0.49\textwidth} \resizebox*{\textwidth}{!}{% \input{figures/scaling_strong.pgf} } \caption{strong scaling, $\approx 1.7\times10^7$ temperature \glspl*{dof} (\mbox{i.\,e.}\xspace particles) in total} \label{fig:scaling-strong} \end{subfigure} \hfill \begin{subfigure}[t]{0.49\textwidth} \resizebox*{\textwidth}{!}{% \input{figures/scaling_weak.pgf} } \caption{weak scaling, $\approx 3.55 \times 10^{5}$ temperature \glspl*{dof} (\mbox{i.\,e.}\xspace particles) per process} \label{fig:scaling-weak} \end{subfigure} \caption{Weak and strong scaling results (ts = time-step)} \label{fig:scaling} \end{figure} Overall, we observe a run time per time step of about 5 seconds or less in all tested scenarios, and less than a second in the strong-scaling limit. We note, however, that it is not sufficient to consider the number of updated \glspl*{dof} per second alone as a measure to quantify the efficiency of the method. The results of \cref{sec:benchmarks} show that the stability and accuracy of the \gls*{mmoc} allow for large time-steps even in strongly advection-dominated problems. This may be an advantage in coupled convection simulations as they appear in Earth mantle convection, where the majority of the run time is spent on the solution of the Stokes system \cite{Kronbichler:2012:GJI,Gmeiner:2015:SISC}. Given sufficiently accurate coupling schemes, the \gls*{mmoc} not only reduces the simulation time by itself, but also permits advancing faster in time, due to less restrictive \gls*{cfl} limitations. \section*{Conclusion} In this article, we presented an implementation of an Eulerian-Lagrangian discretization based on the method of characteristics to treat the advection-diffusion equation in the advection-dominated regime.
Its numerical performance was demonstrated on multiple two- and three-dimensional benchmarks, including cases with pure advection, curved geometries, and discontinuous solutions. Motivated by the demand for extreme spatial resolution in mantle convection simulations, the parallel scalability of our implementation was assessed in weak and strong scaling benchmarks for the advection-diffusion equation. We demonstrated a parallel efficiency of more than \percent{92}, solving for more than \num{5.2e10} \glspl*{dof} per time-step on \numINT{147456} parallel processes. Finally, we applied the method to buoyancy-driven Stokes flow, embedding it into a non-linear scheme based on a predictor-corrector method. The scheme was verified through a classical benchmark for time-dependent convection, and its practical applicability to large-scale problems was demonstrated in a mantle convection benchmark on the spherical shell, with more than \num{4.0e8} combined unknowns solved for in each of \numINT{3000} time steps. \section*{Acknowledgements} The authors gratefully acknowledge the Gauss Centre for Supercomputing e.V. (\url{www.gauss-centre.eu}) for funding this project by providing computing time on the GCS Supercomputer SuperMUC-NG at Leibniz Supercomputing Centre (\url{www.lrz.de}). The authors also gratefully acknowledge financial support by the Bavarian State Ministry of Science and the Arts through the Competence Network for Scientific High Performance Computing in Bavaria (KONWIHR) and by the German Research Foundation through the Priority Programme 1648 Software for Exascale Computing (SPPEXA), RU 422/16-2.
\section{Introduction} \label{sec:intro} The Sun is a magnetically active star that shows various magnetic activity structures extending from its surface to its higher atmospheric layers, such as bipolar active regions (ARs) on the photosphere, filaments in the chromosphere, and coronal holes (CHs) in its corona. Through its magnetic activity, the Sun governs the conditions in the vicinity of Earth and throughout the heliosphere, which creates space weather and space climate. Space weather is defined as the effects of the solar wind and solar eruptive phenomena, such as flares and coronal mass ejections (CMEs), on Earth's magnetosphere, ionosphere, and thermosphere \citep{2006LRSP....3....2S}. Space weather conditions have drastic effects on our space- and ground-based technology \citep{2017RiskA..37..206E}. One of the most important solar magnetic features creating space weather, and in turn affecting the Earth, is the solar wind. Observations have revealed that there are three different types of solar wind: (i) steady fast solar winds originating in CHs, (ii) unsteady slow winds from opening magnetic loops and active regions, and (iii) transient winds from CMEs \citep{2006LRSP....3....1M}. The identification of CHs on the Sun as one of the source regions of the solar wind \citep{1968SSRv....8..258W} is therefore crucial for achieving predictive capabilities. As the source regions of the steady fast solar winds, CHs are identified as regions of low-density collisionless plasma, generally located above inactive parts of the Sun, where open magnetic field lines extend throughout the heliosphere \citep{2006LRSP....3....2S, 2009LRSP....6....3C}. The magnetic field inside a CH is known to be largely unipolar, and CHs show sharp and/or diffuse transitions at the boundaries between them and their surroundings \citep{2009LRSP....6....3C}.
The temporal evolution of CHs, as well as the area they cover on the Sun, depends on the solar activity cycle, also known as the Schwabe cycle \citep{1844AN.....21..233S}. During the minimum phase of a solar cycle, the CHs are observed to be larger and located mainly on the solar polar caps. On the inclining phase of a cycle, the CHs are observed to be present at any latitude and to be short-lived. During solar maximum, the CHs are smaller and only exist around mid-latitudes, while on the declining phase of the solar cycle there are more long-lived CHs at lower latitudes, and they form closer to the solar equator as the cycle progresses \citep{2020SoPh..295..161H}. Additionally, during the inclining and declining phases of a solar cycle, the CHs can evolve into structures extending from a solar pole to the solar equator. As CHs have lower densities and temperatures, and hence the lowest emission in the UV and X-rays in comparison to their surrounding environment consisting of active regions and quiet Sun, they appear as dark regions in solar images at wavelengths around 194 \AA, whether they are on-disk or off-limb CHs \citep{2009LRSP....6....3C}. Detection of CHs has been done by eye based on the He $\text{I}$ 10830 \AA\,near-infrared absorption line triplet \citep{2002SoPh..211...31H}, and by histogram-based intensity thresholding on 193 \AA\,and 195 \AA\,passband images of the Sun from the Atmospheric Imaging Assembly \citep[AIA;][]{2012SoPh..275...17L} on {\it the Solar Dynamics Observatory} \citep[SDO;][]{2012SoPh..275....3P} and the Extreme Ultraviolet Imaging Telescope \citep[EIT;][]{1995SoPh..162..291D} on {\it the Solar and Heliospheric Observatory} (SOHO), respectively \citep[CHARM;][]{2009SoPh..256...87K}.
Additionally, an automated method for the detection and segmentation of CHs based on multi-thermal intensity segmentation using 171 \AA, 193 \AA, and 211 \AA\,passband images of the Sun from the AIA/SDO \citep[CHIMERA;][]{2018JSWSC...8A...2G}, and a semi-automated method based on an intensity threshold modulated by the intensity gradient of a CH \citep[CATCH;][]{2019SoPh..294..144H}, have been developed. There are also methods based on supervised and unsupervised machine learning (ML). \citet{2014A&A...561A..29V} developed a set of segmentation procedures based on the spatial possibilistic clustering algorithm (SPoCA) to detect CHs in an unsupervised ML fashion. ARs and CHs identified by this algorithm are uploaded to the event catalogs in the Heliophysics Event Knowledgebase (HEK) \citep{2012SoPh..275...67H}. \citet{2018MNRAS.481.5014I} used convolutional neural networks \citep[CNNs;][]{2014arXiv1404.7828S, 2015Natur.521..436L} based on the U-net architecture \citep{10.1007/978-3-319-24574-4_28} to identify CHs in 193 \AA\,passband images of the Sun from the AIA/SDO. They trained their network using binary maps from the Kislovodsk Mountain Astronomical Station. Recently, \citet{2021A&A...652A..13J} utilized progressively growing CNNs using data from all 7 channels of the AIA/SDO (94 \AA, 131 \AA, 171 \AA, 193 \AA, 211 \AA, 304 \AA, and 335 \AA) as well as line-of-sight magnetograms from the Helioseismic and Magnetic Imager \citep[HMI;][]{2012SoPh..275..207S} on the SDO. For their network, the authors used binary maps from the manually reviewed SPoCA-CH data set \citep{2018mlts.book..365D}. In this study, we utilize the pixel-wise $k$-means algorithm, an unsupervised ML method, to detect CHs based on 171 \AA, 193 \AA, and 211 \AA\,passband images from the AIA/SDO.
To achieve this objective, we used data from each channel in different combinations, and compared the results from each combination to each other, as well as to those from CATCH and the HEK, to assess their performance. We first describe the data used in this study in Section~\ref{sec:data} and explain the analyses and present our results in Section~\ref{sec:analyses_res}. We discuss the results and conclude in Section~\ref{sec:dis_conc}. \section{Data} \label{sec:data} To detect the CHs in the solar corona, we use passband data with 2-second exposures from the AIA/SDO at wavelengths of 171 \AA, 193 \AA, and 211 \AA\,in different combinations (Figure~\ref{fig:data}). The AIA telescope on the SDO takes full-disk passband measurements of the Sun every 12 seconds in 4096$\times$4096 pixel images, where each pixel corresponds to 0.6 arcsec on the solar disk, leading to a spatial resolution of 1.5 arcsec \citep{2012SoPh..275...17L}. These 3 EUV bandpasses are centred on specific spectral emission lines of Fe $\text{IX}$ for 171 \AA, Fe $\text{XII, XXIV}$ for 193 \AA, and Fe $\text{XIV}$ for 211 \AA, which cover the temperature range from $6\times10^{5}$ to $2\times10^{6}$ K, corresponding to the upper transition region and quiet corona (171 \AA), the corona and hot flare plasma (193 \AA), and the active-region corona (211 \AA) \citep{2012SoPh..275...17L}. \begin{figure*} \begin{center} {\includegraphics[width=5.5in]{data.jpg}} \caption{Passband images of the Sun in 171 \AA\,(the left panel), 193 \AA\,(the middle panel), and 211 \AA\,(the right panel) taken by the AIA/SDO on 8 December 2016 at 00:00 UT.} \label{fig:data} \end{center} \end{figure*} \section{Analyses and Results} \label{sec:analyses_res} \subsection{Preprocessing data} To detect the CHs, we use solar images taken by the AIA/SDO in the 171 \AA, 193 \AA, and 211 \AA\,passbands in different configurations.
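For orientation, the plate scale quoted above maps pixel indices to helioprojective offsets; a minimal sketch (placing the disk centre at the image centre is a simplifying assumption; in practice the pointing comes from the FITS metadata):

```python
PLATE_SCALE = 0.6  # arcsec per pixel for full-resolution 4096x4096 AIA images

def pixel_to_helioprojective(ix, iy, center=(2048.0, 2048.0)):
    """Map pixel indices to helioprojective offsets (arcsec) from disk
    centre, assuming the disk centre sits at the image centre."""
    return ((ix - center[0]) * PLATE_SCALE,
            (iy - center[1]) * PLATE_SCALE)
```

This conversion is what relates pixel positions to the helioprojective longitudinal range of $\left[-400, 400\right]$ arcseconds used in the comparisons below.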
We also study the most efficient wavelength, or configuration of wavelengths, to identify the CHs. To achieve this, we compare our CH binary maps with those from CATCH. We also compare the CH polygons provided by the HEK with the CATCH binary maps, to have a baseline against which to compare our results. The CATCH binary maps are selected from the last two months of each year in the time range from November 2010 to December 2016, extending through solar cycle 24. The CATCH data in this period are reliable, with minimal uncertainties. The 237 CATCH CH binary maps contain only contributions from the longitudinal range of $\left[-400, 400\right]$ arcseconds in helioprojective coordinates, as in this region the CHs can be identified more robustly \citep{2021A&A...652A..13J}. We also imported CH polygons from the HEK database for the same dates as the CATCH maps, and converted them into binary maps. \begin{figure*} \begin{center} {\includegraphics[width=5.5in]{histograms.jpg}} \caption{Probability densities of AIA/SDO 171 \AA\,(top panel), 193 \AA\,(middle panel), and 211 \AA\,(bottom panel) intensities of the solar disk on 8 December 2016 at 00:00 UT. The left panels show the probability densities of the preprocessed data, while the right panels show the probability densities of the post-processed data. The vertical dashed lines show the mean ($\mu$) and $\mu\pm4\sigma$ values calculated to identify the threshold values.} \label{fig:histograms} \end{center} \end{figure*} In total, we analyze 237 days of data. For each date, we import the level 1 data at the 171 \AA, 193 \AA, and 211 \AA\,wavelengths and preprocess them using the {\it aiapy} \citep{barnes_w_t_2020_4274931,Barnes2020} and {\it SunPy} \citep{sunpy_community2020,stuart_j_mumford_2021_5751998} Python packages. This step consists of correcting the data for instrument degradation, pointing, and observer location.
Following these corrections, we register and align the data and normalize it to units of counts/pixel/second. We then correct the passband images for limb brightening using the annulus limb-brightening correction approach \citep{2014A&A...561A..29V}. Next, we deconvolve the passband images using the instrument point spread function for each wavelength and rescale them to 1024$\times$1024 pixels using a spline method. As the final step, we log-norm transform the data. Following these steps, we create histograms of each data set to determine the lower and upper threshold values. Determining these values allows us to increase the contrast in the data. To avoid using arbitrary values for these thresholds and to have a more systematic approach for determining them, we fit a bimodal Gaussian curve to each histogram where possible (Figure~\ref{fig:histograms}). For some dates, however, a bimodal Gaussian fit was not possible; for these dates, we used a unimodal Gaussian fit. Using the obtained parameters of the Gaussian fits, we calculated the lower and upper threshold values based on the mean and standard deviation of the higher peak (the right panels of Figure~\ref{fig:histograms}), because the lower peak represents the CH pixels \citep{2019SoPh..294..144H}. For each date in the dataset, we calculate a lower threshold value for each wavelength as ($\mu - 4\sigma$), while the upper threshold value is determined as ($\mu + 4\sigma$). Values below (above) the lower (upper) threshold are set to the corresponding threshold value. We then investigate the temporal variations in the calculated mean ($\mu$) and the lower threshold values ($\mu - 4\sigma$) (Figure~\ref{fig:temporal_threshold}).
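The clipping step described above can be sketched as follows, assuming $\mu$ and $\sigma$ of the higher Gaussian peak have already been fitted (the clamp of negative lower thresholds to zero is included for the cases where $\mu - 4\sigma < 0$; the function name is illustrative):

```python
import numpy as np

def contrast_stretch(image, mu, sigma, nsig=4.0):
    """Clip intensities to [mu - nsig*sigma, mu + nsig*sigma]; values
    beyond a threshold are set to the threshold value itself.  A negative
    lower threshold carries no physical meaning and is clamped to zero."""
    lower = max(mu - nsig * sigma, 0.0)
    upper = mu + nsig * sigma
    return np.clip(image, lower, upper)
```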
The $\mu$ values of the 193 \AA\,and 211 \AA\,passband images show variations in phase with the solar cycle, while the $\mu$ values of 171 \AA\,do not show such a trend (Figure~\ref{fig:temporal_threshold}a). The $\mu$ values for each passband image also show day-to-day fluctuations. Similarly, the lower threshold values show day-to-day fluctuations as well. These fluctuations have a wider range for the threshold values calculated for the 211 \AA\,passband images, especially during the maximum phase of the solar cycle, while the other two channels do not exhibit such wide fluctuations (Figure~\ref{fig:temporal_threshold}b). An important feature to note is the ``negative'' threshold values found for the 211 \AA\,passband images. There are 27 days on which the lower thresholds are negative. However, as this does not have a physical meaning, the threshold values for these days were set to zero. The negative values stem from the underlying shape of the fitted Gaussians. \begin{figure*} \begin{center} {\includegraphics[width=5.5in]{temporal_mu_th.jpg}} \caption{Calculated mean ($\mu$) (a) and lower threshold values ($\mu - 4\sigma$) (b) for the AIA/SDO 171 \AA\,(green), 193 \AA\,(red), and 211 \AA\,(blue) passband images for the study period. Note that there are 27 points below zero, meaning that no lower threshold value could be calculated; therefore, no thresholding was applied to the 211 \AA\,passband data on these dates.} \label{fig:temporal_threshold} \end{center} \end{figure*} \subsection{Pixel-wise clustering the images using the $k$-means algorithm} After increasing the contrast in each image based on its individual mean and standard deviation values, we created 4 different data sets: (i) the 193 \AA\,image, (ii) the 211 \AA\,image, (iii) the 193 \AA\,and 211 \AA\,composite image (2-channel composite, 2CC), and (iv) the 171 \AA, 193 \AA, and 211 \AA\,composite image (3-channel composite, 3CC). We then pixel-wise cluster each image using the $k$-means method.
This method is used to automatically cluster a given data set into $k$ groups of equal variance \citep{macqueen1967}. The most commonly used clustering criterion is the sum of squared Euclidean distances (SSD), also known as the within-cluster sum of squares, of each data point to the centroid of the cluster to which that data point is assigned \citep{LIKAS2003451}. The $k$-means algorithm first randomly selects $k$ cluster centroids, and then iteratively refines these initial centroids by assigning each data point to its closest cluster centroid. The algorithm then updates each cluster centroid to be the mean of its elements, minimizing the SSD \citep{wagstaff2001constrained,LIKAS2003451}. \begin{figure} \begin{center} {\includegraphics[width=3in]{screeplot.jpg}} \caption{Sum of squared distances (SSD) calculated for each number of clusters, ranging from 1 to 10, for the passband data in 193 \AA\,on 8 December 2016 at 00:00 UT.} \label{fig:scree} \end{center} \end{figure} The number of clusters, the $k$ value, is an input parameter of this method. To choose the optimum number of clusters, we used the scree-plot method \citep{10.1145/2723372.2737793}. In this method, we use $k = 1, 2, 3, \dots, 10$ and calculate the sum of squared distances (SSD) for each $k$ value. The results show that beyond $k = 3$, any further decrease in the SSD is very small compared to the previous ones, which means that the optimum $k$ value to use is 3 (Figure~\ref{fig:scree}). This indicates that there are darker regions, brighter regions, and regions that surround them, which can be attributed to the CHs, active regions, and the quiet Sun. The $k$-means method allows us to determine a threshold value for single-channel inputs, a threshold line for 2-channel inputs, and a threshold surface for 3-channel inputs in a systematic way that keeps us from choosing these thresholds arbitrarily.
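For illustration, a minimal Lloyd-iteration version of the pixel-wise clustering, returning the SSD used for the scree plot (a didactic sketch; a library implementation such as scikit-learn's can equally be used in practice):

```python
import numpy as np

def kmeans(X, k, n_iter=50, seed=0):
    """Minimal Lloyd iteration on pixel feature vectors X of shape
    (n_pixels, n_channels).  Returns cluster labels, centroids, and the
    within-cluster sum of squared distances (SSD)."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # squared Euclidean distance of every pixel to every centroid
        d2 = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        # update each centroid to the mean of its assigned pixels
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    ssd = ((X - centroids[labels]) ** 2).sum()
    return labels, centroids, ssd
```

Evaluating the SSD for $k = 1, \dots, 10$ reproduces the scree (elbow) curve from which $k = 3$ is chosen.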
Additionally, this method, when automated, is flexible enough to accommodate day-to-day variations in solar images, providing a dynamical response to them. We calculate segmentation maps for each date using the $k$-means method throughout solar cycle 24. Following that, we convert these maps to binary maps by merging the two clusters that identify the brighter regions (active regions) and the regions that surround the darker and brighter regions (quiet Sun). The reason we did not use $k = 2$ is to avoid overestimating the darker pixels in the passband images of the solar disk. We then remove small dot-like regions using the {\it morphology} module of the scikit-image package \citep{scikit-image}. This method requires two inputs, the smallest allowable object size and the connectivity, which we set to 200 and 10 pixels, respectively. We also used morphological closing with a disk-shaped footprint with a radius of 2 pixels to remove smaller holes in the identified CHs. The reason for using a smaller footprint is to avoid smoothing out larger bright points in the identified CHs, which might be related to Coronal Bright Points \citep{2006ApJ...642..562K, 2014ApJ...796...73H, 2018ApJ...864..165W}. In addition to the 4 different binary map types generated based on the 193 \AA, 211 \AA, 2CC, and 3CC data, we generated another type of binary map, based on the overlap between the binary maps of the 193 \AA\,and 211 \AA\,images, which we will refer to as the 2-Channel Overlap (2CO). In the 2CO binary maps, a pixel is marked as a CH pixel only if it is simultaneously identified as a CH pixel in the two binary maps from the 193 \AA\,and 211 \AA\,images. Pixels that are not simultaneously identified as CH pixels are accepted as non-CH pixels. \subsection{Pixel-wise evaluation metrics} To calculate the performance of the binary maps generated by the $k$-means method for each date, we used pixel-wise evaluation metrics.
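The conversion of a segmentation map into a binary CH map and the construction of the 2CO map reduce to simple array operations; a sketch (the assumption that the CH cluster is the one with the lowest centroid intensity follows the discussion of dark regions above):

```python
import numpy as np

def to_binary_map(labels, centroids):
    """Keep only the darkest cluster (lowest summed centroid intensity)
    as CH pixels; the brighter clusters are merged into the background.
    centroids has shape (k, n_channels)."""
    ch_cluster = int(np.argmin(centroids.sum(axis=-1)))
    return labels == ch_cluster

def overlap_map(map_193, map_211):
    """2CO: a pixel is a CH pixel only if it is identified as such in
    both the 193 A and 211 A binary maps."""
    return np.logical_and(map_193, map_211)
```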
As there is an imbalance between non-CH and CH pixels in the passband and composite images of the Sun, we use the intersection over union (IoU), also known as the Jaccard index \citep{https://doi.org/10.1111/j.1469-8137.1912.tb05611.x}, and the true skill statistic (TSS) \citep{hanssen1965relationship} as pixel-wise evaluation metrics. To calculate these metrics, we used the binary maps from CATCH. The IoU and TSS are calculated from the confusion matrix for each date using \begin{eqnarray} \label{eq:TSS} IoU &=& \frac{TP}{TP + FP + FN}, \\ \nonumber \\ TSS &=& \frac{TP}{TP+FN}-\frac{FP}{FP+TN} \end{eqnarray} \noindent where TP, TN, FP, and FN denote the pixel-wise calculated numbers of true positives, true negatives, false positives, and false negatives, respectively. \begin{figure*} \begin{center} {\includegraphics[width=5.5in]{IoU_TSS_violin.jpg}} \caption{The distributions of the calculated IoU (a) and TSS (b) values between the binary maps generated in this study and CATCH, together with those between the HEK database and CATCH. The white dots indicate the median value for each distribution. We also show the median values together with the median absolute deviation for each evaluation metric in the figure. The red, blue, orange, green, purple, and yellow colors show the AIA 193, AIA 211, 2CC, 3CC, 2CO, and HEK binary maps, respectively.} \label{fig:IoU_TSS} \end{center} \end{figure*} The distributions of the IoU values calculated between our binary maps and the CATCH binary maps, together with those between the HEK and the CATCH binary maps, show that the IoU for the HEK CH binary maps has a median value of 0.53$\pm$0.13, while our results from the AIA 193 and 2CC show median values of 0.62$\pm$0.14 and 0.64$\pm$0.14, respectively. This indicates a better overlap of the identified CHs from our method with those generated by CATCH.
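Both metrics follow directly from the pixel-wise confusion matrix; a minimal implementation of the two equations above:

```python
import numpy as np

def iou_tss(pred, truth):
    """Pixel-wise IoU (Jaccard index) and TSS from two boolean CH masks."""
    tp = np.sum(pred & truth)    # true positives
    fp = np.sum(pred & ~truth)   # false positives
    fn = np.sum(~pred & truth)   # false negatives
    tn = np.sum(~pred & ~truth)  # true negatives
    iou = tp / (tp + fp + fn)
    tss = tp / (tp + fn) - fp / (fp + tn)
    return iou, tss
```

A perfect match gives IoU $=$ TSS $= 1$, while a random prediction drives the TSS toward 0; unlike plain accuracy, neither metric is inflated by the large number of non-CH pixels.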
The other three binary maps from our study, the AIA 211, 3CC, and 2CO, result in IoU values of 0.51$\pm$0.20, 0.50$\pm$0.21, and 0.61$\pm$0.19, respectively (Figure~\ref{fig:IoU_TSS}a). The median TSS values of the AIA 193 and 2CC are 0.91$\pm$0.06 and 0.93$\pm$0.06, respectively (Figure~\ref{fig:IoU_TSS}b), while the median TSS value for the HEK is 0.73$\pm$0.13. These results indicate that our binary maps generated from the AIA 193 and 2CC are more in line with those from CATCH. The AIA 211, 3CC, and 2CO show median TSS values lower than the AIA 193 and 2CC (Figure~\ref{fig:IoU_TSS}b). \subsection{Coronal hole areas} To further validate our results against the HEK and CATCH results, we calculate the total areas of the CHs as a percentage of CH coverage on the solar disk. To achieve this, we first corrected each pixel in our binary maps for projection effects by applying \begin{eqnarray} \label{eq:area} A_i &=& \frac{A_{i, proj}}{\cos\alpha_i}, \end{eqnarray} \noindent where $A_i$ and $\alpha_i$ denote the corrected pixel area and the heliographic angular distance of each pixel to the center of the solar disk as seen from the AIA/SDO, respectively. \begin{figure*} \begin{center} {\includegraphics[width=5.5in]{corr_temporal.jpg}} \caption{Temporal evolution of the correlation coefficients between the total CH areas from our method and HEK against the CATCH data from November 2010 through December 2016, extending through solar cycle 24. Note that the correlations are calculated using data from the last two months of each year (see text).} \label{fig:corr_tseries} \end{center} \end{figure*} We calculated the Pearson correlation coefficients for each year between the results from our study and the HEK binary maps against those from CATCH (Figure~\ref{fig:corr_tseries}). We note that we use the last two months of each year to calculate the correlations. Similar to the results obtained for the IoU and TSS, the AIA 193 and 2CC generally provide higher correlations throughout the study period.
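The projection-corrected CH coverage can then be sketched as follows (a simplified version that assumes the angular distances $\alpha_i$ of the on-disk pixels are already available):

```python
import numpy as np

def ch_disk_percentage(ch_mask, alpha):
    """Percentage of the projection-corrected disk area covered by CHs.
    alpha holds the heliographic angular distance (radians) of each
    on-disk pixel from disk centre; following A_i = A_proj / cos(alpha_i),
    each pixel area is weighted by 1/cos(alpha)."""
    weights = 1.0 / np.cos(alpha)
    return 100.0 * weights[ch_mask].sum() / weights.sum()
```

Pixels near the limb (large $\alpha_i$) thus contribute more true surface area than pixels near disk centre.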
Interestingly, after 2014, the correlation coefficients calculated for every binary map become similar and evolve in parallel until 2016 (Figure~\ref{fig:corr_tseries}). \begin{figure*} \begin{center} {\includegraphics[width=5.5in]{Areas_compare.jpg}} \caption{The total percentage areas from this study (a to e) and the HEK database (f) as a function of the areas from CATCH. The black solid lines show the linear fits, while the shaded areas show the uncertainty. We also show the Pearson correlation coefficients and their statistical significances. The color coding is the same as in Figure~\ref{fig:IoU_TSS}.} \label{fig:areas} \end{center} \end{figure*} We also calculated the overall correlations between the binary maps from our study and HEK, and the binary maps from CATCH. The highest correlation of 0.88 for the CH areas is observed between the HEK and the CATCH data, while our 2CC gives a correlation coefficient of 0.82, followed closely by the AIA 193 with a correlation coefficient of 0.81. The correlation coefficients for the 2CO, 3CC, and AIA 211 are 0.79, 0.75, and 0.73, respectively (Figure~\ref{fig:areas}). \subsection{Comparison of the CH binary maps} We then select three dates that represent different phases of solar cycle 24 to compare the CH binary maps. These dates are (i) 05 November 2012, on the inclining phase before the cycle maximum, (ii) 07 December 2014, right after the solar cycle maximum, and (iii) 07 December 2016, on the declining phase of solar cycle 24 (Figure~\ref{fig:three_dates}). \begin{figure*} \begin{center} {\includegraphics[width=3.5in]{Sol_Im_2012_2014_2016.jpg}} \caption{The CH binary maps for 05 November 2012 (top row), 07 December 2014 (middle row), and 07 December 2016 (bottom row) identified from the AIA 193, AIA 211, 2CC, 3CC, and 2CO, together with the binary maps from the HEK and CATCH. The vertical white dashed lines indicate the longitudinal range of $\left[-400, 400\right]$ arcseconds in helioprojective coordinates.
The color coding is the same as in Figure~\ref{fig:IoU_TSS}.} \label{fig:three_dates} \end{center} \end{figure*} On the ascending phase of solar cycle 24, on 05 November 2012, our method identifies smaller CHs. The results from the AIA 193, 3CC, and 2CO are observed to be more in line with those from CATCH, where there is only one CH at $\left[0, 500\right]$ arcseconds in helioprojective coordinates. The results from the AIA 211 and the 2CC, on the other hand, are more in line with those from the HEK database (the top row of Figure~\ref{fig:three_dates}). On 07 December 2014, a few months after the cycle maximum, the binary maps from the AIA 193, the 3CC, and the 2CO show similar CH coverage on the solar disk to CATCH within the longitudinal range of $\left[-400, 400\right]$ arcseconds. All of the CH binary maps from our method, except for the 3CC, are similar to the CHs from the HEK, showing a small coronal hole near $\left[-750, 500\right]$ arcseconds (the middle row of Figure~\ref{fig:three_dates}). On the declining phase of solar cycle 24, on 07 December 2016, the CH areas identified using the AIA 193, the 2CC, and the 3CC are in line with those from the HEK database and CATCH. On this date, the total CH area coverage also reaches its maximum, extending from the southern solar pole to the solar equator (the bottom row of Figure~\ref{fig:three_dates}). To evaluate the consistency of our results, we plotted the CHs detected using the 2CC for the dates from 3 November 2015 through 11 November 2015 (Figure~\ref{fig:2cc_seq}). The temporal evolution of the detected CHs close to the solar equator is consistent with the solar rotation. The formation and evolution of a new CH, again close to the solar equator, from 6 November through 11 November can also be observed. In addition, the temporal evolution of the large CH on the northern solar hemisphere is consistent on each date (Figure~\ref{fig:2cc_seq}).
\begin{figure*} \begin{center} {\includegraphics[width=6.5in]{2CC_sequence.jpg}} \caption{The CH binary maps for a time sequence from 03 through 11 November 2015 identified from the 2CC.} \label{fig:2cc_seq} \end{center} \end{figure*} To further investigate the consistency, we checked the day-to-day temporal evolution of the areas during 2012 and 2016 (Figure~\ref{fig:area_temporal}). Note that the areas are calculated for the last two months of each year. In 2012, there is generally good agreement between our 2CC, CATCH, and HEK CHs, especially during December, whereas in November the HEK CH areas are larger compared to our 2CC and CATCH (Figure~\ref{fig:area_temporal}a). During 2016, on the other hand, CH areas from the three sources covary with some small differences in amplitude (Figure~\ref{fig:area_temporal}b). \begin{figure*} \begin{center} {\includegraphics[width=4in]{area_temporal.jpg}} \caption{The CH areas during the last two months of 2012 (a) and 2016 (b). The coral, gold, and maroon lines represent 2CC, HEK, and CATCH data, respectively.} \label{fig:area_temporal} \end{center} \end{figure*} \section{Discussion and Conclusions} \label{sec:dis_conc} CHs are the source regions of the steady fast solar winds, which result in CIR-driven storms, the so-called HILDCAA events \citep{TSURUTANI1987405}. In comparison to their surroundings, CHs have lower plasma densities and temperatures, and therefore they have the lowest emission in the UV and X-ray wavelength ranges. This physical feature makes them appear as darker regions in passband images of the Sun taken at these wavelengths. CHs are also known to have very complex magnetic structures extending from the photosphere to the corona \citep{2018ApJ...863...29H, 2021SoPh..296..141H}, where the open magnetic field lines extend into the interplanetary medium. They also show solar cycle dependence.
There are several methods to identify CHs in the solar images taken by AIA/SDO and EIT/SOHO based on histograms \citep{2009SoPh..256...87K}, multi-thermal intensity segmentation \citep{2018JSWSC...8A...2G}, and an intensity threshold modulated by the intensity gradient of a CH \citep{2019SoPh..294..144H}. Recently, unsupervised and supervised ML methods have been used to detect CHs using single- or multi-channel passband data from the AIA/SDO \citep{2014A&A...561A..29V,2018MNRAS.481.5014I,2021A&A...652A..13J}. The supervised ML methods mainly rely on CNNs for image segmentation. These methods, however, require a reliable training data set, that is, CH polygons detected either by an observer or by an unsupervised method. In our study, to identify the CHs we used a simple clustering algorithm, $k$-means, to cluster pixel-wise the passband images of the Sun taken in 171 \AA, 193 \AA, and 211 \AA\, by the AIA/SDO covering the time period between November 2010 and December 2016. In addition to using a single-channel approach, we used different combinations of these channels. To determine the lower and upper threshold values, we fitted bimodal Gaussians to the probability densities of intensities for each channel on each date. We then calculated the thresholds based on the mean and standard deviation of the local maximum at higher intensities. To cluster the passband images, we used the $k$-means method, where the optimum number of clusters, 3, is calculated based on the scree plot. The $k$-means method, together with pre- and post-processing steps, enabled us to build an automated, flexible approach which dynamically responds to day-to-day variations in solar images. As a result, we obtained five different binary maps of the identified CHs: (i) AIA 193, (ii) AIA 211, (iii) 2CC, (iv) 3CC, and (v) 2CO. We then calculated pixel-wise evaluation metrics based on the CH binary maps from CATCH and compared our results with each other as well as with those from the HEK database.
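As an illustration of the clustering step, the sketch below models the intensity histogram as a sum of two Gaussians and applies a minimal pixel-wise $k$-means ($k=3$) to a clipped intensity image, flagging the darkest cluster as the CH candidate. This is a simplified sketch of the procedure described above, not the paper's code: the fit itself (e.g. via \texttt{scipy.optimize.curve\_fit}), the exact $\mu\pm n\sigma$ multiplier, and all function names are illustrative assumptions.

```python
import numpy as np

def bimodal(x, a1, m1, s1, a2, m2, s2):
    """Sum of two Gaussians used to model the intensity histogram;
    thresholds would be taken as m2 +/- n*s2 of the brighter component."""
    return (a1 * np.exp(-0.5 * ((x - m1) / s1) ** 2)
            + a2 * np.exp(-0.5 * ((x - m2) / s2) ** 2))

def kmeans_1d(values, k=3, n_iter=50):
    """Minimal pixel-wise k-means on a 1-D intensity array."""
    centers = np.quantile(values, np.linspace(0.1, 0.9, k))
    for _ in range(n_iter):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels, centers

def ch_mask(image, lo, hi, k=3):
    """Clip intensities to [lo, hi], cluster into k groups, and flag the
    darkest cluster as the coronal-hole candidate."""
    flat = np.clip(image.ravel().astype(float), lo, hi)
    labels, centers = kmeans_1d(flat, k)
    return (labels == np.argmin(centers)).reshape(image.shape)
```

For the composite (2CC, 3CC) maps, the same clustering would be run on stacked channel intensities rather than a single 1-D array.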
Following that, we calculated the total percentage area identified as a CH per date, after correcting the binary maps for projection effects. Our results show that the 2CC, a composite image using only the 193 and 211 \AA\ passband images, provides the best results, closely followed by those from AIA 193. The median IoU and TSS values for the 2CC are 0.64$\pm$0.14 and 0.93$\pm$0.06, respectively, while they are 0.62$\pm$0.14 and 0.91$\pm$0.06 for the AIA 193. Our results show higher similarity to the CATCH results than the HEK database does (IoU = 0.53$\pm$0.13 and TSS = 0.73$\pm$0.13). Our results provided better overlap with the CATCH data than those obtained by the CHRONNOS method \citep{2021A&A...652A..13J} for the same period, which provided mean IoU and TSS values of 0.63 and 0.81, respectively. This method uses all seven channels from the AIA/SDO and line-of-sight magnetograms from the HMI/SDO in progressively growing CNNs \citep{2021A&A...652A..13J}. Even though our results from AIA 193 and 2CC also provide high overall correlations, they are still lower than the correlation coefficient of 0.88 between the HEK binary maps and CATCH. We also showed the consistency of our results, especially from the 2CC method, when the formation and temporal evolution of the CHs are considered. Our method was able to identify and track the CHs from 3 November through 11 November, for 9 consecutive days. Additionally, the temporal variations of the CH areas from our method follow the trends observed in the CATCH and HEK CH areas. To investigate the effects of the chosen lower and upper threshold values, we also calculated the same evaluation metrics and areas for the threshold ranges of $\mu\pm3\sigma$ and $\mu\pm5\sigma$, as well as for cases where we do not apply any thresholding at all. Similarly, we calculated the thresholds based on the bimodal Gaussian fit and the mean and standard deviation of the local maximum at the higher intensities.
However, using different thresholds, and also not using any thresholds, provided lower evaluation metrics as well as lower correlation coefficients for the total areas. Interestingly enough, our results show significant discrepancies between the CHs identified using our method, HEK, and CATCH when we look at the temporal variations in the correlation coefficients calculated for the total areas. Recently, some steps have been taken to create a reliable database where there is a consensus about the CH boundaries and their uncertainties are being discussed \citep{2021ApJ...918...21L,2021ApJ...913...28R}. In conclusion, as an unsupervised ML method, $k$-means clustering provides results comparable to, and in some metrics better than, those from complex methods such as CNNs. One of the most important steps in this method is the preprocessing of the data and the choice of the lower and upper threshold values in a systematic way, which can then lead to the automation of CH detection for any given date or date range. More importantly, our study shows that there is a need for a CH database for which a consensus about the CH boundaries is reached by independent observers, and that can be used as the ``ground truth'' when using a supervised method, or simply to evaluate the goodness of the models. \acknowledgments This research is supported by the Helmholtz Imaging Platform, Solar Image-based Modelling (SIM) ZT-I-PF4-016.
\section{Introduction} This paper is focused on the statistical analysis of a secondary instability of turbulent streaks in transitional plane Couette flow. Plane Couette flow (PCF), the flow between two parallel moving planes (sketched in figure~\ref{f1}), displays a discontinuous transition to turbulence. The laminar baseflow is linearly stable for all Reynolds numbers $R$ and can coexist in space and in time with turbulent flow. Turbulence can be sustained above a first Reynolds number $R_{\rm g}\simeq 325$, and takes the form of oblique turbulent bands \cite{prigent,RM,BT07,BT11} (Fig.~\ref{f1_}), which correspond to a modulation of turbulence \cite{RM,BT07}. All laminar troughs disappear above a second Reynolds number $R_{\rm t}\simeq 415$. Other wall-bounded flows like plane Poiseuille flow display oblique laminar-turbulent coexistence \cite{ATK}. Poiseuille pipe flow (PPF) displays coexistence of laminar and turbulent flow as well, in an unsteady manner: the low Reynolds number puff regime displays relaminarisations and splitting, while turbulence invades the whole pipe in the high Reynolds number slug regime \cite{avila,DWK,SK}. \begin{figure} \centerline{\includegraphics[height=3cm]{f1a.eps}}\caption{Sketch of plane Couette flow indicating the parameters and coordinate system.}\label{f1}\end{figure} From a microscopic point of view, the turbulent regime beyond $R_{\rm t}$ is well understood in terms of the self-sustaining process of turbulent streaks and streamwise vortices \cite{W,schhu}. The turbulence inside the bands consists of velocity streaks and streamwise vortices as well \cite{BT07} (Fig.~\ref{f1_}). However, the self-sustaining process of wall-bounded turbulence alone is insufficient to explain the coexistence of laminar and turbulent flow at low Reynolds number. Several results point toward possible mechanisms.
Using DNS of the flow, one can show that the wavelengths of the bands are related to the Reynolds number through the balance of time-averaged diffusion and advection in the laminar part of the flow \cite{BT07}. DNS, coupled-map, and reaction-diffusion models \cite{BPPF,B,PRL} brought insight into the role of the advection of small-scale chaos by the large-scale coherent flow in turning local transient chaos into global sustained turbulence. \begin{figure} \centerline{{\large \textbf{(a)}\hspace{1mm}\includegraphics[width=4cm,clip]{normeym062_.eps}\hspace{1mm}\textbf{(b)}\hspace{1mm}}\includegraphics[width=4cm,clip]{normeyp062_.eps}} \caption{Example of a turbulent oblique band, colour plot of $\bf{v}^2$ (a): in the $y=-0.62$ plane, (b): in the $y=0.62$ plane. Computed by DNS in a periodic domain, $L_x=110$, $L_z=72$, $R=370$. }\label{f1_} \end{figure} In this paper, we seek to determine the phenomena behind these models and averaged results. A specific activity found in the trailing edges of puffs and slugs of pipe flow \cite{DWK,SK} provides a starting point. One can see the formation of azimuthal vorticity \emph{via} a destabilisation of the shear layer of low-speed streaks. The vorticity is advected toward (puff regime) or away from (slug regime) the turbulent zone and feeds turbulence. This led to the suggestion of a self-sustaining process of the puffs \cite{SK}. Besides, it was argued that the advected vorticity was responsible for the expansion of slugs \cite{DWK}, provided the speed of the vortices is lower than the advection velocity of the slug. The study of a slightly idealised shear layer indicated that the vorticity formation mechanism is along the lines of a Kelvin--Helmholtz instability \cite{SK}. A similar type of activity can be found in the leading edge of puffs, due to the inflectional nature of the velocity profile \cite{HDAS}. Aida \emph{et al.} \cite{ATK} reported a comparable phenomenon in plane Poiseuille flow in the spot regime.
Similar short-wavelength instabilities have been pointed out in the growing spots of PCF \cite{ispspot}. They lead to the formation of spanwise vorticity that differs from the tongues of spanwise vorticity associated with the self-sustaining process of turbulence \cite{schhu}. A preliminary study in an idealised situation indicates that, again, the vorticity formation mechanism is very likely a Kelvin--Helmholtz instability \cite{ETC}. It was shown that the quadrupolar flow around the spot advected the spanwise vorticity toward the edges, where it very likely contributes to the extension of the spot. The advection of such vorticity may very well be central to the sustainment mechanism of the steady bands as well. Unlike PPF or the spots of PCF, the bands have a steady large-scale flow. Its full three-dimensional, three-component structure was first computed in DNS by Barkley \& Tuckerman by averaging in time over $2000$ eddy turnover times \cite{BT07}. A final wall normal average shows a circulation along the bands \cite{BT07,M11} (streamlines in figure~\ref{2d2c}(a), sketched in figure~\ref{2d2c}(b)), which varies sinusoidally over a scale of fifty half gaps. This large-scale flow is strongest in the intermediate (or overhanging \cite{cole66}) zone between laminar and turbulent flow. This zone is somewhat equivalent to the trailing edge of puffs and slugs. Advected rolls in the intermediate zone may therefore constitute a starting point for the study of the sustainment of the bands.
\begin{figure} \begin{flushleft} \textbf{(a)}\end{flushleft} \centerline{\includegraphics[height=5cm,clip]{streamlines.eps}\hspace{1cm} \begin{pspicture}(7,7) \rput(0,6.5){{\Large \textbf{(b)}}} \psline{}(0.5,0.5)(0.5,6.5) \psline{}(0.5,6.5)(6.5,6.5) \psline{}(6.5,6.5)(6.5,0.5) \psline{}(0.5,0.5)(6.5,0.5) \psline[linecolor=gray]{}(0.5,2)(5,6.5) \psline[linecolor=gray]{}(2,0.5)(6.5,5) \rput(3.5,4){$L$} \rput(3.25,5){$I$} \rput(3.75,2){$I$} \rput(2.25,5.5){$T$} \rput(5.25,1.5){$T$} \psline{->}(1.75,4.75)(1.45,4.45) \psline{->}(2,4.5)(1.25,3.75) \psline{->}(2.25,4.25)(0.75,2.75) \psline{->}(2.5,4)(1.75,3.25) \psline{->}(2.75,3.75)(2.45,3.45) \psline{->}(4.25,2.25)(5.75,3.75) \psline{->}(4.5,2)(5.25,2.75) \psline{->}(4.75,1.75)(5.05,2.05) \psline{->}(4,2.5)(4.75,3.25) \psline{->}(3.75,2.75)(4.05,3.05) \psline[linecolor=gray]{}(0.5,5.6)(1.4,6.5) \psline[linecolor=gray]{}(5.6,0.5)(6.5,1.4) \end{pspicture} } \caption{Large scale flow around the band. (a): Resulting flow along the bands. The streamlines indicate the wall normal averaged streamwise and spanwise velocity fields. The colour levels indicate the wall normal averaged norm of velocity $\mathbf{v}^2$ (DNS result). (b): sketch of that flow, noting the turbulent (T) and laminar (L) zones as well as the intermediate (I) zones between the two.} \label{2d2c} \end{figure} In order to address this question, the article is organised as follows. The first section contains a description of the system and a reminder of our procedure (\S\ref{sys}). The roll formation in the velocity streaks is displayed (\S~\ref{inst}). The link to spanwise vorticity is then explained. The measurements, autocorrelation function and advection velocity of perturbations are then considered in \S\ref{mes}. These results are eventually summed up and discussed in the last section (\S\ref{concl}). \section{Numerical procedure\label{sys}} The system studied is plane Couette flow, with periodic in-plane boundary conditions (Fig.~\ref{f1}).
The velocity of the moving plates placed at $y=\pm h$ is $\pm U$. These quantities are used to make dimensionless velocities ($\mathbf{v}/U$), lengths ($\mathbf{x}/h$) and times ($tU/h$). The Reynolds number $hU/\nu$, with $\nu$ the kinematic viscosity, is the control parameter for the transition in large systems \cite{prigent,RM,BT11}. Together with the sizes $L_x$, $L_z$ (Fig.~\ref{f1}), they set the whole statistical behaviour of the flow \cite{RM,BT11,PM}. In addition to the coordinate system $(x,y,z)$, we call $z'$ the direction along the band. The flow field can be written $\mathbf{V}=y \mathbf{e}_x+\mathbf{v}$, where $y \mathbf{e}_x$ is the laminar baseflow and $\mathbf{v}$ the departure from the laminar baseflow. The square norm of the departure, $\mathbf{v}^2$, is called the energy. The incompressible Navier--Stokes equations are numerically integrated using J. Gibson's Direct Numerical Simulation code {\sc channelflow} \cite{gibs}. More details on our implementation and use of the code can be found in previous articles \cite{RM,MR}. The in-plane resolution is $N_{x,z}/L_{x,z}=4$ (using the $2/3$ de-aliasing rule) and the wall normal resolution is $N_y=27$. This resolution ensures reliable quantitative results \cite{dsc10}. The oblique band regime (Fig.~\ref{f1_}) is found between Reynolds numbers $R_{\rm g} =325\pm 5$ and $R_{\rm t}=410\pm 5$. This is in agreement with previous numerical simulations and experiments \cite{prigent,BT07}. A domain containing one band is sufficient to study the roll formation in PCF. Therefore, the data presented in the band case are at $R=370$, in the middle of the range $[R_{\rm g}; R_{\rm t}]$, in a domain of size $L_x=110$ and $L_z=72$, which is optimal to accommodate one band. The band regime is obtained by the following procedure: a smooth random velocity field is integrated in time for a duration of $500h/U$ at $R=500$, in order to obtain uniform wall turbulence.
The Reynolds number is then decreased to $R=370$, and the velocity field is integrated in time for a duration of $1500h/U$ in order to reach the steady band regime. This velocity field is used as an initial condition to perform the study of the vorticity formation. The flow is statistically steady and the lifetime of turbulence is tremendously long at this Reynolds number \cite{shi}; therefore, the same results can be obtained after any time integration of the initial condition. \section{Spanwise vorticity formation: microscopic description in bands\label{inst}} In this section we first verify that departures from the velocity streaks can be viewed as instabilities (\S~\ref{frame}). Then, the roll formation in the velocity streaks is identified \emph{via} two-dimensional visualisation of DNS (\S~\ref{visu}). In order to go beyond visualisations, we use spanwise vorticity to characterise the rolls (\S\ref{vort}), from which we define a marker of the rolls. \subsection{A statistically steady and coherent flow \label{frame}} We will investigate a possible secondary instability of the streamwise velocity streaks. As is often the case in the study of secondary instabilities in wall-bounded turbulence \cite{SK,W,schhu}, we will consider the streaks to be frozen, since they are evolving on a long time scale. In order to test this approximation, one can compute the normalised autocorrelation function of the streamwise velocity averaged over a duration $\Delta t$ (Fig.~\ref{f4bf}). The autocorrelation function is computed as a function of $z'$, the diagonal direction of the band. This coordinate is defined such that $\mathbf{e}_{z'}=1/(\sqrt{L_x^2+L_z^2})(L_x\mathbf{e}_x+L_z\mathbf{e}_z)$. On each diagonal starting at $x=x_0,z=0$, one has $z'=\sqrt{(x-x_0)^2+z^2}$.
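The normalised autocorrelation used here can be computed as in the minimal sketch below, for a field sampled along one diagonal; it assumes periodicity of the sampled signal, consistent with the periodic in-plane boundary conditions (the function name is ours).

```python
import numpy as np

def autocorr(f):
    """Normalised autocorrelation of a periodic 1-D signal:
    C(dz) = <f'(z) f'(z + dz)> / <f'^2>, with f' the mean-subtracted
    field. np.roll implements the periodic shift."""
    g = f - f.mean()
    c = np.array([np.mean(g * np.roll(g, -d)) for d in range(len(g))])
    return c / c[0]
```

Applied to the time-averaged streamwise velocity sampled along $z'$, the modulation of this function reveals the streak periodicity and its envelope the coherence length.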
\begin{figure} \centerline{\includegraphics[width=6.5cm,clip]{cor_z_vxmoy_Tym062.eps}} \caption{Autocorrelation function of the time-averaged streamwise velocity $v_x$ along a diagonal $z'$, as a function of $\delta z'$, for increasing averaging times, sampled in a statistically steady band.} \label{f4bf} \end{figure} One can note two interesting facts in support of the frozen-streak framework. Firstly, the flow is nearly periodic and coherent over a long distance: this is shown by the modulation of the correlation function and the slowly decreasing envelope. The first minimum is approximately at $\delta z'=2$ and the modulation has a non-negligible amplitude over a distance $\delta z'\gtrsim 20$. Secondly, the correlation function is time invariant and the flow remains coherent over long periods of time: coherence starts to disappear only for averaging durations larger than $\Delta t=100$. This result is not surprising, since the turbulent bands are merely a long-wavelength modulation of the classical low Reynolds number velocity streaks. This gives a firm basis to the preliminary linear analysis \cite{ETC}. \subsection{A typical example \label{visu}} We present a typical example of the roll occurring in an intermediate zone of the band. The turbulent bands are obtained by our procedure, and are followed in time until clear realisations of the events are seen. Colour levels of the streamwise velocity field in an $x-y$ plane at successive instants are displayed in figure~\ref{f2}. The whole streamwise length and gap are included (Fig.~\ref{f2} (a)), and one can see the turbulent zone at $90\lesssim x\lesssim 110$, $0\lesssim x \lesssim 10$, and the intermediate (or overhanging \cite{cole66}) zones at $60\lesssim x\lesssim 90$ and $10\lesssim x\lesssim 40$. In most of the flow, one can see the typical velocity streaks of plane Couette flow in the long-scale ($L_x=110$) modulation of the bands.
\begin{figure} \centerline{\includegraphics[width=17cm,clip]{fig1amod.eps}} \centerline{{\large\textbf{(b)}\hspace{0.1cm}\includegraphics[width=3cm,clip]{T330_Z33_.eps}\hspace{0.1cm}\textbf{(c)}\hspace{0.1cm} \includegraphics[width=3cm,clip]{T340_Z33_.eps}\hspace{0.1cm}\textbf{(d)}\hspace{0.1cm} \includegraphics[width=3cm,clip]{T350_Z33_.eps}\hspace{0.1cm}\textbf{(e)}}\hspace{0.1cm} \includegraphics[width=3cm,clip]{T360_Z33_.eps}} \caption{Colour levels of the streamwise velocity field in a $z={\rm constant}$ plane at successive instants. (a): $T=20$. The Intermediate, Turbulent and Laminar zones are indicated by letters I, T and L and separated by gray lines. Zoom on the perturbation (b): $T=0$, (c): $T=10$, (d): $T=20$, (e): $T=30$.} \label{f2} \end{figure} However, an intermediate region ($15< x <45$) departs strongly from the shear layer seen in the other intermediate region (transformed by the centro-symmetry of the band \cite{BT07}) and displays the same type of roll-up as the low-speed streaks of pipe flow \cite{DWK,SK}. Unlike what is seen in pipe flow, the roll is centered around $y=0$ and extends over the whole gap. As in pipe flow, this roll is concentrated in the $z$ direction. The two rolls have a typical size of $5h$, leading to a wavevector of order $1$. Setting an origin of time $20h/U$ before the snapshot of figure~\ref{f2} (a), one can follow the behaviour of the rolls in the $15<x<45$ frame by snapshots every $10 h/U$. We can first see the expected shear layer ($T=0$, Fig.~\ref{f2} (b)). A portion of a roll appears in the frame (Fig.~\ref{f2} (c)), followed by the full rolls (Fig.~\ref{f2} (d)). The rolls then disappear (Fig.~\ref{f2} (e)) over a duration much shorter than the typical viscous decay time (of order $R$). There is no apparent effect of vorticity stretching; this change is most likely an effect of the advection of the roll. The advection appears much more clearly in the $z$ direction than it does in the $x$ direction.
Note that in unconstrained DNS of turbulent flows such as ours, the study is limited to the non-linear development of the roll \cite{DWK,SK}. This limitation has manifested itself in former studies of coherent structures of shear flow turbulence \cite{Jimenez91}. Only constrained simulations \cite{HKW} or idealised models \cite{W} can shed light on the onset of the roll formation. In the case of azimuthal vorticity formation in pipe flow, the examination of an idealised version of the shear layers found in DNS confirmed that a Kelvin--Helmholtz instability was at the source of the vorticity formation. In the case of plane Couette flow, a preliminary study of an idealised shear layer indicated that a Kelvin--Helmholtz instability is also the likely cause of the roll formation \cite{ETC}. A full-fledged stability analysis will be proposed in the second part of this article \cite{isp2}. Such a mechanism should be expected: other instabilities are very unlikely. \subsection{Spanwise vorticity as a marker \label{vort}} In order to go beyond the visualisations and perform measurements on these rolls (lengthscale, advection velocity, \emph{etc}.), we quantitatively justify the use of a marker derived from the spanwise vorticity $\omega_z=\partial_x v_y-\partial_y v_x$. This marker has already been used in the study of such rolls in the spots \cite{ispspot}. \subsubsection{Principle} We start from our example. Colour levels of $\omega_z$ in the same $x-y$ plane as figure~\ref{f2} (a) are displayed in figure~\ref{f3} (a), for comparison with the velocity field. The framed region is the same as in figure~\ref{f2} (a,d). A general view of the flow shows that, nearly everywhere, there is $\omega_z<0$ near the walls and $\omega_z>0$ in the core region. This is the spanwise vorticity field expected for the velocity streaks.
We can see a few tongues of $\omega_z<0$ going from the walls to the center of the gap, related to the self-sustaining process of turbulence \cite{schhu}. The framed region, however, displays $\omega_z<0$ in the mid gap, where $\omega_z>0$ is expected. We will take advantage of this fact to build our marker. \subsubsection{The coherent vorticity field} So as to generalise this observation, we first measure the average vorticity profiles in each region (laminar, turbulent, intermediate), which will give us the vorticity profiles in the velocity streaks. This will give a quantitative description of the background on which the negative vorticity stands out. To that end, we use a discrimination method between laminar and turbulent flow that allows us to build a spatial mask $I^{\rm t,l}(x,z)$ for each region \cite{RM}. The flow is divided into small cells $l_x\times l_y\times l_z=2\times 1\times 2$ ($y<0$ or $y>0$), which are large enough to contain a coherent structure. The square norm of the departure from the laminar baseflow, $\textbf{v}^2$, is averaged in each cell, and a criterion $\gamma$ is applied. If the average is larger than $\gamma$, the cell is considered to be turbulent, otherwise it is considered to be laminar. One can then educe each region: if two laminar cells are on top of each other, the zone $(x,z)$ is considered to be laminar. If two turbulent cells are on top of each other, the zone $(x,z)$ is considered to be turbulent. And if a laminar cell is on top of a turbulent one (or \emph{vice versa}), the zone $(x,z)$ is considered to be intermediate, or overhanging \cite{cole66}. One can further distinguish the two intermediate zones, one where a laminar cell is on top of a turbulent cell and one where a turbulent cell is on top of a laminar cell.
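The discrimination procedure can be sketched as follows, assuming the energy $\mathbf{v}^2$ has already been averaged over $y$ in each half gap, giving an array of shape $(2, N_x, N_z)$; the cell sizes and the criterion $\gamma$ follow the text, while the function name and label encoding are our own.

```python
import numpy as np

def classify_cells(energy, gamma, lx=2, lz=2):
    """energy: v^2 averaged over y in the two half-gap slabs,
    shape (2, Nx, Nz). Average over lx*lz in-plane cells, apply the
    criterion gamma, and classify each (x, z) cell:
    0 = laminar, 1 = turbulent, 2 = intermediate (overhanging)."""
    half, nx, nz = energy.shape
    cells = energy.reshape(half, nx // lx, lx, nz // lz, lz).mean(axis=(2, 4))
    turb = cells > gamma                    # boolean, per half gap
    labels = np.full(turb.shape[1:], 2)     # default: intermediate
    labels[np.logical_and(turb[0], turb[1])] = 1
    labels[np.logical_and(~turb[0], ~turb[1])] = 0
    return labels
```

The masks $I^{\rm i}$ and $I^{\rm t}$ then correspond to `labels == 2` and `labels == 1`, respectively.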
By an abuse of language, the first intermediate zone is called the ``rear'' zone, while the second is called the ``front'' zone, since, starting from the laminar zone and increasing $x$ toward the band, one enters the rear of the band, goes through the turbulent zone and exits toward another laminar zone through the front of the band (similarly to what is done in \cite{ispspot}). The spatial masks are then defined by: $I^{\rm i}(x,z)=1$ (resp. $I^{\rm t}(x,z)=1$) if $(x,z)$ belongs to any of the intermediate (resp. turbulent) zones and $I^{\rm i}(x,z)=0$ (resp. $I^{\rm t}(x,z)=0$) otherwise. By doing a conditional average of $\omega_z$ in each of these four zones (laminar, intermediate rear, turbulent, intermediate front), one obtains the profiles of figure~\ref{f3} (b). The shear contribution $-\partial_y v_x$ dominates $\omega_z$. The vorticity profile $\omega_z$ in the turbulent zone is positive and maximum at $y=0$, while it is negative near the walls. It has the same shape, with a smaller amplitude, in the laminar zone. In the intermediate zones, the profile matches that of the turbulent zone in one half gap and that of the laminar zone in the other. Note that since our procedure averages over the spanwise modulation of the velocity streaks, these profiles are approximately halved compared to what can typically be found in the heart of velocity streaks (Fig.~\ref{f3} (a)). This gives a quantitative basis to the observation of the properties of the vorticity field in a frozen velocity streak. \begin{figure*} \centerline{\includegraphics[width=17cm,clip]{wz_pert_large_mod.eps}} \centerline{\includegraphics[height=6cm,clip]{vortzfrlt.eps}} \centerline{{\large \textbf{(c)}\includegraphics[width=6cm,clip]{wz_filtfront.eps}\textbf{(d)}}\includegraphics[width=6cm,clip]{wz_filtturb.eps}} \caption{(a) Colour levels of the spanwise vorticity field $\omega_z$ in the $z={\rm constant}$ plane of figure~\ref{f2}, $T=20$.
The Intermediate, Turbulent and Laminar zones are indicated respectively by letters I, T and L and separated by gray lines. (b) Profiles of spanwise vorticity conditionally averaged in the turbulent and intermediate areas. Thresholded and spatially filtered spanwise vorticity in the $y=0$ plane: (c) $\bar{\omega}_z^{\rm i}$, (d) $\bar{\omega}_z^{\rm t}$. (colour levels are saturated at $-1$)}\label{f3} \end{figure*} \subsubsection{The marker} We eventually show the effect of the rolls on the vorticity field using kinematic arguments on the velocity and vorticity fields. Indeed, one can approximately write, at constant $z$ inside a streak, in the presence of a roll centered at $y=0$: \begin{equation}v_y\simeq v_y^{\rm s}(X,y)+\alpha(t)(1-y^2)^2\sin(k x)\,,\end{equation} This description accounts for the spatial dependence of $v_y$ in a streak that contains rolls: the flow alternately goes up and down with a wavelength $\lambda=2\pi/k$. We denote $v_x^{\rm s}$, $v_y^{\rm s}$, $\omega_z^{\rm s}$, the velocity and vorticity fields of the streaks and $\alpha(t)$ the amplitude of the perturbation. The variable $X$ corresponds to the slow spatial dependence, on a scale of order $L_x=110$, as opposed to the fast spatial dependence of the rolls ($k\simeq 1$). The shape and amplitude of the slowly varying velocity fields in each of the zones have been computed by Barkley \& Tuckerman \cite{BT07}. Using the incompressibility condition $\partial_x v_x+\partial_y v_y=0\Rightarrow v_x=-\int {\rm d}x\, \partial_y v_y$ of the two-dimensional roll, this dependence leads to: \begin{equation}v_x\simeq v_x^{\rm s}(X,y)+\frac{4\alpha(t)}{k} y(y^2-1)\cos(kx)\,,\end{equation} for $v_x$ inside a streak containing a roll. This spatial dependence describes the field $v_x$ computed in our DNS very well (Fig.~\ref{f2} (a,d)). The wall normal dependence of the perturbation in $v_y$ is that of the first function of the orthogonal basis fitting the boundary condition for $v_y$.
The wall normal dependence of the perturbation in $v_x$ is that of the second function of the basis fitting the boundary conditions for $v_x$. These bases date from early studies of thermal convection \cite{conv}. This type of polynomial description is commonly used to describe the velocity field \cite{Jimenez91}. Even at lowest order, these bases can approximate most of the wall normal dependence of the flow \cite{m}. These two function bases will be used and detailed more extensively in the second part of the article \cite{isp2}. One can then approximate the spanwise vorticity by: \begin{equation} \omega_z\simeq \omega_z^{\rm s}(X,y)+\alpha(t) \cos(kx)\left(k(1-y^2)^2 +\frac{4-12y^2}{k} \right)\,,\label{eqvrt}\end{equation} with $\omega_z^{\rm s}\simeq -\partial_y v_x^{\rm s}$. The $y$ dependence of the slowly varying vorticity field can be seen in figure~\ref{f3} (b) (with a factor $1/2$). The additive perturbation of $\omega_z$ has the same shape as the spanwise vorticity, with a fast modulation in $x$. Both the contributions of $\partial_y v_x$ and $\partial_x v_y$ have the same sign. In the midplane, the additive perturbation leads to a change of sign of the spanwise vorticity everywhere $\cos(kx)$ is negative. One can see that detecting $\omega_z<0$ near $y=0$ is equivalent to detecting the rolls. Negative spanwise vorticity around $y=0$ can thus be used as a marker of the developing instability. We use the field $\omega_z^{\rm th}$, the spanwise vorticity thresholded at zero: \begin{equation} \omega_z\le 0 \Leftrightarrow \omega_z^{\rm th}=\omega_z\, , \quad \omega_z> 0 \Leftrightarrow \omega_z^{\rm th}=0\,. \end{equation} In the range $-0.5\lesssim y\lesssim 0.5$, the field $\omega_z^{\rm th}$ is non-zero if the rolls appear and zero if they are not present. One might think that such spanwise vorticity results from the tilting of wall normal vorticity created by the streak instability \cite{HKW,W} or from tilting of the streamwise vortices.
This is not the case; such an effect has never been reported. Streamwise vorticity and thresholded spanwise vorticity are decorrelated. Indeed, using the velocity fields computed in this study, one finds (see \cite{rmq} for the definition of the brackets): \begin{equation}\left| \frac{\langle (\omega_x-\langle \omega_x\rangle_{x,z})(\omega_z^{\rm th}-\langle \omega_z^{\rm th}\rangle_{x,z}) \rangle_{x,z}}{\sqrt{\langle (\omega_x-\langle \omega_x\rangle_{x,z})^2\rangle_{x,z}\langle (\omega_z^{\rm th}-\langle \omega_z^{\rm th}\rangle_{x,z})^2\rangle_{x,z}}}\right|\lesssim 0.05\,. \end{equation} The computation is performed in the plane $y=0$. This can easily be understood. The examination of the vorticity evolution equation shows that the tilting of $\omega_y$ into $\omega_z$ would originate from the wall normal shear of the spanwise velocity, through the term $\omega_y \partial_y v_z$. At the centerline, the first non-zero term results from the large scale flow around the bands, leading to $\partial_y v_z\propto O(10^{-2})$ \cite{BT07}. This is much smaller than the wall normal shear of the streamwise velocity (of order $1$) responsible for the creation of streamwise vortices. The tilting of streamwise vortices is even less likely: it would originate from $\omega_x \partial_x v_z$. Again, the first non-zero contribution results from the large scale flow around the bands. One then has $\partial_x v_z\propto O(10^{-4})$, since the streamwise scale of variation of the large scale flow is the wavelength of the bands. The laminar/intermediate/turbulent discrimination procedure is used to mask the thresholded spanwise vorticity. It yields two fields, $\omega_z^{\rm th, i}=I^{\rm i}\omega_z^{\rm th}$ (figure~\ref{f3} (c)) and $\omega_z^{\rm th, t}=I^{\rm t}\omega_z^{\rm th}$ (figure~\ref{f3} (d)), which allow one to monitor the rolls in these zones only.
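The role of Eq.~(\ref{eqvrt}) as a roll marker can be illustrated with a minimal numerical sketch. The crude streak profile $v_x^{\rm s}=-y$ (giving $\omega_z^{\rm s}=+1$, consistent with a sign change at the midplane where $\cos(kx)$ is negative) and the amplitude $\alpha$ are illustrative assumptions, not the DNS fields:

```python
import numpy as np

def model_vorticity(alpha, k=1.0, nx=256, ny=101):
    # Spanwise vorticity of a streak carrying a roll of amplitude alpha,
    # following Eq. (eqvrt).  The streak part is the crude profile
    # vx_s = -y (an assumption for illustration), giving w_z^s = +1.
    x = np.linspace(0.0, 2 * np.pi / k, nx, endpoint=False)
    y = np.linspace(-1.0, 1.0, ny)
    X, Y = np.meshgrid(x, y, indexing="ij")
    pert = alpha * np.cos(k * X) * (k * (1 - Y**2) ** 2 + (4 - 12 * Y**2) / k)
    return 1.0 + pert

def threshold(wz):
    # Keep only non-positive spanwise vorticity: the roll marker w_z^th.
    return np.where(wz <= 0.0, wz, 0.0)

wz_th = threshold(model_vorticity(alpha=0.3))
mid = wz_th.shape[1] // 2  # midplane index, y = 0

# Without a roll (alpha = 0) the marker is silent at the midplane;
# with a roll it fires on the part of the wavelength where
# cos(kx) < -1 / (alpha (k + 4/k)).
assert not threshold(model_vorticity(alpha=0.0))[:, mid].any()
frac = np.mean(wz_th[:, mid] < 0.0)
print(f"marker active on a fraction {frac:.2f} of the roll wavelength at y=0")
```

For this choice of amplitude the marker fires on roughly a quarter of the roll wavelength, mimicking the alternating patches of negative $\omega_z$ seen in the visualisations.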
Note that the vorticity appears uniformly in the band: this differs from the case of the early spots, which contain two cores of production of spanwise vorticity \cite{ispspot}. \section{Measurements \label{mes}} The thresholded (and masked) vorticity fields are now used for the systematic measurement of the characteristics of the rolls: their size as well as their advection velocity. We follow the approaches proposed in the study of the spots \cite{ispspot}. \subsection{Lengthscale measurements\label{span}} \begin{figure} \centerline{\includegraphics[width=5.5cm,clip]{correlation_mt_x_vx_wz.eps}\includegraphics[width=5.5cm,clip]{correlation_mt_z_vx_wz.eps}} \caption{Correlation functions in the $x$ (a) and $z$ (b) directions of the streamwise velocity $v_x$, the thresholded vorticity field $\bar{\omega}_z$ and the masked fields $\bar{\omega}_z^{\rm i}$ and $\bar{\omega}_z^{\rm t}$, averaged in time.} \label{lgsc} \end{figure} In order to determine the characteristic sizes of the rolls in both the streamwise and spanwise directions, and compare them to those of the velocity streaks, we turn to normalised correlation functions $\langle f(x,z)f(x+\delta x,z+\delta z)\rangle$ \cite{rmq}. The field $f$ can be the thresholded vorticity $\omega_z^{\rm th}$, its masked versions $\omega_z^{{\rm th}, {\rm i}}$ and $\omega_z^{{\rm th}, {\rm t}}$, or the streamwise velocity $v_x$. The correlation functions are computed at $y=0$, where the perturbation to the velocity streaks is maximum. They are time averaged; a minimum of ten samples is necessary for convergence. The envelope of the correlation functions gives the characteristic correlation length of the field. The modulation indicates periodicity. The streamwise correlation functions of the thresholded vorticity fields decrease exponentially (Fig.~\ref{lgsc} (a)). The characteristic size is estimated from the value $\delta x$ for which the tangent at $\delta x=0$ crosses the $\langle f(x)f(x+\delta x)\rangle =0$ line.
This characteristic size is also the inverse of the slope at $\delta x=0$. This yields a coherence length ranging from $1$ ($\omega_z^{\rm th, t}$) to $1.5$ ($\omega_z^{\rm th, i}$). This quantifies the difference of lengthscales seen in the visualisations \cite{ispspot}. Since the spanwise vorticity is negative on less than half a wavelength, this gives wavelengths of more than $2$ and $3$, consistent with the observation of rolls of size $5$ (Fig.~\ref{f2}). The ratio of coherence lengths can be taken as the ratio of the typical lengthscales of the perturbations in the intermediate and turbulent zones. The velocity field has a much longer coherence length, of order $10$. It corresponds to the velocity streak coherence length. This shows the scale separation between the velocity streaks in the bands and the perturbations, and places this secondary instability in the same framework as the study of the growth of the spots \cite{PRL}. The spanwise dependence of $\omega_z^{{\rm th}, {\rm i},{\rm t}}$ can be examined in the time averaged correlation function as well: the correlation function as a function of $\delta z$, at $\delta x=0$, is displayed in figure~\ref{lgsc} (b). The figure is zoomed around $\langle f(z)f(z+\delta z)\rangle=0$. The slope at the origin and the first zero of each correlation function give an estimate of the thickness of the velocity streaks (for $v_x$) and of the vorticity (for $\omega_z^{\rm th}$). One finds a spanwise coherence length for $ \omega_z^{{\rm th}, {\rm i},{\rm t}}$ of $1$, slightly smaller than that of $v_x$. In both the intermediate and turbulent zones, the spanwise vorticity $\omega_z^{\rm th}$ is concentrated in regions slightly narrower than the turbulent streaks. The modulation of the correlation function of $v_x$ shows the spanwise modulation of the velocity streaks.
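The tangent-at-the-origin estimate can be checked on synthetic data. The sketch below (with an illustrative resolution and an imposed coherence length; this is not the DNS post-processing code) builds an exponentially correlated signal and recovers its coherence length from the initial slope of the normalised correlation function:

```python
import numpy as np

rng = np.random.default_rng(0)

def autocorr(f):
    # Normalised correlation function <f(x)f(x+dx)> of a 1-D periodic field,
    # computed via FFT.
    f = f - f.mean()
    c = np.fft.irfft(np.abs(np.fft.rfft(f)) ** 2, n=f.size)
    return c / c[0]

def coherence_length(c, dx):
    # Abscissa where the tangent at dx=0 crosses zero,
    # i.e. the inverse of the magnitude of the slope at the origin.
    slope = (c[1] - c[0]) / dx
    return -c[0] / slope

# Synthetic field with an exponential correlation of known length ell
# (first-order autoregressive process).
ell, dx, n = 1.5, 0.05, 200_000
a = np.exp(-dx / ell)
f = np.empty(n)
f[0] = rng.standard_normal()
for i in range(1, n):
    f[i] = a * f[i - 1] + np.sqrt(1.0 - a * a) * rng.standard_normal()

ell_hat = coherence_length(autocorr(f), dx)
print(f"imposed length {ell}, estimated {ell_hat:.2f}")
```

For an exponentially decaying correlation the tangent construction and the inverse-slope estimate coincide, which is why the two phrasings in the text are equivalent.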
The vorticity fields $\omega_z^{\rm th}$ and $\omega_z^{\rm th, t}$ bear no trace of long range correlation or periodicity, whereas $\omega_z^{\rm th, i}$ has a small trace of periodicity. This is consistent with the observation that perturbations develop independently in each velocity streak (Fig.~\ref{f3} (c,d)). The streak background is quasi periodic and can leave a hint of periodicity in the perturbation field $\omega_z^{\rm th}$. \subsection{Advection of perturbations\label{advpert}} In this section, we go beyond the visualisation of figure~\ref{f2} and measure the advection velocity of the rolls. This advection can be seen in a video of the thresholded spanwise vorticity $\omega_z^{\rm th,i}$ zoomed in an intermediate zone, available in the supplementary material. The $y=0$ plane is chosen. The spanwise vorticity remains coherent in time. It clearly moves in the increasing $z$ direction. However, the advection in the streamwise direction is not as clearly identifiable as in the spanwise direction. In this section, we investigate this matter more thoroughly using the systematic advection velocity measurement procedure proposed in the study of the spots \cite{ispspot}. \subsubsection{The processing procedure\label{gen}} The advection is characterised by its velocity. A direct measurement of the velocity $\mathbf{c}=c_x\mathbf{e}_x+c_z\mathbf{e}_z$ of each perturbation would be tedious in plane Couette flow. The measurement is therefore automated \emph{via} an image correlation approach. This is similar to Particle Image Velocimetry algorithms, measurement techniques proposed over thirty years ago that have become commonplace in the last twenty years \cite{AnnRevPIV}. The non masked thresholded spanwise vorticity $\omega_z^{\rm th}$ is used for the measurement. The measurement procedure is as follows. A given $x-z$ plane is divided into squares of size $2\times 2$.
This size is small enough to capture the small scale details of the flow, and large enough to contain enough information \cite{RM}. Then for each cell, at time $t$, the correlation to a cell shifted by $\Delta z$ at time $t+\delta t$ is computed: \begin{align}\notag C_{x_0,z_0,y,t}(\Delta z,\delta t)=\\\int_{x_0,z_0}^{x_0+2,z_0+2}{\rm d}x{\rm d}z\, \tilde{\omega}(x,y,z,t)\tilde{\omega}(x,y,z+\Delta z,t+\delta t)\,, \end{align} where $x_0,z_0$ denotes the position of the ``lower left'' corner of the cell. One has $x_0=2m_x$, $z_0=2m_z$ with $m_{x,z}$ integers, and $\tilde{\omega}$ denotes $\left(\omega_z^{\rm th}-\langle \omega_z^{\rm th} \rangle\right)/(\langle(\omega_z^{\rm th}-\langle \omega_z^{\rm th} \rangle)^2 \rangle)^\frac12$. Here $\langle .\rangle$ is the average over the relevant $2$ by $2$ square. For each cell at $x_0,z_0$, at each time $t$ and for a given $\delta t$, the shift $\Delta z$ maximising $C_{x_0,z_0,y,t}(\Delta z)$ is computed. The corresponding velocity for the cell is $c_z(y)=\Delta z/\delta t$. A field of advection velocity is obtained. An example of $c_z$ at $y=0$ is displayed in colour levels in figure~\ref{advspd_} (a). The partial coarse graining in the $(x,z)$ plane is visible. The field of advection velocity is smoothed before the analysis. Indeed, when few or no markers are present in a cell, the correlation function is meaningless and maxima can arise for any value. The laminar zone is the most sensitive one, due to the very small density of markers (spanwise vorticity). Given the translational invariance along the band, an average in the $z'$ direction is performed (figure~\ref{advspd_} (b)). The computation depends weakly on $\delta t$ provided it remains in the $1\le \delta t\le 4$ range. For higher values of $\delta t$, coherence is lost: the markers can be advected too far in $x$ and $z$, and the marker (vorticity) evolves in time and does not retain its shape.
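A stripped-down version of this cell-correlation step can be written as follows. The smooth random field, the cell position and the imposed shift are illustrative stand-ins; the actual procedure operates on the thresholded vorticity:

```python
import numpy as np

rng = np.random.default_rng(1)

def cell_shift(f0, f1, cell, max_shift):
    # Spanwise shift (in grid points) maximising the correlation between
    # a cell of f0 at time t and the shifted cell of f1 at time t + dt,
    # a discrete analogue of C_{x0,z0}(dz, dt).
    x0, z0, w = cell
    a = f0[x0:x0 + w, z0:z0 + w]
    a = (a - a.mean()) / (a.std() + 1e-12)
    best, best_dz = -np.inf, 0
    for dz in range(-max_shift, max_shift + 1):
        b = f1[x0:x0 + w, z0 + dz:z0 + dz + w]
        b = (b - b.mean()) / (b.std() + 1e-12)
        corr = np.mean(a * b)
        if corr > best:
            best, best_dz = corr, dz
    return best_dz

# Smooth periodic random field, then a second snapshot rigidly
# advected by a known number of grid points in z.
nx = nz = 64
X, Z = np.meshgrid(np.arange(nx), np.arange(nz), indexing="ij")
field0 = sum(rng.standard_normal()
             * np.cos(2 * np.pi * (kx * X / nx + kz * Z / nz)
                      + rng.uniform(0.0, 2.0 * np.pi))
             for kx in range(1, 5) for kz in range(1, 5))
cz_true, dt = 3, 1
field1 = np.roll(field0, cz_true, axis=1)

dz = cell_shift(field0, field1, cell=(10, 20, 8), max_shift=6)
print("measured c_z =", dz / dt)   # recovers cz_true
```

With a rigidly advected field the correlation reaches its maximum exactly at the imposed shift; in practice the smoothing step discussed above is needed because the markers are sparse and evolving.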
This procedure can be applied in a similar fashion for increments of space in the streamwise direction $\Delta x$. The two cases are examined separately, since the streamwise direction requires a little more processing. \subsubsection{Spanwise advection} The spanwise advection velocity $c_z$ as a function of $x$ is displayed in figure~\ref{advspd_} (b,c). The instantaneous result is coherent (Fig.~\ref{advspd_} (b)). The wall normal and time ($200 h/U$) average of $c_z$ is nearly sinusoidal (Fig.~\ref{advspd_} (c)) and is perfectly matched by the spanwise component of the large scale flow (Fig.~\ref{advspd_} (c)). As mentioned in the introduction, the large scale flow is obtained by the $y$, $z'$ and time average of the spanwise velocity field. This shows that, in the spanwise direction, the rolls travel at the velocity of the wall-normal averaged large scale flow. The standard deviation over $y$ of the advection velocity is computed as well. It shows that $c_z$ depends very weakly on $y$, except in the laminar region, where there are very few markers. The spatial dependence of $c_z$ is summed up in the sketch of the $x-z$ plane (Fig.~\ref{sketchres}) based on sketch~\ref{2d2c} (b). The turbulent band is indicated, as well as the Turbulent, Intermediate and Laminar zones. The direction of the spanwise advection velocity is indicated by the $c_z$ vector. Since $c_z=0$ in the middle of the turbulent and pseudo-laminar zones, no vector is drawn there. $c_z$ is positive in one intermediate zone ($10\lesssim x\lesssim 40$) and negative in the other one ($60\lesssim x \lesssim 100$). \begin{figure} \centerline{\includegraphics[width=7cm,clip]{dt2vity0.eps}\includegraphics[width=7cm,clip]{dt2vitmoyzy0_.eps}} \centerline{\includegraphics[width=7cm,clip]{dt2vitmoyzmoyy.eps} \includegraphics[width=7cm,clip]{cx_shift_zone.eps}} \caption{(a): Field of spanwise advection velocity $c_z$ in the case of the band, at $y=0$.
(b): Advection velocity $c_z$ averaged over the diagonal at a given time at $y=0$ for two time steps $\delta t=2$ and $\delta t=4$ and a sinusoidal fit. (c): Average and fluctuation of the advection velocity $c_z$ over the diagonal and the wall normal direction at a given time, compared to the spanwise large scale flow. (d) Comparison of the measured streamwise advection velocity, $\delta t=1$, at $y=0$ and the averaged large scale flow in the band.} \label{advspd_} \end{figure} \begin{figure} \centerline{ \begin{pspicture}(7,7) \psline[linecolor=lightgray]{->}(1.75,4.75)(1.45,4.45) \psline[linecolor=lightgray]{->}(2,4.5)(1.25,3.75) \psline[linecolor=lightgray]{->}(2.25,4.25)(0.75,2.75) \psline[linecolor=lightgray]{->}(2.5,4)(1.75,3.25) \psline[linecolor=lightgray]{->}(2.75,3.75)(2.45,3.45) \psline[linecolor=lightgray]{->}(4.25,2.25)(5.75,3.75) \psline[linecolor=lightgray]{->}(4.5,2)(5.25,2.75) \psline[linecolor=lightgray]{->}(4.75,1.75)(5.05,2.05) \psline[linecolor=lightgray]{->}(4,2.5)(4.75,3.25) \psline[linecolor=lightgray]{->}(3.75,2.75)(4.05,3.05) \psline{}(0.5,0.5)(0.5,6.5) \psline{}(0.5,6.5)(6.5,6.5) \psline{}(6.5,6.5)(6.5,0.5) \psline{}(0.5,0.5)(6.5,0.5) \psline[linecolor=gray]{}(0.5,2)(5,6.5) \psline[linecolor=gray]{}(2,0.5)(6.5,5) \rput(3.5,3.5){$L$} \rput(3.3,5){$I$} \rput(3.7,2){$I$} \rput(1.25,5.5){$T$} \rput(5.5,1.25){$T$} \rput(1.65,3.95){$c_z$} \rput(1.2,4.4){$c_x$} \rput(1.2,3.75){$\mathbf{c}$} \psline{->}(1.45,4.25)(1.05,4.25) \psline{->}(1.45,4.25)(1.45,3.85) \psline{->}(1.45,4.25)(1.05,3.85) \rput(4.2,5.3){$c_x$} \rput(3.6,4.4){$c_z$} \rput(4.2,4.4){$\mathbf{c}$} \psline{->}(3.8,5.1)(4.8,5.1) \psline{->}(3.8,5.1)(3.8,4.1) \psline{->}(3.8,5.1)(4.8,4.1) \rput(2.2,4.45){$c_x$} \rput(2.85,3.8){$c_z$} \rput(2.35,3.6){$\mathbf{c}$} \psline{->}(2.65,4.25)(1.65,4.25) \psline{->}(2.65,4.25)(2.65,3.25) \psline{->}(2.65,4.25)(1.65,3.25) \psline{->}(4.35,2.75)(5.35,2.75) \psline{->}(4.35,2.75)(4.35,3.75) \psline{->}(4.35,2.75)(5.35,3.75) 
\rput(4.15,3.15){$c_z$} \rput(4.8,2.55){$c_x$} \rput(4.65,3.45){$\mathbf{c}$} \psline{->}(5.45,2.75)(5.85,2.75) \psline{->}(5.45,2.75)(5.45,3.15) \psline{->}(5.45,2.75)(5.85,3.15) \rput(5.55,2.55){$c_x$} \rput(5.25,2.95){$c_z$} \rput(5.65,3.2){$\mathbf{c}$} \psline{->}(3.5,2.25)(2.5,2.25) \psline{->}(3.5,2.25)(3.5,3.25) \psline{->}(3.5,2.25)(2.5,3.15) \rput(3,2){$c_x$} \rput(3.8,2.7){$c_z$} \rput(2.85,2.4){$\mathbf{c}$} \psline[linecolor=gray]{}(0.5,5.6)(1.4,6.5) \psline[linecolor=gray]{}(5.6,0.5)(6.5,1.4) \end{pspicture} } \caption{Sketch of the flow (based on sketch~\ref{f1_} (b)), indicating the band, the Laminar, Intermediate and Turbulent zones, as well as both components $(c_x,c_z)$ and the resulting advection velocity $\mathbf{c}$ in each zone.} \label{sketchres} \end{figure} \subsubsection{Streamwise advection} We then apply the same procedure to measure $c_x$ (Fig.~\ref{advspd_} (d)). Due to stronger fluctuations than in the measurement of $c_z$, time averaging is necessary. Good agreement is found between $c_x$ and the streamwise component of the large scale flow around the turbulent area ($35 \lesssim x \lesssim 75$). However, a different picture is found for $0\lesssim x \lesssim 35,\,75\lesssim x\lesssim 110$, in the laminar region and part of the intermediate regions. There, the advection velocity strongly differs from the large scale flow, and even changes sign. In that case the vorticity is no longer advected along the band (taking into account $c_z$) but toward the laminar zone. This may very well be an effect of the laminar baseflow on perturbations localised in the intermediate zones. Indeed, perturbations in the front intermediate zone ($y>0$ part of the flow) see a positive effective velocity, while perturbations in the rear intermediate zone ($y<0$ part of the flow) see a negative effective velocity. The contribution of the laminar baseflow balances out near the turbulent zone.
The direction of the streamwise component of the advection velocity is included in sketch~\ref{sketchres}. It is indicated by the $c_x$ vectors. One has $c_x=0$ in the middle of the turbulent and laminar regions, hence the absence of vectors there. In the intermediate region, the two directions of advection are indicated by vectors of opposite signs, one near the turbulent region and one near the laminar region. The vectorial advection velocity $\mathbf{c}=c_x \mathbf{e}_x+c_z\mathbf{e}_z$ is added. One can see that around the turbulent region, the spanwise vorticity is advected along the turbulent band, whereas around the laminar region, the perturbations are advected toward the laminar region, in a direction nearly orthogonal to the band. \section{Discussion \label{concl}} In this article, we identified the formation of rolls in the shear layers of the velocity streaks of the laminar-turbulent oblique bands of plane Couette flow, which lead to spanwise vorticity. These rolls are very similar to those found in Hagen--Poiseuille flow \cite{DWK,SK}, plane Poiseuille flow \cite{ATK} or in spots of PCF \cite{ispspot}. We justified quantitatively the use of a criterion based on the sign of the vorticity in the midplane to systematically detect these events, and used it to perform measurements. Correlation functions have been used to measure the lengthscale of said rolls. This stressed the scale separation between the large scale flow and the vorticity, as well as the localisation of the vorticity inside the streaks. We extended a method to measure the advection velocity of the vorticity, used in the study of the spots \cite{ispspot}, to the case of the bands. It showed that the advection velocity of the rolls quantitatively matches the large scale flow along the bands, except in the laminar region where vorticity is advected away from the bands.
The onset of the secondary instability creating the rolls, as well as its convective or absolute nature, will be investigated in the second part of the article (a preliminary study can be found in \cite{ETC}). We will also focus there on the equivalent of the sustainment cycle of the puffs of pipe flow (see \cite{DWK,SK}). It will link the advection along the bands to a possible feedback mechanism. The feedback found at the sharp trailing edges of puffs, rooted in an inflectional instability, likely belongs to the same class of mechanisms \cite{HDAS}. We focus here on the possible effect of the advection of vorticity away from the band on the distance between two bands. DNS of PCF showed that, in time average, there is a precise force balance between dissipation and the advection by the laminar baseflow. This gave a relation between the Reynolds number and the wavelengths of the bands, and this mechanism may be at the source of the distance between two bands \cite{BT07}. Similarly, the effect of the leading edge of one puff on the trailing edge of the next was proposed to explain the distribution of distances between two puffs, centered around a well defined average distance \cite{sam}. If the puffs are too close, the inflection of the streamwise velocity profile at the leading edge is drastically reduced \cite{HDAS}. The advection of the spanwise vorticity of the rolls back toward the laminar zone, where it is dissipated instead of feeding turbulence, may be the instantaneous version of such an interaction in PCF. This advection-dissipation mechanism would also be the instantaneous version of the average force budget. In that matter, PCF differs from the case of the slugs, where the advection of the azimuthal vorticity mainly feeds the extension of turbulence downstream of the slug \cite{DWK}. It can be argued that this is partially the effect of the centro-symmetry of PCF, and partially the effect of the difference of Reynolds numbers.
Indeed, dissipation is a fundamental part of the small scale dynamics at these Reynolds numbers \cite{PRL}. Without the band structure, turbulence in PCF naturally decays if $R\lesssim 415$ \cite{PM,DSL}. Meanwhile, the slug regime corresponds to $R>R_{\rm t}$; for those Reynolds numbers, the advection of the rolls toward the laminar flow may contribute to the unlimited streamwise extension of the spots. The strong similarities between pipe and Couette flows in this matter motivate further studies aimed at understanding the very complex mechanisms behind the laminar-turbulent coexistence. \section*{Acknowledgments} The author acknowledges discussions with Y. Duguet, P. Huerre and P. Manneville.
\section{Introduction} Horizontal-branch (hereafter HB) stars have evolved past the main sequence and burn helium in their core, which is surrounded by a hydrogen burning shell. In general, after core helium exhaustion, they evolve towards the asymptotic giant branch (AGB), but a fraction of HB stars do not reach the AGB stage; these form what is usually called the extreme horizontal branch (EHB) \cite{Dorman+93}. The boundary between EHB stars that evolve mainly to hotter temperatures and those that evolve towards the AGB is near $T_{\rm eff}$ = 20000~K on the zero age extended horizontal branch (ZAEHB) \cite{Dorman+93}. The hottest stars, those with the thinnest hydrogen envelopes, evolve to higher temperatures during and after core helium burning and completely bypass the AGB. Stars with effective temperatures near 20000~K on the ZAEHB still have very small envelopes, which are nevertheless sufficiently thick to allow the star to evolve towards the AGB for a while after core helium exhaustion, although the shell burning is soon quenched and the star contracts again to hotter temperatures \cite{Ostensen+12}. HB stars with effective temperatures larger than approximately 11500~K are of particular interest since they exhibit abundance anomalies (Glaspey et al. 1989; Behr et al. 1999; Moehler et al. 1999; Behr, Cohen \& McCarthy 2000; Behr 2003a) such as under-abundances of helium and over-abundances of several metals including iron. HB stars with $T_{\rm eff}$ above the 11500~K threshold also show low rotational velocities as compared to cooler HB stars. The rotational velocity of these hot HB stars drops to a value of $V$sin$i\simeq$ 10 km s$^{-1}$ or less (Peterson, Rood \& Crocker 1995; Behr et al. 2000a,b; Behr 2003b; Recio-Blanco et al. 2004). Such a drop in the rotational velocity is thought to lead to a more hydrodynamically stable atmosphere where atomic diffusion (Michaud 1970) may take place. Queivy et al.
(2009) demonstrated that for HB stars with such low rotational velocities, the helium convection zone disappears because meridional circulation is not strong enough to prevent helium from settling gravitationally. The atmosphere therefore becomes stable and atomic diffusion leads to vertical abundance stratifications and detectable surface abundance anomalies. Other observational anomalies are detected due to the presence of vertical abundance stratification in the atmospheres of these blue HB stars. For example, a photometric jump in the ($u,u-y$) colour-magnitude diagram is observed at $T_{\rm eff}\simeq$ 11500~K in several globular clusters (Grundahl et al. 1999). Photometric gaps are also detected at this $T_{\rm eff}$ (Ferraro et al. 1998). These two photometric anomalies were theoretically confirmed by the model atmospheres of Hui-Bon-Hoa, LeBlanc \& Hauschildt (2000) and LeBlanc et al. (2009). These models include the effect of the vertical stratification of the elements on the atmospheric structure, which can explain the observed photometric jumps and gaps (LeBlanc, Hui-Bon-Hoa \& Khalack 2010). Khalack et al. (2007, 2008 and 2010) detected vertical stratification of certain elements, including iron, in several blue HB stars. The stars studied there lie in the $T_{\rm eff}$ = 10750 to 15500~K range. These results serve as additional evidence that atomic diffusion is at play in their atmospheres. The lower $T_{\rm eff}$ limit where abundance stratification in HB stars occurs is relatively well established at approximately 11500~K. However, the upper limit in $T_{\rm eff}$ above which no such stratification exists is not as well established. The results of Moni Bidin et al. (2012) for HB stars in $\omega$ Centauri show that helium is underabundant for stars up to approximately 32000~K. This suggests an upper limit above which other physical processes, such as mass loss, could dominate over atomic diffusion.
This paper aims to verify if vertical stratification of the elements is present in the post-HB star HD~76431. This is by far the hottest star ($T_{\rm eff}$ = 31000~K; Ramspeck, Heber \& Edelmann 2001) for which a detailed abundance analysis searching for the presence of vertical stratification has been undertaken. The results from this spectral analysis could give insight into whether or not atomic diffusion is still dominant in such hot stars. \section{Details concerning HD~76431} HD~76431 was found to be evolved past the HB phase by Ramspeck et al. \shortcite{ram01b} (see their Figure 5). This has also been confirmed by the results of Chountonov \& Geier \shortcite{cho12} (see their Figure 1). \subsection{Observations and data reduction} \label{obs} Our analysis is based on high-resolution spectropolarimetric observations carried out with ESPaDOnS at CFHT\footnote{The Canada-France-Hawaii Telescope (CFHT) is operated by the National Research Council of Canada, the Institut National des Sciences de l'Univers of the Centre National de la Recherche Scientifique of France, and the University of Hawaii.} \cite{Petit+12}. Seventeen spectra were obtained in the range of 3700\AA\, to 10000\AA\, at a spectral resolution of 65000, with the aim of searching for signatures of a magnetic field \cite{otool+05}. Petit et al. \shortcite{Petit+12} have confirmed the results of Elkin \shortcite{Elkin98} and Chountonov \& Geier \shortcite{cho12}, and found no detectable Zeeman signatures in the Stokes I and V spectra of HD~76431. \begin{figure} \includegraphics[width=3.3in,angle=-90]{StokesI_HeI5875.eps} \includegraphics[width=3.3in,angle=-90]{StokesI_HeI6678.eps} \caption{ Profiles of the He\,{\sc i} 5875\AA\, (right panel) and He\,{\sc i} 6678\AA\, (left panel) absorption lines obtained on the different dates of observation. The spectra are shifted vertically by 0.05 for better visibility.
On the right side of each panel the time of each observation is given with respect to HJD=2455000. For the first observation, with HJD=2455107.1, we show the observational errors, which have almost the same values for the other spectra presented here. } \label{fig1} \end{figure} A detailed pre-analysis of the 17 spectra obtained by Petit et al. \shortcite{Petit+12} has shown that the profiles of almost all the visible spectral lines do not vary much with the date of observation (see for example Fig.~\ref{fig1}) spanning from 2009 Oct. 02 to 2010 Feb. 02 (see Petit et al. 2012). This fact does not argue in favour of HD~76431 being in a close binary system \cite{cho12}. Taking into account the detected stability of the line profiles, we have combined all these spectra into a single spectrum, which is used for the abundance analysis presented here. \begin{figure*} \includegraphics[width=2.45in,angle=-90]{hd76431.mmt.WITHmetals.eps} \includegraphics[width=2.45in,angle=-90]{hd76431.bok.WITHmetals.eps} \caption{The effective temperature and gravity derived from fitting the Balmer, He\,{\sc i} and He\,{\sc ii} line profiles in the MMT (left) and Bok spectra (right) of HD~76431.} \label{fig1b} \end{figure*} Nine low-resolution spectra of HD 76431 were obtained with the B\&C Cassegrain spectrograph on Steward Observatory’s 2.3 m Bok telescope on Kitt Peak between 1999 and 2010. The 400/mm first-order grating was used with a 2.5 arcsec slit to obtain spectra with a typical resolution of 9\AA\, (R$\sim$560) over the wavelength interval 3620 - 6900 \AA\AA. The instrument rotator was set prior to each exposure to align the slit within $\sim$2$\degr$ of the parallactic angle at the midpoint of the exposure. Eight intermediate-resolution spectra of HD~76431 were taken with the Blue spectrograph on the 6.5 m MMT on Mount Hopkins, Arizona between 1996 and 1998.
The 832/mm second-order grating and 1.0 arcsec slit gave a resolution of 1.05\AA\,(R$\sim$4200) over 4000 - 4950 \AA\AA. Again, the slit was always aligned with the parallactic angle. Exposure times on both telescopes were chosen to achieve S/N of 100 - 200 for each of the individual spectra. The Bok and MMT spectra were bias-subtracted, flat-fielded, background-subtracted, optimally extracted, wavelength-calibrated and flux calibrated using standard IRAF\footnote{IRAF is distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation.} tasks (Tody 1986; 1993).
\section{\label{sec:intro}Introduction} Traditionally, adsorption is considered as a deposition of particles on a surface. For simple atomic or molecular species, polymers, liquid crystals, proteins and other biological objects, it has received a huge amount of attention, covering different aspects of the phenomenon. In particular, monolayer and multilayer adsorption, including wetting, prewetting, different phase transitions and criticality of adsorbed layers, were investigated in detail~ \cite{roe:74:0, dash:75:0,kreuzer:86:0,kukushkin:98:0,jerome:91:0,patrykiejew:00:0,netz:03:0,bruch:07:0,rabe:11:0}. On the other hand, not much attention has been given to adsorption in monolayers on restricting walls. If the adsorbed monolayer is modeled as a two-dimensional (2D) system, then the confining walls are one-dimensional (1D), and one can consider the layers of particles formed in the neighborhood of the 1D walls as adsorbed layers. In the present study, we describe how particles interacting via a nonmonotonic pair potential, in which a hard-core repulsion is followed by an attractive well and a repulsive tail (a SALR system), adsorb on a straight 1D wall that confines a flat surface. For 2D systems, the influence of confinement on pattern formation was previously investigated experimentally~\cite{antelmi:95:0,yu:06:0,yu:08:0,su:03:0,li:05:0,huang:06:0,haghgooie:06:0}, theoretically~\cite{tasinkevych:01:0,imperio:07:0, imperio:08:0,archer:08:0,chi:11:0} and by computer simulations~\cite{almarza:16:0,pekalski:19:0}. When the pattern formation is induced by competing attractive and repulsive interactions, the size and shape of the confinement were shown to be crucial when the aim is to fabricate a defect-free pattern~\cite{almarza:16:0} or a chiral structure~\cite{pekalski:19:0}.
The confinement effects on SALR clusters were described also in the 1D case, where the aggregates were shown to induce spatial bistability~\cite{pekalski:14:1} or a pressure decrease upon increase of the density~\cite{pekalski:14:0}. To our knowledge, however, 2D adsorption isotherms for particles interacting with isotropic competing interactions have not been described so far. In dilute SALR systems, formation of clusters has been observed when the particle density exceeds the value corresponding to the critical cluster density~\cite{santos:17:0,litniewski:19:0}, analogous to the critical micelle density in surfactant mixtures. While the ordered periodic phases in SALR systems~\cite{zhuang:16:0} have not been observed experimentally yet~\cite{royall:18:0,zhuang:16:2}, the cluster fluids, first reported in Ref.~\cite{stradner:04:0}, are quite often observed for various systems. Only recently, the effect of clustering on the adsorption on confining walls has been studied~\cite{litniewski:19:0}; it was shown that the effect of clustering on the adsorption phenomena in 3D is very strong and deserves further investigation. In this work we study the entirely unexplored question of the effect of self-assembly into clusters on adsorption on a confining line in a 2D system of SALR particles. In the case of a single boundary line at $z=0$, the line excess amount, or the Gibbs adsorption, is defined as follows: \begin{equation} \Gamma(\mu^*) = \int_{0}^{\infty}(\rho (z) - \rho_b)dz \label{eq:gamma} \end{equation} where $\rho(z)$ and $\rho_b$ are the average density at the distance $z$ from the wall and in the bulk, respectively, for a fixed chemical potential $\mu^*$. We calculate the adsorption isotherms for a triangular lattice model introduced earlier in Refs.~\cite{pekalski:14:0,almarza:14:0} and summarized in sec.~\ref{sec:model}. In the same section our Monte Carlo simulation method is briefly described.
In addition, we calculate structural characteristics such as the cluster distribution in the bulk and near the wall, the density profile in the direction perpendicular to the wall, and the correlation function in the layers parallel to the wall. The results are presented in sec.~\ref{sec:results}, where the relation between the shape of the adsorption isotherm and the structure of the fluid is also discussed. We summarize our results and present our conclusions in sec.~\ref{sec:concl}. \section{\label{sec:model} The model and the simulation procedure} In order to allow for a close-packed structure, we use a triangular lattice with a lattice constant equal to the particle diameter. Following Refs.~\cite{almarza:16:0, almarza:14:0}, we assume the following interaction potential between the particles on the lattice sites: \begin{equation} V(\Delta \mathbf{x}) = \begin{cases} -J_1 \quad \textrm{for $|\Delta \mathbf{x}| = 1$ \quad (nearest neighbors)} \\ +J_2 \quad \textrm{for $|\Delta \mathbf{x}| = 2$ \quad (third neighbors)} \\ 0 \qquad \textrm{otherwise} \end{cases} \end{equation} where $-J_1$ and $J_2$ represent the energies of the interparticle attraction and repulsion, respectively. We use the ratio $J_2/J_1 = 3$, as in Refs.~\cite{pekalski:14:0, almarza:16:0, almarza:14:0}. The thermodynamic Hamiltonian for our system has the following form: \begin{equation} H = \frac{1}{2} \sum_{\mathbf{x}}\sum_{\mathbf{x'}}\hat{\rho}(\mathbf{x})V(\mathbf{x}-\mathbf{x'})\hat{\rho}(\mathbf{x'}) - \mu\sum_{\mathbf{x}}\hat{\rho}(\mathbf{x}) + U_w\sum_{\mathbf{x_0}}\hat{\rho}(\mathbf{x_0}) \end{equation} where $\sum_{\mathbf{x}}$ is the sum over all lattice sites, $\sum_{\mathbf{x_0}}$ is the sum over the sites nearest to the wall (or walls), and $\hat{\rho}(\mathbf{x})$ is the occupation number: $\hat{\rho}(\mathbf{x})=1$ or $0$ if the site with the coordinate $\mathbf{x}$ is occupied or vacant, respectively. $U_w$ is the interaction energy of a particle with the wall.
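To make the lattice geometry concrete, the pair potential above and the interaction part of the Hamiltonian can be sketched in a few lines of Python (a minimal illustration, not the simulation code used in this work; the axial-coordinate representation of the triangular lattice and all function names are our assumptions):

```python
# Sketch of the SALR pair potential on a triangular lattice in axial
# coordinates: the 6 sites at distance 1 (nearest neighbors) attract
# with -J1, the 6 sites at distance 2 (third neighbors) repel with +J2.
J1, J2 = 1.0, 3.0          # J2/J1 = 3, as in the model

NN = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]        # |dx| = 1
THIRD = [(2, 0), (-2, 0), (0, 2), (0, -2), (2, -2), (-2, 2)]     # |dx| = 2

def pair_energy(dx, dy):
    """V(dx) for a pair of occupied sites separated by (dx, dy)."""
    if (dx, dy) in NN:
        return -J1
    if (dx, dy) in THIRD:
        return J2
    return 0.0

def lattice_energy(occupied):
    """Interaction energy of a set of occupied axial sites."""
    E = 0.0
    for (x, y) in occupied:
        for (dx, dy) in NN:
            if (x + dx, y + dy) in occupied:
                E -= J1
        for (dx, dy) in THIRD:
            if (x + dx, y + dy) in occupied:
                E += J2
    return E / 2.0   # each pair was counted twice
```

For example, a nearest-neighbor dimer has energy $-J_1$, while a compact triangle of three mutual nearest neighbors has energy $-3J_1$.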
It can be negative (attractive boundary), positive (repulsive boundary) or vanishing (neutral wall). In experimental systems, the long-range repulsion between the particles is often of electrostatic origin. In such a case, walls interacting with the particles only at short distances are charge-neutral. In the calculations, the dimensionless values $T^*=k_B T/J_1$, $\mu^*=\mu/J_1$, $J_2^*=J_2/J_1$, $h=U_w/J_1$ are used. The phase diagram as well as the ground states of this system in the bulk were investigated in Refs.~\cite{almarza:14:0,pekalski:14:0}. As shown in Ref.~\cite{almarza:16:0} for the stripe (lamellar) phase, confinement between two parallel lines can drastically change the structure in the whole slit. The presence of a single wall may change the particle arrangement in its vicinity, but the effect of a single confining line has not been studied in this model yet. In theory, the phenomenon of adsorption is considered as the deposition of particles on a planar boundary of a semi-infinite system, Eq.~(\ref{eq:gamma}), but in a computer simulation a stripe between two walls has to be modelled. We define the adsorption for our model by \begin{equation} \Gamma(\mu^*) \approx \frac{1}{2} \sum_{z=0}^{L-1}(\rho(z) - \rho_c) \label{eq:gammal} \end{equation} where $\rho(z)$ and $\rho_c$ are the average density at the distance $z$ from the wall and in the central one-third part of the system, respectively, and both densities are calculated for the chemical potential $\mu^*$. For a large enough inter-wall distance $L$, $\rho_c$ should be the same as the density in the bulk, $\rho_b$; otherwise finite-size effects have to be taken into account. The finite-size effects can be studied by varying the wall-wall distance, but this goes beyond the scope of this work. In the direction parallel to the walls, periodic boundary conditions are applied.
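The evaluation of Eq.~(\ref{eq:gammal}) from a simulated density profile can be sketched as follows (an illustrative helper, not the authors' code; the function name and the toy profile are our assumptions):

```python
# Sketch of Eq. (gammal): the adsorption estimated from a density
# profile rho(z) across a slit of width L, with the reference density
# rho_c taken as the average over the central one-third of the slit.
def adsorption(profile):
    L = len(profile)
    center = profile[L // 3 : L - L // 3]        # central one-third
    rho_c = sum(center) / len(center)
    # factor 1/2: the sum collects the contributions of both walls
    return 0.5 * sum(rho - rho_c for rho in profile)
```

A flat profile gives $\Gamma = 0$, while a symmetric excess at both walls gives a positive adsorption per wall.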
The Monte Carlo simulations were carried out in the grand canonical ensemble, according to the Metropolis algorithm with standard importance sampling. We chose the distance between the walls, $L$, and the system size along the walls, $H$, to be equal, $L = H = 80$, to make sure that this distance is several times larger than the largest correlation length in the system. To verify that $\rho_c=\rho_b$, we computed $\rho_b$ in a system with periodic boundary conditions. \section{\label{sec:results}Results} \subsection{The ground state ($T^*=0$)} As noted above, we assume that the interaction between the particles and the walls exists only in the rows closest to them. The walls can be neutral ($h=0$), attractive ($h<0$) or repulsive ($h>0$). In the case of neutral walls ($h=0$), the ground states are the cluster (e,f), lamellar (h) and bubble (i,l) phases (Fig.~\ref{fig:GS_ADS}), which are similar to those in the bulk. The phase (e) merely indicates the absence of adsorption on the wall, with a homogeneous distribution of the average density in the layers parallel to the wall. The lamellar phases (h) and (h') are strictly periodic in the direction perpendicular to the wall, and the filled layers start directly from the attractive wall. The other phases are separated from the wall by different structures in two (f,l) or four (i) adjoining layers. For non-neutral walls the same bulk structures appear, with richer adsorbed structures within at most six adjoining rows. It is interesting to consider an attractive wall and small values of the chemical potential, corresponding to the ordered cluster phase shown in panel (c) of Fig.~\ref{fig:GS_ADS}. In the two rows closest to the boundary (rows 1 and 2), the clusters are packed more densely than in the bulk. The two following rows (rows 3 and 4), however, are empty because of the repulsion between third neighbors. As a result, the adsorption is negative, $\Gamma=-1/3$.
This counter-intuitive result, showing that an attractive surface can lead to desorption, follows from the formation of a depletion (empty) zone behind the adsorbed layer. Let us now focus on the vacuum, i.e., the $T^*=0$ limit of the disordered dilute phase studied at $T^*>0$ in the following sections. A layer of thin or thick clusters is formed for $-2.5<\mu^*<-2.0$ or $-2.0<\mu^*<-1.5$, respectively. The adsorption is positive in both cases, in contrast to the case discussed above. At $\mu^*=-2.0$ there is a discontinuous change from $\Gamma=1/2$ to $\Gamma=1$; next, at $\mu^*=-1.5$, the adsorption jumps from $\Gamma=1$ to $\Gamma=-1/3$, and at $\mu^*=-0.75$ from $\Gamma=-1/3$ to $\Gamma=0$. \begin{figure*}[htb!] \includegraphics{fig1.eps} \caption{\label{fig:GS_ADS} The ground states ($T^* = 0$) for $h=-1,0,1$ and for different values of the chemical potential $\mu^*$. The snapshots present a region of the slit close to one of the confining walls. Blue and white circles represent occupied and empty sites, respectively. Below them, the density profiles, $\rho(x)$, along the slit cross-section are presented. The lower panel presents the regions of stability of the structures (a-n) as functions of $\mu^*$ for the three different values of $h$. The regions denoted by vac. and cond. correspond to the vacuum and condensed phases, respectively.} \end{figure*} \subsection{The adsorption isotherms} We investigated the behavior of the adsorption for various temperatures and wall-particle interactions (Fig.~\ref{fig:ADS}). The chemical potential values are restricted from above by $\mu^*_{pt} = -1.0, -0.7, 0.0$ for $T^* = 0.5, 0.7, 1.0$, respectively, to avoid the influence of phase transition effects on the adsorption phenomena. \begin{figure*}[htb!]
\includegraphics[width=1\linewidth]{fig2.eps} \caption{\label{fig:ADS} Adsorption $\Gamma$ versus the chemical potential $\mu^*$ for different values of the wall-particle interaction $h$ at the temperatures $T^*=1.0$ (left), $T^*=0.7$ (central) and $T^*=0.5$ (right panel). $\Gamma$, $T^*$ and $\mu^*$ are dimensionless.} \end{figure*} For attractive walls ($h<0$), the maximum of the adsorption at the low temperature $T^*=0.5$ is observed at a value of the chemical potential that corresponds to the disordered gas phase in the bulk. At the lowest values of the chemical potential, the adsorption is weak due to the very low density of particles. With increasing chemical potential, and correspondingly increasing bulk density, the adsorption increases as well, up to an intermediate value of the chemical potential, and then starts to decrease. At higher temperatures ($T^*=0.7, 1.0$), the adhesion of particles to the wall becomes less intense. As a result, the maximum values of the adsorption decrease with increasing temperature and the peak of the adsorption is smoothed out. It is of interest to investigate the formation of this peak in more detail. To this end, the partial adsorption is introduced as \begin{equation} \Gamma(z) = \sum_{x=0}^{z}(\rho(x) - \rho_c). \label{eq:PartAds} \end{equation} The partial adsorption for values of the chemical potential below, at, and above the adsorption maximum is shown in Fig.~\ref{fig:Par_ADS}. \begin{figure}[htb!] \includegraphics[width=1\linewidth]{fig3.eps} \caption{\label{fig:Par_ADS} The partial adsorption, Eq.~(\ref{eq:PartAds}), for values of the chemical potential $\mu^*$ below, at, and above the adsorption maximum at $T^* = 0.5$ and $h = -1$.} \end{figure} The partial adsorption asymptotically approaches the total adsorption for the same chemical potential.
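The cumulative sum defining the partial adsorption, Eq.~(\ref{eq:PartAds}), can be sketched as follows (an illustrative helper with names and toy data of our choosing, not the authors' code):

```python
# Sketch of the partial adsorption Gamma(z): the cumulative sum of the
# density deviations from rho_c up to row z, for one wall. For large z
# it approaches the near-wall contribution to the total adsorption.
def partial_adsorption(profile, rho_c):
    """Return the list [Gamma(0), Gamma(1), ...] for one wall."""
    gamma, out = 0.0, []
    for rho in profile:
        gamma += rho - rho_c
        out.append(gamma)
    return out
```

For a profile with two enriched rows followed by two depleted rows, $\Gamma(z)$ overshoots at small $z$ and then decreases toward its asymptotic value, which mirrors the behavior shown in Fig.~\ref{fig:Par_ADS}.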
However, the deviations of the partial adsorption from the asymptotic value grow as the chemical potential and the bulk density increase. This is a consequence of cluster formation in the system. The dependence of the partial adsorption on $z$ allows us to estimate the distance over which the influence of the wall extends. It increases strongly with the density, and involves 4, 8 and 16 layers for $\mu^*=-2.1, -1.7$ and $-1.0$, respectively. \subsection{The density profile} The distribution of particles in the near-wall region changes considerably with increasing chemical potential and, correspondingly, increasing bulk concentration (Fig.~\ref{fig:profiles}). At low density, for system states before and at the adsorption maximum, the four or eight rows nearest to the wall show deviations from the bulk density. The closest row is excessively populated due to the attraction to the wall, and the next row is populated as well because of the interparticle attraction between first neighbours. The two subsequent rows are depleted, owing to the repulsion between third neighbours. At low particle density, the influence of the wall decreases fast with the distance from the wall. At larger concentration (for $\mu^*=-1$) a relay mechanism transfers the density deviations over longer distances. The qualitative behavior of the density profiles at the higher reduced temperatures (0.7 and 0.9) remains the same. However, because of the larger densities at $T^*=0.9$, damped oscillations are visible already at the chemical potential corresponding to the adsorption maximum. With increasing strength of the particle-wall attraction, the density of adsorbed particles near the wall increases, and the peak position of $\Gamma(\mu^*)$ shifts to the region of lower chemical potential (Fig.~\ref{fig:ADS}). \begin{figure}[htb!]
\includegraphics[width=0.7\linewidth]{fig4.eps} \caption{\label{fig:profiles} The density profiles in the near-wall region for different strengths of the particle attraction to the wall, $h$, at the temperature $T^*=0.5$. Three values of the chemical potential are shown, corresponding to the regions before, at, and after the peak of the adsorption isotherm. The profiles are shown for $\mu^* = -2.6, -2.1$ (a), $\mu^*=-2.0,-1.7$ (b), and $\mu^* = -1.0, -1.0$ (c), for the particle-wall interactions $h=-2$ (solid line) and $h=-1$ (dashed line). } \end{figure} Because of the attraction to the wall, the density of particles $\rho_0$ in the row adjoining it initially grows faster with increasing chemical potential than the bulk density (Fig.~\ref{fig:Densities}). However, when $\rho_0$ attains the value corresponding to the maximal adsorption, its growth slows down. \begin{figure}[htb!] \includegraphics[width=0.8\linewidth]{fig5.eps} \caption{\label{fig:Densities} The average density in the row next to the wall (solid line) and in the bulk (dashed line) at $T^*=0.5$ and for the wall-particle interaction $h = -1$. The derivatives of the two densities with respect to the chemical potential are shown in the inset. The crossover between the growth rates of the two densities occurs at $\mu^*=-1.62$.} \end{figure} The total adsorption is mainly determined by the competition between the density deviations in the two rows closest to the wall and in the two subsequent ones, which results from the competing interparticle interactions. The long-range repulsive interaction is of minor importance at low particle density. With increasing chemical potential, and due to the wall attraction, the density of the particles in the layers closest to the wall increases to values at which the interparticle repulsion comes into play and starts to hamper the density increase. This is well illustrated by the dependence of the density $\rho_0$ on the strength of the wall attraction (Fig.~\ref{fig:profiles}). Even at the lowest bulk density (Fig.
~\ref{fig:profiles}a), the ratio $\rho_0/\rho_c$ is considerably smaller than the Boltzmann factor $\exp(-h/T^*)$, especially at $h=-2$. This is precisely the influence of the interparticle repulsion, because $\rho_0$ is large enough for the repulsion to order the system into a succession of rhombuses~\cite{almarza:14:0}. The upper bound for $\rho_0$ for a layer of clusters that do not repel one another is $\rho_0=1/2$ (see Fig.~\ref{fig:GS_ADS}). At the same time, the bulk density increases as well, and the density in rows 3 and 4 decreases to very small values. These two layers give a negative contribution to the adsorption, and the absolute value of this contribution increases with increasing bulk density. The result is the maximum in the dependence of the total adsorption on the chemical potential. Additional information can be drawn from the adsorption at neutral and repulsive walls. The neutral wall behaves like an attractive one, especially at chemical potentials corresponding to not very small concentrations (Fig.~\ref{fig:ADS}). This is clearly a result of the long-range interparticle repulsion: the particles in the rows closest to the wall feel the repulsion from the bulk side of the system, but do not experience such an action from the wall side. This effect even shows up for the repulsive wall, resulting in a negative adsorption that is considerably weaker than the adsorption at the attractive wall. \subsection{Cluster formation} In systems with competing interactions, the formation of clusters plays an important role in general~\cite{almarza:14:0, bomont:12:0, santos:17:0}, as well as in adsorption phenomena, as was demonstrated for a 3D off-lattice system~\cite{litniewski:19:0}. Thus, an analysis of the possible clusters and their distribution in the near-wall region can give additional insight into the characterization of adsorption in our 2D lattice system.
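One way to collect such cluster statistics is a breadth-first search over nearest-neighbor bonds. The sketch below is our illustrative code, not the authors'; in particular, defining a cluster as a nearest-neighbor-connected set of occupied sites is an assumption consistent with the first-neighbor attraction of the model:

```python
# Sketch: sizes of all clusters in a configuration of occupied axial
# sites on the triangular lattice, with clusters defined as sets of
# particles connected through nearest-neighbor bonds.
from collections import deque

NN = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]

def cluster_sizes(occupied):
    occupied = set(occupied)      # work on a copy
    sizes = []
    while occupied:
        seed = occupied.pop()
        queue, size = deque([seed]), 1
        while queue:              # breadth-first flood fill
            x, y = queue.popleft()
            for dx, dy in NN:
                n = (x + dx, y + dy)
                if n in occupied:
                    occupied.remove(n)
                    queue.append(n)
                    size += 1
        sizes.append(size)
    return sorted(sizes)
```

Restricting the input to the sites of a given row (or pair of rows) yields the near-wall distributions, and a histogram of the returned sizes gives the probabilities $P(M)$ discussed below.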
The following types of clusters are predominantly found in the gas phase of these systems: monomers, dimers, two types of triangular clusters, and rhomboidal clusters. The contribution of each of them changes with increasing chemical potential, or density, both in the bulk and in the border area (Fig.~\ref{fig:Cluster_distr}). The distribution is shown for cluster sizes $M \leq 7$, since the contribution of larger clusters is much lower, of the order of $10^{-6}$. In addition, for each size greater than or equal to 3, we show the sum of the probabilities of all possible configurations, because the probability of the most energetically favorable configuration of a given cluster size (an equilateral triangle for $M=3$, a rhombus for $M=4$, a trapezoid for $M=5$) is much higher than that of the others, amounting to approximately 95\% of all clusters of a given size for the chemical potential values considered. The three-particle clusters are an exception: the contribution of the clusters forming an equilateral triangle is about 80\%, about 20\% comes from clusters forming an irregular triangle, and much less than 1\% from linear clusters. \begin{figure}[htb!] \includegraphics[width=0.8\linewidth]{fig6.eps} \caption{\label{fig:Cluster_distr} The distribution of the probabilities for particles to belong to a cluster of size $M$, for different values of the chemical potential at $T^*=0.5$, in the two rows closest to the wall and in the bulk. The partial contributions of different configurations of particles in a cluster of size $M$ are not displayed. Symbols indicate the distributions in the near-wall region (squares, triangles and circles are used for the chemical potentials $\mu^* = -2.1, -1.7, -1.0$, respectively). The filled areas reflect the difference between the probability distribution for certain clusters in the bulk and in the near-wall area.
The hatched fillings indicate the probability excess in the border area as compared to the bulk. For $\mu^*=-1.7$, $P(1)=0.316$ and $P(4)=0.325$ in the bulk. As these values are very close to each other, at this value of the chemical potential the monomer-dominated fluid crosses over to the cluster-dominated fluid.} \end{figure} With increasing density, the particle distribution shifts to larger cluster sizes. Compared to the bulk, the excess of isolated particles in the near-wall region decreases with increasing chemical potential and takes a negative value for $\mu^*=-1.0$. At the chemical potential that corresponds to the maximum of the adsorption, the probability distribution of particles among the clusters is qualitatively different from the two other situations. The probability distribution has two maxima, one for isolated particles and the other for rhombuses. A similar situation takes place in the bulk, which can be explained by the density increase, although there is a large difference between the bulk and near-wall mean particle densities (0.2 against 0.4 at $\mu^*=-1$ and $T^*=0.5$). The ordered structures in these regions are different: in the near-wall region a two-row stripe filled by rhomboidal clusters is followed by two almost empty rows, while in the bulk the clusters are distributed homogeneously. Thus, the most ordered rhombus state in the bulk~\cite{almarza:14:0} is observed at $\rho=1/3$, while in the near-wall region it occurs at $\rho=1/2$ (Fig.~\ref{fig:GS_ADS}); this leads to similar distributions of particles among the clusters in both regions. Two orientations of the rhomboidal clusters are observed in the near-wall region, with the edges parallel to the wall. \subsection{Correlation effect} For a more detailed analysis, we study the behavior of the correlation functions $g_z(\delta y)$ as a function of the distance $\delta y$ between the lattice sites in the direction parallel to the wall, for several layers at a distance $z$ from it.
\begin{equation} g_z(\delta y) = \langle \hat\rho(y,z) \hat\rho(y+\delta y,z) \rangle \end{equation} For small values of $\mu^*$, when the density is very low, the density at the surface grows faster with increasing chemical potential than the density in the bulk, because the interaction with the wall makes it favourable to introduce a particle or a cluster at the surface. When the density becomes larger, and the average distance between the clusters at the surface becomes too small to introduce another cluster without causing repulsion between the new and the existing clusters, it becomes more favourable to introduce a cluster into the bulk, where the density is smaller. At this value of the chemical potential, the correlation function (Fig.~\ref{fig:CorFunc}) in the near-wall row starts to show an oscillatory decay. This short-range ordering of the clusters allows the system to avoid the repulsion between pairs of clusters that would occur if the clusters were distributed randomly. No correlation was observed between the positions of particles in the first and fifth rows, due to the very low density of particles in the third and fourth rows. \begin{figure}[htb!] \includegraphics[width=1\linewidth]{fig7.eps} \caption{\label{fig:CorFunc} Correlation functions $g_1(\delta y)$ along the wall for the first row, for different values of the chemical potential corresponding to the states before, at and after the maximum of the adsorption $\Gamma(\mu^*)$, at the temperature $T^*=0.5$ and the particle-wall interaction energy $h=-1$.} \end{figure} The oscillatory decay of the correlation function in the parallel direction and of the density profile in the perpendicular direction both start when the adsorption begins to decay as a function of $\mu^*$. This is consistent with the slower increase of the density at the wall than in the bulk. In the bulk, the density is still low enough that the probability that a randomly introduced cluster will be close to existing clusters is low.
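An estimator for the in-row correlation function $g_z(\delta y)$ can be sketched as follows (illustrative code; averaging over all sites of a row with periodic boundary conditions along the wall matches the simulation geometry, while the function name and toy data are ours):

```python
# Sketch of an estimator for g_z(dy) = <rho(y, z) rho(y + dy, z)>:
# average the product of occupation numbers over all sites y of one
# lattice row, with periodic boundary conditions along the wall.
def row_correlation(row, dy):
    """row: list of 0/1 occupation numbers along one lattice row."""
    H = len(row)
    return sum(row[y] * row[(y + dy) % H] for y in range(H)) / H
```

For a perfectly alternating row the estimator oscillates between the mean density and zero, which is the extreme case of the oscillatory decay discussed above.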
Since the density at the surface (rows 1 and 2) grows more slowly, while in the adjacent layer (rows 3 and 4) the density attains very small values, the difference between the mean near-wall density and the density in the bulk must decrease. This is an effect of the repulsion between the clusters when the density is relatively large. It leads to local periodic ordering on the one hand, and to a decreasing adsorption on the other. As a result, the attractive surface covered by clusters turns into an effectively repulsive one: a repulsive barrier is formed by the adsorbed particles~\cite{litniewski:19:0}, which repel further particles through their mutual long-range repulsion. \section{\label{sec:concl}Summary and Conclusions} The purpose of this work was to investigate the effect of cluster formation on adsorption phenomena. We focused on a monolayer of SALR particles confined by a straight wall. In order to determine generic effects common to many SALR systems, we considered a generic model with a phase diagram determined earlier in Ref.~\cite{almarza:14:0}. We assumed that the particles occupy the sites of a 2D triangular lattice and interact as in the model introduced in Ref.~\cite{pekalski:14:0} (first-neighbor attraction and third-neighbor repulsion). The wall, modeled by a straight line, interacts only with the particles in the first row (next to the wall). We obtained the adsorption isotherm as a function of the chemical potential, $\Gamma(\mu^*)$, for a few values of the temperature and for different strengths of the wall-particle interaction. In addition, structural characteristics such as the cluster size distribution in the bulk and near the wall, the density profile in the direction perpendicular to the wall, the correlation function in the direction parallel to the wall, the density in the first row and in the bulk, and the partial adsorption defined in Eq.~(\ref{eq:PartAds}) were computed.
All quantities were obtained by MC simulations. We have found that the shape of the adsorption isotherm is qualitatively different from that in simple fluids. In the case of the dilute phase, the adsorption is a nondecreasing function of the chemical potential when the long-range repulsion between the particles is absent. In contrast, if the long-range repulsion is present, the adsorption exhibits a pronounced maximum at a chemical potential $\mu^*=\mu^*_{max}$ that is significantly smaller than its value at the phase transition (i.e., still in the low-density disordered phase). We have found this characteristic non-standard shape of $\Gamma(\mu^*)$ for all studied temperatures, and even in the absence of wall-particle attraction. Interestingly, all the studied structural characteristics undergo a qualitative change at $\mu^*\approx\mu^*_{max}$. When $\Gamma(\mu^*)$ is an increasing function of $\mu^*$, the behavior of the system is dominated by individual particles. Even though clusters are present when $\mu^*$ exceeds a certain value, the probability of finding an isolated particle is larger than the probability of finding a particle belonging to the optimal cluster (Fig.~\ref{fig:Cluster_distr}). The density in the first row increases faster with $\mu^*$ than the density in the bulk (Fig.~\ref{fig:Densities}), but it is still low enough that the average distance between the particles is larger than the range of the repulsion. No short-range order is present near the wall. In this low-density regime, it is more probable that single particles, rather than clusters of particles, will be introduced into the system upon an increase of $\mu^*$. Moreover, with a large probability the new particles will be adsorbed at the attractive wall. Even a neutral wall effectively attracts particles, because the long-range repulsion from the particles at $z>0$ is not compensated, due to the missing neighbors at $z<0$.
For $\mu^*>\mu^*_{max}$, however, the probability of finding an isolated particle is smaller than the probability of finding a particle belonging to the optimal cluster (Fig.~\ref{fig:Cluster_distr}). In this case, we may expect that, as $\mu^*$ increases, clusters will be introduced into the system with a larger probability than isolated particles. In this range of $\mu^*$, the density in the first row is larger, and the average distance between the clusters is significantly smaller, than in the bulk. For this reason the long-range repulsion can be more easily avoided when a new cluster is introduced into the bulk rather than into the near-surface region. As a result, the density in the bulk grows faster than at the wall, and the adsorption decreases with increasing $\mu^*$. Moreover, to avoid the repulsion between the clusters, short-range order appears near the wall. This short-range order is reflected in the oscillatory density profile and correlation function in the perpendicular and parallel directions, respectively. The density in the layer of clusters at the wall (rows 1 and 2) approaches $1/2$, and the density in rows 3 and 4 approaches $0$, when $T^*\to 0$ (Fig.~\ref{fig:Densities}). This empty layer gives a negative contribution to the adsorption and shows that an attractive surface covered by clusters of SALR particles becomes effectively repulsive. A similar depletion zone was observed in a 3D system~\cite{litniewski:19:0}. We expect the remaining anomalies to hold in 3D as well. However, due to the repulsive barrier between the adsorbed particles and the bulk, such a study can be difficult and may require nontrivial sampling methods. It would be interesting to verify our predictions experimentally. The maximum of $\Gamma(\mu^*)$ could serve as an indication of a crossover between a monomer- and a cluster-dominated fluid, and of the appearance of short-range order near the system boundary.
\section{Acknowledgements} This project has received funding from the European Union’s Horizon 2020 research and innovation program under the Marie Skłodowska-Curie grant agreement No 734276. Additional support in the years 2017–2020 has been granted for the CONIN project by the Polish Ministry of Science and Higher Education (agreement no. 3854/H2020/17/2018/2). Financial support from the National Science Center under Grant No. 2015/19/B/ST3/03122 is also acknowledged.
\section{Introduction} A Banach algebra $A$ is \emph{biprojective} if the multiplication map $\Delta_*:A{\widehat{\otimes}} A\rightarrow A$ has a right inverse in the category of $A$-bimodule maps. This can be thought of as a ``finiteness condition''. In particular, the group algebra $L^1(G)$ is biprojective if and only if $G$ is compact, see \cite[Chapter~IV, Theorem~5.13]{hel}. When dealing with more non-commutative (or ``quantum'') algebras (the classical situation being commutative in the sense that $L^1(G)^* = L^\infty(G)$ is a commutative von Neumann algebra), there is a large amount of evidence that \emph{operator spaces} form the correct category to work in. For example, if we consider the Fourier algebra $A(G)$, then $A(G)$ is \emph{operator} biprojective if and only if $G$ is discrete, \cite{wood}. When $G$ is abelian, as $A(G) \cong L^1(\hat G)$, and $\hat G$ is compact if and only if $G$ is discrete, this result is in full agreement with what we might expect. By contrast, if we ask when $A(G)$ is biprojective then, if $G$ is discrete and almost abelian (that is, contains a finite-index abelian subgroup), $A(G)$ is biprojective. Conversely, if $A(G)$ is biprojective, then $G$ is discrete, and is either almost abelian, or non-amenable yet without containing $\mathbb F_2$, see \cite{runde}. In this note, we shall continue the study of when the convolution algebra of a (reduced) compact quantum group is operator biprojective. It was shown in \cite[Theorem~4.12]{aristov} that if the convolution algebra of a locally compact quantum group ${\mathbb G}$ is operator biprojective, then ${\mathbb G}$ is already compact. Conversely, if ${\mathbb G}$ is a compact Kac algebra, then ${\mathbb G}$ is operator biprojective. We shall show that if the right inverse to $\Delta_*$ can be chosen to be completely contractive, then ${\mathbb G}$ must already be a Kac algebra. We make some remarks on the general case.
We indicate that the modular theory of the Haar state seems to be important outside of the Kac case, and it seems likely that a better understanding of how the coproduct interacts with the modular automorphism group will be necessary to completely characterise when the convolution algebra of ${\mathbb G}$ is operator biprojective. We shall follow the notation of \cite{ER}, and in particular write ${\widehat{\otimes}}$ for the operator space projective tensor product, and write $\mc{CB}(E,F)$ to denote the space of completely bounded linear maps between operator spaces $E$ and $F$. \section{Locally compact quantum groups} Locally compact quantum groups \cite{kus1, kus3} are an axiomatic framework which encompasses the $L^1(G)$ algebras, the Fourier algebra $A(G)$, and various ``quantum'' examples, for example Woronowicz's compact quantum groups. Kac algebras \cite{ES} are an earlier axiomatic framework which fails to encompass many of the ``quantum'' examples, for example \cite{woro3}. However, we shall concentrate on the compact case, which is technically easier. We shall follow the presentation of \cite{timm}, which in turn closely follows Woronowicz's original papers \cite{woro1} and \cite{woro2}. See also the readable, non-technical accounts in \cite{kus3} and the survey \cite{maes}, although be aware that these sources use different notation. A compact quantum semigroup is a unital C$^*$-algebra $A$ equipped with a unital $*$-homomorphism $\Delta:A\rightarrow A\otimes_{\min} A$ such that $(\Delta\otimes\iota)\Delta = (\iota\otimes\Delta)\Delta$. A compact quantum group is a compact quantum semigroup $(A,\Delta)$ which satisfies the \emph{cancellation laws}, namely that \[ \Delta(A)(A\otimes 1) := \operatorname{lin}\{ \Delta(a)(b\otimes 1) : a,b\in A \}, \quad \Delta(A)(1\otimes A), \] are both dense in $A\otimes_{\min}A$.
If $G$ is a compact semigroup, then we may set $A=C(G)$ and $\Delta(f)(s,t) = f(st)$ to get a compact quantum semigroup $(A,\Delta)$. Then the cancellation laws correspond to $G$ having the cancellation laws: namely that if $st=sr$ for $s,t,r\in G$, then $t=r$, and similarly with the orders reversed. As sketched in \cite{maes}, these are equivalent to $G$ being a group. From now on, fix a compact quantum group $(A,\Delta)$. These axioms imply that $A$ carries a unique \emph{Haar state}, that is, a state $\varphi\in A^*$ such that \[ (\varphi\otimes\iota)\Delta(a) = \varphi(a) 1 = (\iota\otimes\varphi)\Delta(a) \qquad (a\in A). \] We can form the GNS construction $(H,\Lambda)$ for $\varphi$. We shall always suppose that $(A,\Delta)$ is \emph{reduced}, that is, that $\varphi$ is faithful. As such, we shall identify $A$ with a concrete C$^*$-algebra acting on $H$. If $\varphi$ is not faithful, then we may quotient by its kernel $N=\{ a\in A : \varphi(a^*a)=0 \}$ to obtain a reduced compact quantum group. Note that $N$ is an ideal because $\varphi$ is a KMS weight (see below); see the details in \cite[Theorem~2.1]{bmt}. Let $M = A''$ be the von Neumann algebra generated by $A$. Then $\Delta$ extends to a normal $*$-homomorphism $\Delta:M\rightarrow M\overline\otimes M$. Then, by \cite[Theorem~7.2.4]{ER}, $(M\overline\otimes M)_* = M_* {\widehat{\otimes}} M_*$, and the normality of $\Delta$ induces a complete contraction $\Delta_*:M_* {\widehat{\otimes}} M_* \rightarrow M_*$. That $\Delta$ is coassociative implies that $\Delta_*$ is associative, so $M_*$ becomes a completely contractive Banach algebra. If we started with a compact group $G$, then $M_*$ is nothing but $L^1(G)$, and so we refer to $M_*$ as the \emph{convolution algebra} of $(A,\Delta)$. For more on (locally) compact quantum groups in the von Neumann algebra setting see \cite{kus2}.
A \emph{finite-dimensional corepresentation} of $(A,\Delta)$ is a matrix $u=(u_{i,j}) \in \mathbb M_n(A)$ such that \[ \Delta(u_{ij}) = \sum_{k=1}^n u_{ik} \otimes u_{kj} \qquad (1\leq i,j\leq n). \] There are suitable notions of \emph{intertwiner} between corepresentations, and of what an \emph{irreducible} corepresentation is. Every finite-dimensional corepresentation can be written as the direct sum of irreducible corepresentations. Using the Haar state, it can be shown that every finite-dimensional corepresentation is equivalent to a \emph{unitary} one, that is, where $u\in\mathbb M_n(A)$ is unitary. The general corepresentation theory of $(A,\Delta)$ parallels the representation theory of compact groups very closely. Let $\{u^\alpha = (u^\alpha_{ij})_{i,j=1}^{n_\alpha} : \alpha\in\mathbb A\}$ be a maximal family of finite-dimensional irreducible unitary corepresentations of $(A,\Delta)$. Let $\alpha_0\in\mathbb A$ be such that $u^{\alpha_0} = 1$, the trivial corepresentation. Let $\mc A$ be the algebra generated by $\{ u^\alpha_{ij} : \alpha\in\mathbb A, 1 \leq i,j \leq n_\alpha \}$ in $A$. Then $\mc A$ is a \emph{Hopf $*$-algebra}, and $\{ u^\alpha_{ij} : \alpha\in\mathbb A, 1 \leq i,j \leq n_\alpha \}$ forms a basis for $\mc A$. This means that $\mc A$ is a $*$-algebra, that $\Delta$ restricts to give a $*$-homomorphism $\Delta: \mc A\rightarrow \mc A\otimes \mc A$ (the algebraic tensor product) and there exist maps $\epsilon:\mc A\rightarrow\mathbb C$ and $S:\mc A\rightarrow\mc A$, the \emph{counit} and \emph{antipode}, satisfying the usual properties. Indeed, for $\alpha\in\mathbb A$ and $1\leq i,j\leq n_\alpha$, we have that \begin{gather*} \Delta\big( u^\alpha_{i,j} \big) = \sum_{k=1}^{n_\alpha} u^\alpha_{i,k} \otimes u^\alpha_{k,j}, \quad S(u^\alpha_{i,j}) = \big( u^\alpha_{j,i} \big)^*, \quad \epsilon\big( u^\alpha_{i,j} \big) = \delta_{ij}, \quad \varphi\big( u^\alpha_{i,j} \big) = \delta_{\alpha, \alpha_0}.
\end{gather*} Furthermore, for each $\alpha\in\mathbb A$, there exists a unique positive invertible matrix $F^\alpha \in \mathbb M_{n_\alpha}$ with $\operatorname{Tr} F^\alpha = \operatorname{Tr} (F^\alpha)^{-1}$, and such that \[ \varphi\big( (u^\beta_{ij})^* u^\alpha_{kl} \big) = \delta_{\alpha\beta} \delta_{jl} \frac{((F^\alpha)^{-1})_{ki}}{\operatorname{Tr}(F^\alpha)}, \quad \varphi\big( u^\beta_{ij} (u^\alpha_{kl})^* \big) = \delta_{\alpha\beta} \delta_{ik} \frac{F^\alpha_{lj}}{\operatorname{Tr}(F^\alpha)}. \] The Hopf $*$-algebra $\mc A$ is norm dense in $A$, and is the unique such dense Hopf $*$-algebra; see \cite[Appendix~A]{bmt}. These ``$F$-matrices'' allow us to define characters on $\mc A$. For $z\in{\mathbb C}$, define \[ f_z:\mc A\rightarrow {\mathbb C}, \quad u^\alpha_{ij} \mapsto \big((F^\alpha)^z\big)_{ij}. \] As $F^\alpha$ is positive, the matrix $(F^\alpha)^z$ makes sense. Then, for $w,z\in{\mathbb C}$, define \[ \rho_{z,w}:\mc A\rightarrow\mc A, \quad u^\alpha_{ij} \mapsto \sum_{k,l=1}^{n_\alpha} f_w(u^\alpha_{ik}) f_z(u^\alpha_{lj}) u^\alpha_{kl}. \] Then $\rho_{z,w}$ is an automorphism of $\mc A$ with inverse $\rho_{-z,-w}$, and if $z$ and $w$ are purely imaginary, then $\rho_{z,w}$ is a $*$-automorphism of $\mc A$. In particular, set \[ \sigma_z = \rho_{iz,iz}, \quad \tau_z = \rho_{-iz,iz} \qquad (z\in{\mathbb C}). \] Then $(\sigma_t)_{t\in{\mathbb R}}$ is the restriction to $\mc A$ of the modular automorphism group for $\varphi$, and $(\tau_t)_{t\in{\mathbb R}}$ is the restriction of the scaling group. For example, we can calculate that $\varphi(a \sigma_{-i}(b)) = \varphi(ba)$ for $a,b\in\mc A$, a relation which we expect, as $\varphi$ is KMS for $\sigma$. See \cite{tak2} for more details on modular theory of weights.
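As an illustration of this relation on matrix coefficients (a routine verification from the formulas above, included here for the reader's convenience), take $a = (u^\alpha_{ij})^*$ and $b = u^\alpha_{kl}$. Since $\sigma_{-i} = \rho_{1,1}$ and $f_1(u^\alpha_{ij}) = F^\alpha_{ij}$, we have $\sigma_{-i}(b) = \sum_{r,s} F^\alpha_{kr} F^\alpha_{sl} u^\alpha_{rs}$, and so \[ \varphi\big( a\sigma_{-i}(b) \big) = \sum_{r,s=1}^{n_\alpha} F^\alpha_{kr} F^\alpha_{sl} \varphi\big( (u^\alpha_{ij})^* u^\alpha_{rs} \big) = \frac{F^\alpha_{jl}}{\operatorname{Tr}(F^\alpha)} \sum_{r=1}^{n_\alpha} F^\alpha_{kr} \big((F^\alpha)^{-1}\big)_{ri} = \delta_{ki} \frac{F^\alpha_{jl}}{\operatorname{Tr}(F^\alpha)} = \varphi(ba), \] the last equality being the second orthogonality relation above.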
\begin{proposition}\label{spec_coreps} There exists a maximal family of finite-dimensional irreducible unitary corepresentations of $(A,\Delta)$, say $\{v^\alpha = (v^\alpha_{ij})_{i,j=1}^{n_\alpha} : \alpha\in\mathbb A\}$, with the property that the associated $F$-matrices are all diagonal, say $F^\alpha$ has diagonal entries $(\lambda^\alpha_i)_{i=1}^{n_\alpha}$, so that $\sum_i \lambda^\alpha_i = \sum_i (\lambda^\alpha_i)^{-1} = \operatorname{Tr}_\alpha$, say. \end{proposition} \begin{proof} Start with some maximal family $\{u^\alpha = (u^\alpha_{ij})_{i,j=1}^{n_\alpha} : \alpha\in\mathbb A\}$ as before. As each $F^\alpha$ is positive, it can be diagonalised by some unitary matrix $Q^\alpha \in \mathbb M_{n_\alpha}$. Let $(\lambda^\alpha_i)_{i=1}^{n_\alpha}$ be the eigenvalues of $F^\alpha$, so that $\operatorname{Tr}(F^\alpha) = \sum_i \lambda^\alpha_i = \operatorname{Tr}((F^\alpha)^{-1}) = \sum_i (\lambda^\alpha_i)^{-1}$. Then $(Q^\alpha)^* F^\alpha Q^\alpha$ is the diagonal matrix with entries $(\lambda^\alpha_i)_{i=1}^{n_\alpha}$. Set \[ v^\alpha_{ij} = \big( (Q^\alpha)^* u^\alpha Q^\alpha \big)_{ij} = \sum_{k,l=1}^{n_\alpha} \overline{Q^\alpha_{ki}} u^\alpha_{kl} Q^\alpha_{lj} \qquad (\alpha\in\mathbb A, 1\leq i,j\leq n_\alpha). \] It is now routine to check that $v^\alpha$ is a unitary corepresentation matrix, and that the properties above still hold for the family $\{ v^\alpha_{ij} \}$.
For example, we see that \begin{align*} \varphi\big( (v^\beta_{ij})^* v^\alpha_{kl} \big) &= \varphi\Big(\Big( \sum_{r,s} \overline{Q^\beta_{ri}} u^\beta_{rs} Q^\beta_{sj} \Big)^* \sum_{t,p} \overline{Q^\alpha_{tk}} u^\alpha_{tp} Q^\alpha_{pl} \Big) = \sum_{r,s,t,p} Q^\beta_{ri} \overline{Q^\beta_{sj}} \overline{Q^\alpha_{tk}} Q^\alpha_{pl} \varphi\big( (u^\beta_{rs})^* u^\alpha_{tp} \big) \\ &= \delta_{\alpha\beta} \frac{1}{\operatorname{Tr}(F^\alpha)} \sum_{r,s,t} Q^\beta_{ri} \overline{Q^\beta_{sj}} \overline{Q^\alpha_{tk}} Q^\alpha_{sl} ((F^\alpha)^{-1})_{tr} \\ &= \delta_{\alpha\beta} \frac{1}{\operatorname{Tr}_\alpha} \sum_s (Q^\alpha)^*_{js} Q^\alpha_{sl} \big( (Q^\alpha)^* (F^\alpha)^{-1} Q^\alpha \big)_{ki} = \delta_{\alpha\beta} \delta_{jl} \delta_{ki} \frac{1}{\operatorname{Tr}_\alpha} \frac{1}{\lambda^\alpha_i}. \end{align*} Similar calculations show that \[ \varphi\big( v^\beta_{ij} (v^\alpha_{kl})^* \big) = \delta_{\alpha\beta} \delta_{ik} \delta_{jl} \frac{\lambda^\alpha_j}{\operatorname{Tr}_\alpha}, \] and also \[ f_z(v^\alpha_{ij}) = \delta_{ij} (\lambda^\alpha_i)^z, \qquad \rho_{z,w}(v^\alpha_{ij}) = (\lambda^\alpha_i)^w (\lambda^\alpha_j)^z v^\alpha_{ij}. \] \end{proof} \section{Biprojectivity} Let $(A,\Delta)$ be a reduced compact quantum group, with associated Haar state $\varphi$, GNS construction $(H,\Lambda)$, von Neumann algebra $M$ and convolution algebra $M_*$. We shall study when $M_*$ is operator biprojective, that is, whether there is a completely bounded right inverse to $\Delta_*:M_*{\widehat{\otimes}} M_* \rightarrow M_*$ which is also an $M_*$-bimodule homomorphism. Henceforth, we shall term such a map $\theta_*$ a \emph{splitting morphism}. See \cite{aristov,aristov2} for further details on the operator space case, and \cite[Chapter~IV]{hel} or \cite[Section~4.3]{rundebook} for the classical Banach space setting.
\begin{lemma} $M_*$ is operator biprojective if and only if there exists a normal completely bounded map $\theta:M \overline\otimes M \rightarrow M$ with \[ \theta\Delta = \operatorname{id}, \quad \Delta\theta = (\theta\otimes\operatorname{id})(\operatorname{id}\otimes\Delta) = (\operatorname{id}\otimes\theta)(\Delta\otimes\operatorname{id}). \] \end{lemma} \begin{proof} Suppose that such a $\theta$ exists. As $\theta$ is normal, it has a preadjoint $\theta_* : M_* \rightarrow M_*{\widehat{\otimes}} M_*$, and $\theta\Delta=\operatorname{id}$ gives $\Delta_* \theta_* = \operatorname{id}$. Then, for $\omega,\tau\in M_*$ and $x\in M$, \begin{align*} \ip{x}{\theta_*(\omega*\tau)} &= \ip{\theta(x)}{\Delta_*(\omega\otimes\tau)} = \ip{(\theta\otimes\operatorname{id})(\operatorname{id}\otimes\Delta)(x)}{\omega\otimes\tau} \\ &= \ip{(\operatorname{id}\otimes\Delta)(x)}{\theta_*(\omega) \otimes \tau} = \ip{x}{\theta_*(\omega) * \tau}. \end{align*} Here we write $*$ for both the product in $M_*$, and the bimodule action of $M_*$ on $M_* {\widehat{\otimes}} M_*$. Similarly, $\theta_*(\omega * \tau) = \omega * \theta_*(\tau)$, so we see that $\theta_*$ is an $M_*$-bimodule homomorphism. The converse is simply a case of reversing the argument. \end{proof} In the following section, we shall carefully study the structure of normal completely bounded maps $M\overline\otimes M\rightarrow M$. From now on, fix such a map $\theta:M\overline\otimes M\rightarrow M$ and let $\{ (v^\alpha_{ij})_{i,j=1}^{n_\alpha} : \alpha\in\mathbb A\}$ be as in Proposition~\ref{spec_coreps}.
\begin{proposition}\label{theta_struc} We have that $\theta\Delta = \operatorname{id}$ and $\Delta\theta = (\theta\otimes\operatorname{id})(\operatorname{id}\otimes\Delta) = (\operatorname{id}\otimes\theta)(\Delta\otimes\operatorname{id})$ if and only if there exists a family $\{ X^\alpha \in \mathbb M_{n_\alpha} : \alpha\in\mathbb A\}$ such that, for $\alpha,\beta\in\mathbb A$, $1\leq i,j\leq n_\alpha$ and $1\leq k,l\leq n_\beta$, \[ \theta\big( v^\alpha_{ij} \otimes v^\beta_{kl} \big) = \delta_{\alpha\beta} X^\alpha_{jk} v^\alpha_{il}, \qquad \sum_{r=1}^{n_\alpha} X^\alpha_{rr} = 1. \] \end{proposition} \begin{proof} The ``if'' part follows as $\mc A$ generates $M$ and $\theta$ is normal. Conversely, let $x\in M$ and $\alpha\in\mathbb A$. For $1\leq i,j\leq n_\alpha$, \begin{align*} \Delta\theta\big( x \otimes v^\alpha_{ij} \big) &= (\theta\otimes\operatorname{id})(\operatorname{id}\otimes\Delta)\big( x \otimes v^\alpha_{ij} \big) = \sum_{r=1}^{n_\alpha} \theta\big( x \otimes v^\alpha_{ir} \big) \otimes v^\alpha_{rj}. \end{align*} Let $a_{ij} = \theta(x \otimes v^\alpha_{ij})$, so that $\Delta(a_{ij}) = \sum_r a_{ir} \otimes v^\alpha_{rj}$. As $\Delta$ is a $*$-homomorphism, for $1\leq k,l\leq n_\alpha$, we have that \[ \Delta\big(a_{ij} (v^\alpha_{kl})^*\big) = \sum_{r,s=1}^{n_\alpha} a_{ir} (v^\alpha_{ks})^* \otimes v^\alpha_{rj}(v^\alpha_{sl})^*. \] Applying $(\iota\otimes\varphi)$, we see that, by the calculations in Proposition~\ref{spec_coreps}, \[ \varphi\big(a_{ij} (v^\alpha_{kl})^*\big) 1 = \sum_{r,s=1}^{n_\alpha} a_{ir} (v^\alpha_{ks})^* \varphi\big(v^\alpha_{rj}(v^\alpha_{sl})^*\big) = \delta_{jl} \frac{\lambda^\alpha_j}{\operatorname{Tr}_\alpha} \sum_{r=1}^{n_\alpha} a_{ir} (v^\alpha_{kr})^*. \] As $v^\alpha$ is a unitary matrix, we see that $\sum_{k=1}^{n_\alpha} (v^\alpha_{kr})^* v^\alpha_{ks} = \delta_{rs} 1$ for $1\leq r,s\leq n_\alpha$.
Thus \[ \sum_{k=1}^{n_\alpha} \varphi\big(a_{ij} (v^\alpha_{kl})^*\big) v^\alpha_{ks} = \delta_{jl} \frac{\lambda^\alpha_j}{\operatorname{Tr}_\alpha} \sum_{r,k=1}^{n_\alpha} a_{ir} (v^\alpha_{kr})^* v^\alpha_{ks} = \delta_{jl} \frac{\lambda^\alpha_j}{\operatorname{Tr}_\alpha} a_{is}. \] It follows that \[ a_{is} = \frac{\operatorname{Tr}_\alpha}{\lambda^\alpha_j} \sum_{k=1}^{n_\alpha} \varphi\big(a_{ij} (v^\alpha_{kj})^*\big) v^\alpha_{ks} \qquad \big( \alpha\in\mathbb A, 1\leq i,j,s\leq n_\alpha \big). \] Similarly, if we set $b_{ij} = \theta(v^\alpha_{ij}\otimes x)$, then $\Delta(b_{ij}) = \sum_r v^\alpha_{ir} \otimes b_{rj}$, and we can show that \[ b_{sj} = \lambda^\alpha_i \operatorname{Tr}_\alpha \sum_{k=1}^{n_\alpha} \varphi\big( (v^\alpha_{ik})^* b_{ij} \big) v^\alpha_{sk} \qquad \big( \alpha\in\mathbb A, 1\leq i,j,s\leq n_\alpha \big). \] In particular, we see that $\theta(v^\alpha_{ij} \otimes v^\beta_{kl})$ is in the linear span of $\{ v^\alpha_{is} : 1\leq s\leq n_\alpha\}$, and in the linear span of $\{ v^\beta_{rl} : 1\leq r\leq n_\beta\}$. Hence $\theta(v^\alpha_{ij} \otimes v^\beta_{kl}) = 0$ if $\alpha\not=\beta$. If $\alpha=\beta$, then by linear independence, we see immediately that \[ \theta(v^\alpha_{ij} \otimes v^\alpha_{kl}) = X^\alpha_{jk} v^\alpha_{il}, \] for some scalar $X^\alpha_{jk}$. Finally, as $\sum_k \theta(v^\alpha_{ik}\otimes v^\alpha_{kj}) = v^\alpha_{ij}$, it follows that $\sum_k X^\alpha_{kk}=1$, as required. \end{proof} \begin{theorem} Let $(A,\Delta)$ be a compact quantum group with associated von Neumann algebra $M$. Let $\theta_*:M_*\rightarrow M_*{\widehat{\otimes}} M_*$ be a splitting morphism, and suppose further that $\theta=\theta_*^*$ is an $M$-bimodule map, in the sense that $\theta(\Delta(a)x\Delta(b)) = a \theta(x) b$ for $x\in M\overline\otimes M$ and $a,b\in M$. Then the Haar state $\varphi$ is tracial, so $(M,\Delta)$ is a Kac algebra. \end{theorem} \begin{proof} Let $\alpha\in\mathbb A$ and $1\leq i,j,k\leq n_\alpha$.
As $\theta(x\Delta(b)) = \theta(x)b$ for $x\in M\overline\otimes M$ and $b\in M$, using the notation of the last proposition, we see that \begin{equation}\label{eq:one} X^\alpha_{jk} v^\alpha_{ij} (v^\alpha_{ij})^* = \theta\big( v^\alpha_{ij} \otimes v^\alpha_{kj} \big) (v^\alpha_{ij})^* = \sum_{l=1}^{n_\alpha} \theta\big( v^\alpha_{ij}(v^\alpha_{il})^* \otimes v^\alpha_{kj} (v^\alpha_{lj})^* \big). \end{equation} Now, as $\{ v^\beta_{rs} \}$ forms a basis for the $*$-algebra $\mc A$, and as $\varphi$ picks out the trivial corepresentation $v^{\alpha_0} = 1$, by the calculations of Proposition~\ref{spec_coreps}, we see that \[ v^\alpha_{ij}(v^\alpha_{il})^* \otimes v^\alpha_{kj} (v^\alpha_{lj})^* = \delta_{jl}\frac{\lambda^\alpha_j}{\operatorname{Tr}_\alpha} 1 \otimes \delta_{kl} \frac{\lambda^\alpha_j}{\operatorname{Tr}_\alpha} 1 + \text{other terms}. \] By the structure of $\theta$ established in the last proposition, $\varphi\theta$ annihilates the other terms, as $\varphi$ kills every non-trivial matrix coefficient, and so \[ \sum_{l=1}^{n_\alpha} \varphi\theta\big( v^\alpha_{ij}(v^\alpha_{il})^* \otimes v^\alpha_{kj} (v^\alpha_{lj})^* \big) = \delta_{jk} \Big( \frac{\lambda^\alpha_j}{\operatorname{Tr}_\alpha} \Big)^2 1. \] By applying $\varphi$ to (\ref{eq:one}), we conclude that \[ X^\alpha_{jk} \frac{\lambda^\alpha_j}{\operatorname{Tr}_\alpha} = \delta_{jk} \Big( \frac{\lambda^\alpha_j}{\operatorname{Tr}_\alpha} \Big)^2 \quad\text{so that}\quad X^\alpha_{jk} = \delta_{jk} \frac{\lambda^\alpha_j}{\operatorname{Tr}_\alpha}.
\] We now repeat this argument on the right, so we find that \begin{align*} X^\alpha_{jk} (v^\alpha_{ij})^* v^\alpha_{ij} &= (v^\alpha_{ij})^* \theta\big( v^\alpha_{ij} \otimes v^\alpha_{kj} \big) = \sum_s \theta\big( (v^\alpha_{is})^* v^\alpha_{ij} \otimes (v^\alpha_{sj})^* v^\alpha_{kj} \big) \\ &= \sum_s \delta_{sj} \frac{1}{\lambda^\alpha_i \operatorname{Tr}_\alpha} \delta_{sk} \frac{1}{\lambda^\alpha_k \operatorname{Tr}_\alpha} 1 + \text{other terms}. \end{align*} Again, by applying $\varphi$, and noting as before that the other terms vanish, we see that \[ X^\alpha_{jk} \frac{1}{\lambda^\alpha_i \operatorname{Tr}_\alpha} = \delta_{jk} \frac{1}{\lambda^\alpha_i \operatorname{Tr}_\alpha} \frac{1}{\lambda^\alpha_k \operatorname{Tr}_\alpha} \quad\text{so that}\quad X^\alpha_{jk} = \delta_{jk} \frac{1}{\lambda^\alpha_k \operatorname{Tr}_\alpha}. \] We hence see that for all $\alpha$ and $1\leq k\leq n_\alpha$, we have $\lambda^\alpha_k = 1 / \lambda^\alpha_k$. As $\lambda^\alpha_k > 0$, we see that $\lambda^\alpha_k = 1$. In particular, the modular automorphism group $\sigma$ is trivial, and so $\varphi$ is tracial, as claimed. Indeed, if $\varphi$ is tracial, then comparing the two orthogonality relations in Proposition~\ref{spec_coreps}, we see that $\lambda^\alpha_j = (\lambda^\alpha_i)^{-1}$ for all $i,j$. Thus $\lambda^\alpha_i=1$ for all $i$ and $\alpha$. It follows that the automorphisms $\rho_{z,w}$ are trivial, and hence also the scaling group is trivial. So the antipode $S$ is bounded. It is now easy to verify the axioms of a compact Kac algebra; see \cite[Section~6.2]{ES}. \end{proof} We note that an argument of Soltan, \cite[Remark~A.2]{soltan}, shows that if a compact quantum group $(A,\Delta)$ has a faithful family of tracial states (that is, for non-zero $x\in A$ there is a tracial state $\phi$ with $\phi(x^*x)\not=0$) then $(M,\Delta)$ is a Kac algebra. \begin{theorem} Let $(A,\Delta)$ be a compact quantum group with associated von Neumann algebra $M$. Let $\theta_*:M_*\rightarrow M_*{\widehat{\otimes}} M_*$ be a splitting morphism.
Suppose that $\theta = \theta_*^*$ is completely positive, or that $\Delta\theta$ is a contraction. Then $(M,\Delta)$ is a Kac algebra. \end{theorem} \begin{proof} As $\theta(1)=\theta\Delta(1)=1$, if $\theta$ is positive, then $\theta$ is contractive, so $\Delta\theta$ is contractive. We have that $\Delta\theta:M\overline\otimes M\rightarrow M\overline\otimes M$ is contractive, and is a projection of $M\overline\otimes M$ onto the subalgebra $\Delta(M)$. A result of Tomiyama, \cite{tom} or \cite[Theorem~3.4, Chapter~III]{tak1}, tells us that, in particular, $\Delta\theta(\Delta(a)x\Delta(b)) = \Delta(a)\, \Delta\theta(x)\, \Delta(b) = \Delta\big( a\theta(x)b \big)$ for $a,b\in M$ and $x\in M\overline\otimes M$. As $\Delta$ is an injective homomorphism, $\theta(\Delta(a)x\Delta(b)) = a\theta(x)b$, and so the above theorem applies. \end{proof} In the following section, we shall show the converse to this result: namely that for a compact Kac algebra $(M,\Delta)$, we can choose $\theta$ to be a complete contraction; alternatively, see \cite{RX} or \cite{aristov}. It is shown in \cite{CS} that if we have a completely bounded map $\theta:M\overline\otimes M\rightarrow M$ with $\theta\Delta=\operatorname{id}$ then there exists a completely bounded map $\theta_1:M\overline\otimes M\rightarrow M$ which is an $M$-bimodule map, in the above sense. However, there is no reason that $\theta_1$ need be normal, and no reason that the other conditions on $\theta$ will carry over to $\theta_1$, so that Proposition~\ref{theta_struc} need not apply to $\theta_1$. We can even choose $\theta_1$ to be completely positive, which, were it also \emph{faithful}, would imply, by \cite[Theorem~4.2, Chapter~IX]{tak2}, the existence of a weight $\omega$ on $M\overline\otimes M$ with interesting modular properties. Again, there seems to be no reason to expect that we can choose $\theta_1$ in such a way. \section{Completely bounded maps} There is a well-known structure theory for completely bounded maps, \cite[Section~5.3]{ER}.
If $\theta:N\rightarrow (M,H)$ is a completely positive normal map between von Neumann algebras, then the usual proof of the Stinespring theorem (for example, \cite[Chapter~IV, Theorem~3.6]{tak1}) can be adapted to show that there exists a Hilbert space $K$, a \emph{normal} $*$-homomorphism $\pi:N\rightarrow\mc B(K)$ and a bounded map $U:H\rightarrow K$ such that $\theta(x) = U^* \pi(x) U$ for $x\in N$. Showing the same for completely bounded maps is not quite as simple, but the details are worked out in, for example, the proof of \cite[Theorem~2.4]{HM}. In particular, given $\theta:N\rightarrow (M,H)$ a completely contractive normal map between von Neumann algebras, there exist unital completely positive \emph{normal} maps $\phi_1,\phi_2:N \rightarrow M$ such that \[ \sigma:\mathbb M_2(N) \rightarrow \mathbb M_2(M); \qquad \begin{pmatrix} a & b \\ c & d \end{pmatrix} \mapsto \begin{pmatrix} \phi_1(a) & \theta(b^*)^* \\ \theta(c) & \phi_2(d) \end{pmatrix} \] is unital completely positive and normal. One can now follow the presentation in \cite[Theorem~5.33]{ER} or \cite{Paulsen}, essentially applying the Stinespring construction to $\sigma$. This yields a Hilbert space $K$, a normal $*$-homomorphism $\rho: \mathbb M_2(N)\rightarrow\mc B(K)$ and an isometry $U:H^2 \rightarrow K$ such that $\sigma(x) = U^*\rho(x)U$ for $x\in\mathbb M_2(N)$. Following the proof of \cite[Theorem~2.3]{HM}, there also exists a normal $*$-homomorphism $\rho':\mathbb M_2(M)' \rightarrow \rho(\mathbb M_2(N))'$ such that $\rho'(y)U = Uy$ for $y\in\mathbb M_2(M)'$. 
Define $\pi:M\overline\otimes M\rightarrow\mc B(K)$, $\pi':M'\rightarrow\mc B(K)$ and $S,T:H\rightarrow K$ by \[ \pi(x) = \rho \begin{pmatrix} x & 0 \\ 0 & x \end{pmatrix}, \quad \pi'(y) = \rho' \begin{pmatrix} y & 0 \\ 0 & y \end{pmatrix}, \quad T(\xi) = \rho\begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} U \begin{pmatrix} \xi \\ 0 \end{pmatrix}, \quad S(\xi) = U \begin{pmatrix} 0 \\ \xi \end{pmatrix}, \] for $x\in M\overline\otimes M, y\in M'$ and $\xi\in H$. So $\pi$ and $\pi'$ are normal $*$-homomorphisms and $S$ and $T$ are contractions. Then, for $x\in M\overline\otimes M$ and $\xi,\eta\in H$, \begin{align*} \big( S^*\pi(x)T\xi \big| \eta \big) &= \Big( \rho \begin{pmatrix} x & 0 \\ 0 & x \end{pmatrix} \rho\begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} U \begin{pmatrix} \xi \\ 0 \end{pmatrix} \Big| U \begin{pmatrix} 0 \\ \eta \end{pmatrix} \Big) \\ &= \Big( \sigma \begin{pmatrix} 0 & 0 \\ x & 0 \end{pmatrix} \begin{pmatrix} \xi \\ 0 \end{pmatrix} \Big| \begin{pmatrix} 0 \\ \eta \end{pmatrix} \Big) = \big( \theta(x) \xi \big| \eta \big). \end{align*} So $\theta(x) = S^*\pi(x)T$. Then also, for $y\in M'$ and $\xi\in H$, \[ T y \xi = \rho\begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} U \begin{pmatrix} y & 0 \\ 0 & y \end{pmatrix} \begin{pmatrix} \xi \\ 0 \end{pmatrix} = \rho\begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} \pi'(y) U \begin{pmatrix} \xi \\ 0 \end{pmatrix} = \pi'(y) T \xi. \] So $T y = \pi'(y) T$ and similarly $S y = \pi'(y) S$, for $y\in M'$. Let $M$ be a von Neumann algebra with a normal faithful state $\varphi$, leading to GNS construction $(H,\Lambda)$ (here we identify $M$ with a subalgebra of $\mc B(H)$). We can apply Tomita-Takesaki theory to find an anti-linear isometry $J:H\rightarrow H$ such that $M' = JMJ$ (see \cite{tak2}). Let $(\sigma_t)_{t\in{\mathbb R}}$ be the modular automorphism group, and let $\mc A\subseteq M$ be a $*$-subalgebra of elements analytic for $(\sigma_t)$ such that $\sigma_z(a)\in\mc A$ for $z\in{\mathbb C}$ and $a\in\mc A$. 
For $a\in \mc A$, write $a' = J\sigma_{i/2}(a)^* J$. Then \[ a' \Lambda(1) = J\sigma_{i/2}(a)^* J \Lambda(1) = \Lambda(a) \qquad (a\in\mc A). \] \begin{proposition} Let $M$ be a von Neumann algebra as above, and suppose that $\mc A''=M$. Let $N$ be a von Neumann algebra. If $\theta:N\rightarrow M$ is a completely bounded normal map, then we can find a Hilbert space $K$, normal $*$-homomorphisms $\pi:N\rightarrow\mc B(K)$ and $\pi':M'\rightarrow \pi(N)'$, and $\xi_0,\xi_1\in K$ such that the maps \[ \Lambda(a)\mapsto \pi'(a')\xi_0, \qquad \Lambda(a)\mapsto \pi'(a')\xi_1 \qquad (a\in\mc A), \] are bounded, and \begin{equation}\label{eq:two} \varphi(\theta(x)a) = \big( \pi(x) \pi'(a') \xi_0 \big| \xi_1 \big) \qquad (x\in N, a\in\mc A). \end{equation} Conversely, given such $K,\pi,\pi',\xi_0$ and $\xi_1$, there exists a completely bounded normal map $\theta:N\rightarrow M$ satisfying (\ref{eq:two}). Furthermore, $\theta$ is completely positive if and only if we can choose $\xi_0=\xi_1$. \end{proposition} \begin{proof} As $\mc A''=M$, it follows that $\mc A$ is strongly dense in $M$ and hence that $\Lambda(\mc A)$ is norm dense in $H$. If $\theta$ is of the form claimed, then the map $T:\Lambda(\mc A)\rightarrow K; \Lambda(a)\mapsto \pi'(a')\xi_0$ is bounded and so extends to a bounded linear map $T:H\rightarrow K$. Similarly, there exists $S\in\mc B(H,K)$ with $S\Lambda(a) = \pi'(a')\xi_1$. Then, for $a,b\in\mc A$ and $x\in N$, \begin{align*} \big( S^*\pi(x) T \Lambda(a) \big| \Lambda(b) \big) &= \big( \pi(x) \pi'(a') \xi_0 \big| \pi'(b') \xi_1 \big) = \big( \pi(x) \pi'((b')^*a') \xi_0 \big| \xi_1 \big) \\ &= \varphi\big( \theta(x) a\sigma_{-i}(b^*) \big), \end{align*} as $(b')^* = (J\sigma_{i/2}(b)^*J)^* = J \sigma_{i/2}(b) J = J \sigma_{i/2}(c)^* J = c'$ if $c = \sigma_{-i}(b^*)$, and $d\mapsto d'$ is an anti-homomorphism.
By the KMS condition, we see that \[ \big( S^*\pi(x) T \Lambda(a) \big| \Lambda(b) \big) = \varphi\big( b^* \theta(x) a \big) = \big( \theta(x) \Lambda(a) \big| \Lambda(b) \big). \] Hence $\theta$ is completely bounded, as $\theta(x) = S^*\pi(x)T$ for $x\in N$. If $\xi_0=\xi_1$ then $S=T$ and $\theta$ is completely positive. Conversely, given $\theta$, from the discussion above, we can find normal $*$-homomorphisms $\pi:N\rightarrow \mc B(K)$ and $\pi':M'\rightarrow\pi(N)'$, and bounded maps $S,T:H\rightarrow K$ with $\theta(x) = S^*\pi(x)T$ for $x\in N$ and $Sy=\pi'(y)S, Ty=\pi'(y)T$ for $y\in M'$. Thus, for $x\in N$ and $a\in\mc A$, \[ \varphi(\theta(x)a) = \big( S^*\pi(x)T\Lambda(a) \big| \Lambda(1) \big) = \big( S^* \pi(x) \pi'(a') T \Lambda(1) \big| \Lambda(1) \big), \] so the proof is complete by setting $\xi_0=T\Lambda(1)$ and $\xi_1=S\Lambda(1)$. If $\theta$ is completely positive, then we can set $S=T$ and hence $\xi_0=\xi_1$. \end{proof} Notice that by the KMS condition, the calculations above also show that if $x,y\in M$ are such that $\varphi(xa) = \varphi(ya)$ for all $a\in\mc A$, then $x=y$. The following is proved using different methods in \cite{RX} and \cite{aristov}. Our proof makes explicit how the traciality of $\varphi$, for a Kac algebra, is central to the argument, and indicates that understanding the modular properties of $\varphi$ for a general compact quantum group will be important in finding a completely bounded analogue of the following. \begin{theorem} Let $(M,\Delta)$ be a compact Kac algebra. Then there exists a splitting morphism $\theta_*:M_*\rightarrow M_* {\widehat{\otimes}} M_*$ such that $\theta = \theta_*^*$ is completely positive. \end{theorem} \begin{proof} We have that $\varphi$ is tracial.
Let $\pi:M\overline\otimes M\rightarrow M\overline\otimes M \subseteq \mc B(H\otimes H)$ be the identity representation, let $\xi_0=\xi_1 =\Lambda(1)\otimes\Lambda(1)$, and define $\pi'$ by \[ \pi'(y) = (J\otimes J) \Delta(JyJ) (J\otimes J) \qquad (y\in M'). \] This formula is derived from the natural coproduct on $M'$, see \cite[Section~4]{kus2}. Let $\mc A$ be the Hopf $*$-algebra associated to $(M,\Delta)$, as before. Then we can apply the above proposition to see that there exists a completely positive normal map $\theta:M\overline\otimes M\rightarrow M$ such that \[ \varphi(\theta(x)a) = \big( x(J\otimes J)\Delta(a^*)(J\otimes J) \Lambda(1)\otimes\Lambda(1) \big| \Lambda(1)\otimes\Lambda(1) \big), \] where we use that $\sigma$ is trivial, as $\varphi$ is tracial. Then $J\Lambda(a) = \Lambda(a^*)$ for $a\in \mc A$, and so, as $\Delta(a^*)\in\mc A\otimes\mc A$, \begin{align*} \varphi(\theta(x)a) &= \big( x(J\otimes J)\Delta(a^*) \Lambda(1)\otimes\Lambda(1) \big| \Lambda(1)\otimes\Lambda(1) \big) \\ &= \big( x(\Lambda\otimes\Lambda)\Delta(a) \big| \Lambda(1)\otimes\Lambda(1) \big) = (\varphi\otimes\varphi)\big( x\Delta(a) \big). \end{align*} In particular, \[ \varphi(\theta\Delta(x)a) = (\varphi\otimes\varphi)\big( \Delta(xa) \big) = \varphi(xa), \] so by the observation above, $\theta\Delta=\operatorname{id}$. Indeed, one may calculate (thinking about Proposition~\ref{theta_struc}) that \[ \theta(v^\alpha_{ij} \otimes v^\alpha_{kl}) = \frac{1}{n_\alpha} \delta_{jk} v^\alpha_{il}, \] using that $\lambda^\alpha_i=1$ for all $\alpha$ and $i$. Thus also $\Delta\theta=(\theta\otimes\operatorname{id})(\operatorname{id}\otimes\Delta) =(\operatorname{id}\otimes\theta)(\Delta\otimes\operatorname{id})$, and so $\theta_*$, the preadjoint to $\theta$, is a splitting morphism, as required.
\end{proof} If $\varphi$ is not tracial, then the above proof fails, as for $a\in\mc A$, \[ \Delta(\sigma_{i/2}(a)^*) = \big((\tau_{i/2}\otimes\sigma_{i/2})\Delta(a)\big)^*, \] and hence, as $J\Lambda(b) = \Lambda(\sigma_{i/2}(b)^*)$ for $b\in\mc A$, \[ (J\otimes J)\Delta(\sigma_{i/2}(a)^*)(J\otimes J)(\Lambda(1)\otimes\Lambda(1)) = (\Lambda\otimes\Lambda)\big((\tau_{i/2}\sigma_{-i/2}\otimes\operatorname{id})\Delta(a)\big). \] If we continue to form $\theta$ as above, then we find that \[ \theta\big( v^\alpha_{ij} \otimes v^\beta_{kl}\big) = \delta_{\alpha\beta} v^\alpha_{il} \frac{\delta_{jk}}{\operatorname{Tr}_\alpha} \qquad (\alpha,\beta\in\mathbb A, 1\leq i,j \leq n_\alpha, 1\leq k,l \leq n_\beta ). \] This is nearly of the correct form, but we find that \[ \theta\Delta\big( v^\alpha_{ij} \big) = \frac{n_\alpha}{\operatorname{Tr}_\alpha} v^\alpha_{ij} \qquad (\alpha\in\mathbb A, 1\leq i,j \leq n_\alpha ). \] Notice that $n_\alpha = \sum_i (\lambda^\alpha_i)^{1/2} (\lambda^\alpha_i)^{-1/2} \leq \big(\sum_i \lambda^\alpha_i \big)^{1/2} \big(\sum_i (\lambda^\alpha_i)^{-1} \big)^{1/2} = \operatorname{Tr}_\alpha$, by the Cauchy--Schwarz inequality, with equality if and only if the $\lambda^\alpha_i$ are all equal, and hence, as $\sum_i \lambda^\alpha_i = \sum_i (\lambda^\alpha_i)^{-1}$, if and only if $\lambda^\alpha_i=1$ for all $i$. It follows that $\theta\Delta=\operatorname{id}$ if and only if $\lambda^\alpha_i=1$ for all $\alpha,i$, that is, again, if and only if $\varphi$ is tracial.
\section{Introduction} In~\cite{Shannon48} Shannon presented his celebrated result on the asymptotic optimality of separable source and channel coding. However, for finite block length systems, the importance and superior performance of joint source-channel coders has been well recognized and is an area of active research~(see for example \cite{Gastpar}). Specific research thrusts have included investigating source coders that incorporate channel information in the design, channel coders that provide unequal error protection to various source bits, and iterative source-channel decoders. An important issue in joint source-channel coding is the tradeoff between source and channel coding rates. For a fixed source vector dimension and channel capacity, there is a tradeoff between the source and channel coding rates. A high rate channel code implies more bits for the source coder, which results in a high quality representation at the source but has a higher probability of being received in error. Similarly, a low rate channel code results in fewer bits for the source coder; consequently, the representation is of lower quality at the source but there is a higher probability of being received without error at the receiver. This tradeoff has been quantified for binary symmetric channels~(BSC)~\cite{Hochwald97} and Gaussian channels~\cite{Hochwald98}. This paper addresses the problem of optimal allocation of rate between a source encoder and a channel encoder for transmission over erasure channels. The system under investigation is a concatenation of a vector quantizer with a channel coder, and the objective is to minimize the end-to-end distortion. Upper and lower bounds are constructed on the channel coding rate that minimizes the end-to-end distortion. The upper bound on the channel coding rate is derived using the sphere packing and straight line exponents as a bound on the performance of the channel code.
Similarly, the lower bound on the rate is derived based on the expurgated error exponent for the erasure channel. The proposed bounds suggest that the optimal channel coding rate is substantially smaller than the channel capacity. Asymptotically, as the erasure probability $\epsilon \rightarrow 0$, the optimal channel coding rate approaches 1. The resulting upper and lower bounds are then adapted to obtain the optimal coding rate for packet erasure channels. \par The closed-form approximations for the optimal coding rate are derived under the assumption of asymptotically small erasure probabilities. Also, a high-rate quantization regime is considered; hence, the distortion achieved asymptotically for large $k$ equals the rate-distortion bound. The proposed bounds are independent of the source distribution for sources with fixed dimensionality and finite support. \par The rest of the paper is organized as follows. In Section~2, we define the system and present the notation and assumptions made in this paper. In Section~3 we evaluate the upper and lower bounds on the rate for erasure channels using the expurgated, sphere packing and straight line bounds. In Section~4 we present some numerical results and conclude in Section~5. \begin{figure*}[tbph] \begin{center} \mbox{\epsfbox{Figure1.ps}} \end{center} \caption{System Block Diagram} \label{Fig. 1.} \end{figure*} A model of the communication system under investigation is given in Fig. 1. Consider a random vector $\textit{X} \in {\Re}^{k}$ that has a probability density function $f$ over support set $A$, a closed bounded subset of ${\Re}^{k}$ with nonempty interior. Let $X$ be quantized by a vector quantizer $Q: {\Re}^{k}\rightarrow C$ where $C=\left[\textbf{y}_{1}, \textbf{y}_{2}, \ldots,\textbf{y}_{M}\right]$ is the codebook of the vector quantizer with $m=\log M$ bits per source symbol. All logarithms are to base 2.
Consequently, the quantizer can be modeled as~\cite{GershoGray}, \begin{equation} Q\left(\textbf x\right)=\sum_{i=1}^{M}\textbf{y}_{i}\textbf{1}_{S_{i}}\left(\textbf x\right) \end{equation} where $\lbrace{S_{i}\rbrace}_{i=1}^{M}$ is a partition of ${\Re}^{k}$ into disjoint regions, each of which is represented by $\textbf{y}_{i}$, and $\textbf{1}_{S_{i}}\left(\cdot\right)$ is the indicator function, which equals 1 if $\textbf{x}$ lies in the $i^{th}$ cell of the partition. The average distortion using this quantizer is given by, \begin{eqnarray} D_{m}\left(Q\right) &=& \sum_{i=1}^{M}\int_{S_i}||\textbf{x}-\textbf{y}_{i}||^{p}f\left(x\right)dx \\ &=&2^{-pRr + O\left(1\right)}, \label{eqn:dist_quant} \end{eqnarray} where~(\ref{eqn:dist_quant}) follows from Zador's distortion formula~\cite{Zador82} and $p$ is the power of the distortion measure. Traditional quantization theory has focused on computing optimal quantizers that achieve the infimum of $D_{m}\left(Q\right)$. The quantizer design has been expanded to include the effect of channel errors; however, the problem is extremely challenging and few analytical results are known in such cases. \newline In our system, we consider that the $m$-bit source codewords are first randomly permuted using a mapping $\pi$ and then passed to a channel encoder of rate $r=m/n$ before transmission over a binary erasure channel with erasure probability $\epsilon$. For simplicity, we have included the index assignment $\pi$ as part of the source encoder. The results are independent of the index assignment. The channel encoder generates a unique $n$-bit channel codeword for each of the $m$-bit source codewords. The added redundancy $n-m$ is used to protect the source codeword from channel impairments. The transmission rate per source component is $R = n/k$, and the quantization rate is $R_{s}=m/k=Rr$.
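As an illustrative numerical example (the parameter values here are our own, chosen only to fix ideas): if a source vector of dimension $k=4$ is quantized with $m=8$ bits and the channel encoder has rate $r=m/n=8/12=2/3$, then $n=12$ channel bits are transmitted per source vector, the transmission rate is $R=n/k=3$ bits per source component, and the quantization rate is $R_{s}=m/k=Rr=2$ bits per source component.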
Following the notation used in~\cite{Hochwald97}, we denote $a_{i}=O\left(b_{i}\right)$ if $|a_{i}|/b_{i} \leq c$ for some $c > 0$ and $i$ sufficiently large. We denote $a_{i}=\Omega\left(b_{i}\right)$ if $|a_{i}|/b_{i} \geq c$ for some $c>0$ and sufficiently large $i$. Finally, $a_{i}=o\left(b_{i}\right)$ if $\lim_{i\rightarrow\infty}a_{i}/b_{i} = 0$. \section{Binary Erasure Channel} We now consider obtaining bounds on the coding rate for a binary erasure channel (BEC). The end-to-end distortion for the system in Fig.~1 is readily given by~\cite{Hochwald97} \begin{equation} D_{R}\left(Q,\epsilon,\pi\right) = \sum_{i,j=1}^{M}q\left(j|i\right)\int_{S_i}||\textbf{x}-\textbf{y}_{j}||^{p}f\left(x\right)dx \end{equation} where $\epsilon$ is the bit erasure probability and $q\left(j|i\right)$ is the conditional probability that the channel decoder decides in favor of the $j^{th}$ channel codeword when the $i^{th}$ codeword was transmitted. \subsection{Lower bound on channel coding rate} \par The lower bound on the channel coding rate is obtained by upper bounding the distortion at the decoder. Assuming small bit erasure probability and following \cite{Hochwald97}, the total distortion may be upper bounded as, \begin{equation} D_{R}\left(Q,\epsilon,\pi\right) \leq D_{m}\left(Q\right) + O\left(1\right)\max_{1\leq i \leq M}P_{e|i} \label{eqn:total_dist_lb} \end{equation} In (\ref{eqn:total_dist_lb}), the total distortion is the sum of the distortion due to the vector quantizer, $D_{m}\left(Q\right)$, and the distortion due to errors in transmission. The positive $O(1)$ term is due to the fact that $f$ has support $A$ and $y_{j}$ is contained in $A$ for all $j$ \cite{Hochwald97}.
The problem of interest is posed as follows: ``Given a binary erasure channel, a transmission rate $R$, and a source $\textit{X} \in {\Re}^{k}$, find the optimal rate $r$ that minimizes the distortion $D_{R}\left(Q,\epsilon,\pi\right)$.'' For an arbitrary binary discrete memoryless channel, Shannon's channel coding theorem guarantees that for channel code rates $r$ below capacity, the probability of error is upper bounded by~\cite{Gallager68} \begin{equation} \max_{1\leq i \leq M}{P}_{e|i} \leq {2}^{-nE_{ex}(r) + o(r)}, \end{equation} where $E_{ex}\left(r\right)$ is the expurgated error exponent, a decreasing function of the rate. The dependence of $P_{e|i}$ on $n$ indicates that the decoding error probability can be decreased by increasing the length of the channel codewords. The expurgated error exponent is given by \cite{Gallager68}, \begin{equation} {E}_{ex}\left(r\right) = \sup_{\rho \geq 1}\left[-\rho r + \max_{\textbf{q}} {E}_{x}\left(\rho, \textbf{q}\right)\right] \end{equation} where \begin{gather} {E}_{x}\left(\rho, \textbf{q} \right) = \nonumber \\ -\rho\log\sum_{k=0}^{K-1}\sum_{i=0}^{K-1}q\left(k\right)q\left(i\right)\left[\sum_{j=0}^{J-1}\sqrt{P\left(j|k\right)P\left(j|i\right)}\right]^{1/\rho}. \label{eqn1} \end{gather} Note that ${\mathbf q} = [q(0)\: q(1)\: \ldots q(K-1)] $ represents the probability of the input channel alphabets and $P(j|i)$ is the probability of receiving output symbol $j$ when input symbol $i$ is transmitted. In $\left(\ref{eqn1}\right)$, $K$ and $J$ represent, respectively, the cardinality of the input and output alphabets of the channel. For a binary erasure channel, $J=3$ and $K=2$.
For the binary erasure channel, the transition probability matrix is given in Table~1. \begin{table}[h] \begin{center} \begin{tabular}{|c c|c|c|c|} \hline & & \multicolumn{3}{|c|}{Output, $j$}\\ \hline & & 1 & e & 0\\ \hline \multirow{2}{*}{Input, $i$} & 1 & $1-\epsilon$ & $\epsilon$ & 0\\ & 0 & 0 & $\epsilon$ & $1-\epsilon$ \\ \hline \end{tabular} \end{center} \caption{Transition Probability Matrix with elements $P(j|i)$ for BEC.} \end{table} For a symmetric channel, the \textbf{q} that maximizes the error exponent is the uniform probability assignment~\cite{Jelinek68}. Thus for the binary erasure channel, $\textbf{q} =\left[q\left(0\right)\: q\left(1\right)\right]=\left[0.5\: 0.5\right]$. Substituting the upper bound for the probability of error into~(\ref{eqn:total_dist_lb}), we obtain the end-to-end distortion as \begin{equation} D_{R}\left(Q,\epsilon,\pi\right) \leq 2^{-pRr + O\left(1\right)} +{2}^{-kRE_{ex}(r) + o(r)} \label{eqn:dist_lb2} \end{equation} Consider the case of large $R$: to ensure that neither of the two terms on the right hand side of~(\ref{eqn:dist_lb2}) dominates the distortion upper bound, we choose the exponents of the two terms to be within $o(1)$ of each other \cite{Hochwald97}, \cite{ZegerManzella}. Hence, we set \begin{equation} E_{ex}\left(r\right) = \frac{p}{k}r_{ex} + o\left(1\right), \label{eqn:linear_reln_lb} \end{equation} to obtain the channel coding rate that optimizes the end-to-end distortion at the decoder. This optimal rate is characterized by Theorem~1, which is similar to Theorem~1 in~\cite{Hochwald97}.
\textit{Theorem 1:} The upper bound on the minimum $p^{th}$ power distortion, averaged over all index assignments of a $k$-dimensional cascaded good vector-quantizer and channel encoder that transmits over a binary erasure channel with bit erasure probability $\epsilon$, is achieved with a channel code rate $r_{ex}$ satisfying \begin{gather} r_{ex} = 1 - 2^{-c_\epsilon}\left(\frac{\log\log\left(1/\epsilon\right) + \log e + c_\epsilon}{\log\left(1/\epsilon\right)}\right) \nonumber \\+ O\left(\frac{\log\log\left(1/\epsilon\right)}{\log^{2}\left(1/\epsilon\right)}\right) + o\left(1\right), \label{eqn:optimal_rate_lb} \end{gather} where $c_\epsilon$ satisfies \begin{equation} \frac{p}{k}2^{c_\epsilon} - \frac{\left(p/k\right)\left(\log\log\left(1/\epsilon\right)+\log e + c_{\epsilon}\right) -2^{-c_\epsilon}}{\log\left(1/\epsilon\right)} - 1 = 0. \end{equation} \textit{Proof:} \par For a BEC, evaluating~(\ref{eqn1}), we obtain \begin{equation} \max_\textbf{q} {E}_{x}\left(\rho, \textbf{q}\right) = \rho\left[1 - \log\left(1 + \epsilon ^{1/\rho}\right)\right] \end{equation} and thus the expurgated error exponent becomes \begin{equation} E_{ex}\left(r\right) = \sup_{\rho \geq 1}\left\{ \rho\left[1-r- \log\left(1 + {\epsilon}^{1/\rho}\right)\right]\right\} \label{eqn:expurgated_exponent} \end{equation} The $\rho$ which maximizes the error exponent and also satisfies~(\ref{eqn:linear_reln_lb}) is given by \begin{equation} \rho = \frac{\log\left(1/\epsilon\right)}{\log\log\left(1/\epsilon\right) + c_\epsilon} \end{equation} Substituting for $\rho$ and $c_{\epsilon}$ into~(\ref{eqn:linear_reln_lb}), we obtain the optimal rate~(\ref{eqn:optimal_rate_lb}) and hence the theorem is proved. \hfill $\square$ Note that the expression for the optimal rate based on the expurgated error exponent is similar to that for the BSC case \cite{Hochwald97}, the difference being the argument of the $\log$ term. Appendix~1 in~\cite{Hochwald97} provides details on the derivation of $\rho$ and $c_{\epsilon}$.
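As a numerical sanity check of the proof, the following Python sketch (a crude grid search and bisection; the values $p=2$ and $k=4$ are illustrative assumptions) verifies that the general expurgated expression at the uniform input assignment reproduces the BEC closed form above, and solves $E_{ex}(r)=(p/k)r$ for the optimal rate:

```python
import math

def E_x_uniform(rho, eps):
    # General expurgated expression E_x(rho, q) for the BEC at the uniform
    # input assignment q = [1/2, 1/2]; transition rows are
    # input 0 -> (1 - eps, eps, 0) and input 1 -> (0, eps, 1 - eps).
    P = [[1 - eps, eps, 0.0], [0.0, eps, 1 - eps]]
    s = sum(0.25 * sum(math.sqrt(P[k][j] * P[i][j]) for j in range(3)) ** (1.0 / rho)
            for k in range(2) for i in range(2))
    return -rho * math.log2(s)

def E_ex(r, eps):
    # BEC expurgated exponent sup_{rho >= 1} rho*[1 - r - log2(1 + eps^(1/rho))]
    # (the closed form of the proof), approximated by a grid search over rho.
    return max(rho * (1 - r - math.log2(1 + eps ** (1.0 / rho)))
               for rho in (1 + 0.1 * n for n in range(2000)))

def r_ex(eps, p=2, k=4):
    # With the O(1)/o(1) terms dropped, the optimal rate solves E_ex(r) = (p/k) r;
    # the left side decreases in r while the right side increases, so bisection
    # locates the unique crossing.
    lo, hi = 0.0, 1.0
    for _ in range(30):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if E_ex(mid, eps) > (p / k) * mid else (lo, mid)
    return 0.5 * (lo + hi)

# The general expression reproduces the BEC closed form:
print(abs(E_x_uniform(5.0, 1e-3) - 5.0 * (1 - math.log2(1 + 1e-3 ** 0.2))))  # ~ 0
rates = {eps: r_ex(eps) for eps in (1e-1, 1e-2, 1e-3)}
print(rates)  # the optimal rate increases toward 1 as eps -> 0
```

The monotone behaviour of the computed rates in $\epsilon$ matches the asymptotic statement that the optimal rate approaches 1 as $\epsilon\rightarrow 0$.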
A further simplification in the expression for the rate can be obtained by neglecting the $O(1)$ and $o(r)$ terms and equating~(\ref{eqn:expurgated_exponent}) to the exponent of the source coding distortion, yielding \begin{equation} r_{ex}=\frac{\rho}{\frac{p}{k} + \rho}\left[1 - \log\left(1 + \epsilon^{1/\rho}\right)\right] \end{equation} Numerical values of $r_{ex}$ are given in Fig.~2 and are explained in Section~IV. \subsection{Upper bound on channel coding rate} Following the analysis in~\cite{Hochwald97}, for large $R$ and small bit erasure probability, the average distortion minimized over all channel code rates for the binary erasure channel can be lower bounded as \begin{gather} D_{R}\left(Q,\epsilon,\pi\right)\geq D_{m}\left(Q\right)\left(1-P_{e}\right) + \Omega\left(1\right)\frac{1}{M}\sum_{k=1}^{M}P_{e|k} \nonumber \\=2^{-pRr + O\left(1\right)}\left(1-P_{e}\right) + \Omega\left(1\right)P_{e} \end{gather} where $P_e$ is the probability of a decoding error. A lower bound on this probability of error is given by, \begin{equation} {P}_{e} \geq {2}^{-nE_{sl}(r) + o(n)} = 2^{-kRE_{sl}\left(r\right) + o\left(R\right)} \end{equation} where $E_{sl}$ is the straight line exponent. The straight line exponent $E_{sl}\left(r\right)$ is a linear function of $r$ which is tangent to the sphere packing exponent $E_{sp}\left(r\right)$ and also satisfies $E_{sl}\left(0\right) = E_{ex}\left(0\right)$.
The sphere packing exponent \cite{Gallager68} is given by, \begin{equation} {E}_{sp}\left(r\right) = \sup_{\rho \geq 0}\left[-\rho r + \max_{\textbf{q}} {E}_{o}\left(\rho, \textbf{q}\right)\right] \end{equation} where \begin{equation} \max_\textbf{q} {E}_{o}\left(\rho, \textbf{q}\right) = -\log\sum_{j=0}^{J-1}\left[\sum_{k=0}^{K-1}q(k)P(j|k)^{1/\left(1+\rho\right)}\right]^{1+\rho} \end{equation} The straight line exponent can be written as, \begin{equation} E_{sl}\left(r_{sl}\right) = E_{ex}\left(0\right) + r_{sl}\frac{\left[E_{sp}\left(r'\right) - E_{ex}\left(0\right)\right]}{r'} \label{eqn:Esl} \end{equation} where $r'$ is the rate at which the straight line exponent meets the sphere packing exponent tangentially. The straight line exponent has also been characterized in~\cite{McEliece} for a binary erasure channel. The end-to-end distortion is thus bounded as \begin{equation} D_{R}\left(Q,\epsilon,\pi\right)\geq 2^{-pRr + O(1)} + 2^{-kRE_{sl}(r) + o(R)} \label{eqn:total_dist_ub1} \end{equation} The channel coding rate that minimizes this bound is now characterized in Theorem~2. \textit{Theorem 2:} An upper bound on the channel code rate $r$ that minimizes the $p^{th}$ power distortion averaged over all random index assignments of a $k$-dimensional cascaded good vector quantizer for a binary erasure channel with small effective bit erasure probability $\epsilon$ and large $R$ is given by \begin{equation} r_{sl} = \frac{E_{ex}\left(0\right)}{\frac{p}{k} - \frac{E_{sp}\left(r'\right) - E_{ex}\left(0\right)}{r'}} \label{eqn:optimal_rate_ub} \end{equation} \textit{Proof:} As in the earlier case, for large $R$, to prevent either of the terms in the distortion bound~(\ref{eqn:total_dist_ub1}) from dominating the other, we set the straight line exponent to be within $o\left(1\right)$ of the exponent of the noiseless-optimal distortion.
Thus, \begin{equation} E_{sl}\left(r\right) = \frac{p}{k}r_{sl} + o\left(1\right) \label{eqn:linear_reln_ub2} \end{equation} Substituting~(\ref{eqn:Esl}) in~(\ref{eqn:linear_reln_ub2}), the theorem is proved. \hfill $\square$ Note that to completely characterize $r_{sl}$ we need to explicitly evaluate the sphere packing exponent $E_{sp}(r)$. It is easily seen that a uniform probability assignment for the input states to the channel $q\left(\cdot\right)$ maximizes $E_{o}\left(\rho, \textbf{q}\right)$ and thus $E_{sp}(r)$ can be evaluated as \begin{equation} E_{sp}\left(r\right) = \sup_{\rho\geq0}\left\{\rho\left(1-r\right)-\log\left[\left(1-\epsilon\right)+\epsilon2^{\rho}\right]\right\} \label{eqn:Esp} \end{equation} Note that $(\ref{eqn:Esp})$ is a concave function of $\rho$ and hence the supremum can be replaced by the max operator. The $\rho$ which maximizes~(\ref{eqn:Esp}) satisfies \begin{equation} r=\frac{\left(1-\epsilon\right)}{\left(1-\epsilon\right) + 2^{\rho}\epsilon} \end{equation} We can use this relation between the rate and $\rho$ to express the sphere packing exponent in terms of the channel encoding rate for a given erasure channel as \begin{equation} E_{sp}\left(r\right) = r\log r + \left(1-r\right)\log\left(1-r\right) -r\log\left(\frac{1-\epsilon}{\epsilon}\right) -\log\epsilon \label{eqn:Esp_r} \end{equation} At $r'$, the slope of the sphere packing exponent equals the slope of the straight line exponent. Thus, \begin{equation} \frac{\partial E_{sp}}{\partial r}\mid_{r=r'}=\frac{E_{sp}\left(r'\right)-E_{ex}\left(0\right) }{r'} \label{slope} \end{equation} Differentiating $(\ref{eqn:Esp_r})$ and substituting in~(\ref{slope}), we get \begin{equation} r'=1 - 2^{E_{ex}\left(0\right)-\log\left(1/\epsilon\right)} \label{eqn:r_prime} \end{equation} It turns out that $E_{sp}(r')$ is nearly 0 for small values of $\epsilon$. \section{Numerical Results} The bounds derived above for the erasure channel can be easily extended to the case of packet erasures.
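The sphere-packing closed form and the rate bound of Theorem~2 can be checked numerically. The following Python sketch (grid search, the values $p=2$ and $k=4$, and the BEC zero-rate value $E_{ex}(0)=\tfrac{1}{2}\log(1/\epsilon)$, obtained as the $\rho\rightarrow\infty$ limit of the expurgated closed form, are all assumptions of the sketch) compares the supremum form of $E_{sp}$ with its parametric closed form and evaluates $r_{sl}$:

```python
import math

def E_sp_sup(r, eps):
    # Sphere-packing exponent: sup over rho >= 0 of
    # rho*(1-r) - log2((1-eps) + eps*2^rho), by grid search.
    return max(rho * (1 - r) - math.log2((1 - eps) + eps * 2 ** rho)
               for rho in (0.001 * n for n in range(40000)))

def E_sp_closed(r, eps):
    # Parametric closed form of the sphere-packing exponent in terms of r.
    return (r * math.log2(r) + (1 - r) * math.log2(1 - r)
            - r * math.log2((1 - eps) / eps) - math.log2(eps))

def r_sl(eps, p=2, k=4):
    # Theorem 2.  E_ex(0) = (1/2)*log2(1/eps) is the BEC zero-rate expurgated
    # exponent (assumed here as the rho -> infinity limit of the closed form);
    # with it, r' = 1 - 2^{E_ex(0) - log2(1/eps)} = 1 - sqrt(eps).
    E_ex0 = 0.5 * math.log2(1 / eps)
    r_p = 1 - 2 ** (E_ex0 - math.log2(1 / eps))
    slope = (E_sp_closed(r_p, eps) - E_ex0) / r_p
    return E_ex0 / (p / k - slope)

eps = 1e-3
print(abs(E_sp_sup(0.5, eps) - E_sp_closed(0.5, eps)))  # ~ 0: the forms agree
print(r_sl(eps))
```

Consistent with the remark above, $E_{sp}(r')$ evaluated in this sketch is indeed small for small $\epsilon$, and $r_{sl}$ lies strictly between 0 and 1.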
We use a simplified model for the packet erasure channel and assume that a packet erasure occurs if any of the bits within the packet suffers an erasure. Although this assumption simplifies the packet erasure channel model, it is useful in obtaining closed form bounds on the coding rate over such channels. For a packet of size~$P$ bits, the probability of a packet erasure $\delta$ is given by $\delta = 1 - \left(1 - \epsilon\right)^{P}$, where as before $\epsilon$ denotes the probability of bit erasure. The error exponent for a binary erasure channel with erasure probability $\epsilon$ and a $2^{P}$-ary erasure channel with erasure probability $\delta$ is the same. Hence, given the packet erasure probability $\delta$, we consider an equivalent binary erasure channel with bit erasure probability $\epsilon = 1 - \left(1 - \delta\right)^{1/P}$ and find the bounds on the rate and distortion for the corresponding BEC. The upper and lower bounds on the channel coding rate as a function of the erasure probability for $k=4$ and the squared distortion measure are plotted in Fig.~2 for the packet sizes $P=1$, 10 and 100. It is observed that for a given packet size, as the erasure probability increases, the channel coding rate decreases, indicating that more bits need to be invested in channel coding to combat a hostile channel. Further, for a given packet erasure probability, as the packet size increases, the channel coding rate increases, implying that more bits can be allocated to source coding with larger packet size. Fig.~3 offers a different perspective on the results. From the bounds on the channel coding rate, we can get the bounds on the distortion due to channel coding. By virtue of our optimal joint source-channel coding criterion, the total distortion will be approximately twice the distortion due to channel coding. Hence, given an end-to-end limit on the distortion, we can obtain from Fig.~3 the minimum packet length to be chosen.
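The packet-to-bit conversion used above can be sketched directly (Python; the example values of $\delta$ and $P$ are illustrative only):

```python
def packet_to_bit_erasure(delta, P):
    # Equivalent BEC bit-erasure probability for a packet erasure channel:
    # a P-bit packet is erased iff any of its bits is erased, so
    # delta = 1 - (1 - eps)**P, which inverts to eps = 1 - (1 - delta)**(1/P).
    return 1 - (1 - delta) ** (1.0 / P)

delta = 1e-3
for P in (1, 10, 100):
    # For small delta, eps shrinks roughly as delta / P.
    print(P, packet_to_bit_erasure(delta, P))
```

Since the equivalent $\epsilon$ decreases with $P$ at fixed $\delta$, the bounds above allocate a higher channel coding rate for larger packets, consistent with Fig.~2.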
The squared distortion metric with $k=4$, $R=10$ and packet erasure probability $\delta=10^{-3}$ was chosen. The $o\left(r\right)$ and $o\left(R\right)$ terms were neglected in the terms for distortion due to noisy channel decoding in $(\ref{eqn:dist_lb2})$ and $(\ref{eqn:total_dist_ub1})$. The asymptotic nature of the curve indicates that a large packet size is not required for packet erasure channels with small erasure probabilities. \begin{figure}[tbph] \begin{center} \epsfxsize = 3.5in \epsfbox{RateVariousN_061005.eps} \end{center} \caption{The upper (UB) and lower bounds (LB) on the optimal channel coding rate are plotted for various values of packet size $P$.} \label{Fig. 2.} \end{figure} \begin{figure}[h] \begin{center} \epsfxsize = 3.5 in \mbox{ \epsfbox{PacketDistortion.eps}} \end{center} \caption{The distortion for various packet lengths for a packet erasure channel with packet erasure probability $\delta = 10^{-3}$ and $R = 10$.} \label{Fig. 3.} \end{figure} \section{Conclusion} The results presented in this paper provide a mechanism for optimal concatenation of source and channel coders. Analytic results are provided for lower and upper bounds for a binary erasure channel and for packet erasure channels. The results on the packet erasure channel enable us to obtain bounds on the packet size for a specified bound on the distortion and a given packet erasure probability. Alternatively, for a given packet erasure probability, we can find bounds on the channel encoding rate for various packet lengths. By studying the optimal rate allocation for bit and packet erasure channels, one can apply these results for transmission in a wide range of scenarios, including wireline channels with congestion. In future work, these bounds should be expanded to include transmission over more sophisticated channel models. \section*{Acknowledgment} This work has been supported in part by Nokia Inc.
\section{Observations} Magnetic white dwarfs account for a substantial fraction of the population of white dwarf stars \citep{2007ApJ...654..499K}. Spectroscopic surveys \citep{2012MNRAS.425.1394K} routinely uncover new candidates showing a great diversity in field strength and geometry \citep{2017A&A...607A..92L}. Our most recent observations were obtained with ESO's FOcal Reducer and low-dispersion Spectrograph 2 (FORS2) and the intermediate-dispersion X-shooter spectrograph, both on ESO's Very Large Telescopes (VLTs). Detailed modelling of spectroscopic time series often reveals complex surface field structures or the presence of a close degenerate companion, as observed in the case of NLTT~12758 \citep{2017MNRAS.466.1127K}. \section{Modeling and analysis} We followed a methodology described in \citet{1984MNRAS.206..407M} and \citet{1989ApJ...346..444A} and modelled the field distribution in magnetic hydrogen-rich white dwarfs, known as DAH white dwarfs, using a dipole of strength $B_p$ which may be offset along the polar axis by a fraction of the radius $a_z$ and inclined with respect to the viewer at an angle $i$. We divided the surface into 450 elements along the surface longitude and latitude and integrated the emergent intensity spectrum. These model spectra describe average surface field properties at a particular time and do not account for possible blurring caused by a short rotation period. \subsection{Hydrogen Balmer lines} The hydrogen Balmer spectra were computed using line strengths and Zeeman shifts from \citet{1974Ap&SS..31..103G}. The following examples illustrate the method. The new magnetic white dwarf NLTT~8435 ($B_p=6.1$\,MG) is relatively cool ($\approx 5360$~K) and hydrogen-rich (Fig.~\ref{fig1}). Photometric time series obtained with the Danish 1.54-m telescope revealed a likely rotation period of 95 minutes (Fig.~\ref{fig2}).
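A minimal sketch of the offset-dipole geometry described above can illustrate how the offset parameter $a_z$ spreads the surface field strengths (Python; the point-dipole field-magnitude formula, the example values of $B_p$ and $a_z$, and the neglect of limb effects are illustrative assumptions, not the actual modelling code):

```python
import math

def surface_field(theta, B_p=6.1, a_z=0.0):
    # Field magnitude (in MG) at a surface point of colatitude theta for a
    # point dipole of polar strength B_p displaced by a_z stellar radii along
    # the polar axis.  Standard dipole law: B = (B_p/2) r^-3 sqrt(1 + 3 cos^2),
    # with r and the angle measured from the displaced dipole centre.
    z, x = math.cos(theta), math.sin(theta)   # surface point, units of R
    dz = z - a_z                              # position relative to the dipole
    r = math.hypot(x, dz)
    cos_t = dz / r                            # cosine of angle from dipole axis
    return 0.5 * B_p * r ** -3 * math.sqrt(1 + 3 * cos_t ** 2)

# Centred dipole: B_p at the pole and B_p/2 at the equator.
print(surface_field(0.0), surface_field(math.pi / 2))
# An offset a_z = -0.2 strengthens one pole and weakens the other.
print(surface_field(0.0, a_z=-0.2), surface_field(math.pi, a_z=-0.2))
```

Summing such contributions over a grid of surface elements (450 in the modelling above) gives the field distribution entering the synthetic spectra.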
We also observed radial velocity variations of at least 60~km\,s$^{-1}$ that are not related to surface field variations but, instead, caused by the presence of a close, unseen companion. The cool magnetic white dwarf NLTT~13015 is also hydrogen-rich and exhibits marked field variations around a mean polar field of $\approx$12~MG. Fig.~\ref{fig1} shows one of the three individual exposures obtained with FORS2: the best-fitting model implies a field strength of 11.4~MG and a small offset along the polar axis of $-11$\%. \begin{figure*}[t!] \vspace{-0.3cm} \includegraphics[width=0.49\textwidth,clip=25 317 548 674]{vennes_fig1a.eps}% \includegraphics[width=0.49\textwidth,clip=25 317 548 674]{vennes_fig1b.eps} \vspace{-0.2cm} \caption{Observation and modelling (H$\alpha$) of the DAH white dwarfs NLTT\,8435 (left) and NLTT\,13015 (right).} \label{fig1} \end{figure*} \begin{figure*}[t!] \vspace{-0.2cm} \begin{center} \includegraphics[width=0.6\textwidth,clip=25 317 548 674]{vennes_fig2.eps} \vspace{-0.0cm} \caption{Photometric (R-band) time series (middle panel) and residuals (bottom panel) of NLTT~8435 obtained with the Danish 1.54-m telescope. The period analysis finds a significant periodicity near 95 minutes (top panel).} \label{fig2} \end{center} \end{figure*} \subsection{Heavy elements} White dwarf atmospheres are often contaminated with trace heavy elements \citep{2003ApJ...596..477Z}. Some cool and polluted hydrogen-rich white dwarfs known as DAZH white dwarfs, such as NLTT~7547 (Kawka et al. 2018, in preparation) and NLTT~53908 \citep{2014MNRAS.439L..90K}, show strong Ca~H\&K lines embedded in a magnetic field with strengths ranging from $\approx10^5$ to $10^6$ G. Other trace elements are also seen in the spectra of these objects (e.g., sodium, magnesium, aluminum, and iron), and modelling of spectral line shapes should provide additional constraints on the strength and structure of the magnetic field.
We computed detailed line profiles following the procedure described in \citet{2011A&A...532A...7K} but updated with the offset dipole field distributions described above and assuming quadratic Zeeman line splitting following \citet{2004ASSL..307.....L}. The updated Zeeman patterns agree with the earlier calculations of \citet{1975Ap&SS..36..459K}. Figure~\ref{fig3} shows the calcium K line in two polluted, magnetic white dwarfs. In the case of NLTT~7547, the broad line shape requires a field spread characteristic of a centered dipole ($a_z=0$) of 240 kG, while in the case of NLTT~53908, the narrow Zeeman components require a marked offset ($a_z=-0.2$) and a dipole field of 635 kG. \begin{figure*}[t!] \vspace{-0.3cm} \includegraphics[width=0.49\textwidth,clip=25 317 548 674]{vennes_fig3a.eps}% \includegraphics[width=0.49\textwidth,clip=25 317 548 674]{vennes_fig3b.eps} \vspace{-0.2cm} \caption{Observation and modelling (Ca~K) of the DAZH white dwarfs NLTT\,7547 (left) and NLTT\,53908 (right).} \label{fig3} \end{figure*} \section{Discussion} \citet{2014MNRAS.439L..90K} found evidence of field enhancement among cool, polluted hydrogen-rich white dwarfs. This simple fact can be interpreted either as evidence of a correlation between magnetic field strength and heavy element pollution, or as a field enhancement in {\it all} cool white dwarfs. Ultimately, this project aims at delivering field structure and binary properties for a large sample of magnetic white dwarfs and at constraining population statistics. In particular, we seek to determine the fraction of magnetic white dwarfs as a function of age, companionship, and spectral type. \acknowledgements A.K., L.F. and S.V. acknowledge support from the Czech Science Foundation (15-15943S).
This work is based on observations made with ESO telescopes at the La Silla Paranal Observatory under programme IDs 84.D-0862, 90.D-0473, 091.D-0267 and 095.D-0311, and at Kitt Peak National Observatory and Cerro Tololo Inter-American Observatory (National Optical Astronomy Observatory).
\section{Introduction} Even though general relativity (GR) has proven to be extremely accurate in describing our universe, it predicts the existence of spacetime singularities such as the one inside a black hole. It is believed that some sort of quantum effects should be taken into account in order to naturally resolve the singularity problem. However, so far there is no consensus on how a fundamental theory encapsulating gravity and quantum effects should be built, and it remains an active research area. One of the mainstream approaches in this direction is loop quantum gravity (LQG). A popular approach to address the black hole singularity problem based on LQG has been to consider effective models that include some of the non-perturbative quantum effects of the theory. This is a first step in realizing the physical implications of the putative quantum geometry, which is predicted in LQG, for black hole spacetimes.\footnote{It should be emphasized that there are other non-perturbative approaches towards the formulation of quantum gravity, such as those through the canonical quantum gravity approach \cite{qgkiefer}, string theory \cite{Strominger:1996sh} and the Euclidean path-integral approach \cite{Chen:2018aij}.} It is rather intriguing that the property of quantum geometry encoded in such a non-perturbative quantum theory of gravity can be captured by effective models, in which the quantum spacetime properties can usually be scrutinized systematically; sometimes analytic solutions are even attainable. More surprisingly, singularity resolution is a common feature shared by these effective models based on LQG. The success of these approaches has strongly motivated further investigations following this line in LQG.
Typical effective models, which robustly examine the issue of singularity resolution in black holes due to the unique quantum effects implied by the theory, include the so-called holonomy modifications derived from LQG \cite{Ashtekar:1995zh}. The combination of having a minimum area-gap (at Planck scales) along with the necessity of working with holonomies (or parallel transport of connections), instead of the connections themselves, in the quantum theory gives rise to such corrections in LQG \cite{Ashtekar:1996eg,Ashtekar:2003hd}. Holonomy modifications in effective models are usually implemented via the so-called polymerization technique, which replaces the conjugate momenta $p$ in the phase space with their polymerized counterparts $\sin(\lambda p)/\lambda$, where $\lambda$ stands for a quantum parameter related to the area-gap. (The latter trigonometric quantities can be regarded as matrix elements of holonomies.) Due to this, classical dynamics could be significantly modified at large curvature scales in these effective models, whereas the models are expected to recover the classical limit by sending $\lambda$ to zero. In previous studies of resolving black hole singularities within the LQG framework, there have been two main classes of investigations. The first one is the $\mu_0$-type schemes \cite{Ashtekar:2005qt,Modesto:2005zm,Campiglia:2007pr,Modesto:2008im,Gambini:2013ooa,Bojowald:2016itl}. In this approach, the quantum parameters are assumed to be constant on the entire phase space. However, some problems appear in this setup: the final results may depend on the fiducial structures, which are initially introduced to construct the classical phase space. In addition, significant quantum effects may emerge at the low curvature regime, rendering these models unphysical.
On the other hand, in the so-called $\bar\mu$-type schemes \cite{Bohmer:2007wi,Chiou:2008nm,Chiou:2008eg,Joe:2014tca}, the quantum parameters are allowed to be functions of phase space variables. In this approach, there could still be huge quantum modifications at the event horizon. See also \cite{Bodendorfer:2019cyv} for the construction of effective models based on a new classical phase space description and \cite{Bojowald:2018xxu,BenAchour:2018khr} for effective black hole models including deformed covariance in LQG. Recently, the authors in \cite{Ashtekar:2018cay,Ashtekar:2018lag} proposed a generalized version of the $\mu_0$-type schemes, which we shall refer to as the AOS model. In this effective model, the quantum parameters are chosen such that they are functions of the effective Hamiltonian itself. In this sense, the quantum parameters turn out to be Dirac observables, that is, constant only along the dynamical trajectories. This model is quite interesting because it not only solves the singularity problem, but also does not suffer from the aforementioned drawbacks, such as the appearance of quantum effects at low curvature regimes, dependence on fiducial structures, and mass amplification when crossing the transition surface. In \cite{Ashtekar:2018cay,Ashtekar:2018lag}, the authors also extended the effective model to the exterior spacetime, intending to construct an effective picture which smoothly connects the interior and the exterior regions. It is remarkable that the AOS model, with its novel definition of the quantum parameters in terms of phase space variables, is able to address so many long-standing issues in the LQG community. Therefore, the AOS model definitely deserves deeper and further analysis. Under more careful scrutiny, however, it was shown that there exist subtle complications in the AOS model \cite{Bodendorfer:2019xbp}.
For an effective black hole constructed in LQG, it is expected that quantum effects are significant only inside the horizon. This can be treated as a necessary requirement in order for the model to be viable. When moving away from the center of the black hole, the quantum effects become negligible and the theory reduces to GR in vacuum (for an isolated black hole). More precisely, if the spacetime is static and spherically symmetric, the black hole spacetime should reduce, sufficiently far outside the event horizon, to the Schwarzschild spacetime, which is asymptotically flat. This implies that the effective black hole should also be asymptotically flat. The physical meaning of asymptotic flatness is that the gravitational field approaches zero at large distances from an isolated black hole. We regard this property as an important requirement for an isolated black hole model since local objects should not change the asymptotic structure of spacetime. Therefore, in the absence of matter fields, any effective quantum-corrected black hole which is not asymptotically flat should not be physically viable. The main objective of this letter is to point out that the exterior spacetime proposed in \cite{Ashtekar:2018cay,Ashtekar:2018lag} has a serious flaw. In fact, the singularity resolution in this model comes at the very heavy cost of destroying a global (asymptotic) property of the spacetime. Even though the interior spacetime recovers the Schwarzschild metric near the event horizon and can be smoothly connected to the exterior, the full spacetime itself is \textit{not asymptotically flat}. In the asymptotic region, this effective model does not reduce to the Schwarzschild spacetime because of the presence of the quantum parameters. Such deviations in the asymptotic region raise serious questions on the viability of the model and its claim that quantum effects exist only deep inside the black hole horizon.
Indeed, as discussed in \cite{Bojowald:2019dry}, these shortcomings might be pointing towards a deeper malaise. \section{The AOS quantum black hole} As mentioned earlier, the AOS quantum black hole proposed in \cite{Ashtekar:2018cay,Ashtekar:2018lag} is an effective description of Schwarzschild spacetimes in the context of LQG. Using the fact that the interior of the Schwarzschild black hole is isometric to the Kantowski-Sachs spacetime, the interior region is foliated by spacelike homogeneous surfaces with coordinates $x$, $\theta$, and $\phi$. Within this setup, the components of the SU(2) connections $A_a^i$ can be described by two variables $b$ and $c$. Their conjugate momenta $E_i^a$, on the other hand, are described by the variables $p_b$ and $p_c$. In this regard, $(b, p_b)$ and $(c, p_c)$ are two canonically conjugate pairs \cite{Ashtekar:2018cay,Ashtekar:2018lag}. With an appropriate choice of the time coordinate $T$ and its corresponding lapse function $N$, the interior region can be described by the following metric \begin{equation} ds^2=-N^2dT^2+\frac{p_b^2}{\left|p_c\right|L_0^2}dx^2+\left|p_c\right|d\Omega^2\,,\label{metricinq} \end{equation} where $L_0$ is the size of the fiducial cell on the 3-manifold and the physically viable solution should not depend on it. The Poisson brackets between the Ashtekar-Barbero connection and the triad variables lead to $\{c,p_c\}=2G\gamma$ and $\{b,p_b\}=G\gamma$, where $\gamma$ is the Immirzi parameter. The Schwarzschild form of the interior region can be recovered by choosing the lapse function $N_{cl}^2=\gamma^2\left|p_c\right|/b^2$ and using the Hamiltonian constraint \begin{equation} H_{cl}\left[N_{cl}\right]=-\frac{1}{2G\gamma}\left[2cp_c+\left(b+\frac{\gamma^2}{b}\right)p_b\right]\,. \end{equation} In this regard, the mass of the black hole $m\equiv cp_c/(L_0\gamma)$ can be proven to be a Dirac observable, i.e., a constant of motion along the dynamical trajectory.
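The statement that $m$ is a Dirac observable can be checked with a short numerical sketch (Python; the units $G=\gamma=L_0=1$, the sample phase space point, and the finite-difference evaluation of the Poisson bracket are assumptions of the sketch):

```python
# Check that m = c*p_c/(L0*gamma) Poisson-commutes with H_cl, i.e. that the
# black hole mass is a Dirac observable.  The only nonvanishing elementary
# brackets are {b, p_b} = G*gamma and {c, p_c} = 2*G*gamma.
G = gamma = L0 = 1.0   # illustrative units
EPS = 1e-6             # finite-difference step

def m_obs(b, pb, c, pc):
    return c * pc / (L0 * gamma)

def H_cl(b, pb, c, pc):
    return -(2 * c * pc + (b + gamma ** 2 / b) * pb) / (2 * G * gamma)

def poisson(f, g, b, pb, c, pc):
    # {f, g} with central finite differences for the partial derivatives;
    # phase space ordering is (b, p_b, c, p_c).
    def d(fun, i):
        v = [b, pb, c, pc]
        v[i] += EPS
        hi = fun(*v)
        v[i] -= 2 * EPS
        return (hi - fun(*v)) / (2 * EPS)
    return (G * gamma * (d(f, 0) * d(g, 1) - d(f, 1) * d(g, 0))
            + 2 * G * gamma * (d(f, 2) * d(g, 3) - d(f, 3) * d(g, 2)))

pt = (0.7, 1.3, 0.4, 2.1)                           # an arbitrary phase space point
print(poisson(m_obs, H_cl, *pt))                    # numerically zero: m is conserved
print(poisson(lambda b, pb, c, pc: b, H_cl, *pt))   # b itself evolves nontrivially
```

The vanishing bracket at a generic phase space point reflects the exact cancellation $\{m,H_{cl}\}\propto p_c(-c)-c(-p_c)=0$.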
The effective modifications to the above classical lapse function and Hamiltonian in \cite{Ashtekar:2018cay,Ashtekar:2018lag} are introduced through the two quantum (polymerization) parameters $\delta_b$ and $\delta_c$. These parameters are assumed to be Dirac observables as well, in the sense that they should commute with the effective Hamiltonian. The lapse function and the effective Hamiltonian for the interior spacetime read \begin{eqnarray} N^2=\frac{\gamma^2p_c\delta_b^2}{\sin^2\left(\delta_bb\right)}\,,\nonumber\\ H_{eff}[N]=-\frac{1}{2G\gamma}\left[\frac{2\sin\left(\delta_cc\right)}{\delta_c}p_c+\left(\frac{\sin\left(\delta_bb\right)}{\delta_b}+\frac{\gamma^2\delta_b}{\sin\left(\delta_bb\right)}\right)p_b\right]\,. \end{eqnarray} When $\delta_b\rightarrow 0$ and $\delta_c\rightarrow 0$, the lapse function and the Hamiltonian recover their classical limits. After solving the equations of motion generated by the effective Hamiltonian, it can be shown that the quantum corrected black hole is free of spacetime singularities. Indeed, the interior singularity is replaced with a spacelike transition surface separating a trapped region from an anti-trapped region. In addition, at the transition surface the curvature acquires its maximum scale, which is independent of the black hole mass for a macroscopic black hole, a requirement necessary for the validity of the effective description \cite{Ashtekar:2018cay,Ashtekar:2018lag}. In addition to the interior solution, the authors of \cite{Ashtekar:2018cay,Ashtekar:2018lag} extended the investigations to the exterior region by foliating it with timelike homogeneous surfaces (e.g., constant-$r$ surfaces in Schwarzschild coordinates) such that the Ashtekar-Barbero connections ($\tilde{b}$ and $\tilde{c}$) and their canonical conjugate momenta ($\tilde{p}_b$ and $\tilde{p}_c$) take values in $\textrm{SU}(1,1)$ rather than in $\textrm{SU}(2)$.
The phase space variables describing the exterior dynamics are related to the interior via the substitution \begin{equation} b\rightarrow i\tilde{b}\,,\qquad p_b\rightarrow i\tilde{p}_b\,,\qquad c\rightarrow\tilde{c}\,,\qquad p_c\rightarrow\tilde{p}_c\,. \end{equation} The exterior metric reads \begin{equation} d\tilde{s}^2=-\frac{\tilde{p}_b^2}{\tilde{p}_cL_0^2}dx^2-\tilde{N}^2dT^2+\tilde{p}_cd\Omega^2\,.\label{metricoutq} \end{equation} This method of deriving the exterior solution does work for classical solutions, regaining the well-known form of the Schwarzschild exterior. Furthermore, the interior and exterior regions can be smoothly connected at the event horizon. Later, we will show that the exterior solution of the effective spacetime derived using this approach is not asymptotically flat. The presence of the quantum parameters unavoidably alters the asymptotic behavior of the exterior region, jeopardizing the validity of the whole solution. By choosing a proper lapse function $\tilde{N}^2=-\gamma^2 \tilde{p}_c\delta_b^2/\sinh^2{(\delta_b\tilde{b})}$, one obtains an effective Hamiltonian for the exterior spacetime from which the equations of motion can be solved as follows \cite{Ashtekar:2018cay,Ashtekar:2018lag} \begin{eqnarray} \tan\left(\frac{\delta_c\tilde{c}\left(T\right)}{2}\right)=\frac{\gamma L_0\delta_c}{8m}e^{-2T}\,,\\ \tilde{p}_c\left(T\right)=4m^2\left(e^{2T}+\frac{\gamma^2L_0^2\delta_c^2}{64m^2}e^{-2T}\right)\,,\label{34}\\ \cosh\left(\delta_b\tilde{b}\left(T\right)\right)=b_0\tanh\left(\frac{1}{2}\left(b_0T+2\tanh^{-1}\left(\frac{1}{b_0}\right)\right)\right)\,,\label{35}\\ \tilde{p}_b\left(T\right)=-2m\gamma L_0\frac{\sinh\left(\delta_b\tilde{b}\left(T\right)\right)}{\delta_b}\frac{1}{\gamma^2-\frac{\sinh^2\left(\delta_b\tilde{b}\left(T\right)\right)}{\delta_b^2}}\,. 
\end{eqnarray} The two quantum parameters $\delta_b$ and $\delta_c$ share the same values as their interior counterparts and similarly they are assumed to be constant along the dynamical trajectories \cite{Ashtekar:2018cay,Ashtekar:2018lag}. Here, $b_0\equiv\sqrt{1+\gamma^2\delta_b^2}$ has been introduced for brevity. As expected, $m =p_c\sin\left(\delta_cc\right)/\left(\gamma L_0\delta_c\right)$, a Dirac observable, denotes the mass of the black hole. \section{Asymptotic structure} To study the asymptotic structure of the AOS metric (\ref{metricoutq}), we shall transform the metric to spherically symmetric slices. First, we use (\ref{35}) to get \begin{eqnarray} \sinh^2\left(\delta_b\tilde{b}\right)=\frac{\gamma^2\delta_b^2\left(b_0^2X+X+2b_0\right)}{\left(b_0+X\right)^2}X\,,\label{38}\\ \gamma^2-\frac{\sinh^2\left(\delta_b\tilde{b}\right)}{\delta_b^2}=\gamma^2b_0^2\frac{1-X^2}{\left(b_0+X\right)^2}\,,\label{39} \end{eqnarray} where $X\equiv\tanh\left(b_0T/2\right)$. Using equations (\ref{38}) and (\ref{39}), the metric function $-g_{xx}=\frac{\tilde{p}_b^2}{\tilde{p}_cL_0^2}$ can be written as \begin{equation} -g_{xx}=\frac{4m^2\left(b_0^2X+X+2b_0\right)(b_0+X)^2X}{\tilde{p}_cb_0^4\left(1-X^2\right)^2}\label{gxx}\,. \end{equation} When $T\rightarrow\infty$, we have $X\rightarrow1$ and $\tilde{p}_c\rightarrow 4m^2e^{2T}$, which corresponds to the asymptotic region of the exterior spacetime. Note that constant $T$ surfaces in the exterior spacetime are timelike surfaces. The asymptotic expression of $-g_{xx}$ reads \begin{equation} -g_{xx}\approx\frac{\left(b_0+1\right)^4}{16b_0^4}e^{2T\left(b_0-1\right)}. \end{equation} It can be seen that $g_{xx}$ diverges when $T\rightarrow\infty$ as long as the quantum parameter $\delta_b$ is not zero (i.e., when $b_0\ne1$), no matter how carefully one fine-tunes it. The behaviour of the metric function $g_{xx}$ is shown in Figure~\ref{f2}. 
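These statements are easy to probe numerically. The sketch below (our own; the parameter values are illustrative assumptions, not the specific choices advocated in \cite{Ashtekar:2018cay,Ashtekar:2018lag}) evaluates the exact $-g_{xx}$ built from the solutions above, checks that it approaches the asymptotic expression, and also confirms that $T=0$ is the horizon, where $\tilde{p}_b(0)=0$ and $\tilde{p}_c(0)\approx(2m)^2$:

```python
import math

# Illustrative parameter values (assumptions, not the AOS papers' choices)
m, gamma, db, dc, L0 = 1.0, 0.2375, 0.1, 0.1, 1.0
b0 = math.sqrt(1 + gamma**2 * db**2)

def pc(T):
    """Exact p_c~(T) of Eq. (34)."""
    return 4*m**2*(math.exp(2*T) + gamma**2*L0**2*dc**2/(64*m**2)*math.exp(-2*T))

def pb(T):
    """Exact p_b~(T) built from Eq. (35)."""
    cosh_dbb = b0*math.tanh(0.5*(b0*T + 2*math.atanh(1/b0)))
    s = math.sqrt(max(cosh_dbb**2 - 1, 0.0))   # sinh(delta_b b~); guard rounding
    return -2*m*gamma*L0*(s/db)/(gamma**2 - (s/db)**2)

def minus_g_xx(T):
    """Exact -g_xx of Eq. (gxx) with X = tanh(b0 T / 2)."""
    X = math.tanh(b0*T/2)
    return (4*m**2*(b0**2*X + X + 2*b0)*(b0 + X)**2*X
            / (pc(T)*b0**4*(1 - X**2)**2))

def minus_g_xx_asym(T):
    """Asymptotic expression (b0+1)^4 / (16 b0^4) * exp(2T(b0-1))."""
    return (b0 + 1)**4/(16*b0**4)*math.exp(2*T*(b0 - 1))

# T = 0 is the horizon: p_b~ vanishes there, and p_c~ ~ (2m)^2
print(abs(pb(0.0)) < 1e-4, abs(pc(0.0)/(2*m)**2 - 1) < 1e-3)
# the exact metric function approaches its asymptotic form at large T ...
print(abs(minus_g_xx(15.0)/minus_g_xx_asym(15.0) - 1) < 1e-3)
# ... and the asymptotic form grows without bound whenever b0 != 1
print(minus_g_xx_asym(200.0) > minus_g_xx_asym(100.0))
```

For $\delta_b=0$ (so $b_0=1$), the same asymptotic expression reduces to the constant $2^4/16=1$, i.e., the Schwarzschild behaviour shown by the solid curve of Figure~\ref{f2}.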
On the other hand, the metric function $g_{TT}$ is given by \begin{equation} g_{TT}=-\tilde{N}^2=\frac{\tilde{p}_c(b_0+X)^2}{\left(b_0^2X+X+2b_0\right)X}\,, \end{equation} where (\ref{38}) is used in the last equality. When $T\rightarrow\infty$, it can be approximated as $g_{TT}\approx\tilde{p}_c\approx4m^2e^{2T}$. Therefore $g_{TT}$ also diverges asymptotically. Note that the divergence of $g_{TT}$ at large $T$ can be removed by redefining the radial coordinate and is hence not harmful. \begin{figure} \center \includegraphics[scale=0.7]{incons} \caption{\label{f2} The metric function $-g_{xx}$ of the exterior AOS spacetime (dotted) is shown as a function of $T$. The solid curve corresponds to the Schwarzschild metric. When approaching the event horizon, the AOS black hole recovers the Schwarzschild solution, but they deviate significantly from each other when moving away from the black hole.} \end{figure} If we do the transformation $r\equiv|\tilde{p}_c|^{1/2}$ and change $x\rightarrow t$ with a proper rescaling, the effective metric in the asymptotic region ($r\rightarrow\infty$) reads \begin{equation} ds^2|_{r\rightarrow\infty} = -r^{2(b_0-1)} d t^2 + d r^2 + r^2d\Omega^2\,.\label{12me} \end{equation} We will show explicitly that, in the asymptotic region, the AOS quantum black hole does not recover the Schwarzschild spacetime. In fact, the AOS quantum black hole is not asymptotically flat unless the quantum correction is absent ($b_0=1$), i.e., to obtain the proper classical limit in the asymptotic region, as is required of any physical model, we end up with the classical solution everywhere! \subsection{Asymptotic non-flatness} In order to explicitly demonstrate the asymptotic non-flatness of the AOS spacetime, we recall the definition of the so-called (weakly) asymptotically simple spacetimes. 
Essentially, a spacetime on a manifold $\mathcal{M}$ defined by a metric $g_{\mu\nu}$ is asymptotically simple if there exists a conformal compactification $\tilde{\mathcal{M}}=\mathcal{M}\cup\partial\mathcal{M}$ such that \cite{Wald,Andersson:2018rqm} \begin{enumerate} \item its metric $\tilde{g}_{\mu\nu}$ is conformal to the original metric $g_{\mu\nu}$, which can be written as $\tilde{g}_{\mu\nu}=\Omega^2 g_{\mu\nu}$, \item every null geodesic in $\mathcal{M}$ has future and past endpoints on $\partial\mathcal{M}$, \item the conformal factor satisfies the following conditions: 1) $\Omega>0$ on $\mathcal{M}$, 2) $\Omega=0$ and $\nabla_\alpha\Omega\ne0$ on $\partial\mathcal{M}$. \end{enumerate} In order to include spacetimes which may contain singularities, one can define the so-called weakly asymptotically simple manifold $\mathcal{N}$, which contains an open set $U$ that is isometric to a neighborhood of the boundary of another compactified asymptotically simple manifold. Finally, one defines an asymptotically flat spacetime if the spacetime is weakly asymptotically simple and asymptotically empty in the sense that the Ricci tensor vanishes in a neighborhood of $\partial\mathcal{M}$. In order to recast the asymptotic metric (\ref{12me}) in a form to which the criteria mentioned above can be applied, the only possibility is to consider the conformal factor $\Omega^2 =r^{-2(b_0-1)}$ to remove the $r$ dependence in the $g_{tt}$ component. In this regard, it can be seen that the metric (\ref{12me}) is conformally related to the following line element: \begin{equation} d\tilde{s}^2=-dt^2 + d\tilde{r}^2 + \left(2-b_0\right)^2\tilde{r}^2 d\Omega^2\,,\label{bvas} \end{equation} where $b_0\ne1$. Therefore, the metric (\ref{12me}) is conformally related to a ``quasi-asymptotically flat" metric, which has been proven to be asymptotically simple but \textit{not} asymptotically empty \cite{Wald,Andersson:2018rqm,Nucamendi:1996ac}. 
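The step from (\ref{12me}) to (\ref{bvas}) can be verified symbolically. In this sketch (ours), the intermediate radial coordinate $\tilde{r}=r^{2-b_0}/(2-b_0)$ is an assumption we introduce to bring the conformally rescaled metric to the quoted form; we also assume $b_0<2$, which holds since $b_0$ is close to 1:

```python
import sympy as sp

r, b0 = sp.symbols('r b_0', positive=True)

# Conformally rescaled metric: Omega^2 ds^2 with Omega^2 = r^(-2(b0-1))
# reads -dt^2 + r^(2(1-b0)) dr^2 + r^(2(2-b0)) dOmega^2.
rt = r**(2 - b0)/(2 - b0)          # candidate new radial coordinate r~

# Radial part: (dr~/dr)^2 reproduces the rescaled g_rr ...
assert sp.simplify(sp.diff(rt, r)**2 - r**(2*(1 - b0))) == 0
# ... and the angular part becomes (2 - b0)^2 r~^2, as in (bvas)
assert sp.simplify(r**(2*(2 - b0)) - (2 - b0)**2*rt**2) == 0
```

Both identities hold for any $b_0\ne2$; the deficit solid angle of (\ref{bvas}) is precisely the factor $(2-b_0)^2\ne1$ multiplying $\tilde{r}^2d\Omega^2$.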
In fact, the metric (\ref{bvas}) is the asymptotic spacetime of the global monopole proposed by Vilenkin and Barriola \cite{Barriola:1989hx}, which contains a deficit solid angle. Given that the metric (\ref{bvas}) is not fully asymptotically flat, the AOS metric, which is related to (\ref{bvas}) via a conformal transformation, cannot be asymptotically flat either. This lets us unambiguously conclude that the metric (\ref{12me}) is not asymptotically flat. \subsection{Curvature fall-off} An additional clue pointing to the inconsistent asymptotic behavior of the AOS black hole can be seen from the fall-off behavior of the curvature invariants when $r$ becomes large. The fall-off behavior of curvature corresponds to the physical requirement that the gravitational effects generated by a black hole should decrease when moving away from the black hole. Although the curvature invariants approach zero as $r\rightarrow\infty$, their fall-off behavior deviates significantly from that of the Schwarzschild solution. If one calculates the Kretschmann scalar $K$ of the effective metric and expands it in powers of $1/r$, one finds \begin{equation} K=\frac{c_1}{r^4}+\left(\textrm{higher order of }\frac{1}{r}\right)\,, \end{equation} where $c_1=3\gamma^4\delta_b^4+\left(\textrm{higher order of }\gamma\delta_b\right)$ and it vanishes \textit{iff} the quantum parameter $\delta_b=0$. This shows that the asymptotic behavior of the metric clearly differs from that of the Schwarzschild spacetime, where $K\propto 1/r^6$. In fact, this inconsistency happens not only for the Kretschmann scalar but also for other curvature invariants constructed solely from the Riemann tensor. Even though the quantum parameter is tiny, the deviation of the AOS metric from the Schwarzschild one is greatly amplified at large $r$ owing to their different asymptotic behaviors. 
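The quoted $1/r^4$ fall-off can be reproduced independently from the asymptotic metric (\ref{12me}) alone. The following sketch (ours, not from the original papers) computes the Kretschmann scalar with sympy, writing $a\equiv b_0-1$; the exact result for this metric is $K=4a^2\left[(1-a)^2+2\right]/r^4$, whose leading term $12a^2/r^4$ reproduces $c_1=3\gamma^4\delta_b^4$ once $a\approx\gamma^2\delta_b^2/2$ is substituted:

```python
import sympy as sp
from itertools import product

t, ph = sp.symbols('t phi')
r, th = sp.symbols('r theta', positive=True)
a = sp.symbols('a', positive=True)          # a = b0 - 1
x = [t, r, th, ph]
g = sp.diag(-r**(2*a), 1, r**2, r**2*sp.sin(th)**2)
gi = g.inv()
N = 4

# Christoffel symbols Gamma^l_{mn} (the metric is diagonal)
Gam = [[[sp.simplify(sum(gi[l, s]*(sp.diff(g[s, mu], x[nu])
         + sp.diff(g[s, nu], x[mu]) - sp.diff(g[mu, nu], x[s]))
         for s in range(N))/2)
         for nu in range(N)] for mu in range(N)] for l in range(N)]

# Riemann tensor R^l_{m p q}
def riem(l, mm, p, q):
    e = sp.diff(Gam[l][q][mm], x[p]) - sp.diff(Gam[l][p][mm], x[q])
    e += sum(Gam[l][p][s]*Gam[s][q][mm] - Gam[l][q][s]*Gam[s][p][mm]
             for s in range(N))
    return sp.simplify(e)

R = [[[[riem(l, mm, p, q) for q in range(N)] for p in range(N)]
      for mm in range(N)] for l in range(N)]

# Kretschmann scalar K = R_{lmpq} R^{lmpq}, exploiting the diagonal metric
K = sp.simplify(sum(g[l, l]*gi[mm, mm]*gi[p, p]*gi[q, q]*R[l][mm][p][q]**2
                    for l, mm, p, q in product(range(N), repeat=4)))

# exact: K = 4 a^2 ((1-a)^2 + 2) / r^4, leading term 12 a^2 / r^4
assert sp.simplify(K - 4*a**2*((1 - a)**2 + 2)/r**4) == 0
```

For comparison, the Schwarzschild Kretschmann scalar is $48m^2/r^6$, which falls off two powers of $r$ faster, as stated in the text.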
For a viable quantum corrected black hole solution, one would expect the quantum effects to occur only deep inside the black hole and not at large areal radius, where their appearance would indeed be problematic. No matter how small the quantum parameter is, the deviation of the AOS solution from the classical one would inevitably become sizable at large $r$. Finally, there is one more piece of evidence demonstrating that the AOS spacetime is not asymptotically flat. If the spacetime is asymptotically flat, meaning roughly that it approaches the Minkowski spacetime, the asymptotic region should be maximally symmetric and should satisfy the following relation \begin{equation} R_{\alpha\beta\mu\nu}=\frac{R}{d(d-1)}\left(g_{\alpha\mu}g_{\beta\nu}-g_{\alpha\nu}g_{\beta\mu}\right)\,,\label{maximallsym} \end{equation} where $d=4$ is the spacetime dimension. From the asymptotic metric (\ref{12me}), we find that \begin{equation} R=-\frac{2b_0\left(b_0-1\right)}{r^2}\,,\qquad R_{t\theta\theta t}=-\left(b_0-1\right)r^{2b_0-2}\,. \end{equation} If $b_0\ne1$, it can be immediately seen that (\ref{maximallsym}) is not satisfied when $r\rightarrow\infty$. Therefore, the metric (\ref{12me}) is not asymptotically flat. \section{Asymptotic structure at null infinity} After this paper appeared on arXiv, the authors of Refs.~\cite{Ashtekar:2018cay,Ashtekar:2018lag} wrote another paper \cite{Ashtekar:2020ckv}, mentioning that the asymptotic behavior of the curvature fall-off of the AOS model does not match that of the Schwarzschild black hole, as we have mentioned in the previous section (see the v2 version of this paper, arXiv:1902.07874v2). 
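Before proceeding, the failure of the maximal-symmetry relation quoted above admits a quick symbolic check. The sketch below (ours) takes the quoted values of $R$ and $R_{t\theta\theta t}$ at face value and shows that (\ref{maximallsym}) fails for generic $b_0\ne1$ (apart from the spurious root $b_0=6$):

```python
import sympy as sp

r, b0 = sp.symbols('r b_0', positive=True)

# asymptotic metric components of (12me)
g_tt, g_thth, g_tth = -r**(2*(b0 - 1)), r**2, 0

# curvature values quoted in the text
R = -2*b0*(b0 - 1)/r**2
R_tthtth = (b0 - 1)*r**(2*b0 - 2)        # R_{t th t th} = -R_{t th th t}

# what maximal symmetry, Eq. (maximallsym) with d = 4, would require
R_required = R/12*(g_tt*g_thth - g_tth**2)

mismatch = sp.simplify(R_tthtth - R_required)
# the mismatch is (b0-1)(6-b0) r^(2 b0 - 2) / 6, vanishing only for b0 = 1
assert sp.simplify(mismatch - (b0 - 1)*(6 - b0)/6*r**(2*b0 - 2)) == 0
assert sp.simplify(mismatch.subs(b0, 1)) == 0   # only the classical case survives
```

Since the mismatch scales as $r^{2b_0-2}$, it does not even decay at large $r$ when $b_0>1$, making the failure of (\ref{maximallsym}) manifest in the asymptotic region.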
Although the authors of \cite{Ashtekar:2020ckv} gave a coordinate transformation on the metric and claimed that the asymptotic metric of the AOS model approaches the Minkowski metric $\eta_{\mu\nu}$ as $1/r$, we would like to argue that such a coordinate transformation is problematic, and so is the asymptotic structure of the metric so derived. More explicitly, we will show in this section that if the asymptotic metric given in Ref.~\cite{Ashtekar:2020ckv} is correct, the existence of the quantum parameter will inevitably destroy the asymptotic structure of the spacetime. According to Ref.~\cite{Ashtekar:2020ckv} (see Eqs.~(4.11) and (4.12) in that paper), under the following coordinate transformation \begin{equation} \tau=t\left(\frac{r}{2m}\right)^\epsilon\,, \end{equation} the asymptotic metric of the AOS spacetime can be written as \begin{eqnarray} g_{\mu\nu}dx^\mu dx^\nu&=&\left(-d\tau^2+dr^2+r^2d\Omega^2\right)+\left(\frac{2m}{r}\right)^{1+\epsilon}d\tau^2\nonumber\\&+&2\epsilon\frac{\tau}{r}\left[1-\left(\frac{2m}{r}\right)^{1+\epsilon}\right]drd\tau\nonumber\\&-&\left[\left(1-\frac{1}{1-\left(\frac{2m}{r}\right)^{1+\epsilon}}\right)-\epsilon^2\frac{\tau^2}{r^2}\left(1-\left(\frac{2m}{r}\right)^{1+\epsilon}\right)\right]dr^2\,, \end{eqnarray} where $\epsilon\equiv b_0-1$. Although this metric is written in the form of $\eta_{\mu\nu}+\mathcal{O}(1/r)$, it cannot properly describe the asymptotic spacetime. For example, considering a \textit{would-be} null-like surface $\tau\pm r=\textrm{const.}$ and $d\Omega=0$, one can see that this surface is in fact not a null-like surface in the asymptotic region because \begin{equation} g_{\mu\nu}dx^\mu dx^\nu=\left(2\epsilon+\epsilon^2\right)dr^2+\mathcal{O}\left(\frac{1}{r}\right)\,, \end{equation} which does not vanish in the presence of the quantum parameter $\epsilon=b_0-1$. 
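The non-null behaviour of the surface $\tau-r=\textrm{const.}$ can be confirmed with a one-line limit. In this sketch (ours) we set $2m=2$ for simplicity; the limit itself does not depend on $m$:

```python
import sympy as sp

r, tau0 = sp.symbols('r tau_0', positive=True)
eps = sp.symbols('epsilon', positive=True)   # epsilon = b0 - 1 > 0

tau = r + tau0                # would-be outgoing null surface, dOmega = 0
f = (2/r)**(1 + eps)          # (2m/r)^(1+eps) with 2m = 2

# coefficient of dr^2 in Eq. (4.12) along the surface (d tau = dr);
# the flat -dtau^2 + dr^2 piece cancels identically, leaving
ds2_over_dr2 = f + 2*eps*(tau/r)*(1 - f) \
    - ((1 - 1/(1 - f)) - eps**2*(tau/r)**2*(1 - f))

lim = sp.limit(ds2_over_dr2, r, sp.oo)
assert sp.simplify(lim - (2*eps + eps**2)) == 0   # nonzero unless eps = 0
```

The limit uses only $(2m/r)^{1+\epsilon}\rightarrow0$ and $\tau/r\rightarrow1$ along the surface, reproducing the leading coefficient $2\epsilon+\epsilon^2$ quoted above.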
Therefore, the existence of the quantum parameters in the AOS model does destroy the asymptotic structure of the spacetime, which is not supposed to happen for a local gravitational object. \section{Conclusions} The recently proposed AOS effective black hole \cite{Ashtekar:2018cay,Ashtekar:2018lag} is constructed within the framework of LQG in order to resolve the classical singularity problem of the Schwarzschild spacetime. This model has several intriguing physical outcomes. In the interior region, the Schwarzschild singularity is indeed replaced with a spacelike transition surface connecting a trapped and an anti-trapped region. This transition surface results from the quantum corrections introduced in the theory, and these quantum corrections quickly decrease when moving away from the transition surface. For a macroscopic black hole, the curvature scale acquires its maximum value at the transition surface and this maximum value turns out to be independent of the black hole mass. The authors extended their solution to the exterior region. It was argued \cite{Ashtekar:2018cay,Ashtekar:2018lag} that this exterior spacetime is asymptotically flat and that a corresponding ADM mass can be defined. Based on the ADM mass defined in this model, it was argued that there is no mass amplification when crossing the transition surface. It is in fact rather exciting that the AOS model is able to address so many long-standing issues haunting the LQG community, and this model definitely deserves deeper scrutiny. In this letter, we have explicitly shown that the AOS exterior region is actually not asymptotically flat. In addition, even though the curvature invariants approach zero in the asymptotic region, their fall-off behaviors do not recover those of the Schwarzschild spacetime. This means that the AOS model \textit{does not} reduce to the Schwarzschild black hole in the asymptotic region, i.e., at parametrically large areal radius. 
In addition, we have found that after a proper conformal transformation, the transformed AOS effective metric shares the same asymptotic expression as the global monopole spacetime proposed by Vilenkin and Barriola \cite{Barriola:1989hx}, which is asymptotically simple, but not asymptotically flat. The AOS spacetime being asymptotically non-flat can also be seen from the viewpoint of how the curvature tensors behave in the asymptotic region. Even though it is possible to define an appropriate ADM mass in some asymptotically non-flat spacetimes, such as the global monopole spacetime \cite{Nucamendi:1996ac}, it is actually not clear whether this can be done for the AOS spacetime. In any case, the AOS quantum black hole does not reduce to the Schwarzschild black hole when $r\rightarrow\infty$ in the presence of the quantum parameters. In fact, after we had pointed out this issue in \cite{Bouhmadi-Lopez:2019hpp}\footnote{Here we refer specifically to the v1 version of the paper, which was posted on arXiv on 21 Feb 2019.}, the author of \cite{Bojowald:2019dry} proved that the AOS exterior spacetime violates general covariance. The effective Hamiltonian can be expanded in terms of the quantum parameters and it has been shown that the first-order terms in the quantum parameters violate general covariance. Therefore, the exterior spacetime of the AOS black hole being asymptotically non-flat might be a manifestation of its violation of general covariance. Thus, in the presence of quantum parameters, the AOS model leads to unphysical conclusions; the singularity resolution in this model thus comes at the very heavy price of spoiling its classical limit, at least as far as the asymptotic properties of the spacetime are concerned. 
\vspace{3mm} Note added: Although the authors of \cite{Ashtekar:2020ckv} agree with the fact that the curvature fall-off is slower compared to the Schwarzschild spacetime, they claim that this is not harmful because the deviation is extremely small if the quantum parameters are small. The authors also provide a coordinate transformation showing that the metric can be recast into a Minkowskian one in the asymptotic region (see Eqs.~(4.11) and (4.12) in \cite{Ashtekar:2020ckv}). However, as we emphasize in this paper, in the asymptotic region, any deviation compared with the classical solution resulting from quantum corrections is a manifestation of a pathology of the effective spacetime, no matter how small it is. In addition, the coordinate transformation (4.11) in \cite{Ashtekar:2020ckv} is extremely problematic for examining the asymptotic limit, i.e., when $r \rightarrow \infty$. This is because, in the limit $r\rightarrow\infty$, the new coordinate $\tau$ is finite \textit{only when} $t\rightarrow 0$. Keeping in mind that $\tau$ appears explicitly in some of the metric components (see Eqn. (4.12) in \cite{Ashtekar:2020ckv}), it is clear that such a coordinate transformation breaks down when $t$ is large and cannot be used to probe the asymptotic limit of the AOS model. Furthermore, the asymptotic metric (4.13) is not valid at null infinity, as we have shown explicitly in the previous section. This can be proven by showing that $\tau\pm r=\textrm{const.}$ is not a null-like surface in the metric (4.12). More explicitly, considering the \textit{would-be} null-like surface $\tau-r=\textrm{const.}$ and $d\Omega=0$ in Eqn.~(4.12), and making a series expansion in terms of $1/r$, the leading order of the right-hand side of Eqn.~(4.12) becomes $\left(2\epsilon+\epsilon^2\right)dr^2$, which is nonzero asymptotically. Therefore, the presence of the quantum parameter $\epsilon\equiv b_0-1$ does destroy the asymptotic structure of the spacetime. 
\ack{MBL is supported by the Basque Foundation of Science Ikerbasque. She also would like to acknowledge the partial support from the Basque government Grant No. IT956-16 (Spain) and from the project FIS2017-85076-P (MINECO/AEI/FEDER, UE). CYC and PC are supported by Ministry of Science and Technology (MOST), Taiwan, through No. 107-2119-M-002-005 and No. 108-2811-M-002-682, Leung Center for Cosmology and Particle Astrophysics (LeCosPA) of National Taiwan University, and Taiwan National Center for Theoretical Sciences (NCTS). PC is in addition supported by US Department of Energy under Contract No. DE-AC03-76SF00515. The research of SB and DY is supported in part by the Ministry of Science, ICT \& Future Planning, Gyeongsangbuk-do and Pohang City and the National Research Foundation of Korea grant no. 2018R1D1A1B07049126. SB is also supported in part by funds from NSERC, from the Canada Research Chair program and by a McGill Space Institute fellowship.} \section*{References}
\subsubsection*{#1}} \newsavebox{\fminipagebox} \NewDocumentEnvironment{fminipage}{m O{\fboxsep}} {\par\kern#2\noindent\begin{lrbox}{\fminipagebox} \begin{minipage}{#1} } {\end{minipage}\end{lrbox}% \makebox[#1]{% \kern\dimexpr-\fboxsep-\fboxrule\relax \fbox{\usebox{\fminipagebox}}% \kern\dimexpr-\fboxsep-\fboxrule\relax }\par\kern#2 } \NewDocumentEnvironment{takeaway}{}{\begin{fminipage}{\linewidth}\sffamily\footnotesize\begin{flushleft}}{\end{flushleft}\end{fminipage}} \acrodef{API}{Application Programming Interface} \acrodef{SLR}{Systematic Literature Review} \usepackage{tikz} \newcommand{\etal}[0]{et~al{.}} \newcommand{\shrug}[1][]{% \begin{tikzpicture}[baseline,x=0.8\ht\strutbox,y=0.8\ht\strutbox,line width=0.125ex,#1] \def\arm{(-2.5,0.95) to (-2,0.95) (-1.9,1) to (-1.5,0) (-1.35,0) to (-0.8,0)}; \draw \arm; \draw[xscale=-1] \arm; \def\headpart{(0.6,0) arc[start angle=-40, end angle=40,x radius=0.6,y radius=0.8]}; \draw \headpart; \draw[xscale=-1] \headpart; \def\eye{(-0.075,0.15) .. controls (0.02,0) .. (0.075,-0.15)}; \draw[shift={(-0.3,0.8)}] \eye; \draw[shift={(0,0.85)}] \eye; \draw (-0.1,0.2) to [out=15,in=-100] (0.4,0.95); \end{tikzpicture}} \begin{document} \title{Don't forget your classics:\\Systematizing 45 years of Ancestry for\\Security API Usability Recommendations} \author{Nikhil Patnaik, Andrew C Dwyer, Joseph Hallett and Awais Rashid~\IEEEmembership{Member,~IEEE} \IEEEcompsocitemizethanks{ \IEEEcompsocthanksitem N. Patnaik, J. Hallett and A. Rashid are with the Bristol Cyber Security Group, University of Bristol, UK. A.C. Dwyer is with the University of Durham, UK.\protect\\ {E-mail: \{nikhil.patnaik, joseph.hallett, awais.rashid\}@bristol.ac.uk} \& andrew.dwyer@durham.ac.uk } } \maketitle \begin{abstract} Producing secure software is challenging. The poor usability of security \acp{API} makes this even harder. 
Many recommendations have been proposed to support developers by improving the usability of cryptography libraries and \acp{API}; rooted in wider \emph{best practice} guidance in software engineering and API design. In this SLR, we systematize knowledge regarding these recommendations. We identify and analyze 65 papers spanning 45 years, offering a total of 883 recommendations. We undertake a thematic analysis to identify 7 core ways to improve usability of \acp{API}. We find that most of the recommendations focus on helping API developers to \emph{construct} and \emph{structure} their code and make it more usable and easier for programmers to \emph{understand}. There is less focus, however, on \emph{documentation}, \emph{writing requirements}, \emph{code quality assessment} and the impact of \emph{organizational software development practices}. By tracing and analyzing paper ancestry, we map how this knowledge becomes validated and translated over time. We find evidence that less than a quarter of all API usability recommendations are empirically validated, and that recommendations specific to usable security \acp{API} lag even further behind in this regard. \end{abstract} \acresetall \begin{IEEEkeywords} API, usability, security, SLR, recommendations \end{IEEEkeywords} \section{Introduction} Programming is hard to do well, and, even more so, securely. Developers frequently combine functions from \acp{API}; but some are notoriously difficult to use correctly~\cite{robillard2009makes,mclellan1998building}, with cryptography and security libraries often singled out as being particularly obtuse~\cite{kamp2014please,nadi2016jumping}. Strategies ranging from Gamma~\etal{}'s design patterns~\cite{gamma1993design} to the design principles of the 1975 Saltzer \& Schroeder paper on computer security~\cite{saltzer1975protection} have been influential on software engineering practices and, we find, hold a lot of influence over security API design recommendations as well. 
We investigate how such strategies may have informed recommendations for designing \acp{API}, especially those \acp{API} that provide security and cryptographic functionality. Over the last 10 years, to help \ac{API} designers produce security \acp{API} that are more usable, various papers have proposed \emph{usability guidelines, principles and recommendations}~\cite{brown2017finding,green2016developers,mendoza2018mobile,mindermann2018rust,patnaik2019usability}\textemdash hereafter \emph{recommendations}. The field of security \ac{API} usability is, however, relatively new, with its recommendations building upon work spanning the past 45 years. Recommendations such as those proposed by Green \& Smith in 2016~\cite{green2016developers} build upon work on general \ac{API} design by Joshua Bloch from 2006~\cite{bloch2006design}. Likewise, Bloch~\cite{bloch2001effective} references the design patterns of Gamma~et~al{.}~\cite{gamma1993design}. This form of ancestry tracing offers a means to systematize the knowledge that informs recommendations for improving the usability of security \acp{API}, providing a deeper understanding of current areas of focus, how these have been validated, and where more evidence may be required. Although previous studies have highlighted existing guidance available to developers~\cite{stylos2007mapping}, no work, to date, has systematized knowledge across 45 years, traced ancestral relationships, or examined the impact of such ancestry on current recommendations for security API usability. Our \ac{SLR} begins with 13 papers that provide \emph{Security API designer recommendations}. We trace and analyze their ancestry by identifying 883 recommendations in 65 papers across 45 years (Table~\ref{tab:papers}). 
These include papers offering general \ac{API} design recommendations (\emph{API designer}), those providing general security best practice (\emph{Security engineering}), and broader software engineering design guidance and recommendations (\emph{Software engineering}). We categorize recommendations in this corpus and analyze their ancestral chains in order to investigate three research questions: \textbf{\textit{RQ1: What do current recommendations focus on?}} Using thematic analysis, we developed 36 descriptive themes across 883 recommendations. These 36 themes are consolidated into 7 broad categories. While many papers have recommended improving documentation as a strategy to assist developers~\cite{bloch2006design,mindermann2018rust,nielsen1994enhancing,beaton2008usability,pane1996usability,zibran2008makes,robillard2009makes,patnaik2019usability,tondel2008security,bloch2001effective}, we are able to excavate how this is reflected in Security API designer usability papers. This means we can identify if Security API designer usability recommendations, for example, effectively address \emph{Documentation} or whether they focus more on the \emph{Construction} of \acp{API}. \begin{table*} \caption {Count of Recommendations (Papers) analyzed in the study, broken down by category of paper.} \centering \footnotesize \begin{tabular}{ >{\RaggedRight}p{\dimexpr 0.16\linewidth-2\tabcolsep} p{\dimexpr 0.57\linewidth-2\tabcolsep} p{\dimexpr 0.08\linewidth-2\tabcolsep} p{\dimexpr 0.19\linewidth-2\tabcolsep}} \toprule Paper Category & Description & Count & Papers \\ \midrule Security API designer recommendations & Literature that explicitly provides recommendations for improving the usability of security \acp{API} through design. 
& 84 (13) & \cite{green2016developers, mindermann2018rust, mendoza2018mobile, brown2017finding, patnaik2019usability, votipka2020securitymistakes, oneill2018securesocketAPI, gutmann2002cryptosoftware, oliveira2018APIblindspots, acar2017comparing, egele2013cryptomisuse, georgiev2012mostdangerouscode, meng2018securecodingpracticesjava}\\ \addlinespace API designer recommendations & Literature that focuses on \acp{API}, which may include limited elements of general practice that permits `good' security, but does not explicitly attend to security itself. & 285 (15) & \cite{bloch2006design, henning2007api, clarke2003using, robillard2009makes, mclellan1998building, stylos2007mapping, zibran2008makes, robillard2011APIlearningobstacles, rivieres2004lines, beaton2008usability, nino2009introducing, grill2012methods, bloch2001effective, pane1996usability, ko2004six} \\ \addlinespace Software engineering recommendations & Literature that is around generic software engineering and best practices in the form of recommendations. & 207 (17) & \cite{gamma1993design, myers2016improving, nielsen1994enhancing, cwalina2008framework, green1989cognitive, green1996usability, kannampallil2006handling, fowler2018refactoring, hoffman1990criteria, molich1990improving, holcomb1989amalgamated, polson1990theory, carroll1992getting, nielsen1996usability, heninger2012mining, Ko2004SoftwareErrorsFramework, Smith1982StarUI} \\ \addlinespace Security engineering recommendations & Literature in software engineering and computer security that explicitly make related recommendations but does not specifically focus on API security. 
& 307 (20) & \cite{OWASPDevGuide, assal2018security, MicrosoftSDL, BSIMM, OWASPSAMM, SeaCord2018SecurePractices, saltzer1975protection, fahl2012eve, ghafari2017security, gutmann1999design, saltzer1974protection, tondel2008security, mead2005security, haley2008security, bostrom2006extending, CLASP2005ApplicationSecurity, meier2006web, lipner2004trustworthy, apvrille2005secure, gorski2018developers} \\ \midrule Total & & 883 (65) & \\ \bottomrule \\ \end{tabular} \label{tab:papers} \end{table*} \textbf{\textit{RQ2: How, and to what extent, have various recommendations been validated?}} Through a review of recommendations made by different paper types, we find that less than a quarter (across the whole corpus) have been empirically validated. Only 3 papers from Security API designer guidance fall within this category. We also identify which of the 7 broad categories seem to receive greater focus from the research community regarding empirical validation and where potential gaps may lie. \textbf{\textit{RQ3: What are the implications of this coverage, in terms of ancestry and their validation, for the emerging set of Security API designer recommendations?}} In developing ancestries for 13 identified Security API designer papers, we find extended ancestry chains for almost half of these papers; however, empirical validation is limited within those chains. We also explore how these ancestries develop---by addressing usability challenges arising from \acp{API} relating to particular languages, specific security problems pertaining to particular applications, or via experiences from developing security analysis and verification tools. 
The systematization afforded by our investigation of these three RQs leads to a number of key insights: \emph{How recommendations propagate over time.} `Classics' in the field were identified, such as Saltzer \& Schroeder~\cite{saltzer1975protection}, Nielsen~\cite{nielsen1994enhancing}, and Gamma~\etal{}~\cite{gamma1993design}, who published recommendations to improve usability through design in software engineering and computer security. Over time, these recommendations were used, reused and adapted to address challenges in niche fields, as well as being empirically validated. Empirical validation is the scientific practice of verifying a claim through systematic experimentation; through it, recommendations gain stronger supporting research evidence. \emph{The need for deepening the focus on all facets of usability.} Our analysis shows that API designer and Security API designer papers pay more attention to the technical features of software and \acp{API}. In contrast, Software engineering papers focus strongly on improving the developer's understanding of the code and Security engineering papers address testing of software and \acp{API}. To improve the usability of security \acp{API}, the recommendations the community follows should place an equally strong focus on other themes that also influence their usability. \emph{The endurance and importance of the Classics~\cite{saltzer1975protection, bloch2001effective, gamma1993design, nielsen1994enhancing}.} Almost all the Security API designer papers derive their recommendations from well-used `classics'. This is seen in the ten usability recommendations proposed by Green \& Smith that are influenced by the works of both Bloch~\cite{bloch2006design} and Gutmann~\cite{gutmann2002cryptosoftware}: two different ancestry chains originating from Gamma~\etal{}~\cite{gamma1993design} and Saltzer \& Schroeder~\cite{saltzer1975protection}. 
The process of tracing the origins of our security API design recommendations introduced us to these works. At this point we asked questions such as: \begin{itemize} \item How does a paper written in 1975 by Saltzer \& Schroeder~\cite{saltzer1975protection} stand the test of time and show up in papers written in 2020? \item Why are these papers still relevant to security API design today? \item How did these recommendations transition from one community to another? From software engineering to computer security? \end{itemize} Evolution appears to occur when faced with a challenge. Saltzer \& Schroeder's design principles were adapted by Gutmann~\cite{gutmann1995cryptlib} to address the challenge of designing secure cryptographic APIs. Bloch needed to adapt his work from addressing the challenges of usage in Java programming to the challenges of API design~\cite{bloch2001effective, bloch2006design}. These recommendations evolved because they were flexible enough to address arising challenges of security engineering and security API design, while maintaining an actionable element. To provide a stronger empirical footing for the Security API designer research community, there is a need for a concerted effort to empirically validate prior recommendations in their ancestral chains, as well as to refine them and identify where gaps may lie. Works such as Saltzer \& Schroeder~\cite{saltzer1975protection} and Gamma~\etal{}'s design patterns~\cite{gamma1993design} have stood the test of time and we see their implicit influence across the field. Security API designer research needs to engage more explicitly with these classical works to both strengthen its foundations and improve the diversity and breadth of its ancestral links. Our SLR results in a set of \emph{\textbf{8 meta-recommendations}} which summarize 45 years of design guidance targeted at software engineering and computer security.
\section{Systematic Literature Review} \label{sec:slr} \subsection{Identifying seed-papers} To identify relevant work, we used a \emph{snowball method}~\cite{wohlin2014snowball}. This required us to first identify \emph{seed papers}, upon which our analysis could be based. By forward and backward snowballing from these seeds, we ensured we found connections between all recommendations emerging from these seeds. \subsubsection{Identifying Papers} \label{sec:search-terms} \newcommand{\fakesection}[1]{\noindent\textbf{\itshape #1.}} \fakesection{Step 1: Online Search} We used Google~Scholar and IEEE Xplore with the following search terms to select papers that offer Security API designer recommendations: \begin{center} \ttfamily API $\binom{\text{\ttfamily Usability|Guidelines|}}{\text{\ttfamily Principles|Design|Librar(y|ies)}}$? (Security)? \end{center} \newpage \fakesection{Step 2: Review of relevant journals and conferences} We reviewed six relevant venues from their first issue to their latest available (September 2020) and added any paper that appeared to offer Security API designer recommendations to our initial set. We reviewed the following venues as they represented the leading security and software engineering venues from the ACM, IEEE, and USENIX communities: \begin{itemize} \item {IEEE Transactions on Software Engineering (TSE)}, \item {IEEE Symposium on Security \& Privacy (Oakland)}, \item {International Conference on Software Engineering (ICSE)}, \item {USENIX Security Symposium (USENIX)}, \item {International Symposium on Usable Privacy \& Security (SOUPS)}, and \item {ACM Conference on Computer and Communications Security (CCS)}. \end{itemize} Papers from other venues were identified through the snowballing process. We selected 45 papers that provided Security API designer recommendations. \fakesection{Step 3: Selecting Relevant Work} From our initial selection, each paper was reviewed independently by 3 authors.
During review, we used the following inclusion and exclusion criteria to decide whether a paper would be included in our seed-list or not. \noindent\textbf{Inclusion:} \begin{itemize} \item The paper gave recommendations about improving an \ac{API} or programming interface. \item The recommendations aimed to improve the usability of the security API. \item The \ac{API} was designed to be used by programmers building an application, rather than end-users using a program for security (e.g. to accomplish PGP encryption). \end{itemize} \noindent\textbf{Exclusion:} \begin{itemize} \item The recommendations were not about \acp{API}, but rather a technology an API might wrap (e.g. the use of various cryptographic modes such as ECB, or the benefits of certain cryptographic algorithms). \item The recommendations were targeted at improving the engineering quality of an API rather than security directly. Whilst a secure API is often a well engineered API, the recommendations did not focus on security but rather broader engineering concerns (e.g., reducing an API's size to a few clear methods may reduce confusion, and be easier to verify---but unless the paper explicitly stated that this was for security, it would not be included). \item The recommendations did not discuss an API---several papers gave security recommendations for configuration management which were similar to recommendations for securing \acp{API}; but these papers focused on advising IT workers who maintain these systems and did not describe \emph{programming} interfaces. \item The recommendations given were too generic to offer any meaningful advice (e.g. ``an API must be secure''---we agree, but this recommendation offers no advice on how to achieve this). \end{itemize} Through the inclusion and exclusion criteria, 13 Security API designer \emph{seed papers} were used in our snowballing process.
\subsubsection{Snowballing} \fakesection{Step 4: Snowballing} We performed backward snowballing to trace the ancestry of the recommendations presented by our Security API designer seed papers, and we also performed forward snowballing on every paper along the ancestry chain to see if they were validated by other work~\cite{wohlin2014snowball}. At the end of this step we had 156 papers. \fakesection{Step 5: Identifying `actionable' recommendations from the snowball process} We sought papers that provide specific steps to improve \acp{API}, rather than general engineering guidance. From the 156 papers offering recommendations through the initial search and snowballing, we narrowed our recommendations to those that are \emph{actionable}---that is, they detail specific and clear steps to improve the usability of an API, such as: \fakesection{Improved Usability} \begin{quote} ``Easy to use, even without documentation: Developers like end-users do not like to read manuals before getting started. If the API is not self-explanatory or worse gives the false impression that it is, developers will make dangerous mistakes.''~\cite{green2016developers} \end{quote} \fakesection{Offered Guidance} \begin{quote} ``Economy of mechanism: Keep the design as simple and small as possible. This well-known principle applies to any aspect of a system, but it deserves emphasis for protection mechanisms for this reason: design and implementation errors that result in unwanted access paths will not be noticed during normal use (since normal use usually does not include attempts to exercise improper access paths). As a result, techniques such as line-by-line inspection of software and physical examination of hardware that implements protection mechanisms are necessary. For such techniques to be successful, a small and simple design is essential''~\cite{saltzer1975protection} \end{quote} However, those that were too \emph{general} (i.e.
not actionable), such as describing a general principle that should be taken into account to improve the usability of an API without detailing specifics about how that principle should be implemented, were excluded. Examples include: \fakesection{Developing General Principles} \begin{quote} ``If it’s hard to find good names, go back to the drawing board.''~\cite{bloch2006design} \end{quote} \fakesection{Directions on Design} \begin{quote} ``Offer Meaningful Options. The most crucial aspect of a security warning is to offer meaningful options to get out of the situation that triggered the warning.''~\cite{gorski2018developers} \end{quote} From the 156 papers, we identified 65 papers providing 883 actionable recommendations for improving usability and security. We also identified 91 papers that provided more general guidance. The actionable papers were taken forward for further analysis, and the 91 general papers were kept to show the connections between papers (i.e. where they had influenced or validated actionable guidance), and to describe the ancestry of API recommendations. \fakesection{Step 6: Deriving Paper Types} From our 65 actionable papers, 2 authors allocated each paper to one of four paper types, as shown in Table~\ref{tab:papers}, based on an inductive process derived from the papers themselves. These comprise the first paper type of `Security API designer' in addition to three others. This process offers a high-level overview of what each paper category broadly addresses and helps us to understand how recommendations propagate across different communities.
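Conceptually, the snowballing in Step 4 is a breadth-first traversal over citation links that stops once no new papers appear. A minimal sketch, using hypothetical citation tables as stand-ins for our manual reference and cited-by lookups (the paper keys are illustrative, not our actual corpus):

```python
from collections import deque

# Hypothetical citation tables (stand-ins for manual lookups, not a real API):
# CITES maps a paper to the papers it references (backward snowballing);
# CITED_BY maps a paper to the papers that cite it (forward snowballing).
CITES = {
    "seed1": ["bloch2006design"],
    "bloch2006design": ["bloch2001effective"],
    "bloch2001effective": [],
    "acar2017comparing": ["bloch2006design"],
}
CITED_BY = {
    "seed1": [],
    "bloch2006design": ["seed1", "acar2017comparing"],
    "bloch2001effective": ["bloch2006design"],
    "acar2017comparing": [],
}

def snowball(seeds):
    """Backward snowballing to trace ancestry, plus forward snowballing on
    every paper found along the chain, until the set of papers closes."""
    found, queue = set(seeds), deque(seeds)
    while queue:
        paper = queue.popleft()
        # Backward: ancestors the paper derives recommendations from.
        # Forward: descendants that may validate it.
        for nxt in CITES.get(paper, []) + CITED_BY.get(paper, []):
            if nxt not in found:
                found.add(nxt)
                queue.append(nxt)
    return found

print(sorted(snowball(["seed1"])))
```

In practice each candidate surfaced by the traversal was then screened against the inclusion and exclusion criteria above, rather than being accepted automatically.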
\subsubsection{Analyzing Recommendations}\label{sec:analysis} \begin{table*}\footnotesize\centering \caption{Codebook showing \textbf{Categories} and \subcat{Descriptors} for the recommendations identified} \begin{tabular}{p{\dimexpr 0.25\linewidth-2\tabcolsep}p{\dimexpr 0.745\linewidth-2\tabcolsep}} \toprule Code & Description \\ \midrule \textbf{Assessment} & \textbf{The quality and testing of software and \acp{API}.}\\\addlinespace \subcat{Quality Engineering} & The development, good practice, and management of software and \acp{API}.\\\addlinespace \subcat{Quality Assurance} & The methods and tools used to assess and audit software and \acp{API}. \\ \midrule \textbf{Construction} & \textbf{The technical features of software and \acp{API}.}\\\addlinespace \subcat{Abstraction} & Expressing different components of software and \acp{API} through abstraction.\\\addlinespace \subcat{Access Validation} & Complete Mediation---Checking for access.\\\addlinespace \subcat{Code} & Any code or data involved in the construction of software and \acp{API}.\\\addlinespace \subcat{Error Handling} & How software and \acp{API} deal with errors.\\\addlinespace \subcat{Economy of Mechanism} & Ensuring minimalist and simple design of software and \acp{API}.\\\addlinespace \subcat{Open Design} & Ensuring that it is clear what the design is.\\\addlinespace \subcat{Technical Specifics} & Any element not covered by the other `Construction' descriptors.\\\addlinespace \subcat{Durability} & How software and \acp{API} develop over time, are maintained, and can be deprecated.
\\ \midrule \textbf{Default Secure} & \textbf{The different methods and practices to develop security as a fundamental outcome.}\\\addlinespace \subcat{Bug and Defect Management} & Processes and practices for the handling of bugs and defects.\\\addlinespace \subcat{Fail-Safe Default} & How software or an API ensures that it will always provide, by default, the safest option.\\\addlinespace \subcat{Secure Architecture} & How software and API architecture is designed with security at its core.\\\addlinespace \subcat{Compartmentalization} & Least Common Mechanism---Ensuring that things are not unnecessarily shared.\\ \midrule \textbf{Documentation} & \textbf{Documentation methods and practices.}\\\addlinespace \subcat{Explain} & How well documentation describes the usage of an API or software.\\\addlinespace \subcat{Inventory / COTS} & The development of an inventory to record the different components of an API and software.\\\addlinespace \subcat{Telemetry and Reporting} & Ensuring active collection and recording of data and information.\\\addlinespace \subcat{Publish and Communicate} & The use of documentation to distribute or to offer information.\\\addlinespace \subcat{Standardized} & Ensuring that documentation provides cohesive standards.\\\addlinespace \subcat{Exemplars} & The use of examples (frequently code) to help explain different aspects of software and \acp{API}.\\\addlinespace \subcat{Guidance} & The development of guidance for users or designers.\\ \midrule \textbf{Organizational Factors} & \textbf{How organizations respond to developing software and \acp{API} and interface with external factors.}\\\addlinespace \subcat{Incident Response} & The development of practices to respond to emergencies or incidents from software and \acp{API}.\\\addlinespace \subcat{Security Practice} & How an organization develops knowledge and practice of security.\\\addlinespace \subcat{Training} & The delivery of training for organizations and their
members.\\\addlinespace \subcat{Third Party} & How organizations interface with third parties and third party components.\\\addlinespace \subcat{Regulatory} & Any regulatory, legal, or compliance activity that an organization undertakes.\\\addlinespace \subcat{Risk Assessment and Metrics} & Assessing risk and developing metrics to measure it.\\ \midrule \textbf{Requirements} & \textbf{The development of requirements for software and \acp{API}.}\\\addlinespace \subcat{Implement Requirements} & The implementation and application of requirements.\\\addlinespace \subcat{Write Requirements} & The construction, identification, and development of requirements.\\ \midrule \textbf{Understanding} & \textbf{How software and \acp{API} come to be understood and practiced by humans.}\\\addlinespace \subcat{Assist} & Psychological Acceptability---how an API user or API developer deals with the load of programming and techniques to assist developers.\\\addlinespace \subcat{Drawing Attention} & Highlighting or pointing towards information required for proper or secure use of software and \acp{API}.\\\addlinespace \subcat{Misuse} & The prevention of an API user or API developer misusing software and \acp{API}.\\\addlinespace \subcat{Relevant Information} & The provision of information that concerns a particular task or object of study.\\\addlinespace \subcat{Meaningful Options} & The provision of options that make sense to API users.\\\addlinespace \subcat{Sufficient Information} & The provision of enough information in order to effectively communicate and provide understanding.\\\addlinespace \subcat{Validation of Activity} & Providing API users and API developers tools that check their activities.\\ \bottomrule \end{tabular} \label{tab:category-code-book} \end{table*} \fakesection{Step 7: Categorizing Recommendations} To understand the different areas of recommendation focus, we categorized each of our 65 actionable guidance papers.
To alleviate bias from predefined categorization, the analysis followed an inductive, bottom-up approach~\cite{rivas2012coding} to draw out recommendation themes. \begin{enumerate} \item Two authors, in joint discussion, selected 50 recommendations to identify different \emph{codes} in order to build a mutual understanding of the recommendations. An initial \emph{codebook}~\cite{ando2014achieving} with 28 codes was created. \item Over three iterations, the 883 recommendations were categorized using the initial codebook. Additional codes were created to capture the diversity of recommendations identified during the process. The initial codebook was updated to include 19 categories and 75 descriptive sub-categories. \item Through a visual whiteboard mapping discussion between three authors, the codes were reduced in number and amalgamated. This resulted in a consolidated codebook with 7 categories and 36 descriptive sub-categories, as shown in Table~\ref{tab:category-code-book}. The mappings of categories onto recommendations were updated using the new codebook. \end{enumerate} Most recommendations were assigned a single top-level category and one of its sub-categories. Nearly a fifth ($\frac{162}{883}$, 18\%) were more complex---combining multiple elements of API design, security, or general software engineering guidance---and so two categories were used. No recommendation required more than two top-level categories. For example, Pane and Myers recommend supporting novice programmers: \begin{quote} ``Use Signalling to Highlight Important Information''~\cite{pane1996usability} \end{quote} This was assigned to the \emph{Understanding: Drawing Attention} category and descriptor sub-category as it is concerned with helping the developer identify relevant information.
Later, Pane and Myers also recommend: \begin{quote} ``Help detect, diagnose, and recover from errors''~\cite{pane1996usability} \end{quote} This was assigned two categories: \emph{Understanding: Assist} as the recommendation concerns developer assistance to diagnose problems, and \emph{Construction: Error Handling} as it deals with recovery from errors. \fakesection{Step 8: Validating categorization} To validate our categories, a random 10\% sample of the recommendations was assessed by an independent coder. Using \emph{Cohen's $\kappa$} (a common measure of interrater reliability~\cite{cohen1960coefficient}), and using only a single category per item (as Cohen's $\kappa$ only deals with single categorizations per subject), we calculated a $\kappa$ of 0.74 when mapping the categories, and 0.76 when mapping the descriptors---indicating substantial agreement~\cite{landis1977kappa} between coders. \subsubsection{Validation Analysis}\label{sec:validation-analysis} \fakesection{Step 9: Assessing how papers relate to each other} Using the relationships between papers captured in group discussion between three authors, 5 different \emph{ancestor--descendant relationships}---where papers interact with each other's work---were identified. Within the ancestor--descendant relationships, we make a distinction between empirical validation and the 4 remaining relationships. Through empirical validation it is possible to test recommendation effectiveness through experimentation or systematic observation; this bar is lower for the other ancestor--descendant relationships. These relationships were assigned by one author and reviewed by another, and are presented in Table~\ref{fig:validation-code-book}. Below are examples of how each ancestor--descendant relationship and empirical validation relate to the literature.
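The agreement statistic in Step 8 can be reproduced in a few lines: Cohen's $\kappa$ compares the observed agreement between two coders against the agreement expected by chance from their label frequencies. A minimal sketch, where the labels are hypothetical stand-ins rather than our actual coding data:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two raters assigning one category per item."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed agreement: fraction of items both coders labelled identically.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement under independence, from each coder's label marginals.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical category labels for 10 recommendations (illustrative only):
a = ["Construction", "Understanding", "Construction", "Assessment",
     "Documentation", "Construction", "Understanding", "Requirements",
     "Construction", "Assessment"]
b = ["Construction", "Understanding", "Documentation", "Assessment",
     "Documentation", "Construction", "Understanding", "Requirements",
     "Construction", "Understanding"]
print(round(cohens_kappa(a, b), 2))  # prints 0.74
```

Here 8 of 10 items agree ($p_o = 0.8$) against a chance agreement of $p_e = 0.23$, giving $\kappa \approx 0.74$; values above roughly 0.61 are conventionally read as substantial agreement~\cite{landis1977kappa}.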
\begin{table} \caption{Codebook used for describing 5 different kinds of ancestor--descendant relationships between papers, including empirical validation.} \footnotesize\centering \begin{tabular}{p{\dimexpr 0.25\linewidth-2\tabcolsep}p{\dimexpr 0.75\linewidth-2\tabcolsep}} \toprule Code & Description \\ \midrule Distillation & Descendant condenses an Ancestor's recommendation to further build upon it by addressing a more specific challenge. \\ \addlinespace Borrowed & Descendant addresses an Ancestor's guidelines at a superficial level, either to review them or to run an experiment and analyze the results.\\ \addlinespace Adaptation & Descendant has translated recommendations from an Ancestor, either through re-wording or by forming their own recommendations directly derived from the Ancestor. \\ \addlinespace Comparison & Descendant contrasts an Ancestor's recommendation with another set of recommendations, perhaps written by another Descendant, or compares their own recommendation to that of their Ancestor.\\ \addlinespace Empirical & Descendant has experimented with and evaluated through research an Ancestor's recommendation, assessing whether that recommendation is valid.\\ \bottomrule \end{tabular} \label{fig:validation-code-book} \end{table} \textbf{Distillation} An evolution occurs from Bloch's 2001 book \emph{Effective Java}~\cite{bloch2001effective} to his 2006 paper \emph{How to Design a Good API and Why it Matters}~\cite{bloch2006design}. The book on Java programming is condensed by the short paper into API usability recommendations---with the latter paper used frequently by Security API designer papers.
\textbf{Borrowed} Myers and Stylos~\cite{myers2016improving} refer to the ancestry between Grill~\etal{}~\cite{grill2012methods} and Nielsen and Molich~\cite{nielsen1990heuristic}: \begin{quote} ``Grill~\etal{} described a method where they had experts use Nielsen’s Heuristic Evaluation to identify problems with an API and observed developers learning to use the same API in the lab. An interesting finding was these two methods revealed mostly independent sets of problems with that API.''~\cite{myers2016improving} \end{quote} Grill~\etal{} thus \emph{borrowed} Nielsen's heuristics, applying them to evaluate an API without building upon them further. \textbf{Adaptation} In 2017 Acar~\etal{}~\cite{acar2017comparing} ran an experiment to evaluate and compare the usability of 5 Python-based cryptographic libraries. To evaluate the usability of these cryptographic libraries, Acar~\etal{} adapted recommendations from Bloch~\cite{bloch2006design}, and from Green \& Smith~\cite{green2016developers}. For example, Acar~\etal{} state: \begin{quote} ``We adapt guidelines from these various sources [Bloch~\cite{bloch2006design}, Green \& Smith~\cite{green2016developers}] to evaluate the \acp{API} we examine.''~\cite{acar2017comparing} \end{quote} Acar \emph{adapts} Bloch and Green \& Smith's recommendations. \textbf{Comparison} Smith~\cite{smith2012contemporary} reflects on the work of Saltzer~\cite{saltzer1974protection} along with the 1975 paper written with Schroeder~\cite{saltzer1975protection}. Here, Smith compares the `principles' of Saltzer \& Schroeder to those of then-contemporary recommendations in software security. \begin{quote} ``The following are new---or newly stated---principles compared to those described in 1975.''~\cite{smith2012contemporary} \end{quote} Smith makes a \emph{comparison} to the principles of Saltzer \& Schroeder.
\textbf{Empirical} In 2019 Patnaik~\etal{}~\cite{patnaik2019usability} \emph{empirically} evaluate the 10 principles designed to improve usability and security of \acp{API} by Green and Smith~\cite{green2016developers}. \begin{quote} ``An empirical validation of Green and Smith’s principles showing when a principle is not being applied but also identifying issues that Green and Smith’s principles currently do not capture.''~\cite{patnaik2019usability} \end{quote} \section{SLR Findings} Table~\ref{tab:category-code-book} outlines the 7 recommendation categories and 36 descriptor sub-categories. The 7 categories describe overarching themes and topics about which papers make recommendations. The descriptor sub-categories offer greater detail within each of the categories. For example: the \emph{Construction} category captures recommendations about how to structure and build software. Its \emph{Code} descriptor sub-category identifies a focus on particular programming details. Bloch, for instance, advises developers to: \begin{quote} ``Return zero-length arrays, not nulls.''~\cite{bloch2001effective} \end{quote} In contrast, the \emph{Economy of Mechanism} descriptor captures simple code construction to avoid errors---found in Nino~et~al{.}'s \emph{``be minimal''}~\cite{nino2009introducing}, Grill~et~al{.}'s \emph{``Do not provide multiple ways to achieve one thing''}~\cite{grill2012methods}, OWASP's \emph{``Keep It Simple, Stupid Principle''}~\cite{OWASPDevGuide} or Saltzer \& Schroeder's \emph{Economy of Mechanism} principle~\cite{saltzer1975protection}; from which we name the descriptor. We also observe recommendations on how to document code (the \emph{Documentation} category)---typically focused on clear explanation, communication, standardization and exemplars.
Recommendations also assist \ac{API} users' \emph{Understanding} by aligning concepts with their mental models and helping them with the cognitive load of programming (the \emph{Assist} descriptor): for example Ko~et~al{.} recommend: \begin{quote} ``Help programmers recover from interruptions or delays by reminding them of their previous actions''~\cite{Ko2004SoftwareErrorsFramework} \end{quote} Other topics in the \emph{Understanding} category include \emph{Drawing Attention} to \emph{Relevant} and \emph{Sufficient} information, as well as providing \emph{Meaningful Options}. Figure~\ref{tab:categories-counts} shows the number of recommendations in each paper type mapped to each category and descriptive sub-category. We find that, over the 883 recommendations, the majority concern the construction and structure of code (the \emph{Construction} category, 32\%), as well as helping to make the code easier to comprehend and clear to the developer (the \emph{Understanding} category, 23\%). The remaining recommendations are more or less evenly spread across the 5 remaining categories (around 7--14\%). \subsection{RQ1: What do current recommendations focus on?} \begin{takeaway} \noindent\textbf{Take-Away 1:} \begin{itemize} \item API designer and Software engineering papers focus on how to \emph{structure} code and how to make it \emph{understandable}. \item Practically only Security engineering papers make recommendations about \emph{Organizational Factors}. \item Security API designer papers do not engage sufficiently with various aspects of \emph{Documentation} or \emph{Understanding: Validation of Activity}, which should be addressed in future work. \end{itemize} \end{takeaway} \begin{figure*} \centering\footnotesize \newcommand{\tableheader}[1]{\rotatebox{90}{#1}} \input{subcategory-counts} \caption{Recommendations mapped to category and paper type. Some recommendations were assigned multiple categories. 
All of the recommendations were assigned to at least 1 category.} \label{tab:categories-counts} \end{figure*} Recommendations, as derived from our literature search and ancestral tracing, tend to favor technical aspects. However, when we break these down by paper type, we see significant variations, enabling us to assess what may currently be missed by various paper types and how Security API designer literature compares to other communities. Both \emph{API designer} and \emph{Security API designer} paper types offer more recommendations on API \emph{Construction} and its \emph{Code}. The \emph{Construction} category is associated with 57\% of API designer and 36\% of Security API designer paper types, with \emph{Understanding} covering 21\% and 24\% of the paper types respectively. As API-related recommendations are likely to deal with the interface with code, it is reasonable to expect these paper types to focus more on \emph{Construction}. Recent work on recommendations for security API usability~\cite{green2016developers,patnaik2019usability} suggests that we may be witnessing a move to recommendations around improving code usability (\emph{Understanding}), but this is limited by the number of papers in this type (13---see Table~\ref{tab:papers}). For recommendations in \emph{Software engineering guidance}, this relationship is reversed. A greater emphasis is placed on \emph{Understanding} (54\%), with a reduced focus on \emph{Construction} (25\%). Security engineering papers also covered different topics. Unlike other paper types, \emph{Security engineering guidance} focused less on \emph{Construction} and \emph{Understanding} (11\% and 3\% respectively), and instead offered greater attention to other categories such as \emph{Assessment} (25\%) and \emph{Requirements} (18\%). Recommendations categorized under \emph{Organizational Factors} are almost exclusively derived from Security engineering papers.
These recommendations concern how an organization and its developers follow best practice, legal requirements, and incident handling processes for software defects. The relationship between corporate and academic literature is visible in Security engineering papers such as Tondel~\cite{tondel2008security} and Assal~\cite{assal2018security} (Figure~\ref{fig:SLRAncestryMap} in the Appendix). Many of the recommendations in this category come from corporate grey literature (such as Microsoft's SDL~\cite{MicrosoftSDL}, the BSIMM framework~\cite{BSIMM} and OWASP~\cite{OWASPSAMM}). For example, Microsoft's SDL encourages developers to \emph{``Establish a standard incident response process''}~\cite{MicrosoftSDL} so that there are mechanisms for dealing with software defects when they are inevitably discovered (which we capture under the \emph{Incident Response} descriptor). BSIMM recommends that organizations should \emph{``educate executives''}~\cite{BSIMM} so that `decision-makers' in an organization are sufficiently knowledgeable about security (\emph{Organizational Factors: Training}). Interestingly, recommendations focused on organizations were almost exclusively limited to Security engineering papers. This suggests a greater emphasis in the security community on considering the wider implications for developers and software in organizations and the impact of external contexts on being secure. The recommendations reflect how organizational factors may affect the implementation and design of security \acp{API}. For Security API designer papers, we find that in the \emph{Construction} category there is a greater emphasis on the \emph{Code} sub-category. This shows that research is dominated by the challenges developers face, when working with security \acp{API}, in writing code and implementing the functions of the API.
This challenge is studied in more depth by Georgiev~\etal{} who provide recommendations to mitigate and resolve the issue for various parties~\cite{georgiev2012mostdangerouscode}. At 17\%, this is the largest percentage of descriptors in this paper type. There are, however, some descriptor sub-categories with no instances: \begin{itemize} \item \emph{Documentation} \begin{itemize} \item Inventory/COTS; \item Telemetry and Reporting, \end{itemize} \item \emph{Organizational Factors} \begin{itemize} \item Incident Response; \item Third Party; \item Regulatory; \item Risk Assessment and Metrics, \end{itemize} \item \emph{Requirements} \begin{itemize} \item Implement Requirements; \item Write Requirements, \end{itemize} \item \emph{Understanding} \begin{itemize} \item Sufficient Information. \end{itemize} \end{itemize} Whereas some of these may be less pertinent to Security API designer papers, our analysis highlights that further consideration of these may be required to increase the breadth and depth of the community's engagement with different aspects pertinent to improving security API usability. \subsection{\mbox{RQ2: Are we validating recommendations?}} \begin{takeaway} \noindent\textbf{Take-Away 2:} \begin{itemize} \item Today's Security API designer recommendations build upon those presented in historical papers. However, across this ancestry, only 22\% of the papers are empirically validated, meaning further work should be conducted to strengthen their foundations. \item Of the Security engineering papers that receive empirical validation or are part of an ancestor--descendant relationship, more than half are through an adaptation of recommendations. \item Only 3 of the 13 Security API designer papers have been empirically validated. \end{itemize} \end{takeaway} Our analysis identifies the need for a stronger focus on validating the various recommendations across different paper types, but particularly so for Security API designer papers.
To assess the relationships between different recommendations over time, we constructed their ancestry by separating each recommendation's inheritance into five distinct ancestor--descendant relationships (see Table~\ref{fig:validation-code-book}): \emph{Distillation}, \emph{Borrowed}, \emph{Adaptation}, \emph{Comparison}, and \emph{Empirical}. \subsubsection{Validation by Paper Type} \begin{figure} \centering\footnotesize \newcommand{\tableheader}[1]{\rotatebox{90}{#1}} \input{validation-counts} \caption{Rates of different kinds of paper validation for different categories of guideline papers in the literature. The {overall} columns and rows account for single papers being validated multiple times.} \label{tab:validation-counts} \end{figure} Figure~\ref{tab:validation-counts} summarizes the number of papers associated with empirical validation and the other ancestor--descendant relationships. Full charts of the various relationships we identified through mappings are presented in the Appendix in Figure~\ref{fig:SLRAncestryMap}; these show the full ancestry of the recommendations. Overall, 22\% of the papers engage in empirical validation of prior work. 8 (53\%) Software engineering and 2 (47\%) Security engineering papers are empirically validated, whereas only 3 Security API designer and 1 API designer papers are empirically validated. Recommendations written more recently, as part of the software engineering and computer security communities, have built upon some form of ancestor--descendant relationship with, or empirical validation of, Software engineering and Security engineering papers like Nielsen's usability heuristics~\cite{nielsen1994enhancing} and Saltzer \& Schroeder's principles~\cite{saltzer1975protection}. Though efforts have been made to empirically validate older papers~\cite{nielsen1994enhancing, saltzer1975protection}, contemporary API recommendations are inherited from a large corpus of papers, of which only 22\% are empirically validated.
This raises the need to further understand and validate how API recommendations are built. Of the 13 Security API designer papers, only 3~\cite{green2016developers, georgiev2012mostdangerouscode, egele2013cryptomisuse} are empirically validated. As we create further recommendations, we must consider the role of ancestry, and the validation of what it recommends, in order to strengthen the foundations of Security API designer recommendations. Otherwise we have no way of establishing whether particular recommendations---and the effort invested in following them---have a material impact on improving the usability of security \acp{API}. We also risk propagating ineffective recommendations over time. \subsubsection{Which aspects are we validating?} If the academic literature is not conducting extensive, in-depth validation of all areas, then which aspects are we validating? Figure~\ref{tab:validated-category-counts} counts the different recommendation categories broken down by their ancestor--descendant relationship, including empirical validation. Empirical validation concentrates on \emph{Construction} (45\%), followed by \emph{Understanding} (27\%). For other ancestral relationships, categories exhibit different rates, but overall these are at the levels we would expect given their relative frequency across different paper types (Figure~\ref{tab:validated-category-counts}). The software engineering and security communities should focus on forming more empirical and comparison-based relationships, as opposed to borrowing and distillation, to best ensure the effectiveness of the recommendations through thorough, repeatable experimentation (see Figure~\ref{tab:validated-category-counts}).
\begin{figure*} \centering\footnotesize \newcommand{\tableheader}[1]{\rotatebox{90}{#1}} \input{validated-category-counts} \caption{Counts of recommendations that have been empirically validated or are part of other ancestor--descendant relationships across the 7 broad category types.} \label{tab:validated-category-counts} \end{figure*} \subsection{RQ3: Where do Security API Designer Recommendations Come From?} \begin{takeaway} \noindent\textbf{Take-Away 3:} \begin{itemize} \item Almost half of the Security API designer papers have a well-defined and long ancestry, dating back to 1974. \item There is a distinction between the capacity to validate \emph{abstract} recommendations and \emph{concrete} ones (derived from experience with particular tools and applications). \item Recommendations derive mainly from `standalone' ancestries, or are processed through Gutmann~\cite{gutmann2002cryptosoftware} or subsequently through Green \& Smith~\cite{green2016developers}. \end{itemize} \end{takeaway} In the development of Security API designer recommendations, we identified two broad forms---abstract and concrete---in the ancestries we analyzed. First, \emph{abstract} recommendations, such as those by Green \& Smith~\cite{green2016developers}, apply to a number of tools, applications, and contexts. Second, there are \emph{concrete} recommendations, identified through the ancestries of tools and applications~\cite{egele2013cryptomisuse, mendoza2018mobile}. These tend to be more tightly focused on a particular tool or application---and therefore offer advice that, as one would expect, focuses more exclusively on \emph{Construction} and \emph{Requirements}.
\fakesection{\emph{Abstract} Recommendation} \begin{quote} ``Defaults should be safe and never ambiguous.''~\cite{green2016developers} \end{quote} \fakesection{\emph{Concrete} Recommendation} \begin{quote} ``Client-side validation must be thoroughly tested for consistency with server-side validation logic. WARDroid can help in identifying potential inconsistencies''~\cite{mendoza2018mobile} \end{quote} From the examples given above, we see that Green \& Smith provide a recommendation for designing security \acp{API}. The recommendation can be applied to tools, to API design, and to the general practice of developers who are integrating security into their applications. By contrast, Mendoza~\etal{} offer a concrete recommendation: a policy expressed through WARDroid~\cite{mendoza2018mobile} that addresses API \emph{Construction}. Ancestries tell a complex story involving concrete recommendations, which may be easier to validate but are often \emph{standalone}, and broader recommendations, which require a wide array of studies (over time) for validation. Furthermore, to devise a method for validating broader recommendations, one may need to refer back to their ancestry to understand the reason for their transformation over time. Concrete recommendations may be easier to validate because they are frequently associated with a specific tool. The tool is presented as a solution that informs the recommendations the study provides, so these recommendations can be empirically validated by validating the use of the tool. As a result, it is easier to see direct effects and assess the impact of the recommendations on the usability of security \acp{API}. Abstract recommendations, however, are intentionally broader so that they can be applied across software design and security.
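Green \& Smith's `safe defaults' principle can be illustrated with a small sketch (a hypothetical API of our own construction, not code from any of the surveyed papers): a password-hashing helper whose security-relevant parameters all default to safe values, so a caller who supplies nothing still ends up with a strong configuration.

```python
import hashlib
import os
from typing import Optional

# Hypothetical API sketch illustrating the "safe defaults" principle:
# the caller can omit every security-relevant parameter and still get
# a reasonable configuration.
SAFE_ITERATIONS = 600_000  # assumption: a currently recommended PBKDF2 count


def hash_password(password: str, salt: Optional[bytes] = None,
                  iterations: int = SAFE_ITERATIONS):
    """Derive a password hash; every default is chosen to be safe."""
    if salt is None:
        salt = os.urandom(16)  # fresh random salt by default
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return digest, salt, iterations


# A caller who knows nothing about iteration counts or salts is still safe.
digest, salt, iters = hash_password("correct horse battery staple")
assert iters == SAFE_ITERATIONS
```

The design choice is the one the principle names: the dangerous knobs exist for experts, but ignorance of them never produces a weak result.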
Studies that propose recommendations based on the insights of older papers presenting abstract recommendations should dedicate their efforts to validating those recommendations through experimentation designed to measure their effect on the usability of security \acp{API}. Such studies should also consider that the ancestral recommendations upon which they build may not be strongly validated themselves. The full chart of ancestor--descendant relationships, including empirical validation between papers, is shown in the Appendix (Figure~\ref{fig:SLRAncestryMap}). This shows instances of empirical validation and the ancestor--descendant relationships between papers (and the recommendations they provide) across our corpus. We focus on the 13 Security API designer papers and discuss how the majority of the recommendations they provide derive from Saltzer \& Schroeder~\cite{saltzer1975protection}, Bloch~\cite{bloch2001effective, bloch2006design}, and Gamma~\etal{}~\cite{gamma1993design}. This is an important finding because it shows that these works have stood the test of time and that their recommendations have evolved through multiple works to become a strong influence on security API design recommendations today. Of further interest, we find instances where recommendations from the 1975 paper of Saltzer \& Schroeder are directly referenced by security API design works of 2020. \subsubsection{\mbox{Saltzer \& Schroeder: Once Upon A Time}} In 1975, Saltzer \& Schroeder wrote the paper `The Protection of Information in Computer Systems', in which they presented 8 design principles to help guide the design of protection mechanisms and prevent security flaws~\cite{saltzer1975protection}. The design principles were an \emph{adaptation}, revising material originally published by Saltzer in 1974~\cite{saltzer1974protection}.
Saltzer's earlier work from 1974 has been \emph{borrowed} by Schneider~\cite{schneider1999enforceable}, \emph{distilled} by Gong \& Ellison to influence Java platform security~\cite{Gong:2003:IJP:599797}, and \emph{compared} through a contemporary look at Saltzer \& Schroeder's design principles by Smith~\cite{smith2012contemporary}. Saltzer \& Schroeder's design principles have also been \emph{empirically} validated and \emph{borrowed}~\cite{siponen2000critical,denning1982cryptography}. These relationships can be seen in Figure~\ref{fig:SLRLanguageMiniMap} in the Appendix. Saltzer \& Schroeder's work has been thoroughly influential through a range of relationships with works very different from each other, addressing fields from security policies~\cite{schneider1999enforceable} and security for Java applications~\cite{Gong:2003:IJP:599797} to cryptographic \acp{API}. This level of influence establishes Saltzer \& Schroeder's work as a classic and a strong foundation for security API design recommendations. Gutmann's release of the Cryptlib cryptographic API in 1995 acted as a gateway between the security engineering recommendations published by Saltzer \& Schroeder and 7 of our 13 Security API designer seeds. Gutmann \emph{adapted} Saltzer \& Schroeder's design principles when designing Cryptlib~\cite{gutmann1995cryptlib}. Cryptlib advertised a `high-level interface', showing how abstraction can improve usability while maintaining a strong level of security. \begin{quote} ``Cryptlib provides anyone with the ability to add strong security capabilities to an application in as little as half an hour, without needing to know any of the low-level details that make the encryption or authentication work.''~\cite{gutmann1995cryptlib} \end{quote} Bernstein~\etal{} build on the design of Cryptlib and present NaCl, an even more abstracted cryptographic API, \emph{comparing} NaCl to Cryptlib in detail~\cite{bernstein2012security}.
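The `high-level interface' argument made by Cryptlib and NaCl can be sketched generically (an illustrative standard-library-only sketch of the design idea, not code from either library): the caller sees only an authenticate/verify pair, while the algorithm choice and the constant-time comparison are hidden inside the interface.

```python
import hashlib
import hmac

# Sketch of the "high-level interface" idea: callers never choose an
# algorithm, an encoding, or a comparison routine themselves.
_ALGORITHM = "sha256"  # internal choice, invisible to callers


def authenticate(key: bytes, message: bytes) -> bytes:
    """Return an authentication tag for message under key."""
    return hmac.new(key, message, _ALGORITHM).digest()


def verify(key: bytes, message: bytes, tag: bytes) -> bool:
    """Check a tag in constant time; callers cannot get this wrong."""
    return hmac.compare_digest(authenticate(key, message), tag)


key = b"\x01" * 32
tag = authenticate(key, b"hello")
assert verify(key, b"hello", tag)
assert not verify(key, b"tampered", tag)
```

The point is the shape of the interface, not the particular primitive: a low-level library would instead expose the digest choice and leave the (timing-sensitive) comparison to every caller.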
NaCl itself has forks such as Libsodium, which in turn has forks including Monocypher. We can see how strongly Gutmann's adaptation of Saltzer \& Schroeder's principles has influenced the design of cryptographic \acp{API} and applications today. Later, in 1999, Gutmann carried forward these adaptations when presenting his own set of recommendations to help improve the design of cryptographic security architecture~\cite{gutmann1999design}. Gutmann's recommendations were also an \emph{adaptation} of principles used to design the NSA's Security Service API. Gutmann also \emph{compared} his recommendations to those used to design Microsoft's Crypto API. In 2002, Gutmann concludes his trilogy by presenting a set of recommendations in the form of lessons learned from implementing cryptographic software~\cite{gutmann2002cryptosoftware}. These recommendations are the oldest of the Security API designer seeds. Compared to Saltzer \& Schroeder, Gutmann's recommendations~\cite{gutmann2002cryptosoftware} have not been as widely validated by, or related to, other works. The recommendations were \emph{distilled} in 2012 by Heninger~\etal{}~\cite{heninger2012mining} and later, in 2016, by Green \& Smith~\cite{green2016developers}. \begin{quote} ``Most crypto software is written with the assumption that the user knows what they’re doing, and will choose the most appropriate algorithm and mode of operation, carefully manage key generation and secure key storage, employ the crypto in a suitably safe manner, and do a great many other things that require fairly detailed crypto knowledge. However, since most implementers are everyday programmers .. the inevitable result is the creation of products with genuine naugahyde crypto.''~\cite{gutmann2002cryptosoftware} \end{quote} In 2016, Green \& Smith published a list of 10 recommendations to help developers create more usable and secure cryptographic \acp{API}~\cite{green2016developers}.
The recommendations stemmed from the \emph{distillation} of Gutmann's work and the \emph{adaptation} of a series of API design recommendations defined by Bloch in 2006~\cite{bloch2006design}. Green \& Smith's work is the ancestor to 4 Security API designer seed papers. Their recommendations are also empirically validated by Patnaik~\etal{}~\cite{patnaik2019usability}, another Security API designer seed paper. Patnaik~\etal{}~\cite{patnaik2019usability} offered an \emph{empirical} validation in this chain by evaluating the 10 Green \& Smith~\cite{green2016developers} principles. Through an analysis of over 2400 Stack Overflow questions and responses from developers facing challenges using 7 cryptographic libraries, they found 16 usability issues, which were mapped against the 10 principles of Green \& Smith~\cite{green2016developers}. They analyzed the extent to which the 10 principles encompassed the 16 usability issues and also identified additional issues that were not addressed by Green \& Smith's principles. Based on this, they derived additional recommendations: \emph{4 usability smells}, indicators that an interface may be difficult to use for its intended users. In 2018, Mindermann~\etal{} presented recommendations for designing cryptographic libraries based on an experiment using Rust cryptographic \acp{API}~\cite{mindermann2018rust}. They addressed insecure defaults, the advertisement of authenticated encryption in low-level libraries, the lack of warnings about deprecated or broken features, and the scarcity of documentation and example code in low-level libraries.
They \emph{compare} their set of recommendations against the 10 principles defined by Green \& Smith, noting: \begin{quote} ``Compared to Green \& Smith’s top ten principles, our recommendations are more specific but do not conflict with their suggestions.''~\cite{mindermann2018rust} \end{quote} Acar~\etal{}~\cite{acar2017comparing} \emph{adapted} Green \& Smith to evaluate participants' solutions from a controlled experiment in which 256 Python developers attempted tasks involving symmetric and asymmetric cryptography using one of five different cryptographic \acp{API}. Oliveira~\etal{}~\cite{oliveira2018APIblindspots} \emph{distill} the work of Green \& Smith as well as Acar~\etal{} as part of an empirical study to understand the developer's perspective on API blindspots. Oliveira~\etal{} studied developers' ability to perceive blindspots through a series of code scenarios, analyzing personal traits such as perception of correctness, familiarity with code, and level of experience. Votipka~\etal{} aimed to understand what security errors developers tended to make and why. They analyzed 94 submissions of code attempting security problems, and as a result labeled 182 unique security vulnerabilities~\cite{votipka2020securitymistakes}. Votipka~\etal{}'s results served as an \emph{adaptation} of Green \& Smith's recommendations. The recommendations published by Votipka~\etal{} in 2020 are the latest in an ancestral chain dating back to 1975, when Saltzer \& Schroeder presented a series of design principles aimed at protection mechanisms~\cite{saltzer1975protection}. Authors like Gutmann and Green \& Smith played a pivotal role in tailoring the design principles of Saltzer \& Schroeder towards the security API design recommendations of today~\cite{gutmann2002cryptosoftware, green2016developers}. However, it is interesting to identify a direct and contemporary relationship between Saltzer \& Schroeder and Votipka~\etal{}.
Votipka~\etal{} \emph{borrowed} Saltzer \& Schroeder's principles to highlight design violations made by developers introducing too much complexity in their code. This shows two very different forms of evolution: on the one hand, security engineering recommendations from 1975 remain strongly influential, transforming slowly over time to address challenges in niche fields such as the design of cryptographic \acp{API} and security API design recommendations; on the other, Saltzer \& Schroeder's principles are still relevant today and flexible enough to address the challenges of designing security \acp{API} directly. \subsubsection{\mbox{Bloch: Know Your Audience}} In order to master a new language one must learn the grammar (how to correctly structure the language), the vocabulary (how to name the things you want to talk about), and the common and effective ways in which to say things (usage). These practices are also applicable to programming languages. Many have addressed the first two practices thoroughly~\cite{arnold2000java, gosling2000java}. However, Bloch acknowledged that Java developers do not have a good understanding of the third practice---usage---and so, in 2001, Bloch dedicated \emph{Effective Java} to the practice of usage. The book offers advice on code structure, and on the importance of others' understanding and of code readability to ease future modifications~\cite{bloch2001effective}. Throughout the book, Bloch evaluates and compares his recommendations to Gamma~\etal{}'s design patterns~\cite{gamma1993design}. Bloch's ancestry can be fully traced through Figure~\ref{fig:SLRLanguageMiniMap} in the Appendix. \begin{quote} ``A key feature of this book is that it contains code examples illustrating many design patterns and idioms.
Where appropriate, they are cross-referenced to the standard reference work in this area [Gamma95]''~\cite{bloch2001effective} \end{quote} In 2006, Bloch takes a new direction and \emph{adapts} his recommendations from \emph{Effective Java} to provide guidance for designing good \acp{API}. Initially, Bloch provides this guidance in the form of a presentation at Google Inc.; following this, Bloch condenses the essence of the presentation into 39 recommendations. These recommendations were later adapted in 2016 by Green \& Smith to improve the usability of security \acp{API} through design. Bloch's work is also \emph{adapted} by Acar~\etal{}, which \emph{adapts} the work of many, including Green \& Smith~\cite{green2016developers}, Henning~\etal{}~\cite{henning2007api}, and Nielsen~\cite{nielsen1994enhancing}, to compare the usability of Python-based cryptographic \acp{API}~\cite{acar2017comparing}. \begin{quote} ``We \emph{adapt} guidelines from these various sources to evaluate the \acp{API} we examine.''~\cite{acar2017comparing} \end{quote} We thus have an intricate chain of Security API designer recommendations---ones that both inform Green \& Smith~\cite{bloch2001effective,bloch2006design}, and ones that Green \& Smith inform~\cite{acar2017comparing, patnaik2019usability, mindermann2018rust, oliveira2018APIblindspots, votipka2020securitymistakes}. In Bloch's ancestry, we do not find any explicit evidence that Bloch's recommendations were empirically validated as they moved into Green \& Smith, though there is some traceability to Gamma~\etal{}'s design patterns, which are rooted in observations of developers' problem-solving practices. Bloch~\cite{bloch2001effective} discusses the many architectural advantages of Gamma~\etal{}'s design patterns, and more specifically Gamma~\etal{}'s factory pattern~\cite{gamma1993design}.
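The factory pattern that Bloch highlights from Gamma~\etal{} can be sketched in a few lines (a generic illustration of the pattern, not code from either book; all class and function names are hypothetical): object creation is routed through a single function, so call sites never depend on concrete classes.

```python
import json

# Minimal sketch of the factory pattern: call sites ask the factory for
# a parser by name and never name the concrete classes themselves.
class JsonParser:
    def parse(self, text: str):
        return json.loads(text)


class CsvParser:
    def parse(self, text: str):
        return [row.split(",") for row in text.splitlines()]


_PARSERS = {"json": JsonParser, "csv": CsvParser}


def make_parser(fmt: str):
    """Factory: maps a format name to a concrete parser instance."""
    try:
        return _PARSERS[fmt]()
    except KeyError:
        raise ValueError(f"unknown format: {fmt}") from None


assert make_parser("json").parse('{"a": 1}') == {"a": 1}
assert make_parser("csv").parse("a,b") == [["a", "b"]]
```

Adding a new format touches only the factory's table, which is precisely the decoupling benefit both Gamma~\etal{} and Bloch argue for.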
This suggests that we need not only further empirical validation of Green \& Smith, but also of the ancestry upon which their work builds. \subsubsection{\mbox{Georgiev: A Classic in the Making?}} Georgiev~\etal{}'s work is in itself an empirical study, in that they provide evidence that SSL certificate validation is broken in many security-focused applications and libraries. Georgiev~\etal{} found that any SSL connection from cloud clients based on Amazon's EC2 Java library is vulnerable to man-in-the-middle attacks. The reason for this issue is the poorly designed \acp{API} of SSL implementations, which in turn present developers with a confusing set of parameters and settings to decipher. Georgiev~\etal{} conclude their paper by presenting recommendations for both application developers and SSL library developers~\cite{georgiev2012mostdangerouscode}. Georgiev~\etal{}'s recommendations are \emph{empirically} validated by O'Neill~\etal{}~\cite{oneill2018securesocketAPI}, another Security API design seed, who also \emph{empirically} validate the works of Brubaker~\etal{}~\cite{brubaker2014using} and Fahl~\etal{}~\cite{fahl2012eve}. A connection is also seen between Georgiev~\etal{}, Brubaker~\etal{}, and Fahl~\etal{}, as the latter two papers \emph{compare} their work to that of Georgiev~\etal{}. This ancestry is shown in Figure~\ref{fig:SLRApplicationMiniMap} in the Appendix. Building on Georgiev~\etal{}'s work, O'Neill~\etal{} present the Secure Socket API (SSA), a simplified TLS interface for existing network applications~\cite{oneill2018securesocketAPI}. O'Neill~\etal{} build upon earlier work on TrustBase---an effort to improve the security and flexibility available to administrators who select the certificate validation for their applications~\cite{oneill2017trustbase}. SSA presents the administrator with the choice of standard validation or TrustBase. By selecting TrustBase, administrators have finer-grained control over validation.
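The confusing-defaults failure mode that Georgiev~\etal{} describe can be demonstrated with Python's standard-library ssl module (an illustration of the same class of problem, not one of the libraries studied in their paper): the recommended constructor verifies both the certificate chain and the hostname, while the raw constructor starts with no verification at all unless the developer notices and opts in.

```python
import ssl
import warnings

# Secure-by-default path: certificate chain and hostname are both checked.
secure_ctx = ssl.create_default_context()
assert secure_ctx.verify_mode == ssl.CERT_REQUIRED
assert secure_ctx.check_hostname

# Raw-constructor path: nothing is verified -- the class of silent misuse
# Georgiev et al. documented across other SSL libraries.
with warnings.catch_warnings():
    warnings.simplefilter("ignore", DeprecationWarning)
    bare_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS)
assert bare_ctx.verify_mode == ssl.CERT_NONE
assert not bare_ctx.check_hostname
```

Both contexts will happily complete a TLS handshake; only the first one refuses a man-in-the-middle certificate, which is why library defaults, not just documentation, decide the outcome.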
O'Neill~\etal{} analyze the design of OpenSSL, providing recommendations to help improve the design; these recommendations apply generally when designing security \acp{API}. Meng~\etal{} perform an empirical analysis of StackOverflow posts to understand the challenges faced by developers when applying secure coding practices in Java~\cite{meng2018securecodingpracticesjava}. They identify security vulnerabilities in the code suggested in StackOverflow answers. The findings of the study suggest that more consideration should be given to secure coding assistance and education, bridging the gap between security theory and coding practice. A comparison is made to Georgiev~\etal{}'s work and recommendations. \begin{quote} ``\emph{Compared} with prior research, our study has two new contributions. First, our scope is broader. We report new challenges on secure coding practices, such as complex security configurations in Spring security, poor error messages, and multilingual programs. Second, our investigation on the online forum provides a new social and community perspective about secure coding. The unique insights cannot be discovered through analyzing code.''~\cite{meng2018securecodingpracticesjava} \end{quote} Similarly, Meng~\etal{} also \emph{compare} their work to Egele~\etal{}, who developed CryptoLint, a static program slicing tool designed to check applications from the Google Play marketplace. Egele~\etal{} find that 10,327 out of 11,748 applications that make use of cryptographic \acp{API} make at least one mistake. The criteria on which Egele~\etal{} based their analysis were supported by two well-known security standards: ``security against chosen plaintext attacks (IND-CPA) and cracking resistance''~\cite{egele2013cryptomisuse}. Egele~\etal{} \emph{adapt} the work of Bellare~\etal{}~\cite{bellare2005introduction} and Desnos' Androguard~\cite{desnos2011androguard}.
\begin{quote} ``Our tool, called CryptoLint, is based upon the Androguard Android program analysis framework.''~\cite{egele2013cryptomisuse} \end{quote} \begin{quote} ``We adopt the notation used by Bellare and Rogaway.''~\cite{egele2013cryptomisuse} \end{quote} Not only does the work of Bellare~\etal{} and Desnos provide a foundation for Egele~\etal{}'s analysis; it directly influences the security criteria used by CryptoLint. Based on their analysis, Egele~\etal{} present a set of countermeasures against the vulnerabilities found. Between these 4 Security API designer seeds, we can see that a good foundation is forming. In particular, Georgiev~\etal{}'s~\cite{georgiev2012mostdangerouscode} recommendations have been \emph{empirically} validated by O'Neill~\etal{}~\cite{oneill2018securesocketAPI}, and \emph{compared} to by Meng~\etal{}~\cite{meng2018securecodingpracticesjava}, Brubaker~\etal{}~\cite{brubaker2014using}, and Fahl~\etal{}~\cite{fahl2012eve}. Does Georgiev~\etal{}'s work have the makings of a classic? It may, as long as the recommendations continue to be empirically validated or related to through other ancestor--descendant relationships. It is essential to form a strong foundation upon which the community can build future security API recommendations and applications. \section{Threats to Validity} We identify three main threats to validity. First, our search terms on Google Scholar and IEEExplore (see Section~\ref{sec:search-terms}) may have overlooked some papers relevant to our study. We mitigate this threat by manually reviewing: \begin{itemize} \item {IEEE Transactions on Software Engineering (TSE)}, \item {IEEE Symposium on Security \& Privacy (Oakland)}, \item {International Conference on Software Engineering (ICSE)}, \item {USENIX Security Symposium (USENIX)}, \item {International Symposium on Usable Privacy \& Security (SOUPS)}, and \item {ACM Conference on Computer and Communications Security (CCS)}.
\end{itemize} We also used backward snowballing to trace the ancestry of the recommendations presented by our Security API designer seed papers, and forward snowballing on every paper along the ancestry to see if it was validated by other work. We have reviewed all the major conferences for papers that present recommendations relevant to our SLR and are confident that no relevant work has been overlooked. Second, the categorization was conducted inductively, meaning that our categories may not correlate with `common sense' understandings, which makes comparison with other categorizations more difficult. To mitigate this we calculated Cohen's $\kappa$~\cite{cohen1960coefficient}, demonstrating that our categorization was consistent between coders. However, roughly a fifth of the recommendations have two categories. Cohen's $\kappa$ is not designed for data with multiple categories, so when calculating inter-rater reliability we used only the first categorization. This is unlikely to affect the overall analysis, as the inter-rater reliability for just the first category is relatively high (0.74)---we would expect a second category to also be consistent. Third, we acknowledge that by looking at recommendations as the basis for the ancestry, we do not account for all the ancestry of Security API designer papers. We believe our analysis covers a significant proportion of the literature inherited by Security API designer papers and that, by noting the ways different papers have related to each other, we preserve how the knowledge developed. \section{Discussion} \subsection{The Classics} What makes the works of Saltzer \& Schroeder, Bloch, Nielsen, and Gamma~\etal{} classics is not only that they are actionable, but also that they are flexible enough to transition into different branches of software engineering and computer security through adaptations~\cite{saltzer1975protection, bloch2001effective, bloch2006design, nielsen1994enhancing, gamma1993design}.
Nowhere is this clearer than in Saltzer \& Schroeder's case~\cite{saltzer1975protection}, where they define a set of recommendations to address the design challenges of protecting information, stored on computers, from unauthorized access. Gutmann facilitated the transition from security engineering to security API design by using Saltzer \& Schroeder's recommendations to design the Cryptlib cryptographic API~\cite{gutmann1995cryptlib}. Gutmann's work is later related to by Green \& Smith, from which many other seed papers grew~\cite{green2016developers}. This evolution was possible primarily because Saltzer \& Schroeder's recommendations were actionable. The flexibility of Saltzer \& Schroeder's recommendations is tested again, directly, in 2020 by Votipka~\etal{}~\cite{votipka2020securitymistakes}, proving not only that their recommendations have stood the test of time but also that they are still relevant for addressing the challenges faced today by security API designers. Gamma~\etal{} also play an influential role in the state of today's security API design recommendations. In 2001, Bloch transitions the design patterns of Gamma~\etal{} to address usability challenges in Java programming~\cite{bloch2001effective}. In 2006, Bloch adapts his own work towards designing good \acp{API}~\cite{bloch2006design}. Green \& Smith tailor Bloch's API design recommendations for security API designers~\cite{green2016developers}. The widespread influence seen through Gamma~\etal{}'s ancestry explains why it is a classic. Saltzer \& Schroeder permeate several of our categories, but some categories also come from elsewhere. The \emph{Documentation} category likely has its origins in the work of Nielsen on UI~usability~\cite{nielsen1994enhancing}. Nielsen's recommendations are adapted by Acar~\etal{} for comparing the usability of Python-based cryptographic \acp{API}, where the importance of good documentation is highlighted.
Several other papers have highlighted the importance of usable and high-quality documentation~\cite{bloch2006design,mindermann2018rust,nielsen1994enhancing,beaton2008usability,des2004eclipse,pane1996usability,zibran2008makes,robillard2009makes,patnaik2019usability,tondel2008security,bloch2001effective}. Before going on to advise how one can ease novice programmers into programming, Pane and Myers~\cite{pane1996usability} quote Nielsen's guidance that: \begin{quote} ``Even though it is better if the system can be used without documentation, it may be necessary to provide help and documentation. Any such information should be easy to search, focused on the user's task, list concrete steps to be carried out, and not be too large''~\cite{nielsen1994enhancing} \end{quote} Pane and Myers go on to inspire others and bring usability specifically to developers. 20 years later, Green \& Smith describe their 10 principles for creating usable and secure crypto \acp{API}~\cite{green2016developers}. Sure enough, one principle reads: \begin{quote} ``Make \acp{API} easy to use, even without documentation''~\cite{green2016developers} \end{quote} Yet again, these arguably classic usability guidelines have been restated, rediscovered, and then returned to. However, there is no direct link in Green \& Smith to either Nielsen~\cite{nielsen1994enhancing} or Pane and Myers~\cite{pane1996usability}, demonstrating their pervasive role in `common sense'. 11 recommendations say specifically to use (and occasionally not to use~\cite{grill2012methods}) Gamma~\etal{}'s \emph{Design Patterns}, and 6 reference \emph{Factory Patterns} directly. 4 papers related to Gamma~\etal{}'s work, and a further 2 validated the patterns empirically.
Perhaps the easy-to-recall names of many of the patterns have helped cement the work as a classic---but whilst we identified recommendations to use design patterns and to document their use, we did not see new versions of the \emph{Factory, Visitor, Observer} or \emph{Singleton} patterns being restated for Security API designer papers, or for other more specialized fields. This does not mean, however, that the original Gamma~\etal{} \emph{Design Patterns} are not connected to the Security API designer field. The \emph{Design Patterns} have influenced the recommendations in the current literature (e.g., Bloch's), and have become an underlying standard upon which new recommendations are built. The classics can be considered a set of rules that are widely known to the current software engineering, computer security, and usable security research communities. Future advances in security API design recommendations can refer to these standards, without hesitation, because the classics are tried and tested through developing challenges and time itself. \subsection{More Validation Please!} Why is Saltzer \& Schroeder's work the only one from 1975 to survive? Why is a paper from 1994, authored by Nielsen, still influential today? And why is a series of design patterns written by Gamma~\etal{} in 1993 part of an SLR written in 2020? The answer to all these questions lies in empirical validation and our ancestor--descendant relationships. Without the ancestry chain stemming from Saltzer \& Schroeder, would Votipka even know their recommendations existed? It is unlikely, which is probably the case for many other design recommendation papers from that time. This is exactly why empirical validation is necessary. The purpose of empirical validation is to test the effectiveness of the recommendations suggested by a paper. Empirical validation helps set aside poor design recommendations and brings forward recommendations that prove to be effective.
Empirical validation provides assurance to designers that the recommendations they are considering do in fact help design better software. Whilst we found that many recommendations have not been validated or related to ($\frac{166}{883}$, 19\%), overall the software engineering and security communities seem to be making strides towards validation. Yet this seems to be less so with papers that provide general and security-focused recommendations for developing \acp{API}, at 39\% and 33\% of papers respectively (see Figure~\ref{tab:validation-counts}). Our work shows that, across research producing recommendations for developers, 22\% of all paper types are empirically validated. More should be done to directly engage with the ancestral \emph{chains} deriving from older recommendations, rather than validating a single set of recommendations, in order to ensure depth from the \emph{classics} to contemporary papers. When creating new recommendations, then, we should be looking at the history of where our knowledge comes from. We therefore argue that studies should not only focus on validating \emph{contemporary} papers, but also engage with the older and larger body of knowledge concerning usability and its implications for \acp{API} and security. To engage with older papers, one could, for example, run a standalone empirical validation study on the recommendations they present. By applying recommendations in practice through experimentation and providing detailed analyses, one can help build more solid foundations, as the empirical validation can then be referenced by future studies. Upon this strong foundation, the recommendations can be transformed to create new recommendations specific to fields such as Security API designer guidance. Many recommendations have arguably changed little from those made 25 or even 45 years ago---yet relatively few of the classics are referenced in the ancestral \emph{chains} we have analyzed.
Modern recommendations are clearly still being inspired by older works, and to avoid restating ourselves, as a community we must take older, more established guidance and ensure the foundational principles are fully validated. We see evidence of a classic in the making in the work of Georgiev~\etal{}~\cite{georgiev2012mostdangerouscode}. A set of recommendations that has been empirically validated and has influenced the work of many others, Georgiev~\etal{} has the potential to influence many more in the field of security API design. To realize this potential, more validation is needed: their recommendations must be tested against varying conditions and challenges. \subsection{Meta-Recommendations} Our categorization of the recommendations is neutral: we do not frame the categories as things one should or should not do, but rather describe what type of advice the recommendations offer. Having analyzed many different recommendations through close reading and engagement, we offer meta-recommendations based on our extensive analysis of Security API designer papers. As we discussed previously, these are abstract and thus are not `actionable' by developers; however, they should guide broad thinking in both academic and practitioner material. These meta-recommendations are not exhaustive, but provide grounds for future thinking and development based on the findings from this SLR. \begin{enumerate} \item \textbf{\itshape Do Quality Engineering~\cite{bloch2006design,beaton2008usability,nino2009introducing,bloch2001effective,assal2018security,MicrosoftSDL,BSIMM,OWASPSAMM,SeaCord2018SecurePractices}.} Software is not developed in isolation. Have engineers and tools review code to spot rough edges and ensure best practice is followed.
\item \textbf{\itshape Software Engineering Matters~\cite{bloch2006design,rivieres2004lines,Jacques2004APIGuidelines,sarkar2006api,nino2009introducing,gamma1993design,saltzer1975protection,saltzer1974protection}.} Follow best practices for software development and ensure code produced is of a high quality. Give mechanisms for access control, and have a plan for how the code will be maintained. Getting good, minimal, well-abstracted, well-structured code will pay dividends in the long run. \item \textbf{\itshape Embed Security at Every Stage~\cite{saltzer1975protection,CLASP2005ApplicationSecurity,lipner2004trustworthy,BSIMM}.} Design security in from the start by compartmentalizing components and providing sensible defaults. Have a plan for dealing with bugs and defects. \item \textbf{\itshape Show, and Tell~\cite{bloch2006design,grill2012methods,patnaik2019usability,mindermann2018rust,tondel2008security}.} Documentation matters! Document how the \acp{API} work. Document how programmers should use them. Provide exemplars. Standardize as much as possible. Make sure the documentation is easy to find and read. \item \textbf{\itshape API Developers are not an Island~\cite{BSIMM,MicrosoftSDL,OWASPSAMM,CLASP2005ApplicationSecurity}.} An API might be for programmers to use, but APIs are often maintained and managed within organizations. Executives need training to make good decisions, and organizations need a plan to develop their security knowledge and practices. API Developers will be influenced by outside forces (be they regulatory, risk-based, or third-party developers). \item \textbf{\itshape Write a Specification~\cite{sindre2005eliciting,meier2006web,haley2008security,mead2005security,SeaCord2018SecurePractices,assal2018security,MicrosoftSDL,BSIMM,bloch2006design}.} Start from requirements, and update those requirements as new threats are found.
\item \textbf{\itshape Remember Programmers are Human~\cite{beaton2008usability,clarke2003using,stylos2007mapping,ko2004six,pane1996usability,grill2012methods,zibran2008makes,green2016developers,nielsen1994enhancing,nielsen1996usability,green1989cognitive,ko2005framework,molich1990improving,holcomb1989amalgamated,green1996usability}.} The first rule of code is you have to be able to read it. Draw programmers' attention to the important bits; make it easy to spot mistakes, and to check when they have got it right. Usability isn't just for users. \end{enumerate} \noindent These 7 guiding principles summarize our 7 categories and bring together much of the advice in the body of knowledge for developing secure \acp{API}, as well as advice for more general software engineering. They are not the sum total of all advice, but they distill a substantial set of common points that multiple experts and papers have suggested. We also note that some papers are referenced by many of the principles: \cite{bloch2006design,BSIMM,MicrosoftSDL,beaton2008usability,green2016developers} amongst others. Perhaps then there should be an eighth principle: \begin{enumerate} \setcounter{enumi}{7} \item \textbf{\itshape Know your Classics.} The struggles developers had when computers were first being programmed, and the strategies they came up with to deal with them, are still worth knowing about. While more \emph{must} be done to validate these recommendations empirically, the refinement and restatement of them suggest they are still helpful. \end{enumerate} \section{Conclusion} Our study is the first to systematically analyze 45 years of recommendations that inform Security API designer papers, crossing scientific communities working on security, \acp{API}, and software engineering.
Our research questions systematize where recommendations come from, whether they build on validated scientific work, and whether they bring a strong empirical focus to supporting developers with creating usable \acp{API}. From an analysis of 65 papers guiding developers, including 13 specifically targeted at providing recommendations to developers on how to create usable and secure \acp{API}, and 883 recommendations found within the papers, we identified 7 broad categories of recommendations and 36 descriptor sub-categories. These categories and descriptors provide a system for understanding the knowledge we have for guiding developers to produce better code, understand environments, and interface with organizations. The community has made some strides towards validating recommendations, but more must be done within Security API designer literature to improve empirical validation. As we identified, there are different types of ancestry according to their attention to \emph{abstract} and \emph{concrete} recommendations. Coverage is important alongside validation rates. Through the ancestry analysis, we identified the well-established ancestral chains between different areas of literature. If new Security API designer recommendations stem from a wider coverage of ancestral chains, they will form a stronger, more reliable set of recommendations, as more validation may have been carried out along the chain. This could result in more than one chain originating from historic sets of recommendations. In addition, further developing work in the area ought to address the `classics' of the usability field in order to more appropriately attend to older principles and recommendations. This is because, as we identify in our \emph{Meta-Recommendations}, many older and `classic' papers address concerns similar to those of contemporary recommendations.
Perhaps we don't need to reinvent the wheel so much as assess and renovate the parts to make them roadworthy for usable Security API designer recommendations today. \balance \bibliographystyle{IEEEtran}
\section{Introduction} Spontaneous twisting is observed in many liquid crystalline systems in which the constituents are achiral and the confinement does not require a twisted configuration~\cite{Press1974, Pang1994, Volovik1983, Lavrentovich1990, Drzaic1999, Jeong2014, Jeong2015, Yang2008, Nayani2017, Tortora2011, Vanzo2012}. Such behavior may be observed in systems for which the relative magnitudes of the elastic constants of the material make twist deformations energetically favorable~\cite{Yang2008, Jeong2014, Jeong2015, Lavrentovich1990, Drzaic1999, Nayani2017, Tortora2011, Williams1985-028, Williams1986, Prinsen2004}. Examples of such spontaneous twisting have been observed in achiral nematic molecules confined to cylindrical capillaries~\cite{Jeong2015} and droplets with homeotropic anchoring~\cite{Yang2008}. Twisted bipolar structures in spherical nematic droplets provide a frequently observed example of this phenomenon~\cite{Volovik1983, Lavrentovich1990, Drzaic1999, Jeong2014}. Theoretical work by Williams~\cite{Williams1986} showed that the twisted bipolar configuration is energetically favorable in spherical bipolar droplets if the elastic constants of the material satisfy the inequality \(K_2<K_1 - 0.43 K_3\), where \(K_1\), \(K_2\) and \(K_3\) are respectively the splay, twist and bend elastic constants of the nematic. More recently, twisted bipolar structures have also been observed in elongated spindle-shaped nematic droplets~\cite{Tortora2011, Vanzo2012, Nayani2017} and polymer liquid crystalline microparticles~\cite{Wang2016, Ansell2019}. Understanding the behavior of nematics confined to elongated spindle-shaped regions has long attracted interest due to the spindle-shaped tactoids that form in lyotropic liquid crystals as the nematic phase nucleates.
These tactoids were first observed by Zocher in the 1920s~\cite{Zocher1925} in vanadium pentoxide and have since been observed in a host of inorganic and biological materials~\cite{Bernal1937, Bernal1941, Zocher1960, Puech2010, Kim2013, Modlinska2015, Jamali2015, WangPX2016, Nayani2017, Nystrom2018}. The tactoids consist of regions of nematic that coexist with the surrounding isotropic phase. The prevalence of tactoids in lyotropic systems, as well as bipolar structures in thermotropic droplets, has inspired many studies aiming to understand the shape and director field of such systems~\cite{Kaznacheev2002, Kaznacheev2003, Kalugin1998, Williams1985-028, Williams1986, Bates2003, Prinsen2003, Prinsen2004b, Prinsen2004, Tortora2011, Vanzo2012, vanBijnen2012, Metselaar2017, Safdari2021}, which depend on the interplay between the elasticity, surface tension and droplet size as well as the effect of applied fields. Using scaling arguments, Prinsen and van der Schoot~\cite{Prinsen2003, Prinsen2004b} showed that the director field configuration in a tactoid depends on its size, with smaller tactoids having a homogeneous director field while larger tactoids have a quasi-bipolar director field that becomes exactly bipolar in the infinite volume limit. The same authors also showed that spindle-shaped bipolar tactoids with pointed tips have a lower free energy than comparable prolate spheroidal-shaped tactoids~\cite{Prinsen2003}, consistent with the spindle shapes observed experimentally~\cite{Zocher1925, Bernal1941, Zocher1960, Kaznacheev2002, Kaznacheev2003, Tortora2011, Puech2010, Kim2013, Nystrom2018, Modlinska2015, Jamali2015}. Inspired by the work of Williams~\cite{Williams1986}, Prinsen and van der Schoot also investigated twisted bipolar structures in spindle-shaped tactoids~\cite{Prinsen2004} and generalized the Williams inequality to account for the anisotropic shape of the spindle.
They showed that the maximum value of the twist elastic constant, relative to the splay and bend constants, at which twisting is preferable decreases as the tactoids become smaller in volume, and consequently more elongated. The typical values of elastic constants in many lyotropic systems do not satisfy the requirements for twisting, so these systems would not be expected to exhibit a twisted bipolar configuration~\cite{Prinsen2004}. However, twisted bipolar structures have been observed in lyotropic chromonic liquid crystals~\cite{Tortora2011, Jeong2014, Nayani2017}, a class of materials in which the twist elastic constant is significantly smaller than the splay and bend elastic constants. These materials therefore appear to satisfy the inequality required for twisting and are indeed observed to display highly twisted bipolar configurations. Recently, we investigated the twisting behavior of spindle-shaped polymer liquid crystalline microparticles~\cite{Ansell2019} and developed a geometric model to describe this behavior. These bipolar polymer particles were created by polymerizing spherical bipolar nematic droplets containing the reactive mesogen RM257 at low wt\% in a mixture with the non-reactive liquid crystal 5CB. After removing the 5CB, the initially spherical polymer particles deswell anisotropically in solvents into elongated spindle shapes, forming a chiral twisted bipolar structure in the process. In our geometric model, we showed that the twisting behavior of the polymers on the surface was well-described by a type of spiral called a \textit{loxodrome}, in which the angle between the integral curves of the system and the principal directions of the surface is the same at every point along the curve.
Such a twisting structure has been previously assumed in theoretical studies of twisted bipolar structures~\cite{Williams1986, Prinsen2004} and it has been suggested that such structures are consistent with observations of other twisted bipolar systems~\cite{Volovik1983, Tortora2011, Vanzo2012}. While our polymer system displays a twisted bipolar structure that is well-described by loxodromes, the overall twisting behavior is not consistent with that predicted for spindle-shaped tactoids~\cite{Prinsen2004}. Our system is consistent with the model in that we observe that smaller volume spindles have a larger aspect ratio. However, we also observe that larger aspect ratio spindles are \textit{more} twisted than those that are closer to spherical, in direct contrast with the tactoid model. In this study, we therefore develop an energetic model that captures the behavior of these twisted spindle-shaped polymer particles. Given the nematic ordering of the polymer system, we base our model around the Frank free energy of the nematic and seek to incorporate additional terms to capture the polymer behavior. We show that, as was the case in our geometric model, incorporating a constraint on the length of the integral curves in our system results in twisted loxodrome solutions that minimize the free energy and predict behavior that is consistent with our previous experimental observations. The structure of the remainder of this paper is as follows. In section 2, we introduce the parameterization of the spindle shapes and discuss the Frank free energy that will be the starting point of our model. In section 3 we consider a nematic confined to a spindle-shaped surface. We show that twisted loxodrome solutions do not minimize the Frank free energy, but that adding an additional length constraint term allows for an exact twisted loxodrome solution if \(K_1 = K_3\). 
We show that the loxodrome solution is a good approximation in cases where the deviation between \(K_1\) and \(K_3\) is small. In section 4 we consider the conditions on the shape profile of a surface of revolution for it to support a twisted loxodrome solution, and show that the most general surface that supports such a solution is the general torus. In section 5 we then turn our attention to the bulk and show that, while not exact, twisted bispherical loxodrome solutions are a good approximation to the bulk solution. We use this to extend the ideas of our geometric model to the bulk structure and show that the resulting model favors twisting for a wider parameter range than expected in twisted nematic systems that do not have the additional length constraint. \section{System parameterization and energy} In this study, we consider a nematic system confined within a spindle-shaped droplet with strong planar anchoring at the surface in which the nematic director field adopts a bipolar configuration. We define a spindle as a surface of revolution formed by revolving a minor arc of a circle about the chord connecting its endpoints. In line with previous investigations, we take the director field within the bipolar droplets to follow the symmetries of a bispherical director field~\cite{Williams1985-028, Williams1986, Kaznacheev2002, Kaznacheev2003, Prinsen2003, Prinsen2004}, which was shown by Williams~\cite{Williams1985-028} to be a good approximation for the numerically solved director field in the one constant approximation. The internal bispherical structure can be considered to consist of layered spindles of the same major axis length and with different minor axis lengths, arranged so that the tips of all of the spindle layers coincide at the locations of the two boojum defects. 
The structure of the spindles naturally leads us to describe positions in the system using bispherical coordinates \((\eta, \phi, \psi)\), with corresponding orthonormal unit vectors \((\vu{e}_{\eta}, \vu{e}_{\phi}, \vu{e}_{\psi})\), which are shown in \cref{fig:bisph-schem}. These coordinates can be related to Cartesian coordinates with a common origin through the transformation \begin{equation} \mqty(x \\ y \\ z) = \frac{1}{Z}\mqty(\sin{\eta}\cos{\psi}\cos{\phi} \\ \sin{\eta}\cos{\psi}\sin{\phi} \\ \sin{\psi}), \end{equation} where \(Z = 1+\cos{\eta}\cos{\psi}\). In the bispherical coordinate system, a surface of constant \(\eta\) defines a spindle-shaped surface. The value of \(\eta\) varies within a spindle between zero along the line connecting the two boojum defects and a maximum value \(\eta_0\) at the spindle surface. The aspect ratio \(u_0\) of the spindle is \(u_0 = (1+\cos{\eta_0})/\sin{\eta_0} = \cot{(\eta_0/2)}\), which has a minimum value of one for a sphere, for which \(\eta_0 = \pi/2\), and increases as \(\eta_0\) decreases and the spindle becomes more elongated. The coordinate \(0\leq \phi < 2\pi\) is the azimuthal angle while \(\psi\) represents the polar angle. We choose the origin of \(\psi\) such that it varies between zero at the equator of the spindle and \(\pm \pi/2\) at the tips. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{bispherical-schematic.pdf} \caption{Schematic slice through the bispherical coordinates system. Black arcs and dashed gray lines indicate surfaces of constant \(\eta\), which all pass through the two poles of the structure. A complete surface of constant \(\eta\) is formed by revolving one of these arcs about the line connecting the poles. 
Positions on each spindle surface (fixed \(\eta\)) are described in terms of the angle \(\psi\), which varies between \(0\) at the equator and \(\pm \pi/2\) at the tips, and \(\phi\) which is the azimuthal angle.} \label{fig:bisph-schem} \end{figure} The free energy of the nematic can be described using the Frank elastic free energy~\cite{Frank1958} \begin{align} F_F ={}& \frac{1}{2}\int \dd V \left[K_1 (\div{\vb{n}})^2 + K_2(\vb{n}\vdot\curl{\vb{n}})^2 \right. \nonumber\\ &\left. + K_3(\vb{n}\cp\curl{\vb{n}})^2 \right] \nonumber\\ &- K_{24} \int \dd \vb{S} \vdot \left[ \vb{n}(\div{\vb{n}}) - (\vb{n}\vdot\grad)\vb{n}\right] \label{eq:Frank} \end{align} where \(\vb{n}\) is the spatially varying nematic director field, which obeys the nematic equivalence \(\vb{n} \equiv -\vb{n}\). The elastic constants \(K_1\), \(K_2\) and \(K_3\) are, respectively, associated with the splay, twist and bend deformations of the director field within the volume \(V\) of the droplet. The \(K_{24}\) term is integrated over the surface \(S\) of the droplet and describes saddle-splay deformations. The elastic constants must satisfy the Ericksen inequalities~\cite{Ericksen1966}, which require that \(K_1\), \(K_2\) and \(K_3\) are non-negative while \(K_{24}\) must satisfy \(\abs{K_{24}}\leq 2\min(K_1,K_3)\). The sign of \(K_{24}\) changes the director field curvature preferred by the saddle-splay term. Recall that when the director field is tangent to a surface, positive \(K_{24}\) leads to a configuration in which positive Gaussian curvature is preferred while negative \(K_{24}\) prefers a saddle configuration. The saddle configuration is not axially symmetric and is therefore incompatible with our expectation that our twisted solutions display axial symmetry. We therefore expect that our system must have a positive \(K_{24}\) solution.
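As a quick numerical sanity check of the parameterization (an illustrative Python sketch, not part of the original analysis), the coordinate transformation places the boojum poles at \(z = \pm 1\) and gives an equatorial radius of \(\tan(\eta_0/2)\), so the aspect ratio is \(u_0 = \cot(\eta_0/2)\) as stated:

```python
import math

def bispherical_to_cartesian(eta, phi, psi):
    """Cartesian position for bispherical coordinates (eta, phi, psi),
    following the transformation in the text, with Z = 1 + cos(eta)cos(psi)."""
    Z = 1.0 + math.cos(eta) * math.cos(psi)
    return (math.sin(eta) * math.cos(psi) * math.cos(phi) / Z,
            math.sin(eta) * math.cos(psi) * math.sin(phi) / Z,
            math.sin(psi) / Z)

def aspect_ratio(eta0):
    """u0 = (1 + cos(eta0)) / sin(eta0) = cot(eta0 / 2)."""
    return (1.0 + math.cos(eta0)) / math.sin(eta0)

eta0 = math.pi / 3                                  # example spindle
_, _, z_tip = bispherical_to_cartesian(eta0, 0.0, math.pi / 2)
x_eq, _, _ = bispherical_to_cartesian(eta0, 0.0, 0.0)
print(z_tip)             # the tips sit at z = +/-1 for any eta
print(1.0 / x_eq)        # half-length over equatorial radius = u0
print(aspect_ratio(eta0))
```

For \(\eta_0 = \pi/2\) the aspect ratio is one (a sphere), and it grows as \(\eta_0\) decreases, matching the behavior described above.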
Due to the mathematical complexity of the equations required to describe the behavior of twisted bipolar nematic spindles, simplifications must be made in order to make progress. In the bulk, the twist angle is expected to increase from zero along the central axis of the spindle to a maximum value on the surface~\cite{Williams1986, Prinsen2004}. In the bispherical structure the director field is tangent to surfaces of constant \(\eta\). Further assuming that the twist angle, measured as the angle between the director field and the surface meridian, is constant on a surface of constant \(\eta\) therefore gives a director field that depends only on the \(\eta\) coordinate. Using these assumptions, the resulting nematic director field has a layered spindle structure in which the director field on each surface of constant \(\eta\) follows loxodrome spirals at a twist angle determined by the value of \(\eta\). Williams~\cite{Williams1986} used these assumptions when considering the effect of the values of the bulk elastic constants on the transition to a twisted state in a bipolar spherical droplet. Following on from this, Prinsen and van der Schoot~\cite{Prinsen2004} used these same assumptions to consider the conditions for twisting in elongated spindle-shaped structures as well as director fields for which the boojum defects are virtual and sit outside of the droplet surface. The assumption was justified by the expectation that the twist angle varies more with \(\eta\) than with \(\psi\) on a surface of constant \(\eta\)~\cite{Prinsen2004}. While describing a director field in terms of loxodromes is mathematically convenient, due to the lack of \(\psi\) dependence, it is not necessarily intuitive that the system would choose to adopt this twisting structure. Indeed, close to the spindle tips such a structure creates a region of high twist.
In general, loxodromes are not geodesics of a surface and do not minimize the curvature of the director field. From Clairaut's relation~\cite{doCarmo}, we can determine that on a spindle surface the only loxodromes that are also geodesics are those for which the director is parallel to the meridians of the surface, which gives an untwisted bipolar configuration. We therefore first approach the question of when a twisted loxodrome solution is favorable in our spindles by considering the conditions under which the loxodrome twisting pattern is an exact energy minimum on a spindle-shaped surface. \section{Loxodrome solutions on spindle surfaces} We consider a thin spindle-shaped shell of nematic with the director field tangent to the surface everywhere and investigate the conditions under which a twisted loxodrome structure minimizes the free energy. We assume the director field follows the azimuthal symmetry of the surface and is therefore independent of \(\phi\). As such, \begin{equation} \vb{n} = \cos{\beta(\psi)}\vu{e}_{\psi} + \sin{\beta(\psi)}\vu{e}_{\phi} \label{eq:n-surf} \end{equation} describes a general director field on the surface. Here \(\beta(\psi)\) is the angle between the director field and the \(\vu{e}_{\psi}\) direction on the surface. The nematic symmetry of the system means that \(\beta(\psi)\) and \(\beta(\psi)+\pi\) are equivalent while the symmetry of the spindle shape means that \(\beta(\psi)\) and \(2\pi-\beta(\psi)\) are energetically equivalent states so that left- and right-handed twisting are equally likely. We can therefore take \(\beta(\psi)\) to be in the range \(0\leq\beta(\psi)\leq\pi/2\) without loss of generality. We wish to determine whether or not a twisted loxodrome structure minimizes the Frank free energy given in \cref{eq:Frank}. We note that on the spindle surface the twist term is zero. The loxodrome solution obeys \(\beta(\psi) = \beta_0\), where \(\beta_0\) is a constant, and \(\beta'(\psi) = \beta''(\psi) = 0\).
In the one-constant approximation (\(K_1 = K_3\)) and in the limit that the spindle is a perfect sphere (\(\eta_0 = \pi/2\)), the saddle-splay term is zero and loxodrome solutions of arbitrary twist angle minimize the free energy. However, allowing either \(K_1\neq K_3\) or \(\eta_0 \neq \pi/2 \) removes this solution. In this case, the only constant angle solutions are \(\beta_0 = 0, \pi/2\), which correspond to untwisted configurations with the director parallel to \(\vu{e}_{\psi}\) and \(\vu{e}_{\phi}\) respectively. We therefore find that the Frank free energy alone does not support twisted loxodrome solutions on spindle-shaped surfaces. In our geometric model of twisted spindle-shaped polymer particles with nematic ordering, we showed that the twisting pattern was well-described by loxodromes~\cite{Ansell2019}. A key feature of our model was the introduction of a constraint on the length of the integral curves of the twist pattern. We therefore introduce a length constraint term into our free energy and determine whether this addition allows a twisted loxodrome solution. A constraint on the length of the integral curves of the director field takes the form \begin{equation} F_l = \gamma \left(\int \dd s - l_0\right) \end{equation} where \(\gamma\) is a Lagrange multiplier, \(\dd s\) is the line element and \(l_0\) is the fixed curve length. A loxodrome of twist angle \(\beta_0\) on the spindle surface connecting the two tips has length \(l_{\beta_0} = 2 \eta_0\csc{\eta_0}\sec{\beta_0}\). Taking \(\gamma\) to be positive, this term acts to minimize the curve length and hence the twist angle on the surface. We seek twisted loxodrome structures that minimize our new total free energy \(F = F_F + F_l \), where the general form of the director field is again given by \cref{eq:n-surf}.
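The quoted loxodrome length can be checked by direct quadrature. A short calculation from the coordinate transformation gives the meridian line element \(\dd s = \dd\psi/Z\) on the surface \(\eta = \eta_0\), and a loxodrome at constant twist angle \(\beta_0\) stretches this by \(\sec\beta_0\). A minimal Python sketch (illustrative, not from the paper):

```python
import math

def meridian_length(eta0, n=2000):
    """Tip-to-tip meridian arc length on the spindle surface eta = eta0,
    integrating ds = dpsi / (1 + cos(eta0) cos(psi)) by the trapezoid rule."""
    a, b = -math.pi / 2, math.pi / 2
    h = (b - a) / n
    f = lambda p: 1.0 / (1.0 + math.cos(eta0) * math.cos(p))
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

def loxodrome_length(eta0, beta0):
    """Closed form quoted in the text: l = 2 eta0 csc(eta0) sec(beta0)."""
    return 2.0 * eta0 / (math.sin(eta0) * math.cos(beta0))

eta0, beta0 = math.pi / 3, math.pi / 4
numeric = meridian_length(eta0) / math.cos(beta0)  # loxodrome stretch factor sec(beta0)
print(numeric, loxodrome_length(eta0, beta0))      # the two agree closely
```

Setting \(\beta_0 = 0\) recovers the meridian length \(2\eta_0\csc\eta_0\), and the length grows with \(\sec\beta_0\) as the twist increases.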
The resulting Euler-Lagrange equation that \(\beta(\psi)\) must satisfy is \begin{align} 0&= K_m \sin{\eta_0}[\beta'(\psi)\sin{\psi} - \beta''(\psi)\cos{\psi} ] \nonumber \\ &- K_{24}\cos{\eta_0} \frac{\sin{2 \beta(\psi)}}{Z} +\frac{\gamma}{2\pi}\frac{\sec{\beta(\psi)}\tan{\beta(\psi)}}{Z} \nonumber \\ & - \Delta K\sin{\eta_0}\cos{\psi}\biggl[\beta'(\psi)^2 \sin{2\beta(\psi)} + [\tan{\psi}\beta'(\psi) \nonumber \\ & -\beta''(\psi)]\cos{2\beta(\psi)}+\frac{\sec^2{\psi}\sin{2\beta(\psi)}}{Z}\biggr], \label{eq:eeq-beta(psi)} \end{align} where we have introduced \(K_m = (K_1+K_3)/2\) and \(\Delta K = (K_1-K_3)/2\). After substituting the conditions for a loxodrome solution into this expression, the final \(\Delta K\) term still has \(\psi\) dependence. We therefore find that a twisted loxodrome solution is possible only if \(\Delta K = 0\), which corresponds to the one-constant approximation. In this case, the loxodrome twist angle satisfies \begin{equation} |\cos^3{\beta_0}| = \frac{\gamma}{4\pi K_{24}\cos{\eta_0}} \label{eq:cos3beta} \end{equation} on a spindle with \(\eta_0<\pi/2\). From this expression, we can deduce that \(\gamma\) and \(K_{24}\) must have the same sign. Given that \(\gamma\) is positive by construction, we find that \(K_{24}\) must also be positive. This is consistent with our previous discussion on the expected behavior of \(K_{24}\) for the spindle. Whether or not the loxodrome solution is realizable depends on the interplay between \(\gamma\), \(K_{24}\) and \(\eta_0\). If \(\gamma/K_{24}>4\pi\) then a twisted loxodrome solution cannot occur. However, if \(\gamma/K_{24} < 4\pi\) the onset of twisting is determined by a critical value of \(\eta_0\) below which this solution is physically realizable. In this framework we would therefore expect that, for a system with fixed \(\gamma/K_{24}<4\pi\), there is a critical spindle aspect ratio at which the onset of twisting occurs.
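To illustrate, \cref{eq:cos3beta} can be inverted numerically for the surface twist angle. In this hedged sketch the ratio \(\gamma/K_{24} = 2\pi\) is an arbitrary illustrative choice, for which the critical value is \(\cos\eta_0 = 1/2\), i.e.\ \(\eta_0 = \pi/3\):

```python
import math

def surface_twist_angle(gamma_over_K24, eta0):
    """Invert cos^3(beta0) = gamma / (4 pi K24 cos(eta0)); returns None
    when the right-hand side reaches one and no twisted solution exists."""
    rhs = gamma_over_K24 / (4.0 * math.pi * math.cos(eta0))
    if rhs >= 1.0:
        return None                      # untwisted configuration
    return math.acos(rhs ** (1.0 / 3.0))

g = 2.0 * math.pi                        # illustrative gamma / K24 < 4 pi
for eta0 in (math.pi / 2.5, math.pi / 4, math.pi / 6):
    u0 = (1.0 + math.cos(eta0)) / math.sin(eta0)   # aspect ratio
    print(f"u0 = {u0:.2f}: beta0 = {surface_twist_angle(g, eta0)}")
```

Below the critical \(\eta_0\), the returned twist angle grows monotonically with the aspect ratio, in line with the behavior described in the text.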
Spindles with aspect ratios smaller than this critical value (larger \(\eta_0\)) will display an untwisted state while those with larger aspect ratios (smaller \(\eta_0\)) can exhibit twisting. The twist angle on the surface increases with the spindle aspect ratio, which is consistent with our previous observations in twisted nematic polymer particles~\cite{Ansell2019}. This twisted loxodrome solution is energetically favorable over the untwisted configuration if \(\Delta F = F(\beta_0) - F(0)\), the difference in free energy between the twisted loxodrome solution and the untwisted solution, is negative. We find that \(\Delta F < 0\) whenever the system parameters allow the twisted solution to be possible, meaning that the loxodrome solution is always energetically favorable over the untwisted solution. By numerically solving \cref{eq:eeq-beta(psi)} using the Matlab ODE15s solver~\cite{Shampine1997} with \(\Delta K = 0\) for a range of choices of \(\gamma/K_{24}\) and \(\eta_0\) with boundary conditions setting \(\beta'(0) = 0\) and \(\beta(0)\) to a chosen value, we have verified that the twisted loxodrome solution \(\beta(0) = \beta_0\) is the global minimum of the free energy if the choice of parameters allows that solution to exist. We now allow the value of \(\Delta K\) to be nonzero and consider the effect this has on the solutions of \cref{eq:eeq-beta(psi)}. Examining the \(\Delta K\)-dependent terms in \cref{eq:eeq-beta(psi)}, we observe that the final term depends on \(\sec{\psi}\). While well-behaved at the spindle equator, near to the spindle tips this term diverges and dominates the entire expression. Our problem therefore becomes a boundary layer problem in which we must separately consider solutions near to the equator and tips of the spindle. We first consider the behavior near to the equator, where the \(\Delta K\)-dependent terms do not diverge.
We capture the key behavior of the system by performing a regular perturbation expansion in which we seek a solution of the form \(\beta^{\epsilon}(\psi) = \beta^0(\psi) + \epsilon \beta^1(\psi)\), where \(\epsilon\) is a small parameter. We substitute this solution into \cref{eq:eeq-beta(psi)} and expand in powers of \(\epsilon\). Gathering the leading order terms, those for which there is no \(\epsilon\) or \(\Delta K\) dependence, we find that the twisted loxodrome solution in \cref{eq:cos3beta} is a leading order solution, so \(\beta^0(\psi) = \beta_0\). In order to calculate the first correction term, we identify the small parameter \(\epsilon\) with \(\Delta K/K_m\) and expand the first-order terms in powers of \(\psi\) around \(\psi = 0\). We solve for the correction \(\beta^1(\psi)\) with boundary conditions \(\beta^1(0) = 0\) and \(\beta^{1\prime}(0) = 0\), which respectively enforce that our approximate solution is exact at the equator and that the correction obeys the up-down symmetry of the spindle. To leading order in \(\psi\), our perturbation solution is therefore \begin{equation} \beta^{\epsilon}(\psi) = \beta_0 -\frac{\Delta K}{K_m} \frac{\psi^2 \sin{2\beta_0}}{2(1+\cos{\eta_0})}. \end{equation} From this, we observe that our leading order correction is quadratic in \(\psi\). If \(\Delta K\) is positive the correction causes the twist angle to decrease from the value of \(\beta_0\) away from the equator while a negative \(\Delta K\) causes the twist angle to increase. Experimentally, the value of \(\Delta K/K_m\) depends on the material. As an example, for 5CB \(\Delta K/K_m\) takes a value in the range 0.1--0.14~\cite{Bogi2001, Zakharov2002}, which, as we will demonstrate, can be considered sufficiently small that our perturbation solution is a reasonable approximation to the exact solution. Near to the spindle tip, the divergence of the \(\sec{\psi}\) term forces the \(\sin{2\beta(\psi)}\) term towards zero.
Expanding \cref{eq:eeq-beta(psi)} near to the spindle tips, we find that the dominant term is \begin{equation} -\frac{\Delta K \sin{\eta_0}\sin{2\beta(\psi)}}{\pm\frac{\pi}{2}\mp\psi}, \end{equation} where the plus and minus respectively correspond to the tips at \(\pm \pi/2\). When \(|\pm\pi/2\mp\psi| \lesssim |\Delta K \sin{\eta_0}| \) this term drives \(\sin{2\beta(\psi)}\) towards zero. Based on our solution near to the equator, we would therefore expect that if \(\Delta K > 0\) this drives the twist angle towards zero while if \(\Delta K<0\) the twist angle tends towards \(\pi/2\). However, the \(\gamma\)-dependent term in \cref{eq:eeq-beta(psi)} diverges as the twist angle tends towards \(\pi/2\). We therefore expect that for \(\Delta K < 0 \) the twist angle will initially increase towards \(\pi/2\) before dropping off to zero very close to the tips. To assess the validity of this solution, we numerically solve \cref{eq:eeq-beta(psi)} with initial conditions \(\beta(0) = \beta_0\) and \(\beta'(0) = 0\), which we expect due to up-down symmetry of the spindle. We choose the twist angle at the equator \(\beta_0 = \pi/4\), which maximizes the contribution of the \(\sin{2\beta_0}\) term in the correction, \(\eta_0 = \pi/3\), which corresponds to a spindle with aspect ratio \(\sqrt{3}\), and \(\gamma\) from the relation in \cref{eq:cos3beta}. We plot the numerical solutions in \cref{fig:first-correction-psi} for (a) \(\Delta K/K_m = 0.01\) and (b) \(\Delta K/K_m = -0.1\). Numerical solutions are plotted for a range of \(K_{24}\) values along with the ideal loxodrome \(\beta_0\) and our perturbation solution at the equator \(\beta^{\epsilon}(\psi)\). Near to the equator, \(\beta^{\epsilon}(\psi)\) gives a very good approximation to the numerical solution, with the smaller-magnitude \(\Delta K\) value giving a good approximation for a larger range of \(\psi\) values.
As expected, the solution with \(\Delta K > 0 \) causes the numerical solution for the twist angle to decrease away from \(\beta_0\) while \(\Delta K < 0 \) causes the opposite behavior before all of the numerical solutions diverge at the tips. We observe that the numerical solutions are only weakly dependent on the value of \(K_{24}\), which is consistent with the leading behavior depending on its value only through \(\beta_0\). When \(\Delta K/K_m = -0.1\), the numerical solutions are all within \(5^{\circ}\) of the value of \(\beta_0\) up to \(\tilde\psi = \psi/(\pi/2) = 0.7\), meaning that the twist angle is within this tolerance of the loxodrome twist angle over \(85\%\) of the surface area. Decreasing the magnitude of the perturbation \(\Delta K/K_m\) to \(0.01\) results in the twist angle being within \(5^{\circ}\) of \(\beta_0\) up to \(\tilde\psi = 0.97\) and within \(1^{\circ}\) of \(\beta_0\) up to \(\tilde\psi = 0.82\). We therefore conclude that away from the spindle tips, the loxodrome solution is a valid approximation for the twist angle if \(\Delta K/K_m\) is small. \begin{figure} \centering \includegraphics[width = 0.49\textwidth]{surface-expansion-graphs.pdf} \caption{Solutions for the twist angle \(\beta(\psi)\), with \(\beta_0 = \pi/4\), \(\eta_0 = \pi/3\) and \(\gamma\) defined by \cref{eq:cos3beta} for (a) \(\Delta K/K_m = 0.01\) and (b) \(\Delta K/K_m = -0.1\). Solid gray lines correspond to numerical solutions for the values of \(K_{24}\) shown in the plot legend.
The dashed line represents an ideal loxodrome solution while the red curve corresponds to our asymptotic solution \(\beta^{\epsilon}(\psi)\).} \label{fig:first-correction-psi} \end{figure} \section{Loxodromes on a general surface of revolution} Having established the conditions under which twisted loxodromes are exact or approximate solutions on the spindle surface, we now turn our attention to establishing conditions on the shape of a surface for it to support such a loxodrome solution. We consider a general surface of revolution, described in cylindrical coordinates \((\rho,\phi, z)\). The surface is formed by revolving its shape profile \(\rho(z)>0\) about the \(z\) axis. We once again introduce a general director field \begin{equation} \vb{n} = \cos{\beta(z)}\vu{e}_v+\sin{\beta(z)}\vu{e}_{\phi} \end{equation} that lies tangent to the surface and follows the azimuthal symmetry of the surface. As before, \(\vu{e}_{\phi}\) is a unit vector in the azimuthal direction while \(\vu{e}_{v}\) is a unit vector tangential to lines of constant \(\phi\), equivalent to \(\vu{e}_{\psi}\) in the bispherical coordinates. We again calculate the free energy \(F = F_F + F_l\) for our surface and minimize for \(\beta(z)\). A loxodrome solution, \(\beta(z) = \beta_0\) and \(\beta'(z) = \beta''(z) = 0\) leads us to require that \begin{align} 0 &= -K_{24}\rho(z)[\kappa_{\nu}(z)-\kappa_{\phi}(z)]\sin{2\beta_0}+ \frac{\gamma}{2\pi}\tan{\beta_0}\sec{\beta_0} \nonumber\\ & - \Delta K \left[\frac{1}{\rho(z)}+\rho(z)(\kappa_{\nu}(z)-\kappa_{\phi}(z))\right]\sin{2\beta_0}. \label{eq:eeqz} \end{align} Here we have introduced \(\kappa_{v}(z)\) and \(\kappa_{\phi}(z)\), the principal curvatures of the surface, which depend on \(\rho(z)\) and its derivatives. On the spindle surface, we found an exact twisted loxodrome solution when \(\Delta K = 0\). 
In this case the coefficient of the \(K_{24}\) term was constant and the loxodrome solution resulted in the balance of this term with the \(\gamma\) term. Imposing the same conditions here, we therefore need to determine the conditions under which \(\rho(z)[\kappa_{\nu}(z)-\kappa_{\phi}(z)]\) takes on a constant value \(\xi\). Solving for \(\xi\) with the requirement that \(\rho(z) = \rho(-z)\), we first find that cylinders, for which \(\rho(z)\) is constant, and conical surfaces, for which \(\rho(z)\) is linear in \(z\), satisfy this condition. The most general shape profile for which \(\xi\) is constant satisfies \((\rho \mp R_1)^2 + z^2 = R_2^2\). This is the equation of a circle of radius \(R_2\) centered at \((\pm R_1,0)\) in the \((\rho,z)\) plane. The constants \(R_1\) and \(R_2\) can be taken to be non-negative without loss of generality. The constant \(\xi = R_1/R_2>0\) defines the shape profile of the surface formed by revolving the section with \(\rho(z)>0\). We note that a spherical surface, for which \(\xi = 0\), does not support our twisted loxodrome solution because it causes the \(K_{24}\) term to vanish. The general shape profile we have derived generates surfaces of the standard torus, as depicted in \cref{fig:torus}. If \(\xi<1\) the circles forming the shape profile cross the \(z\)-axis and generate a spindle torus. Taking the \(\rho(z)>0\) arcs of the generating circles, the plus sign solution generates an apple surface, which corresponds to the outer surface of the spindle torus, while the minus sign generates the spindle surface we have previously considered. In both of these cases the twist angle obeys an equivalent expression to \cref{eq:cos3beta}. When \(\xi>1\), only one of the generating circles lies within the \(\rho(z)>0\) region and the resulting surface is a hole torus while the case \(\xi=1\) gives the limiting case of a horn torus.
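A quick numerical check of this constancy condition, using the standard principal-curvature formulas for a surface of revolution expressed as a graph \(\rho(z)\) (the function and sampling grid below are our own, and the overall sign of \(\kappa_{\nu}-\kappa_{\phi}\) depends on the choice of surface normal):

```python
import numpy as np

def rho_times_curvature_difference(z, R1, R2, sign):
    """For the circle-arc profile rho(z) = sign*R1 + sqrt(R2^2 - z^2),
    evaluate rho * (kappa_meridian - kappa_parallel) analytically."""
    s = np.sqrt(R2**2 - z**2)
    rho = sign * R1 + s
    d1 = -z / s                  # rho'(z)
    d2 = -R2**2 / s**3           # rho''(z)
    kappa_m = -d2 / (1 + d1**2)**1.5           # meridian curvature
    kappa_p = 1 / (rho * np.sqrt(1 + d1**2))   # parallel curvature
    return rho * (kappa_m - kappa_p)

R1, R2 = 0.5, 1.0
z = np.linspace(-0.7, 0.7, 101)  # stay inside the rho(z) > 0 region
xi_spindle = rho_times_curvature_difference(z, R1, R2, -1)  # spindle arc
xi_apple = rho_times_curvature_difference(z, R1, R2, +1)    # apple arc
```

Both circle-arc profiles give a \(z\)-independent value of magnitude \(R_1/R_2\), while the sphere (\(R_1 = 0\)) gives zero, consistent with it not supporting the twisted loxodrome.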
\begin{figure} \centering \includegraphics[width=0.49\textwidth]{torus-progression.pdf} \caption{Schematics of surfaces and cross sections of the general torus. From left to right: spindle torus, horn torus and hole torus. The black vertical lines indicate the axis of revolution.} \label{fig:torus} \end{figure} On the spindle and apple surfaces, it is natural for the fixed length of the loxodrome to span the range of \(z\) values between the two singularities on the surface. This means that for a fixed length curve, the twist angle of the loxodrome is uniquely determined by the value of \(\xi\) on the surface. By contrast, for the hole torus, cylinder and cone there is not a naturally defined range of \(z\) values that the loxodrome should span. For given properties of these surfaces, the span of \(z\) must be chosen to determine the loxodrome twist angle. On the hole torus, special choices of twist angle allow a closed loop to form, which therefore allows a clear length constraint to be defined. We now restrict ourselves to spindle-like surfaces and consider the effect of shape perturbations on the stability of the loxodrome solution. To do this, we construct a generalized spindle for which the shape profile is now an arc of an ellipse instead of a circle. As is the case for the spindle generated from the arc of a circle, which we refer to as an ideal spindle, the generalized spindle is symmetric upon reflection in the \(z\)-axis and has pointed tips except in the case of a perfect sphere. The generalized spindle has aspect ratio \(u_0\), defined so that \(u_0 = 1\) corresponds to a perfect sphere and our prolate spindles have \(u_0>1\). The profile curve has eccentricity \(0\leq e < 1\), where \(e = 0\) corresponds to a perfect circle and the limiting value \(e = 1\) to a parabola.
A general expression for the shape profile \(\rho(z)\) of the generalized spindle is therefore \begin{equation} \small \rho(z)=-\frac{u_0^2-1+e^2}{2(1-e^2)} + \frac{\sqrt{(1-e^2+u_0^2)^2-4(1-e^2)z^2}}{2(1-e^2)}. \end{equation} We examine perturbations away from the ideal spindle by treating \(e\) as a small parameter and expanding \cref{eq:eeqz} in powers of \(e\) with \(\Delta K = 0\). We introduce a perturbation expansion for the twist angle \(\beta^{\epsilon}(z) = \beta^0(z) + \epsilon \beta^1(z)\), where \(\epsilon\) is again a small parameter that we identify with \(e^2\), the lowest nonzero power of \(e\) in our expansion. We find that the leading order expression allows for loxodromes of constant angle \(\beta_0\) with the condition \begin{equation} \cos^3{\beta_0} = \frac{\gamma(u_0^2+1)}{4\pi K_{24}(u_0^2-1)}. \end{equation} In the ideal spindle, \(\cos{\eta_0} \equiv (u_0^2-1)/(u_0^2+1)\), so this expression is equivalent to \cref{eq:cos3beta}. We calculate the correction term by Taylor expanding the contributing terms to leading order in \(z\) and using boundary conditions that again set \(\beta^1(0) = 0\) and \(\beta^{1\prime}(0) = 0\). Our perturbation solution therefore becomes \begin{equation} \beta^{\epsilon}(z) = \beta_0 + e^2 \frac{K_{24}\sin{2\beta_0}}{K_m(1+u_0^2)^2}z^2. \end{equation} Given that the constants multiplying \(e^2\) result in an expression that is order one, we find that the correction to the constant loxodrome solution is quadratically small in \(e\). We therefore find that the loxodrome solution is stable to small perturbations away from the ideal circle-arc spindle with \(e = 0\). \section{Loxodrome solutions in the bulk} We now turn our attention to bulk twisted structures enclosed within a spindle-shaped surface parameterized by \(\eta = \eta_0\).
Following prior investigations of bipolar spindles, we take the director field inside the spindle to adopt a bispherical configuration in which the director field is tangential to surfaces of constant \(\eta\) in the bispherical coordinate system~\cite{Williams1985-028, Kaznacheev2002, Kaznacheev2003, Prinsen2003, Prinsen2004, Prinsen2004b}. We consider only truly bispherical structures in which the boojum defects of the nematic sit at the spindle tips, which we justify based on the observed structures of our polymer particles~\cite{Ansell2019}. Also in line with prior investigations~\cite{Williams1986, Prinsen2004}, we assume that in twisted bipolar configurations, the twisting maintains the bispherical structure and therefore occurs within the local tangent plane to the surface of constant \(\eta\) upon which any point in the bulk resides. Starting from these assumptions, Williams~\cite{Williams1986} investigated the twisting behavior of a nematic confined to a spherical bipolar droplet with strong planar anchoring by assuming that the twist angle of the nematic followed loxodromes on surfaces of constant \(\eta\) and could be parameterized in terms of a function \(\beta_0(\eta)\). Using the ansatz \(\beta_0(\eta)\propto \sin{\eta}\), which satisfies the key requirements of the solution, Williams showed that a twisted bipolar configuration lowers the Frank free energy of the system if the elastic constants obey the inequality \(K_3 \lesssim 2.32 (K_1 - K_2)\). Prinsen and van der Schoot~\cite{Prinsen2004} then used these same assumptions to extend this inequality to bipolar and quasi-bipolar spindle-shaped systems. This twisted loxodrome structure does not analytically minimize the Frank free energy of the system. In order to verify the validity of the loxodrome assumption, we therefore investigate numerical solutions for the twist angle \(\beta(\eta,\psi)\) of a director field constrained to be tangential to the bispherical structure.
We use the MATLAB PDE Toolbox\texttrademark~\cite{Matlab} to solve for the twist angle in the \(\psi>0\) region of the spindle and determine the \(\psi<0\) solution using the inversion symmetry of the spindle. We set the boundary conditions \(\beta(0,\psi) = 0\), which is required to ensure that no defects are present along the central axis of revolution, and the derivative \(\beta_{\psi}(\eta,0) = 0\) to ensure the solution obeys the inversion symmetry of the spindle. We have more freedom with the choice of the boundary conditions on the spindle surface. Given that we are interested in the twisted loxodrome structure, we impose that on the surface the twist pattern follows a loxodrome with chosen twist angle \(\beta_0 \) such that \(\beta(\eta_0, \psi) = \beta_0\). \Cref{fig:bulk-numsol}(a) shows contours of constant twist angle in the numerical solution for \(\beta_0 = 7\pi/36\) on a spindle with aspect ratio \(u_0 = 1.6\) (\(\eta_0 = 1.18\)) in the one-constant approximation, which appear consistent with a twisted bipolar structure. In \cref{fig:bulk-numsol}(b) we therefore show how the twist angle of this solution varies with \(\psi\) on surfaces of constant \(\eta\). In this plot, a loxodrome twist pattern corresponds to a horizontal line. We observe that the solutions are very close to being straight lines over much of the range of \(\psi\). Near to the spindle tips, we observe that the twist angle decreases on each of the surfaces, with the most deviation for the values of \(\eta\) furthest from those at which the boundary conditions are imposed. The decrease in twist angle near to the tip is likely due to the system trying to mitigate the large twist energy in this region. Our results verify that the twisted loxodrome solution is a good approximation to the internal twist structure. 
\begin{figure} \centering \includegraphics[width=0.49\textwidth]{bulk-eta-slices.pdf} \caption{(a) Contours of constant \(\beta\) in the numerical solution for the bulk twist angle \(\beta(\eta,\psi)\) for a spindle with size \(\eta_0 = 1.18\) and surface twist angle \(\beta(\eta_0,\psi) = 7\pi/36\). (b) Slices through this solution along surfaces of constant \(\eta\). The colored lines show the numerical solutions while the gray lines indicate the solution \(\beta(\eta,0)\), corresponding to the expected loxodrome twist angle at a given \(\eta\) value.} \label{fig:bulk-numsol} \end{figure} We analyze the bulk Frank free energy of these numerical solutions for different values of \(u_0\), \(\beta_0\) and the ratio of elastic constants \(K_2/K\), where \(K = K_1 = K_3\). As expected, our numerical results are consistent with the scaling analysis of Prinsen and van der Schoot~\cite{Prinsen2004}. That is, for aspect ratios close to one there is a minimum in the free energy at a non-zero twist angle at smaller \(K_2/K\) values, while at higher \(K_2/K\) values the untwisted configuration minimizes the free energy. At larger \(u_0\) values the bound on \(K_2/K\) at which twisting is preferred decreases until it reaches zero at some critical value above which the system remains untwisted. While our numerical results are consistent with the scaling theory, they do not capture the observed behavior of our twisted polymer system. In the experimental system, there is a critical aspect ratio for the onset of twisting above which the twist angle increases with aspect ratio, thereby displaying the opposite trend to the twisting behavior expected in a system governed by the Frank free energy alone. As was the case when considering a spindle-shaped surface, we therefore seek to incorporate additional terms into the free energy to account for the polymer nature of the system under consideration.
In line with our results on the spindle surface and our previous geometric model~\cite{Ansell2019}, we once again impose a length constraint term in addition to the Frank free energy. We take a geometric approach to incorporating the length constraint condition and then examine the optimal twist pattern of the resulting free energy. We now assume that the twist pattern follows loxodromes on surfaces of constant \(\eta\), which we justify using the numerical results presented in \cref{fig:bulk-numsol}. In our previous geometric model of the surface twisting~\cite{Ansell2019}, the length of the meridians of the spindle surface at the critical aspect ratio gave us the fixed length of the twisted loxodrome curves at larger aspect ratios (smaller volumes). We therefore model the bulk structure by mapping the length of meridians on internal surfaces of constant \(\eta\) enclosed within a spindle at the critical aspect ratio to the length of a twisted loxodrome on a surface of constant \(\eta\) in a more elongated twisted spindle. In order to construct the mapping, we introduce the fractional distance \(x\) along the minor axis of a spindle between its central axis and outer surface. The surfaces of constant \(\eta\) that form the bulk structure can therefore all be ascribed an \(x\) value. The length of a meridian on a surface at a given \(x\) value in the spindle with critical aspect ratio \(u^*\) becomes the length of the twisted loxodrome at the same \(x\) value in the smaller twisted spindle with aspect ratio \(u_0\). If the length of a meridian is \(l_m\), the length of a loxodrome with twist angle \(\beta_0\) is \(l_m \sec{\beta_0}\). Our mapping can therefore be expressed as \(l_m(u^*, x) = l_m(u_0,x)\sec{\beta_0(x)}\), which leads to the loxodrome twist angle satisfying \begin{equation} \cos{\beta_0(x)} = \frac{u^*(x^2+u_0^2)\tan^{-1}\qty(x/u_0)}{u_0(x^2+u^{*2})\tan^{-1}\qty(x/u^*)}.
\label{eq:betax} \end{equation} Plots of this expression for a spindle with \(u^*=1.1\) are shown in \cref{fig:internal-twist} (solid lines) for a range of \(u_0\) values. The expression gives the general behavior we would expect from the twist angle in that there is no twisting at the center (\(x=0\)) and the twist angle monotonically increases to a maximum value on the spindle surface. The relation is also consistent with the approximate solution \(\beta(\eta)\propto\sin{\eta}\) used in prior investigations~\cite{Williams1986, Prinsen2004}, as shown in the dashed lines in \cref{fig:internal-twist} for which the proportionality constant has been chosen to match the twist angle at the surface. We note that in this mapping we have fixed the major axis length of the spindle, meaning that changes in aspect ratio are as a direct result of changes in minor axis length. Setting \(x=1\) therefore gives the idealized behavior of the geometric model we developed for our experimental polymer system~\cite{Ansell2019}, in which we had to adapt the model to account for an amount of length reduction in the major axis due to polymer chain folding. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{length-sin-eta-internal.pdf} \caption{Plots of the expression in \cref{eq:betax} showing the predicted loxodrome twist angle in the bulk spindle for \(u^* = 1.1\) and a range of \(u_0\) values (solid lines). The dashed lines correspond to the expression \(\beta(\eta)\propto\sin(\eta)\) with the proportionality constant chosen to match the twist angle at the surface in each case.} \label{fig:internal-twist} \end{figure} From our bulk geometric model, we construct a new total bulk free energy \(F = F_b + F_l\), where \(F_b\) is the bulk part of the Frank free energy. We consider the influence of the saddle-splay energy on the behavior of the model later in this section. 
The constraint term \(F_l\) takes the form \begin{equation} F_l = \int \dd \eta \gamma(\eta)\qty[l_m(\eta_0,\eta)\sec{\beta_0(\eta)}-l_m(\eta^*,\eta)], \end{equation} where \(\gamma(\eta)\) is a Lagrange multiplier that constrains the lengths on each surface of constant \(\eta\) and we convert \(x\) values into \(\eta\) values in the final spindle using \(x = \tan(\eta/2)\cot(\eta_0/2)\). The constraint term ensures that the twist profile is given by the expression in \cref{eq:betax} while \(\gamma(\eta)\) can be determined by solving the Euler-Lagrange equations of the free energy. We explore the behavior of the free energy as a function of the twisted spindle aspect ratio \(u_0>u^*\) for different values of \(u^*\) and \(K_2/K\), with a cutoff \(\eta_{\text{min}} = 10^{-6}\) at the center of the spindle to prevent the free energy from diverging. We observe that there is always a single minimum in the free energy at an optimal aspect ratio \(\bar{u}_0\), the value of which allows us to classify the expected behavior of such a system into one of three regimes. The first case is that the minimum in the free energy occurs when \(\bar{u}_0 = u^*\), meaning that the optimal configuration is for the spindle to remain untwisted. This regime is observed above a particular value of \(K_2/K\) that depends on the value of \(u^*\). In the second case, which occurs when \(K_2/K\) is below a particular value, the optimal aspect ratio is \(\bar{u}_0\to\infty\), meaning that the free energy wants the system to twist as much as possible. In a real system, the physical bulk of the material would prevent the system from twisting this far and would have an effect in determining the optimal aspect ratio. In the final case there is a minimum in the free energy at a finite value of \(\bar{u}_0\), resulting in an optimal aspect ratio and therefore optimal twist angle in the system.
\Cref{fig:bulk-twist} shows the regions of the \(K_2/K\)--\(u^*\) parameter space for which the optimal twist behavior falls into each of these three regimes. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{bulk-K2K-u-params.pdf} \caption{Optimal spindle configurations in the \(K_2/K\)-\(u^*\) parameter space. In region I, the untwisted configuration (\(\bar{u}_0\to u^*\)) is optimal while in region II the spindle twists as much as possible (\(\bar{u}_0\to\infty\)). Region III is the intermediate region in which the optimal aspect ratio takes on a value between these two limiting cases. The plotted points indicate the numerically determined boundaries of each region.} \label{fig:bulk-twist} \end{figure} Comparing the twisting behavior of our new free energy that incorporates the constraint condition to that of the bulk Frank free energy alone, we observe that including the additional term results in twisting being favorable over a wider parameter range. In a spherical bipolar system, Williams~\cite{Williams1986} showed that if \(K_1 = K_3 = K\) the Frank free energy allows a twisted loxodrome solution if \(K_2/K < 0.57\). Following on from this, Prinsen and van der Schoot~\cite{Prinsen2004} showed that this bound is highest for spherical tactoids and that increasing the aspect ratio decreases the maximum value of \(K_2/K\) at which twisting is preferable. By contrast, in our formulation we observe that, while the maximum value of \(K_2/K\) at which twisting can occur varies with the reference aspect ratio \(u^*\), when \(u^* = 1.0\) twisting can occur up to \(K_2/K \lesssim 3.0\) and this value gradually reduces to \(K_2/K\lesssim 1.17\) when \(u^*=3\). In particular, including the length constraint condition means that in the one-constant approximation \(K_2/K = 1\), we would expect to observe a twisted configuration in our system. 
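As a check on the geometric mapping underlying these results, the twist profile of \cref{eq:betax} can be evaluated directly. The sketch below is our own (the function name and the choice \(u^* = 1.1\), \(u_0 = 1.5\) are illustrative):

```python
import numpy as np

def loxodrome_twist(x, u_star, u0):
    """Bulk loxodrome twist angle beta_0(x) from the length-constraint
    mapping cos(beta_0) = u* (x^2 + u0^2) arctan(x/u0)
                          / (u0 (x^2 + u*^2) arctan(x/u*)), for x > 0."""
    c = (u_star * (x**2 + u0**2) * np.arctan(x / u0)
         / (u0 * (x**2 + u_star**2) * np.arctan(x / u_star)))
    return np.arccos(np.clip(c, -1.0, 1.0))

x = np.array([1e-3, 0.25, 0.5, 0.75, 1.0])
beta = loxodrome_twist(x, u_star=1.1, u0=1.5)
```

The twist angle vanishes on the central axis and increases monotonically out to the surface, as in \cref{fig:internal-twist}.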
Having considered the influence of the bulk elasticity on the twisting behavior of our system, we now turn our attention to the influence of the saddle-splay energy on the optimal spindle structure. The saddle-splay energy is a surface term, and therefore depends only on the value of the twist angle at the surface. The expression for the saddle-splay is given by \(F_{SS} = -4 \pi K_{24} \eta_0 \cot{\eta_0} \sin^2{\beta_0(1)} \), where \(\beta_0(x)\) is defined in \cref{eq:betax} and we have chosen the zero-point of the energy such that the saddle-splay contribution is zero when there is no twisting. We know that the saddle-splay elastic constant obeys \(\abs{K_{24}} \leq 2 K\). If we assume the value of \(K_{24}\) is of the same order of magnitude as the bulk elastic constants, the contribution from the saddle-splay energy is of the same order of magnitude as the bulk free energy. Given that we expect \(K_{24}\) to be positive, the saddle-splay energy favors a highly twisted configuration. We observe that including the saddle-splay term in our model raises the bound on \(K_2/K\) at which the system becomes as twisted as possible (\(\bar{u}_0\to\infty\)). The bound on \(K_2/K\) at which the system is able to adopt a twisted configuration initially decreases as \(K_{24}/K\) increases. Above some critical value of \(K_{24}/K\), the intermediate regime \(u^*<\bar{u}_0<\infty\) no longer exists and a single curve separates the untwisted and maximally twisted regions in the \(K_{24}/K\)--\(K_2/K\) parameter space. \Cref{fig:ss-twist} shows the influence of \(K_{24}/K\) on the twisting behavior for a spindle with reference aspect ratio \(u^* = 1.1\) in which we observe that the intermediate regime does not exist for \(K_{24}/K>0.5\). 
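A minimal numerical illustration of the sign of this surface term (the function below is our own; \(\eta_0 = 1.18\) matches the spindle used in \cref{fig:bulk-numsol}):

```python
import numpy as np

def saddle_splay_energy(K24, eta0, beta_surface):
    """Saddle-splay contribution F_SS = -4 pi K24 eta0 cot(eta0) sin^2(beta),
    with the zero point chosen so that the untwisted state has F_SS = 0."""
    return -4.0 * np.pi * K24 * eta0 / np.tan(eta0) * np.sin(beta_surface)**2

eta0 = 1.18                                    # cot(eta0) > 0 for eta0 < pi/2
untwisted = saddle_splay_energy(1.0, eta0, 0.0)
weak = saddle_splay_energy(1.0, eta0, 0.3)
strong = saddle_splay_energy(1.0, eta0, 0.6)
```

For \(K_{24} > 0\) the contribution becomes increasingly negative as the surface twist grows, which is why a positive saddle-splay constant favors a highly twisted configuration.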
Across reference aspect ratios explored in the range \(1\leq u^* \leq 3\) we observe the same general behavior as in \cref{fig:ss-twist}, with the critical value of \(K_{24}/K\) increasing with \(u^*\) up to \(K_{24}/K = 0.9\) at \(u^* = 3\). We therefore find that the saddle-splay term does indeed make the twisted structure more preferable within our system. \begin{figure} \centering \includegraphics[width = 0.45\textwidth]{K2-K24-ui-11.pdf} \caption{Optimal spindle configurations for different values of the saddle-splay constant \(K_{24}/K\) for a spindle with \(u^* = 1.1\). As in \cref{fig:bulk-twist}, region I corresponds to the optimal structure being the untwisted configuration \(\bar{u}_0 = u^*\), region II corresponds to the optimal structure being \(\bar{u}_0\to\infty \), and in region III, which exists only at smaller \(K_{24}/K\) values, the optimal structure is between these two limits.} \label{fig:ss-twist} \end{figure} \section{Conclusions} We have investigated the twisting behavior of spindle-shaped systems with bipolar nematic ordering, with a particular focus on developing a model of liquid crystalline polymer particles that display twisting behavior during deswelling. We have demonstrated that including a constraint on the length of the integral curves of the system produces a model that captures the behavior of this system, thereby providing an energetic pathway to the twisted loxodrome pattern that we previously derived using a geometric approach~\cite{Ansell2019}. It is the inclusion of this length constraint that results in the model predicting remarkably different behavior to that expected for a typical nematic system that is governed by the Frank free energy alone. We demonstrated that an exact twisted loxodrome solution minimizes our total free energy on a spindle-shaped surface, subject to the one-constant condition (\(K_1 = K_3\)).
Allowing small deviations away from this condition, or small perturbations in the shape-profile of the spindle, leads to solutions for which the twisted loxodrome is the leading order term and the correction terms are small. In a bulk bipolar nematic confined to a spindle-shaped region, we showed that if the twist pattern on the surface follows loxodromes then the bulk twisting structure is well approximated by twisted loxodromes on surfaces of constant \(\eta\). By developing a geometric model of this twisted structure in which the loxodrome twist angle on each surface of constant \(\eta\) is determined by a length constraint condition, we have formulated a model for which twisting behavior is optimal in the system over a larger parameter range than in a nematic system governed by the Frank free energy alone. Crucially, the model we developed in section 5 also captures the shape change and twisting behavior of the polymer system~\cite{Ansell2019} and we look forward to future exploration of the internal structures of these spindles to test the predicted model. \section*{Conflicts of interest} There are no conflicts to declare. \section*{Acknowledgements} H.S.A. and R.D.K. were supported by NSF MRSEC Grant DMR-1720530 and a Simons Investigator Grant from the Simons Foundation to R.D.K.
\section{Introduction} Recently, deep neural networks have achieved state-of-the-art results in a number of machine learning tasks \citep{lecun2015deep}. Training such networks is computationally intensive and often requires dedicated and expensive hardware. Furthermore, the resulting networks often require a considerable amount of memory to be stored. Using a Pascal Titan X GPU, the popular AlexNet and VGG-16 models require 13 hours and 7 days, respectively, to train, while requiring 200MB and 600MB, respectively, to store. The large memory requirements limit the use of DNNs in embedded systems and portable devices such as smartphones, which are now ubiquitous. A number of approaches have been proposed to reduce the DNN size during training time, often with little or no degradation to classification performance. Approaches include introducing Bayesian, sparsity-inducing priors \citep{louizos2017bayesian, blundell2015weight, molchanov2017variational} and binarization \citep{hou2016loss, courbariaux2016binarized}. Other methods include the hashing trick used in \citet{chen2015compressing}, tensorisation \citep{novikov2015tensorizing} and efficient matrix factorisations \citep{yang2015deep}. However, trained DNN models are used by researchers and developers who do not have dedicated hardware to train them, often as general feature extractors for transfer learning. In such settings it is important to introduce a \textit{cheap} compression method, i.e., one that can be implemented as a postprocessing step with little or no retraining. Some first work in this direction has been done in \citep{kim2015compression, han2015deep, han2015learning}, although these methods still require a lengthy retraining procedure. Closer to our approach, the authors in \citet{aghasi2016net} recently proposed a convexified layerwise pruning algorithm termed Net-Trim.
Building upon Net-Trim, the authors in \citet{dong2017learning} propose LOBS, an algorithm for layerwise pruning by loss function approximation. Pruning a neural network layer introduces a perturbation to the latent signal representations generated by that layer. As the perturbed signal passes through layers of non-linear projections, the perturbation could become arbitrarily large. DNN robustness to hidden layer perturbations has been investigated for random noise in \citet{raghu2016expressive}. For the case of pruning, the authors of \citet{aghasi2016net} and \citet{dong2017learning} conduct a theoretical analysis using the Lipschitz properties of DNNs, showing the stability of the latent representations, over the training set, after pruning. The methods employed have connections to recent work \citep{sokolic2017robust, bartlett2017spectrally, neyshabur2017pac} that has used the Lipschitz properties to analyze the Generalization Error (GE) of DNNs, a more useful performance measure. \subsection{Contributions} In this work we introduce a cheap pruning algorithm for dense layers of DNNs. We also conduct a theoretical analysis of how pruning affects the Generalization Error of the trained classifier. \begin{itemize} \item We show that the sparsity-inducing objective proposed in \citet{aghasi2016net} can be cast as a difference of convex functions problem that has an efficient solution. For a fully connected layer with input dimension $d_1$, output dimension $d_2$ and $N$ training samples, Net-Trim and LOBS scale like $\mathcal{O}(Nd_1^3)$ and $\mathcal{O}((N+d_2)d_1^2)$, respectively. Our iterative algorithm scales like $\mathcal{O}(K(N+\frac{Nk}{N+\sqrt{k}}) \log(\frac{1}{\epsilon}) d_1 d_2)$, where $\epsilon$ is the precision of the solution, $k$ is related to the Lipschitz and strong convexity constants, $d_2 \ll d_1$ and $K$ is the outer iteration number. Empirically, our algorithm is orders of magnitude faster than competing approaches.
We also extend our formulation to allow retraining a layer with any convex regulariser. \item We build upon the work of \citet{sokolic2017robust} to bound the GE of a DNN for the case of bounded perturbations to the hidden layer weights, of which pruning is a special case. Our theoretical analysis provides a principled way of pruning while managing the GE. In sharp contrast to the analysis of \citet{aghasi2016net} and \citet{dong2017learning}, our analysis correctly predicts the previously observed phenomenon that accuracy degrades exponentially with the remaining depth of the pruned layer. \end{itemize} Experiments on common feedforward architectures show that our method is orders of magnitude faster than competing pruning methods, while allowing for a controlled increase in GE. \subsection{Notation and Definitions} We use the following notation in the sequel: matrices, column vectors, scalars and sets are denoted by boldface upper-case letters ($\boldsymbol{X}$), boldface lower-case letters ($\boldsymbol{x}$), italic letters ($x$) and calligraphic upper-case letters ($\mathcal{X}$), respectively. The covering number of $\mathcal{X}$ with $d$-metric balls of radius $\rho$ is denoted by $\mathcal{N}(\mathcal{X};d,\rho)$. A $C_M$-regular $k$-dimensional manifold, where $C_M$ is a constant that captures ``intrinsic'' properties, is one that has a covering number $\mathcal{N}(\mathcal{X};d,\rho)=(\frac{C_M}{\rho})^k$. \section{Our formulation} \subsection{DC decomposition} We consider a classification problem, where we observe a vector $\boldsymbol{x} \in \mathcal{X} \subseteq \mathbb{R}^N$ that has a corresponding class label $y \in \mathcal{Y}$. The set $\mathcal{X}$ is called the input space, $\mathcal{Y} = \{1,2,\dots,N_{\mathcal{Y}}\}$ is called the label space and $N_{\mathcal{Y}}$ denotes the number of classes. The sample space is denoted by $\mathcal{S}=\mathcal{X} \times \mathcal{Y}$ and an element of $\mathcal{S}$ is denoted by $s = (\boldsymbol{x},y)$.
We assume that samples from $\mathcal{S}$ are drawn according to a probability distribution $P$ defined on $\mathcal{S}$. A training set of $m$ samples drawn from $P$ is denoted by $S_m = \{s_i\}^m_{i=1}=\{(\boldsymbol{x}_i,y_i)\}^m_{i=1}$. We start from the Net-Trim formulation and show that it can be cast as a difference of convex functions problem. For each training signal $\boldsymbol{x} \in \mathbb{R}^{N}$ we also assume that we have access to the inputs $\boldsymbol{a} \in \mathbb{R}^{d_1} $ and the outputs $\boldsymbol{b} \in \mathbb{R}^{d_2} $ of the fully connected layer, with a rectifier non-linearity $\rho(x)=\max(0,x)$. The optimisation problem that we want to solve is then \begin{equation} \min_{\boldsymbol{U}} \frac{1}{m} \sum_{s_j \in \mathcal{S}_m}||\rho(\boldsymbol{U}^{T}\boldsymbol{a}_j)-\boldsymbol{b}_j||^2_2+ \lambda \Omega (\boldsymbol{U}), \end{equation} where $\lambda$ is the sparsity parameter. The term $||\rho(\boldsymbol{U}^{T}\boldsymbol{a}_j)-\boldsymbol{b}_j||^2_2$ ensures that the nonlinear projection remains close to the original for the training signals. The term $ \lambda \Omega (\boldsymbol{U}) $ is the convex regulariser which imposes the desired structure on the weight matrix $\boldsymbol{U}$. The objective in Equation 1 is non-convex. We show that the optimisation of this objective can be cast as a difference of convex functions (DC) problem. 
We assume just one training sample $\boldsymbol{x} \in \mathbb{R}^{N}$, for simplicity, with latent representations $\boldsymbol{a} \in \mathbb{R}^{d_1} $ and $\boldsymbol{b} \in \mathbb{R}^{d_2} $; the objective then expands as \begin{equation} \begin{split} & ||\rho(\boldsymbol{U}^{T}\boldsymbol{a})-\boldsymbol{b}||^2_2+ \lambda\Omega (\boldsymbol{U}) \\ & = \sum_{i}[\rho(\boldsymbol{u_i}^{T}\boldsymbol{a})-\boldsymbol{b_i} ]^2+\lambda \Omega (\boldsymbol{U}) \\ & = \sum_{i}[\rho^{2}(\boldsymbol{u_i}^{T}\boldsymbol{a})-2\rho(\boldsymbol{u_i}^{T}\boldsymbol{a})\boldsymbol{b_i}+\boldsymbol{b_i}^{2} ]+\lambda \Omega (\boldsymbol{U}) \\ & = \sum_{i}[ \rho^{2}(\boldsymbol{u_i}^{T}\boldsymbol{a})+\boldsymbol{b_i}^{2} ]+\lambda \Omega (\boldsymbol{U}) +\sum_{i}[-2\boldsymbol{b_i}\rho(\boldsymbol{u_i}^{T}\boldsymbol{a})] \\ & = \sum_{i}[ \rho^{2}(\boldsymbol{u_i}^{T}\boldsymbol{a})+\boldsymbol{b_i}^{2} ]+\lambda \Omega (\boldsymbol{U}) \\ & +\sum_{\substack {i \\ b_i<0} }[-2\boldsymbol{b_i}\rho(\boldsymbol{u_i}^{T}\boldsymbol{a})] + \sum_{\substack {i \\ b_i\geq0} }[-2\boldsymbol{b_i}\rho(\boldsymbol{u_i}^{T}\boldsymbol{a})].\\ \end{split} \end{equation} Notice that after the split the first sum ($b_i < 0$) is convex while the second ($b_i \geq 0$) is concave. We note that $b_i \geq 0$ by definition of the ReLU, so the first sum is in fact empty, and set \begin{equation} g(\boldsymbol{U};\boldsymbol{x}) = \sum_{i}[ \rho^{2}(\boldsymbol{u_i}^{T}\boldsymbol{a})+\boldsymbol{b_i}^{2} ], \end{equation} \begin{equation} h(\boldsymbol{U};\boldsymbol{x}) = \sum_{\substack {i \\ b_i>0} }[2\boldsymbol{b_i}\rho(\boldsymbol{u_i}^{T}\boldsymbol{a})]. \end{equation} Then by summing over all the samples we get \begin{equation} \begin{split} f(\boldsymbol{U}) &= \sum_{j}g(\boldsymbol{U};\boldsymbol{x}_j)+\lambda \Omega (\boldsymbol{U}) - \sum_{j} h(\boldsymbol{U};\boldsymbol{x}_j) \\ &= g(\boldsymbol{U})+\lambda \Omega (\boldsymbol{U}) - h(\boldsymbol{U}), \\ \end{split} \end{equation} which is a difference of convex functions. 
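For concreteness, the decomposition above can be checked numerically. The following sketch uses illustrative toy dimensions and an $\ell_1$ regulariser (all sizes and the value of $\lambda$ are assumptions for the example) and verifies that $g + \lambda\Omega - h$ reproduces the single-sample objective of Equation 1:

```python
import numpy as np

rng = np.random.default_rng(0)
d1, d2, lam = 8, 4, 0.1                    # toy layer sizes and sparsity parameter
a = rng.normal(size=d1)                    # layer input
b = np.maximum(0.0, rng.normal(size=d2))   # layer output; b >= 0 since it is post-ReLU
U = rng.normal(size=(d1, d2))

relu = lambda x: np.maximum(0.0, x)
omega = lambda M: np.abs(M).sum()          # l1 regulariser Omega(U)

# Original single-sample objective: ||rho(U^T a) - b||^2 + lam * Omega(U)
f_orig = np.sum((relu(U.T @ a) - b) ** 2) + lam * omega(U)

# DC split: g is convex, h is convex (since b_i >= 0), and f = g + lam*Omega - h
g = np.sum(relu(U.T @ a) ** 2 + b ** 2)
h = np.sum(2.0 * b * relu(U.T @ a))
f_dc = g + lam * omega(U) - h

assert np.isclose(f_orig, f_dc)
```

The identity holds exactly for any $\boldsymbol{U}$, since the split only regroups the terms of the expanded square.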
The rectifier nonlinearity is non-smooth, but we can alleviate that by assuming a smooth approximation. A common choice for this task is $\rho(x) = \frac{1}{\beta}\text{log}(1+\text{exp}(\beta x))$, with $\beta$ a positive constant. \subsection{Optimisation} It is well known that DC programs have efficient optimisation algorithms. We propose to use the DCA algorithm of \citet{tao1997convex}. DCA is an iterative algorithm that consists of solving, at each iteration, the convex optimisation problem obtained by linearizing $h(\cdot)$ (the concave part of $f = g + \lambda \Omega - h$) around the current solution. Although DCA is only guaranteed to reach local minima, the authors of \citet{tao1997convex} state that DCA often converges to the global minimum, and it has been used successfully to optimise a fully connected DNN layer in \citet{fawzi2015dictionary}. At iteration $k$ of DCA, the linearized optimisation problem is given by \begin{equation} \argmin_{\boldsymbol{U}}\{g(\boldsymbol{U})+\lambda \Omega (\boldsymbol{U})-Tr(\boldsymbol{U}^{T}\nabla h(\boldsymbol{U}^k))\}, \end{equation} where $\boldsymbol{U}^{k}$ is the solution estimate at iteration $k$. The detailed procedure is then given in Algorithms 1 and 2. We assume that the regulariser is convex but possibly non-smooth, in which case the optimisation can be performed using proximal methods. \begin{algorithm}[h!] \caption{FeTa (Fast and Efficient Trimming Algorithm)} \label{alg:algorithm1} \begin{algorithmic}[1] \STATE Choose initial point: $\boldsymbol{U}^0$ \FOR {k = 1,...,K} \STATE Compute $C \gets \nabla h(\boldsymbol{U}^k)$. \STATE Solve with Algorithm 2 the convex optimisation problem: \begin{equation} \boldsymbol{U}^{k+1} \gets \argmin_{\boldsymbol{U}}\{g(\boldsymbol{U})+\lambda \Omega (\boldsymbol{U})-Tr(\boldsymbol{U}^{T}C)\} \end{equation} \ENDFOR \STATE If $\boldsymbol{U}^{k+1} \approx \boldsymbol{U}^{k}$ return $\boldsymbol{U}^{k+1}$. \end{algorithmic} \end{algorithm} \begin{algorithm}[h!] 
\caption{Acc-Prox-SVRG} \label{alg:algorithm3} \begin{algorithmic}[1] \STATE \textbf{Initialization}: $\tilde{\boldsymbol{x} }_0 \gets \boldsymbol{U}^k , \beta , \eta $ \FOR {s = 1,...,S} \STATE $\tilde{\boldsymbol{u} } = \nabla g(\tilde{\boldsymbol{x} }_s)$ \STATE $\boldsymbol{x}_1 = \boldsymbol{y}_1 = \tilde{\boldsymbol{x} }_s$ \FOR {t = 1,2,...,T} \STATE Choose a randomly drawn minibatch $(\boldsymbol{A},\boldsymbol{B})$. \STATE $\boldsymbol{u}_t = \nabla g_{\boldsymbol{A},\boldsymbol{B}}(\boldsymbol{y}_t) - \nabla g_{\boldsymbol{A},\boldsymbol{B}}(\tilde{\boldsymbol{x} }_s)+\tilde{\boldsymbol{u} }$ \STATE $\boldsymbol{x}_{t+1} = \text{prox}_{\eta \lambda \Omega}(\boldsymbol{y}_t - \eta \boldsymbol{u}_t)$ \STATE $\boldsymbol{y}_{t+1} = \boldsymbol{x}_{t+1} + \beta(\boldsymbol{x}_{t+1}-\boldsymbol{x}_t)$ \ENDFOR \STATE $\tilde{\boldsymbol{x} }_{s+1} = \boldsymbol{x}_{T+1}$ \ENDFOR \STATE Return $\boldsymbol{U}^{k+1} \gets \tilde{\boldsymbol{x} }_{S+1}$ \end{algorithmic} \end{algorithm} In order to solve the linearized problem we propose to use Accelerated Proximal SVRG (Acc-Prox-SVRG), which was presented in \citet{nitanda2014stochastic}. We detail this method in Algorithm 2. At each iteration a minibatch $(\boldsymbol{A},\boldsymbol{B})$ is drawn. The gradient of the smooth part is calculated and the algorithm takes a step in that direction with step size $\eta$. Then the proximal operator for the non-smooth regulariser $\lambda \Omega(\cdot)$ is applied to the result. The hyperparameters for Acc-Prox-SVRG are the acceleration parameter $\beta$ and the gradient step $\eta$. We have found that in our experiments, using $\beta = 0.95$ and $\eta \in \{0.001 , 0.0001 \}$ gives the best results. We name our algorithm FeTa, Fast and Efficient Trimming Algorithm. \section{Generalization Error} \subsection{Generalization Error of Pruned Layer} Having optimized our pruned layer for the training set we want to see if it is stable for the test set. 
We denote $f^1(\cdot,\boldsymbol{W}^1)$ the original representation and $f^2(\cdot,\boldsymbol{W}^2)$ the pruned representation. We assume that after training $\forall s_i \in \mathcal{S}_m \: ||f^1(\boldsymbol{a_i},\boldsymbol{W}^1)-f^2(\boldsymbol{a_i},\boldsymbol{W}^2)||_2^2 \leq C_1$. Second, we assume that $\forall s \in \mathcal{S} \; \exists s_i \in \mathcal{S}_m$ such that $||\boldsymbol{a}-\boldsymbol{a}_i||^2_2 \leq \epsilon$. Third, the linear operators in $\boldsymbol{W}^1$, $\boldsymbol{W}^2$ are frames with upper frame bounds $B_1$, $B_2$ respectively. \begin{theorem} For any testing point $s \in \mathcal{S}$, the distance between the original representation $f^1(\boldsymbol{a},\boldsymbol{W}^1)$ and the pruned representation $f^2(\boldsymbol{a},\boldsymbol{W}^2)$ is bounded by $||f^1(\boldsymbol{a},\boldsymbol{W}^1)-f^2(\boldsymbol{a},\boldsymbol{W}^2)||^2_2 \leq C_2$, where $C_2 = C_1 + (B_1+B_2)\epsilon$. \end{theorem} The detailed proof can be found in Appendix A. \subsection{Generalization Error of Classifier} In this section we use tools from the robustness framework of \citet{xu2012robustness} to bound the generalization error of the new architecture induced by our pruning. We consider DNN classifiers defined as \begin{equation} g(\boldsymbol{x}) = \max_{i \in [N_y] } (f(\boldsymbol{x}))_i , \end{equation} where $(f(\boldsymbol{x}))_i$ is the $i$-th element of the $N_{y}$-dimensional output of a DNN $f:\mathbb{R}^N \rightarrow \mathbb{R}^{N_y}$. We assume that $f(\boldsymbol{x})$ is composed of $L$ layers \begin{equation} f(\boldsymbol{x})=f_L(f_{L-1}(...f_1(\boldsymbol{x},\boldsymbol{W}_1),...\boldsymbol{W}_{L-1}),\boldsymbol{W}_L) , \end{equation} where $f_l(\cdot,\boldsymbol{W}_l)$ represents the $l$-th layer with parameters $\boldsymbol{W}_l$, $l = 1,...,L$. The output of the $l$-th layer is denoted $\boldsymbol{z}^l$, i.e. $\boldsymbol{z}^l=f_l(\boldsymbol{z}^{l-1},\boldsymbol{W}_l)$. 
The input layer corresponds to $\boldsymbol{z}^{0} = \boldsymbol{x}$ and the output of the last layer is denoted by $\boldsymbol{z} = f(\boldsymbol{x})$. We then need the following two definitions of the classification margin and the score that we take from \citet{sokolic2017robust}. These will be useful later for measuring the generalization error. \begin{definition} (\normalfont{Score}). For a classifier $g(\boldsymbol{x})$ a training sample $s_i = (\boldsymbol{x}_i,y_i)$ has a score \begin{equation} o(s_i)=o(\boldsymbol{x}_i,g(\boldsymbol{x}_i))=\min_{j \neq g(\boldsymbol{x}_i)}\sqrt{2}(\delta_{g(\boldsymbol{x}_i)}-\delta_{j})^{T}f(\boldsymbol{x}_i), \end{equation} where $\delta_i \in \mathbb{R}^{N_y}$ is the Kronecker delta vector with $(\delta_i)_i=1$, and $g(\boldsymbol{x}_i)$ is the output class for $s_i$ from classifier $g(\boldsymbol{x})$, which can also be $g(\boldsymbol{x}_i) \neq y_i$. \end{definition} \begin{definition} (\normalfont{Training Sample Margin}). For a classifier $g(\boldsymbol{x})$ a training sample $s_i = (\boldsymbol{x}_i,y_i)$ has a classification margin $\gamma(s_i)$ measured by the $l_2$ norm if \begin{equation} g(\boldsymbol{x})=g(\boldsymbol{x}_i); \;\;\; \forall \boldsymbol{x} : ||\boldsymbol{x}-\boldsymbol{x}_i||_2< \gamma(s_i). \end{equation} \end{definition} The classification margin of a training sample $s_i$ is the radius of the largest metric ball (induced by the $l_2$ norm) in $\mathcal{X}$ centered at $\boldsymbol{x}_i$ that is contained in the decision region associated with the classification label $g(\boldsymbol{x}_i)$. Note that it is possible for a classifier to misclassify a training point, i.e. $g(\boldsymbol{x}_i) \neq y_i$. We then restate a useful result from \citet{sokolic2017robust}. \begin{corollary} Assume that $\mathcal{X}$ is a (subset of) $C_M$-regular $k$-dimensional manifold, where $\mathcal{N}(\mathcal{X};d,\rho) \leq (\frac{C_M}{\rho})^k$. 
Assume also that the DNN classifier $g(\boldsymbol{x})$ achieves a lower bound to the classification score $o(\tilde{s}) < o(s_i), \; \forall s_i \in S_m$ and take $l(g(\boldsymbol{x}_i),y_i)$ to be the $0-1$ loss. Then for any $\delta > 0$, with probability at least $1-\delta$, \begin{equation} \text{GE}(g) \leq A \cdot (\gamma)^{-\frac{k}{2}}+B, \end{equation} where $A = \sqrt{ \frac{\log{(2)} \cdot N_y \cdot 2^{k+1} \cdot (C_M)^k}{ m } }$ and $B = \sqrt {\frac{2\log{1/\delta}}{m}}$ can be considered constants related to the data manifold and the training sample size, and $\gamma = \frac{o(\tilde{s})}{\prod_i ||\boldsymbol{W}_i||_2 }$. \end{corollary} We are now ready to state our main result. \begin{theorem} Assume that $\mathcal{X}$ is a (subset of) $C_M$-regular $k$-dimensional manifold, where $\mathcal{N}(\mathcal{X};d,\rho) \leq (\frac{C_M}{\rho})^k$. Assume also that the DNN classifier $g_1(\boldsymbol{x})$ achieves a lower bound to the classification score $o(\tilde{s}) < o(s_i), \; \forall s_i \in S_m$ and take $l(g(\boldsymbol{x}_i),y_i)$ to be the $0-1$ loss. Furthermore assume that we prune classifier $g_1(\boldsymbol{x})$ on layer $i_{\star}$ using Algorithm 1, to obtain a new classifier $g_2(\boldsymbol{x})$. Then for any $\delta > 0$, with probability at least $1-\delta$, when $(\gamma-\sqrt{C_2} \cdot \frac{ \prod_{i > i_{\star}}||\boldsymbol{W}_i||_2}{ \prod_i||\boldsymbol{W}_i||_2}) > 0$, \begin{equation} \text{GE}(g_2) \leq A \cdot (\gamma-\sqrt{C_2} \cdot \frac{ \prod_{i > i_{\star}}||\boldsymbol{W}_i||_2}{ \prod_i||\boldsymbol{W}_i||_2})^{-\frac{k}{2}}+B, \end{equation} where $A = \sqrt{ \frac{\log{(2)} \cdot N_y \cdot 2^{k+1} \cdot (C_M)^k}{ m } }$ and $B = \sqrt {\frac{2\log{1/\delta}}{m}}$ can be considered constants related to the data manifold and the training sample size, and $\gamma = \frac{o(\tilde{s})}{\prod_i ||\boldsymbol{W}_i||_2 }$. \end{theorem} The detailed proof can be found in Appendix B. 
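To illustrate the role of the correction term, the following toy numpy sketch evaluates the shrunk margin of the theorem for each possible pruned layer. All quantities here are made-up illustrative values, not measurements: the weight matrices are scaled identities with spectral norms greater than one, and the score lower bound $o(\tilde{s})$ and pruning error $C_2$ are assumed constants.

```python
import numpy as np

# Illustrative spectral norms ||W_i||_2 for a 4-layer network (assumed values)
Ws = [s * np.eye(16) for s in (2.0, 1.5, 1.5, 1.2)]
spec = [np.linalg.norm(W, 2) for W in Ws]
L = len(spec)

score, C2 = 5.0, 0.01        # assumed score lower bound o(s~) and pruning error
prod_all = np.prod(spec)
gamma = score / prod_all     # unpruned margin from the corollary

def shrunk_margin(i_star):
    # gamma - sqrt(C2) * prod_{i > i_star} ||W_i||_2 / prod_i ||W_i||_2
    return gamma - np.sqrt(C2) * np.prod(spec[i_star + 1:]) / prod_all

margins = [shrunk_margin(i) for i in range(L)]

# Pruning always shrinks the margin, and (for spectral norms > 1) layers
# closer to the input shrink it more, giving a larger GE bound ~ margin^{-k/2}
assert all(0 < m < gamma for m in margins)
assert all(margins[i] < margins[i + 1] for i in range(L - 1))
```

The monotonicity in the last assertion is exactly the exponential loss of robustness with remaining depth discussed below; it holds here because every assumed spectral norm exceeds one.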
The bound depends on two constants related to intrinsic properties of the data manifold, the regularity constant $C_M$ and the intrinsic data dimensionality $k$. In particular, the bound depends exponentially on the intrinsic data dimensionality $k$. Thus more complex datasets are expected to lead to less robust DNNs. This has recently been observed empirically in \citet{bartlett2017spectrally}. The bound also depends on the spectral norm of the hidden layers $||\boldsymbol{W}_i ||_2$. Small spectral norms lead to a larger base in $(\cdot)^{-\frac{k}{2}} $ and thus to tighter bounds. With respect to pruning, our result is quite pessimistic, as the pruning error $\sqrt{C_2}$ is multiplied by the factor $\prod_{i > i_{\star}}||\boldsymbol{W}_i||_2$. Thus in our analysis the GE grows exponentially with respect to the remaining layer depth of the perturbed layer. This is in line with previous work \citet{raghu2016expressive} and \citet{han2015learning} that demonstrates that layers closer to the input are much less robust compared to layers close to the output. Our algorithm is applied to the fully connected layers of a DNN, which are much closer to the output compared to convolutional layers. We can extend the above bound to include pruning of multiple layers. \begin{theorem} Assume that $\mathcal{X}$ is a (subset of) $C_M$-regular $k$-dimensional manifold, where $\mathcal{N}(\mathcal{X};d,\rho) \leq (\frac{C_M}{\rho})^k$. Assume also that the DNN classifier $g_1(\boldsymbol{x})$ achieves a lower bound to the classification score $o(\tilde{s}) < o(s_i), \; \forall s_i \in S_m$ and take $l(g(\boldsymbol{x}_i),y_i)$ to be the $0-1$ loss. Furthermore assume that we prune classifier $g_1(\boldsymbol{x})$ on all layers using Algorithm 1, to obtain a new classifier $g_2(\boldsymbol{x})$. 
Then for any $\delta > 0$, with probability at least $1-\delta$, when $(\gamma- \frac{ \sum_{i=0}^L \sqrt{C_{i2} } \prod_{j=i+1}^L||\boldsymbol{W}_j||_2}{ \prod_i||\boldsymbol{W}_i||_2}) > 0$, \begin{equation} \text{GE}(g_2) \leq A \cdot (\gamma-\frac{ \sum_{i=0}^L \sqrt{C_{i2} } \prod_{j=i+1}^L||\boldsymbol{W}_j||_2}{ \prod_i||\boldsymbol{W}_i||_2})^{-\frac{k}{2}}+B, \end{equation} where $A = \sqrt{ \frac{\log{(2)} \cdot N_y \cdot 2^{k+1} \cdot (C_M)^k}{ m } }$ and $B = \sqrt {\frac{2\log{1/\delta}}{m}}$ can be considered constants related to the data manifold and the training sample size, and $\gamma = \frac{o(\tilde{s})}{\prod_i ||\boldsymbol{W}_i||_2 }$. \end{theorem} The detailed proof can be found in Appendix C. The bound predicts that when pruning multiple layers the GE will be much greater than the sum of the GEs for each individual pruning. We note also the generality of our result; even though we have assumed a specific form of pruning, the GE bound holds for any type of bounded perturbation to a hidden layer. \section{Experiments} We run a number of experiments to compare FeTa with LOBS and NetTrim-ADMM. All experiments were run on a MacBook Pro with CPU 2.8GHz Intel Core i7 and RAM 16GB 1600 MHz DDR3. \subsection{Time Complexity} First, we compare the execution time of FeTa with that of LOBS and NetTrim-ADMM. We set $\Omega (\boldsymbol{U}) = ||\boldsymbol{U}||_1$ and aim for $95\%$ sparsity. We set $d_1$ to be the input dimensions, $d_2$ to be the output dimensions and $N$ to be the number of training samples. Assuming that each $g(\boldsymbol{U};\boldsymbol{x}_j)$ is $L$-Lipschitz smooth and $g(\boldsymbol{U})$ is $\mu$-strongly convex, if we optimise for an $\epsilon$-optimal solution and set $k = L/\mu$, $\text{FeTa}$ scales like $\mathcal{O}(K(N+\frac{Nk}{N+\sqrt{k}}) \text{log}(\frac{1}{\epsilon}) d_1 d_2)$. 
We obtain this by multiplying the number of outer iterations $K$ with the number of gradient evaluations required to reach an $\epsilon$ good solution in inner Algorithm 2, and finally multiplying with the gradient evaluation cost. Conversely, LOBS scales like $\mathcal{O}((N+d_2)d_1^2)$ while NetTrim-ADMM scales like $\mathcal{O}(N d_1^3)$ due to the required Cholesky factorisation. This gives a computational advantage to our algorithm in settings where the input dimension is large. We validate this by constructing a toy dataset with $d_2 =10$ , $d_1 =\{2000:100:3000\}$ and $N =1000$. The samples $\boldsymbol{a} \in \mathbb{R}^{d_1}$ and $\boldsymbol{b} \in \mathbb{R}^{d_2}$ are generated with i.i.d. Gaussian entries. We plot in Figure 1 the results, which are in line with the theoretical predictions. \subsection{Classification Accuracy} \begin{figure}[t!] \includegraphics[scale = 0.55]{Time_complexity_temp.png} \centering \caption{\textbf{Time Complexity}: We plot the calculation time for FeTa, NetTrim and LOBS for the toy dataset. We see that the computation time is in line with theoretical predictions. FeTa scales roughly as $\mathcal{O}(Nd_1d_2)$ while NetTrim and LOBS scale like $\mathcal{O}(Nd_1^3)$ and $\mathcal{O}((N+d_2)d_1^2)$. As the size of the input dimensions increases FeTa becomes orders of magnitude faster than the competing approaches.} \end{figure} \subsubsection{Sparse Regularisation} In this section we perform experiments on the proposed compression scheme with feedforward neural networks. We compare the original full-precision network (without compression) with the following compressed networks: (i) $\text{FeTa}$ with $\Omega (\boldsymbol{U}) = ||\boldsymbol{U}||_1$ (ii) Net-Trim (iii) LOBS (iv) Hard Thresholding. We refer to the respective papers for Net-Trim and LOBS. 
Hard Thresholding is defined as $F(\boldsymbol{x})=\boldsymbol{x} \odot I(|\boldsymbol{x}|>t)$, where $I$ is the elementwise indicator function, $\odot$ is the Hadamard product and $t$ is a positive constant. Experiments were performed on two commonly used datasets: \begin{enumerate} \item \textit{MNIST}: This contains $28 \times 28$ gray images from ten digit classes. We use 55000 images for training, another 5000 for validation, and the remaining 10000 for testing. We use the LeNet-5 model: \begin{equation} \begin{split} &\text{Input} \rightarrow (1 \times 6C5) \rightarrow MP2 \rightarrow (6 \times 16C5) \\ & \rightarrow MP2 \rightarrow 120FC \rightarrow 84FC \rightarrow 10SM \rightarrow \text{Output}, \end{split} \end{equation} where $C5$ is a $5 \times 5$ ReLU convolution layer, $MP2$ is a $2 \times 2$ max-pooling layer, $FC$ is a fully connected layer and $SM$ is a linear softmax layer. \item \textit{CIFAR-10}: This contains 60000 $32 \times 32$ color images for ten object classes. We use 50000 images for training and the remaining 10000 for testing. The training data is augmented to 200000 images by random cropping to $24 \times 24$ pixels, random flips from left to right, and contrast and brightness distortions. We use a smaller variant of the AlexNet model: \begin{equation} \begin{split} &\text{Input} \rightarrow (3 \times 64C5) \rightarrow MP2 \rightarrow (64 \times 64C5) \\ & \rightarrow MP2 \rightarrow 384FC \rightarrow 192FC \rightarrow 10SM \rightarrow \text{Output}. \end{split} \end{equation} \end{enumerate} We first prune \textbf{only the first} fully connected layer (the one furthest from the output) for clarity. Figure 2 shows the classification accuracy vs compression ratio for $\text{FeTa}$, $\text{NetTrim}$, LOBS and Hard Thresholding. We see that Hard Thresholding works adequately up to $85\%$ sparsity. 
From this level of sparsity and above, the performance of Hard Thresholding degrades rapidly: FeTa achieves $\boldsymbol{10\%}$ higher accuracy on average, while being the same as or marginally worse than LOBS and NetTrim. \begin{figure*}[t!] \centering \begin{subfigure}{.5\textwidth} \centering \includegraphics[scale=0.55]{LeNet_sparse.png} \caption{} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[scale=0.55]{Cifar_sparse.png} \caption{} \end{subfigure} \caption{\textbf{Accuracy vs Sparsity}: (a) We plot the classification accuracy of the pruned LeNet-5 architecture for different sparsity levels. Until the 80\% sparsity level roughly all methods are equal. For sparsity levels greater than 80\% FeTa clearly outperforms Hard Thresholding while remaining competitive with LOBS. (b) We plot the classification accuracy of the pruned CifarNet architecture for different sparsity levels. The results are consistent with the LeNet-5 experiment. } \end{figure*} For the task of pruning the first fully connected layer we also show detailed comparison results for all methods in Table 1. For the LeNet-5 model, FeTa achieves the same accuracy as Net-Trim while being $25 \times$ faster. This is expected as the two algorithms optimise a similar objective, while FeTa exploits the structure of the objective to achieve lower complexity in optimisation. Furthermore FeTa achieves marginally lower classification accuracy compared to LOBS while being $5 \times$ faster, and is significantly better than Thresholding. \begin{table}[h!] 
\caption{Test accuracy rates (\%) when pruning only the first fully connected layer.} \label{tab:title2} \label{sample-table} \vskip 0.15in \begin{center} \begin{small} \begin{sc} \begin{tabular}{ lcccc } \toprule LeNet-5 & Original & CR & Pruned & Time \\ \midrule Net-Trim & 99.2\% & 95\% & 95\% & 455s \T\\ LOBS & 99.2\% & 95\% & 97\% & 90s \\ Threshold & 99.2\% & 95\% & 83\% & - \\ $\textbf{FeTa}$ & 99.2\% & 95\% & $\boldsymbol{95\%}$ & $\boldsymbol{18}$\textbf{s} \\ \midrule CifarNet & Original & CR & Pruned & Time \\ \midrule Net-Trim & 86\% & - & - & - \T\\ LOBS & 86\% & 90\% & 83.4\% & 3h 15min \\ Threshold & 86\% & 90\% & 73\% & - \\ $\textbf{FeTa}$ & 86\% & 90\% & $\boldsymbol{80\%}$ & $\boldsymbol{20}$\textbf{min} \\ \bottomrule \end{tabular} \end{sc} \end{small} \end{center} \vskip -0.1in \end{table} \begin{table}[h!] \captionof{table}{Test accuracy rates (\%) when pruning all fully connected layers.} \label{tab:title3} \vskip 0.15in \begin{center} \begin{small} \begin{sc} \begin{tabular}{ lccccc } \toprule LeNet-5 & Original & CR & Pruned & Time \\ \midrule Net-Trim & 99.2\% & 90\% & 95\% & 500s \\ LOBS & 99.2\% & 90\% & 97\% & 97s \\ Threshold & 99.2\% & 90\% & 64\% & - \\ $\textbf{FeTa}$ & 99.2\% & 90\% & $\boldsymbol{95\%}$ & $\boldsymbol{38}$\textbf{s} \\ \midrule CifarNet & Original & CR & Pruned & Time \\ \midrule Net-Trim & 86\% & - & - & - \\ LOBS & 86\% & 90\% & 83.4\% & 3h 15min \\ Threshold & 86\% & 90\% & 64\% & - \\ $\textbf{FeTa}$ & 86\% & 90\% & $\boldsymbol{71\%}$ & $\boldsymbol{25}$\textbf{min} \\ \bottomrule \end{tabular} \end{sc} \end{small} \end{center} \vskip -0.1in \end{table} For the CifarNet model we see in Table 1 that Net-Trim is not feasible on the machine used for the experiments as it requires over 16GB of RAM. Compared to LOBS, FeTa again achieves marginally lower accuracy but is $8\times$ faster. 
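For reference, the Hard Thresholding baseline in Tables 1 and 2 amounts to a one-line magnitude prune of the weight matrix. A minimal numpy sketch follows; the layer shape and target sparsity are chosen for illustration only:

```python
import numpy as np

def hard_threshold(W, sparsity):
    # Hard Thresholding: F(W) = W * I(|W| > t), with t chosen so that
    # roughly `sparsity` of the entries are zeroed out
    t = np.quantile(np.abs(W), sparsity)
    return W * (np.abs(W) > t)

rng = np.random.default_rng(0)
W = rng.normal(size=(384, 192))        # e.g. the first dense CifarNet layer
W_pruned = hard_threshold(W, 0.95)

achieved = float(np.mean(W_pruned == 0))
assert abs(achieved - 0.95) < 0.01
```

Unlike FeTa, Net-Trim or LOBS, this baseline ignores the layer's input and output statistics entirely, which is consistent with its rapid accuracy loss at high sparsity levels.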
Next we prune both the fully connected layers in the two architectures to the same sparsity level and show the results in Table 2. We lower the achieved sparsity for all methods to $90\%$. For MNIST, the accuracy results are the same as when pruning a single layer, with FeTa achieving the same or marginally worse results while being $13\times$ faster than Net-Trim and $2.5\times$ faster than LOBS. For the Cifar experiment FeTa shows a bigger degradation in performance compared to LOBS while remaining $8\times$ faster. Thresholding achieves a notably bad result of $\boldsymbol{64\%}$ accuracy, which makes the method essentially inapplicable for multilayer pruning. We note here that the degraded performance of FeTa for two layer pruning in Cifar is due to a poor solution for the second dense layer. By combining FeTa for the first dense layer and Thresholding for the second dense layer one can achieve $\boldsymbol{77\%}$ accuracy for the same computational cost. Furthermore, as mentioned in \citet{dong2017learning} and \citet{wolfe2017incredible}, retraining can recover classification accuracy that was lost during pruning. Starting from a good pruning that does not allow much degradation significantly reduces retraining time. \subsubsection{Low Rank Regularisation} As a proof of concept for the generality of our approach we apply our method while imposing low-rank regularisation on the learned matrix $\boldsymbol{U}$. For low rank $k$ we compare two methods (i) $\text{FeTa}$ with $\Omega (\boldsymbol{U}) = ||\boldsymbol{U}||_{\star}$ and optimised with Acc-Prox-SVRG and (ii) Hard Thresholding of singular values using the truncated SVD defined as $\boldsymbol{U} = \boldsymbol{N \Sigma} \boldsymbol{V}^{\star}, \; \boldsymbol{\Sigma} = \text{diag}(\{\sigma_i\}_{1 \leq i \leq k})$. We plot the results in Figure 3. \begin{figure}[h!] 
\centering \begin{subfigure}{.24\textwidth} \centering \includegraphics[scale=0.28]{LeNet_lowrank.png} \caption{LeNet-5} \end{subfigure}% \begin{subfigure}{.24\textwidth} \centering \includegraphics[scale=0.28]{Cifar_lowrank.png} \caption{CifarNet} \end{subfigure} \caption{\textbf{Accuracy vs CR}: (a) We plot the classification accuracy of the low-rank compressed LeNet-5 architecture for different CR levels. Until the 85\% CR level roughly all methods are equal. For CR levels greater than 85\% FeTa clearly outperforms Hard Thresholding. (b) We plot the classification accuracy of the pruned CifarNet architecture for different CR levels. The results are consistent with the LeNet-5 experiment. } \end{figure} In the above, given $\boldsymbol{U} \in \mathbb{R}^{d_1 \times d_2}$, the Compression Ratio (CR) is defined as $\text{CR} = (k d_1 + k + k d_2)/(d_1 d_2)$. The results are in line with the $l_1$ regularisation, with significant degradation in classification accuracy for Hard Thresholding above $85\%$ CR. \subsection{Generalization Error} \begin{figure*}[t!] \centering \begin{subfigure}{.5\textwidth} \includegraphics[scale = 0.55]{theory.png} \centering \caption{Single Layer} \end{subfigure}% \begin{subfigure}{.5\textwidth} \includegraphics[scale = 0.55]{theory2.png} \centering \caption{Multiple Layers} \end{subfigure} \caption{\textbf{Layer Robustness}: We plot the theoretical prediction for the GE (dashed lines) and the empirical value of the GE (solid lines) for single layer pruning (a) and multilayer (b) pruning. Our theoretical predictions are tight for layers with small remaining depth but are loose for layers with big remaining depth. We first focus on pruning for $80\%$ sparsity. Layer $i=0$ is as predicted exponentially less robust compared to layers $i=\{1,2,3\}$. We then focus on pruning layer $i=0$ and layers $i\geq0$ for 80\% sparsity. 
We see that even though the GE errors for $i>0$ are negligible, the GE error for $i\geq0$ is exponentially greater than the sum of the GEs when pruning $i=0$ and $i>0$. Interestingly, in the empirical GE estimate there exists an artifact around 90\% sparsity, which is partially captured by our prediction. } \end{figure*} According to our theoretical analysis the GE grows exponentially with the remaining depth of the pruned layer. To corroborate this we train a LeNet-5 to high accuracy, then we pick a single layer and gradually increase its sparsity using Hard Thresholding. We find that the layers closer to the input are exponentially less robust to pruning, in line with our theoretical analysis. We plot the results in Figure 4.a. For some layers there is a sudden increase in accuracy around $90\%$ sparsity, which could be due to the small size of the DNN. We point out that in empirical results \citet{raghu2016expressive} and \citet{han2015learning} for much larger networks the degradation is entirely smooth. Next we test our multilayer pruning bound. We prune to the same sparsity levels all layers in the sets $i \geq 0$ , $i \geq 1$ , $i \geq 2$ , $i \geq 3$. We plot the results in Figure 4.b. It is evident that the accuracy loss for layer groups is not simply the addition of the accuracy losses of the individual layers, but shows an exponential drop in accordance with our theoretical result. We now aim to see how well our bound captures this exponential behaviour. We take two networks: $g_a$, pruned at layer 3, and an unpruned network $g_b$, and make a number of simplifying assumptions. First we assume that in Theorem 3.3 $B=0$, such that $\text{GE}(g_{\star}) \leq A \cdot (\gamma-\frac{ \sum_{i=0}^L \sqrt{C_{i2} } \prod_{j=i+1}^L||\boldsymbol{W}_j||_2}{ \prod_i||\boldsymbol{W}_i||_2})^{-\frac{k}{2}}$. This is logical as $B$ includes only log terms. 
Assuming that the bounds are tight we now aim to calculate \begin{equation} \begin{split} & \frac{\text{GE}(g_{a})}{\text{GE}(g_{b})} = \left( \frac{\gamma - \sum_{i=0}^L (\sqrt{C_{i2}^a } / \prod_{j=0}^i ||\boldsymbol{W}_j||_2)}{\gamma}\right)^{-\frac{k}{2}} \\ &= \left( \frac{o(\tilde{s})}{o(\tilde{s}) - \sum_{i=0}^L (\sqrt{C_{i2}^a } \prod_{j=i+1}^L||\boldsymbol{W}_j||_2)}\right)^{\frac{k}{2}}. \\ \end{split} \end{equation} We can use the above to make predictions for the GE of the pruned network by noting that $\text{GE}(g_{a}) = \text{GE}(g_{b})\left( o(\tilde{s}) / (o(\tilde{s}) - \sum_{i=0}^L (\sqrt{C_{i2}^a } \prod_{j=i+1}^L||\boldsymbol{W}_j||_2))\right)^{\frac{k}{2}}$, as we know that $\text{GE}(g_{b}) \approx 0.01$ for the unpruned network and we have managed to avoid the cumbersome $A$ parameter. Next we make the assumption that $k \approx 20$. Dimensionality values $20-40$ are common for the MNIST dataset and result from a simple dimensionality analysis using PCA. We also deviate slightly from our theory by using the minimum layerwise error $\min_i[\sqrt{C_{i2}^a }]$ for each sparsity level, as well as the average scores $\mathbb{E}_{s \sim S}[o(\boldsymbol{x},g(\boldsymbol{x}))]$. We plot the theoretical predictions for single layer pruning in Figure 4.a and the theoretical predictions for multilayer pruning in Figure 4.b. We see that, while loose, the theoretical predictions correctly capture qualitatively the behaviour of the GE. Specifically, layers, as predicted, are exponentially less robust with remaining layer depth. Also, as predicted, when pruning multiple layers the resulting GE is exponentially greater than the sum of the individual GEs. \section{Conclusion} In this paper we have presented an efficient pruning algorithm for fully connected layers of DNNs, based on difference of convex functions optimisation. Our algorithm is orders of magnitude faster than competing approaches while allowing for a controlled increase in the GE. 
We provided a theoretical analysis of the increase in GE resulting from bounded perturbations to the hidden layer weights, of which pruning is a special case. This analysis correctly predicts the previously observed phenomenon that network layers closer to the input are exponentially less robust to pruning compared to layers close to the output. Experiments on common feedforward architectures validated our results.
\section{Introduction} Probabilistic graphical models with latent variables are powerful in modeling many important problems in machine learning and artificial intelligence \cite{Koller09,Bishop06}. The existence of latent variables in such models provides us with the ability to capture richer statistical dependencies among observed variables. However, learning latent-variable graphical models is often difficult, due to the non-convex nature of their parameter learning problem and the intractability in the corresponding inference procedure. Currently, there are two main types of learning algorithms for latent-variable graphical models. The first one is the Expectation-Maximization algorithm \cite{Dempster77}, which transforms the parameter learning problem into an iterative procedure that maximizes a non-convex objective function. However, the EM algorithm only provides weak theoretical guarantees, can lead to bad local optima and has slow convergence. To address these problems of EM, recently a second type of methods called spectral learning has been proposed \cite{Hsu09,Anima14,Parikh11}. Spectral learning algorithms are based on the idea of method of moments and reparametrize the latent graphical models such that the learning procedure can be performed through tensor algebra using only observed quantities. Although spectral algorithms enjoy the benefits of being provably consistent, computationally efficient and local-optima-free, they have three key limitations. First, most spectral learning algorithms only apply to restricted types of latent structures (mostly trees), and are hard to generalize to more complicated latent structures beyond trees (e.g. loopy graphs). Second, most spectral learning algorithms can only deal with discrete random variables and cannot be easily extended to handle continuous random variables. 
Third, the current spectral algorithms are generally idiosyncratic to the specific model structures that they are targeted to learn, and thus cannot provide a flexible learning framework to incorporate different prior knowledge and probabilistic assumptions when facing different learning scenarios. \begin{figure} \centerline{\includegraphics[width=0.95\columnwidth]{pbp.pdf}} \caption{Comparison between predictive belief propagation and conventional methods for belief propagation during inference over latent-variable junction trees.} \end{figure} In order to overcome these limitations of previous methods, in this paper we propose a new algorithm for learning general latent-variable graphical models that applies to all different types of latent structures, can handle both discrete and continuous variables of arbitrary forms of probability distribution in a nonparametric fashion, and allows us to incorporate different types of prior knowledge into the learning process, while still remaining provably consistent, local-optima-free and fast to compute. To achieve this, we introduce a new way of formulating message-passing inference over junction trees called \textbf{predictive belief propagation}. In contrast to conventional formulations of message passing, which treat a message as a direct summary of all the probabilistic information seen in the past, we instead think of a message as encoding our predictions about the probabilistic information of all the variables in the future part of the graphical model given what we have seen in the past. This new perspective allows us to systematically reparametrize message passing inference on latent junction trees purely in terms of observable variables, and to directly learn this alternative parametrization from observed quantities in training data. 
During learning, our algorithm first converts a latent graphical model into its corresponding latent junction tree, and then reduces the parameter learning problem down to a sequence of regression problems using a simple and fast approach called \textit{Two-Stage Regression}, which also allows us to incorporate prior knowledge into the learning process. Moreover, our proposed learning algorithm is also flexible enough to allow us to easily extend it to handle graphical models with continuous random variables using the technique of \textit{Hilbert Space Embeddings}. When dealing with continuous variables, we first embed their distributions into reproducing-kernel Hilbert spaces, and then use the kernel trick to perform all necessary learning operations over latent junction trees via tensor algebra. The main contributions of our work are: (1) We introduce a novel formulation of message-passing inference over latent-variable junction trees named \textbf{predictive belief propagation} (Section 3), and then propose a new algorithm for learning general latent-variable graphical models based on it using \textit{Two-Stage Regression} (Section 3 and 4). Our new algorithm overcomes many severe limitations faced by previous methods for learning latent graphical models, including EM and the spectral algorithms, and provides a general algorithmic framework that unifies the learning of all different kinds of latent graphical models. (2) We prove the correctness of our new algorithm by showing that it learns to compute a statistically consistent estimator of any conditional probability distribution over observable variables that we may query in a latent-variable graphical model during inference (Appendix \textbf{A.5}). (3) We extend our algorithm from discrete domain to continuous domain using \textit{Hilbert Space Embeddings} of distributions (Section 5). 
(4) We demonstrate that our learning algorithm outperforms both the EM algorithm and the spectral algorithm and runs significantly faster in experiments on both synthetic and real datasets (Section 6). \subsection{Related Work} Previously, Parikh et al. proposed a spectral algorithm \cite{Parikh12} for learning latent junction trees based on an alternative tensor parameterization. However, the algorithm in \cite{Parikh12} can only yield the marginal joint probability of all the observable variables in a latent graphical model together, without the ability to flexibly compute the posterior probability of arbitrary observable variables given other observed variables as evidence in a tractable manner. In contrast, our new algorithm overcomes this limitation and supports arbitrary inference in tractable forms by introducing a more flexible predictive message-passing paradigm. Moreover, our algorithm provides the ability to freely incorporate prior knowledge and to handle continuous variables, which also cannot be achieved by the spectral algorithm in \cite{Parikh12}. \section{Formulation of the Learning Problem} The central machine learning problem that we are dealing with in this paper is to learn general latent-variable probabilistic graphical models such that we can perform accurate inference over them among the observable variables. Traditionally, latent-variable graphical models are often parametrized using a set of local conditional probability tables (CPTs) that are associated with the edges in the graphs, and learning these models would mean to explicitly recover their CPT parameters from training data \cite{Koller09}. However, in most cases of application, the primary goal of learning a latent-variable graphical model is to be able to make accurate inference and predictions over its observable variables, and recovering its original CPT parameters is not needed at all. 
Therefore, in this paper we develop a new learning method that learns an alternative parametrization of general latent-variable graphical models purely based on observable quantities, such that we can directly perform accurate probabilistic inference using the learned alternative parametrization. We do not aim to recover the original CPT parameters of latent graphical models; we bypass them entirely through our alternative parametrization. For a latent-variable probabilistic graphical model $G$ of arbitrary graph structure, let $\mathscr{O}$ denote the set of all observable variables in $G$: $\mathscr{O} = \{ X_1, ..., X_{|\mathscr{O}|} \}$, and let $\mathscr{H}$ denote the set of all latent variables in $G$: $\mathscr{H} = \{ X_{|\mathscr{O}|+1}, ..., X_{|\mathscr{O}| + |\mathscr{H}|}\}$. Now our learning and inference problem can be mathematically formulated as the following desired input-output behavior: \textbf{Input:} a training dataset of $N$ $i.i.d.$ samples of the set of all observable variables $\{ x_1^d, ..., x_{|\mathscr{O}|}^d \}_{d=1}^{N}$, a set of observed evidence $\{ X_i = x_i \}_{i \in \mathcal{E}}$ (here $\mathcal{E}$ denotes the index set of the observable variables that are observed as evidence), and the index $Q$ of the query node $X_Q$.
\enspace \textbf{Output:} calculate an estimate of the posterior distribution of the query node conditioned on the observed evidence: $\widehat{\mathbb{P}}[X_Q \mid \{ X_i = x_i\}_{i \in \mathcal{E}}]$. Here the variables in $G$ can be either discrete-valued or continuous-valued, and continuous-valued variables are not restricted to any specific functional form of probability density function; our algorithm handles all of these cases gracefully in a nonparametric fashion. \begin{figure} \centering \begin{tabular}{cc} \includegraphics[align = c, width=0.3\columnwidth]{markov_net.pdf}& \includegraphics[align = c, width=0.6\columnwidth]{junction_tree.pdf}\\ \makecell{(a)} & \makecell{(b)} \end{tabular} \caption{(a) An example of a latent-variable graphical model. Blue nodes indicate observable variables and white nodes indicate latent variables. (b) A corresponding latent junction tree that the graphical model in (a) is converted to. Pink squares indicate clique nodes and yellow squares indicate separator sets. Variable nodes with red circles around them are associated with their current leaf clique nodes.} \end{figure} \section{Predictive Belief Propagation} \subsection{Preliminaries} Here we first introduce two important notions that will be used throughout this paper.\\ \textbf{Sufficient Statistics Feature Vector} \quad In statistics and machine learning, a \textit{sufficient statistics feature vector} of a random variable $X$ is a feature vector $v(X)$ whose expectation $\mathbb{E}_{X \sim p}[v(X)]$ under $X$'s probability distribution $p$ completely determines that distribution $p$. Similarly, a \textit{sufficient statistics feature vector} of a group of random variables $\{X_1, ... , X_k\}$ is a feature vector $v(\{X_1, ... , X_k\})$ whose expectation $\mathbb{E}_{\{X_1, ... , X_k\} \sim p}[v(\{X_1, ... , X_k\})]$ under the joint probability distribution $p$ of $\{X_1, ... , X_k\}$ completely determines that joint distribution $p$. For example, if $X$ is a discrete random variable, then its sufficient statistics feature vector $v(X)$ can be the vector of indicator functions whose $i$-th entry is 1 if $X$ takes the $i$-th value, and 0 otherwise. For a group of discrete-valued random variables, their sufficient statistics feature vector can be the vectorized outer product of all their individual vectors of indicator functions. See Appendix \textbf{A.4} for the continuous-valued case.\\ \textbf{Junction Trees} \quad In probabilistic graphical models, junction trees are a classical transformation that allows efficient message-passing inference to be performed over loopy graphical models. In order to design a learning and inference algorithm for general latent-variable graphical models, which may have loopy or non-loopy graph structures, we first resort to the junction tree algorithm \cite{Lauritzen88} to transform latent graphical models into their corresponding latent junction tree representations, over which we can perform message-passing inference. Consider the latent graphical model $G$ defined in Section 2. We first run the junction tree algorithm to convert $G$ into a latent junction tree $T$, and then associate each observable variable in $G$ with one leaf clique in $T$. Now pick a non-leaf clique node $C_r$ as the root of $T$, which naturally sets a topological order over $T$. Then for each separator set $S$ in $T$, we define its inside tree $In(S)$ to be the subtree rooted at $S$, and its outside tree $Out(S)$ to be the rest of $T$ excluding $S$ and $In(S)$. See Figures 2 and 3 for an example.
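As a concrete illustration, the indicator-function construction above can be sketched in a few lines of NumPy (a minimal sketch; the helper names \texttt{one\_hot} and \texttt{joint\_feature} are ours, not part of the paper's notation):

```python
import numpy as np

def one_hot(value, num_values):
    """Indicator-function feature vector: entry i is 1 iff X takes the i-th value."""
    v = np.zeros(num_values)
    v[value] = 1.0
    return v

def joint_feature(values, cardinalities):
    """Sufficient statistics for a group of discrete variables: the vectorized
    outer product of the individual indicator vectors."""
    feat = np.array([1.0])
    for x, n in zip(values, cardinalities):
        feat = np.outer(feat, one_hot(x, n)).ravel()
    return feat

# The expectation of this feature vector under the joint distribution is the
# flattened joint probability table, so the empirical mean over i.i.d. samples
# estimates that table; this is the sense in which the features are sufficient.
samples = [(0, 1), (0, 1), (1, 0), (0, 0)]
emp = np.mean([joint_feature(s, (2, 2)) for s in samples], axis=0)
```

Here \texttt{emp} recovers the empirical joint probability table of the two binary toy variables in flattened form.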
\subsection{Difficulty Facing Conventional Belief Propagation} In conventional methods for running belief propagation inference over junction trees, such as the Shafer-Shenoy algorithm \cite{Shenoy90} and the Hugin algorithm \cite{Lauritzen88,Anderson89}, the messages are defined to integrate together all the local probabilistic information from the past part of the junction tree in the form of partial results of sum-product calculations and send this compact summary of the past to the future part of the junction tree.\footnote{Formally, when we pass a message across a separator set in a junction tree, we can split the junction tree from this separator set into two separate subtrees. We refer to the subtree that the message-passing direction is pointing toward as the \textit{future} part of the junction tree, and the other subtree as the \textit{past} part of the junction tree. See Figure 3 for an example.} This requires the learning algorithm to be able to estimate the innate parametrization of the original latent graphical model, in the form of local conditional probability tables or local potential functions, directly from the training data. However, the innate parametrization of latent graphical models heavily involves hidden variables that we cannot observe, which makes it hard to directly learn this parametrization from training data. This discrepancy gives rise to the key difficulty in learning general latent graphical models, and forces previous methods to resort to inefficient local search heuristics such as EM. 
\begin{figure} \centering \includegraphics[width=0.9\columnwidth]{future.pdf} \caption{Illustration of the inside tree (future), outside tree (past), core group $\alpha(S^{AB})$ and evidence set $\beta(S^{AB})$ for a specific separator set $S^{AB}$ in the latent junction tree in Figure 2(b).} \end{figure} \subsection{The Central Idea of Predictive Belief Propagation} To overcome this difficulty, we draw inspiration from the notion of \textit{predictive state representations} (PSRs) \cite{Littman01,Singh04}, which are a popular class of models for discrete-time dynamical systems. The key idea behind PSRs is to represent the state of a dynamical system as a set of predictions of features of observable variables in the future, which distinguishes them from the traditional history-based models and hidden-state-based models \cite{Kaelbling98}. Such an alternative representation of state based on observable quantities allows us to directly learn to perform filtering and prediction over dynamical systems from training data in a fast, provably consistent and local-optima-free fashion \cite{Boots11}. In essence, the state of a system is a time bottleneck that compactly summarizes everything we need to know about the past in order to predict the future. From this viewpoint, when we perform belief propagation over a latent junction tree, the message that we send across a separator set can be viewed as the current state of the junction tree inference system at that particular separator set. Therefore, in analogy to PSRs, we can let the messages encode our posterior predictions about the probabilistic information of the observable variables in the future part of the junction tree, given the observed evidence that we have absorbed from the past part of the junction tree. In PSRs, a common approach is to use a vector of sufficient statistics for a finite window of future observations to represent the state of a dynamical system.
And analogously, in our case, at each separator set we can use a vector containing features of sufficient statistics for a subset of core observable variables in the future part of the junction tree to represent the message that we send across that separator set. Then during inference, we pass these predictive messages around over the latent junction tree to collect and propagate information we observe from evidence nodes and to compute the results of our inference queries. This forms our central idea of \textbf{predictive belief propagation}, which is the foundation of our new algorithm for learning latent-variable graphical models. The major advantage of this new way of thinking about message passing from a predictive perspective is that it enables us to efficiently learn general latent graphical models directly from observed quantities under a unified framework. Figure 1 provides an illustration of the comparison between predictive belief propagation and conventional belief propagation. \subsection{Definition of Predictive Messages} Now under the predictive belief propagation (PBP) framework, we first need to define what exactly are the predictive messages that we pass around over a latent junction tree during inference. As discussed above, the predictive messages in PBP are analogous to the predictive states in PSRs. In PSRs, a common approach is to use a vector of sufficient statistics (called the core set of tests) for a finite window of future observations to represent the predictive state of a dynamical system, because the infinite system-dynamics matrix can often be proven to have finite rank. Then analogously, in our PBP case, at each separator set we can use a sufficient statistics feature vector for a group of core observable variables in the future part of the junction tree to represent the predictive message that we send across that separator set. 
Now we make the following two key definitions:\\ \noindent \textbf{Definition 1.} We define the \textbf{core group of observable variables} $\alpha(S)$ for each separator set $S$ to be a subset of all the observable variables that are associated with the leaf clique nodes in $In(S)$ whose posterior joint distribution conditioned on evidence from $Out(S)$ completely determines the posterior joint distribution of all the observable variables that are associated with the leaf clique nodes in $In(S)$. More formally, let $\mathbf{OV}[In(S)]$ denote all the observable variables that are associated with the leaf clique nodes in the inside tree $In(S)$, and let $\mathbf{OV}[Out(S)]$ denote all the observable variables that are associated with the leaf clique nodes in the outside tree $Out(S)$, then the core group $\alpha(S)$ satisfies the conditional independence property that: $\{\mathbf{OV}[In(S)] \setminus \alpha(S)\} \mathrel{\perp\mspace{-10mu}\perp} \mathbf{OV}[Out(S)] \mid \alpha(S)$. Such a core group of observable variables $\alpha(S)$ always exists for each separator set $S$, since at least the set $\mathbf{OV}[In(S)]$ would certainly qualify as being $\alpha(S)$, by definition. But in order to reduce the computational complexity of our learning and inference algorithm, it is desirable to find the minimal core group $\alpha(S)$ for each $S$. We present the process for determining the minimal core group in Appendix \textbf{A.2}.\\ \noindent \textbf{Definition 2.} Let $\theta^S[\alpha(S)]$ denote a sufficient statistic feature vector for $\alpha(S)$, and let $\Omega$ denote the evidence information that we observe from $Out(S)$. Then we define the \textbf{predictive message} that we send across each separator set $S$ to be the conditional expectation of $\theta^S[\alpha(S)]$ conditioned on $\Omega$, i.e. $\mathbb{E}\big[\theta^{S}[\alpha(S)] \mid \Omega \big]$. 
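To make Definition 2 concrete: with indicator-function sufficient statistics, the predictive message reduces to the posterior probability vector of the core group given the outside-tree evidence. Below is a minimal sketch for a single core-group variable $A$ in $In(S)$ and a single observed evidence variable $B$ in $Out(S)$; the toy joint table and the helper name are ours:

```python
import numpy as np

# Toy joint distribution table P[a, b] over the core-group variable A and
# the outside-tree evidence variable B.
P = np.array([[0.1, 0.3],
              [0.2, 0.4]])

def predictive_message(P, b):
    """E[one_hot(A) | B = b]: with indicator features, the predictive message
    is exactly the posterior distribution P(A | B = b)."""
    col = P[:, b]
    return col / col.sum()

m = predictive_message(P, 1)   # outside-tree evidence: B = 1
```

Since the expectation of an indicator vector is a probability vector, the message carries the full posterior over $\alpha(S)$, which is all that the future part of the junction tree needs to know about the past.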
\subsection{Relationship between Predictive Messages} Now the goal of our learning algorithm is to learn how these predictive messages relate to each other during PBP inference over a latent junction tree from training data. Without loss of generality, let's consider a non-leaf separator set $S$ in a latent junction tree $T$, where $S$ is connected with $K$ child separator sets $\{ S_1, S_2, ..., S_K\}$ below it through a clique node $C$. According to the definition of core groups in Definition 1, since $\alpha(S_1)$, $\alpha(S_2)$, ... , $\alpha(S_K)$ are all contained in $In(S)$, their posterior joint distributions $\mathbb{P}\big[\alpha(S_1) \mid \Omega \big], \mathbb{P}\big[\alpha(S_2) \mid \Omega \big], ..., \mathbb{P}\big[\alpha(S_K) \mid \Omega \big]$ are thus all completely determined by $\mathbb{P}\big[\alpha(S) \mid \Omega\big]$. Therefore, the conditional expectations $\mathbb{E}\big[\theta^{S_1}[\alpha(S_1)] \mid \Omega \big], \mathbb{E}\big[\theta^{S_2}[\alpha(S_2)] \mid \Omega \big], ..., \mathbb{E}\big[\theta^{S_K}[\alpha(S_K)] \mid \Omega \big]$, as well as their outer product, must also be fully determined by the conditional expectation $\mathbb{E}\big[\theta^S[\alpha(S)] \mid \Omega\big]$ (because all the $\theta$'s here are sufficient statistics feature vectors). That is to say, there exists a linear operator $\mathcal{W}^S$ in the form of a ($K$+1)-th order tensor with each mode corresponding to $S, S_1, S_2, ... , S_K$ respectively, such that: \[\operatorname*{\otimes}\limits_{k=1}^K \mathbb{E}\big[\theta^{S_k}[\alpha(S_k)] \mid \Omega\big] = \mathcal{W}^S \times_{S} \mathbb{E}\big[\theta^S[\alpha(S)] \mid \Omega\big] \addtag \] for any outside tree evidence $\Omega$, where $\otimes$ denotes outer product and $\times_S$ denotes mode-specific tensor multiplication\footnote{For a detailed introduction to tensor algebra, we refer the readers to \cite{Kolda09}. 
In this paper we adopt the notations from \cite{Parikh12} to label the modes of tensors with random variables.} along mode $S$. Therefore, the predictive messages in PBP relate to each other through $\mathcal{W}^S$ according to Eq. (1). This linear operator $\mathcal{W}^S$ essentially acts as a message processor and distributor during PBP inference. \subsection{Two-Stage Regression on Latent Junction Trees} Therefore, the major goal of our learning algorithm is to learn the linear operator $\mathcal{W}^S$ for each non-leaf separator set $S$ in $T$ from training data. Previously, \cite{Hefny15} proposed a method named \textit{Two-Stage Regression} (2SR) to learn PSRs of linear dynamical systems. 2SR learns PSRs by solving a sequence of regression problems, and is fast and statistically consistent. The key idea of 2SR is to use instrumental variable regression \cite{Stock11} to recover an unbiased estimate of the linear mappings in PSRs. Here we generalize 2SR to latent junction trees to learn $\mathcal{W}^S$. First, pick a feature vector for $Out(S)$ and denote it by $\eta^S[Out(S)]$. Then using $\eta^S[Out(S)]$ as our instrumental variable, we can perform our \textit{Two-Stage Regression} to learn $\mathcal{W}^S$ in three steps: (1) regress $\theta^S[\alpha(S)]$ on $\eta^S[Out(S)]$; (2) regress $\operatorname*{\otimes}\limits_{k=1}^K \theta^{S_k}[\alpha(S_k)]$ on $\eta^S[Out(S)]$; (3) run a linear regression from the predictions obtained in step (1) to the predictions obtained in step (2) to recover an unbiased estimate of $\mathcal{W}^S$. These three steps of supervised learning constitute our new \textit{Two-Stage Regression} approach for latent junction trees, which will manifest itself in the learning algorithm in Section 4. \section{Main Algorithm} We are now ready to present our new algorithm for learning general latent-variable probabilistic graphical models based on the framework of predictive belief propagation. 
It has two components: a learning algorithm and a corresponding inference algorithm that allows us to perform probabilistic inference over latent graphical models using the results learned from the learning algorithm. Here we first describe the basic version of our algorithm for the case where all observable variables in a graphical model are discrete-valued. Then we show how to extend our algorithm to handle graphical models with continuous-valued variables in Section 5. See Appendix \textbf{A.3} for a pseudocode summary of our algorithm and see Appendix \textbf{A.5} for the proof of consistency of our algorithm. \subsection{The Learning Algorithm} \quad \textbf{Step 1. Model Construction:} Run the junction tree algorithm to convert $G$ into an appropriate latent-variable junction tree $T$ and pick a root $C_r$ for $T$, such that each observable variable in $G$ can be associated with one leaf clique in $T$. See Figure 2(a) and 2(b) for a concrete example.\\ \textbf{Step 2. Model Specification:} For each separator set $S$ in $T$, among all the observable variables that are associated with the leaf clique nodes in its inside tree $In(S)$, determine its minimal core group $\alpha(S) = \{A_1, A_2, ... , A_{|\alpha(S)|}\}$ using the procedure described in Appendix \textbf{A.2}. And among all the observable variables associated with its outside tree $Out(S)$, select a subset of variables $\beta(S) = \{B_1, B_2, ... , B_{|\beta(S)|}\}$ (this can be any subset). (See Figure 3 for a concrete example.) Now pick a feature vector $\theta^S[\alpha(S)]$ for $\alpha(S)$ and a feature vector $\eta^S[\beta(S)]$ for $\beta(S)$, where we require that $\theta^S$ must be a sufficient statistics feature vector for $\alpha(S)$ and $\eta^S$ can be any feature vector. 
For discrete-valued $\alpha(S)$, this sufficient statistic feature vector can simply be the (vectorized) outer product of the vectors of indicator functions of all the variables in $\alpha(S)$; and for continuous-valued $\alpha(S)$, this sufficient statistic feature vector can be the (vectorized) outer product of the implicit feature map of characteristic kernels of all the variables in $\alpha(S)$ (see Appendix \textbf{A.4}).\\ \textbf{Step 3. Stage 1A Regression (S1A):} At each non-leaf separator set $S$ in $T$, learn a (possibly non-linear) regression model to estimate $\bar{\theta^S} = \mathbb{E}[\theta^S \mid \eta^S]$. The training data for this regression model is $\big\{\big(\theta^S[\alpha(S)^d], \eta^S[\beta(S)^d]\big)\big\}_{d=1}^N$ across all $N$ $i.i.d.$ training samples.\\ \textbf{Step 4. Stage 1B Regression (S1B):} At each non-leaf separator set $S$ in $T$, where $S$ is connected with $K$ child separator sets $\{ S_1, S_2, ..., S_K\}$ below it, learn a (possibly non-linear) regression model to estimate $\bar{\operatorname*{\otimes}\limits_{k=1}^K \theta^{S_k}} = \mathbb{E}[\operatorname*{\otimes}\limits_{k=1}^K \theta^{S_k} \mid \eta^S]$. The training data for this regression model are $\big\{\big(\operatorname*{\otimes}\limits_{k=1}^K \theta^{S_k}[\alpha(S_k)^d], \eta^S[\beta(S)^d]\big)\big\}_{d=1}^N$ across all $N$ $i.i.d.$ training samples. \textit{Note}: In the S1A and S1B regression steps above, we can use any supervised learning algorithm as our regression model. This provides us with the flexibility to incorporate different prior knowledge into our learning process.\\ \textbf{Step 5. Stage 2 Regression (S2):} At each non-leaf separator set $S$ in $T$, use the feature expectations estimated in S1A and S1B to train a linear regression model to predict $\bar{\operatorname*{\otimes}\limits_{k=1}^K \theta^{S_k}} = \mathcal{W}^S \times_{S} \bar{\theta^S}$, where $\mathcal{W}^S$ is the linear operator associated with $S$. 
Output the learned parameter tensor $\mathcal{W}^S$. The training data for this linear regression model are estimates of $\big( \bar{\operatorname*{\otimes}\limits_{k=1}^K \theta^{S_k}}, \bar{\theta^S} \big)$ for all the training samples that we obtained from S1A and S1B regressions.\\ \textbf{Step 6. Root Tensor Estimation:} At the root $C_r$, estimate the expectation of the outer product of the inside tree feature vectors of all adjacent separator sets that are connected with $C_r$ by taking average across all the $N$ $i.i.d.$ training samples: $\mathcal{T}^{C_r} = \widehat{\mathbb{E}}\big[\operatorname*{\otimes}\limits_{S \in \gamma(C_r)} \theta^S[\alpha(S)]\big] = \dfrac{1}{N} \sum\limits_{d = 1}^{N} \operatorname*{\otimes}\limits_{S \in \gamma(C_r)} \theta^S[\alpha(S)^d]$ where $\gamma(C_r)$ denotes the set of all separator sets that are connected to $C_r$. Output the learned parameter tensor $\mathcal{T}^{C_r}$. This root tensor $\mathcal{T}^{C_r}$ will later serve the function of exchanging information at the root clique $C_r$ during Step 3(2) of the inference algorithm below.\\ \textbf{Output:} The final outputs of our learning algorithm are the linear operators $\mathcal{W}^S$ for each non-leaf separator set that we obtained from Step 5 above, and the root tensor $\mathcal{T}^{C_r}$ that we obtained from Step 6 above. These $\mathcal{W}^S$ and $\mathcal{T}^{C_r}$ essentially serve as an alternative parametrization of the latent-variable graphical model $G$ that supports inference based on \textit{predictive belief propagation}. In our inference algorithm below, we will use $\mathcal{W}^S$ and $\mathcal{T}^{C_r}$ to build our message-passing protocols and to calculate the result of our inference query. 
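Steps 3-5 above (the two-stage regression) can be sketched on synthetic data using plain least squares as both the stage-1 and stage-2 regression models. This is a minimal illustration for a single separator set with one child and vector-valued (rather than tensor-valued) features, so $\mathcal{W}^S$ reduces to a matrix; all variable names and dimensions here are ours:

```python
import numpy as np

rng = np.random.default_rng(0)
N, d_theta, d_child, d_eta = 2000, 4, 3, 6

# Synthetic stand-ins for the features at one separator set S with one child S_1:
# eta  ~ outside-tree features eta^S (the instrumental variable),
# th_S ~ inside-tree sufficient statistics theta^S,
# th_c ~ child features theta^{S_1}, linearly related to theta^S through W^S.
eta = rng.normal(size=(N, d_eta))
W_true = rng.normal(size=(d_child, d_theta))
th_S = eta @ rng.normal(size=(d_eta, d_theta)) + 0.1 * rng.normal(size=(N, d_theta))
th_c = th_S @ W_true.T + 0.1 * rng.normal(size=(N, d_child))

def lstsq(X, Y):
    """Least-squares coefficient matrix B with Y ~ X @ B."""
    return np.linalg.lstsq(X, Y, rcond=None)[0]

# S1A: regress theta^S on eta^S and form predictions of E[theta^S | eta^S].
th_S_bar = eta @ lstsq(eta, th_S)
# S1B: regress the child features theta^{S_1} on eta^S.
th_c_bar = eta @ lstsq(eta, th_c)
# S2: a linear regression from the S1A predictions to the S1B predictions
# recovers an estimate of the linear operator W^S.
W_hat = lstsq(th_S_bar, th_c_bar).T
```

Because the regression noise is independent of the instrument $\eta^S$, the stage-2 estimate converges to the true operator as $N$ grows, which is the instrumental-variable argument behind the consistency of two-stage regression.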
\subsection{The Inference Algorithm} Our inference algorithm uses the learned alternative parametrization $\mathcal{W}^S$ and $\mathcal{T}^{C_r}$ output by the learning algorithm above to compute inference queries through \textit{predictive belief propagation}.\\ \textbf{Step 1. Leaf Tensor Construction:} For each leaf separator set $S_l$ of $T$, whose core group is $\alpha(S_l) = \{A_1, A_2, ... , A_{|\alpha(S_l)|}\}$, construct a leaf tensor $\Phi^{S_l}$ with modes $\theta^{S_l}$, $A_1$, $A_2$, ..., $A_{|\alpha(S_l)|}$ and dimensions $length(\theta^{S_l}) \times \mathcal{N}(A_1) \times \mathcal{N}(A_2) \times ... \times \mathcal{N}(A_{|\alpha(S_l)|})$, where $\mathcal{N}(X)$ denotes the number of values for a discrete random variable $X$. Each fiber\footnote{As explained in \cite{Kolda09}, a fiber of a tensor is a higher-order analogue of matrix rows and columns. A fiber along a mode is defined by fixing every mode of the tensor except for that one.} of the tensor $\Phi^{S_l}$ along mode $\theta^{S_l}$ takes the value of the feature vector $\theta^{S_l}$ evaluated at the corresponding values of variables $A_1, A_2, ... , A_{|\alpha(S_l)|}$, i.e., \begin{center} $\quad \Phi^{S_l}\big(:,\mathcal{I}(a_1),\mathcal{I}(a_2),...,\mathcal{I}(a_{|\alpha(S_l)|})\big) = \theta^{S_l}[a_1,a_2,...,a_{|\alpha(S_l)|}]$ \end{center} where $\mathcal{I}(a)$ denotes the order index of a value $a$ in all the possible values of $A$. The purpose of these leaf tensors $\Phi^{S_l}$ is to completely encode the sufficient statistic feature information of all the observable variables in $G$ into our PBP inference system.\\ \textbf{Step 2. Initial Leaf Message Generation:} At each leaf clique node $C_l$ in $T$, let $\delta(C_l)$ denote the set of all observable variables that are associated with $C_l$, and let $S_l$ denote the separator set right above $C_l$.
We define a function $\zeta(X)$ of an observable variable $X$ that evaluates to an all-one vector if $X$ is not observed in the evidence and evaluates to a one-hot-encoding value indicator vector $e_x$ if $X$ is observed to have value $x$ in the set of observed evidence\footnote{For example, if $X$ is a discrete random variable that can take 6 possible values $\{1,2,3,4,5,6\}$, and $X$ is observed to have value 3 in the evidence, then $e_{3} = [0,0,1,0,0,0]$.}. That is to say: \[\zeta(X) = \begin{cases} \vec{1}, & \text{if } X \text{ is not observed} \\ e_x, & \text{if } X \text{ is observed to have value } x \end{cases}\] Then the upward message that we send from $C_l$ to its parent clique node $C_p$ can be calculated as: \[ m_{C_l \rightarrow C_p} = \Phi^{S_l^{\dagger}} \times_{\{\delta(C_l)\}} \big[\operatorname*{\otimes}\limits_{X \in \delta(C_l)} \zeta(X)\big] \addtag \] where ${\dagger}$ denotes Moore-Penrose pseudoinverse. This step serves the purpose of collecting all the observed information in the evidence.\\ \textbf{Step 3. Message Passing:} (1) \textit{From leaf to root (the upward phase):} \enspace For each non-root parent clique node $C$ in $T$ where it is separated by a separator set $S$ from its parent node $C_p$, once it has received the messages from all of its child nodes $C_1, C_2, ..., C_K$, which are separated from it by separator sets $S_1, S_2, ..., S_K$ respectively, compute and send the upward message: \[ m_{C \rightarrow C_p} = \mathcal{W}^S \operatorname*{\times_{S_{j}}}\limits_{j \in \{1,2,...,K\}} m_{C_j \rightarrow C} \addtag \] This step gradually collects all the local observed evidence information from all the leaf cliques up to the root clique.
(2) \textit{At the Root:} \enspace At the root clique node $C_r$ of $T$, which is surrounded by its $K$ child nodes $C_1,...,C_K$ (each separated from it by separator sets $S_1,...,S_K$ respectively), for each $k \in \{1,2,...,K\}$, once $C_r$ has received all the $K-1$ upward messages from all of its other $K-1$ child nodes except for $C_k$, compute and send the downward message: \[ m_{C_r \rightarrow C_k} = \mathcal{T}^{C_r} \operatorname*{\times_{S_{j}}}\limits_{j \in (\{1,2,...,K\} \backslash k)} m_{C_j \rightarrow C_r} \addtag \] This step summarizes and exchanges evidence information from different subtrees at the root clique. (3) \textit{From root to leaf (the downward phase):} \enspace For each non-root parent clique node $C$ in $T$ where it is separated from its parent node $C_p$ by a separator set $S$ and separated from its $K$ child nodes $C_1, C_2, ..., C_K$ by separator sets $S_1, S_2, ..., S_K$ respectively, once it has received the downward message $m_{C_p \rightarrow C}$ from $C_p$, for each $k \in \{1,2,...,K \}$, compute and send the downward message: \[m_{C \rightarrow C_k} \nonumber = \mathcal{W}^S \times_{S} m_{C_p \rightarrow C} \operatorname*{\times_{S_{j}}}\limits_{j \in (\{1,2,...,K\} \backslash k)} m_{C_j \rightarrow C} \addtag \] This step is the core operation of \textit{predictive belief propagation}. We gradually compute predictive messages level by level from the root clique down to the leaf cliques. All the downward messages $m_{C \rightarrow C_k}$ in this step are \textit{predictive messages}, as defined in Definition 2.\\ \textbf{Step 4. Computing Query Result:} For the query node $X_Q$ associated with $C_Q$, denote $C_Q$'s parent node as $C_p$ and the separator set between them as $S_Q$. 
First use the leaf tensor $\Phi^{S_Q}$ and its Moore-Penrose pseudoinverse $\Phi^{S_Q^{\dagger}}$ to transform the downward incoming message $m_{C_p \rightarrow C_Q}$ and the upward outgoing message $m_{C_Q \rightarrow C_p}$, respectively, and then compute the Hadamard product of these transformed versions of the two messages to obtain an estimate of the unnormalized conditional probability of $\delta(C_Q)$ given all the evidence $\{X_i = x_i\}_{i \in \mathcal{E}}$: \begin{center} $\widehat{\mathbb{P}}[\delta(C_Q) \mid \{ X_i = x_i \}_{i \in \mathcal{E}}] \propto \big(m_{C_p \rightarrow C_Q} \times_{S_Q} \Phi^{S_Q^{\dagger}} \big) \circ \big( \Phi^{S_Q} \times_{S_Q} m_{C_Q \rightarrow C_p} \big)$ \end{center} Now we marginalize out the variables in $\delta(C_Q) \backslash X_Q$ and renormalize to obtain the final query result - the estimate of the conditional probability distribution of the query variable $X_Q$ given all the evidence: $\widehat{\mathbb{P}}[X_Q \mid \{ X_i = x_i\}_{i \in \mathcal{E}}]$.\\ \textbf{[Additional Note]}: Another important type of query that we can also compute here is the joint probability of all the observed evidence: $\widehat{\mathbb{P}}[ \{ X_i = x_i\}_{i \in \mathcal{E}}]$. In Step 4 above, before marginalization and renormalization, the Hadamard product is indeed equal to $\widehat{\mathbb{P}}[\delta(C_Q), \{ X_i = x_i \}_{i \in \mathcal{E}}]$ (see Appendix \textbf{A.5} for the proof). We can marginalize out all the variables in $\delta(C_Q)$ from it to obtain $\widehat{\mathbb{P}}[ \{ X_i = x_i\}_{i \in \mathcal{E}}]$. For example, this type of query is used for classification in the handwritten digit recognition task in the experiment in Section 6. 
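To make Step 4 concrete, here is a toy numeric sketch in Python. The two message vectors below are made-up placeholders standing in for the transformed downward and upward messages, with $\delta(C_Q)=(X_Q,Y)$, $|X_Q|=2$, $|Y|=3$, flattened row-major:

```python
def hadamard(u, v):
    """Elementwise (Hadamard) product of two flattened tensors."""
    return [a * b for a, b in zip(u, v)]

# placeholder transformed messages over delta(C_Q), flattened row-major
down_msg = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]   # stands in for the transformed m_{C_p -> C_Q}
up_msg   = [0.5] * 6                         # stands in for the transformed m_{C_Q -> C_p}

joint = hadamard(down_msg, up_msg)   # proportional to P[delta(C_Q), evidence]
p_evidence = sum(joint)              # joint probability of all observed evidence
# marginalize out Y (3 values) and renormalize -> P[X_Q | evidence]
p_xq = [sum(joint[i * 3:(i + 1) * 3]) / p_evidence for i in range(2)]
print(p_xq)
```

The intermediate `p_evidence` corresponds to the evidence-probability query in the Additional Note, obtained before the final marginalization and renormalization.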
\section{Extending to Continuous Domain through RKHS Embeddings} One of the biggest advantages of our new algorithm compared to previous methods is that it can be seamlessly extended from the discrete domain to the continuous domain in a nonparametric fashion using the technique of kernel embeddings. When encountering continuous random variables in a latent graphical model, we use reproducing-kernel Hilbert space (RKHS) embeddings \cite{Boots13,Song10b,Song10,Song11b} of distributions as sufficient-statistic features and express all the learning and belief propagation operations as tensor algebra in the infinite-dimensional Hilbert space, and then employ the kernel trick to transform these operations back into tractable finite-dimensional linear algebra calculations over Gram matrices \cite{Song11,Grune12}. We present our full derivation of this extension in Appendix \textbf{A.4}. \section{Experiments} We design two sets of experiments to evaluate the performance of our proposed algorithm, one using synthetic data and the other using real data. \begin{figure}[h] \centering \includegraphics[align = c, width=0.4\columnwidth]{toy_exp.pdf} \caption{A directed latent-variable graphical model. Green nodes indicate observable variables and red nodes indicate latent variables.} \end{figure} \subsection{Synthetic Dataset} In this experiment, we test the performance of our algorithm on the task of learning and running inference on the discrete-valued latent-variable graphical model depicted in Figure 4 using artificially generated synthetic data and compare it with both the standard EM algorithm \cite{Dempster77} and the stepwise online EM algorithm \cite{Liang09}.
\begin{figure}[h] \centering \includegraphics[align = c, width = 0.7\columnwidth]{toy_performance.pdf} \\ \quad\\ \quad\\ \includegraphics[align = c, width = 0.7\columnwidth]{toy_time.pdf} \caption{Comparison between our learning algorithm, the EM algorithm and the stepwise online EM algorithm on the synthetic dataset.} \end{figure} We randomly initialize all the conditional probability tables in this model to serve as the ground-truth parameters, and then sample a dataset containing joint observations of the observable variables. Next we apply our proposed algorithm to learn this model, and evaluate its performance on the task of inferring the posterior distribution of variable $D$ given the observed values at variables $G$, $H$, and $E$. In our experiment, we use ridge regression \cite{Friedman06} for S1A and S1B. We compute the Kullback-Leibler divergence between our algorithm's inferred posterior and the ground-truth posterior calculated using the exact Shafer-Shenoy algorithm and average across all possible joint realizations of the variables $(g,h,e)$. We report the results in Figure 5, where we see that the average KL divergence between our algorithm's results and the ground-truth posterior quickly decreases and approaches 0 as the size of the training data increases. This result demonstrates that our algorithm quickly learns to perform accurate inference over latent graphical models. We also run the standard EM algorithm \cite{Dempster77} and the stepwise online EM algorithm \cite{Liang09} to learn the same model with the same synthetic dataset, and compare their performance and training time with those of our algorithm (Figure 5).\footnote{In our experiment, we give both EM and online EM 10 random restarts and take the best ones as their performance evaluation.} Our algorithm matches the learning performance of EM and online EM, but is much faster to train than both of them.
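The averaged-KL evaluation metric used above can be sketched as follows (Python; the posterior pairs are invented toy numbers, not results from the paper):

```python
from math import log

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for two discrete distributions given as lists of probabilities."""
    return sum(pi * log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

# toy (ground-truth posterior, inferred posterior) pairs, one per realization (g, h, e)
posterior_pairs = [
    ([0.70, 0.30], [0.60, 0.40]),
    ([0.25, 0.75], [0.20, 0.80]),
]
# average the divergence across all joint realizations
avg_kl = sum(kl_divergence(p, q) for p, q in posterior_pairs) / len(posterior_pairs)
print(avg_kl)  # nonnegative; approaches 0 as the inferred posteriors improve
```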
The spectral algorithms cannot perform such inference tasks on individual observable variables in a tractable manner, so we did not include them as baselines here. \begin{figure}[h] \centering \includegraphics[align = c, width=0.8\columnwidth]{digit_model.pdf} \caption{A directed latent-variable graphical model to model the generative process of the 16-dimensional feature vectors of handwritten digits.} \end{figure} \subsection{Handwritten Digit Recognition} In this experiment we consider the task of recognizing handwritten digits using the Pen-Based Recognition of Handwritten Digits dataset in the UCI machine learning repository \cite{UCI}. This dataset contains 10992 handwritten digit samples collected from 44 writers on a tablet with $500\times500$ pixel resolution, with all the coordinates normalized into integer values between 0 and 100. It then used spatial resampling to obtain 8 regularly spaced points to represent each handwritten digit, and the feature vector for each digit is the 16-dimensional vector consisting of the $(x,y)$ coordinates of the 8 representative points. In order to learn to classify these handwritten digits, we design a latent variable graphical model structure (shown in Figure 6) to model the generative process of the 16-dimensional feature vectors of handwritten digits. The blue nodes indicate observable variables that correspond to the coordinate values, and the orange nodes indicate latent variables. We apply our learning algorithm to learn a different generative model for each of the 10 digit categories, and then at test time, we use our inference algorithm to calculate the probability that a test instance is generated from each of the 10 different models, and choose the one with the highest probability as our predicted category. Here we use 7000 samples as our training set, 494 samples as our validation set, and the other 3498 samples as our testing set.
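The classification rule described above, score a test instance under each per-digit generative model and take the argmax, can be sketched generically in Python. The per-digit log-likelihood functions below are hypothetical stand-ins for the learned models evaluated through the evidence-probability query of Step 4:

```python
from math import inf

def classify(instance, models):
    """models: dict mapping digit -> function returning the (log-)probability
    that the instance's feature vector is generated by that digit's model.
    Returns the highest-scoring digit."""
    best_digit, best_score = None, -inf
    for digit, log_likelihood in models.items():
        score = log_likelihood(instance)
        if score > best_score:
            best_digit, best_score = digit, score
    return best_digit

# stand-in models: digit 7's model assigns this instance the highest score
toy_models = {3: lambda x: -4.2, 7: lambda x: -1.3, 9: lambda x: -2.8}
print(classify([0.0] * 16, toy_models))  # 7
```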
In our experiment, we use Gaussian radial basis function kernel embeddings with bandwidth parameter $\sigma = 10$ as our feature vectors, and use ridge regression \cite{Friedman06} with regularization parameter $\lambda = 0.1$ for S1A and S1B. \begin{figure}[h] \begin{center} \includegraphics[align = c, width=0.8\columnwidth]{left_plot.pdf}\\ \quad\\ \quad\\ \includegraphics[align = c, width=0.8\columnwidth]{right_plot.pdf} \caption{Comparison of our learning algorithm, the spectral algorithm, the EM algorithm and the stepwise online EM algorithm on the handwritten digit recognition experiment.} \end{center} \end{figure} We also run the standard EM algorithm \cite{Dempster77}, a stepwise online EM algorithm \cite{Liang09}, and a spectral algorithm as the baselines to learn the same model, and compare their classification accuracy and training time with those of our algorithm (Figure 7). From the experimental results we can clearly see that our learning algorithm performs much better and is much faster to train than both the spectral algorithm and the two EM algorithms on this handwritten digit classification task. Moreover, we observe that our algorithm is also very robust and yields good performance even when the size of the training data is relatively small, while the other three algorithms all perform poorly in this scenario. \section{Conclusion} In this paper, we have introduced predictive belief propagation as a new formulation of message-passing inference over latent junction trees and developed a new algorithm for learning general latent-variable graphical models based on it. Our new algorithm unifies the learning and inference of all different types of latent graphical models under a single flexible framework, and overcomes many severe limitations faced by previous methods like EM and spectral algorithms (see Appendix \textbf{A.1} for a detailed comparison between our algorithm and previous algorithms).
We also proved that our new algorithm gives a consistent estimator of inference queries over all latent graphical models. We evaluated its performance on both synthetic and real datasets, and showed that it learns different types of latent graphical models efficiently and achieves superior inference performance. Therefore, we believe that our algorithm provides a powerful and flexible new learning framework for general latent-variable graphical models. \bibliographystyle{aaai}
\subsection{Reconstruction Algorithm} The main result of this subsection is summarized in the following theorem. \begin{theorem}\label{th:naive} If a string ${\boldsymbol x}\in\Sigma^n$ is $ (L-1,4s+1) $-substring distant, then it is $(L,t,s)$-reconstructible. \end{theorem} The proof of Theorem~\ref{th:naive} is given by an explicit reconstruction algorithm, presented in Algorithm~\ref{alg:reconstruct-substitutions}. The algorithm receives an erroneous multispectrum $ U \in {\cal B}_{L,t,s}({\boldsymbol x}) $ for $ {\boldsymbol x} \in {\cal S}_n(L-1,4s+1) $ and reconstructs the maximum reconstructible substring ${\mathbf W}_2(U)$. The algorithm uses the substring-distant property of $ {\boldsymbol x} $ to identify the correct order of the substrings of $ U $. Then, it takes for each entry of $ {\boldsymbol x} $ the majority vote of its occurrences in $ U $. \begin{algorithm} \caption{Reconstruct($ U,t,s $)}\label{alg:reconstruct-substitutions} \algorithmicrequire \ $U \in {\cal B}_{L,t,s}({\boldsymbol x})$ for $ {\boldsymbol x} \in {\cal S}_{n}(L-1,4s+1)$ \\ \algorithmicensure \ ${\mathbf W}_2(U) $ the maximum reconstructible-substring of $U $ \begin{algorithmic}[1] \State{Initialize $ B[1,\ldots,n] $ as an array of $ n $ empty vectors, set $ i = 1, A = U $} \State{Pick $ {\boldsymbol w}_1 \in A $ such that for every other $ {\boldsymbol w} \in A $, $ d_H($Pref$_{L-1}({\boldsymbol w}_1), $Suff$_{L-1}({\boldsymbol w}) ) \ge 2s+1 $}\label{step:pickw1} \State{Set $A = A\setminus \{{\boldsymbol w}_1\}$} \State{\textbf{For} every $ j = 1, \dots, L $, append $ ({\boldsymbol w}_1)_j $ to $ B[j] $ } \While{$ |A| \neq 0 $} \State{Pick $ {\boldsymbol w}_{i+1} \in A $ such that $ d_H( $Suff$_{L-1}({\boldsymbol w}_{i}), $ Pref$_{L-1}({\boldsymbol w}_{i+1})) \le 2s $} \label{step:match-suff-pref2} \State{Set $A = A\setminus \{{\boldsymbol w}_{i+1}\}, i = i+1 $} \State{\textbf{For} every $ j = 1, \dots, L $, append $ ({\boldsymbol w}_i)_j $ to $ B[i+j-1] $ } \EndWhile 
\State{Return $ {\boldsymbol y} = (y_1,\ldots,y_n)$ where $y_j = maj(B[j]) $}\label{step:makey} \end{algorithmic} \end{algorithm} Let $ U =\{ {\boldsymbol u}_1, \dots, {\boldsymbol u}_{n-L+1} \} $ be the input set of the algorithm, ordered with respect to $ S_L({\boldsymbol x}) $, similarly to Definition~\ref{def:ts-erroneous}. A demonstration of the execution of Algorithm~\ref{alg:reconstruct-substitutions} is presented in the next example. \begin{example} Let $ n,L,{\boldsymbol x},U $ be as in Example~\ref{ex:ts-erroneous}. The string $ {\boldsymbol x} $ is $ (L-1,5) $-substring distant and therefore $ U $ is a valid input for Algorithm~\ref{alg:reconstruct-substitutions} with $ t=3, s=1 $. Let $ {\boldsymbol u}_1, \dots, {\boldsymbol u}_7 $ denote the elements of $ U $ similarly to Example~\ref{ex:ts-erroneous}. The algorithm picks at Step~\ref{step:pickw1} the substring $$ {\boldsymbol w}_1 = {\boldsymbol u}_1 = 0011100010 $$ since for every other $ i \in [2,7] $, $$ d_H(001110001, \text{Suff}_{9}({\boldsymbol u}_i)) \ge 3. $$ Then, it continues to pick the other substrings of $ U $ in increasing order at Step~\ref{step:match-suff-pref2}, since for every substring $ {\boldsymbol u}_i$ for $ i \in [1,6] $, only $ {\boldsymbol u}_{i+1} $ satisfies $$ d_H(\text{Suff}_9({\boldsymbol u}_{i}), \text{Pref}_9({\boldsymbol u}_{i+1})) \le 2. $$ For example, both $ {\boldsymbol u}_3,{\boldsymbol u}_4 $ are erroneous yet satisfy $$ d_H(\text{Suff}_{9}({\boldsymbol u}_3), \text{Pref}_{9}({\boldsymbol u}_4))\hspace{-0.5ex} =\hspace{-0.5ex} d_H(100001011,110001111)\hspace{-0.5ex} =\hspace{-0.5ex} 2 $$ while for every $ i \neq 4 $, $$ d_H(\text{Suff}_{9}({\boldsymbol u}_3), \text{Pref}_{9}({\boldsymbol u}_i)) \ge 3. $$ Therefore, for every $ j \in [n] $, Algorithm~\ref{alg:reconstruct-substitutions} stores all the occurrences of $ x_j $ in $ U $ in the vector $ B[j] $. For example, $ B[1] = (0), B[5] = (1,1,0,1,1) $ and so on.
Thus, following the construction of the result string in Step~\ref{step:makey}, the algorithm returns $$ {\boldsymbol y} = {\mathbf W}_2(U) = 0011100010110111. $$ \end{example} We prove next the correctness of Algorithm~\ref{alg:reconstruct-substitutions}. \begin{lemma}\label{lem:ts-erroneous} Algorithm~\ref{alg:reconstruct-substitutions} successfully reconstructs $ {\mathbf W}_2(U) $. \end{lemma} \begin{IEEEproof} First, we show that the algorithm matches two substrings $ {\boldsymbol w}_i, {\boldsymbol w}_{i+1} $ in Step~\ref{step:match-suff-pref2} if and only if $ {\boldsymbol w}_i = {\boldsymbol u}_j $ and $ {\boldsymbol w}_{i+1} = {\boldsymbol u}_{j+1} $ for some consecutive $ {\boldsymbol u}_j, {\boldsymbol u}_{j+1} \in U $. Indeed, since $ U \in {\cal B}_{L,t,s}({\boldsymbol x})$, we have that \begin{flalign*} d_H(\text{Suff}&_{L-1}({\boldsymbol u}_j), \text{Pref}_{L-1}({\boldsymbol u}_{j+1})) \\&\le\begin{aligned}[t] &d_H(\text{Suff}_{L-1}({\boldsymbol u}_j),{\boldsymbol x}_{j+1,L-1}) \\ &+ d_H( \text{Pref}_{L-1}({\boldsymbol u}_{j+1}),{\boldsymbol x}_{j+1,L-1}) \end{aligned} \\& \le s+s = 2s, \end{flalign*} and on the other hand, for $ {\boldsymbol u}_j, {\boldsymbol u}_{k} \in U $ with $ k \neq j+1 $, it follows from $ {\boldsymbol x} \in {\cal S}_{n}(L-1,4s+1) $ and from $ U \in {\cal B}_{L,t,s}({\boldsymbol x}) $ that \begin{align*} d_H(\text{Suff}&_{L-1}({\boldsymbol u}_j), \text{Pref}_{L-1}({\boldsymbol u}_{k})) \\&\ge \begin{aligned}[t] & d_H({\boldsymbol x}_{j+1,L-1}, {\boldsymbol x}_{k,L-1}) - d_H(\text{Suff}_{L-1}({\boldsymbol u}_j),{\boldsymbol x}_{j+1,L-1}) \\&-d_H(\text{Pref}_{L-1}({\boldsymbol u}_{k}),{\boldsymbol x}_{k,L-1}) \end{aligned} \\&\ge 4s+1 -s -s = 2s + 1. \end{align*} By the same arguments, the algorithm picks $ {\boldsymbol w}_1 = {\boldsymbol u}_1 $ at Step~\ref{step:pickw1}. By a simple induction, it follows that for every $ i \in [n-L+1] $, $ {\boldsymbol w}_i = {\boldsymbol u}_i $.
Hence, for every $ j \in [n] $, $$ B[j] = \{ ({\boldsymbol u}_i)_k \mid i \in [n-L+1], k \in [L], i + k - 1 = j \}, $$ and therefore the string constructed by the algorithm in Step~\ref{step:makey} is $ {\mathbf W}_2(U) $. \end{IEEEproof} The proof of Lemma~\ref{lem:ts-erroneous} also verifies the correctness of Theorem~\ref{th:naive}. \subsection{Cardinality Analysis of $(L,d)$-Substring Distant Strings} In this subsection we study the cardinality of the set of substring distant strings for different parameters of $L$ and $d$. For simplicity, all the results in the rest of this section are presented for the binary case. The next lemma ensures that for given $d$ and for $n$ large enough, the redundancy of the set ${\cal S}_n(L,d)$ is at most a single bit, when $L=2\log (n) +(d-1)\log (\log (n)) +{\cal O}(1)$. \begin{lemma}\label{lem:Ld_red1} For $L=2\log (n) +(d-1)\log (\log (n)) +{\cal O}(1)$ and $n$ large enough, it holds that $S_n(L,d) \geq 2^{n-1}$ and hence the redundancy of the set ${\cal S}_n(L,d)$ is at most a single bit. \end{lemma} \begin{IEEEproof} Let $L=2\log (n) +(d-1)\log (\log (n)) +C(d-1)$ for some positive constant $C$ that will be determined later. If a string is not $(L,d)$-substring distant, then it contains at least two length-$L$ substrings whose Hamming distance is at most $d-1$. Hence, by the union bound, the number of strings that are not $(L,d)$-substring distant can be bounded from above by \begin{align*} n^2 2^{n-L} L^{d-1} & = 2^n \cdot \frac{n^2L^{d-1}}{2^L} = 2^n \cdot \frac{L^{d-1}}{(\log (n))^{d-1}2^{C(d-1)}} & \\ & = 2^n \cdot \left(\frac{L}{2^C\log (n) }\right)^{d-1} \overset{(a)}{\leq} 2^n \cdot \left(\frac{3\log (n)}{2^C\log (n)}\right)^{d-1} & \\ & = 2^n \cdot \left(\frac{3}{2^C}\right)^{d-1}, & \end{align*} where inequality (a) holds for $n$ large enough.
Hence, by choosing $C=\log(3)+1/(d-1)$ we get that the number of strings that are not $(L,d)$-substring distant is at most $2^{n-1}$, which implies that $S_n(L,d) \geq 2^{n-1}$. \end{IEEEproof} Our next result claims that the asymptotic rate of the set ${\cal S}_n(L,d)$, when $L=\left\lceil a\log(n)\right\rceil$ and $a>1$, is 1. The proof follows the same structure as the one from~\cite{EliGabMedYaa19}, but we present it here for completeness. \begin{lemma} For fixed $d$, $a>1$, and $L=\left\lceil a\log (n) \right\rceil$, it holds that the asymptotic rate of the set ${\cal S}_n(L,d)$ is 1. \end{lemma} \begin{IEEEproof} We will show that for any fixed $d$ and $a>1$, $$\lim_{n\rightarrow \infty}\frac{\log(S_n(L,d))}{n}=1.$$ We follow the proof derived in~\cite{EliGabMedYaa19} for the case of $d=1$ and extend it to arbitrary $d$. We denote by ${\cal S}(a,d)$ the set of all length-$n$ $(\left\lceil a\log (n)\right\rceil,d)$-substring distant strings, that is, ${\cal S}(a,d) \triangleq \bigcup_{n>0}S_n(\left\lceil a\log (n)\right\rceil,d)$. Let us start with the following observation. \begin{align*} Pr\left({\boldsymbol w} \in {\cal S}_{(m+1)L}(L,d)\right) & = Pr \left(\sum_{h=1}^mX_h <1\right) & \\ &= 1- Pr \left(\sum_{h=1}^mX_h \geq 1\right).& \end{align*} In this case, the random variable $X_h$ is defined to be $$X_h = \sum_{j=(h-1)L+1}^{hL}\sum_{i=0}^{j-1} \mathbf{1} ({\boldsymbol w}_{i,L}\in B_{d-1}({\boldsymbol w}_{j,L})),$$ where $\mathbf{1} ({\boldsymbol w}_{i,L}\in B_{d-1}({\boldsymbol w}_{j,L}))$ is a binary function that returns 1 if and only if ${\boldsymbol w}_{i,L}\in B_{d-1}({\boldsymbol w}_{j,L})$.
For all $h>0$, \begin{align*} P_h & \triangleq Pr \left( {\boldsymbol w}_{1,(h+1)L} \in {\cal S}(a,d) | {\boldsymbol w}_{1,hL} \in {\cal S}(a,d) \right) & \\ & = 1-Pr(X_h\geq 1).& \end{align*} To bound the probability $Pr(X_h\geq 1)$ from above, we calculate \begin{align*} E[X_h] & = \sum_{j=(h-1)L+1}^{hL}\sum_{i=0}^{j-1} Pr ({\boldsymbol w}_{i,L}\in B_{d-1}({\boldsymbol w}_{j,L})) & \\ & \overset{(a)}{\leq} \sum_{j=(h-1)L+1}^{hL}\sum_{i=0}^{j-1} \frac{L^{d-1}}{2^L} \leq \frac{L^{d-1}}{2^L} \cdot hL^2 = \frac{hL^{d+1}}{2^L},& \end{align*} where in inequality (a) we used that $Pr ({\boldsymbol w}_{i,L}\in B_{d-1}({\boldsymbol w}_{j,L})) = \frac{|B_{d-1}({\boldsymbol w}_{j,L})|}{2^L} \leq \frac{L^{d-1}}{2^L}$. Hence, by Markov's inequality, $Pr(X_h \geq 1) \leq \frac{hL^{d+1}}{2^L},$ and thus $$P_h \geq 1- \frac{hL^{d+1}}{2^L}.$$ Next, we get that \begin{align*} Pr\left({\boldsymbol w} \in {\cal S}_{n}(L,d)\right) & \geq \prod_{h=0}^{\lfloor n/L\rfloor}\left( 1- \frac{hL^{d+1}}{2^L} \right) & \\ & = 2^{\sum_{h=0}^{\lfloor n/L\rfloor}\log\left( 1- \frac{hL^{d+1}}{2^L} \right)} & \\ & \overset{(b)}{\geq} 2^{-\sum_{h=0}^{\lfloor n/L\rfloor}\frac{hL^{d+1}}{(2^L-hL^{d+1})\ln(2)} }& \\ & \overset{(c)}{\geq} 2^{-(n/L+1)\frac{nL^{d}}{(2^L-nL^{d})\ln(2)} }& \\ & \overset{(d)}{\geq} 2^{-\frac{n^2L^{d-1}}{2^L-nL^d}} \overset{(e)}{\geq} 2^{-\frac{n^2L^{d-1}}{2^{L-1}}},& \end{align*} where inequality (b) follows from $\log_q(1-x) \geq -\frac{x}{(1-x)\ln q}$ for $0<x<1$, inequality (c) is a result of $\frac{hL^{d+1}}{(2^L-hL^{d+1})} \leq \frac{(n/L)L^{d+1}}{(2^L-(n/L)L^{d+1})}$, for all $0\leq h\leq\lfloor n/L\rfloor$, inequality (d) follows from simple arithmetic operations, and lastly inequality (e) holds for $n$ large enough.
Therefore, \begin{align*} \frac{\log(S_n(L,d))}{n} & \geq \frac{\log(2^{n-\frac{n^2L^{d-1}}{2^{L-1}}})}{n} = \frac{{n-\frac{n^2L^{d-1}}{2^{L-1}}}}{n} & \\ & \geq 1- \frac{nL^{d-1}}{2^{L-1}}& \end{align*} and finally we conclude that $\lim_{n\rightarrow \infty}\frac{\log(S_n(L,d))}{n}=1.$ \end{IEEEproof} \subsection{Encoding of $ (L,d) $-Distant Strings} In this section, a generic encoding algorithm is presented that uses a single redundancy bit in order to encode length-$ n $ strings that are $ (L,d) $-distant, for $$ L = 2 \log (n) + 2 (d-1) \log (\log (n)) + 4. $$ Note that this value of $L$ differs from the value derived in Lemma~\ref{lem:Ld_red1} only by roughly $(d-1) \log (\log (n))$. First, we present some helpful definitions. Let $ {\boldsymbol w},{\boldsymbol w}' \in \Sigma^n $ be strings such that $ d_H({\boldsymbol w}, {\boldsymbol w}') \le \rho $ for an integer $ \rho \le n $. The construction $ EncDist_{n,\rho}({\boldsymbol w}, {\boldsymbol w}') $ is taken from \cite{LevYaa18} and encodes the difference between $ {\boldsymbol w} $ and $ {\boldsymbol w}' $. Let $ p_1,\dots, p_{d_H({\boldsymbol w},{\boldsymbol w}')} $ denote the indices of the entries on which $ {\boldsymbol w}, {\boldsymbol w}' $ disagree. For every $ i \in [\rho] $ let $ {\boldsymbol y}_i \in \Sigma^{\log (n)}$ be the following value: \[ {\boldsymbol y}_i = \begin{cases} b(p_i) & i \le d_H({\boldsymbol w},{\boldsymbol w}') \\ 0^{\log (n)} & \text{Otherwise} \end{cases} \] Thus, \[ EncDist_{n,\rho}({\boldsymbol w},{\boldsymbol w}') = {\boldsymbol y}_1 \circ \cdots \circ {\boldsymbol y}_{\rho}. \] Notice that the size of the output is independent of $ {\boldsymbol w}, {\boldsymbol w}' $ and equals $ \rho \cdot \log (n) $. We sometimes omit the parameter $ n $ if it is clear from the context. We utilize a marker substring, first introduced in~\cite{LevYaa18}, which we refer to as a \emph{$ d$-auto cyclic string}.
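As an illustration of the $EncDist$ construction just defined, here is a short sketch (Python; taking $b(\cdot)$ to be a fixed-width binary representation of $\lceil\log_2(n+1)\rceil$ bits and 1-based positions are our own conventions):

```python
def enc_dist(w, w_prime, rho):
    """Sketch of EncDist_{n,rho}(w, w'): list the (at most rho) positions
    where the strings disagree, each as a fixed-width binary block, then
    pad with all-zero blocks so the output length is always rho * width."""
    n = len(w)
    width = n.bit_length()                       # enough bits for any position in [n]
    mismatches = [i + 1 for i in range(n) if w[i] != w_prime[i]]
    assert len(mismatches) <= rho, "Hamming distance must be at most rho"
    blocks = [format(p, f'0{width}b') for p in mismatches]
    blocks += ['0' * width] * (rho - len(blocks))
    return ''.join(blocks)

# strings of length 4 differing only in position 2 (1-based):
print(enc_dist('0011', '0111', rho=2))  # '010000': block '010', then zero padding
```

As in the text, the output length depends only on $n$ and $\rho$, not on the particular pair of strings.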
A string $ {\boldsymbol u} \in \Sigma^n $ is a $ d $-auto cyclic string if it satisfies $$ d_H({\boldsymbol u}, 0^i \circ {\boldsymbol u}_{1,n-i}) \ge d $$ for every $ 1 \le i \le d $. The authors of~\cite{LevYaa18} also presented a construction of such strings of length $ d \lceil \log (d) \rceil +2d $. Let $ {\boldsymbol u}_d $ denote a $ d $-auto cyclic string for the rest of this section. Next, let $ {\boldsymbol w} \in \Sigma^k $ for $ k \le n $. We want to construct a set of length-$ n $ strings that contains all $ {\boldsymbol y} \in\Sigma^n $ that satisfy \begin{align}\label{eq:cb-t} d_H(\text{Pref}_{n}({\boldsymbol w} \circ {\boldsymbol y}), {\boldsymbol y}) \le t, \end{align} for some $ t \le n $. Therefore, we construct the \emph{concatenation ball of radius $ t $} around $ {\boldsymbol w} $, denoted as ${\cal C}{\cal B}_{n,t}({\boldsymbol w})$. For this purpose, let $ m = \lceil n / k \rceil $ and let $ t_1, \dots, t_m $ be a series of integers such that $ \sum_{i=1}^m t_i \le t $. Furthermore, let $ {\boldsymbol w}_0, {\boldsymbol w}_1, \dots, {\boldsymbol w}_m $ be a series of substrings such that $ {\boldsymbol w}_0 = {\boldsymbol w} $ and for every $ i \in [m] $, $ {\boldsymbol w}_i \in {\cal B}_{t_i}({\boldsymbol w}_{i-1}) $. Thus, the string $ \text{Pref}_{1,n}({\boldsymbol w}_1 \circ \cdots \circ {\boldsymbol w}_m) $ belongs to the set $ {\cal C}{\cal B}_{n,t}({\boldsymbol w}) $. Namely, \begin{align*} {\cal C}{\cal B}&_{n,t}({\boldsymbol w}) = \\&\hspace{-1ex}\left\{ \text{Pref}_{1,n}({\boldsymbol w}_1 \circ \cdots \circ {\boldsymbol w}_m)\ \middle\vert \begin{array}{l} \exists \{t_i \}_{i=1}^m \text{ s.t. } \sum_{i=1}^m t_i \le t, \\ \exists \{{\boldsymbol w}_i \}_{i=0}^m \text{ s.t. } {\boldsymbol w}_0 = {\boldsymbol w} \text{ and} \\ \forall i \in [m], {\boldsymbol w}_i \in {\cal B}_{t_i}({\boldsymbol w}_{i-1}) \end{array}\right\}.
\end{align*} One can verify that every $ {\boldsymbol y} \in \Sigma^n$ that satisfies (\ref{eq:cb-t}) belongs to $ {\cal C}{\cal B}_{n,t}({\boldsymbol w}) $. Algorithm~\ref{alg:ld-dist-encode1} receives a string $ {\boldsymbol w} \in \Sigma^{n-1} $, and outputs a string $ {\boldsymbol x} \in {\cal S}_n(L,d) $. The algorithm shares ideas with the encoding scheme of \emph{repeat-free words} from \cite{EliGabMedYaa19}, and consists of two main procedures, elimination and expansion. First, we append to $ {\boldsymbol w} $ a marker substring of length $ L/2+d+1 $ that contains the $ d $-auto cyclic string $ {\boldsymbol u}_d $, which is used by the decoder to identify the end of the elimination procedure. Then, in the elimination procedure we repeatedly look for pairs of length-$ L $ substrings whose Hamming distance is less than $ d $. When found, we remove the first of the substrings and encode the occurrence using the function $ EncDist_{L,d-1} $. Likewise, we eliminate occurrences of substrings of length $ L/2 $ whose Hamming distance from the $ (L/2) $-suffix of the string is less than $ d $. During this procedure, we ensure that the marker substring located at the suffix of the string remains intact. Later, in the expansion procedure we enlarge the string to length $ n $ by inserting substrings of length $ L/2 $ while making sure that the string remains $ (L,d) $-distant. We denote for the rest of this section $$ \ell = L/2 = \log (n) + (d-1) \log (\log (n)) +2.
$$ \begin{algorithm*} \caption{LDEncode($ {\boldsymbol w}, L,d $)}\label{alg:ld-dist-encode1} \begin{algorithmic}[1] \Require {A string $ {\boldsymbol w} \in \Sigma^{n-1} $} \Ensure {A string ${\boldsymbol x} \in S_n(L,d)$} \State Set $ {\boldsymbol x} = {\boldsymbol w} \circ 0 \circ 1^d \circ 0^{\ell-|{\boldsymbol u}_d|} \circ {\boldsymbol u}_d $ \label{step:LDdtp-init} \Statex{\emph{Elimination}:} \While{there exist indices $ i < j $ such that $ d_H({\boldsymbol x}_{i,L},{\boldsymbol x}_{j,L}) < d $ \textbf{or} an index $ i \le |{\boldsymbol x}| -2\ell + |{\boldsymbol u}_d| $ where $ d_H({\boldsymbol x}_{i,\ell},0^{\ell-|{\boldsymbol u}_d|}\circ {\boldsymbol u}_d) < d $ }\label{step:LDdtp-elim-while} \Case \textbf{1}: violating substrings $ {\boldsymbol x}_{i,L},{\boldsymbol x}_{j,L} $ exist \If{$ i,j \in J_1 = [|{\boldsymbol x}|-L -\ell -d, |{\boldsymbol x}|- L +1] $ ( $ {\boldsymbol x}_{i,L} $ contains the suffix $ 0 \circ 1^d \circ 0^{\ell-|{\boldsymbol u}_d|} \circ {\boldsymbol u}_d $)} \State{Remove $ {\boldsymbol x}_{i,L- (|{\boldsymbol x}|-L -\ell -d -i) + 1} $, append $ 100 \circ b_{J_1}(i) \circ b_{J_1}(j) \circ EncDist_{L,d-1}({\boldsymbol x}_{i,L}, {\boldsymbol x}_{j,L}) $ to the left of $ {\boldsymbol x} $ }\label{step:LDdtp-elim-if12-state} \Else \State{Remove $ {\boldsymbol x}_{i,L} $, append $ \ 101 \circ b(i) \circ b(j) \circ EncDist_{L,d-1}({\boldsymbol x}_{i,L}, {\boldsymbol x}_{j,L}) $ to the left of $ {\boldsymbol x} $} \label{step:LDdtp-elim-if13-state} \EndIf \EndCase \Case \textbf{2}:{ a substring $ {\boldsymbol x}_{i,\ell} $ with $ i < |{\boldsymbol x}| -2\ell + |{\boldsymbol u}_d| $ such that $ d_H({\boldsymbol x}_{i,\ell},0^{\ell-|{\boldsymbol u}_d|}\circ {\boldsymbol u}_d) < d $ exists }\label{step:LDdtp-elim-if2-cond} \If{$ i \in J_2 = [|{\boldsymbol x}|-2\ell -d, |{\boldsymbol x}|- 2\ell + |{\boldsymbol u}_d| -1] $ ( $ {\boldsymbol x}_{i,\ell} $ contains the suffix $ 0 \circ 1^d \circ 0^{\ell-|{\boldsymbol u}_d|} \circ {\boldsymbol u}_d
$)} \State{Remove $ {\boldsymbol x}_{i,\ell- (|{\boldsymbol x}|-2\ell -d -i) + 1} $, append $ 11 \circ b_{J_2}(i) \circ EncDist_{\ell,d-1}({\boldsymbol x}_{i,\ell}, 0^{\ell-|{\boldsymbol u}_d|}\circ {\boldsymbol u}_d) $ to the left of $ {\boldsymbol x} $ } \label{step:LDdtp-elim-if22-state} \Else \State{Remove $ {\boldsymbol x}_{i,\ell} $, append $ 0 \circ b(i) \circ EncDist_{\ell,d-1}({\boldsymbol x}_{i,\ell}, 0^{\ell-|{\boldsymbol u}_d|}\circ {\boldsymbol u}_d) $ to the left of $ {\boldsymbol x} $ }\label{step:LDdtp-elim-if23-state} \EndIf \EndCase \EndWhile \State {\textbf{if } $|{\boldsymbol x}| \ge n $, return $ {\boldsymbol x}_{1,n}$}\label{step:after-elimination1} \Statex{\emph{Expansion}:} \While{ $ |{\boldsymbol x}| < n $ } \State{Set \[ B = \bigg( \bigcup_{i \in [1,|{\boldsymbol x}|-\ell ]} {\cal B}_{d-1}({\boldsymbol x}_{i,\ell}) \bigg) \cup \bigg( \bigcup_{i \in [|{\boldsymbol x}| - \ell + 1 , |{\boldsymbol x}|]} {\cal C}{\cal B}_{\ell,d-1}({\boldsymbol x}_{i,|{\boldsymbol x}|-i+1}) \bigg) \]}\label{step:consB} \State{ Pick $ {\boldsymbol y} \in \Sigma^{\ell} \setminus B $ and append $ {\boldsymbol x} = {\boldsymbol x} \circ {\boldsymbol y} $ }\label{step:LDdtp-exp-expand} \EndWhile \State{Return $ {\boldsymbol x}_{1,n} $}\label{step:LDdtp-exp-ret} \end{algorithmic} \end{algorithm*} We prove the correctness of the algorithm in the next few claims. \begin{claim}\label{lem:LDdtp-elim-term} Algorithm~\ref{alg:ld-dist-encode1} reaches Step~\ref{step:after-elimination1}, i.e., the elimination procedure terminates. \end{claim} \begin{IEEEproof} We prove this by showing that at each iteration of the elimination loop, the length of $ {\boldsymbol x} $ decreases. We analyze each case of removal and insertion independently. All length comparisons are taken for a large enough $ n $. \\ \emph{Step~\ref{step:LDdtp-elim-if12-state}:} The minimal possible size of the removed substring at this step is achieved when $ i = |{\boldsymbol x}|-L $.
Thus, the algorithm removes a substring of length at least $$ L - \ell -d +1 = \log (n) + (d-1) \log (\log (n)) -d + 3 $$ and inserts a smaller substring of length $$ (d-1) \log (L) + 2\log(\ell+d) + 3. $$ \emph{Step~\ref{step:LDdtp-elim-if13-state}:} The algorithm removes a substring of length $$ L = 2\log (n) + 2(d-1)\log (\log (n)) +4 , $$ and inserts a substring of length $$ 2\log (n) + (d-1) \log (L) + 3 . $$ \emph{Step~\ref{step:LDdtp-elim-if22-state}:} The minimal possible length of the removed substring at this step is reached when $ i = |{\boldsymbol x}|- 2\ell + |{\boldsymbol u}_d| -1 $. Therefore, the algorithm removes a substring of length at least $$ \log (n) + (d-1)\log (\log (n)) - (|{\boldsymbol u}_d|+d+1) $$ and inserts a smaller substring of length $$ (d-1) \log (\ell) + \log (|{\boldsymbol u}_d|+d+1) + 2. $$ \emph{Step~\ref{step:LDdtp-elim-if23-state}:} The algorithm removes a substring of length $$ \ell = \log (n) + (d-1) \log (\log (n)) +2 $$ and inserts a substring of length $$ \log (n) + (d-1) \log (\ell) + 1. $$ \end{IEEEproof} \begin{claim}\label{lem:LDdtp-elim} At Step~\ref{step:after-elimination1} of Algorithm~\ref{alg:ld-dist-encode1}, the string $ {\boldsymbol x} $ \begin{enumerate}[(1)] \item is $ (L,d) $-substring distant, \item ends with $ 0 \circ 1^d \circ 0^{\ell-|{\boldsymbol u}_d|} \circ {\boldsymbol u}_d $, \item contains no other $ \ell $-substring from $ {\cal B}_{d-1}(\text{Suff}_\ell({\boldsymbol x})) $ besides its $ \ell $-suffix. \end{enumerate} \end{claim} \begin{IEEEproof} Properties (1) and (2) follow immediately from the algorithm, since the loop continues as long as $ {\boldsymbol x} $ is not $ (L,d) $-distant, while ensuring that Suff$_{\ell+d+1}({\boldsymbol x})$ is not touched. 
As for (3), from (2) we have that $ \text{Suff}_\ell({\boldsymbol x}) = 0^{\ell-|{\boldsymbol u}_d|}\circ {\boldsymbol u}_d $ and from the definition of a $ d $-auto cyclic string, for every $ i \in [d], $ $$ d_H({\boldsymbol u}_d,0^i\circ ({\boldsymbol u}_d)_{1,|{\boldsymbol u}_d|-i-1}) \ge d $$ and hence for every $ i \in [|{\boldsymbol x}| - \ell - d, |{\boldsymbol x}| - \ell] $ we have $$ d_H(\text{Suff}_\ell({\boldsymbol x}), {\boldsymbol x}_{i,\ell}) \ge d. $$ For $ i \in [|{\boldsymbol x}| - 2\ell + |{\boldsymbol u}_d|, |{\boldsymbol x}| - \ell -d - 1] $, the substring $ {\boldsymbol x}_{i,\ell} $ has $ 1^d $ starting at position $ (|{\boldsymbol x}| -\ell -d)-i $ while $ \text{Suff}_\ell({\boldsymbol x}) $ has $ 0^d $ at this position. Other cases, for $ i <|{\boldsymbol x}| - 2\ell + |{\boldsymbol u}_d| $, are eliminated at Step~\ref{step:LDdtp-elim-if2-cond}. \end{IEEEproof} \begin{claim}\label{lem:LDdtp-exp-yexist} For every iteration of the expansion loop of Algorithm~\ref{alg:ld-dist-encode1}, the set $ \Sigma^{\ell} \setminus B $ constructed in Step~\ref{step:consB} is not empty. \end{claim} \begin{IEEEproof} Using simple counting arguments, we have that for every $ {\boldsymbol w} \in \Sigma^\ell $, the size of the radius-$(d-1)$ Hamming ball around $ {\boldsymbol w} $ satisfies $$ |{\cal B}_{d-1}({\boldsymbol w})| = \sum_{d'=0}^{d-1} \binom{\ell}{d'} \le \ell^{d-1}. $$ Similarly, for every $ {\boldsymbol w} \in \Sigma^k $ with $ k \in [\ell] $, the size of $ {\cal C}{\cal B}_{\ell,d-1}({\boldsymbol w}) $ can be bounded by the same value. Thus, the size of $B$ is bounded by $ n \cdot \ell^{d-1} $. It remains to show that $ n \cdot \ell^{d-1} \le |\Sigma^\ell| = 2^\ell $. Taking the logarithm of both sides, this is equivalent to requiring \begin{align*} \log (n) &+ (d-1)\log (\ell) \le \ell \end{align*} which is satisfied for $ n $ large enough by the value $$ \ell = \log (n) + (d-1)\log (\log (n)) + 2.
$$ \end{IEEEproof} Let $ m $ denote the number of iterations of the expansion loop of Algorithm~\ref{alg:ld-dist-encode1} that were executed. For every $ k \in [m] $, let $ {\boldsymbol x}_k $ denote the value of $ {\boldsymbol x} $ at the end of the $ k $-th iteration, and let $ {\boldsymbol y}_k $ denote the string $ {\boldsymbol y} $ that the algorithm picked at Step~\ref{step:LDdtp-exp-expand} of that iteration. We denote by $ {\boldsymbol x}_0 $ the value of $ {\boldsymbol x} $ before the first iteration of the expansion loop. In the next two lemmas, when referring to $ {\boldsymbol x}_k $ we sometimes omit the subscript $ k$ if it is clear from the context. \begin{claim}\label{lem:LDdtp-exp-suffix} For every iteration $ k \in [m] $, the string $ {\boldsymbol x} = {\boldsymbol x}_{k-1} \circ {\boldsymbol y}_k $ satisfies that for every $ i \in [1,|{\boldsymbol x}| - \ell],$ $$ d_H({\boldsymbol x}_{i,\ell},{\boldsymbol y}_k) \ge d. $$ \end{claim} \begin{IEEEproof} According to the construction of $ B $, for every $i \in [1,|{\boldsymbol x}_{k-1}| - \ell + 1]$ the ball ${\cal B}_{d-1}({\boldsymbol x}_{i,\ell}) $ is contained in $ B $ and since $ {\boldsymbol y}_k \not \in B $ then $d_H({\boldsymbol x}_{i,\ell},{\boldsymbol y}_k) \ge d$. Otherwise, let $ i \in [|{\boldsymbol x}_{k-1}|-\ell+2, |{\boldsymbol x}_{k-1}|] $ and assume to the contrary that $d_H({\boldsymbol x}_{i,\ell},{\boldsymbol y}_k) < d$, and thus $$ d_H(\text{Pref}_\ell({\boldsymbol x}_{i,|{\boldsymbol x}|-i+1} \circ {\boldsymbol y}_k), {\boldsymbol y}_k) < d. $$ However, it follows that $ {\boldsymbol y}_k \in {\cal C}{\cal B}_{\ell,d-1}({\boldsymbol x}_{i,|{\boldsymbol x}|-i+1}) $ which is a contradiction. Since $ |{\boldsymbol x}| -\ell = |{\boldsymbol x}_{k-1}| $, this concludes the proof. \end{IEEEproof} \begin{claim}\label{lem:LDdtp-exp-expand} For every iteration $ k \in [m] $, the string $ {\boldsymbol x}_{k} $ is $ (L,d) $-substring distant.
\end{claim} \begin{IEEEproof} We prove the lemma by induction over the values of $ k $. For the base case $ k = 1 $, let $ {\boldsymbol x} = {\boldsymbol x}_0 \circ {\boldsymbol y}_1 $ and assume to the contrary that there are two substrings $ {\boldsymbol x}_{i,L}, {\boldsymbol x}_{j,L} $ of Hamming distance less than $ d $. Since $ {\boldsymbol x}_0 $ is $ (L,d) $-substring distant from Claim~\ref{lem:LDdtp-elim} Statement (1), we only need to consider the cases where $ {\boldsymbol x}_{j,L} $ overlaps with $ {\boldsymbol y}_1 $. Therefore, using length considerations, $ {\boldsymbol x}_{j,L} $ contains Suff$_{\ell}({\boldsymbol x}_0) = 0^{\ell-|{\boldsymbol u}_d|}\circ {\boldsymbol u}_d $ at some position $ r \in [\ell] $. It follows that $ d_H({\boldsymbol x}_{i+r,\ell}, \text{Suff}_{\ell}({\boldsymbol x}_0)) < d $ which contradicts Claim~\ref{lem:LDdtp-elim} Statement (3). Next, we assume the lemma holds for $ {\boldsymbol x}_{k-1} $ with $ k \ge 1 $ and prove its correctness for $ {\boldsymbol x} = {\boldsymbol x}_{k-1} \circ {\boldsymbol y}_k $. Assume to the contrary that $ {\boldsymbol x}_{i,L}, {\boldsymbol x}_{j,L} $ satisfy $ d_H( {\boldsymbol x}_{i,L}, {\boldsymbol x}_{j,L}) < d $. Using the induction assumption, we only need to consider the values of $ i,j $ where $ {\boldsymbol x}_{j,L} $ overlaps with $ {\boldsymbol y}_{k} $. Thus, it follows that $ {\boldsymbol x}_{j,L} $ contains the substring $ {\boldsymbol y}_{k-1} $, at some position $ r \in [\ell] $. However, this implies that $ {\boldsymbol x}_{i+r,\ell} $ is a substring of $ {\boldsymbol x}_{k} $ that satisfies $ d_H({\boldsymbol x}_{i+r,\ell}, {\boldsymbol y}_k) < d $ while $ i + r \le |{\boldsymbol x}| - \ell $ which is a contradiction to Claim~\ref{lem:LDdtp-exp-suffix}. \end{IEEEproof} \begin{theorem} Algorithm~\ref{alg:ld-dist-encode1} successfully returns a string from $ {\cal S}_n(L,d) $.
\end{theorem} \begin{IEEEproof} If the condition in Step~\ref{step:after-elimination1} holds then according to Claim~\ref{lem:LDdtp-elim} Statement (1), $ {\boldsymbol x} $ is $ (L,d) $-substring distant. Since every substring of $ {\boldsymbol x} $ is $ (L,d) $-substring distant as well, the algorithm returns in this case a string that belongs to $ {\cal S}_n(L,d) $. Otherwise, from Claim~\ref{lem:LDdtp-exp-expand}, the algorithm returns an $ (L,d) $-substring distant string of length $ n $ at Step~\ref{step:LDdtp-exp-ret}. \end{IEEEproof} The decoding scheme receives $ {\boldsymbol x} $, which is an output of Algorithm~\ref{alg:ld-dist-encode1}, and outputs $ {\boldsymbol w} \in \Sigma^{n-1} $. First, we look for the leftmost occurrence of the substring $ {\boldsymbol v} = 0 \circ 1^d \circ 0^{\ell-|{\boldsymbol u}_d|} \circ {\boldsymbol u}_d $ in $ {\boldsymbol x} $. According to Claim~\ref{lem:LDdtp-elim} Statement (2), the part of the string to the right of this substring was added during the expansion procedure and therefore we remove it from $ {\boldsymbol x} $. If the substring $ {\boldsymbol v} $ is not present, we look for its longest prefix that appears as a suffix of $ {\boldsymbol x} $. The substring we find is part of the substring $ {\boldsymbol v} $ that the algorithm added at Step~\ref{step:LDdtp-init}, since the output of the algorithm is longer than the input. Thus, we can complete this substring to ${\boldsymbol v} $ and recover $ {\boldsymbol x} $ as it was after the elimination procedure. Next, we iteratively invert the elimination procedure. Using the first three entries of $ {\boldsymbol x} $, we identify the last step at which the data was encoded. If we encoded the data at Step~\ref{step:LDdtp-elim-if12-state} or Step~\ref{step:LDdtp-elim-if13-state}, we decode $ i,j $ from the function $ b $, and recover $ {\boldsymbol x}_{j,L} $ using $ {\boldsymbol x}_{i,L} $ and the encoded distance.
If $ j \le i + \ell $, this has to be done carefully, by restoring $ j-i $ entries of $ {\boldsymbol x}_{j,L} $ at a time. If we encoded the data at Step~\ref{step:LDdtp-elim-if22-state} or Step~\ref{step:LDdtp-elim-if23-state}, we decode from the outputs of the functions $ b $ and $ EncDist_{L,d-1} $ the position $ i $ and the substring $ {\boldsymbol x}_{i,\ell} $, and insert the substring at position $ i $. We repeat this process until we obtain a string of length $ n + \ell + d $, and return its $ (n-1)$-prefix as $ {\boldsymbol w} $. \subsection{Motivation} We describe next a partial model of the reconstruction process of a large DNA string, $ {\boldsymbol x} $, by reading from a multispectrum of its DNA substrings, $ S_L({\boldsymbol x}) $. At each read, we pick uniformly at random a single substring from the multispectrum. Denote the number of reads as $ M = \alpha \cdot n $, and denote by $ P_0 $ the probability that we read all substrings of $ S_L({\boldsymbol x}) $. We would like to estimate the minimal value of $ \alpha $ such that $ P_0 \ge 1 - \epsilon $ for some $ \epsilon < 1 $. Let $ P_\alpha $ denote the probability that a single substring is not read at all. Then, we have \begin{align}\label{eq:p0} 1 - P_0 \le n \cdot P_\alpha \end{align} The probability $ P_\alpha $ can be approximated using \begin{align}\label{eq:palpha} P_\alpha = \bigg(1 - \frac{1}{n }\bigg) ^{M} = \bigg(1 - \frac{1}{n}\bigg)^{\alpha n } \approx e^{-\alpha} \end{align} When plugging this into (\ref{eq:p0}), we obtain that in order to have $ P_0 \ge 1 - \epsilon $ it suffices that \[ n e^{-\alpha} \le \epsilon \] By taking $ \ln $ of both sides and rearranging we deduce \[ \alpha \ge \ln n - \ln \epsilon \] Next, we would like to estimate the number of uniformly randomized reads necessary to read a multiset $ U \in {\cal B}_{L,t}({\boldsymbol x})$ for some integer $ t $.
Let $ P_t $ denote the probability of a successful read of such a $ U $, where $ 1 - \epsilon $ is the target probability as before. An unsuccessful read occurs when there are at least $ t+1 $ substrings that we were not able to read. Therefore we can bound \begin{align}\label{eq:pt} 1 - P_t \le \binom{n}{t+1} \cdot P_\alpha^{t+1} \end{align} By bounding $ \binom{n}{t+1} \le n^{t+1} $, and by plugging $ P_\alpha $ from (\ref{eq:palpha}), we obtain that it suffices that \[ n^{t+1} \cdot e^{-(t+1)\alpha} \le \epsilon \] and by taking $ \ln $ of both sides and rearranging we obtain \[ \alpha \ge \ln n - \frac{\ln \epsilon}{t+1} \] \end{comment} \subsection{Reconstruction Constraints} The goal of this subsection is to construct $t$-losses $L$-reconstructible strings. These will be given by strings that satisfy a few constraints, given in the next definition. For simplicity, we consider here only the binary case, so $\Sigma=\{0,1\}$. For the rest of this section, we denote the integers $ \ell_1 = L - \floorenv{t/3} -1, \ell_2 = L - \ceilenv{2t/3} -1, \ell_3 = L-t-1 $ and the sets $ I_2 = [n- \ell_2 - t+1,n- \ell_2 + 1], I_3 = [n- \ell_3 - t+1,n- \ell_3 + 1]$. \begin{definition}\label{def:reconstruction-constratints} A string ${\boldsymbol x}\in\Sigma^n$ is said to satisfy the \textbf{$(n,L,t)$-lossy reconstruction (LREC) constraints} if it fulfills the following three constraints. \begin{enumerate} \item $ {\boldsymbol x}$ is an $ \ell_1$-substring unique string. \item The first and last $ t +1 $ length-$ \ell_2$ substrings are not identical to any other length-$ \ell_2$ substring. Namely, for all $ i \in [t \scalebox{0.75}[1.0]{$+$} 1], j \in [n- \ell_2 + 1]$ with $ i \neq j $, it holds that $ {\boldsymbol x}_{i,\ell_2} \neq {\boldsymbol x}_{j,\ell_2}$, and for all $i \in [n-\ell_2+1], j \in I_2$ with $ i \neq j $, it holds that $ {\boldsymbol x}_{i,\ell_2} \neq {\boldsymbol x}_{j,\ell_2}$.
\item The first $ t + 1 $ length-$\ell_3$ substrings are not identical to the last $ t + 1 $ length-$\ell_3$ substrings. Namely, for all $ i \in [t \scalebox{0.75}[1.0]{$+$} 1], j \in I_3 $, ${\boldsymbol x}_{i,\ell_3} \neq {\boldsymbol x}_{j,\ell_3}$. \end{enumerate} \end{definition} According to~\cite{EliGabMedYaa19}, Constraint 1 imposes that $\ell_1= L-\floorenv{t/3}-1 > \lceil \log (n) \rceil $. Additionally, Constraint 3 requires that $\ell_3= L-t-1 > 0 $. Therefore, the value of $t$ necessarily satisfies \begin{align}\label{eq:t-constraint} t < \min\{L-1,3(L- \lceil \log (n) \rceil - 1)\}. \end{align} For $n,L,t$, denote by ${\cal D}_n(L,t)$ the set of all strings that satisfy the $(n,L,t)$-LREC constraints and let $D_n(L,t) = |{\cal D}_n(L,t)|$. Note that by definition, if a string satisfies the $(n,L,t)$-LREC constraints it satisfies the $(n,L,t')$-LREC constraints for all $t'\leq t$, that is, ${\cal D}_n(L,t) \subseteq {\cal D}_n(L,t')$. \begin{example} Let $ n, L, {\boldsymbol x} $ be as in Example~\ref{ex2}. The string $ {\boldsymbol x} $ satisfies the $ (n,L,4) $-LREC constraints. The first constraint follows from the fact that $ {\boldsymbol x} $ is $ 6 $-substring unique, and it is possible to verify that the two other constraints are satisfied as well. Therefore, $ {\boldsymbol x} \in {\cal D}_n(L,4) $ and also $ {\boldsymbol x} \in {\cal D}_n(L,3) $. \end{example} In \cite{GabMil18}, the authors focused on a type of errors which corresponds to the occurrence of bursts of substring losses. They identified a lossy multispectrum $ U \subseteq S_L({\boldsymbol x}) $ to have a $ G $-maximal coverage gap if $ G $ is the maximum number of consecutive substrings of $ S_L({\boldsymbol x}) $ that are not included in $ U $. Based on this characterization, they showed that if $ {\boldsymbol x} $ is $ (L-G-1) $-substring unique it is reconstructible from such a lossy multispectrum $U$.
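The three LREC constraints of the definition above can also be checked programmatically. The following Python sketch (hypothetical helper names, with the 1-based indexing used in the text) tests whether a given binary string satisfies them:

```python
# Sketch: check the (n, L, t)-LREC constraints for a binary string x
# given as a Python string of '0'/'1'. Hypothetical helpers, 1-based indexing.
from math import floor, ceil

def substr(x, i, l):
    # x_{i,l} = x_i ... x_{i+l-1} (1-based)
    return x[i - 1:i - 1 + l]

def is_substring_unique(x, l):
    subs = [substr(x, i, l) for i in range(1, len(x) - l + 2)]
    return len(subs) == len(set(subs))

def satisfies_lrec(x, L, t):
    n = len(x)
    l1 = L - floor(t / 3) - 1
    l2 = L - ceil(2 * t / 3) - 1
    l3 = L - t - 1
    I2 = range(n - l2 - t + 1, n - l2 + 2)   # the index set [n-l2-t+1, n-l2+1]
    I3 = range(n - l3 - t + 1, n - l3 + 2)   # the index set [n-l3-t+1, n-l3+1]
    # Constraint 1: x is l1-substring unique
    if not is_substring_unique(x, l1):
        return False
    # Constraint 2: first t+1 and last t+1 length-l2 substrings
    # differ from every other length-l2 substring
    for i in list(range(1, t + 2)) + list(I2):
        for j in range(1, n - l2 + 2):
            if i != j and substr(x, i, l2) == substr(x, j, l2):
                return False
    # Constraint 3: first t+1 length-l3 substrings differ from the last t+1
    for i in range(1, t + 2):
        for j in I3:
            if substr(x, i, l3) == substr(x, j, l3):
                return False
    return True
```

For instance, with the illustrative parameters $n=10$, $L=8$, $t=1$ (not taken from the paper), the string $0000101101$ satisfies the constraints, while the all-zero string of the same length violates Constraint 1.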
When applying the coverage-gap constraint of \cite{GabMil18} to our problem, assuming that $ U \in {\cal B}_{L,t}({\boldsymbol x}) $, it is necessary to take $ G = t $ since all the losses can occur consecutively. Based on the results of \cite{EliGabMedYaa19}, in order to construct a rate-1 code of $ (L,t) $-reconstructible strings for given $ n $ and $ t $, the construction proposed in \cite{GabMil18} requires that $ L > \lceil a\log (n) \rceil +t $ for some $a>1$. It will be shown in Section~\ref{chp:cardinality} that the $ (n,L,t) $-LREC constraints compose a rate-1 code for values of $L$ that satisfy $ L > \lceil a\log (n) \rceil + \lfloor t/3 \rfloor $, where $a>1+b/3$ and $t=\lceil b\log (n) \rceil + o(\log (n))$ for $ b < 3 $. Hence, for these parameters, the construction proposed in this paper imposes a weaker constraint on the value of $L$ than the construction proposed in \cite{GabMil18}. \subsection{Reconstruction Algorithm} Our next goal is to show that every string which satisfies the $(n,L,t)$-LREC constraints is an $(L,t)$-reconstructible string, that is, its maximal reconstructible substring can be uniquely decoded even if up to $t$ substrings are not read. Namely, we prove the following theorem. \begin{theorem}\label{th:LREC reconstruction} Every string ${\boldsymbol x} \in {\cal D}_n(L,t)$ is an $(L,t)$-reconstructible string. \end{theorem} The proof of Theorem~\ref{th:LREC reconstruction} is given by an explicit decoding algorithm which receives a multiset $U\in {\cal B}_{L,t}({\boldsymbol x})$ for some ${\boldsymbol x}\in {\cal D}_n(L,t)$. First, we present in Algorithm~\ref{alg:Stitch} an auxiliary procedure, called the \emph{Stitching Algorithm}, which receives two inputs: 1) a set $ A $ of substrings that we aim to stitch, and 2) $ \rho \le t $, a parameter that determines the minimum overlap size required for two substrings to be stitched together. The stitching algorithm is based on iterative stitching steps and is composed of three nested loops.
In the innermost loop, two substrings are stitched if the suffix of the first is identical to the prefix of the second. This will later indicate that these substrings originated from overlapping positions in the input string. The middle loop constructs continuous substrings of $ U $ by finding a prefix of such a substring and repeatedly applying the inner loop in order to correctly concatenate more bits to it. The outer loop iterates over $ k = 0, \dots, \rho $ and at every iteration we bridge gaps that were created by losses of $ k $ consecutive substrings. This is accomplished by reducing the substring length used in the suffix-prefix matching condition of the inner loop. The stitching algorithm returns a set of continuous substrings reconstructed from $ U $, whose size is smaller than the size of the input set, or equal to it if no stitching occurred. We say that an operation of the stitching algorithm is \emph{successful} if the output set size is strictly smaller than the input set size.
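To make the three-loop structure concrete, the following is a small Python sketch of the stitching procedure (hypothetical names; the read length $L$ is passed explicitly, and the concatenation always appends the entire non-overlapping tail of the matched string, which for a length-$L$ string is its $(k+1)$-suffix):

```python
# Sketch of the stitching procedure (illustrative Python rendering).
# A: list of binary strings to stitch; rho: maximal bridged gap; L: read length.
def stitch(A, rho, L):
    A = list(A)
    for k in range(rho + 1):
        overlap = L - k - 1
        B = []
        while A:
            # pick a string whose overlap-prefix matches no other string's
            # overlap-suffix, i.e. nothing can be stitched to its left
            # (such a pick exists for valid inputs)
            idx = next(i for i, w in enumerate(A)
                       if all(w[:overlap] != w2[-overlap:]
                              for j, w2 in enumerate(A) if j != i))
            w = A.pop(idx)
            # repeatedly stitch to the right any string whose overlap-prefix
            # matches the current suffix, appending its non-overlapping tail
            while True:
                nxt = next((j for j, w2 in enumerate(A)
                            if w[-overlap:] == w2[:overlap]), None)
                if nxt is None:
                    break
                w += A.pop(nxt)[overlap:]
            B.append(w)
        A = B
    return B
```

For instance, stitching the six length-8 reads $\{10000011, 00000111, 00001110, 01110111, 11101111, 11011111\}$ with $\rho = 1$ yields the two strings $1000001110$ and $0111011111$.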
\begin{algorithm} \caption{Stitch($ A,\rho $)}\label{alg:Stitch} \begin{algorithmic}[1] \For{$ k = 0, \dots, \rho $ } \State{$ B = \emptyset $} \While{$ A \neq \emptyset $} \State{pick $ {\boldsymbol w} \in A $ such that for every other $ {\boldsymbol w}' \in A $, Pref$_{L-k-1}({\boldsymbol w}) \neq $ Suff$_{L-k-1}({\boldsymbol w}') $ }\label{step:pick-unique} \While{there exists $ {\boldsymbol w}' \in A $ such that Suff$_{L-k-1}({\boldsymbol w}) = $ Pref$_{L-k-1}({\boldsymbol w}') $} \label{step:match} \State{set $ {\boldsymbol w} = {\boldsymbol w} \circ $ Suff$_{|{\boldsymbol w}'|-(L-k-1)}({\boldsymbol w}') $}\label{step:concat} \State{set $ A = A\setminus \{{\boldsymbol w}'\}$}\label{step:A_update} \EndWhile \State{$ B = B \cup \{{\boldsymbol w}\} $} \EndWhile \State{$ A = B $} \EndFor \State{\textbf{return} $ B $} \end{algorithmic} \end{algorithm} Algorithm~\ref{alg:reconstruct-losses}, called the \emph{Reconstruction Algorithm}, receives a $t$-losses $ L $-multispectrum $ U $ for some $ {\boldsymbol x} \in {\cal D}_n(L,t)$ and uses the stitching algorithm to reconstruct ${\mathbf W}_1(U)$, the maximal reconstructible substring of $U $. In case the set returned by the reconstruction algorithm consists of a single string, we assume that the output is the string itself (i.e., not a set). \begin{algorithm} \caption{Reconstruct($U,t$)}\label{alg:reconstruct-losses} \begin{algorithmic}[1] \Require $ U\in {\cal B}_{L,t}({\boldsymbol x}) $ for some $ {\boldsymbol x} \in {\cal D}_n(L,t)$ \Ensure $ {\mathbf W}_1(U) $, the maximal reconstructible substring of $ U$ \State{Invoke $ A_0 =$ Stitch$(U,\floorenv{t/3}) $. } \label{step:rec-loss} \State{If $ |A_0| = 1$ and $A_0=\{{\boldsymbol y}\}$: return ${\boldsymbol y}$.} \label{step:rec-loss1} \State{If $|A_0|=2$ and $A_0=\{{\boldsymbol y}_1,{\boldsymbol y}_2\}$: return Stitch$(A_0,t) $.
}\label{step:rec-loss-2} \State{If $ |A_0| = 3 $ and $A_0=\{{\boldsymbol y}_1,{\boldsymbol y}_2,{\boldsymbol y}_3\}$: for $i=1,2,3$ invoke $ A_i =$ Stitch$(A_0 \setminus \{{\boldsymbol y}_i\},\ceilenv{2t/3})$ and if successful invoke $A_i'=$ Stitch$(A_i \cup \{{\boldsymbol y}_i\},\ceilenv{2t/3}) $. If successful again, return $A_i'$.}\label{step:rec-loss-3} \end{algorithmic} \end{algorithm} Assume that $U= \{{\boldsymbol x}_{i_1,L},\ldots,{\boldsymbol x}_{i_m,L}\}$, where $1\leq i_1<i_2<\cdots < i_m\leq n-L+1$. Let $ A_0 =$ Stitch$(U,\floorenv{t/3}) $ be the resulting set after Step~\ref{step:rec-loss}, and denote $A_0= \{{\boldsymbol y}_1,\ldots,{\boldsymbol y}_r\}$. Additionally, denote by $ B_k $ the set $ B $ after the $ k $-th iteration of the for loop of Algorithm~\ref{alg:Stitch}. Note that every substring in $U$ is a substring of \emph{exactly} one string from $A_0$, and that $ A_0 = B_{\floorenv{t/3}} $. The next two examples demonstrate how Algorithms~\ref{alg:Stitch} and~\ref{alg:reconstruct-losses} operate. \begin{example} Let $ n, L, {\boldsymbol x},U_1 $ be as in Example~\ref{ex2}, so that $ U_1 \in {\cal B}_{L,t}({\boldsymbol x})$, where $t=3$. Assume that we invoke Reconstruct$(U_1,t)$. First, the algorithm invokes $A_0=$ Stitch($ U_1, 1$). At the first iteration of the for loop, where $k=0$, assume the algorithm picks $ {\boldsymbol x}_{2,8} = 10000011 $ and stitches to it $ {\boldsymbol x}_{3,8} = 00000111 $ followed by $ {\boldsymbol x}_{4,8} = 00001110 $. Next, the algorithm picks $ {\boldsymbol x}_{7,8} = 01110111 $ and stitches to it $ {\boldsymbol x}_{8,8} = 11101111$ followed by $ {\boldsymbol x}_{9,8} = 11011111 $. Thus, we have at the end of this iteration \[ B_0 = \{{\boldsymbol x}_{2,10}, {\boldsymbol x}_{7,10}\} = \{ 1000001110, 0111011111 \}. \] No stitching occurs at the second iteration, where $k=1$, and thus $ A_0 = B_1 = B_0 $ is the output of the stitching algorithm.
Since $ |A_0|=2 $, we next execute Stitch($ A_0, 3 $) in Step~\ref{step:rec-loss-2}. Then, the two substrings of $A_0$ are stitched at iteration $ k=2 $, since Suff$ _5({\boldsymbol x}_{2,10}) = $ Pref$ _5({\boldsymbol x}_{7,10}) $. Eventually, the string \[ {\boldsymbol x}_{2,15} = 100000111011111 = {\mathbf W}_1(U_1) \] is returned as expected. \end{example} \begin{example} Let $ n, L, {\boldsymbol x}, U_2 $ be as in Example~\ref{ex2}, so that $ U_2 \in {\cal B}_{L,t}({\boldsymbol x})$, where $t=4$. Invoking Stitch($ U_2, 1$) returns \begin{align*} A_0 = B_1=B_0 =&\{{\boldsymbol x}_{1,8}, {\boldsymbol x}_{4,8}, {\boldsymbol x}_{7,10} \} \\=&\{ 10000011,00011101,0111011111 \}. \end{align*} Since $ |A_0|=3 $, the reconstruction algorithm executes Step~\ref{step:rec-loss-3}. Assume that ${\boldsymbol y}_1 = {\boldsymbol x}_{4,8}, {\boldsymbol y}_2 = {\boldsymbol x}_{1,8}$, and ${\boldsymbol y}_3 = {\boldsymbol x}_{7,10}$. For $i=1$, the algorithm receives that Stitch($ A_0 \setminus \{{\boldsymbol y}_1\}, 3$) $=A_0 \setminus \{{\boldsymbol y}_1\}$, which is an unsuccessful result. However, for $i=3$, when invoking Stitch($ A_0 \setminus \{{\boldsymbol y}_3\}, 3$), the algorithm stitches the substrings ${\boldsymbol y}_1,{\boldsymbol y}_2$ at iteration $ k=2 $ using Suff$ _5({\boldsymbol x}_{1,8}) = $ Pref$ _5({\boldsymbol x}_{4,8}) $, and returns \[ A_3 = \{{\boldsymbol x}_{1,11}\} = \{ 10000011101 \}. \] Lastly, the algorithm applies Stitch($ A_3 \cup \{{\boldsymbol y}_3\}, 3$) and stitches again, at iteration $ k=1 $, to receive $ A_3' = {\boldsymbol x} = {\mathbf W}_1(U_2) $ as the final result. \end{example} The correctness of Algorithms~\ref{alg:Stitch} and~\ref{alg:reconstruct-losses} is proved in the next few claims. \begin{claim}\label{claim:same-substring} For all $1\leq j\leq m-1$, if $i_{j+1} - i_j \leq \floorenv{t/3} + 1$, then ${\boldsymbol x}_{i_j,L}$ and ${\boldsymbol x}_{i_{j+1},L}$ are substrings of the same string in $A_0$.
\end{claim} \begin{IEEEproof} First, from Constraint 1 it follows that \begin{align}\label{eq:cons1} \forall k \in [0,\floorenv{t/3}], {\boldsymbol x} \text{ is } (L-k-1) \text{-substring unique.} \end{align} We first claim that for every $ k \le k' = i_{j+1} - i_j - 1 $, ${\boldsymbol x}_{i_j,L},{\boldsymbol x}_{i_{j+1},L}$ are not substrings of the same string in $B_k$ and furthermore, ${\boldsymbol x}_{i_j,L}$ is a suffix of some substring $ {\boldsymbol w}_{1} \in B_k $, while ${\boldsymbol x}_{i_{j+1},L}$ is a prefix of another substring $ {\boldsymbol w}_{2} \in B_k $. To see this, assume the contrary and let $ k < k' $ be the first iteration where ${\boldsymbol x}_{i_j,L}$ is not a suffix of some substring in $ B_k $. That is, there exists $ {\boldsymbol w} \in B_{k-1}$ where $ \text{Suff}_{L}({\boldsymbol w})={\boldsymbol x}_{i_j,L} $, that is stitched to the left of another ${\boldsymbol w}' \in B_{k-1} $, which satisfies $ \text{Pref}_{L}({\boldsymbol w}')={\boldsymbol x}_{i_g,L} $ for another substring $ {\boldsymbol x}_{i_g,L} \in U $. However, it follows from (\ref{eq:cons1}) that $ i_j < i_g < i_{j+1} $ and therefore such a substring cannot exist in $ U $. In particular, ${\boldsymbol x}_{i_j,L},{\boldsymbol x}_{i_{j+1},L}$ are not substrings of the same string in $B_k$. Thus, at the $k'$-th iteration, the substrings are stitched since \begin{align*} \text{Suff}_{L-k'-1}({\boldsymbol w}_{1}) &= {\boldsymbol x}_{i_j + k'+1,L-k'-1} ={\boldsymbol x}_{i_{j+1},L-k'-1} \\&= \text{Pref}_{L-k'-1}({\boldsymbol w}_{2}). \end{align*} From (\ref{eq:cons1}), any other substring $ {\boldsymbol w}_{3} \in B_k \setminus~\{{\boldsymbol w}_{1},{\boldsymbol w}_{2}\}$ satisfies $ \text{Suff}_{L-k'-1}({\boldsymbol w}_{1})$ $\neq \text{Pref}_{L-k'-1}({\boldsymbol w}_{3}) $ and $ \text{Pref}_{L-k'-1}({\boldsymbol w}_{2}) \neq \text{Suff}_{L-k'-1}({\boldsymbol w}_{3}) $. 
\end{IEEEproof} We say that a multispectrum $U$ experiences a \emph{burst of losses} of length $ h $ at index $ i \le n-L-t+1 $ if $ {\boldsymbol x}_{i,L}, \ldots, {\boldsymbol x}_{i+h-1,L} \not \in U $. \begin{claim}\label{claim:first-op} The set $A_0 = $ Stitch$(U,\floorenv{t/3}) $ satisfies $ |A_0| \le 3 $. \end{claim} \begin{IEEEproof} Following Claim~\ref{claim:same-substring}, we will show that there are three possible cases for the size of the set $A_0$. First, if there are no bursts of losses longer than $\floorenv{t/3}$, then all substrings of $ U $ are contained in a single string of $ A_0 $, thus $ |A_0| = 1 $. Second, if there is a single burst of losses longer than $\floorenv{t/3}$, then the substrings of $ U $ are divided into two different strings of $ A_0$, thus $ |A_0| = 2 $. Similarly, in the third case there are two bursts of losses longer than $\floorenv{t/3}$, and then $ |A_0| = 3 $. Other cases are not possible, since the number of losses is at most $ t $. \end{IEEEproof} \begin{claim}\label{claim:second-op} At Step~\ref{step:rec-loss-2} of Algorithm~\ref{alg:reconstruct-losses}, the result of Stitch$(A_0,t)$ is ${\mathbf W}_1(U)$. \end{claim} \begin{IEEEproof} Let $ A_0 = \{{\boldsymbol y}_1,{\boldsymbol y}_2\} $, where Suff$_L({\boldsymbol y}_1) = {\boldsymbol x}_{i_j,L} $ and Pref$_L({\boldsymbol y}_2) = {\boldsymbol x}_{i_{j+1},L} $. It follows that Pref$_L({\boldsymbol y}_1) = {\boldsymbol x}_{i_1,L} $, where $i_1\leq t+1$. Therefore, the prefix Pref$_{L-t-1}({\boldsymbol y}_1)$ is one of the first $ t+1 $ length-$(L-t-1)$ substrings of $ {\boldsymbol x} $. Similarly, Suff$_L({\boldsymbol y}_2) = {\boldsymbol x}_{i_m,L} $, where $i_m\geq n-L-t+1$, and the suffix Suff$_{L-t-1}({\boldsymbol y}_2)$ is one of the last $ t+1 $ length-$(L-t-1)$ substrings of $ {\boldsymbol x} $. Thus, from Constraint 3, for every $ k \in [0,t] $, Pref$_{L-k-1}({\boldsymbol y}_1) \neq $ Suff$_{L-k-1}({\boldsymbol y}_2)$.
Therefore, it is not possible to stitch the substring ${\boldsymbol y}_1$ to the right of $ {\boldsymbol y}_2$. Since there are at most $ t $ losses, it follows that $ i_{j+1} - i_j - 1 \le t $. Hence, these substrings are stitched correctly to a single string at iteration $k' = i_{j+1} - i_j - 1$, which results in the string ${\mathbf W}_1(U)$. \end{IEEEproof} \begin{claim}\label{claim:third-op} At Step~\ref{step:rec-loss-3} of Algorithm~\ref{alg:reconstruct-losses}, there exists a substring $ {\boldsymbol y}_i \in A_0 $ such that both operations of the stitching algorithm are successful. For such a $ {\boldsymbol y}_i$, the result of this step is the string ${\mathbf W}_1(U)$. \end{claim} \begin{IEEEproof} Let $ A_0 = \{{\boldsymbol y}_1, {\boldsymbol y}_2, {\boldsymbol y}_3\} $ where Suff$_L({\boldsymbol y}_1) = {\boldsymbol x}_{i_j,L}$, Pref$_L({\boldsymbol y}_2) = {\boldsymbol x}_{i_{j+1},L} $, Suff$_L({\boldsymbol y}_2) = {\boldsymbol x}_{i_h,L}$, and Pref$_L({\boldsymbol y}_3) = {\boldsymbol x}_{i_{h+1},L} $. Since the number of losses is at most $t$, it follows that $ \floorenv{t/3} < i_{j+1} - i_j -1 \le \ceilenv{2t/3} $ and $ \floorenv{t/3} < i_{h+1} - i_h - 1 \le \ceilenv{2t/3} $. Similarly to the proof of Claim~\ref{claim:second-op}, but now according to Constraint 2, for every $ k \in [0,\ceilenv{2t/3}] $, \begin{align}\label{eq:cons2_1} \text{Pref}_{L-k-1}({\boldsymbol y}_1) \neq \text{ Suff}_{L-k-1}({\boldsymbol y}_s) \text{ for } s \in [2,3], \end{align} \begin{align}\label{eq:cons2_2} \text{Suff}_{L-k-1}({\boldsymbol y}_3) \neq \text{ Pref}_{L-k-1}({\boldsymbol y}_s) \text{ for } s \in [1,2].
\end{align} Thus, if we pick $ {\boldsymbol y}_1 $, from (\ref{eq:cons2_2}) we can only stitch $ {\boldsymbol y}_2 $ to the left of $ {\boldsymbol y}_3 $ at Stitch$(A_0 \setminus \{{\boldsymbol y}_1\},\ceilenv{2t/3}) $ and from (\ref{eq:cons2_1}) we stitch the result to the right of $ {\boldsymbol y}_1 $ at Stitch$(A_1 \cup \{{\boldsymbol y}_1\},\ceilenv{2t/3}) $. The result is similar if we initially pick $ {\boldsymbol y}_3 $. If we pick ${\boldsymbol y}_2 $, it follows that it is only possible to incorrectly stitch $ {\boldsymbol y}_1$ to the left of ${\boldsymbol y}_3 $ at Stitch$(A_0 \setminus \{{\boldsymbol y}_2\},\ceilenv{2t/3}) $. However, it is ensured from (\ref{eq:cons2_1}) and (\ref{eq:cons2_2}) that the resulting string in this case cannot be stitched to $ {\boldsymbol y}_2 $ at the second operation of the stitching algorithm. Thus, when both operations are successful, the result contains a single string which contains all the substrings of $ U $. \end{IEEEproof} \begin{lemma}\label{lem:alg-reconstruct-incomplete} Algorithm~\ref{alg:reconstruct-losses} uniquely reconstructs ${\mathbf W}_1(U)$. \end{lemma} \begin{IEEEproof} Following Claim~\ref{claim:first-op}, there are three possible cases for the size of $A_0$. From Claims~\ref{claim:same-substring}, \ref{claim:second-op}, and \ref{claim:third-op}, Algorithm~\ref{alg:reconstruct-losses} returns a single string of which all the elements of $ U $ are substrings, that is, the maximal reconstructible substring of $ U $. \end{IEEEproof} Lemma~\ref{lem:alg-reconstruct-incomplete} also completes the proof of Theorem~\ref{th:LREC reconstruction}. \subsection{Cardinality Analysis}\label{chp:cardinality} Our next goal is to estimate the value of $D_n(L,t)$ for some specific parameters of $n,L,t$. Our approach is based on the result from~\cite{EliGabMedYaa19} which claims that the asymptotic rate of the set ${\cal S}_{n,2}(L)$ approaches 1, when $L = \ceilenv{a \log (n)}$ and $a>1$.
Building upon this result, for a given value of $t$ that satisfies $t=\lceil b\log(n) \rceil +o(\log(n))$ for some $0\leq b < 3$, we show how to choose the value of $L$ such that the first two constraints of the $ (n,L,t) $-LREC constraints hold. Then, it is shown that the third constraint does not affect the rate result. This result is proved in the following theorem. We note that for simplicity, in many places in the rest of this paper we drop the floor and ceiling notation. The effect of these roundings is negligible and affects neither the asymptotic rate results nor the correctness of the algorithms. \begin{theorem}\label{th:D} If $t=\lceil b\log(n) \rceil +o(\log(n))$ for some $0\leq b < 3$ and $ L = \ceilenv{a \log (n)} + \floorenv{t/3} + 1 $, where $a > 1+b/3$, then it holds that $$\lim_{n\rightarrow\infty}\frac{\log_2(D_{n}(L,t))}{n} =1.$$ \end{theorem} \begin{IEEEproof} For the values of $t$ and $L$ stated in the theorem it holds that $ L = (a +b/3) \log (n) + o(\log(n))$, $\ell_1 = a \log (n) + o(\log(n))$, and $\ell_2 = (a-b/3) \log (n) + o(\log(n))$. According to~\cite{EliGabMedYaa19}, the rate of the set ${\cal S}_{n,2}(L')$ approaches 1 when $ L' = \ceilenv{a' \log (n)}$ for all $a'>1$, that is, \begin{equation}\label{eq:cap} \lim_{n\rightarrow\infty}\frac{\log_2(S_{n}(L'))}{n} =1. \end{equation} The outline of the proof works as follows. We consider the set ${\cal S}_{n',2}(\ell_2)$, where $n'=n-(t+\ell_3)$. According to (\ref{eq:cap}), it holds that $$\lim_{n\rightarrow\infty}\frac{\log_2(S_{n-(t+\ell_3)}(\ell_2))}{n-(t+\ell_3)} =\lim_{n\rightarrow\infty}\frac{\log_2(S_{n-(t+\ell_3)}(\ell_2))}{n} =1.$$ Next, we will show that $D_n(L,t) \geq S_{n-(t+\ell_3)}(\ell_2)$ and this will conclude the proof. In order to accomplish this result, we will show that every string in ${\cal S}_{n',2}(\ell_2)$ can be extended into a length-$n$ string in ${\cal D}_n(L,t)$.
Let ${\boldsymbol w}\in{\cal S}_{n',2}(\ell_2)$, so it is an $\ell_2$-substring unique string. We show how to find a string ${\boldsymbol u}\in\Sigma^{t+\ell_3}$ such that ${\boldsymbol w}\circ {\boldsymbol u}\in{\cal D}_n(L,t)$, i.e., it satisfies all three $ (n,L,t) $-LREC constraints. In fact, we will show how to find ${\boldsymbol u}$ such that ${\boldsymbol w}\circ {\boldsymbol u}$ is $\ell_2$-substring unique and it satisfies the third constraint of the $ (n,L,t) $-LREC constraints. First note that the string ${\boldsymbol u}$ has $2^{t+\ell_3}$ possible values. Since the string ${\boldsymbol w}\circ {\boldsymbol u}$ has to be $\ell_2$-substring unique, the number of options that are eliminated is at most \begin{equation}\label{eq:c2} n\cdot (t+\ell_2+\ell_3)\cdot 2^{t+\ell_3 - \ell_2} = n\cdot (t+\ell_2+\ell_3)\cdot 2^{\lceil 2t/3\rceil}. \end{equation} This is because at least one of any two identical substrings in $ {\boldsymbol w} \circ {\boldsymbol u} $ must overlap with $ {\boldsymbol u} $. Similarly, the number of strings that are eliminated by the third constraint is at most \begin{equation}\label{eq:c3} (t+1)\cdot (t+1) \cdot 2^{t}. \end{equation} Lastly, we have that $2^{t+\ell_3} = 2^{ \ceilenv{a \log (n)} + \floorenv{t/3}} $ and by comparing with (\ref{eq:c2}) we get for $n$ large enough $$2^{ \ceilenv{a \log (n)} + \floorenv{t/3}} > n\cdot (t+\ell_2+\ell_3)\cdot 2^{\lceil 2t/3\rceil}.$$ Moreover, by comparing with (\ref{eq:c3}) it also holds that $$2^{ \ceilenv{a \log (n)} + \floorenv{t/3}} > (t+1)\cdot (t+1) \cdot 2^{t}$$ since $b<3$. Thus, it is concluded that such a string ${\boldsymbol u}$ exists.
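As a numeric sanity check of the two comparisons above (an illustrative sketch, not part of the formal argument; the parameter values $a=2$, $b=1$ and $n=2^{30}$ are chosen for illustration only), both inequalities can be verified directly:

```python
# Numeric sanity check (illustrative parameters, not from the paper):
# verify that 2^(t+l3) exceeds both n*(t+l2+l3)*2^(ceil(2t/3)), the options
# excluded by l2-substring uniqueness, and (t+1)^2 * 2^t, the options
# excluded by Constraint 3, for a = 2, b = 1 and a large n.
from math import ceil, floor, log2

n = 2 ** 30
a, b = 2, 1
t = ceil(b * log2(n))                     # t = b*log(n), o(log n) terms dropped
L = ceil(a * log2(n)) + floor(t / 3) + 1
l2 = L - ceil(2 * t / 3) - 1
l3 = L - t - 1

lhs = 2 ** (t + l3)                       # number of candidate strings u
bound_c2 = n * (t + l2 + l3) * 2 ** ceil(2 * t / 3)
bound_c3 = (t + 1) ** 2 * 2 ** t

assert lhs > bound_c2 and lhs > bound_c3  # a valid extension u survives
```

Here $t + \ell_3 = L - 1 = \lceil a \log(n) \rceil + \lfloor t/3 \rfloor$, so the left-hand side matches the expression used in the proof.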
\end{IEEEproof} \section{Introduction} \label{sec:intro} \input{Intro} \section{Definitions and Preliminaries}\label{sec:def} \input{Defs} \section{Reconstructing an Incomplete Multispectrum}\label{sec:rec-incomplete} \input{Incomplete-recons} \section{Reconstructing an Erroneous Multispectrum}\label{sec:rec-Erroneous} \input{Erroneous} \section{Alternative Construction for Erroneous Multispectrum}\label{sec:rec-Erroneous2} \input{Erroneous2} \section{Conclusion}\label{sec:concl} This paper studied the reconstruction of strings based upon noisy versions of their multispectrum. In the first model, we assumed that not all substrings in the multispectrum are read and in the second, it was assumed that all substrings are read, however several of them can be erroneous. In each case we studied code constructions of strings that can be uniquely reconstructed from the noisy version of the multispectrum. The cardinalities of the codes are studied along with specific code constructions. An important ingredient in our constructions is the set of $(L,d)$-substring distant strings. We studied when the redundancy of this set is at most a single bit and when its asymptotic rate approaches 1. We also presented specific encoding and decoding maps for this constraint. \bibliographystyle{IEEEtran}
\section{Introduction} \label{sec:intro} Understanding inhomogeneities in the observed universe is one of the central challenges in cosmology. The tendency for gravitational collapse is much weaker in an expanding universe than in flat space. Therefore, mechanisms for the fast growth of cosmological perturbations are of much interest. For instance, in matter domination an overdensity $\delta = \delta\rho/\rho$ (where $\rho$ is the energy density and $\delta\rho$ is the perturbed energy density) grows linearly with the scale factor: $\delta(t) \propto a(t)$. Here, we will present a scenario for much faster growth. This growth will take place in a pressure-less fluid composed of non-relativistic fermionic particles $\psi$ in the early universe, prior to \textit{Big Bang Nucleosynthesis} (BBN).\footnote{For a recent review of pre-BBN physics see~\cite{Allahverdi:2020bys}.} The scenario has two essential ingredients: \textit{i)} an \textit{Early Matter Domination} (EMD) era;\footnote{Usually an epoch of EMD leads to linear growth of overdensities, similar to what happens in the dark matter dominated epoch in the late universe~\cite{Georg:2016yxa, Georg:2017mqk, Erickcek:2011us, Redmond:2018xty}.} \textit{ii)} an interaction between the $\psi$ particles mediated by a massive scalar field $\phi$ in the EMD epoch. These ingredients arise naturally in string/supergravity models. We present explicit examples of background solutions incorporating the interactions and then study perturbations to exhibit the rapid growth. The physical reason for the fast growth is the scalar field mediated force, as $\psi$ particles attract each other with a stronger than gravitational interaction. We will also initiate a study of the phenomenological implications of this scenario. 
These will include the production of \textit{Primordial Black Holes} (PBHs) and \textit{Gravitational Waves} (GWs).\\ Let us start by discussing the generic features of the essential ingredients of our setup.\\ \noindent {\it{Early Matter Domination}:} A generic feature of string/supergravity models is the existence of moduli, i.e. gravitationally coupled scalar fields whose \textit{vevs} parametrise the size and shape of the extra-dimensions. These are massless at tree level and typically acquire masses due to higher-order corrections or loop effects. In many cases, moduli masses are set by the scale of supersymmetry breaking, which is well below the Hubble scale at the end of inflation. This implies a displacement of the scalar fields from their late time minima. The displaced scalars oscillate about their minima when the Hubble rate falls below their masses, leading to epochs of matter domination (see e.g.~\cite{Coughlan:1983ci, hep-ph/9308292, hep-ph/9308325, hep-ph/9507453, Cicoli:2016olq, 1906.03025, Erickcek:2011us} and~\cite{1502.07746} for a review). To accommodate the successful predictions of BBN, an EMD epoch has to end with the universe reheating above $\sim 3 \, \rm{MeV}$. This happens through the decay of the constituents of the universe during the EMD epoch. We note that while an oscillating modulus is one of the most natural ways to enter an epoch of EMD, this is not necessary for our scenario, which is insensitive to the exact nature of the particle causing the EMD epoch (we will denote quantities associated with this by a sub/superscript `b', for \textit{background}).\\ \noindent {\it{Scalar Field Mediated Interactions}:} The other key element is a hidden sector fermion ($\psi$) experiencing a scalar ($\phi$) mediated force.
Cosmological effects of a hidden fermion experiencing a scalar mediated force have been extensively studied in recent years, see e.g.~\cite{Das:2005yj, Amendola:1999er, Vagnozzi:2021quy, Tsai:2021irw, Savastano:2019zpr, Amendola:2017xhl, Damour:1990tw, hep-th/9408025, gr-qc/0108016, astro-ph/0303145, astro-ph/0208032, astro-ph/0306343, astro-ph/0212518, astro-ph/0307350, Amendola:2003wa, Domenech:2021uyx}. We will take $\psi$ to be part of a hidden sector. Its interactions with the visible sector will be feeble\footnote{Therefore the usual fifth-force bounds are not relevant.}, but it will interact strongly with $\phi$. In the Einstein frame, the scalar couples to the trace of the energy-momentum tensor of $\psi$: $g^{\mu \nu} T_{\mu \nu}^{(\psi)} = \rho_{\psi} - 3 p_\psi$, where $\rho_\psi$ and $p_\psi$ are the energy density and the pressure of the $\psi$ fluid respectively. At the level of cosmological fluids, this implies the non-conservation of the stress tensors of the individual components. The violation is proportional to the product of the trace of the energy-momentum tensor of the $\psi$ component and the gradient of the mediating scalar. At the background level, once the scalar field is oscillating about its minimum (i.e. when $H \lesssim m_\phi$, where $H$ is the Hubble parameter), the fluids reach a scaling regime in which each component redshifts as matter and the energy exchange between fluids stops. However, the presence of the coupling leads to an attractive force between $\psi$ particles. This force leads to the fast growth of $\psi$ perturbations.\footnote{Note that in~\cite{Savastano:2019zpr} a similar mechanism causes matter perturbations to grow much faster than in $\Lambda$CDM, i.e. $\delta(t) \propto a(t)^{1.62}$, even in a radiation dominated epoch. The present work is much inspired by this.}\\ We note that, when the coupling is non-vanishing, the background dynamics of the scalar field $\phi$ is affected.
A complete study of the system would then require tracking the dynamics starting from the end of inflation, or fine-tuning the initial conditions. This is an interesting and important issue in all models that feature similar couplings~\cite{Das:2005yj, Amendola:1999er, Vagnozzi:2021quy, Tsai:2021irw, Savastano:2019zpr, Amendola:2017xhl, Damour:1990tw, hep-th/9408025, gr-qc/0108016, astro-ph/0303145, astro-ph/0208032, astro-ph/0306343, astro-ph/0212518, astro-ph/0307350, Amendola:2003wa, Domenech:2021uyx}. Alternatively, one can consider scenarios in which a change in the equation of state $w_\psi$ of the $\psi$ component takes place at the energy scale of interest, going from $w_\psi = 1/3$ to $w_\psi = 0$. Given the form of the coupling, $\propto \rho_\psi (1 - 3 w_\psi)$, this would imply that the coupling is turned on only when the equation of state deviates from $1/3$. There are in principle various ways to achieve this, such as scenarios where the $\psi$ particles get mass from a hidden sector symmetry breaking when a hidden scalar field acquires a vacuum expectation value~\cite{Gehrlein:2019iwl, Shelton:2010ta}. Before the symmetry breaking the $\psi$ particles would essentially be massless and behave as relativistic degrees of freedom. In general, the time scale of such a symmetry breaking is much shorter than the Hubble time, so one would expect a rapid transition in the $\psi$ equation of state, as assumed in this paper. A second approach is a time-dependent coupling that is a function of the scalar vacuum expectation value~\cite{Hinterbichler:2010es}. In this well-studied symmetron-based model, the fifth force, i.e. a non-zero coupling between the fermion and the scalar, is naturally turned on when a symmetry in a hidden scalar sector breaks as the universe expands. We stress, however, that detailed model building is not the focus of this work.
In this paper, we will use a phenomenological approach, parametrising the equation of state to keep track of the coupling. We leave a study of the microscopic realisation of such a transition to a future work.\\ Before closing the introduction, let us briefly mention the possible phenomenological implications of our scenario. Estimates of the scales involved give production of PBHs in the sub-lunar mass range. These are known to be ideal candidates to constitute all of dark matter. GWs will be produced with the scalar field acting as a source for the perturbations and from the dynamics of the PBHs produced. This leads to GWs in a wide range of frequencies (from $10^{-3}$ to $10^{15}$ Hz).\\ This paper is structured as follows. In Sec.~\ref{background}, we describe our setup and obtain background solutions. In Sec.~\ref{sec:Perturbations} we analyse perturbations and exhibit their fast growth. We outline phenomenological implications in Sec.~\ref{pheno}, leaving detailed explorations for future work. We conclude in Sec.~\ref{conclude}. \section{Coupled Dynamics in the Early Universe} \label{background} In this section, we first describe the equations that govern the dynamics of our system. We then present the background (homogeneous) cosmology in which perturbations will exhibit fast growth. The solution settles into a matter dominated phase within a few e-foldings of cosmological expansion: it is in this epoch that the fast growth of perturbations takes place. As described in the introduction, the early universe we consider will have three constituents: a background component redshifting as matter ($b$)\footnote{E.g. an oscillating modulus.}, a dark sector fermion ($\psi$) and the scalar force mediator ($\phi$). The first two will be described by cosmological fluids, the latter by its equation of motion. In general, a coupling to the scalar via the trace of the stress tensor can exist for both `b' and `$\psi$'. 
The system is described by the following equations (see~\cite{Amendola:2003wa} and references therein): \begin{align} \label{eq:GeneralConservationBackground} \nabla^{\mu} T_{\mu \nu}^{b} &= - {\beta_{b}(\phi) \over M_{\rm pl} } g^{\rho \sigma} T_{\rho \sigma}^{b} \nabla_{\nu} \phi \,, \\ \label{eq:GeneralConservationPsi} \nabla^{\mu} T_{\mu \nu}^{\psi} &= - {\beta_{\psi}(\phi) \over M_{\rm pl}}g^{\rho \sigma} T_{\rho \sigma}^{\psi} \nabla_{\nu} \phi \,, \\ \label{eq:GeneralKGEquation} \left( \square + m^{2} \right) \phi &= {\beta_{b}(\phi) \over M_{\rm pl}} g^{\rho \sigma} T_{\rho \sigma}^{b} + {\beta_{\psi}(\phi) \over M_{\rm pl}} g^{\rho \sigma} T_{\rho \sigma}^{\psi} \ , \end{align} and \begin{equation} \label{eq:Einstein} M_{\rm pl}^{2} G_{\mu \nu} = T_{\mu \nu}, \end{equation} where $g_{\mu \nu}$ is the spacetime metric, $G_{\mu \nu}$ is the Einstein tensor and $T_{\mu \nu} = T_{\mu \nu}^{(b)} + T_{\mu \nu}^{(\psi)} + T^{(\phi)}_{\mu \nu}$ is the total stress-energy tensor. The quantities $\beta_{b}(\phi)$ and $\beta_{\psi}(\phi)$ are (field dependent) coupling constants and $M_{\rm pl}$ is the reduced Planck mass. Note that large values of the coupling constants $(\beta_{i}(\phi) \gg 1)$ imply that the scalar mediates a force that is stronger than gravity. Hierarchies in the strengths of the couplings can arise naturally in string models as a result of physical separation in the extra-dimensions between different sectors (see e.g.~\cite{Acharya:2018deu} and references therein for a recent discussion in the context of quintessence models). The goal of this paper is to exhibit the phenomenon of fast growth in a specific setting and thereby provide a proof of concept. Hence we will consider a constant coupling $\beta_\psi(\phi) \equiv \beta_\psi \gg 1$, while we take $\beta_b(\phi) = 0$ (a detailed exploration of the dynamics treating both the couplings as parameters is left for future work).
The stress tensors $T_{\mu \nu}^{(b)}$ and $T_{\mu \nu}^{(\psi)}$ will be taken to be of the perfect fluid form. The component `b' seeds the matter dominated epoch; we will take $w_b =0$. As mentioned in the introduction, the $\psi$ component will make a transition from being relativistic (at early times) to becoming non-relativistic within a few e-foldings of the expansion of the universe from the start of our numerical evolution. This transition sets the form of its (time-dependent) equation of state\footnote{We will describe its precise form soon.}, $w_{\psi}$. The scalar stress tensor $T^{(\phi)}_{\mu \nu}$ is given by $T^{(\phi)}_{\mu \nu} = \nabla_\mu \phi \nabla_\nu \phi - g_{\mu \nu} \left(\frac{1}{2} g^{\lambda \rho} \nabla_\lambda \phi \nabla_\rho \phi - V(\phi)\right)$. For simplicity, we work with $V(\phi) = {1 \over 2} m_\phi^{2} \phi^{2}$. Finally, the matter dominated epoch has to end before BBN. The time of the decays is controlled by the widths of the fields. We will treat the widths $\Gamma_{b}$, $\Gamma_{\psi}$ and $\Gamma_\phi$ as phenomenological parameters in our study. \subsection{Background Dynamics} \label{sec:BackgroundDynamics} Next, let us turn to the background solution. Using the number of e-foldings $N = \int H dt$ as the evolution variable, for a homogeneous background Eqs.~\eqref{eq:GeneralConservationBackground}-\eqref{eq:GeneralKGEquation} become (in units where $M_{\rm pl} = 1$) \begin{align} \label{eq:ConservationBackground} &\rho_b' + 3 \rho_b = 0 \,, \\ \label{eq:ConservationPsi} &\rho_\psi' + 3 (1 + w_\psi) \rho_\psi = - \beta_\psi (1 - 3 w_\psi) \rho_\psi \phi' \,, \\ \label{eq:KGEquation} &H H' \phi' + H^2 \phi'' + m_\phi^2 \phi + 3 H^2 \phi' = \beta_\psi (1 - 3 w_\psi) \rho_\psi \,, \end{align} where we have used that $p_b = 0$, $p_\psi = w_\psi \rho_\psi$, and the primes denote derivatives with respect to $N$. The Friedmann equation reads: \begin{equation} H^2 = \frac{2 \left(\rho_b + \rho_\psi + \frac{1}{2} m_\phi^2 \phi^2 \right)}{6 - \phi'^2} \,.
\label{eq:FriedmannEquation} \end{equation} An explicit form of the background solution will be presented for two benchmark values of $\beta_{\psi} =20, 30$. We will refer to these as examples 1 and 2 respectively. As already mentioned, we take a phenomenological approach to describe the equation of state of the $\psi$ component. Therefore, we parametrise $w_\psi$ in terms of the e-folding $N_{\rm NR}$ at which $w_\psi = 0.1$ and the steepness $\Delta N_{w_\psi}$ of the transition: \begin{equation} \label{eq:EoS} w_\psi = \frac{1}{6} \left(1 - \tanh\left(\Delta N_{w_\psi} (N - (N_{\rm NR} - \delta N))\right)\right) \,, \end{equation} where $\delta N = \arctanh(0.4)/\Delta N_{w_\psi}$ is adjusted so that $w_\psi = 0.1$ at $N_{\rm NR}$. In the examples reported below, we fix $N_{\rm NR} = 2$ and use two values for $\Delta N_{w_\psi}$: $\Delta N_{w_\psi} = 3$ (example 1) and $\Delta N_{w_\psi} = 2$ (example 2), see Fig.~\ref{fig:EoS}. \begin{figure}[h!] \includegraphics[width=0.5\textwidth]{EquationOfState.pdf} \caption{Equation of state for $N_{\rm NR} = 2$ and $\Delta N_{w_\psi} = 2,3$.} \label{fig:EoS} \end{figure} At early times, the Hubble rate is much greater than the mass of the scalar and $w_{\psi} =1/3$. The former implies that Hubble friction dominates over the mass term on the left hand side of Eq.~\eqref{eq:KGEquation}, while the latter implies that the right hand side of the same equation vanishes. This implies that the scalar is frozen at $\phi = \phi_{\rm in}$ at early times: it contributes to the energy density of the universe as a result of its initial misalignment. This energy density is $\rho_{\phi, \rm{in}} = {1 \over 2} m_\phi^{2} \phi_{\rm in}^2$. We will track the evolution starting from the point when the `initial' Hubble parameter is $H_{\rm in} \sim m_\phi$ (this will be taken to correspond to $N=N_{\rm in}=0$). The other initial conditions that need to be specified are the initial energy densities in $b$ and $\psi$, $\rho_{\psi, \rm{in}}$ and $\rho_{b, \rm{in}}$.
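The background system above is simple enough to integrate directly. The following sketch (our illustration, in reduced Planck units with $m_\phi = 1$; the function names, the SciPy integrator, and the precise $\tanh$ convention for $w_\psi$, fixed so that it interpolates between $1/3$ and $0$ and equals $0.1$ at $N = N_{\rm NR}$, are our choices) evolves Eqs.~\eqref{eq:ConservationBackground}--\eqref{eq:KGEquation} together with the Friedmann constraint:

```python
import numpy as np
from scipy.integrate import solve_ivp

M_PHI = 1.0  # reduced Planck units (M_pl = 1), scalar mass set to one

def w_psi(N, N_NR=2.0, dNw=3.0):
    # tanh interpolation between w = 1/3 (relativistic) and w = 0
    # (non-relativistic), with w_psi(N_NR) = 0.1 by construction
    dN = np.arctanh(0.4) / dNw
    return (1.0 - np.tanh(dNw * (N - (N_NR - dN)))) / 6.0

def hubble(phi, dphi, rho_b, rho_psi):
    # Friedmann constraint: H^2 = 2 (rho_b + rho_psi + V) / (6 - phi'^2)
    V = 0.5 * M_PHI**2 * phi**2
    return np.sqrt(2.0 * (rho_b + rho_psi + V) / (6.0 - dphi**2))

def rhs(N, y, beta_psi=20.0):
    # State: (phi, phi', ln rho_b, ln rho_psi); primes are d/dN
    phi, dphi, lnrb, lnrp = y
    rb, rp = np.exp(lnrb), np.exp(lnrp)
    w = w_psi(N)
    H = hubble(phi, dphi, rb, rp)
    dlnH = -0.5 * (rb + (1.0 + w) * rp + (H * dphi) ** 2) / H**2  # H'/H
    # Klein-Gordon equation with the beta_psi (1 - 3 w_psi) rho_psi source
    ddphi = (beta_psi * (1.0 - 3.0 * w) * rp / H**2
             - (M_PHI / H) ** 2 * phi - (3.0 + dlnH) * dphi)
    dlnrb = -3.0                                   # 'b' redshifts as matter
    dlnrp = -3.0 * (1.0 + w) - beta_psi * (1.0 - 3.0 * w) * dphi
    return [dphi, ddphi, dlnrb, dlnrp]

# Field frozen at phi_in = 0.1 M_pl, m_phi / H_in = 1.22, equal fluid densities
phi_in, H_in = 0.1, M_PHI / 1.22
rho_fluid = 3.0 * H_in**2 - 0.5 * M_PHI**2 * phi_in**2
y0 = [phi_in, 0.0, np.log(rho_fluid / 2.0), np.log(rho_fluid / 2.0)]
sol = solve_ivp(rhs, (0.0, 4.0), y0, rtol=1e-8, atol=1e-12)
```

Tracking the fractional energy densities from this solution should reproduce the approach to the scaling regime shown in Fig.~\ref{fig:ScalingRegime1}.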
In the explicit examples that we will report, we use $\phi_{\rm{in}} = 0.1 M_{\rm pl}$, $\rho_{\psi, \rm{in}}/\rho_{b, \rm{in}} = 1$ and $m_\phi/H_{\rm in} \simeq 1.22$. The exact value of $m_\phi/H_{\rm in}$ does not affect the results reported below, as long as the field is initially at rest. As one would expect (given that all components behave as matter when interactions are switched off), the system quickly settles into a scaling regime in which the energy densities of all the components redshift as $1/a^3$. We plot the fractional energy densities \begin{equation} \Omega_b = \frac{\rho_b}{\rho} \,, \quad \Omega_{\psi} = \frac{\rho_\psi}{\rho} \,, \quad \Omega_\phi = \frac{\rho_\phi}{\rho} \,, \end{equation} where $\rho = \rho_b + \rho_\psi + \rho_\phi$, for our benchmark examples in Fig.~\ref{fig:ScalingRegime1} and Fig.~\ref{fig:ScalingRegime2}. The evolution of the scalar field for example 1 $(\beta_{\psi} =20, \Delta N_{w_\psi} = 3)$ is shown in Fig.~\ref{fig:FieldDynamics} (the $\phi$ evolution is very similar for example 2). \begin{figure}[h!] \includegraphics[width=0.5\textwidth]{FractionalEnergyDensityEx1.pdf} \caption{Scaling regime for example 1, with $\beta_\psi = 20$ and $\Delta N_{w_\psi} = 3$.} \label{fig:ScalingRegime1} \end{figure} \begin{figure}[h!] \includegraphics[width=0.5\textwidth]{FractionalEnergyDensityEx2.pdf} \caption{Scaling regime for example 2, with $\beta_\psi = 30$ and $\Delta N_{w_\psi} = 2$.} \label{fig:ScalingRegime2} \end{figure} \begin{figure}[h!] \includegraphics[width=0.45\textwidth]{FieldDynamics.pdf} \caption{Scalar field dynamics for example 1, with $\beta_\psi = 20$ and $\Delta N_{w_\psi} = 3$.} \label{fig:FieldDynamics} \end{figure} \section{Fast Growth of Perturbations} \label{sec:Perturbations} Having obtained the homogeneous background in the previous section, we now turn to the study of perturbations around this background.
We will see that there is a fast growth of the perturbations in the matter dominated epoch to which the background solutions asymptote. The equations governing the dynamics of the perturbations can be obtained in full generality by perturbing the Einstein equations as well as the conservation equations and the Klein-Gordon equation~\cite{Amendola:2003wa}. In our case, we will be interested in perturbation modes with wavelength $\lambda$ smaller than the Compton wavelength of the scalar field, namely $k/a \gg m_\phi$. In this limit, the equations of motion for the perturbations simplify. In Fourier space, the equations for the evolution of the fractional overdensities ($\delta_i$, $i=\psi, b$) and divergence of the dimensionless velocity perturbations ($\theta_i$, $i=\psi, b$) are\footnote{We follow the conventions of~\cite{Amendola:2003wa} for the definition of these.} \begin{align} \label{eq:PerturbationDeltaPsi} &\delta_\psi'' + \left[(w_\psi + 1) \theta_\psi\right]' + 3 \left[w_{\psi, \rho} (1-\beta_\psi \phi') \delta_\psi\right]' = 0 \,, \\ \label{eq:PerturbationDeltaB} &\delta_b'' + \left(2 + \frac{H'}{H}\right) \delta_b' - \frac{3}{2} \Omega_\psi \delta_\psi - \frac{3}{2} \Omega_b \delta_b = 0 \,, \end{align} where $w_{\psi, \rho} = \frac{d w}{d \log \rho} = \rho w_\psi'/\rho'$ and \begin{align} &\theta_\psi \simeq - \frac{1}{w_\psi+1} \left[\delta_\psi' + 3 w_{\psi, \rho} (1 - \beta_\psi \phi') \delta_\psi\right] \,, \nonumber\\ &\theta_\psi' \simeq - f \theta_\psi + \frac{w_\psi + w_{\psi, \rho}}{1 + w_\psi} \kappa^2 \delta_\psi - \omega_\psi \delta_\psi - \frac{3}{2} (1+w_\psi) \Omega_b \delta_b \,,\nonumber \end{align} where $\kappa = k/(a H)$ and \begin{align} & f = \left[(1-3 w_\psi)(1 - \beta_\psi \phi') - w_{\psi, \rho} A + 1 + \frac{H'}{H}\right] \,, \nonumber\\ & A = 3 + \beta_\psi \phi' \frac{1-3w_\psi}{1+w_\psi} \,, \nonumber\\ &\text{\small $\omega_\psi = \frac{3}{2} (1+w_\psi) \Omega_\psi \left[1 + 2 \beta_\psi^2 \frac{(1-3w_\psi)(1-3w_\psi - 3
w_{\psi,\rho})}{(1+w_\psi)^2}\right] \,.$} \nonumber \end{align} We solve this system of equations numerically with `adiabatic' initial conditions $\delta_{\psi, \rm{in}} = \delta_{b, \rm{in}}$ and $(\delta'_{b, \rm{in}}, \delta'_{\psi, \rm{in}})=(0,0)$ for various\footnote{We do not consider smaller values of $\kappa$ than those reported in the legends of Fig.~\ref{fig:PerturbationsEx1} and Fig.~\ref{fig:PerturbationsEx2} because they do not go non-linear within the regime of validity of Eq.~\eqref{eq:PerturbationDeltaPsi} and Eq.~\eqref{eq:PerturbationDeltaB}.} values of $\kappa$ at $N = 0$, $\kappa_{\rm in}$. The solutions exhibit an exponentially fast growth as shown in Fig.~\ref{fig:PerturbationsEx1} and Fig.~\ref{fig:PerturbationsEx2} for example 1 and example 2 respectively. Since Eq.~\eqref{eq:PerturbationDeltaPsi} and Eq.~\eqref{eq:PerturbationDeltaB} are valid in the sub-Compton regime, the solution for the various modes is valid as long as $k/a \gtrsim m_\phi$. The maximum $N$ at which the solution is reliable is denoted by the dotted lines in Fig.~\ref{fig:PerturbationsEx1} and Fig.~\ref{fig:PerturbationsEx2}. The system of equations has an exactly solvable regime. Once the $\psi$ component becomes non-relativistic, so that $w_\psi, w_{\psi, \rho} \simeq 0$, in the limit $\delta_{\psi} \gg \delta_b$ Eq.~\eqref{eq:PerturbationDeltaPsi} and Eq.~\eqref{eq:PerturbationDeltaB} simplify significantly: \begin{align} \label{eq:EqDeltaPsiSimplified} &\delta_\psi'' + \frac{1}{2} \delta_\psi' - \tilde{\omega}_{\psi} \delta_\psi = 0 \,, \\ \label{eq:EqDeltaBSimplified} &\delta_b'' + \frac{1}{2} \delta_b' - \frac{3}{2} \Omega_\psi \delta_\psi - \frac{3}{2} \Omega_b \delta_b = 0 \,, \end{align} where we have defined $\tilde{\omega}_\psi = \frac{3}{2} \Omega_\psi (1 + 2 \beta_\psi^2)$ and used the fact that $2 + \frac{H'}{H} \simeq \frac{1}{2}$ in a matter dominated universe. Now, the equation for $\delta_\psi$ is decoupled and can be solved analytically.
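As a cross-check of this statement, one can integrate the decoupled equation numerically and compare the measured late-time slope of $\ln \delta_\psi$ with the growing root of the characteristic polynomial $\gamma^2 + \gamma/2 - \tilde{\omega}_\psi = 0$. In the sketch below, the value $\Omega_\psi = 0.15$ is purely illustrative (the text does not quote the value of $\Omega_\psi$ in the scaling regime):

```python
import numpy as np
from scipy.integrate import solve_ivp

def gamma_beta(omega_tilde):
    # Growing root of gamma^2 + gamma/2 - omega_tilde = 0
    return 0.25 * (-1.0 + np.sqrt(1.0 + 16.0 * omega_tilde))

# omega_tilde = (3/2) Omega_psi (1 + 2 beta_psi^2); Omega_psi = 0.15 is illustrative
beta_psi, Omega_psi = 20.0, 0.15
omega = 1.5 * Omega_psi * (1.0 + 2.0 * beta_psi**2)

# delta'' + delta'/2 - omega delta = 0, written as a first-order system in N
sol = solve_ivp(lambda N, y: [y[1], omega * y[0] - 0.5 * y[1]],
                (0.0, 1.0), [1.0, 0.0], rtol=1e-10, atol=1e-12,
                dense_output=True)

# Late-time logarithmic slope, after the decaying mode has died away
d1, d2 = sol.sol(0.8)[0], sol.sol(1.0)[0]
slope = (np.log(d2) - np.log(d1)) / 0.2
print(slope, gamma_beta(omega))  # both approximately 13.2
```

With these illustrative numbers $\gamma_\beta \simeq 13.2$, close to the exponent quoted below for example 1.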
The growing mode is an exponential function: $\delta_\psi \propto \exp\left(\gamma_\beta N\right)$, with \begin{equation} \label{eq:Exponent} \gamma_\beta = \frac{1}{4} \left(-1 + \sqrt{1 + 16 \tilde{\omega}_\psi}\right) \,. \end{equation} The exponents for the two examples under consideration are $\gamma_1 \simeq 13$ and $\gamma_2 \simeq 23$. Interestingly, in the regime $N \gtrsim N_{\rm NR}$, this is in agreement with the growth exponents obtained numerically (Fig.~\ref{fig:PerturbationsEx1} and Fig.~\ref{fig:PerturbationsEx2}) for adiabatic initial conditions. This indicates that the growth in $\delta_{\psi}$ essentially drives the growth in the system even for adiabatic initial conditions, see Fig.~\ref{fig:PerturbationsNorm}. We have checked this numerically. Note that in the scaling regime $w_\psi = w'_\psi = 0$, so the term proportional to $\kappa^2$ in the equation for $\theta_\psi'$ vanishes. However, when the change of the equation of state described by Eq.~\eqref{eq:EoS} is taken into account, it is important that this term goes to zero sufficiently fast, otherwise it would compete with the term proportional to $\tilde{\omega}_\psi$ in Eq.~\eqref{eq:EqDeltaPsiSimplified} that drives the exponential growth. We emphasise that it is in the scaling regime that Eq.~\eqref{eq:EqDeltaPsiSimplified} and Eq.~\eqref{eq:EqDeltaBSimplified} are valid. The solutions of Eq.~\eqref{eq:PerturbationDeltaPsi} and Eq.~\eqref{eq:PerturbationDeltaB} can be trusted until the linear approximation breaks down, namely as long as $\delta_{b} \big{/} \delta_{b, \rm in} \lesssim A^{-1/2}_{\delta_{b, \rm in}}$ and $\delta_\psi \big{/} \delta_{\psi, \rm in} \lesssim A^{-1/2}_{\delta_{\psi, \rm in}}$, where $A_{\delta_{b, \rm in}}$ and $A_{\delta_{\psi, \rm in}}$ are the dimensionless amplitudes of the initial perturbations as defined by the power spectra.
We will use $N_{\rm NL}$ to denote the number of e-foldings at which the validity of the linear theory breaks down and non-linearities become important. In general, $N_{\rm NL}$ will depend on the initial conditions $(\delta_{b, \rm{in}}, \delta_{\psi, \rm{in}})$. From Fig.~\ref{fig:PerturbationsEx1} and Fig.~\ref{fig:PerturbationsEx2} it is easy to identify the regions in which the linear approximation is valid: $N_{\rm NL}$ is fixed by the intersection of the perturbation mode curves with the horizontal light green (for $A_{\delta_{\psi, \rm in}} = 10^{-10}$) and dark green (for $A_{\delta_{\psi, \rm in}} = 10^{-16}$) lines. Note that for perturbations with both the reported initial amplitude values\footnote{Note that $A_{\delta_{\psi, \rm in}}=10^{-10}$ is the value expected from a scale invariant inflationary power spectrum.}, the modes go non-linear within a few e-foldings after the onset of the matter dominated epoch and well within the regime of validity of the equations for each mode. We note that this estimate is conservative, as the perturbations can in principle undergo some growth before we begin to track them, leading to a higher value of $A_{\delta_{\psi, \rm in}}$. In both examples and for the set of modes that we chose (which are among the first to go non-linear), we have $N_{\rm NL} \simeq 4.5$ for $A_{\delta_{\psi, \rm in}} = 10^{-10}$. We will use this value for the estimates presented in the following sections. \begin{figure}[h!] \includegraphics[width=0.45\textwidth]{PerturbationsEx1.pdf} \caption{Perturbations $\delta_\psi$ and $\delta_b$ for example 1. Solid lines represent $\delta_\psi$ and dashed lines represent $\delta_b$ for the various modes. The dotted lines indicate the maximum $N$ at which the solution with the corresponding color can be trusted, as Eq.~\eqref{eq:PerturbationDeltaPsi} and Eq.~\eqref{eq:PerturbationDeltaB} require $k/a \gtrsim m_\phi$.
All the modes get to $\delta/\delta_{\rm in} \gtrsim 10^9$ in the regime of validity of the equations. When perturbations hit the light (for $A_{\delta_{\psi, \rm in}} = 10^{-10}$) and dark (for $A_{\delta_{\psi, \rm in}} = 10^{-16}$) green horizontal lines they go non-linear.} \label{fig:PerturbationsEx1} \end{figure} \begin{figure}[h!] \includegraphics[width=0.45\textwidth]{PerturbationsEx2.pdf} \caption{Perturbations $\delta_\psi$ and $\delta_b$ for example 2. See the caption of Fig.~\ref{fig:PerturbationsEx1} for the legend.} \label{fig:PerturbationsEx2} \end{figure} \begin{figure}[h!] \includegraphics[width=0.45\textwidth]{PerturbationsNorm.pdf} \caption{Ratio between the numerical and analytical solutions for example 1. Asymptotically perturbations grow like $\delta_\psi \propto \exp(\gamma_\beta N)$, where $\gamma_\beta$ is given in Eq.~\eqref{eq:Exponent}.} \label{fig:PerturbationsNorm} \end{figure} \subsection{Decays} \label{sec:Decay} To accommodate the successes of BBN, the matter dominated era has to end with a reheating temperature $T \gtrsim \text{a few MeV}$. This will happen if the widths of the three components $\Gamma_b$, $\Gamma_\psi$ and $\Gamma_\phi$ satisfy \begin{equation} \label{d1} \Gamma_i \gtrsim H_{\rm BBN} \,, \quad i = b, \psi, \phi \,, \end{equation} where $H_{\rm BBN} \simeq 10^{-24} \, \text{GeV}$. We also require that the decays do not occur before the epoch of fast growth sets in and the perturbations grow rapidly. 
This implies: \begin{equation} \label{d2} H_{\rm BBN} \lesssim \Gamma_{b, \phi, \psi} \lesssim H_{\rm NL} \,, \end{equation} where $H_{\rm NL}$ is the Hubble parameter at the time when the perturbations become non-linear.\\ If the background component is an oscillating modulus with mass $m_b$ and decay rate $\Gamma_b = m_b^3/M_{\rm pl}^2$, then Eq.~\eqref{d2} would imply \begin{equation} m_{b, \rm min} \lesssim m_b \lesssim m_{b, \rm min} \exp\left(\frac{1}{2} (N_{\rm BBN} - N_{\rm NL})\right) \,, \end{equation} where $m_{b, \rm min} = \left(H_{\rm BBN} M_{\rm pl}^2\right)^{1/3} \simeq 20 \, \text{TeV}$. The conditions in Eq.~\eqref{d1} and Eq.~\eqref{d2} also constrain the strength of interaction of $\phi$ and $\psi$. For instance, let us consider the case in which the decays take place via Yukawa interactions \begin{equation} \mathcal{L}_{\rm int} \supset y \phi \bar{\chi} \chi + g \Psi \bar{\psi} \chi + \text{h.c.} \,, \end{equation} where $\chi$ is a visible sector fermion, while $\Psi$ is a visible sector scalar and $y$, $g$ are Yukawa couplings. Then the decay rates are: \begin{equation} \Gamma_\phi \simeq \frac{y^2 m_\phi}{8 \pi} \,, \quad \Gamma_{\psi} \simeq \frac{g^2 m_\psi}{16 \pi}. \end{equation} Concerning the $\psi$ component, the constraints in Eq.~\eqref{d2} translate into constraints for the product $g^2 m_\psi$. In the case of the scalar field $\phi$, since $m_\phi \simeq H_{\rm in}$ we can write the constraints in terms of $N_{\rm BBN}$ and $N_{\rm NL}$: \begin{align} y_{\rm min} \lesssim y \lesssim y_{\rm min} \, \exp\left(\frac{3}{4} (N_{\rm BBN} - N_{\rm NL})\right) \,, \end{align} where $N_{\rm BBN}$ is the value of $N$ at the time of BBN and \begin{align} &y_{\rm min} = \sqrt{8 \pi} \, \exp\left(-\frac{3}{4} N_{\rm BBN}\right) \,.
\nonumber \end{align} The equations used in the evolution of the background and perturbations (in Sec.~\ref{sec:BackgroundDynamics} and the previous part of this section) do not incorporate the effects of the decays. They treat the decays in an instantaneous decay approximation and are valid well before the decay processes play a significant role. Note that for a decay process with rate $\Gamma$, taking place in a matter dominated epoch, at times two e-foldings before $ t = \Gamma^{-1}$, the fraction of decayed particles is approximately five per cent. Thus, as a rule of thumb, we will require that the decay takes place at $N_{\rm dec}$, with $N_{\rm NL} \lesssim N_{\rm dec} \lesssim N_{\rm NL} + 2 \lesssim N_{\rm BBN}$. For concreteness, taking $N_{\rm NR} = 2$ and $N_{\rm BBN} = N_{\rm NL} + 2$, we can give estimates for the Yukawa couplings. For instance, taking $N_{\rm NL} = 4.5$ from the numerical examples in Sec.~\ref{sec:Perturbations} yields $0.038 \lesssim y \lesssim 0.17$. \section{Phenomenological Implications} \label{pheno} The fast growth of perturbations can have various interesting phenomenological implications. Here, we initiate their study. Understanding them in detail so as to extract precise predictions requires dedicated studies, which we leave for future work. \subsection{Primordial Black Holes} \label{sec:PBHs} Once the perturbations become non-linear, it is reasonable to expect that the overdensities will collapse, forming either PBHs~\cite{1966AZh, 10.1093/mnras/152.1.75, Grindlay:1975eb, Chapline:1975ojl, Khlopov:1980mg, Polnarev:1985btg, Carr:2016drx} or other kinds of compact objects, such as oscillons (see e.g.~\cite{Antusch:2017flz} for a study of oscillon formation in the context of an EMD model), primordial halos~\cite{Savastano:2019zpr}, miniclusters~\cite{Hogan:1988mp, Fairbairn:2017sil} or star-like objects, see e.g.~\cite{Krippendorf:2018tei, Visinelli:2021uve} for two comprehensive reviews.
The formation of PBHs in a matter dominated universe is on one hand facilitated by the fact that the background pressure vanishes~\cite{Harada:2016mhb}. On the other hand, though, any deviation from spherical symmetry will tend to virialise the collapsing system, avoiding the formation of a horizon. We defer a detailed numerical study of the formation of PBHs and microhalos, along the lines of~\cite{Helfer:2016ljl, Widdicombe:2018oeo, Muia:2019coe, Nazari:2020fmk, Eggemeier:2020zeg, Eggemeier:2021smj}, to a future work. In the present paper, we will provide simple estimates to exhibit the potentially rich phenomenology.\\ The easiest way to determine the mass scale of the PBHs that can be potentially formed is by isolating the scales for which the $\psi$ perturbations go non-linear~\cite{Amendola:2017xhl, Georg:2016yxa, Georg:2017mqk}. In our setup, the growth involves modes that are sub-Compton, i.e. $k/a > m_\phi$. For this reason, one can expect the maximum mass of the PBH formed to be \begin{equation} M_{\rm PBH} \simeq \rho(N_{\rm NL}) \times \left(\frac{\epsilon_m}{m_\phi}\right)^3 \,, \label{eq:MPBH0} \end{equation} where $(\epsilon_m/m_\phi)$ with $\epsilon_m < 1$ parametrises the wavelength of the collapsing mode. We can estimate the various terms in this expression in terms of $N_{\rm NL}$ and $N_{\rm BBN}$. Assuming that the background is always matter dominated, we can write $H_{\rm NL}/H_{\rm in} \simeq \exp\left(- 3 N_{\rm NL}/2\right)$ and $H_{\rm BBN}/H_{\rm in} \simeq \exp\left(- 3 N_{\rm BBN}/2\right)$. Therefore, we find \begin{equation} \rho({N_{\rm NL}}) = 3 H_{\rm NL}^2 M_{\rm pl}^2 = 3 \left(\frac{H_{\rm NL}}{H_{\rm in}}\right)^2 H_{\rm in}^2 M_{\rm pl}^2 \,. \end{equation} Furthermore, we can approximate $H_{\rm in} \simeq m_\phi$ since the scalar field becomes dynamical at $N \simeq 0$. 
Hence, using for concreteness $\epsilon_m = 0.1$, we find from Eq.~\eqref{eq:MPBH0} \begin{equation} \label{eq:PBHMass} M_{\rm PBH} \simeq 3 \times 10^{34} \, \text{g} \, \times \exp\left(-\frac{3}{2} (2 N_{\rm NL} + N_{\rm BBN})\right) \,, \end{equation} where we have also used that $H_{\rm BBN} \simeq 10^{-24} \, \text{GeV}$. To make a concrete estimate, let us focus on the numbers that come from the numerical examples in Sec.~\ref{sec:Perturbations}. In those cases, perturbations with initial amplitude\footnote{We take the amplitude of the perturbations to be as given by the normalisation of scalar perturbations~\cite{Planck:2018vyg}, assuming a scale invariant power spectrum.} $A_{\delta_{\psi, \rm{in}}} \simeq 10^{-10}$ enter the non-linear regime around $N_{\rm NL} = 4.5$ and $N_{\rm BBN} = N_{\rm NL} + 2 = 6.5$ (so that there is enough time for the various components to decay before the beginning of BBN, as explained in Sec.~\ref{sec:Decay}). In this case, Eq.~\eqref{eq:PBHMass} gives $M_{\rm PBH} \simeq 2.4 \times 10^{24} \, \text{g}$, which falls slightly above the sub-lunar mass range ($10^{17} \, {\rm g} \lesssim M_{\rm PBH} \lesssim 10^{23} \, {\rm g}$), in which PBHs can still compose $100 \%$ of dark matter~\cite{Carr:2021bzv}. Smaller values of $\beta$, smaller values for the initial amplitude of the perturbations or larger modes would give rise to larger values of $N_{\rm NL}$ and therefore slightly lighter PBHs, which would fall in the sub-lunar mass range. In Fig.~\ref{fig:ParameterSpace} we exhibit the PBH masses that can be obtained in our parameter space. Note that the expression in Eq.~\eqref{eq:MPBH0} gives a maximum value for the mass of the PBHs that can be formed.
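The benchmark numbers above are straightforward to reproduce. The following snippet (an illustrative sketch; the function name is ours, not from the text) evaluates Eq.~\eqref{eq:PBHMass} for the quoted benchmark:

```python
import math

def pbh_mass_grams(N_NL, N_BBN):
    """Maximum PBH mass from Eq. (eq:PBHMass), assuming eps_m = 0.1."""
    return 3e34 * math.exp(-1.5 * (2 * N_NL + N_BBN))

# Benchmark of the numerical examples: N_NL = 4.5, N_BBN = N_NL + 2.
M = pbh_mass_grams(4.5, 6.5)
print(f"M_PBH ~ {M:.1e} g")  # ~ 2.4e24 g, slightly above the sub-lunar window
```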
However, lighter PBHs can also be formed.\footnote{The growth takes place for all modes for which the term proportional to $\omega_{\psi}$ drives the dynamics in Eq.~\eqref{eq:PerturbationDeltaPsi} and Eq.~\eqref{eq:PerturbationDeltaB}.} PBHs with mass around $(10^{15}-10^{16}) \, \text{g}$ would be evaporating today and are therefore severely constrained by observations of the galactic and extra-galactic $\gamma$-ray background~\cite{Carr:2020gox}. Lighter PBHs, in the range $10^9 \, \rm{g} \lesssim M_{\rm PBH} \lesssim 10^{14} \, \rm{g}$, are subject to milder constraints due to BBN. Light PBHs, $M_{\rm PBH} \lesssim 10^{15} \, \rm{g}$, are quite interesting from the phenomenological point of view, as they might be a unique probe of the total number of light scalars in the fundamental theory~\cite{Calza:2021czr}, provide a baryogenesis mechanism~\cite{Hooper:2020otu}, reheat the universe~\cite{Lennon:2017tqq, Baldes:2020nuv} and produce GWs in the ultra-high-frequency band~\cite{Anantua:2008am, Dolgov:2011cq, Zagorac:2019ekv}. Of course, to connect to phenomenology, one has to compute the fraction of PBH dark matter, i.e. $\beta = \rho_{\rm PBH}/\rho_{\rm DM}$, where $\rho_{\rm PBH}$ is the current energy density in PBHs, while $\rho_{\rm DM}$ is the current total dark matter energy density. Such a computation would require the knowledge of the threshold value $\delta_{\psi, \rm{c}}$ for a sub-Compton spherical overdensity $\delta_\psi$ for it to collapse to a PBH. In turn, computing $\delta_{\psi, \rm{c}}$ requires a careful numerical simulation that we plan to report in future work. \begin{figure}[h!]
\includegraphics[width=0.45\textwidth]{ParameterSpace.jpeg} \caption{Mass of the PBHs as a function of $N_{\rm NL}$ and $N_{\rm BBN}$.} \label{fig:ParameterSpace} \end{figure} An early pre-BBN epoch of matter domination generically produces early micro- or mini-halos~\cite{ Blinov:2021axd, Barenboim:2021swl} due to the early growth of perturbations on scales below the horizon size. For micro-halos that are stable over cosmological time scales, the annihilation signature at the present epoch from the dense galactic center has been studied extensively~\cite{Blanco:2019eij}. But in our case the situation is different: the halos made of $\psi$ particles emit scalar radiation. In fact, it has been shown that scalar radiation from early halos favours PBH formation~\cite{Flores:2020drq}. The remnant halos which do not form PBHs will be destroyed as the $\psi$ particles decay into the radiation bath of the Standard Model particles. So one naively expects that the number density of micro-halos will be very tiny at the present epoch, unlike in~\cite{Blinov:2021axd}. \subsection{Gravitational Waves} \label{sec:GWs} Scalar perturbations generate GWs at second order in perturbation theory. This effect has been explored in several different contexts, see e.g.~\cite{Baumann:2007zm, Assadullahi:2009nf, Espinosa:2018eve, Kohri:2018awv, Inomata:2020yqv, Inomata:2020tkl, Domenech:2020ssp, Domenech:2021wkk} and~\cite{Domenech:2021ztg} for a recent comprehensive review on the subject. We can expect that this effect is also present in our scenario, as on sub-Compton scales the scalar field perturbations grow following the overdensity in the $\psi$ component~\cite{Amendola:2003wa}: \begin{equation} \label{eq:ScalarFieldMode} \delta\phi_k \simeq \left(\frac{a H}{k}\right)^2 \beta_\psi \Omega_\psi \delta_\psi \,, \qquad \frac{k}{aH} \gg 1 \,.
\end{equation} In this section we sketch some estimates of the amplitude and frequency of the GWs that one can expect in the scenario described in the previous sections, due to second order scalar perturbations.\footnote{As we will only report order of magnitude estimates, we neglect factors containing the number of degrees of freedom in this section.} We leave a detailed computation of the GW spectrum to a future work and we follow~\cite{Giblin:2014gra, Chatrchyan:2020pzh} to do the estimates. Since we have a multi-component setup, second order scalar perturbations are not the only possible source of GWs. For instance, if the $\psi$ or the background components develop an anisotropic stress-energy tensor due to the rapid growth, they could also source GWs.\\ In order to make some estimates, let us adopt a few simplifying assumptions. We will assume that most of the GW production occurs at $N_{\rm NL}$, i.e. when the perturbations go non-linear and the scalar field fragments. We also assume that most of the GW energy is deposited in a mode $k_p/a \gtrsim m_\phi$, as it is reasonable to expect from Eq.~\eqref{eq:ScalarFieldMode}: at larger $k$ the scalar field perturbation is suppressed by the prefactor $(aH/k)^2$, while at lower $k$ the enhancement does not occur at all.\\ First, we would like to understand the typical frequency range that is involved. Given Eq.~\eqref{eq:ScalarFieldMode} we would expect that the signal is maximized when $\delta\phi_k$ is maximized.
As modes are not amplified at $k/a \lesssim m_\phi$, the GW spectrum features a lower cutoff, given by the mass of the field, namely\footnote{A subscript ‘0’ denotes quantities evaluated at the present time.} \begin{align} \label{eq:F0} f_0 \gtrsim \left(\frac{m_\phi}{10^{-17} \, \rm GeV}\right) \exp\left(N_{\rm NL} - N_{\rm BBN}\right) \times 10^{-3} \, \rm{Hz} \,, \end{align} where we have used that \begin{align} \label{eq:apa0} \frac{a_{\rm NL}}{a_0} &= \frac{a_{\rm NL}}{a_{\rm BBN}} \frac{a_{\rm BBN}}{a_{\rm 0}} \simeq \\ &\simeq \exp\left[(N_{\rm NL} - N_{\rm BBN})\right] \times 10^{-10} \times \, \frac{\text{MeV}}{T_{\rm BBN}} \,. \nonumber \end{align} Eq.~\eqref{eq:F0} tells us that the signal could cover a large fraction of the GW spectrum depending on the mass $m_\phi \gtrsim 10^{-17} \, \rm{GeV}$ of the scalar field. Depending on the amplitude of the GW spectrum, the range of frequencies that is in principle involved could be probed by current and future GW experiments including LISA~\cite{amaroseoane2017laser, Barausse:2020rsu} ($f_0 \sim 10^{-2} \, \rm{Hz}$), DECIGO~\cite{Seto:2001qf} and BBO~\cite{Yagi:2011wg} ($f_0 \sim 1 \, \rm{Hz}$), LIGO/Virgo/KAGRA~\cite{KAGRA:2021kbb}, Einstein Telescope~\cite{Maggiore:2019uih} and Cosmic Explorer~\cite{Evans:2021gyd} ($f_0 \sim 100 \, \rm{Hz}$) and ultra-high-frequency band proposals~\cite{Aggarwal:2020olq} ($f_0 \gtrsim 10^3 \, \rm{Hz}$).\\ In order to estimate the amplitude of the GW spectrum let us parametrise\footnote{As in~\cite{Chatrchyan:2020pzh}, we neglect the tensor structure of the perturbation for the purposes of making a basic estimate.} $\Pi^{\rm TT}_{ij} \simeq \alpha \rho_\phi$, where $\Pi^{\rm TT}_{ij}$ is the transverse traceless component of the scalar field stress-energy tensor and $\alpha \lesssim 1$. 
Then one can write the peak fractional energy density in GWs at production ($N_{\rm NL}$) as~\cite{Chatrchyan:2020pzh} \begin{align} \label{eq:OmegaGWk} \Omega_{{\rm GW}, p}(k_p) &\sim \frac{64 \pi^2}{3 M_{\rm pl}^4 H_{\rm NL}^4} \frac{\rho_{\phi}^2(N_{\rm NL})}{(k_p/(a_{\rm NL} H_{\rm NL}))^2} \frac{\alpha^2}{\lambda} \nonumber \\ & \simeq \frac{192 \pi^2 \, \Omega_\phi^2(N_{\rm NL})}{(k_p/(a_{\rm NL} H_{\rm NL}))^2} \frac{\alpha^2}{\lambda} \,, \end{align} where $\lambda = \Delta \log k$ parametrises the logarithmic width of the signal. In order to compute $k_p/(a_{\rm NL} H_{\rm NL})$ we impose that the relevant scale is sub-Compton at $N_{\rm NL}$: $\frac{k_p}{a_{\rm NL}} \gtrsim m_\phi$, which implies \begin{equation} \frac{k_p}{a_{\rm NL} H_{\rm NL}} \gtrsim \exp\left(\frac{3 N_{\rm NL}}{2}\right) \,, \end{equation} where we have used that $m_\phi \sim H_{\rm in}$. Using $N_{\rm NL}= 4.5$ from the previous sections, one gets $k_p/(a_{\rm NL} H_{\rm NL}) \gtrsim 850$. Taking $\Omega_\phi(N_{\rm NL}) \simeq 5 \times 10^{-3}$ from the numerics of the previous section and $\alpha \sim \lambda \sim 1$, one finds $\Omega_{{\rm GW}, p}(k_p) \simeq 6.5 \times 10^{-8}$.
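For orientation, these numbers can be reproduced directly from Eq.~\eqref{eq:OmegaGWk}; a minimal sketch (variable names are ours):

```python
import math

N_NL = 4.5
Omega_phi = 5e-3   # Omega_phi(N_NL) from the numerics
alpha = 1.0        # anisotropic-stress parameter, alpha <~ 1
lam = 1.0          # logarithmic width of the signal

# Sub-Compton condition: k_p/(a_NL H_NL) >~ exp(3 N_NL / 2).
k_ratio = math.exp(1.5 * N_NL)

Omega_GW_peak = 192 * math.pi**2 * Omega_phi**2 / k_ratio**2 * alpha**2 / lam
print(f"k_p/(aH) ~ {k_ratio:.0f}, Omega_GW_peak ~ {Omega_GW_peak:.1e}")
```

This recovers $k_p/(a_{\rm NL}H_{\rm NL}) \simeq 850$ and $\Omega_{{\rm GW},p} \simeq 6.5 \times 10^{-8}$.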
The current fractional energy density in GWs can be computed by redshifting the result in Eq.~\eqref{eq:OmegaGWk} using Eq.~\eqref{eq:apa0} \begin{align} \Omega_{\rm GW, 0} (k_p) &= \left(\frac{a_{\rm NL}}{a_0}\right)^4 \frac{\rho_{\rm NL}}{\rho_0} \, \Omega_{{\rm GW}, p} (k_p) \simeq \\ & \simeq 10^{-4} e^{N_{\rm NL} - N_{\rm dec}} \, \Omega_{{\rm GW}, p}(k_p) \lesssim 2.4 \times 10^{-12} \,, \nonumber \end{align} where in the last step we have used the above estimate for $\Omega_{{\rm GW},p}(k_p)$ and $N_{\rm dec} - N_{\rm NL} = 1$.\\ Of course, in order to properly compute the GW amplitude, we should track the behaviour of $h_k$ all the way through, starting from $N_{\rm NR}$ to the start of the standard radiation domination phase that begins when all the components $b$, $\phi$ and $\psi$ decay. This implies tracking perturbations also when they enter the non-linear regime, which would need a full numerical analysis. In this way one would be able to understand the effects of non-linearities~\cite{Delos:2020mtj, Delos:2019tsl,Delos:2018ueo} and whether they would enhance the GW spectrum. We plan to further study GW production, including the numerical analysis, in a future work. Nevertheless, an amplitude $\Omega_{{\rm GW}, 0} \simeq 10^{-12}$ can be probed by most of the future GW detectors mentioned above (see~\cite{Thrane:2013oya, Mingarelli:2019mvk} for a detailed discussion).\\ Beyond the GW production mechanisms mentioned above, there are a couple of additional sources related to the formation of light PBHs: \begin{itemize}[leftmargin=*] \item Evaporation of light PBHs~\cite{Anantua:2008am, Dolgov:2011cq}, which produces a GW spectrum peaking at ultra-high frequencies, typically above $10^{10} \, \rm{Hz}$. In~\cite{Dolgov:2011cq}, the maximum amplitude for the GW spectrum is computed to be $\Omega_{\rm GW} \simeq 10^{-7}$.
In our scenario, it is likely that the maximum amplitude would be slightly smaller, due to an additional period of EMD before the $b$, $\phi$ and $\psi$ components decay. \item Mergers of PBHs~\cite{Zagorac:2019ekv, Dolgov:2011cq}: in this case the frequency can be estimated as the \textit{Innermost Stable Circular Orbit} (ISCO) frequency, namely \begin{equation} f \simeq 10^{15} \, {\rm Hz} \left(\frac{10^{20} \, {\rm g}}{M_{\rm PBH}}\right) \,, \end{equation} where we have assumed an equal mass for the two merging PBHs. Note that for $M_{\rm PBH} \simeq 10^{20} \, \rm{g}$, which is the relevant mass range for this paper, the ISCO frequency roughly falls into the frequency range that will be accessible with axion experiments~\cite{Ejlli:2019bqj} like ALPS II~\cite{Bahre:2013ywa} and JURA~\cite{doi:10.1146/annurev-nucl-102014-022120}. In order to claim the detectability of such mergers, an estimate of the number of expected events at a given distance is needed, which depends on the probability of forming a binary for such light PBHs. We will analyse these points carefully in a future work. \end{itemize} \section{Conclusions} \label{conclude} The key result of this article is a scenario for fast growth of cosmological perturbations. First, we present cosmological solutions which asymptote to a matter dominated era in the early universe (prior to BBN). Density perturbations in this matter dominated epoch grow very fast; the primary reason for this is a scalar-mediated force between dark fermions. Examples with explicit (numerical) computations of the growth exponent have been presented in Sec.~\ref{sec:Perturbations}. The goal of this paper is to present the first explicit examples; studies of the parameter space of the models and the exploration of other related mechanisms for fast growth of perturbations will be carried out in future work. We also took a phenomenological approach to the change in the equation of state of the $\psi$ component.
An interesting future direction would be to present a microscopic description of such a transition of the $\psi$ equation of state and to study the evolution of the perturbations in this setting. Fast growth of perturbations can potentially have a whole host of interesting phenomenological implications. We have outlined these in the context of primordial black holes and gravitational waves in Sec.~\ref{pheno}. Our estimates indicate that we can obtain PBHs in the sub-lunar window, which can in principle constitute $100\%$ of dark matter. Also, GWs can be expected in a wide range of frequencies, from $10^{-3}$ Hz to $10^{15}$ Hz, with amplitudes within the detectable range of future experiments. Extracting detailed predictions requires analysis which is beyond the scope of the present article. We plan to report on these in subsequent works. \section*{Acknowledgments} We thank Stefano Savastano and Luca Amendola for email exchanges regarding the perturbation growth in coupled quintessence cosmology within the scaling regime. AM is supported in part by the SERB, DST, Government of India by the grant MTR/2019/000267. SD acknowledges SERB grant CRG/2019/006147. FM is funded by a UKRI/EPSRC Stephen Hawking fellowship, grant reference EP/T017279/1 and partially supported by the STFC consolidated grant ST/P000681/1.
\section{Introduction} In \cite{artin1987graded}, Artin and Schelter classified the Artin-Schelter regular algebras of global dimension 3, that is, graded associative algebras over $\mathbb{C}$ with excellent homological properties. The algebras of type A with 3 generators in this classification are the Sklyanin algebras, with defining relations \begin{align*} axy+byx+cz^2=0,\\ ayz+bzy+cx^2=0,\\ azx+bxz+cy^2=0, \end{align*} with some restrictions on $(a:b:c) \in \mathbb{P}^2$. In \cite{artin2007some}, Artin, Tate and Van den Bergh showed that these Sklyanin algebras depend on an elliptic curve $E$ and a point $\tau \in E$. It was noticed by Smith and Tate in \cite{smith1994center} that the Heisenberg group $H_3$ of order 27 acts on such an algebra as gradation preserving automorphisms. In \cite{odesskii1989sklyanin}, Odesskii and Feigin generalised the Sklyanin algebras to every dimension $n\geq 3$ and defined the $n$-dimensional Sklyanin algebras $A_n(\tau,E)$, where for each $n$, $H_n$ again acts as algebra automorphisms on $A_n(\tau,E)$. In those cases, $A_n(\tau,E)_1 \cong V = \mathbb{C}x_0 + \ldots +\mathbb{C}x_{n-1}$ as an $H_n$-representation, with the action of $H_n$ given by \begin{displaymath} e_1 \cdot x_i = x_{i-1}, e_2 \cdot x_i= \omega^i x_i, \end{displaymath} $\omega$ being a primitive $n$th root of unity and the indices taken $\bmod n$. Since the Heisenberg group of order $n^3$ plays such an important role in the study of these Sklyanin algebras, other examples of graded Artin-Schelter regular algebras with $n$ generators on which $H_n$ acts as gradation preserving automorphisms would be desirable, and in the best case a complete classification. While this classification has been made for $n=3$, a classification for $n \geq 4$ has not yet been found. \par Another interesting class of algebras is given by graded Clifford algebras, i.e.
algebras depending on a quadratic form over a polynomial ring in a finite number of variables, whose associated matrix has entries of degree 2. Such algebras are always finite over their center and therefore they have a rich representation theory. \par This paper looks at algebras that belong to these two worlds: it will discuss graded algebras $\mathfrak{C}(a,b),(a,b) \in \mathbb{A}^2$ ($\mathfrak{C}(A:B:C), (A:B:C) \in \mathbb{P}^2$), with generators in degree 1 and relations of the form \begin{align*} x_1x_4+x_4x_1 = ax_0^2,&& x_2x_3+x_3x_2 = bx_0^2,\\ x_2x_0+x_0x_2 = ax_1^2,&& x_3x_4+x_4x_3 = bx_1^2,\\ x_3x_1+x_1x_3 = ax_2^2,&& x_4x_0+x_0x_4 = bx_2^2,\\ x_4x_2+x_2x_4 = ax_3^2,&& x_0x_1+x_1x_0 = bx_3^2,\\ x_0x_3+x_3x_0 = ax_4^2,&& x_1x_2+x_2x_1 = bx_4^2. \end{align*} We will call these $H_5$-Clifford algebras (although these algebras are not always Clifford algebras). Generically, an $H_5$-Clifford algebra $\mathfrak{C}(a,b)$ will be a graded Clifford algebra generated in degree 1, with associated quadratic form \begin{equation} \begin{bmatrix} 2x_0^2 & bx_3^2 & a x_1^2 & a x_4^2 & b x_2^2 \\ b x_3^2 & 2x_1^2 & b x_4^2 & a x_2^2 & a x_0^2 \\ a x_1^2 & b x_4^2 & 2 x_2^2 & b x_0^2 & a x_3^2 \\ a x_4^2 & a x_2^2 & b x_0^2 & 2 x_3^2 & b x_1^2 \\ b x_2^2 & a x_0^2 & a x_3^2 & b x_1^2 & 2 x_4^2 \end{bmatrix} \end{equation} on the polynomial ring $\mathbb{C}[x_0^2,x_1^2,x_2^2,x_3^2,x_4^2]$. $\mathfrak{C}(a,b)_1$ will be isomorphic as an $H_5$-representation to $V_1$, where $V_1$ is the unique simple representation such that for $z=[e_1,e_2], \varphi(z) = \omega I_5$ with $\omega = e^{\frac{2\pi i}{5}}$ and $\varphi:H_5 \rightarrow \mathbf{M}_5(\mathbb{C})$ the morphism determined by $V_1$. Crucial to describing these algebras will be the Koszul dual $\mathfrak{C}(a,b)^!$, which will be a commutative algebra with 5 generators and 5 homogeneous relations, and will therefore define a projective variety in $\mathbb{P}^4$.
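The $H_5$-invariance of the ten defining relations can be checked mechanically. Below is a small sketch (our own encoding, not part of the construction) that stores each relation $x_ix_j+x_jx_i = c\,x_k^2$ as a triple and verifies closure under $e_1 \colon x_i \mapsto x_{i-1}$, together with the weight balance $i+j \equiv 2k \pmod 5$ needed for invariance under $e_2 \colon x_i \mapsto \omega^i x_i$:

```python
p = 5

# Relation x_i x_j + x_j x_i = c * x_k^2 stored as (unordered pair, coefficient, k).
rels = set()
for k in range(p):
    rels.add((frozenset({(k + 1) % p, (k + 4) % p}), 'a', k))
    rels.add((frozenset({(k + 2) % p, (k + 3) % p}), 'b', k))

# e_1 sends x_i to x_{i-1}: the shifted relation set must coincide with rels.
shifted = {(frozenset({(i - 1) % p for i in pair}), c, (k - 1) % p)
           for (pair, c, k) in rels}
e1_invariant = (shifted == rels)

# e_2 sends x_i to w^i x_i: both sides of each relation pick up the same
# power of w exactly when i + j = 2k (mod p).
e2_invariant = all(sum(pair) % p == (2 * k) % p for (pair, c, k) in rels)

print(e1_invariant, e2_invariant)  # True True
```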
\par Sometimes we will make the 10 relations we are interested in homogeneous, i.e. we will look at the algebras with equations \begin{align*} C(x_1x_4+x_4x_1) = Ax_0^2, &&C(x_2x_3+x_3x_2) = Bx_0^2,\\ C(x_2x_0+x_0x_2) = Ax_1^2, &&C(x_3x_4+x_4x_3) = Bx_1^2,\\ C(x_3x_1+x_1x_3) = Ax_2^2, &&C(x_4x_0+x_0x_4) = Bx_2^2,\\ C(x_4x_2+x_2x_4) = Ax_3^2, &&C(x_0x_1+x_1x_0) = Bx_3^2,\\ C(x_0x_3+x_3x_0) = Ax_4^2, &&C(x_1x_2+x_2x_1) = Bx_4^2. \end{align*} and the following relations, which come naturally if $C=1$, but must be included when $C = 0$: \begin{align*} B(x_1x_4+x_4x_1) = A(x_2x_3+x_3x_2),\\ B(x_2x_0+x_0x_2) = A(x_3x_4+x_4x_3),\\ B(x_3x_1+x_1x_3) = A(x_4x_0+x_0x_4),\\ B(x_4x_2+x_2x_4) = A(x_0x_1+x_1x_0),\\ B(x_0x_3+x_3x_0) = A(x_1x_2+x_2x_1). \end{align*} Considering this, if we put $C = 0$, we still get an algebra with 10 quadratic relations, but we cannot hope to find Artin-Schelter regular algebras of global dimension 5 this way, since in these cases the algebras will not be of finite global dimension. \par In the last section we will generalise certain phenomena in dimension 5 to $H_p$-Clifford algebras, $p$ prime. The main result regarding quantum spaces will be \begin{theorem} There are exactly $p+1$ points (corresponding to the points in $\mathbb{P}^1_{\mathbb{F}_p}$) in $\mathbb{P}^{\frac{p-1}{2}}$ for which the corresponding $H_p$-Clifford algebra will be isomorphic to the quantum space $\mathbb{C}_{-1}[x_0,\ldots,x_{p-1}]$. \end{theorem} Considering the quantum spaces, we also have the following duality \begin{theorem} There is a 1-to-1 correspondence between the $PSL_2(p)$-orbit of the point $(1:0:\ldots:0)$ and the $PSL_2(p)$-orbit of the line $a_0 = 0$. This correspondence is a morphism of $PSL_2(p)$-sets and its action coincides with the canonical action of $PSL_2(p)$ on $\mathbb{P}^1_{\mathbb{F}_p}$.
\end{theorem} The main result regarding the regular $H_p$-Clifford algebras will be \begin{theorem} The character series of a regular $H_p$-Clifford algebra is the same as the character series of the polynomial ring $\mathbb{C}[x_0,\ldots,x_{p-1}]$, with the degree 1 part of the $H_p$-Clifford algebra isomorphic to the degree 1 part of $\mathbb{C}[x_0,\ldots,x_{p-1}]$ as an $H_p$-representation. \end{theorem} This means that, as an $H_p$-module, an $H_p$-Clifford algebra cannot be distinguished from the polynomial ring in $p$ variables. \subsection*{Notations} Throughout the paper, we will use the following notations and conventions \begin{itemize} \item $\mathbb{C}_{-1}[y_1,\ldots,y_k]$ is the noncommutative algebra with defining relations $y_i y_j = -y_j y_i, 1 \leq i < j \leq k$. \item For a subset $S \subset \mathbb{C}[y_1,\ldots,y_k]$, $\mathbf{V}(S)$ is the Zariski-closed subset defined by the elements of $S$. It will be clear from the context whether we look at subsets of $\mathbb{A}^k$ or $\mathbb{P}^{k-1}$. \item $\mu_n$ is the set of $n$th roots of unity in $\mathbb{C}$. \item If $G$ is a finite group and $V$ is a $G$-representation, then the associated character of $G$ is denoted by $\chi_V$. \end{itemize} \section*{Acknowledgements} The author would like to thank M. Van den Bergh, who proposed to look at these algebras and who has been a tremendous help along the way, and L. le Bruyn, who gave ideas on what to do with these algebras and tips on how to write a readable paper. The author would also like to thank T. Raedschelders for suggesting to include a section with preliminaries. \section{Preliminaries} Readers who are familiar with Koszul algebras, graded Clifford algebras, the representation theory of the Heisenberg group $H_p$ with $p$ prime and/or the connection between modular curves $X(p)$ and $H_p$ may skip this section. The last part of the subsection on Koszul algebras, however, will be unfamiliar to most readers.
\subsection{Koszul algebras} \begin{definition} Given a quadratic algebra $A = T(V)/I$ with generators $V = \mathbb{C}x_0+\ldots+\mathbb{C}x_n$ and relations given by $I_2$, we define the Koszul dual to be the quadratic algebra $T(V^*)/J$, with $J_2$ defined as the subspace of $V^* \otimes V^*$ such that $\forall w \in J_2, \forall v \in I_2: w(v) = 0$. \end{definition} We say that $A$ is Koszul iff $A^! \cong \Ext_A(\mathbb{C},\mathbb{C})$. The standard properties of Koszul algebras we will need are that there is a relation between the Hilbert series of $A$ and $A^!$, given by \begin{displaymath} H_A(t)H_{A^!}(-t) = 1 \end{displaymath} and that $A$ is Koszul iff $A^!$ is Koszul. \par An important fact concerning Koszul algebras is that the Koszul complex associated to these algebras is of the form \begin{displaymath} \xymatrix{ \ldots \ar[r] & A \otimes (A^!)_n^* \ar[r]^-{(d_K)_n} & A \otimes (A^!)_{n-1}^* \ar[r] & \ldots} \end{displaymath} and that \begin{displaymath} (A^!)_n^* = V^{\otimes n-2} \otimes I_2 \cap \ldots \cap I_2 \otimes V^{\otimes n-2}. \end{displaymath} $(d_K)_n$ is given by taking the first component of $(A^!)_n^*$ and absorbing it in $A$, for example $(d_K)_1(a \otimes x) = ax \in A$. It follows from this description that each $(d_K)_n$ is a $G$-morphism, whenever $G$ acts on $A$ as gradation preserving algebra automorphisms. This is useful for finding the character series when $G$ is a finite (or more generally, reductive) group. \begin{definition} Let $G$ be a finite group. The character series for an element $g \in G$ and for a graded algebra $A$ on which $G$ acts as gradation preserving automorphisms is a formal sum \begin{displaymath} Ch_A(g,t) = \sum_{n \in \mathbb{Z}} \chi_{A_n}(g) t^n. \end{displaymath} \end{definition} For example, if $g = 1$, $Ch_A(1,t) = H_A(t)$, the Hilbert series of $A$. 
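As a sanity check of the identity $H_A(t)H_{A^!}(-t) = 1$, one can take the standard example of the polynomial ring, whose Koszul dual is the exterior algebra; a minimal sketch:

```python
import sympy as sp

t = sp.symbols('t')
n = 3  # number of variables; any n works

H_poly = 1 / (1 - t)**n   # Hilbert series of C[x_1, ..., x_n]
H_ext = (1 + t)**n        # Hilbert series of its Koszul dual, the exterior algebra

identity = sp.simplify(H_poly * H_ext.subs(t, -t))
print(identity)  # 1
```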
Since a character of a representation is constant on conjugacy classes, we can represent the decomposition of $A$ in simple $G$-representations as a vector of length equal to the number of conjugacy classes and in the $i$th place the character series $Ch_A(g,t)$ with $g \in C_i$, the $i$th conjugacy class. \par Suppose now that $A$ is a Koszul algebra and that a finite group $G$ acts on it as gradation preserving automorphisms. Because the Koszul complex is a free resolution of the trivial module $\mathbb{C}$, which is isomorphic as $G$-representation to the trivial representation and because the Koszul complex consists of $G$-morphisms, we have a similar formula for finding the character series of the Koszul dual as we have for the Hilbert series. More precisely, we have \begin{align} Ch_A(g,t) Ch_{(A^!)^*}(g,-t) = 1. \label{al:chKos} \end{align} This allows us to compute $Ch_{A^!}(g,t)$ whenever we know $Ch_A(g,t)$. To know the character series of $A^!$, we have to take the complex conjugates of the coefficients of $Ch_{(A^!)^*}(g,t)$. \subsection{Graded Clifford algebras} This subsection will deal with the case we are interested in, but this is not the general definition. For more information, see \cite{LeBruyn1994d} or \cite{vancliff1998some}. \par Given our algebra $\mathfrak{C}(a,b)$, we can associate to it 5 quadratic equations in the following way: we can write our equations as $x_i x_j + x_j x_i = (M_k)_{ij}x_k^2$ with $M_k \in \mathbf{M}_5(\mathbb{C})$. Taking $\{z_0,\ldots,z_4\}$ as the basis of $(\mathfrak{C}(a,b)_1)^*$ such that $z_i(x_j)=\delta_{ij}$, we get 5 quadratic equations \begin{displaymath} q_k=[z_0,z_1,z_2,z_3,z_4] M_k \begin{bmatrix} z_0 \\ z_1 \\ z_2\\z_3\\z_4 \end{bmatrix}, k = 0,\ldots ,4 \end{displaymath} This way, we get a quadric system. $\cap_{i=0}^4\mathbf{V}(q_i)$ defines a Zariski closed set in $\mathbb{P}(\mathfrak{C}(a,b)_1)$ and a point in this closed set is called a base point. 
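To illustrate the construction just described, one can assemble the matrix $M_0$ from the relations involving $x_0^2$ (it is written out explicitly in the next paragraph) and expand the corresponding quadric symbolically; a sketch:

```python
import sympy as sp

a, b = sp.symbols('a b')
z = sp.symbols('z0:5')

# M_0 collects the relations with x_0^2 on the right-hand side:
# 2 x_0^2, x_1 x_4 + x_4 x_1 = a x_0^2 and x_2 x_3 + x_3 x_2 = b x_0^2.
M0 = sp.Matrix([[2, 0, 0, 0, 0],
                [0, 0, 0, 0, a],
                [0, 0, 0, b, 0],
                [0, 0, b, 0, 0],
                [0, a, 0, 0, 0]])

zvec = sp.Matrix(z)
q0 = sp.expand((zvec.T * M0 * zvec)[0])
print(q0)  # matches q_0 = 2 z0^2 + 2 a z1 z4 + 2 b z2 z3
```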
It is also clear that this set parametrizes the degree 1 elements of $\mathfrak{C}(a,b)$ whose square is 0, this follows from the definition of a Clifford algebra. The algebra $\mathbb{C}[z_0,z_1,z_2,z_3,z_4]/(q_0,q_1,q_2,q_3,q_4)$ is the Koszul dual $\mathfrak{C}(a,b)^!$ of $\mathfrak{C}(a,b)$. \par In our case, the matrix $M_0$ for example is given by \begin{displaymath} M_0 = \begin{bmatrix} 2&0&0&0&0\\ 0&0&0&0&a\\ 0&0&0&b&0\\ 0&0&b&0&0\\ 0&a&0&0&0 \end{bmatrix} \end{displaymath} and it follows that $q_0 = 2z_0^2+2az_1z_4+2bz_2z_3$. Since our relations are Heisenberg invariant, the other $q_k, k = 1,\ldots 4$ are easily found by cyclic permutation of the indices. \par The next theorem will be crucial. \begin{theorem}[\cite{cassidy2010generalizations}] A graded Clifford algebra $\mathfrak{C}$ is quadratic, Auslander-regular of global dimension $n$, satisfies the Cohen-Macaulay property and has as Hilbert series $\frac{1}{(1-t)^n}$ if and only if the associated quadric system is base-point free. If this is the case, $\mathfrak{C}$ is also a noetherian domain and Artin-Schelter regular. \label{th:Reg} \end{theorem} \begin{remark} While we will mainly work with graded Clifford algebras, we will also study quadratic algebras that are not Clifford algebras. These algebras will have as a quotient a graded Clifford algebra, the extra relations being implied by the fact that every square of an element of degree 1 is central in a graded Clifford algebra. \end{remark} \subsection{The finite Heisenberg group(s)} \begin{remark} While a Heisenberg group of order $n^3$ can be defined for every $n \in \mathbb{N}$, the discussion here will only hold for $n=p$ prime. This is mainly to make things easier regarding the representation theory of these groups. 
\end{remark} \begin{definition} The Heisenberg group of order $p^3$ is the finite group given by the generators and relations \begin{displaymath} H_p = \langle e_1,e_2,z | e_1^p = e_2^p = z^p = 1,[e_1,e_2] = z, e_1z = z e_1,e_2z = z e_2\rangle \end{displaymath} and it is a central extension of the group $\mathbb{F}_p \times \mathbb{F}_p$ \begin{displaymath} \xymatrixcolsep{4pc}\xymatrix{1 \ar[r]& \mathbb{F}_p \ar[r]^-{1 \mapsto z} & H_p \ar[r]^-{e_1 \mapsto (1,0)}_-{e_2 \mapsto (0,1)}&\mathbb{F}_p \times \mathbb{F}_p \ar[r]& 1}. \end{displaymath} \end{definition} All the 1-dimensional simple representations of $H_p$ are induced by the characters of $\mathbb{F}_p \times \mathbb{F}_p$. The other simple representations are $p$-dimensional and are determined by a primitive $p$th root of unity. They are defined in the following way: choose a primitive $p$th root of unity $\omega$, then define the following action of $H_p$ on the vector space $V = \mathbb{C}x_0 + \ldots + \mathbb{C}x_{p-1}$ \begin{displaymath} e_1 \cdot x_i = x_{i-1}, e_2 \cdot x_i= \omega^i x_i, \end{displaymath} indices taken $\bmod p$. Taking another primitive root gives another simple representation. This means that there are $p^2$ 1-dimensional and $p-1$ $p$-dimensional irreducible representations, which are all the simple ones. There are $p^2+p-1$ conjugacy classes, 1 for each central element, and each of the other $p^2-1$ classes contains a unique element of the form $e_1^a e_2^b$, $a,b \in \mathbb{F}_p, (a,b) \neq (0,0)$. \par The character of a simple $p$-dimensional representation $V$ is given by \begin{align*} \chi_V(z^k) &= p \omega^k \\ \chi_V(e_1^a e_2^b) &= 0, (a,b) \neq (0,0). \end{align*} Such a representation $V$ also defines an antisymmetric bilinear form on the $\mathbb{F}_p$-vector space $\mathbb{F}_p \times \mathbb{F}_p$.
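These representations are easy to realise concretely. The following sketch (our own construction) builds the matrices of $e_1$ and $e_2$ for $p=5$ and checks that they have order $p$ and satisfy $e_1e_2 = \omega\, e_2e_1$, so that the group commutator $[e_1,e_2]$ acts as the scalar $\omega$:

```python
import numpy as np

p = 5
w = np.exp(2j * np.pi / p)

# e_1 . x_i = x_{i-1}: in the basis (x_0, ..., x_{p-1}) this is a shift matrix.
E1 = np.zeros((p, p), dtype=complex)
for i in range(p):
    E1[(i - 1) % p, i] = 1.0

# e_2 . x_i = w^i x_i: a diagonal matrix.
E2 = np.diag([w**i for i in range(p)])

I = np.eye(p)
print(np.allclose(np.linalg.matrix_power(E1, p), I))  # e_1^p = 1
print(np.allclose(np.linalg.matrix_power(E2, p), I))  # e_2^p = 1
print(np.allclose(E1 @ E2, w * (E2 @ E1)))            # e_1 e_2 = w e_2 e_1
```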
Identifying $e_1$ and $e_2$ with their images in $\mathbb{F}_p \times \mathbb{F}_p$, we get this form by setting $\langle e_1,e_2 \rangle = \omega$ and extending it linearly to $\mathbb{F}_p \times \mathbb{F}_p$, thus \begin{displaymath} \langle a e_1 + b e_2,c e_1 + d e_2 \rangle = \omega^{ad-bc}. \end{displaymath} If we define a group morphism $\langle z \rangle \stackrel{\phi}{\longrightarrow} \mu_p$ by $\phi(z) = \omega$ (written multiplicatively in $\mu_p$), then we have a commutative diagram \begin{displaymath} \xymatrix{H_p \times H_p \ar[r] \ar[d]^-{[,]} &\mathbb{F}_p \times \mathbb{F}_p \ar[d]^-{\langle , \rangle} \\ \langle z \rangle \ar[r]^-\phi & \mu_p } \end{displaymath} Since every $p$-dimensional representation is determined by the image of $z$, every nontrivial antisymmetric bilinear form on $\mathbb{F}_p \times \mathbb{F}_p$ uniquely defines a simple representation of $H_p$. Conversely, every simple $p$-dimensional representation $V$ of $H_p$ defines a unique nontrivial antisymmetric bilinear form on $\mathbb{F}_p \times \mathbb{F}_p$ by extending linearly $\langle e_1,e_2 \rangle = \frac{\chi_V(z)}{p}$. \subsection{The modular curve $X(n)$} \label{sub:Mod} As is well known, the modular group $\Gamma = PSL_2(\mathbb{Z})$ acts on the complex upper half-plane \begin{displaymath} \mathbb{H}=\{x+iy | y>0\} \end{displaymath} by M\"obius transformations. The fundamental domain of this action defines isomorphism classes of elliptic curves and its compactification, made by adding the $\Gamma$-orbit $\overline{\mathbb{Q}}=\mathbb{Q}\cup \{\infty\}$, is the Riemann sphere $S^2$. In general, one can take any other group $G$ of finite index in $\Gamma$, find its fundamental domain in $\mathbb{H}$ and check what information a point in this domain holds. 
The modular curve $X(n), n \in\mathbb{N}$ is made this way by taking $G = \Gamma(n)$, with \begin{displaymath} \Gamma(n) = \left\lbrace \begin{bmatrix} a & b \\ c & d \end{bmatrix} \in \Gamma | a,d \equiv 1 \bmod n , b,c \equiv 0 \bmod n \right\rbrace. \end{displaymath} A point on $X(n)$ holds 3 pieces of information: \begin{itemize} \item an elliptic curve $(E,O)$, \item an embedding of $\mathbb{Z}/n\mathbb{Z} \times \mathbb{Z}/n\mathbb{Z}$ into $E$, or equivalently two generators $e_1,e_2$ of $E[n] = \{ P \in E| [n]P = O\}$. \item a primitive $n$th root of unity $\omega$ such that $\langle e_1,e_2 \rangle = \omega$, where this bilinear antisymmetric form is found by the Weil pairing. \end{itemize} $X(n)$ has an action of $PSL_2(n) = \Gamma/\Gamma(n)$ by definition. This action is defined by taking another set of generators of $E[n]$, $f_1,f_2$, but their inner product must still remain $\omega$. This defines an $SL_2(\mathbb{Z}/n\mathbb{Z})$-action, but since $-I_2$ acts trivially on $X(n)$, we have a $PSL_2(\mathbb{Z}/n\mathbb{Z})$-action. \par Now let $n=p$ be prime. As we have seen in the last subsection, a bilinear antisymmetric form on $\mathbb{F}_p \times \mathbb{F}_p$ defines a simple $H_p$-representation $V$. Let $P_1,P_2$ be a generating set of $E[p]$ and denote $P_{a,b} = [a]P_1+[b]P_2$, then there exists a (unique up to multiplication with a scalar) function $f$ on $E$ with divisor \begin{displaymath} -(P_{0,0} + \ldots + P_{0,p-1})+ P_{p-1,0}+\ldots + P_{p-1,p-1}. \end{displaymath} In \cite{silverman2009arithmetic} it is proved that there exists a primitive $p$th root of unity such that $\omega = \frac{f}{\phi_{P_2}^*(f)}$, where $\phi_{P}^*$ stands for the pullback under the morphism \begin{displaymath} E \stackrel{\phi}{\longrightarrow} E, \tau \mapsto \tau + P.
\end{displaymath} Calculating the divisor, one finds that the function \begin{displaymath} N(f)=f \phi_{P_1}^*(f)\ldots (\phi_{P_1}^*)^{p-1}(f) \end{displaymath} is constant and not 0, which means we can rescale $f$ so that $N(f) = 1$. We will now define an action of $H_p$ on the vector space \begin{displaymath} \mathcal{L}(P_{0,0} + \ldots + P_{0,p-1}) = H^0(E,\mathcal{O}(P_{0,0} + \ldots + P_{0,p-1})). \end{displaymath} Let $x_0 = 1$ and define \begin{align*} e_1 \cdot g = f\phi_{P_1}^*(g),\\ e_2 \cdot g= \phi_{P_2}^*(g). \end{align*} If we set $x_i =e_1^{p-i}\cdot x_0$, we find that \begin{align*} e_1\cdot x_i = x_{i-1}, \\ e_2\cdot x_i = \omega^i x_i. \end{align*} This defines our action of $H_p$. These global sections define an embedding of $E$ into $\mathbb{P}^{p-1}$ and it is clear that the defining equations will be $H_p$-invariant. \section{The case $n=3$} Before we start with the case $n=5$, let us see what happens when $n = 3$. In this case, the classification of Artin-Schelter regular algebras is known, see \cite{artin1987graded} and \cite{artin2007some}. \par When $n=3$, the algebras we want to study have the following relations for $t \in \mathbb{C}$ \begin{align*} xy+yx=t z^2, && yz+zy=t x^2, && zx+xz=t y^2. \end{align*} \begin{theorem} For generic values of $t$, the $H_3$-Clifford algebra $\mathfrak{C}(t)$ is a Sklyanin algebra associated to the elliptic curve \begin{displaymath} E \leftrightarrow t(x^3+y^3+z^3)+(2-t^3)xyz, O = (1,-1,0) \end{displaymath} and translation by the point $(1:1:-t)$, which is a point of order 2. \end{theorem} There are, however, $7$ values of $t \in \mathbb{C}$ for which the corresponding $H_3$-Clifford algebra is not a Sklyanin algebra: $t=2,2\omega,2\omega^2,-1,-\omega,-\omega^2,0$, with $\omega$ a primitive third root of unity.
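The two distinguished points in the theorem above can be sanity-checked directly: both the origin $O=(1:-1:0)$ and the translation point $(1:1:-t)$ satisfy the equation of $E$ for every $t$. A small numerical sketch (our own verification aid, not part of the text):

```python
import random

def curve(x, y, z, t):
    # The cubic from the theorem: t(x^3 + y^3 + z^3) + (2 - t^3) xyz.
    return t * (x**3 + y**3 + z**3) + (2 - t**3) * x * y * z

random.seed(0)
for _ in range(10):
    t = complex(random.uniform(-2, 2), random.uniform(-2, 2))
    # The base point O = (1 : -1 : 0) lies on the curve for every t ...
    assert abs(curve(1, -1, 0, t)) < 1e-9
    # ... as does the translation point (1 : 1 : -t).
    assert abs(curve(1, 1, -t, t)) < 1e-9
```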
The noncommutative algebra corresponding to $t=\infty$ is the algebra with relations \begin{align*} z^2=x^2=y^2=0 \end{align*} and is clearly not regular, so we have a total of 8 points in $\mathbb{P}^1$ where we do not have a Sklyanin algebra. When $t = 0,2,2\omega,2\omega^2$, the corresponding algebra is still regular, but in the other $4$ cases the Koszul dual $\mathfrak{C}(t)^!$ does not define the empty set. For $t = -1,-\omega,-\omega^2, \infty$, $\mathfrak{C}(t)^!$ defines a set of three points \begin{flalign*} t=-1 \longleftrightarrow& \{(1:1:1),(1:\omega:\omega^2),(1:\omega^2:\omega)\}, \\ t=-\omega \longleftrightarrow& \{(1:1:\omega^2),(1:\omega:\omega),(1:\omega^2:1)\}, \\ t=-\omega^2 \longleftrightarrow &\{(1:1:\omega),(1:\omega:1),(1:\omega^2:\omega^2)\},\\ t=\infty \longleftrightarrow &\{(1:0:0),(0:1:0),(0:0:1)\}. \end{flalign*} Notice that we have an action of $PSL_2(3)$ on the moduli space $\mathbb{P}^1$ such that algebras in the same orbit are isomorphic, although the $H_3$-representation is twisted (it is still isomorphic to $V_1$ as an $H_3$-module): suppose that we have an algebra with relations \begin{align*} xy+yx=t z^2, && yz+zy=t x^2, && zx+xz=t y^2. \end{align*} Then we can twist the Heisenberg action by an automorphism $\phi$ of $H_3$ and take the eigenvector with eigenvalue $1$ of $\phi(e_2)$ and its orbit under $\phi(e_1)$ as generators of this algebra (just as in the case $\phi=Id$, where $x_0$ is an eigenvector with eigenvalue 1 of $e_2$ and $x_i = e_1^{-i} x_0$). We want the twist to preserve the character of the representation, which means that we need to preserve the antisymmetric inner product on $\mathbb{F}_3 \times \mathbb{F}_3$ defined by $\langle e_1,e_2 \rangle = \omega$ (written multiplicatively), where $e_1$ and $e_2$ are identified with their images in $\mathbb{F}_3\times \mathbb{F}_3$ as in the previous section.
\par The condition $\langle \phi(e_1), \phi(e_2) \rangle = \omega$ means that we need to look at elements of $SL_2(3)$. The element $-I_2$, however, acts trivially on $\mathbb{P}^1$, so we get a $PSL_2(3)$-action. Using the following generators \begin{displaymath} U= \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}, V= \begin{bmatrix} 0 & 1 \\ -1 & 1 \end{bmatrix}, \end{displaymath} with $U^2 =V^3=I$, the action of $PSL_2(3)\cong A_4$ is given by the following M\"obius transformations: \begin{displaymath} U\leftrightarrow U'(t) =\frac{-t+2}{t+1}, V\leftrightarrow V'(t)=\frac{-\omega^2 t+2}{\omega^2 t+1}. \end{displaymath} This is a \emph{right} action, as it arises naturally by twisting the group morphism $\varphi: H_3 \rightarrow Aut(\mathfrak{C}(t))$ by an automorphism of $H_3$: given an automorphism $\psi:H_3 \rightarrow H_3$, we get a new group morphism $\varphi \circ \psi: H_3 \rightarrow Aut(\mathfrak{C}(t))$. \par Under this action, the sets $\{0,2,2\omega,2\omega^2\}$ and $\{\infty, -1,-\omega,-\omega^2\}$ are orbits, and the $PSL_2(3)$-action on each of them is identified with its action on $\mathbb{P}^1_{\mathbb{F}_3}$. \begin{example} Suppose we want to calculate the action of \begin{displaymath} M=VU=\begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix} \end{displaymath} on our moduli space $\mathbb{P}^1$. Taking our algebra $\mathfrak{C}(t)$ (where we allow $t = \infty$), this means we need to find the eigenvector of $M \cdot e_2$ with eigenvalue $1$. In this particular case, this eigenvector is still $x_0$, but now $x_2$ is defined as $(e_1 e_2)x_0$ instead of $e_1 x_0$. So if we take $y_0,y_1,y_2$ as the generators of our isomorphic algebra, we have \begin{align*} y_0 &= x_0,\\ y_1 &= \omega^2 x_1, \\ y_2 &= x_2. \end{align*} A quick calculation shows that $y_0 y_1 + y_1 y_0 = \omega^2 t y^2_2$ and so the corresponding M\"obius transformation is given by $t \mapsto \omega^2 t$, with fixed points $0$ and $\infty$.
This shows it is a right action: the composition $U'\circ V'$ is equal to the action of $M$, whereas $M = VU$. \end{example} In this case, the action of $PSL_2(3)$ is not the projectivization of a 2-dimensional representation, because every $PSL_2(3)$-representation of dimension 2 is a direct sum of 1-dimensional representations. This is impossible here, because then every element of order 2 would act trivially, and this is clearly not the case. \par Another thing of importance is that we have a duality between the quantum spaces (the algebras isomorphic to the algebra $\mathfrak{C}(0)$) and the 4 nonregular algebras, given by the point varieties of the regular ones and the Koszul duals of the nonregular ones. Before we give this duality, we recall the definition of a point module. \begin{definition} Let $B$ be a positively graded, connected $\mathbb{C}$-algebra generated in degree 1. A point module $M$ is a cyclic, graded left $B$-module with Hilbert series $\frac{1}{1-t}$, generated in degree 0, so $M = B M_0$. \end{definition} \begin{example} For the quantum space $\mathbb{C}_{-1}[x_0,\ldots,x_{p-1}]$, the point modules are described by the full graph on $p$ points, with the $p$ vertices given by the $H_p$-orbit of the point $(1:0:\ldots:0)$. \end{example} \par The duality we want to show is given in the following way: let $\mathfrak{C}(t_1)$ be an algebra isomorphic to the quantum plane. Then its point variety (parametrizing the point modules of this algebra) is given by the full graph on 3 points, with the 3 vertices forming a single $H_3$-orbit. There exists a unique nonregular algebra $\mathfrak{C}(t_2)$ in our moduli space such that its Koszul dual $\mathfrak{C}(t_2)^!$ has this $H_3$-orbit as its point variety. For example, the point variety of the quantum plane $\mathfrak{C}(0)$ \begin{displaymath} xy+yx=xz+zx=yz+zy=0 \end{displaymath} is given by the three lines $x=0,y=0,z=0$, and they intersect pairwise in 3 points. These points are the solutions of the equations $xy=xz=yz=0$.
These equations are the relations of the Koszul dual of the algebra with relations \begin{displaymath} x^2=y^2=z^2=0, \end{displaymath} which is the algebra $\mathfrak{C}(\infty)$. \begin{theorem} There is a 1-to-1 correspondence between the $H_3$-Clifford algebras isomorphic to $\mathbb{C}_{-1}[x_0,x_1,x_2]$ and the nonregular $H_3$-Clifford algebras, with the correspondence being given in the following way: the regular $H_3$-Clifford algebra $\mathfrak{C}(t)$ corresponds to the nonregular $\mathfrak{C}(t')$ iff $t$ and $t'$ are fixed by the same subgroup of order 3 in $PSL_2(3)$. This correspondence gives a natural bijection between $\mathbb{P}^1_{\mathbb{F}_3}$ and the points in $\mathbb{P}^1$ with corresponding $H_3$-Clifford algebra isomorphic to the quantum plane. \end{theorem} \begin{remark} Although $4$ is not prime, one can ask the natural question of what happens if $n = 4$. This case, however, is not very interesting: there are no quadratic algebras $\mathfrak{C}$ with defining relations $x_ix_j+x_jx_i=a_{ij}x_k^2$ and with degree 1 part isomorphic to $V_1$, $V_1$ being the $H_4$ representation with $z$ acting as $iI_4$ (apart from the quantum space with all $a_{ij}=0$). There are no squares of elements in $V_1$ on which $e_2$ acts as multiplication by $\pm i$, so this forces $x_i x_j + x_j x_i = 0$ whenever $j-i \equiv 1 \bmod 2$. When $j-i \equiv 0 \bmod 2$, we need to define $x_0 x_2 +x_2 x_0$ and $x_1 x_3+x_3 x_1$, so we need to find elements $v$ and $w$ in degree 1 such that the action on $v^2$ and $w^2$ by $e_2$ is given by multiplication with $-1$ and $1$ respectively, with the extra condition that $e_1$ permutes $v$ and $w$. Such elements are impossible to find (except when $v=w=0$), so we are done. \end{remark} By the last remark we have proved \begin{theorem} The only $H_4$-Clifford algebra is $\mathbb{C}_{-1}[x_0,x_1,x_2,x_3]$, which is regular. \end{theorem} \section{The case $n=5$} From now on, $\omega = e^\frac{2\pi i}{5}$.
Similar to the case of $n=3$, we have a right action of $PSL_2(5) \cong A_5$ on $\mathbb{P}^2$ that gives isomorphic algebras in the moduli space. This action is found using the exact same procedure as in the previous section. Again using $U,V$ as generators of $PSL_2(5)$, we get the following matrices \begin{displaymath} U\leftrightarrow \frac{1}{\sqrt{5}}\begin{bmatrix} \omega^2+\omega^3 & \omega+\omega^4 & 2 \\ \omega+\omega^4 & \omega^2+\omega^3 & 2 \\ 1&1&1 \end{bmatrix}, V\leftrightarrow \frac{1}{\sqrt{5}}\begin{bmatrix} \omega+\omega^2 & \omega^2+1 & 2 \\ \omega^3+1 & \omega^3+\omega^4 & 2 \\ \omega^4 & \omega &1 \end{bmatrix} \end{displaymath} and now this projective representation comes from a simple representation of $A_5$, the icosahedron representation. The factor $\frac{1}{\sqrt{5}}$ is necessary to get an $A_5$ representation, but it doesn't matter for our algebras. To discuss these algebras, we will sometimes use the algebras which correspond to points at infinity, because their equations are easier to work with. \subsection{The nonregular algebras} The nicest algebras are those that are noetherian domains, so to find them we will use theorem \ref{th:Reg}. Using this theorem and the fact that the equations of the quadric system correspond to the relations of the Koszul dual, we need to analyse the following equations \begin{subequations} \begin{equation} x_0^2+ax_1x_4+bx_2x_3=0, \end{equation} \begin{equation} x_1^2+ax_2x_0+bx_3x_4=0, \end{equation} \begin{equation} x_2^2+ax_3x_1+bx_4x_0=0, \end{equation} \begin{equation} x_3^2+ax_4x_2+bx_0x_1=0, \end{equation} \begin{equation} x_4^2+ax_0x_3+bx_1x_2=0 \end{equation} \label{eq:koszul} \end{subequations} and determine when the only solution is given by $(0,0,0,0,0)$. \par We can also look at the $H_5$-Clifford algebras corresponding to points on the line at infinity, so that we get a projective variety for every point in our moduli space $\mathbb{P}^2$. 
We do this by changing $(a,b)$ to $(A:B:C)$ and putting a $C$ before every $x_i^2$. However, since every point on the line at infinity of $\mathbb{P}^2$ is in the $PSL_2(5)$-orbit of an affine point, this is not always interesting. The points at infinity will, however, give equations that are easier to handle in some cases. \begin{theorem} Generically, the $H_5$-Clifford algebra $\mathfrak{C}(A:B:C)$ is a regular graded Clifford algebra. \end{theorem} \begin{proof} Calculating the Gr\"obner basis of the relations of $\mathfrak{C}(a,b)^!$ using Mathematica, one finds that the following element belongs to the ideal $I$ defined by the equations from \ref{eq:koszul} \begin{equation} (1+a^5-4ab+a^6b+5a^3b^3+b^5+ab^6)x_4^6. \end{equation} This means that, if $1+a^5-4ab+a^6b+5a^3b^3+b^5+ab^6 \neq 0$, then $x_4^6 \in I$. But $I$ is closed under the action of the Heisenberg group, so every $x_i^6$ belongs to $I$. This implies that $I$ defines the empty set and so $\mathfrak{C}(a,b)$ has all the good properties we desire. \end{proof} \par So generically, we get a regular graded Clifford algebra. The only possible `bad' algebras are given by the curve $1+a^5-4ab+a^6b+5a^3b^3+b^5+ab^6 =0$. Adding the line $C=0$, decomposing the equation $1+a^5-4ab+a^6b+5a^3b^3+b^5+ab^6 =0$ and looking at the projective closure of this variety, we get 6 lines and a conic section \begin{subequations} \begin{equation} C = 0, \end{equation} \begin{equation}C+A+B=0, \end{equation} \begin{equation} C+\omega A + \omega^4 B=0, \end{equation} \begin{equation} C+\omega^4A+\omega B=0, \end{equation} \begin{equation} C+\omega^2 A + \omega^3 B=0, \end{equation} \begin{equation} C+\omega^3 A + \omega^2 B=0, \end{equation} \begin{equation} AB + C^2 = 0, \end{equation} \label{eq:6lines} \end{subequations} where the regularity condition possibly fails. Notice that the 6 lines we found form an orbit under $PSL_2(5)$ (as we expected).
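The decomposition into 6 lines and a conic can be double-checked in the affine chart $C=1$, where $C=0$ is the line at infinity: it amounts to the polynomial identity $1+a^5-4ab+a^6b+5a^3b^3+b^5+ab^6 = (1+ab)\prod_{k=0}^{4}(1+\omega^k a+\omega^{-k}b)$. The following sketch (a verification aid, not the Mathematica computation used in the proof) tests this identity at random points:

```python
import cmath
import random

w = cmath.exp(2j * cmath.pi / 5)

def original(a, b):
    return 1 + a**5 - 4*a*b + a**6*b + 5*a**3*b**3 + b**5 + a*b**6

def factored(a, b):
    prod = 1 + a * b            # the conic AB + C^2 in the chart C = 1
    for k in range(5):          # the five lines C + w^k A + w^{-k} B
        prod *= 1 + w**k * a + w**(-k) * b
    return prod

random.seed(1)
for _ in range(20):
    a = complex(random.uniform(-1, 1), random.uniform(-1, 1))
    b = complex(random.uniform(-1, 1), random.uniform(-1, 1))
    assert abs(original(a, b) - factored(a, b)) < 1e-9
```

Since both sides are polynomials of degree 7, agreement at sufficiently many generic points confirms the identity.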
This means that, if we want to analyse how `far' these algebras are from being Artin-Schelter regular, we can restrict ourselves to studying the points on the line $C=0$ and on the conic section $AB+C^2=0$. For the line $C=0$, this boils down to describing the zero set of the relations \begin{subequations} \begin{equation} Ax_1x_4+Bx_2x_3=0, \end{equation} \begin{equation} Ax_2x_0+Bx_3x_4=0, \end{equation} \begin{equation} Ax_3x_1+Bx_4x_0=0, \end{equation} \begin{equation} Ax_4x_2+Bx_0x_1=0, \end{equation} \begin{equation} Ax_0x_3+Bx_1x_2=0, \end{equation} \label{eq:lininf} \end{subequations} excluding the case $(A,B)=(0,0)$. For a generic point on the line $C=0$, one can use Macaulay2 to find that the Hilbert series of $\mathfrak{C}(A:B:0)^!$ is given by \begin{displaymath} \frac{1+4t+5t^2-5t^4}{1-t}. \end{displaymath} So it follows that generically, the $H_5$-orbit of $(1:0:0:0:0)$ is the solution set of the equations from \ref{eq:lininf}. However, when one calculates the Gr\"obner basis (which determines the Hilbert series) using Mathematica, there are 7 points on this line where this (possibly) fails: \begin{align*} (A:B:C)&=(1:-\omega^k:0), k = 0\ldots 4,\\ (A:B:C)&=(1:0:0),\\ (A:B:C)&=(0:1:0). \end{align*} The generic Hilbert series indeed fails at these points. Calculating the Hilbert series in each case, we find \begin{theorem} For a point on the line $C = 0$, the algebra $\mathfrak{C}(A:B:C)^!$ determines the $H_5$-orbit of $(1:0:0:0:0)$ as Zariski-closed subset of $\mathbb{P}^4$, except in the following 7 cases: \begin{itemize} \item For $k=0,\ldots,4$, the point $(1:-\omega^k:0)$ is the intersection of the line $C=0$ and the line $C + \omega^{3k} A +\omega^{-3k} B = 0$. At the point $(1:-\omega^k:0)$, the projective variety determined by $\mathfrak{C}(1:-\omega^k:0)^!$ is given by 10 points. These 10 points form the union of 2 $H_5$-orbits, each orbit consisting of $5$ elements. Representatives of these 2 orbits are given by $(1:0:0:0:0)$ and $(1:1:\omega^{-2k}:\omega^{-k}:\omega^{-2k})$.
The Hilbert series of the commutative algebra $\mathfrak{C}(A:B:C)^!$ is given by \begin{displaymath} \frac{1+4t+5t^2}{1-t}. \end{displaymath} \item For $(A:B:C)=(1:0:0)$, we get 5 lines, $\cup_{i=0}^4 \mathbf{V}(x_i,x_{i+1},x_{i+2})$. The Hilbert series of $\mathfrak{C}(1:0:0)^!$ is given by \begin{displaymath} \frac{1+3t+t^2}{(1-t)^2}. \end{displaymath} The corresponding configuration with its vertices the 5 points in the orbit of $(1:0:0:0:0)$ is given by figure \ref{fig:Configuration 1}. \begin{center} \begin{figure}[H] \begin{tikzpicture}[style=thick] \draw (18:3cm) circle (2pt) node[above right=-1.75pt]{$(0:1:0:0:0)$} -- (90:3cm); \draw (90:3cm) circle (2pt) node[above]{$(1:0:0:0:0)$} -- (90+72:3cm); \draw (90+72:3cm) circle (2pt) node[above left=-1.75pt]{$(0:0:0:0:1)$} -- (90+72+72:3cm); \draw (90+72+72:3cm) circle (2pt) node[below]{$(0:0:0:1:0)$} -- (90+72+72+72:3cm); \draw (90+72+72+72:3cm) circle (2pt) node[below]{$(0:0:1:0:0)$} -- (90+72+72+72+72:3cm); \end{tikzpicture} \caption{First configuration} \label{fig:Configuration 1} \end{figure} \end{center} \item For $(A:B:C) = (0:1:0)$, there are again 5 lines, now given by $\cup_{i=0}^4 \mathbf{V}(x_i,x_{i+1},x_{i+3})$. The Hilbert series of $\mathfrak{C}(0:1:0)^!$ is given by \begin{displaymath} \frac{1+3t+t^2}{(1-t)^2}. \end{displaymath} The corresponding configuration is given by figure \ref{fig:Configuration 2}.
\begin{center} \begin{figure}[H] \begin{tikzpicture}[style=thick] \draw (18:3cm) circle (2pt) node[above right=-1.75pt]{$(0:1:0:0:0)$} -- (90+72:3cm); \draw (90+72:3cm) circle (2pt) node[above left=-1.75pt]{$(0:0:0:0:1)$} -- (90+72+72+72:3cm); \draw (90+72+72+72:3cm) circle (2pt) node[below]{$(0:0:1:0:0)$} -- (90:3cm); \draw (90:3cm) circle (2pt) node[above]{$(1:0:0:0:0)$} -- (90+72+72:3cm); \draw (90+72+72:3cm) circle (2pt) node[below]{$(0:0:0:1:0)$} -- (18:3cm); \end{tikzpicture} \caption{Second configuration} \label{fig:Configuration 2} \end{figure} \end{center} \end{itemize} \end{theorem} For a point on the conic section $AB+C^2=0$, the relations from equation \ref{eq:koszul} determine a smooth genus 1 curve, except when $A=0$, $B=0$ or \begin{align*} (A:B:C) &= (\omega^k (\omega^2+\omega^3),\omega^{-k}(\omega+\omega^4),1),k=0,\ldots 4,\\ (A:B:C) &= (\omega^k (\omega+\omega^4),\omega^{-k}(\omega^2+\omega^3),1),k=0,\ldots 4. \end{align*} These 12 points are of course the intersection points with the 6 lines from equation \ref{eq:6lines}; that these are the only exceptions follows from calculations using Mathematica and Macaulay2. For every affine point $(a,\frac{-1}{a})$ excluding the 12 special points, the point $(0:1:a:-a:-1)$ and its $H_5$-orbit lie on the curve defined by \ref{eq:koszul}. Taking this point to be the point $O$, the corresponding curve is an elliptic curve $E$, and the $H_5$-orbit of $O$ gives an embedding of $\mathbb{F}_5 \times \mathbb{F}_5$ in $E$. A $PSL_2(5)$-orbit on this conic section determines an isomorphism class of elliptic curves, but changes the chosen generators of $E[5]$.
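Two of the claims above are easy to test directly: at $(A:B:C)=(1:-\omega^k:0)$ the orbit representative $(1:1:\omega^{-2k}:\omega^{-k}:\omega^{-2k})$ solves the quadric system \ref{eq:koszul}, and on the conic (where $b=-\frac{1}{a}$) the base point $(0:1:a:-a:-1)$ does as well. A short sketch (again only a numerical sanity check of our own):

```python
import cmath

w = cmath.exp(2j * cmath.pi / 5)

def system(x, A, B, C):
    # The quadric system: C x_i^2 + A x_{i+1} x_{i-1} + B x_{i+2} x_{i-2},
    # indices taken mod 5.
    return [C * x[i]**2
            + A * x[(i + 1) % 5] * x[(i - 1) % 5]
            + B * x[(i + 2) % 5] * x[(i - 2) % 5] for i in range(5)]

tol = 1e-9

# (i) At (A:B:C) = (1 : -w^k : 0), the representative
#     (1 : 1 : w^{-2k} : w^{-k} : w^{-2k}) solves the system.
for k in range(5):
    x = [1, 1, w**(-2 * k), w**(-k), w**(-2 * k)]
    assert all(abs(v) < tol for v in system(x, 1, -w**k, 0))

# (ii) On the conic AB + C^2 = 0, i.e. b = -1/a in the affine chart,
#      the base point O = (0 : 1 : a : -a : -1) lies on the quadric system.
a = 0.7 + 0.3j
x = [0, 1, a, -a, -1]
assert all(abs(v) < tol for v in system(x, a, -1 / a, 1))
```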
These new generators, however, have the same Weil pairing as the original generators (as they determine the representation in degree 1, which is unchanged) and therefore the conic section $\mathbf{V}(AB+C^2)-\{12 \text{ points}\}$ with the $PSL_2(5)$-action is a model for the modular curve $X(5)$. The extra 12 points give the compactification of $X(5)$. Summarizing, we have the following theorem: \begin{theorem} For the points on the curve $\mathbf{V}(AB+C^2)-\{12 \text{ points}\}$, the corresponding algebra $\mathfrak{C}(A:B:C)^!$ is the homogeneous coordinate ring of an elliptic curve $E$ embedded in $\mathbb{P}^4$ with $O=(0:1:a:-a:-1)$. Furthermore, the point $(A:B:C)$ determines an embedding of $\mathbb{F}_5 \times \mathbb{F}_5$ in $E$ and a fixed $\mathbb{F}_5$-basis $(e_1 \cdot O, e_2 \cdot O)$. Conversely, every point in $X(5)$ determines a unique point on $\mathbf{V}(AB+C^2)$ and thus $\mathbf{V}(AB+C^2)$ is a model for $X(5)$, with the 12 extra points those needed to form the compactification $\overline{X(5)}$. \end{theorem} \par Now, choose a line from equation \ref{eq:6lines}. This line will intersect $\mathbf{V}(AB+C^2)$ in exactly 2 points, $P_1$ and $P_2$. The commutative algebra corresponding to $P_1$ will describe the union of 5 lines that intersect pairwise. The intersection points will be an $H_5$-orbit of 5 elements and the configuration will be as in figure \ref{fig:Configuration 1}. The point $P_2$ will also determine the union of 5 lines that intersect pairwise, with the intersection points the same as for $P_1$, but now the configuration will be as in figure \ref{fig:Configuration 2}. \subsection{Being Koszul} This section gives a proof of the following theorem. \begin{theorem} Every $H_5$-Clifford algebra $\mathfrak{C}(A:B:C)$ is Koszul, except for the points on the 6 lines given by the $PSL_2(5)$-orbit of $C=0$ that do not lie on the conic section $\mathbf{V}(AB+C^2)$.
\end{theorem} \begin{proof} When $\mathfrak{C}(A:B:C)$ is regular (the generic case), we can use theorem \ref{th:Reg} and apply theorem 2.2 of \cite{shelton2001koszul} to conclude that $\mathfrak{C}(A:B:C)$ is Koszul. When the Koszul dual determines an elliptic curve, it follows from the Koszulity of the homogeneous coordinate ring of an elliptic curve that the algebras of the form (choosing representatives such that the sum of the indices is 0) \begin{align*} x_1x_4+x_4x_1 = a x_0^2, && x_2x_3+x_3x_2 = \frac{-1}{a} x_0^2 \end{align*} are indeed Koszul whenever the point $(a,\frac{-1}{a})$ determines an elliptic curve. For the 12 points on $\mathbf{V}(AB+C^2)$ that determine 5 lines, Koszulity follows from the fact that in this case $\mathfrak{C}(A:B:C)$ is isomorphic to $\mathfrak{C}(1:0:0)$. The relations of $\mathfrak{C}(1:0:0)^!$ are given by 5 monomials of degree 2 (and the commutator relations of course). Then theorem 3.15 of \cite{conca2013koszul} gives us that $\mathfrak{C}(1:0:0)^!$ is indeed Koszul, but then $\mathfrak{C}(1:0:0)$ is also Koszul. \par These are all the Koszul algebras. Assume that $(A:B:C)$ lies on one of the 6 lines from \ref{eq:6lines} (excluding the 12 points lying on the conic section and the 15 points at the intersections of the lines) and that $\mathfrak{C}(A:B:C)$ is Koszul. Then $\mathfrak{C}(A:B:C)$ has as Hilbert series \begin{displaymath} \frac{1+t}{1-4t+5t^2-5t^4}= 1+5 t+15 t^2+35 t^3+70 t^4+130 t^5+\ldots \end{displaymath} This would mean that the Hilbert series of this algebra is equal to $\frac{1}{(1-t)^5}$ up to degree 4. But then it follows from the fact that this algebra has a graded Clifford algebra as a quotient that it should have Hilbert series $\frac{1}{(1-t)^5}$, which is not the case.
\par The same argument also applies when the Koszul dual of $\mathfrak{C}(A:B:C)$ determines 10 points, because then its Hilbert series should be \begin{displaymath} \frac{1+t}{1-4t+5t^2} = 1+5 t+15 t^2+35 t^3+65 t^4+85 t^5+\ldots \end{displaymath} and this is clearly not $\frac{1}{(1-t)^5}$. \end{proof} \subsection{The quantum planes and their relationship with the 6 lines} $PSL_2(5)$ has six $5$-Sylow subgroups, each one stabilizing one of the 6 lines of equation \ref{eq:6lines}. However, such a subgroup does not fix the entire line pointwise, but rather 2 points on it (the intersections with $\mathbf{V}(AB+C^2)$). Since these subgroups are cyclic and all conjugate, it suffices to work with one $5$-Sylow subgroup, e.g. the group generated by $VU$. The corresponding matrix is given by \begin{displaymath} \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix} \leftrightarrow\begin{bmatrix} \omega^4 & 0 & 0 \\ 0 & \omega & 0 \\ 0&0&1 \end{bmatrix}, \end{displaymath} so the third point fixed by this matrix is $(0:0:1)$, with corresponding algebra the quantum space $\mathfrak{C}(0:0:1)$ with relations $x_k x_l +x_l x_k = 0, k \neq l$. The other points in the $PSL_2(5)$-orbit of $(0:0:1)$ are given by $(2\omega^k:2\omega^{-k}:1), k = 0,\ldots 4$. This gives us 6 points with corresponding Clifford algebra isomorphic to the quantum space $\mathfrak{C}(0:0:1)$, with the $PSL_2(5)$-action the same as its action on $\mathbb{P}^1_{\mathbb{F}_5}$. \par Again we have a certain duality regarding these quantum algebras and the 6 lines from equation \ref{eq:6lines}. We will explain it using the example of $(0:0:1)$ and the line $C=0$. The point modules of the quantum space in this case are parametrized by the full graph on 5 points, with its vertices given by the $H_5$-orbit of $(1:0:0:0:0)$. The configuration is given by figure \ref{fig:fullgraph}.
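The power series expansions used in the Koszulity proof above are easy to reproduce: the coefficients of a rational series satisfy the linear recurrence given by the denominator. A quick sketch (a verification aid; the helper `series` is our own):

```python
from fractions import Fraction
from math import comb

def series(num, den, n):
    # Coefficients of num(t)/den(t) up to degree n; den[0] must be 1.
    c = []
    for i in range(n + 1):
        v = Fraction(num[i] if i < len(num) else 0)
        for j in range(1, min(i, len(den) - 1) + 1):
            v -= den[j] * c[i - j]
        c.append(v)
    return c

# (1+t)/(1-4t+5t^2-5t^4) = 1 + 5t + 15t^2 + 35t^3 + 70t^4 + 130t^5 + ...
assert series([1, 1], [1, -4, 5, 0, -5], 5) == [1, 5, 15, 35, 70, 130]

# (1+t)/(1-4t+5t^2) = 1 + 5t + 15t^2 + 35t^3 + 65t^4 + 85t^5 + ...
assert series([1, 1], [1, -4, 5], 5) == [1, 5, 15, 35, 65, 85]

# 1/(1-t)^5 has coefficients binom(n+4,4): the first series agrees with it
# up to degree 4 but differs in degree 5 (126 versus 130).
assert [comb(n + 4, 4) for n in range(5)] == [1, 5, 15, 35, 70]
assert comb(9, 4) == 126
```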
\begin{center} \begin{figure}[H] \begin{tikzpicture}[style=thick] \draw (18:3cm) circle (2pt) node[above right=-1.75pt]{$(0:1:0:0:0)$} -- (90:3cm); \draw (18:3cm) circle (2pt) -- (90+72:3cm); \draw (18:3cm) circle (2pt) -- (90+2*72:3cm); \draw (90:3cm) circle (2pt) -- (90+3*72:3cm); \draw (90:3cm) circle (2pt) -- (90+2*72:3cm); \draw (90+72:3cm) circle (2pt) -- (90+3*72:3cm); \draw (90:3cm) circle (2pt) node[above]{$(1:0:0:0:0)$} -- (90+72:3cm); \draw (90+72:3cm) circle (2pt) node[above left=-1.75pt]{$(0:0:0:0:1)$} -- (90+72+72:3cm); \draw (90+72+72:3cm) circle (2pt) node[below]{$(0:0:0:1:0)$} -- (90+72+72+72:3cm); \draw (90+72+72+72:3cm) circle (2pt) node[below]{$(0:0:1:0:0)$} -- (90+72+72+72+72:3cm); \end{tikzpicture} \caption{The full graph on 5 points} \label{fig:fullgraph} \end{figure} \end{center} The graph of figure \ref{fig:fullgraph} is the union of the graphs of figures \ref{fig:Configuration 1} and \ref{fig:Configuration 2}, which were the projective varieties determined by $\mathfrak{C}(1:0:0)^!$ and $\mathfrak{C}(0:1:0)^!$. More generally, for every point on the line $C=0$, the $H_5$-orbit of $(1:0:0:0:0)$ lies in the zero set of $\mathfrak{C}(A:B:C)^!$; these points are the vertices of the graph of figure \ref{fig:fullgraph}. Summarizing, we have \begin{theorem} There is a 1-to-1 correspondence between the $PSL_2(5)$-orbit of $(0:0:1)$ and the $PSL_2(5)$-orbit of the line $C=0$. Moreover, for every line in the $PSL_2(5)$-orbit of the line $C=0$, there are 2 points (the intersections with $\mathbf{V}(AB+C^2)$) whose Koszul duals each determine a graph on 5 points. The union of these 2 graphs gives the full graph on 5 points, which is the point variety of the corresponding algebra isomorphic to the quantum space.\\ The 2 points of $\mathbf{V}(AB+C^2)$ that correspond to the same quantum space are determined by the fact that they are fixed by the same cyclic subgroup of order $5$ in $PSL_2(5)$.
\end{theorem} \section{Generalities for prime dimension} In this section, $\omega = e^{\frac{2\pi i}{p}}$ with $p\geq 5$ prime. Much of what was described in the previous section for the case $p=5$ extends to arbitrary prime dimension $p$. The relations we are now interested in are given by the $H_p$-action on (indices are taken $\bmod p$) \begin{align*} a_0(x_0x_i + x_i x_0) = a_i x_{\frac{i}{2}}^2, 0 < i \leq \frac{p-1}{2}, \end{align*} so our moduli space is given by $\mathbb{P}^{\frac{p-1}{2}}$ and again we have a $PSL_2(p)$-action on this space. \subsection{The quantum spaces} We will prove here the following theorem. \begin{theorem} There are exactly $p+1$ points in $\mathbb{P}^{\frac{p-1}{2}}$ for which the corresponding algebra $\mathfrak{C}(a_0:\ldots:a_{\frac{p-1}{2}})$ is isomorphic to the algebra with relations $x_i x_j + x_jx_i = 0,i \neq j$ and they form an orbit under the $PSL_2(p)$-action. \label{th:quantum} \end{theorem} Before we prove this, some considerations must be made. First, we must find the elements of $\mathbb{P}^{p-1}$ with a nontrivial stabilizer in $H_p$. This is equivalent to finding the eigenvectors of all the elements of $H_p$. Since a central element has every vector as eigenvector (and therefore acts as the identity on $\mathbb{P}^{p-1}$), it is sufficient to consider elements of the form $e_1^k e_2^l$. Since $e_1 e_2 = z e_2 e_1$ with $z$ central, we can take a suitable power $(e_1^k e_2^l)^m$ to get an element of the form $z^r e_1^{-1} e_2^t$, unless $k=0$. Disregarding the part $z^r$, it suffices to determine the eigenvectors of the elements $e_1^{-1} e_2^t,t=0,\ldots, p-1$ and the element $e_2$. Of course, points belonging to the same orbit have the same stabilizer (since the $H_p$-action on $\mathbb{P}^{p-1}$ is actually an action of $\mathbb{F}_p \times \mathbb{F}_p$ and this group is commutative), so it is sufficient to give one representative of each such orbit.
This means we have a total of $p+1$ orbits in $\mathbb{P}^{p-1}$ with the property that such an orbit consists of $p$ elements instead of $p^2$ elements. \begin{lemma} The fixed points of $e_1^{-1}e_2^k, k=0,\ldots,p-1$ are given by the $H_p$-orbit of the following element: \begin{displaymath} (1:\omega^k:\omega^{3k}:\ldots : \omega^{k\frac{p(p-1)}{2}}), \end{displaymath} which has as its $i$th coordinate $\omega^{k\frac{i(i+1)}{2}}$, if we start counting from 0. The fixed points of $e_2$ are given by the $H_p$-orbit of $(1:0:\ldots:0)$. These are all the points with a nontrivial stabilizer. \label{lem:fixpoint} \end{lemma} \begin{proof} We again identify the image of $e_1$ and $e_2$ in $\mathbb{F}_p \times \mathbb{F}_p$ with the generators of $H_p$. Since the order of $\mathbb{F}_p \times \mathbb{F}_p$ is $p^2$, every point in $\mathbb{P}^{p-1}$ has as its stabilizer the entire group, the trivial group, or a cyclic group of order $p$. The entire group is impossible, since the action of $H_p$ on $\mathbb{C}^p$ is irreducible. Every cyclic group of order $p$ in $\mathbb{F}_p \times \mathbb{F}_p$ has a generator of the form $e_2$ or $e_1^{-1}e_2^k, k=0,\ldots,p-1$. Since the eigenvalues of the matrices of the representation are all distinct, there are exactly $p$ points that are fixed by any such element. So if we check that the element claimed by the lemma is indeed fixed by $e_1^{-1}e_2^k$, we are done (the claim for the group generated by $e_2$ is trivial).
By a calculation, we get \begin{align*} &e_1^{-1}e_2^k(1:\ldots:\omega^{k\frac{i(i-1)}{2}}:\omega^{k\frac{i(i+1)}{2}}:\ldots : \omega^{k\frac{p(p-1)}{2}})\\ =& e_1^{-1}(1:\ldots:\omega^{k\frac{i(i-1)}{2}+(i-1)k}:\omega^{k\frac{i(i+1)}{2}+ik}:\ldots : \omega^{k\frac{p(p-1)}{2}+k(p-1)})\\ =&(\omega^{k\frac{p(p-1)}{2}+k(p-1)}:\ldots:\underbrace{\omega^{k\frac{i(i-1)}{2}+(i-1)k}}_{i}:\underbrace{\omega^{k\frac{i(i+1)}{2}+ik}}_{i+1}:\ldots: \omega^{k\frac{(p-2)(p-1)}{2}+k(p-2)}) \end{align*} Now, we have that $\omega^{k\frac{p(p-1)}{2}+k(p-1)} = \omega^{-k}$, so we may multiply each coordinate by $\omega^k$, which gives us \begin{align*} =&(1:\ldots:\underbrace{\omega^{k\frac{i(i+1)}{2}}}_{i}:\underbrace{\omega^{k\frac{(i+2)(i+1)}{2}}}_{i+1}:\ldots: \omega^{k\frac{p(p-1)}{2}}). \end{align*} \end{proof} Calculating the action of $U$ on $(1:0:\ldots:0) \in \mathbb{P}^{\frac{p-1}{2}}$ in the moduli space, we see that $U$ swaps the point $(1:0:\ldots:0)$ with $(1:2:\ldots:2)$. Next, the action of $VU$ fixes the point $(1:0:\ldots:0)$, but does not fix the point $(1:2:\ldots:2)$. Since the order of $VU$ is prime, this means that the $VU$-orbit of $(1:2:\ldots:2)$ consists of $p$ elements. Combining these 2 observations, we have \begin{lemma} The $PSL_2(p)$-orbit of $(1:0:\ldots:0)$ consists of at least $p+1$ elements. \label{lem:p1elem} \end{lemma} \begin{proof}[Proof of theorem \ref{th:quantum}] The quantum space $\mathfrak{C}(1:0:\ldots:0:0)$ has the following property: there are exactly $p$ points in $\mathbb{P}^{p-1}$ for which the associated quadratic form has rank 1 and these $p$ points form an orbit under the Heisenberg action. So every algebra determined by a point in our moduli space $\mathbb{P}^{\frac{p-1}{2}}$ and isomorphic to the quantum space must have the same property. In lemma \ref{lem:fixpoint} we found that there are $p+1$ different orbits consisting of $p$ points. Let $\rho=\omega^2$.
We may set $a_0 = 1$, because otherwise the algebra $\mathfrak{C}(a_0:\ldots:a_{\frac{p-1}{2}})$ is not a domain and therefore cannot be isomorphic to $\mathfrak{C}(1:0:\ldots:0:0)$. So if the algebra $\mathfrak{C}(a_1,\ldots ,a_{\frac{p-1}{2}})$ is isomorphic to $\mathfrak{C}(1:0:\ldots:0:0)=\mathfrak{C}(0,\ldots,0)$, we must have that the matrix \begin{displaymath} \begin{bmatrix} 2x_0^2 & a_1 x_{\frac{p+1}{2}}^2 & a_2 x_{1}^2 & \cdots & a_{\frac{p-1}{2}}x_{\frac{p+\frac{p-1}{2}}{2}}^2& \cdots & a_1 x_{\frac{p+(p-1)}{2}}^2 \\ a_1 x_{\frac{p+1}{2}}^2 & 2 x_1^2 & a_1 x_{\frac{p+3}{2}}^2 & \cdots & \cdots &\cdots& a_2 x_0^2 \\ \vdots & \vdots & \ddots \end{bmatrix} \end{displaymath} has rank 1 when $(x_0^2,\ldots,x_{p-1}^2)$ is equal to one of the points found in lemma \ref{lem:fixpoint} (with $\omega$ replaced by $\rho$, since $z$ acts on the representation $\mathbb{C}x_0^2 + \ldots +\mathbb{C}x_{p-1}^2$ as $\rho I_p$). These conditions completely determine $a_1,\ldots, a_{\frac{p-1}{2}}$ and therefore we can have at most $p+1$ algebras isomorphic to $\mathfrak{C}(1:0:\ldots:0:0)$. But lemma \ref{lem:p1elem} gives us that there are at least $p+1$ such points, so we have exactly $p+1$ points. Therefore, the $PSL_2(p)$-orbit of $(1:0:\ldots:0)\in \mathbb{P}^{\frac{p-1}{2}}$ gives all the algebras isomorphic to the quantum space and this orbit consists of $p+1$ points. \end{proof} \subsection{Generalizing the duality} We have a refinement of the duality between the quantum space and the $\frac{p-1}{2}$ nonregular algebras, the algebras $\mathfrak{C}(0:0:\ldots:\underbrace{1}_i:\ldots:0), i \neq 0$. This is established in the following way: the algebra $\mathfrak{C}(0:\ldots:\underbrace{1}_i:\ldots:0:0)$ can be recovered from the quantum space by deleting the $H_p$-orbit of the relation $a_0(x_0x_i + x_i x_0)= a_i x_{\frac{i}{2}}^2$ and instead adding the relations $x_0^2 = x_1^2 = \ldots = x_{p-1}^2=0$.
The Koszul dual of this algebra is a commutative algebra with relations $x_0 x_i = x_1 x_{i+1} = \ldots =x_{p-1}x_{p-1+i}=0$. These relations determine $p$ linear spaces of dimension $\frac{p-1}{2}-1$, and for every $0<i\leq \frac{p-1}{2}$ the corresponding union of these $p$ linear subspaces is different from the others. It also follows that the points $(0:0:\ldots:\underbrace{1}_i:\ldots:0)\in \mathbb{P}^{\frac{p-1}{2}}$ are fixed by the element \begin{displaymath} \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix} \in PSL_2(p). \end{displaymath} \par Another duality is given between the $p+1$ hyperplanes in $\mathbb{P}^{\frac{p-1}{2}}$ determined by the $PSL_2(p)$-orbit of the hyperplane $a_0=0$ and the $p+1$ algebras isomorphic to the quantum space. Generically, a point on such a hyperplane determines a unique $H_p$-orbit of $p$ points. These $p$ points are the vertices of a complete graph, which parametrizes the point variety of the corresponding algebra isomorphic to the quantum plane. So we have the following theorem. \begin{theorem} There is a 1-to-1 correspondence between the algebras isomorphic to the quantum space and the $PSL_2(p)$-orbit of the hyperplane $a_0=0$. This correspondence is given by the following rule: take the unique $p$-Sylow subgroup of $PSL_2(p)$ that fixes the point in the $PSL_2(p)$-orbit of $(1:0:\ldots:0)$. Then there are $\frac{p-1}{2}$ other points that are also fixed by the same subgroup, and the hyperplane through these points is the corresponding hyperplane. \end{theorem} \begin{proof} The only thing we need to check is that the action of a $p$-Sylow subgroup does indeed have exactly $\frac{p+1}{2}$ fixed points in $\mathbb{P}^{\frac{p-1}{2}}$, which amounts to proving that the matrix associated to a generator of the chosen subgroup has $\frac{p+1}{2}$ different eigenvalues, one for each homogeneous coordinate.
Since all $p$-Sylow subgroups are conjugate, it suffices to prove this for the subgroup generated by \begin{displaymath} M=\begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix} \in PSL_2(p). \end{displaymath} Calculating the action of $e_1e_2$ on an algebra corresponding to the point $(a_1,\ldots,a_\frac{p-1}{2})$, we find that the new generators $y_i$ are equal to \begin{displaymath} y_i = \omega^{-\frac{i(i+1)}{2}}x_i. \end{displaymath} If we want to calculate the action of this element on a point $(a_1,\ldots,a_\frac{p-1}{2})$, we find that \begin{align*} y_0 y_i + y_i y_0 &= \omega^{-\frac{i(i+1)}{2}}(x_0 x_i + x_i x_0)\\ &= \omega^{-\frac{i(i+1)}{2}} a_i x_{\frac{i}{2}}^2\\ &= \omega^{-\frac{i(i+1)}{2}} a_i \omega^{\frac{i}{2}(\frac{i}{2}+1)} y_{\frac{i}{2}}^2 \\ &= \omega^{\frac{-i^2}{4}}a_i y_{\frac{i}{2}}^2. \end{align*} This makes it clear that the corresponding matrix is a diagonal matrix. Now, if $ \omega^{\frac{-i^2}{4}} = \omega^{\frac{-j^2}{4}}$, then we must have $i^2 \equiv j^2 \bmod p$ or, put differently, $i \equiv \pm j \bmod p$. Since we have $1\leq i,j \leq \frac{p-1}{2}$, this ensures that $i = j$. \end{proof} \subsection{Character series} In the context of finding noncommutative algebras with properties similar to those of polynomial rings, one of the similarities we would like to study is the character series. In this subsection we prove the following theorem. \begin{theorem} If the algebra $\mathfrak{C}(a_0:\ldots:a_{\frac{p-1}{2}})$ satisfies the conditions of Theorem \ref{th:Reg}, it has the same character series as the polynomial ring in $p$ variables. \end{theorem} First, we need some representation theory of $H_p$. More importantly, we need to find the decompositions of tensor products of simple $p$-dimensional representations. Let $(V,\varphi)$ be the simple representation corresponding to $\varphi(z) = \omega I$ and let $(V_i,\varphi_i)$ be the simple representation defined by $\varphi_i(z) = \omega^i I$, $i = 1,\ldots, p-1$.
Let $W_{i,j},\chi_{i,j}$ be the $1$-dimensional representations of $H_p$ such that $\chi_{i,j}(e_1) = \omega^i, \chi_{i,j}(e_2) = \omega^j$ (in particular, $T = \chi_{0,0}$) and define $W = \oplus_{i,j=0}^{p-1} W_{i,j}$. Then character decomposition shows that \begin{align*} V \otimes W_{i,j} &= V, \\ V \otimes V_i &= V_{i+1}^{\oplus p}, i \neq p-1, \\ V \otimes V_{p-1} &= W. \end{align*} For example, the tensor algebra $T(V)$ has as character series \begin{align*} Ch_{T(V)}(z^k,t) &= \frac{1}{1-\omega^k pt},\\ Ch_{T(V)}(e_1^ke_2^l,t)&=1, (k,l)\neq (0,0). \end{align*} To find the character series of our graded Clifford algebras, we will use the fact that such an algebra is a free module of rank $2^p$ over a polynomial ring in $p$ variables, with a basis given by the ordered monomials \begin{displaymath} \{x_{i_1}\ldots x_{i_k}| 0\leq i_1<i_2<\ldots<i_k\leq p-1, 0\leq k \leq p\}. \end{displaymath} So in order to find the character series of the regular Clifford algebras, we first need to find the character series of $S(V)=\mathbb{C}[V]$. Since this is a Koszul algebra, it is more convenient to calculate the character series of $\wedge(V) = (S(V)^!)^*$, as this is a finite-dimensional algebra, and then use equation \ref{al:chKos}. In the tensor algebra $T(V)$, every degree not divisible by $p$ decomposes as one simple representation with a certain multiplicity. From this we immediately deduce \begin{align*} \wedge^0(V) &= T,\\ \wedge^i(V) &= V_i^{\oplus \frac{\binom{p}{i}}{p}}, \quad 0 < i < p. \end{align*} In degree $p$, it is easily checked that $x_0 \wedge \ldots \wedge x_{p-1}$ is fixed by $H_p$ and thus $\wedge^p(V) = T$. Therefore, we have \begin{align*} Ch_{\wedge(V)}(z^k,t)&=(1+\omega^k t)^p\\ Ch_{\wedge(V)}(e_1^ke_2^l,t)&=1+t^p, (k,l)\neq (0,0). \end{align*} We can now use equation \ref{al:chKos} to find the character series of $S(V)$.
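As a quick sanity check on the multiplicities above, one can verify numerically that $\binom{p}{i}$ is divisible by $p$ for $0 < i < p$ and that the resulting dimensions of $\wedge(V)$ sum to $2^p$. The following sketch (the helper name is ours; it is not part of any proof) does this for a few small primes:

```python
from math import comb

def exterior_multiplicities(p):
    """For the H_p-decomposition of the exterior algebra on a p-dimensional
    simple representation: wedge^0 and wedge^p are the trivial representation,
    and wedge^i (0 < i < p) is V_i with multiplicity binom(p, i) / p."""
    mults = {}
    for i in range(1, p):
        # binom(p, i) is divisible by the prime p for 0 < i < p
        assert comb(p, i) % p == 0
        mults[i] = comb(p, i) // p
    return mults

# Dimension check: 1 + 1 + p * sum of multiplicities equals 2^p.
for p in (5, 7, 11):
    mults = exterior_multiplicities(p)
    assert 2 + p * sum(mults.values()) == 2 ** p
```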
\begin{theorem} The character series of $S(V)$ is given by \begin{align*} Ch_{S(V)}(z^k,t)&=\frac{1}{(1-\omega^k t)^p}\\ Ch_{S(V)}(e_1^ke_2^l,t)&=\frac{1}{1-t^p}, (k,l)\neq (0,0). \end{align*} \end{theorem} To calculate the character series of the regular Clifford algebra, we use the fact that this algebra is a free module of rank $2^p$ over the polynomial ring $\mathbb{C}[x_0^2,\ldots,x_{p-1}^2]$. The character series of $\mathbb{C}[x_0^2,\ldots,x_{p-1}^2]$ is given by \begin{align*} Ch_{\mathbb{C}[x_0^2,\ldots,x_{p-1}^2]}(z^k,t)&=\frac{1}{(1-\omega^{2k} t^2)^p}\\ Ch_{\mathbb{C}[x_0^2,\ldots,x_{p-1}^2]}(e_1^ke_2^l,t)&=\frac{1}{1-t^{2p}}, (k,l)\neq (0,0). \end{align*} As an $H_p$-representation, the vector space with basis \begin{displaymath} \{x_{i_1}\ldots x_{i_k}| 0\leq i_1<i_2<\ldots<i_k\leq p-1, 0\leq k \leq p\} \end{displaymath} is the same as $\wedge(V)$. Therefore, the character series of the graded Clifford algebra is given by \begin{align*} Ch_{\mathfrak{C}(a_0:\ldots:a_{\frac{p-1}{2}})}(z^k,t)&=Ch_{\mathbb{C}[x_0^2,\ldots,x_{p-1}^2]}(z^k,t)Ch_{\wedge(V)}(z^k,t)\\&=\frac{(1+\omega^k t)^p}{(1-\omega^{2k} t^2)^p} = \frac{1}{(1-\omega^k t)^p},\\ Ch_{\mathfrak{C}(a_0:\ldots:a_{\frac{p-1}{2}})}(e_1^ke_2^l,t)&=Ch_{\mathbb{C}[x_0^2,\ldots,x_{p-1}^2]}(e_1^ke_2^l,t)Ch_{\wedge(V)}(e_1^ke_2^l,t)\\&=\frac{1+t^p}{1-t^{2p}}=\frac{1}{1-t^p}, (k,l)\neq (0,0). \end{align*} This is indeed the character series of $S(V)$, as expected. \subsection{The character series of the center} We again assume that the requirements of Theorem \ref{th:Reg} are fulfilled. Since $p\geq 5$ is prime, it is odd. This means that the center of the Clifford algebra $\mathfrak{C}(a_0:\ldots:a_{\frac{p-1}{2}})=\mathfrak{C}(a_1,\ldots,a_{\frac{p-1}{2}})$ is a quadratic extension of the polynomial ring $\mathbb{C}[x_0^2,\ldots,x_{p-1}^2]$.
The extra generator is an element of degree $p$, $0 \neq g \in Z(\mathfrak{C}(a_1,\ldots,a_{\frac{p-1}{2}}))_p$, which satisfies the equation $g^2=\det(M)$, with $M$ the quadratic form associated to $\mathfrak{C}(a_1,\ldots,a_{\frac{p-1}{2}})$. Since $H_p$ acts by algebra automorphisms, it follows that the center is stable under $H_p$ and so $\mathbb{C}g$ is a 1-dimensional $H_p$-representation. We will now prove \begin{proposition} As an $H_p$-representation, $\mathbb{C}g\cong T$. \end{proposition} \begin{proof} The proposition is equivalent to the statement that $\mathbb{C}\det(M)\cong T$, so we will prove this fact. Since $g^2=\det(M)$, $\mathbb{C}\det(M)$ is indeed an $H_p$-representation. Fix a basis for $\mathfrak{C}(a_1,\ldots,a_{\frac{p-1}{2}})_{2p}$ determined by it being a free module over $\mathbb{C}[x_0^2,\ldots,x_{p-1}^2]$ of rank $2^p$ with the ordered monomials as basis, and decompose this vector space into simple $H_p$-representations. Calculating the determinant of $M$, we get an element of the center of the form \begin{displaymath} \det(M)=(2^p+f(a_1,\ldots,a_{\frac{p-1}{2}}))x_0^2x_1^2\ldots x_{p-1}^2 + \ldots \end{displaymath} with $2^p+f(a_1,\ldots,a_{\frac{p-1}{2}})$ a nonzero polynomial (nonzero because for the quantum space we have $f(0,\ldots,0) = 0$, so the coefficient there equals $2^p$). For the other elements of our basis, we get coefficients which are polynomials in $a_1,\ldots,a_{\frac{p-1}{2}}$. Because $2^p+f(a_1,\ldots,a_{\frac{p-1}{2}})\neq 0$, there is a Zariski-open subset of $\mathbb{A}^{\frac{p-1}{2}}$ on which $\mathbb{C}\det(M)$ is indeed the trivial representation. Since $\mathbb{C}\det(M)$ is always an $H_p$-representation, this means that on the same Zariski-open subset the coefficients of all the nontrivial representations in $\mathfrak{C}(a_1,\ldots,a_{\frac{p-1}{2}})_{2p}$ are 0. Since these coefficients are also polynomial functions on $\mathbb{A}^{\frac{p-1}{2}}$, they must vanish on all of $\mathbb{A}^{\frac{p-1}{2}}$. This means that $\mathbb{C}\det(M) \cong T$.
\end{proof} From this, one easily finds that the character series of $Z(\mathfrak{C}(a_1,\ldots,a_{\frac{p-1}{2}}))$ is given by \begin{align*} Ch_{Z(\mathfrak{C}(a_1,\ldots,a_{\frac{p-1}{2}}))}(z^k,t)& = \frac{1+t^p}{(1-\omega^{2k} t^2)^p},\\ Ch_{Z(\mathfrak{C}(a_1,\ldots,a_{\frac{p-1}{2}}))}(e_1^ke_2^l,t)&=\frac{1+t^p}{1-t^{2p}}\\ &= \frac{1}{1-t^p}, (k,l)\neq (0,0). \end{align*} \section{Future work} It is our hope to extend the dualities between the points needed for a compactification of $X(5)$ and the points corresponding to algebras isomorphic to the quantum space to the entire curve $\overline{X(5)}$. Hopefully there will be a subset of $\mathbb{P}^2$ that determines $5$-dimensional Sklyanin algebras depending on an elliptic curve and a point of order 2. The quadratic form associated to such an algebra will be crucial in understanding the representation theory and the underlying $H_5$-action, which will act on the variety that parametrizes representations of dimension $1$, $2$ or $4$. Further generalizations to dimension $p$, with $p$ a prime number, will also be pursued in future work. \bibliographystyle{abbrvnat}
\section{Additional Transferring Results}\label{sec:additional_transfer_results} \shortautoref{tab:transfer_results} shows additional results of transferring LibriSpeech pre-trained models to three out-of-domain datasets. We report the performance of models trained with only 100K updates to enable quick comparisons in future work. \input{tables/transfer_results} \section{CTC Decoding} \label{app:sec:ctc_decoding} \paragraphsq{Best-path (greedy) Decoding vs. Viterbi Decoding} Best-path decoding finds the most likely decoding path. Viterbi decoding~\citep{Viterbi1967ErrorBF} finds the most likely collapsed token sequence by summing up all possible paths. Since the outputs of a CTC model at different timesteps are independent, the best path can be generated by taking the most likely token at each timestep. This makes decoding parallelizable and efficient on GPUs. The Viterbi decoding algorithm is a dynamic programming algorithm, which processes the model outputs sequentially and is implemented on CPUs in W2V2\xspace's implementation. \shortautoref{tab:greedy_vs_viterbi} shows the WER and inference time of the best-path (greedy) and Viterbi decoding methods using four official W2V2\xspace ASR models. The two methods achieve exactly the same WER, while best-path decoding is faster. Although the difference is only about one second, it is significant when using tiny models. This implies that all models are very confident in their predictions. \input{tables/greedy_vs_viterbi} \paragraphsq{Inference Time with Language Models} \shortautoref{tab:lm_decode} shows W2V2\xspace's inference time with and without an LM. Decoding with an LM improves the WER significantly, but the inference time also increases dramatically. It is likely that the beam search implementation is sequential and CPU-bound, which slows down decoding. Reducing the inference time with an LM is an important direction for future work.
Alternatively, recent work~\citep{xu2020iterative,Xu2020SelftrainingAP} shows that pseudo-labeling can improve model performance and close the gap between decoding with and without an LM. This could also address the slow inference with an LM. We use W2V2\xspace's official inference script in these experiments. The slowdown depends on the CPU type. We observe a smaller inference time overhead when decoding on faster CPUs, but for consistency, we use the same type of hardware as in our pre-training setup. \input{tables/lm_decode} \section{Disentangled Attention} \label{app:sec:disent_attn} \citet{He2020DeBERTaDB} introduced disentangled attention as a component of their DeBERTa model. Unlike the Transformer, which adds absolute positional embeddings to the content embeddings at the input, disentangled attention keeps the positional embeddings and content embeddings separate and has three components in its attention weight computation: (a) content-to-content, (b) content-to-position, and (c) position-to-content.
Given the content embeddings ${\bm{C}} \in \mathbb{R}^{T \times d}$ and position embeddings ${\bm{P}} \in \mathbb{R}^{(2k + 1) \times d}$, where $k$ is the maximum relative position, the output embeddings ${\bm{O}} \in \mathbb{R}^{T \times d}$ are: \begin{small} \begin{equation} {\bm{O}} = \mathrm{softmax} \left( \frac{\tilde{{\bm{A}}}}{\sqrt{3d}} \right) {\bm{V}}^c, \quad \tilde{{\bm{A}}}_{i, j} = \underbrace{\left( {\bm{Q}}^c{\bm{K}}^{c\top} \right)_{i, j}}_\text{(a) content-to-content} + \underbrace{\left( {\bm{Q}}^c {\bm{K}}^{p\top} \right)_{i, \delta(i, j)}}_\text{(b) content-to-position} + \underbrace{\left( {\bm{Q}}^p {\bm{K}}^{c\top} \right)_{\delta(i, j), j}}_\text{(c) position-to-content}\;\;, \end{equation} \end{small} \noindent where ${\bm{Q}}^c = {\bm{C}} {\bm{W}}^{q,c}$, ${\bm{K}}^c = {\bm{C}} {\bm{W}}^{k,c}$, ${\bm{V}}^c = {\bm{C}} {\bm{W}}^{v,c}$, ${\bm{Q}}^p = {\bm{P}} {\bm{W}}^{q, p}$, ${\bm{K}}^p = {\bm{P}} {\bm{W}}^{k, p}$, and ${\bm{W}}^{q,c}, {\bm{W}}^{k,c}, {\bm{W}}^{v,c}, {\bm{W}}^{q,p}, {\bm{W}}^{k,p} \in \mathbb{R}^{d \times d}$ are trainable projection weights. $\delta(i, j)$ is the relative position of $i, j$ clamped to $[-k, k]$ and is used to index the corresponding row of ${\bm{P}}$ or its products. Unlike conventional self-attention, which has only the content-to-content component and four matrix multiplications in total, disentangled attention involves nine matrix multiplications and is much slower to compute. \section{Experimental Setup Details} \label{app:sec:exp_setup_details} \subsection{LibriSpeech} LibriSpeech (CC BY 4.0)~\citep{Panayotov2015LibrispeechAA} is a corpus of 16kHz read English speech. The data are derived from read audiobooks from the LibriVox project.
LibriSpeech includes a 960h training set (comprising three subsets: train-clean-100, train-clean-360, and train-other-500), two development sets, dev-clean (5.4h) and dev-other (5.3h), and two test sets, test-clean (5.4h) and test-other (5.1h). The dev-other and test-other splits are designed to provide a more challenging evaluation. We use train-clean-100 as the 100h supervised data. For the 10min, 1h, and 10h subsets of labelled data, we use the splits provided by Libri-Light~\citep{librilight}.\footnote{\url{https://dl.fbaipublicfiles.com/librilight/data/librispeech_finetuning.tgz}} \paragraphsq{Pre-training} We use W2V2\xspace's official codebase as provided in fairseq~\citep{Ott2019fairseqAF} for all experiments. Following the provided configuration\footnote{\url{https://github.com/pytorch/fairseq/blob/master/examples/wav2vec/config/pretraining/wav2vec2_base_librispeech.yaml}}, we use the Adam~\citep{Kingma2015AdamAM} optimizer with learning rate 0.0005, betas (0.9, 0.98), weight decay 0.01, and 32K linear warm-up steps~\citep{Goyal2017AccurateLM}. We apply LayerDrop~\citep{Huang2016DeepNW,Fan2020ReducingTD} with rate 0.05. We use layerdrop rates of 0.1 and 0.2 for \finalmodel and \finalmodelD; otherwise, the models diverge. This follows the configuration of \wavvecsize{large}, which also has 24 Transformer layers. The learning rate is decayed linearly to 0 from 32K steps to 400K steps. The time masking probability is set to 0.065 with mask length 10. Audio examples are batched by length to ensure that the total batch (across 64 GPUs) contains at most $5,600 = 64 \times 1,400,000 / 16,000$ seconds of audio. We use 8 gradient accumulation steps to simulate 64-GPU training with 8 GPUs. Each GPU processes at most $87.5 = 1,400,000 / 16,000$ seconds of audio in each forward-backward pass.
For tiny models, the memory usage is lower, so we double this number to 175 seconds and halve the gradient accumulation steps to 4, which reduces GPU underutilization and shortens the pre-training time. This modification does not change the maximum total batch size. We use half-precision (FP16) training. \paragraphsq{Fine-tuning on 10m or 1h Supervised Labels} We use the Adam optimizer with learning rate $3 \times 10^{-5}$, betas (0.9, 0.98) and a tri-stage learning rate scheduler~\citep{Zhang2020TransformerTA} (10\% warm-up, 40\% constant, 50\% exponential decay to 5\% of the peak learning rate). The models are fine-tuned for 13K updates. In the first 10K updates, the context network is frozen and only the additional linear layer is trained. The WFE is always frozen. The audio examples are batched by length to ensure that the total batch (across 8 GPUs) contains at most $1,600 = 8 \times 3,200,000 / 16,000$ seconds of audio. We set gradient accumulation steps to 8 to simulate 8-GPU fine-tuning using a single GPU. \paragraphsq{Fine-tuning on 10h Supervised Labels} The models are fine-tuned for 20K updates. In the first 10K updates, the context network is frozen and only the additional linear layer is trained. The WFE is always frozen. The rest of the settings are the same as in the 10-minute scenario. \paragraphsq{Fine-tuning on 100h Supervised Labels} The models are fine-tuned for 80K updates. The context network is fine-tuned from the beginning (i.e., not frozen). The WFE is frozen the entire time. The rest of the settings are the same as in the 10-minute scenario. To speed up the experiment, we set gradient accumulation steps to 2 to simulate fine-tuning with 4 GPUs. \paragraphsq{Inference} Without an LM, we decode the models with the CTC greedy decoding (a.k.a. best-path decoding) algorithm, which takes the most likely token at each timestep, collapses the duplicated tokens, and removes all blank tokens.
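The greedy decoding rule just described can be sketched in a few lines of pure Python (an illustration only; the actual implementation operates on batched model logits on the GPU):

```python
from itertools import groupby

def ctc_greedy_decode(scores, blank=0):
    """Best-path CTC decoding: take the argmax token at each timestep,
    collapse consecutive duplicates, then drop blank tokens.

    scores: per-timestep lists of per-token scores (T x vocab)."""
    best_path = [max(range(len(frame)), key=frame.__getitem__) for frame in scores]
    collapsed = [token for token, _ in groupby(best_path)]
    return [token for token in collapsed if token != blank]

# Example: the path [blank, 1, 1, blank, 2] decodes to [1, 2].
scores = [
    [0.9, 0.05, 0.05],  # blank
    [0.1, 0.8, 0.1],    # token 1
    [0.1, 0.8, 0.1],    # token 1 (duplicate, collapsed)
    [0.9, 0.05, 0.05],  # blank
    [0.1, 0.1, 0.8],    # token 2
]
assert ctc_greedy_decode(scores) == [1, 2]
```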
When using an LM, we use the wav2letter lexicon decoder~\citep{Collobert2016Wav2LetterAE}, which uses beam search. We set the beam size to 50, the LM weight to 2, and the word score to -1. We use the official lexicon and a 4-gram LM.\footnote{\url{https://www.openslr.org/11/}} During decoding, the audio examples are sorted by length and batched with at most $250 = 4,000,000 / 16,000$ seconds in a batch. All these hyper-parameters are the default values in W2V2\xspace's official inference code. The inference time is estimated on an AWS p3.2xlarge instance with 1 NVIDIA V100-SXM2-16GB GPU and 8 Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz. We run 5 trials and report the mean and standard deviation of the inference time. \subsection{TED-LIUM 3} TED-LIUM 3 (CC BY-NC-ND 3.0)~\citep{Hernandez2018TEDLIUM3T} is an English speech recognition corpus extracted from public TED talks. It includes a 452h training set, a 1.6h development set, and a 2.6h test set. We follow the Kaldi~\citep{kaldi} data preparation recipe.\footnote{\url{https://github.com/kaldi-asr/kaldi/tree/master/egs/tedlium/s5_r3}} We randomly sample the 10h labelled data from the training set. \paragraphsq{Fine-tuning on 10h Supervised Labels} We use the same fine-tuning hyperparameters as in the LibriSpeech 10h setup. \paragraphsq{Inference} We follow \citet{Hsu2021RobustW2} to create a 5-gram LM. Otherwise, we use the same inference setup as for LibriSpeech. \subsection{VoxPopuli} VoxPopuli (CC0, CC BY-NC 4.0)~\citep{Wang2021VoxPopuliAL} is a large-scale multilingual speech corpus collected from 2009--2020 European Parliament event recordings. We use the transcribed English corpus, which consists of a 522h training set, a 5h development set, and a 5h test set. We randomly sample the 10h labelled data from the training set.
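The 10h subsets used here and for TED-LIUM 3 are drawn at random from each training set. One simple way to draw such a fixed-duration subset is sketched below (illustrative only; the paper does not specify the exact sampling procedure, and `sample_hours` is our own name):

```python
import random

def sample_hours(utterances, target_hours, seed=0):
    """Randomly pick utterances until roughly target_hours of audio is
    collected. utterances: list of (utterance_id, duration_seconds) pairs."""
    rng = random.Random(seed)  # fixed seed for a reproducible subset
    pool = list(utterances)
    rng.shuffle(pool)
    subset, total = [], 0.0
    for utt_id, duration in pool:
        if total >= target_hours * 3600:
            break
        subset.append(utt_id)
        total += duration
    return subset

# Example: with 30-second utterances, a 10h subset contains 1,200 of them.
utts = [(f"utt{i}", 30.0) for i in range(2000)]
assert len(sample_hours(utts, 10)) == 1200
```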
\paragraphsq{Inference} We use the official lexicon and 5-gram LM.\footnote{\url{https://github.com/facebookresearch/voxpopuli}} Otherwise, we use the same inference setup as for LibriSpeech. \subsection{Fisher+Switchboard} Fisher (LDC200\{4,5\}S13, LDC200\{4,5\}T19)~\citep{fisher-a,fisher-b} and Switchboard (LDC97S62)~\citep{switchboard} are conversational telephone speech corpora recorded at 8kHz. We combine them to create a 2250h training set. RT-03S (LDC2007S10, 6.3h)~\citep{rt-03s} and Hub5 Eval2000 (LDC2002S09, 3.6h)~\citep{eval2000} are used as the development set and the test set. We preprocess the data according to \citet{Hsu2021RobustW2}, including re-sampling the 8kHz data to 16kHz. We use the Kaldi~\citep{kaldi} data preparation and evaluation recipe.\footnote{\url{https://github.com/kaldi-asr/kaldi/tree/master/egs/fisher_swbd/s5}} We randomly sample the 10h labelled data from the training set. \paragraphsq{Fine-tuning on 10h Supervised Labels} We use the same fine-tuning hyperparameters as in the LibriSpeech 10h setup. \paragraphsq{Inference} We follow \citet{Hsu2021RobustW2} to create a 4-gram LM using all the texts in the training set. Otherwise, we use the same inference setup as for LibriSpeech. \section{Additional Experiments on the Kernel Size of Downsampling}\label{app:sec:k127} In \shortautoref{subsec:wfe_vs_context}, we show that with a small inference budget, reducing the kernel size of the downsampling layer allows us to increase the size of the WFE and leads to better performance. Here, we conduct additional experiments with various model sizes to understand why \citet{baevski2020wav2vec2} chose a large convolutional kernel size for their models. \shortautoref{tab:ls_100h_100k_k127} shows the performance of various models with kernel size 127.
As we can see, in the small-model regime the overhead accounts for a large portion of the inference time and is prohibitive, while for large models this overhead becomes relatively small and the performance boost makes it favorable. \shortautoref{tab:ls_100h_400k_k127} shows that for the model pre-trained for 400K updates, using kernel size 127 leads to better WERs, especially on the dev-other and test-other sets. \shortautoref{tab:ls_1m_1h_10h_k127} shows the performance with less supervision. As we can see, using kernel size 127 leads to better WERs when decoding with a language model. \input{tables/ls_100h_100k_k127} \input{tables/ls_100h_400k_k127} \input{tables/ls_1m_1h_10h_k127} \section{Limitation}\label{app:sec:limitation} We focus on model inference time, without considering LM computation time. As discussed in \shortautoref{app:sec:ctc_decoding}, we perform beam-search decoding with an LM on the CPU using W2V2\xspace's implementation. This results in a significant slowdown for tiny models. How to speed it up is an important direction for future work. Additionally, different hardware devices require differently optimized models. All our ablation studies are done on GPUs. Our observations may change on other types of hardware, such as embedded systems or CPUs. As we discuss in \shortautoref{app:sec:social_impact}, there remains a need to study ASR across more diverse types of data (e.g., across languages, domains, ethnic groups, etc.). As with the existing work we compare against, different data may lead to different performance observations. However, we do not expect significant changes in computation speedups, the main focus of our work. \section{Ablation Study on MLP Predictor Heads} \label{app:sec:mlp} \shortautoref{tab:ablation_mlp_heads} shows LibriSpeech 100h performance with various prediction heads. Similar to vision models, batch normalization in the prediction heads improves W2V2\xspace performance.
Unlike vision models, where the prediction head is applied to merely a single pooled vector for each example, in W2V2\xspace the prediction heads are applied to all timesteps, which leads to higher pre-training overhead, but no overhead during fine-tuning or inference, where the prediction heads are dropped. \input{tables/ablation_mlp_heads} \section{Conclusion}\label{sec:conclusion} Our study is a detailed analysis of the architecture of W2V2\xspace, an influential pre-training method for spoken language tasks. Through careful consideration of both compute time and model expressivity, we achieve better ASR performance with faster inference times. Aggregating our observations, we propose \finalmodel, a family of pre-trained models with a significantly better performance-efficiency trade-off than the existing W2V2\xspace architecture. \finalmodel models can function as drop-in replacements for W2V2\xspace models, including in recent work~\citep{hsu2020hubert,Hsu2021RobustW2,Xu2020SelftrainingAP}. In general, our approach outlines a recipe and a set of considerations to apply when studying complex network architectures with the goal of finding a better balance of performance and efficiency. While model performance is commonly prioritized in research, the economics of inference time are often just as critical for model deployment in the real world. This study will inform practitioners optimizing complex models for deployment beyond this specific instance of W2V2\xspace. \section{Further Experiments}\label{sec:exp} We compare \finalmodel and \finalmodelD to W2V2 using a variety of fine-tuning setups. \input{tables/ls_100h_100k} \paragraphsq{\finalmodel{} vs. \finalmodelD vs. W2V2\xspace on LibriSpeech 100h-960h} We pre-train W2V2\xspace, \finalmodel, and \finalmodelD on 960h of LibriSpeech audio for 100K updates and fine-tune them on 100h labelled data. We follow the setup of \shortautoref{subsec:exp_setup}.
\shortautoref{tab:ls_100h_100k} shows pre-training times, inference times, and WERs with and without an LM. Without an LM, compared with \wavvecsize{tiny}, \finalmodelsize{tiny} reduces the WER by 53.5\% (22.8\% to 10.6\%) and 43.7\% (41.1\% to 23.7\%) on test-clean and test-other, while being faster. With an LM, WER improves by 38.6\% and 43.4\% on test-clean and test-other. Compared with \wavvecsize{mid}, \finalmodelsize{mid} reduces WER by 30.2\% (9.6\% to 6.7\%) and 32.9\% (22.2\% to 14.9\%) with similar inference times. \finalmodel does incur a slight increase in training time compared to W2V2\xspace models with similar inference times. However, \finalmodel has lower WER even compared to a slower W2V2\xspace that takes longer to train (e.g., \finalmodelsize{small} vs. \wavvecsize{mid} or \finalmodelsize{mid} vs. \wavvecsize{base}). \finalmodelD has lower WER compared to \finalmodel{}, even with a smaller width and half the parameters. With large models, \finalmodelD is also more efficient. However, \finalmodelDsize{tiny} is slower than \finalmodelsize{tiny}, due to implementation differences.\footnote{We use the official PyTorch~\citep{Paszke2019PyTorchAI} implementation of disentangled attention (\href{https://github.com/microsoft/DeBERTa}{https://github.com/microsoft/DeBERTa}) for \finalmodelD, which uses the BTC tensor format instead of the more efficient TBC tensor format used in fairseq~\citep{Ott2019fairseqAF}. Moreover, fairseq uses a dedicated CUDA implementation of self-attention, which is more efficient.} \input{tables/ls_1m_1h_10h} \input{tables/ls_100h_400k} \paragraphsq{Less Supervision} To further test the performance of \finalmodelD, we experiment with only 10min, 1h, and 10h of supervised data (see \shortautoref{app:sec:exp_setup_details}). \shortautoref{tab:ls_1m_1h_10h} shows the WER of W2V2\xspace and \finalmodelD. \finalmodelDsize{mid} outperforms \wavvecsize{base} in the 1h and 10h scenarios while being more efficient.
\finalmodelDsize{mid} is worse than \wavvecsize{base} in the extreme 10m scenario; however, we did not tune the fine-tuning hyper-parameters and use the ones tuned for \wavvecsize{base}. \finalmodelDsize{base+} achieves significantly better performance than \wavvecsize{base} in most setups, except when using 10 minutes of supervision and decoding with an LM. Potentially due to its large model size, we observe that \finalmodelDsize{base+} is unstable to fine-tune; therefore, instead of using W2V2\xspace's tuned hyperparameters, we reduce the learning rate by 5 times to $10^{-5}$, set the dropout rates to the pre-training values (W2V2\xspace uses different sets of dropouts during pre-training and fine-tuning), and do not freeze the context network at the beginning of fine-tuning. These adjustments stabilize the fine-tuning of the model. \paragraphsq{Comparison to Published Results} We continue training our best \finalmodelDsize{mid} model to 400K updates and compare it with the official \wavvecsize{base}\footnote{\href{https://dl.fbaipublicfiles.com/fairseq/wav2vec/wav2vec_small_100h.pt}{https://dl.fbaipublicfiles.com/fairseq/wav2vec/wav2vec\_small\_100h.pt}} and \wavvecsize{large}\footnote{\href{https://dl.fbaipublicfiles.com/fairseq/wav2vec/wav2vec_big_100h.pt}{https://dl.fbaipublicfiles.com/fairseq/wav2vec/wav2vec\_big\_100h.pt}} checkpoints~\citep{baevski2020wav2vec2}. \shortautoref{tab:ls_100h_400k} shows inference times and WERs with and without an LM. Compared to \wavvecsize{base}, \finalmodelDsize{mid} reduces the inference time by 46.4\% (a 1.9$\times$ speed-up) and the WER by 19.7\% and 13.5\% on the two test sets without an LM; \finalmodelDsize{base+} reduces the inference time by 9.7\% and the WER by 27.9\% and 30.8\%. Compared to \wavvecsize{large}, \finalmodelDsize{base+} achieves 2.7$\times$ and 3.2$\times$ speed-ups for inference and pre-training with comparable WER and half the number of parameters.
\paragraphsq{Transferring to Out-of-domain Data} \input{tables/transfer_results_400k} We evaluate W2V2\xspace and \finalmodelD pre-trained models on three additional ASR datasets: TED-LIUM 3 (CC BY-NC-ND 3.0)~\citep{Hernandez2018TEDLIUM3T}, VoxPopuli (CC0, CC BY-NC 4.0)~\citep{Wang2021VoxPopuliAL}, and Fisher+Switchboard (LDC200\{4,5\}S13, LDC200\{4,5\}T19, LDC97S62)~\citep{switchboard,fisher-a,fisher-b}, with a setup similar to \citet{Hsu2021RobustW2} (see \shortautoref{app:sec:exp_setup_details}). We use only 10h of supervised audio to stress-test low-resource domain transfer. \shortautoref{tab:transfer_results_400k} shows the inference times and WERs. \finalmodelDsize{mid} consistently reduces inference times by about 30\% while providing lower WERs on TED-LIUM 3, similar WERs on VoxPopuli, and slightly higher WERs on Fisher+Switchboard. \finalmodelDsize{base+} consistently outperforms \wavvecsize{base} by a large margin while being only 10\% slower. \subsection{Experimental Setup} \label{subsec:exp_setup} We use the official W2V2\xspace implementation in fairseq~\citep{Ott2019fairseqAF}, with the hyper-parameters of \wavvecsize{base}~\citep{baevski2020wav2vec2}. We describe the key hyper-parameters below; the linked configuration files provide the full details. \paragraphsq{Pre-training} We use the LibriSpeech (CC BY 4.0)~\citep{Panayotov2015LibrispeechAA} 960h training data for unsupervised pre-training, leaving 1\% out as a validation set for pre-training. We use the same hyperparameters as \wavvecsize{base}\footnote{\href{https://github.com/pytorch/fairseq/blob/master/examples/wav2vec/config/pretraining/wav2vec2_base_librispeech.yaml}{https://github.com/pytorch/fairseq/blob/master/examples/wav2vec/config/pretraining/wav2vec2\_base\_librispeech.yaml}} (\shortautoref{app:sec:exp_setup_details}). To speed up and reduce the cost of our experiments, we pre-train all models for 100K updates, similar to \citet{hsu2020hubert}.
All experiments use an AWS p3.16xlarge instance with 8 NVIDIA V100 GPUs and 64 Intel Xeon 2.30GHz CPU cores. Because \citet{baevski2020wav2vec2} use 64 GPUs, we set gradient accumulation steps to 8 to simulate their 64-GPU pre-training with 8 GPUs. \paragraphsq{Fine-tuning} We add a linear classifier to the top of the context network and fine-tune the model using a CTC objective on the LibriSpeech train-clean 100h set for 80K updates using the same set of hyper-parameters as \wavvecsize{base} (\shortautoref{app:sec:exp_setup_details}).\footnote{\href{https://github.com/pytorch/fairseq/blob/master/examples/wav2vec/config/finetuning/base_100h.yaml}{https://github.com/pytorch/fairseq/blob/master/examples/wav2vec/config/finetuning/base\_100h.yaml}} \paragraphsq{Evaluation} We use CTC greedy decoding~\citep{Graves2006ctc} for all experiments because it is faster than Viterbi decoding~\citep{Viterbi1967ErrorBF} and we do not find any WER differences between the two using baseline W2V2\xspace models (\shortautoref{app:sec:ctc_decoding}). We use LibriSpeech dev-other for validation, and hold out test-clean and test-other as test sets. We consider three metrics to evaluate model efficiency and performance: pre-training time, inference time, and WER (word error rate). All evaluation is done on an NVIDIA V100 GPU with FP32 operations, unless specified otherwise. When decoding with a language model (LM), we use the official 4-gram LM\footnote{\href{https://www.openslr.org/resources/11/4-gram.arpa.gz}{https://www.openslr.org/resources/11/4-gram.arpa.gz}} and wav2letter~\citep{Collobert2016Wav2LetterAE} decoder\footnote{\href{https://github.com/flashlight/wav2letter/tree/v0.2/bindings/python}{https://github.com/flashlight/wav2letter/tree/v0.2/bindings/python}} with the default LM weight 2, word score -1, and beam size 50.
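The gradient-accumulation trick used above is exact when per-micro-batch gradients are rescaled by the total sample count; a minimal illustrative sketch (toy 1-d least-squares loss, not the fairseq implementation):

```python
# Gradient accumulation reproduces the full-batch gradient by summing
# per-micro-batch gradients, each divided by the total sample count,
# here for L(w) = mean_i (w*x_i - y_i)^2.

def grad_full_batch(w, xs, ys):
    """Gradient of the mean squared error over the whole batch."""
    n = len(xs)
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n

def grad_accumulated(w, xs, ys, accum_steps):
    """Accumulate over `accum_steps` micro-batches before the optimizer step."""
    n = len(xs)
    micro = n // accum_steps
    g = 0.0
    for s in range(accum_steps):
        chunk = slice(s * micro, (s + 1) * micro)
        g += sum(2 * (w * x - y) * x for x, y in zip(xs[chunk], ys[chunk])) / n
    return g

xs = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]
ys = [1.0, 2.1, 2.9, 4.2, 5.1, 5.9, 7.2, 8.0]
w = 0.3
g_full = grad_full_batch(w, xs, ys)
g_accum = grad_accumulated(w, xs, ys, accum_steps=8)  # 8 micro-batches of 1
assert abs(g_full - g_accum) < 1e-9
```

The same identity is what makes 8 GPUs with 8 accumulation steps equivalent (up to batch statistics such as BatchNorm) to 64 GPUs without accumulation.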
Reducing the inference time with an LM is an important direction for future work, as the wav2letter decoder is the bottleneck and is at least 3$\times$ slower than \wavvecsize{base} (\shortautoref{app:sec:ctc_decoding}). \section{Exploring Model Design Trade-offs} \label{sec:method} \input{sections/exp_setup} \subsection{Depth vs. Width} \label{subsec:depth_vs_width} The smallest W2V2\xspace model is \wavvecsize{base} with 94M parameters and is already relatively large compared to other ASR models~\citep{Han2020ContextNetIC,Gulati2020ConformerCT}. We study two strategies of changing the context network Transformer to reduce the model size and speed up the model, potentially at a cost to performance: reducing model depth by using fewer Transformer layers or reducing model width by using a smaller hidden state size in the Transformer. When reducing the hidden size, we fix the head dimension to 64. For example, a 12-head 768$d$ Transformer would be scaled down to a 4-head 256$d$ Transformer. The hidden size of the feed-forward network is always 4$\times$ the Transformer width. We also scale down the wave feature extractor to ensure that its width is not larger than the width of the Transformer. For fairness of comparison, the depth counterpart uses the same wave feature extractor as well. \shortautoref{fig:ablation_width_vs_depth} shows the performance-efficiency trade-offs of scaling down the depth or width of the model. Scaling down the width achieves a better performance-efficiency trade-off than scaling down the depth; a deep and narrow model is more favorable than a shallow and wide model. These narrow models serve as the baselines for our following experiments. \subsection{Temporal Resolution vs. Model Size}\label{subsec:resolution} The wave feature extractor of W2V2 down-samples the raw audio to 50Hz frames with a stride size of 20ms, reducing the sequence length by a factor of 320 (\shortautoref{sec:w2v2}).
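The leverage of sequence length on compute can be made concrete with a rough per-layer cost model (illustrative constants, not measured FLOPs): self-attention scales quadratically in the number of frames $T$, while the projections and feed-forward block scale linearly.

```python
# Rough per-layer multiply-add count (illustrative): self-attention does
# ~4*T*d^2 (Q, K, V, output projections) + 2*T^2*d (scores + weighted sum),
# the feed-forward block ~8*T*d^2 (two d <-> 4d linear layers).

def layer_cost(T, d):
    attn = 4 * T * d * d + 2 * T * T * d
    ffn = 8 * T * d * d
    return attn + ffn

d = 768
T_50hz = 500   # 10 seconds of audio at 50Hz
T_25hz = 250   # the same audio at 25Hz

ratio = layer_cost(T_50hz, d) / layer_cost(T_25hz, d)
# Halving the frame rate more than halves the cost: the T^2 attention
# term shrinks 4x while the linear terms shrink 2x.
assert ratio > 2
```

This is why lowering the temporal resolution frees budget that can be spent on a larger context network at the same inference cost.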
However, even lower resolutions are common in prior end-to-end ASR approaches. For example, several methods~\citep{Han2020ContextNetIC,Gulati2020ConformerCT} use log-mel filterbank features with a stride of 10ms (100Hz) and down-sample them to 40ms (25Hz) with two layers of strided convolutions. The result is halving the sequence length, and reducing the computation and memory footprint of the context network. Reducing the sequence length may allow increasing the model size with the same computation costs. \begin{figure}[t] \centering \begin{minipage}[t]{.47\linewidth} \centering \includegraphics[width=\textwidth]{figs/ablation_width_vs_depth.pdf} \caption{LibriSpeech dev-other WER versus inference time (with 100h labeled data). Reducing the width of the model (E768 $\rightarrow$ E512 $\rightarrow$ E384 $\rightarrow$ E256) achieves better performance-efficiency trade-off compared to reducing the depth (L12 $\rightarrow$ L6 $\rightarrow$ L3). % % } \label{fig:ablation_width_vs_depth} \end{minipage} \hspace{10pt} \begin{minipage}[t]{.47\linewidth} \centering % % % % % % \includegraphics[width=\linewidth]{figs/squeezed-cn.pdf} \caption{Original vs. squeezed context network. The sequence length is halved by the down-sampling layer.} \label{fig:squeezed_cn} \end{minipage} \end{figure} \input{tables/ablation_resolution} \shortautoref{tab:ablation_resolution} shows the performance-efficiency trade-off of models with different temporal resolutions at context encoding, mask prediction, and CTC decoding. Reducing the temporal resolution while increasing the model size (first vs. second rows) effectively reduces the WER while maintaining the inference time. However, compared to a model with similar size but higher resolution (last row) there is a noticeable gap in WER. Increasing the output resolution to 50Hz while keeping the encoding resolution the same (25Hz) (third row) reduces this gap. 
We add a transposed 1$d$ convolution layer\footnote{We use a Linear layer instead of a ConvTranspose1d in PyTorch~\citep{Paszke2019PyTorchAI} for efficiency.} to the output of the context network during fine-tuning, which allows each frame (25Hz) to generate two predictions (50Hz). \paragraphsq{Squeezed Context Networks} To further close the gap, we propose to encode the features at a low resolution (e.g., 25Hz) while keeping contrastive learning at a high resolution (e.g., 50Hz). We add a down-sampling layer and an up-sampling layer around the original context network. Because there is already a convolution layer at the bottom of the W2V2\xspace context network, we simply change its stride size from 1 to $s$ to avoid additional computation, where $s$ is the squeezing factor.\footnote{There is a shortcut connection in W2V2\xspace --- it adds the inputs of the convolution to its outputs and passes it to the Transformer. We apply average pooling with kernel and stride sizes $s$ in this shortcut path, which averages every $s$ steps into one so that it can be added to the outputs of the strided convolution.} The up-sampling layer is a transposed 1$d$ convolution with kernel size $s$ and stride size $s$ ($s = 2$ in our experiments). \shortautoref{fig:squeezed_cn} illustrates the context network squeezing. The fourth row in \shortautoref{tab:ablation_resolution} shows that using a squeezed context network further reduces the WER with a similar inference time. \subsection{Wave Feature Extractors Design} W2V2\xspace has the same number of channels in all layers of its convolutional wave feature extractor (WFE-O; ``O'' stands for original). \shortautoref{tab:wfe_flops} (left) shows FLOPs and inference time of a WFE-O with width 512. The first few layers consume much of the computation time, while the last three consume less than 10\% of the total computation.
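The front-heavy cost profile of WFE-O can be reproduced with a rough multiply-accumulate count per convolution layer (an illustrative sketch ignoring biases and normalization, not the profiler numbers in the table):

```python
# Per-layer multiply-accumulate count of a 1-d conv stack:
# out_len * kernel * in_channels * out_channels.

KERNELS = (7, 3, 3, 3, 3, 2, 2)   # WFE kernel sizes
STRIDES = (5, 2, 2, 2, 2, 2, 2)   # WFE strides (total down-sampling 320x)

def conv_costs(channels, T=16000):  # 1 second of 16kHz audio
    in_ch, costs = 1, []
    for k, s, out_ch in zip(KERNELS, STRIDES, channels):
        T = (T - k) // s + 1            # valid-convolution output length
        costs.append(T * k * in_ch * out_ch)
        in_ch = out_ch
    return costs

wfe_o = conv_costs([512] * 7)           # WFE-O with width 512

# The second layer alone is over half of the total cost...
assert max(wfe_o) / sum(wfe_o) > 0.5
# ...while the last three layers are under 10% combined, because the
# sequence length (and hence the cost) halves at every stride-2 layer
# while the channel count stays fixed.
assert sum(wfe_o[-3:]) / sum(wfe_o) < 0.10
```

The count mirrors the observation above: with a uniform channel width, cost concentrates in the early, long-sequence layers.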
We hypothesize that the first few layers are unnecessarily large, and that the computation can be more evenly distributed across layers. \paragraphsq{Compact Wave Feature Extractors (WFE-C)} We introduce a compact wave feature extractor (WFE-C) which doubles the number of channels each time the sequence length is down-sampled by a factor of 4. The progression of channel dimensionality is ($c$, 2$c$, 2$c$, 4$c$, 4$c$, 8$c$, 8$c$) across its 7 conv layers, where $c$ is a hyper-parameter. We keep the kernel sizes (7, 3, 3, 3, 3, 2, 2) and strides (5, 2, 2, 2, 2, 2, 2) of WFE-O. \shortautoref{tab:wfe_flops} (right) shows the FLOPs and inference time of a WFE-C-c128-l0 (i.e., $c = 128$) feature extractor. The inference time is distributed more evenly across layers. \shortautoref{tab:ablation_wfe} presents the inference time and WER of a squeezed wav2vec 2.0 with different WFEs. WFE-C-c128-l0 achieves similar performance to WFE-O-c512 while being much faster. \paragraphsq{WFEs Depth vs. Width} We study scaling up WFE-C by adding a point-wise (kernel size 1) convolutional layer after each original convolutional layer except for the first layer, which creates a 13-layer convolutional network with kernel sizes (7, 3, 1, 3, 1, 3, 1, 3, 1, 2, 1, 2, 1) and strides (5, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1). We refer to this model as WFE-C-c128-l1, where ``l1'' denotes one additional intermediate layer between every two original layers. The last two rows of \shortautoref{tab:ablation_wfe} show the performance of increasing the width (WFE-C-c160-l0) and increasing the depth (WFE-C-c128-l1). \input{tables/wfe_flops} \input{tables/ablation_wfe} \subsection{Feature Extractor vs. Context Network}\label{subsec:wfe_vs_context} We study where to allocate computation budgets: the feature extractor or the context network. \shortautoref{tab:ablation_fix_infer_time} shows this study with a controlled inference time.
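As a rough guide for such allocation, the per-layer multiply-accumulate counts of the two extractor families can be compared (an illustrative sketch ignoring biases and, for WFE-C-l0, any pointwise layers):

```python
# Compare per-layer multiply-accumulate counts of WFE-O-c512 and
# WFE-C-c128-l0 on 1 second of 16kHz audio.

KERNELS = (7, 3, 3, 3, 3, 2, 2)
STRIDES = (5, 2, 2, 2, 2, 2, 2)

def conv_costs(channels, T=16000):
    in_ch, costs = 1, []
    for k, s, out_ch in zip(KERNELS, STRIDES, channels):
        T = (T - k) // s + 1
        costs.append(T * k * in_ch * out_ch)
        in_ch = out_ch
    return costs

wfe_o = conv_costs([512] * 7)                                      # WFE-O-c512
c = 128
wfe_c = conv_costs([c, 2 * c, 2 * c, 4 * c, 4 * c, 8 * c, 8 * c])  # WFE-C-c128-l0

# Doubling channels as the sequence shrinks keeps per-layer cost roughly
# constant, so no single layer dominates, and the total is smaller.
assert max(wfe_o) / sum(wfe_o) > max(wfe_c) / sum(wfe_c)
assert sum(wfe_c) < sum(wfe_o)
```

The even distribution is the arithmetic reason WFE-C matches WFE-O's capacity profile at a fraction of the cost.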
The third row is a model with a squeezed context network as described in \shortautoref{subsec:resolution}. The fourth row replaces WFE-O-c256 with WFE-C-c96-l1 and achieves a better WER. The fifth row reduces the size of the WFE and increases the size of its context network. It outperforms the fourth row significantly in WER while even having a slightly lower inference time. Moreover, we observe that W2V2\xspace has a convolution layer with a particularly large kernel size of 128 at the bottom of the context network. The sixth row shows a reduction of the size of this kernel to 31, which allows using a larger WFE-C. It achieves similar WER to the fifth row while having lower pre-training and inference times. We provide an additional ablation study of the kernel size in \shortautoref{app:sec:k127}. \input{tables/ablation_fix_infer_time} \subsection{MLP Predictor Heads} \citet{Chen2020SimCLR} use MLP predictor heads instead of linear ones for unsupervised image representation learning, leading to better pre-trained features with little overhead during pre-training. We replace the linear projection of W2V2\xspace with a two-layer MLP with hidden size 4096, a ReLU activation in between, and BatchNorm~\citep{Ioffe2015BatchNA} after each linear layer. Because the predictor heads are discarded following pre-training, there is no inference overhead. The seventh row in \shortautoref{tab:ablation_fix_infer_time} shows the performance of using such an MLP predictor. Compared to the sixth row, using such an MLP predictor leads to better performance without any additional inference time. \shortautoref{app:sec:mlp} provides a more detailed ablation study on MLP predictor heads. \subsection{Raw Waveform vs. Filter Bank Features Inputs} \citet{Zhang2020PushingTL} propose a variant of wav2vec 2.0 that removes the wave feature extractor and uses 80-dimensional log-mel filterbank features (frame length 25ms and frame shift 10ms) and achieves superior performance with very large models.
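The frame-rate bookkeeping behind such filterbank frontends is simple and worth making explicit (an illustrative sketch; the stride configuration below mirrors common setups rather than any one implementation):

```python
# A log-mel frontend with a 10ms frame shift yields 100 frames per second;
# convolutional subsampling then divides that rate by the product of strides.

def frontend_rate(shift_ms, conv_strides):
    rate = 1000 / shift_ms
    for s in conv_strides:
        rate /= s
    return rate

assert frontend_rate(10, []) == 100.0       # raw filterbank frames
assert frontend_rate(10, [2, 2]) == 25.0    # two stride-2 convs: 40ms frames
assert frontend_rate(10, [2, 1]) == 50.0    # single 2x subsampling: 20ms frames
```

The 50Hz case matches the frame rate of W2V2's learned extractor, which is why a filterbank frontend can be swapped in for a controlled comparison.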
Their experiments apply other changes that can constitute confounding factors when interpreting the results, including using the Conformer architecture~\citep{Gulati2020ConformerCT} and RNN-T~\citep{graves2012sequence} instead of CTC for ASR fine-tuning. We conduct additional experiments to evaluate the impact of using raw waveform inputs. The last row of \shortautoref{tab:ablation_fix_infer_time} shows the performance of our model using an FBFE (Filter Bank Feature Extractor) instead\footnote{Because \citet{Zhang2020PushingTL} do not provide implementation details, we borrow the publicly available implementation from ESPNet~\citep{watanabe2018espnet}, set the stride of the second convolution to 1 to match the encoding resolution, and reduce the number of channels to 160 to ensure the inference time is within the constraint.}. While using log-mel filter bank features can achieve reasonable performance, using raw waveform inputs with our WFE-C still achieves a lower WER and faster inference. \section{\finalmodel{} (\finalmodelfull)}\label{sec:final_model} We combine our observations from \shortautoref{sec:method} to propose \textit{\finalmodel\ (\finalmodelfull)}, an efficient pre-trained model architecture. \finalmodel\ differs from W2V2\xspace in: (a) using a squeezed context network, (b) replacing WFE-O with WFE-C, (c) reallocating computing across different components, and (d) using MLP predictor heads with BatchNorm. \shortautoref{tab:sew_config} shows the hyper-parameters of W2V2 and \finalmodel{} with different inference budgets, and \shortautoref{tab:ls_100h_100k} shows model performance. \input{tables/sew_config} \paragraphsq{Scaling Up} We adopt a simple scaling-up recipe, leaving the search for more optimal scaled-up configurations, an open research problem~\citep{Tan2019EfficientNetRM,Dollr2021FastAA}, to future work.
We take row 7 of \autoref{tab:ablation_fix_infer_time} as \finalmodelsize{tiny}, which has similar inference times to W2V2\xspace with width 256. We increase the width by 1.5$\times$ to create \finalmodelsize{small}, which has the same Transformer size as \wavvecsize{base}. Based on our observation that deep models are favorable (\shortautoref{subsec:depth_vs_width}), we create \finalmodelsize{mid} by making the model twice as deep. \paragraphsq{\finalmodelD (\finalmodel\ with Disentangled Attention)} Disentangled attention~\citep{He2020DeBERTaDB} is a variant of self-attention with relative position representations~\citep{Shaw2018SelfAttentionWR}, which outperforms Transformer's multi-head attention~\citep{Vaswani2017AttentionIA} on various NLP tasks. Unlike the Transformer, which adds absolute positional embeddings to the content embeddings at the beginning, disentangled attention keeps the positional embeddings and content embeddings separate and has 3 components in its attention weight computation: (a) content-to-content, (b) content-to-position, and (c) position-to-content attentions (\shortautoref{app:sec:disent_attn}). The disentangled computation requires more matrix multiplication operations compared to conventional self-attention, and is slower with a similar number of parameters. To retain similar computation costs, we reduce the Transformer width to halve its parameter count. \finalmodelD benefits from the advanced attention mechanism, while overall displaying a faster inference time. In \shortautoref{sec:exp}, we show that \finalmodelD outperforms a 2$\times$ larger \finalmodel~counterpart. To have a model with inference time comparable to \wavvecsize{base}, we further increase the width of the context network of \finalmodelD by 1.5$\times$ to create \finalmodelDsize{base}. Because it is still slightly faster than \wavvecsize{base}, we further scale up the width of the WFE by 1.5$\times$, leading to \finalmodelDsize{base+}.
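For reference, the three components of the disentangled attention score can be sketched as follows, in the notation of \citet{He2020DeBERTaDB} (a summary sketch; see \shortautoref{app:sec:disent_attn} for the full description):

```latex
% ${\bm{Q}}^c, {\bm{K}}^c$ are content projections, ${\bm{Q}}^r, {\bm{K}}^r$ are
% relative-position projections, and $\delta(i,j)$ is the clipped relative
% distance between positions $i$ and $j$:
\begin{equation*}
\tilde{A}_{i,j} =
\underbrace{{\bm{Q}}^c_i {{\bm{K}}^c_j}^\top}_{\text{content-to-content}}
+ \underbrace{{\bm{Q}}^c_i {{\bm{K}}^r_{\delta(i,j)}}^\top}_{\text{content-to-position}}
+ \underbrace{{\bm{K}}^c_j {{\bm{Q}}^r_{\delta(j,i)}}^\top}_{\text{position-to-content}},
\end{equation*}
% with the scores scaled by $1/\sqrt{3d}$ before the softmax to account for
% the three summed terms.
```

The two extra terms are the additional matrix multiplications responsible for the slowdown at a fixed parameter count.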
\section{Introduction} \label{sec:intro} \begin{wrapfigure}{r}{0.4\textwidth} \vspace{-20pt} \includegraphics[width=\linewidth]{figs/w2v2_vs_sew.pdf} \caption{Word error rate (WER) and average utterance inference time on LibriSpeech (dev-other) of wav2vec 2.0 and our \finalmodel and \finalmodelD models fine-tuned with 100h labeled data for 100K updates.} \label{fig:w2v2_vs_sew} \vspace{-5pt} \end{wrapfigure} Recently, there has been significant interest in self-supervised pre-training using unlabeled audio data to learn versatile feature representations that are subsequently fine-tuned on task-specific annotated audio~\citep{Zhang2020PushingTL,Wang2021LargeScaleSA,Xu2020SelftrainingAP,Pepino2021EmotionRF}. This follows similar trends in natural language processing~\cite[NLP;][]{devlin2018bert,liu2019roberta,He2020DeBERTaDB} and computer vision~\cite[CV;][]{he2019moco,Chen2020SimCLR,grill2020bootstrap}. Perhaps the most prominent example of this class of models is wav2vec 2.0~\citep[W2V2;][]{baevski2020wav2vec2}, which achieves a competitive word error rate (WER) following fine-tuning on only ten minutes of transcribed (labeled) data, whereas prior supervised approaches often require nearly a thousand hours. If recent developments in NLP and CV are any indication, the importance of such pre-trained audio models that are fine-tuned on expert tasks will only increase. Indeed, W2V2 has already been studied with a focus on the impact of pre-training data~\citep{Conneau2020UnsupervisedCR,Hsu2021RobustW2}, pre-training task~\citep{hsu2020hubert}, or its combination with pseudo labelling~\citep{Xu2020SelftrainingAP,Zhang2020PushingTL}. In this paper, we study W2V2's model design and possible trade-offs between its components. Our focus is on efficiency for practical applications, rather than extending the model.
As W2V2-type models become increasingly common, understanding their efficiency trade-offs is critical for transferring their benefits from the lab to the real world, where any increase in efficiency can substantially reduce the inference costs and energy footprints across a plethora of real-world applications. We study several aspects of the W2V2 model. We focus on automatic speech recognition (ASR), while retaining the standard pre-training and few-sample fine-tuning setup.\footnote{While pre-training time can be considered a secondary metric for efficiency, it is not our primary goal.} First, we study how the temporal resolution of the network trades off performance and efficiency, and show that using different resolutions for computing pre-trained representations and ASR decoding significantly reduces inference time, while retaining similar performance. Second, we propose an efficient family of waveform feature extractors, which achieves similar performance in half the inference time of the original W2V2 extractor. Finally, we study the impact of shifting model expressivity between different parts of the network. We observe that it is better to assign more parameters to later parts in the pre-trained network, compared to increasing capacity closer to the input waveform. We also see that increasing the expressivity of the pre-training predictor heads increases performance, while not influencing downstream-task computation as these heads are discarded. We combine our observations to propose two models: \finalmodel\ (\finalmodelfull) and \finalmodel-D (\finalmodel\ with disentangled attention~\citep{He2020DeBERTaDB}). We pre-train \finalmodel\ and \finalmodel-D on 960 hours of unlabelled audio from the LibriSpeech dataset~\citep{Panayotov2015LibrispeechAA}, and fine-tune on multiple ASR tasks. \finalmodel\ yields a significantly better performance-efficiency trade-off than the original W2V2.
For example, with 100h labeled data, compared to a \wavvecsize{tiny} model, \finalmodel\ reduces the LibriSpeech test-clean WER from 22.8\% to 10.6\% while being slightly faster, even outperforming a larger W2V2\xspace model with 12.8\% WER. Compared to the official \wavvecsize{large} release, our best \finalmodelDsize{base+} achieves 2.7$\times$ and 3.2$\times$ speed-ups for inference and pre-training with comparable WER using half the number of parameters. Compared to \wavvecsize{base}, our \finalmodelDsize{mid} achieves a 1.9$\times$ inference speed-up with a 13.5\% relative reduction in WER. \shortautoref{fig:w2v2_vs_sew} shows the performance-efficiency trade-offs with various model sizes. \finalmodel-D outperforms W2V2 in most pre-training settings, when experimenting with LibriSpeech~\citep{Panayotov2015LibrispeechAA}, TED-LIUM 3~\citep{Hernandez2018TEDLIUM3T}, VoxPopuli~\citep{Wang2021VoxPopuliAL}, and Switchboard~\citep{switchboard} datasets. Pre-trained models and code are available at \url{https://github.com/asappresearch/sew}. \section{Related Work}\label{sec:related_works} \paragraphsq{Unsupervised Audio Representation Learning} Contrastive predictive coding (CPC) is a general unsupervised learning method for speech, vision, text, and reinforcement learning~\citep{Oord2018RepresentationLW}. When applied to speech, it uses past audio to predict future audio, similar to language modeling~\citep{mikolov2010recurrent,dauphin2017language,kaplan2020scaling} but with a contrastive loss. Wav2vec~\citep{Schneider2019wav2vecUP} further improves the CPC model architecture design and focuses on unsupervised pre-training for end-to-end automatic speech recognition. Roughly speaking, wav2vec includes a feature extractor that generates a sequence of vectors from raw waveform audio, and a context network that encodes the features from the recent past to predict the features in the immediate future.
This context network is only used to learn useful feature representations, and is typically discarded after pre-training. Recently, \citet{Baevski2020vqwav2vecSL} introduced vq-wav2vec and a combination of vq-wav2vec with a discrete BERT-like model~\citep{Devlin2019BERT,Baevski2019EffectivenessOS}. W2V2~\citep{baevski2020wav2vec2} combines vq-wav2vec and the BERT-like model into an end-to-end setup, where the BERT portion functions as the context network, but is not discarded. More recently, \citet{hsu2020hubert} propose HuBERT and show that W2V2 can be pre-trained with clustered targets instead of contrastive objectives. Besides ASR-focused works, there is significant interest in learning representations for other speech tasks~\citep{synnaeve2016temporal,chung2018unsupervised,chuang2019speechbert,song2019speech}, music~\citep{yang2021deeper,zhao2021musicoder}, and general audio~\citep{saeed2021contrastive,gong2021psla,niizumi2021byol,wang2021multimodal}. \paragraphsq{End-to-end Automatic Speech Recognition (ASR)} As large datasets and fast compute become available, end-to-end ASR models~\citep{amodei2016deep,Zhang2020PushingTL} increasingly achieve state-of-the-art results, outperforming HMM-DNN hybrid systems~\citep{abdel2012applying,hinton2012deep}. End-to-end ASR models can be roughly categorized into three main types: connectionist temporal classification~\citep[CTC;][]{graves2013speech}, RNN transducers~\citep[RNN-T;][]{graves2012sequence,Han2020ContextNetIC,Gulati2020ConformerCT}, and sequence-to-sequence (a.k.a. Listen, Attend and Spell models)~\citep[Seq2seq;][]{chan2016listen,dong2018speech,watanabe2018espnet}. CTC models are extremely fast for batch decoding; RNN-T variants are often used in real-time systems; Seq2seq models are more popular in offline settings.
Recently, and following success on NLP tasks, there has been a transition in speech processing towards the Transformer architecture~\citep{Vaswani2017AttentionIA,dong2018speech} and its variants~\citep{Zhang2020TransformerTA,baevski2020wav2vec2,Gulati2020ConformerCT,Zhang2020PushingTL,yeh2019transformer}. \section{Technical Background: Wav2Vec 2.0 (W2V2)}\label{sec:w2v2} W2V2 is made of a waveform feature extractor that generates a sequence of continuous feature vectors, each encoding a small segment of audio, and a context network that maps these vectors to context-dependent representations. \begin{wrapfigure}{r}{0.43\textwidth} \vspace{-20pt} \includegraphics[width=\linewidth]{figs/w2v2_framework.pdf} \caption{Wav2vec 2.0 framework.} \label{fig:w2v2_framework} \vspace{-20pt} \end{wrapfigure} During pre-training, some of the features are masked out, and are not seen by the context network. In parallel, the pre-masking features are discretized as prediction targets. The context network aims to discriminate the discretized version of the original features at the masked positions from a pool of negative samples using the InfoNCE loss~\citep{Oord2018RepresentationLW}. \mbox{\shortautoref{fig:w2v2_framework}} shows the W2V2\xspace framework, including (a) a feature extractor, (b) a context network, (c) an optional quantization module, and (d) two projection heads. \paragraphsq{Wave Feature Extractor (WFE)} The wave feature extractor $f(\cdot)$ encodes and downsamples the raw waveform audio inputs ${\mathbf{X}} = ({\bm{x}}_1, ..., {\bm{x}}_{T_\text{input}}) \in \mathbb{R}^{T_\text{input} \times d_\text{input}}$ ($d_\text{input} = 1$ for single-channel audio) into an array of feature vectors ${\mathbf{Z}} = f({\mathbf{X}}) = ({\bm{z}}_1, ..., {\bm{z}}_T) \in \mathbb{R}^{T \times d_\text{feat}}$. For example, W2V2\xspace maps 16KHz audio sequences to 50Hz frames using a convolutional WFE with receptive field size of 400 and stride size of 320.
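These constants can be sanity-checked with a few lines (an illustrative check, not part of the model):

```python
# A receptive field of 400 samples at 16kHz is a 25ms window, a stride of
# 320 samples is 20ms, i.e. 50 output frames per second of audio.

SR, RF, STRIDE = 16000, 400, 320

assert 1000 * RF / SR == 25.0        # window length in ms
assert 1000 * STRIDE / SR == 20.0    # stride in ms
assert SR / STRIDE == 50             # frame rate in Hz

def num_frames(T_input):
    """Output length of a valid convolution with the WFE's total
    receptive field and stride."""
    return (T_input - RF) // STRIDE + 1

assert num_frames(RF) == 1                       # exactly one window fits
assert num_frames(16000) == 49                   # ~50 frames per second
assert num_frames(32080) == (32080 - 80) // 320  # the simplified closed form
```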
Each feature vector encodes the raw signals within a 25ms ($= 1000 / 16000 * 400$) window with a stride of 20ms ($= 1000 / 16000 * 320$). The reduced sequence length is $T = \frac{T_\text{input} - 400}{320} + 1 = \frac{T_\text{input} - 80}{320}$. \paragraphsq{Context Network} The context network $g(\cdot)$ follows a similar principle to masked language models in NLP (e.g., BERT~\citep{devlin2018bert} or RoBERTa~\citep{liu2019roberta}). During pre-training, each ${\bm{z}}_t$ is masked and replaced with a trainable mask vector ${\bm{m}}$ with a predefined probability $p$. To illustrate, ${\mathbf{Z}} = ({\bm{z}}_1, {\bm{z}}_2, {\bm{z}}_3, {\bm{z}}_4, {\bm{z}}_5, {\bm{z}}_6, ..., {\bm{z}}_T)$ can become ${\mathbf{Z}}' = ({\bm{z}}_1, {\bm{m}}, {\bm{m}}, {\bm{z}}_4, {\bm{m}}, {\bm{z}}_6, ..., {\bm{z}}_T)$. The context network maps this masked sequence to a sequence of contextual representations ${\mathbf{C}} = g({\mathbf{Z}}') = ({\bm{c}}_1, ..., {\bm{c}}_T) \in \mathbb{R}^{T \times d_\text{feat}}$ to incorporate context information. Even if ${\bm{z}}_t$ is masked and replaced with ${\bm{m}}$, we anticipate that ${\bm{c}}_t$ can recover the information in ${\bm{z}}_t$ because it contains information from surrounding unmasked input vectors. The context network is usually implemented with a Transformer architecture~\citep{Vaswani2017AttentionIA,Gulati2020ConformerCT}. \paragraphsq{Quantization Module} The quantization module $q(\cdot)$ maps the original (pre-masking) vector ${\bm{z}}_t$ into a quantized form ${\bm{q}}_t = q({\bm{z}}_t) \in \mathbb{R}^{d_\text{feat}}$ at each masked position $t$. Quantized ${\bm{q}}_t$'s are the prediction targets. The quantization module is based on Gumbel softmax with a straight-through estimator~\citep{gumbel1954statistical,jang2016categorical,maddison2014sampling}.
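The hard codebook selection can be sketched as a forward pass only (a toy sketch: Gumbel noise and the straight-through gradient are omitted, and all dimensions below are illustrative values, not the model's):

```python
# For each of G groups, score the V codebook entries, pick the argmax,
# and concatenate the selected embeddings into a single target vector.
import math
import random

random.seed(0)
G, V, d_feat = 2, 4, 8
d_code = d_feat // G
# Scoring weights W[g][v] and codebook embeddings E[g][v] (random toys).
W = [[[random.gauss(0, 1) for _ in range(d_feat)] for _ in range(V)] for _ in range(G)]
E = [[[random.gauss(0, 1) for _ in range(d_code)] for _ in range(V)] for _ in range(G)]

def quantize(z, tau=1.0):
    q = []
    for g in range(G):
        scores = [sum(w_i * z_i for w_i, z_i in zip(W[g][v], z)) / tau
                  for v in range(V)]
        v_star = scores.index(max(scores))  # argmax (softmax is monotonic)
        q.extend(E[g][v_star])              # concatenate e_{g, v*}
    return q

z = [random.gauss(0, 1) for _ in range(d_feat)]
assert len(quantize(z)) == d_feat           # G segments of d_feat / G each
```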
There are $G$ codebooks and each codebook has $V$ entries, giving $G \times V$ vectors ${\bm{e}}_{g,v} \in \mathbb{R}^{\frac{d_\text{feat}}{G}}$ where $g \in \{1, ..., G\}$ and $v \in \{1, ..., V\}$. For each group $g$, the probability of assigning ${\bm{z}}_t$ to the $v$-th entry is $p_{g,v} = \frac{\exp({\bm{W}}^g_v \cdot {\bm{z}}_t / \tau_Q)}{\sum_{v'=1}^V \exp({\bm{W}}^g_{v'} \cdot {\bm{z}}_t / \tau_Q)}$, where ${\bm{W}}^g \in \mathbb{R}^{V \times d_\text{feat}}$ is a trainable matrix and $\tau_Q$ is the quantization temperature. For each group $g$, ${\bm{z}}_t$ is assigned to the $v_g^*$-th entry where $v_g^* = \argmax_{v} p_{g,v}$. The corresponding embedding vectors $({\bm{e}}_{1, v_1^*}, ..., {\bm{e}}_{G, v_G^*})$ are concatenated into a single vector ${\bm{q}}_t \in \mathbb{R}^{d_\text{feat}}$; these vectors form the quantized feature sequence ${\mathbf{Q}} = ({\bm{q}}_1, ..., {\bm{q}}_T) \in \mathbb{R}^{T \times d_\text{feat}}$. \paragraphsq{Projection Heads} Two linear projection heads $p_c(\cdot)$ and $p_q(\cdot)$ reduce the dimensionality of ${\mathbf{C}}$ and ${\mathbf{Q}}$. For a ${\bm{z}}_t$ that is masked and replaced with ${\bm{m}}$, we want $p_c({\bm{c}}_t) \in \mathbb{R}^{d_\text{proj}}$ to be similar to $p_q({\bm{q}}_t) \in \mathbb{R}^{d_\text{proj}}$. \citet{baevski2020wav2vec2} do not distinguish between $p_c$ and $g$ or $p_q$ and $q$ in their original notation. However, we keep the distinctions, as they serve different roles and are discarded before downstream fine-tuning. \paragraphsq{Pre-training Objective} W2V2\xspace combines contrastive and diversity losses in the pre-training loss: \begin{equation} {\mathcal{L}} = {\mathcal{L}}_m + \alpha {\mathcal{L}}_d\;\;. \end{equation} The goal of the contrastive loss ${\mathcal{L}}_m$ is to make the projected outputs $p_c({\bm{c}}_t)$ close to $p_q({\bm{q}}_t)$ and far away from any other $p_q({\bm{q}}_{t'})$, where ${\bm{z}}_t$ is masked and $t'$ is any other position in the same sequence.
W2V2\xspace uses an InfoNCE loss~\citep{Oord2018RepresentationLW}: \begin{small} \begin{equation} {\mathcal{L}}_m = \mathbb{E}_{t \text{\ is masked}} \left[ -\log \frac{\exp(\mathrm{sim}(p_c({\bm{c}}_t), p_q({\bm{q}}_t)) / \kappa)}{\sum_{{\bm{q}}_{t'} \in \mathbb{Q}} \exp(\mathrm{sim}(p_c({\bm{c}}_t), p_q({\bm{q}}_{t'})) / \kappa)} \right], \end{equation} \end{small} \noindent where $\mathrm{sim}({\bm{a}}, {\bm{b}}) = \frac{{\bm{a}}^\top {\bm{b}}}{\norm{{\bm{a}}} \norm{{\bm{b}}}}$, $\mathbb{Q}$ is a set containing the positive sample ${\bm{q}}_t$ and $K$ negative samples, and $\kappa$ is the temperature. The expectation is computed over masked positions only. The diversity loss ${\mathcal{L}}_d$ prevents the quantization module from collapsing to a trivial mapping (e.g., by collapsing all inputs to a single discrete code). It encourages the quantization probability $p_{g, v}$ to be evenly distributed: \begin{small} \begin{equation} {\mathcal{L}}_d = \mathbb{E}_{t} \left[ 1 - \frac{1}{GV} \sum_{g=1}^G \exp \left( - \sum_{v=1}^V p_{g,v} \log p_{g,v} \right) \right]. % \end{equation} \end{small}
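The contrastive term above can be exercised numerically for a single masked position (a toy sketch with made-up vectors, not the training code):

```python
# InfoNCE for one masked position: cosine similarity between the context
# projection and the positive target, contrasted against negatives,
# with temperature kappa.
import math

def cos(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def info_nce(c_t, positive, negatives, kappa=0.1):
    logits = [cos(c_t, positive) / kappa] + [cos(c_t, n) / kappa for n in negatives]
    m = max(logits)                                      # stabilize the log-sum-exp
    log_norm = m + math.log(sum(math.exp(l - m) for l in logits))
    return -(logits[0] - log_norm)                       # -log softmax of the positive

c_t = [1.0, 0.0, 0.5]
pos = [0.9, 0.1, 0.4]                           # close to c_t -> low loss
negs = [[-1.0, 0.2, 0.3], [0.0, 1.0, -0.5]]
low = info_nce(c_t, pos, negs)
high = info_nce(c_t, negs[0], [pos, negs[1]])   # wrong "positive" -> higher loss
assert 0 < low < high
```

The loss is minimized by pulling the projected context vector toward its quantized target and pushing it away from the distractors, exactly as the equation states.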
\section{Introduction} One of the most important constitutive relations in continuum mechanics is Fourier's law of heat conduction. It states that, in an homogeneous, isotropic and thermally conducting medium, the heat flux $\boldsymbol{q}$ is determined by \[ \boldsymbol{q} = - \kappa \nabla \theta, \] where $\theta = \theta(x,t)$ denotes the absolute temperature at a point $x$ of the medium at time $t > 0$ and $\kappa > 0$ is the thermal conductivity. Fourier's law is also a key ingredient in the compressible Navier-Stokes system of equations that describes the dynamics of a viscous compressible heat-conducting fluid (cf. \cite{Da4e}), inasmuch as the equation of conservation of energy underlies the heat transfer law proposed by Fourier. One of the main drawbacks of Fourier's constitutive law, however, is that it predicts infinite speed of propagation of heat, that is, thermal disturbances in a continuous medium will be felt instantly (although unequally) at all other points of the medium no matter how distant they are located. This unphysical behavior violates the well-established principle of \emph{causality} in continuum mechanics. Even though Fourier's law has been widely and successfully used to approximate the phenomenon of heat propagation in continuous media, other models have been proposed to correct this unrealistic feature. One of the best known is the \emph{Cattaneo-Maxwell heat transfer law} (see, e.g., \cite{JoPr89}), \begin{equation} \label{CMlaw} \tau \boldsymbol{q}_t + \boldsymbol{q} = - \kappa \nabla \theta, \end{equation} where $\boldsymbol{q}_t = \partial \boldsymbol{q}/\partial t$ denotes the partial time-derivative of the heat flux and $\tau > 0$ is a constant. 
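The finite propagation speed implied by this law can be seen with a standard textbook calculation (sketched here; $\rho$ denotes the density and $c$ the specific heat of the medium, symbols not otherwise used below):

```latex
% For a rigid conductor at rest, combine \eqref{CMlaw} with the energy
% balance $\rho c\,\theta_t = -\nabla \cdot \boldsymbol{q}$. Taking the
% divergence of \eqref{CMlaw} and eliminating $\boldsymbol{q}$ yields the
% damped wave (telegraph) equation
\begin{equation*}
\tau \theta_{tt} + \theta_t = \frac{\kappa}{\rho c}\, \Delta \theta,
\end{equation*}
% whose thermal disturbances propagate with the finite speed
% $\sqrt{\kappa/(\rho c \tau)}$; the parabolic Fourier case is recovered
% in the limit $\tau \to 0$.
```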
In the constitutive equation \eqref{CMlaw} (which can be traced back to the work of Maxwell \cite{Maxw1867} and was later reformulated by Cattaneo \cite{Catt49}), the parameter $\tau$ plays the role of an intrinsic relaxation time, or the time lag required for heat conduction to happen within a volume element once the temperature gradient has been established. Thus, this new term represents some sort of ``thermal inertia''. Under this modification of Fourier's law, the flow of heat within the medium does not occur instantaneously but through the propagation of thermal waves with finite speed, a phenomenon known as \emph{second sound} (cf. \cite{Jrd14,LiSt78,Strau11}). Even though the Cattaneo-Maxwell heat transfer law preserves the causality principle for heat propagation in \emph{steady} continuous media, it is incompatible with the Galilean postulate of frame-indifference when the medium is in motion. Christov and Jordan \cite{ChJo05} have shown, for instance, that equation \eqref{CMlaw} violates this fundamental principle of classical mechanics and leads to paradoxical descriptions of the evolution of thermal waves. The reason is simple: thermal inertia should be a property of the material point, and the rate of change of the heat flux with respect to time must be the result of a change in the geometrical point (the partial time derivative) plus a change due to the transport of material quantities if the medium is in motion. Consequently, Christov and Jordan propose that the partial time derivative in \eqref{CMlaw} should be replaced by a \emph{material} derivative. Under this viewpoint, Christov \cite{Chr09} formulated a material, frame-indifferent version of the Cattaneo-Maxwell law that replaces the partial time derivative of the heat flux by a Lie-Oldroyd upper convected material derivative (cf. \cite{Old50}).
It reads \begin{equation} \label{Christovlaw} \tau \Big( \boldsymbol{q}_t + (\boldsymbol{u} \cdot \nabla) \boldsymbol{q} + (\nabla \cdot \boldsymbol{u}) \boldsymbol{q} - (\boldsymbol{q} \cdot \nabla) \boldsymbol{u} \Big) + \boldsymbol{q} = - \kappa \nabla \theta, \end{equation} where the vector $\boldsymbol{u} = \boldsymbol{u}(x,t)$ is the velocity field of the medium. Coupling thermal relaxation of Cattaneo's type with the description of fluid flow has attracted the attention of the scientific community for a long time (see, e.g., \cite{CaMo72,JoPr89,LiSt78,Strau11}). In the description of moving continuous media, it is fundamental to consider a heat transfer law which, like \eqref{Christovlaw}, preserves the objectivity principle of Galileo. Hence, it is natural to couple Christov's constitutive law with the basic balance laws of mass, momentum and energy. The result is a system of equations of non-conservative type, due to the fact that now the heat flux variable $\boldsymbol{q}$ satisfies an evolutionary equation (the constitutive law) which does not express a balance law. In the context of moving continuous media, Christov's constitutive law \eqref{Christovlaw} has been recently investigated in the study of incompressible fluid flow \cite{TiZa11,Strau10b}, of viscoelastic solids \cite{Morr10}, and of compressible fluids with applications to acoustic wave propagation \cite{Jrd14,Strau10a}. As Straughan \cite{Strau10a} and Christov \cite{Chr09} point out, it is important to test this new model. In this paper, we consider a compressible, viscous, heat-conducting fluid exhibiting thermal relaxation according to Christov's constitutive heat transfer law \eqref{Christovlaw}. We refer to the resulting equations as \textit{Cattaneo-Christov systems}.
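For a rigid conductor ($\boldsymbol{u} \equiv 0$) with constant heat capacity $c$, combining \eqref{CMlaw} with the one-dimensional energy balance $\rho c\, \theta_t = - q_x$ yields the damped wave (telegraph) equation $\tau \theta_{tt} + \theta_t = \chi \theta_{xx}$, with $\chi = \kappa/(\rho c)$ and signal speed $\sqrt{\chi/\tau}$. A minimal finite-difference sketch of this finite-speed behavior (all parameter values hypothetical):

```python
import numpy as np

# Cattaneo-Maxwell law in a rigid conductor gives the telegraph equation
#   tau*T_tt + T_t = chi*T_xx,  with signal speed sqrt(chi/tau).
# All parameter values below are hypothetical.
tau, chi = 1.0, 1.0                      # thermal wave speed = 1
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]
dt = 0.4 * dx / np.sqrt(chi / tau)       # CFL-limited explicit time step

Tn = np.exp(-40.0 * x**2)                # localized initial temperature pulse
Tm = Tn.copy()                           # start from rest: T_t(x, 0) = 0
steps = int(round(4.0 / dt))             # integrate up to t = 4
for _ in range(steps):
    lap = np.zeros_like(Tn)
    lap[1:-1] = (Tn[2:] - 2.0 * Tn[1:-1] + Tn[:-2]) / dx**2
    # leapfrog in time, with the damping term T_t differenced centrally
    Tp = (chi * dt**2 * lap + tau * (2.0 * Tn - Tm) + 0.5 * dt * Tm) / (tau + 0.5 * dt)
    Tm, Tn = Tn, Tp

front = np.sqrt(chi / tau) * steps * dt  # causal front |x| = t*sqrt(chi/tau)
x_signal = np.abs(x[np.abs(Tn) > 1e-3]).max()
print(x_signal, front)   # signal confined to |x| <~ front + initial pulse width
```

In contrast, the classical heat equation ($\tau = 0$) would spread the pulse over the whole line instantly, which is precisely the acausal behavior discussed above.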
Our investigation focuses on one of the most important properties of evolutionary systems of partial differential equations of this type: their \emph{strict dissipativity}, or, in lay terms, the property that solutions to the linearized problem around equilibrium states show some decay structure (see the precise statement in section \ref{secgencoup} below). In physical terms, this property is tantamount to requiring that the dissipation terms do not allow solutions of traveling wave type to be, simultaneously, solutions to the associated hyperbolic system without dissipation. This characterization of strict dissipativity, known as \textit{genuine coupling}, has been extensively studied by Kawashima, Shizuta and collaborators \cite{Ka86,KaSh88b,KY04,ShKa85}. In the well-known case of the compressible Navier-Stokes system, for example, the terms due to viscosity and to thermal diffusion (Fourier's law) make the system strictly dissipative (cf. Shizuta and Kawashima \cite{ShKa85}). In the case of Cattaneo-Christov systems, thermal relaxation and viscous terms account for dissipation effects. Are these terms truly dissipative? Our contribution is to answer this question in the affirmative for one-dimensional Cattaneo-Christov systems. We establish strict dissipativity in two cases: on one hand, for the system of equations with viscosity and thermal relaxation terms combined, and on the other, for the inviscid counterpart in which thermal relaxation is the only dissipative term and the viscosity coefficients are set to zero. For that purpose, we first recast the system of equations as a quasi-linear system for which a symmetrizer can be found. Once the system is put into symmetric form, it is shown that the dissipation and the hyperbolic terms are genuinely coupled. Furthermore, we explicitly show the existence of \textit{compensating matrix functions} (cf.
\cite{Hu05,ShKa85}) of the state variables which allow us, in turn, to verify directly the strict dissipativity of the one-dimensional Cattaneo-Christov system and to establish energy estimates that yield the decay of solutions to the linearized problem around equilibrium states. These results apply both to the thermally relaxed fluid without viscosity and to the case with relaxation and viscosity effects combined. \subsection*{Plan of the paper} In section \ref{secmodel} the Cattaneo-Christov model for one-dimensional compressible flow is introduced and recast as a system of equations in quasi-linear form. Section \ref{sechyp} contains the verification of the hyperbolicity of the system in the absence of dissipation terms and it is shown that Cattaneo-Christov systems are symmetrizable in one spatial dimension. In section \ref{secgencoup}, the genuine coupling condition of Kawashima and Shizuta, as well as the equivalence theorem for symmetric systems, are recalled. Moreover, it is shown that Cattaneo-Christov systems are genuinely coupled. Explicit forms of compensating functions in both the viscous and thermally relaxed cases are also provided via direct inspection. Section \ref{seclindec} contains the derivation of decay rates for solutions to the linearized system around equilibrium states. The paper ends with further discussion on the results and their possible extensions. \section{Cattaneo-Christov systems for compressible fluid flow} \label{secmodel} Consider the basic equations for a compressible, viscous, heat-conducting fluid in one-dimensional space, \begin{equation} \label{eqbasics} \begin{aligned} \rho_t + (\rho u)_x &= 0,\\ (\rho u)_t + (\rho u^2)_x &= {\sigma}_x, \\ \Big( \rho ( e + \tfrac{1}{2} u^2)\Big)_t + \Big( \rho u ( e + \tfrac{1}{2} u^2)\Big)_x &= ({\sigma} u)_x - q_x, \end{aligned} \end{equation} where $x \in \mathbb{R}$ and $t > 0$.
According to custom, $\rho$ and $u$ denote the mass density and the velocity of the fluid, respectively, whereas $\sigma$ is the stress and has the form \begin{equation} \label{eqstress} \sigma = (2 \mu + \lambda) u_x - p. \end{equation} Here $p$ is the thermodynamic pressure, $e$ denotes the internal energy density, $q$ is the heat flux and the viscosity coefficients, $\lambda$ and $\mu$, satisfy the inequalities \begin{equation} \label{eqvisccoeff} \mu \geq 0, \quad \tfrac{2}{3} \mu + \lambda \geq 0. \end{equation} The heat flux satisfies a constitutive relation that has the form of a heat transfer law. Instead of the usual Fourier's law, namely $q = - \kappa \theta_x$, where $\kappa = \kappa(\rho,\theta) > 0$ is the heat conductivity coefficient, we shall assume that the fluid, along with the property of conducting heat, exhibits thermal relaxation according to the following constitutive equation, \begin{equation} \label{eqCC} \tau \big(q_t + u q_x\big) + q = - \kappa \theta_x, \end{equation} where $\tau > 0$ is a constant characteristic relaxation time. Equation \eqref{eqCC} is a modification of the Cattaneo-Maxwell transfer law, namely, $\tau q_t + q = - \kappa \theta_x$. It is the one-dimensional version of the frame-indifferent material constitutive law \eqref{Christovlaw} proposed by Christov in \cite{Chr09}. The evolution of the flow is thus governed by the three balance laws for mass, momentum and energy \eqref{eqbasics} and the constitutive evolution equation \eqref{eqCC}. Of course, the system should be closed by an equation of state for the fluid under consideration that determines the form of $p$ and $e$. In this paper, we make the following assumptions about the fluid: \begin{equation} \label{H1} \tag{H$_1$} \begin{minipage}[c]{4.0in} The independent thermodynamic quantities are the mass density, $\rho > 0$, and the absolute temperature, $\theta > 0$.
They vary within the domain \[ \mathcal{D} := \{ (\rho, \theta) \in \mathbb{R}^2 \, : \, \rho > 0, \, \theta > 0 \}. \] \end{minipage} \end{equation} \bigskip \begin{equation} \label{H2} \tag{H$_2$} \begin{minipage}[c]{4.0in} The pressure $p$, the internal energy density $e$, the heat conductivity coefficient $\kappa$ and the viscosity coefficients $\lambda$ and $\mu$ are smooth functions of $(\rho,\theta) \in \mathcal{D}$, \[ p, e, \lambda, \mu, \kappa \in C^\infty(\mathcal{D}). \] In addition, $\lambda$ and $\mu$ satisfy inequalities \eqref{eqvisccoeff} and $\kappa > 0$ for all $(\rho,\theta) \in \mathcal{D}$. \end{minipage} \end{equation} \bigskip \begin{equation} \label{H3} \tag{H$_3$} \begin{minipage}[c]{4.0in} The fluid satisfies the following conditions: \[ p> 0, \; \; p_\rho > 0, \;\; p_\theta > 0,\;\;e_\theta > 0, \qquad \text{for all} \;\; (\rho,\theta) \in \mathcal{D}. \] \end{minipage} \end{equation} \bigskip Finally, for convenience in notation we define the combined viscosity coefficient $\nu \in C^\infty(\mathcal{D})$ as \[ \nu(\rho,\theta) := 2 \mu + \lambda. \] Notice that $\nu \geq 0$ on $\mathcal{D}$ in view of \eqref{eqvisccoeff}. \begin{remark} Assumption \eqref{H3} is clearly satisfied by an ideal gas that satisfies Boyle's law, \[ p(\rho, \theta) = R \rho \theta, \qquad e(\rho, \theta) = \frac{R \theta}{\gamma -1}, \] where $R > 0$ is the universal gas constant and $\gamma > 1$ is the adiabatic exponent. Hypotheses \eqref{H3} are, of course, more general and applicable to compressible fluids satisfying the standard assumptions of Weyl \cite{We49}, namely, adiabatic increase of pressure effects compression ($p_\rho > 0$), a generalized Gay-Lussac's law ($p_\theta > 0$) and the increase of internal energy due to an increase of temperature at constant volume ($e_\theta > 0$). 
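These sign conditions can be verified symbolically for the ideal-gas closure; a minimal sketch, where the positive symbol \texttt{gm1} is a stand-in for $\gamma - 1 > 0$:

```python
import sympy as sp

# Ideal gas closure p = R*rho*theta, e = R*theta/(gamma - 1); the positive
# symbol gm1 stands in for gamma - 1 > 0.
R, rho, theta, gm1 = sp.symbols('R rho theta gm1', positive=True)
p = R * rho * theta
e = R * theta / gm1

# Weyl-type hypotheses (H3): p, p_rho, p_theta, e_theta positive on D
conds = [p, sp.diff(p, rho), sp.diff(p, theta), sp.diff(e, theta)]
print([c.is_positive for c in conds])    # [True, True, True, True]
```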
\end{remark} In this work we consider the basic equations \eqref{eqbasics} of conservation of mass, momentum and energy, coupled together with the evolution equation for the heat flux \eqref{eqCC}. As a result, we obtain the following quasi-linear system of equations \begin{equation} \label{CCvisc} \begin{aligned} \rho_t + (\rho u)_x &= 0,\\ (\rho u)_t + (\rho u^2 + p)_x &= \big( \nu u_x \big)_x, \\ \big( \rho ( e + \tfrac{1}{2} u^2)\big)_t + \big( \rho u ( e + \tfrac{1}{2} u^2)\big)_x &= (-p u + \nu u u_x)_x - q_x,\\ \tau q_t + \tau u q_x + q &= - \kappa \theta_x. \end{aligned} \end{equation} In the case when $\nu > 0$ for all $(\rho,\theta) \in \mathcal{D}$, we call this system the \emph{viscous Cattaneo-Christov system for compressible fluid flow}. We shall distinguish between the viscous case ($\nu > 0$) and the purely thermally relaxed system where $\nu \equiv 0$ for all $(\rho, \theta) \in \mathcal{D}$, which reads \begin{equation} \label{CCrel} \begin{aligned} \rho_t + (\rho u)_x &= 0,\\ (\rho u)_t + (\rho u^2 + p)_x &= 0, \\ \big( \rho ( e + \tfrac{1}{2} u^2)\big)_t + \big( \rho u ( e + \tfrac{1}{2} u^2)\big)_x &= -(p u)_x - q_x,\\ \tau q_t + \tau u q_x + q &= - \kappa \theta_x. \end{aligned} \end{equation} We refer to system \eqref{CCrel} as the \emph{inviscid Cattaneo-Christov system}. Straughan has coined the term \emph{Cattaneo-Christov gases} for these inviscid, thermally relaxed compressible fluids \cite{Strau10a}. In this paper we establish that, under the generic assumptions \eqref{H1} - \eqref{H3}, both systems are dissipative in a precise sense that we shall make clear below. In the sequel, we denote by $U = (\rho, u, \theta, q)^\top \in \mathcal{U} \subset \mathbb{R}^4$ the vector of state variables, defined on the convex, open set \begin{equation} \label{defsetU} \mathcal{U} := \{ (\rho, u, \theta, q)^\top \in \mathbb{R}^4 \, : \, \rho > 0, \, \theta > 0 \}, \end{equation} known as the \emph{state space}.
Using the well-known thermodynamic relation $\theta p_\theta = p - \rho^2 e_\rho$ (see, e.g., \cite{Bt}, p. 42) and after some algebra, we recast \eqref{CCvisc} as the following quasi-linear system for the state variables $U \in \mathcal{U}$, \begin{equation} \label{eqfullsyst} A^0(U) U_t + A^1(U) U_x = B(U) U_{xx} + Q(U) + G(U, U_x), \end{equation} where \[ A^0(U) := \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & \rho & 0 & 0 \\ 0 & 0 & \rho e_\theta & 0 \\ 0 & 0 & 0 & \tau \end{pmatrix}, \qquad A^1(U) := \begin{pmatrix} u & \rho & 0 & 0 \\ p_\rho & \rho u & p_\theta & 0 \\ 0 & \theta p_\theta & \rho u e_\theta & 1 \\ 0 & 0 & \kappa & \tau u \end{pmatrix}, \] \[ B(U) := \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & \nu & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}, \qquad Q(U) := \begin{pmatrix} 0 \\ 0 \\ 0 \\ -q \end{pmatrix}, \] and $G(U,U_x)$ contains higher order (fully nonlinear) terms, \begin{equation} \label{defG} G(U,U_x) = \begin{pmatrix} 0 \\ \nu_x u_x \\ \nu u_x^2 \\ 0 \end{pmatrix} = O(|U_x|^2). \end{equation} Notice that $A^0, A^1, B \in C^\infty(\mathcal{U};\mathbb{R}^{4 \times 4})$, $Q \in C^\infty(\mathcal{U};\mathbb{R}^4)$, $G \in C^\infty(\mathcal{U} \times \mathbb{R}^4 ; \mathbb{R}^4)$. In view of hypotheses \eqref{H1} - \eqref{H3}, it is clear that for each $U \in \mathcal{U}$, $A^0(U) > 0$ is positive definite and hence, invertible, whereas $B(U) \geq 0$ is positive semi-definite. In the case where $\nu \equiv 0$ for all $(\rho, \theta) \in \mathcal{D}$ we recover the inviscid, thermally relaxed system \eqref{CCrel}, for which $B \equiv 0$. \begin{remark} Observe that the heat flux $q$ is regarded as a state variable and, thus, the constitutive heat transfer law \eqref{eqCC} is part of the time-dependent equations that determine the evolution of the system. As a result, system \eqref{eqfullsyst} is not expressed in conservation form.
Instead, it is a quasi-linear, non-conservative system of equations with dissipation effects represented by viscosity (the term $B(U)U_{xx}$) and production terms due to relaxation (the thermal relaxation term $Q(U)$). \end{remark} As in the study of systems of conservation laws with relaxation \cite{Da4e,L4}, the large time behavior of solutions is determined by a ``relaxed'' structure, chosen so that the dynamics leads solutions towards an \emph{equilibrium manifold}. In quasilinear systems of the form \eqref{eqfullsyst}, the equilibrium manifold $\mathcal{V} \subset \mathbb{R}^4$ is defined as \[ \mathcal{V} = \{ U \in \mathcal{U} \, : \, Q(U) = 0\}. \] Mimicking discrete kinetic theory \cite{Ka84}, the \emph{space of collision invariants} is defined as \[ \mathcal{M} = \{ \psi \in \mathbb{R}^4 \, : \, \psi^\top Q(U) = 0, \; \text{for any } \, U \in \mathcal{U}\} \subset \mathbb{R}^4. \] A solution $U = U(x,t)$ to system \eqref{eqfullsyst} is an \emph{equilibrium solution} (or a \emph{Maxwellian}) if it lies on the equilibrium manifold, that is, if $Q(U(x,t)) = 0$ for all $x \in \mathbb{R}$, $t > 0$. Clearly, any constant state in the equilibrium manifold, $\overline{U} \in \mathcal{V}$, is an equilibrium solution. In the case of the Cattaneo-Christov system \eqref{eqfullsyst} the equilibrium manifold is given by \begin{equation} \label{defV} \mathcal{V} = \{ (\rho, u, \theta, q)^\top \in \mathbb{R}^4 \, : \, \rho > 0, \, \theta > 0, \, q = 0 \}, \end{equation} that is, it corresponds to the states with zero heat flux. Also particular to the Cattaneo-Christov system is the following property, $\mathcal{V} = \mathcal{M} \cap \mathcal{U}$, as the reader may easily verify. \section{Hyperbolicity and symmetrizability} \label{sechyp} \subsection{Hyperbolicity} Let us consider the system \begin{equation} \label{hypsyst} A^0(U) U_t + A^1(U) U_x = 0, \end{equation} which results from neglecting thermal relaxation and dissipation due to viscosity in \eqref{eqfullsyst}.
For any state $U \in \mathcal{U}$, \eqref{hypsyst} is a quasi-linear, strictly hyperbolic first order system. Although hyperbolicity has been mentioned before as a property of this ``inviscid'' Cattaneo-Christov system in one dimension (see, for instance, \cite{Jrd14} and the references therein), for the sake of completeness we verify this fact by computing its characteristic speeds, which (apparently) have not been reported before in the literature. For any $U \in \mathcal{U}$, set \begin{equation} \label{chardet} \pi(\zeta) = \det \Big( A^1(U) - \zeta A^0(U) \Big). \end{equation} The roots of $\pi(\zeta) = 0$ are called the \emph{characteristic speeds} of system \eqref{hypsyst}. If these roots are all real and different then it is said that the system \eqref{hypsyst} is strictly hyperbolic at $U \in \mathcal{U}$. \begin{remark} We remind the reader that the notion of hyperbolicity is motivated by the existence of traveling wave solutions to system \eqref{hypsyst} of the form $U(x,t) = \varphi(x - st)$, for some real propagating speed $s \in \mathbb{R}$ and a profile vector function $\varphi$. Substitution yields the spectral problem \begin{equation} \label{tws} (A^1(\varphi) - s A^0(\varphi) ) \varphi' = 0, \end{equation} with eigenvalue $s \in \mathbb{R}$ and eigenfunction $\varphi'$, which leads directly to the characteristic equation \eqref{chardet}. \end{remark} After a straightforward computation we see that \[ \pi(\zeta) = \det \begin{pmatrix} u - \zeta & \rho & 0 & 0 \\ p_\rho & \rho(u - \zeta) & p_\theta & 0 \\ 0 & \theta p_\theta & \rho e_\theta (u - \zeta) & 1 \\ 0 & 0 & \kappa & \tau (u - \zeta) \end{pmatrix}. \] Let us denote $m = u - \zeta$ and carry out the computations to arrive at \[ \pi(\zeta) = \rho (m^2 - p_\rho) (\rho e_\theta \tau m^2 - \kappa) - \theta p_\theta^2 \tau m^2. \] This is a second order polynomial in $m^2$.
Therefore, we have that $\pi(\zeta) = 0$ if and only if \[ m^4 + \widetilde{b} m^2 + \widetilde{c} = 0, \] where \[ \widetilde{b} = - (\rho^2 e_\theta \tau)^{-1} ( \rho \kappa + \rho^2 p_\rho e_\theta \tau + \theta p_\theta^2 \tau), \qquad \widetilde{c} = (\rho^2 e_\theta \tau)^{-1} \rho p_\rho \kappa. \] Upon inspection of the discriminant \[ \begin{aligned} \Delta = \widetilde{b}^2 - 4 \widetilde{c} &= \left( p_\rho + \frac{\kappa}{\rho e_\theta \tau} + \frac{\theta p_\theta^2}{\rho^2 e_\theta} \right)^2 - \frac{4 \kappa p_\rho}{\rho e_\theta \tau} \\ &= \left( p_\rho - \frac{\kappa}{\rho e_\theta \tau} \right)^2 + \frac{\theta p_\theta^2}{\rho^2 e_\theta} \left( 2 p_\rho + \frac{2 \kappa}{\rho e_\theta \tau} + \frac{\theta p_\theta^2}{\rho^2 e_\theta}\right) > 0, \end{aligned} \] we conclude that the $m^2$-roots are real and positive, \[ 0 < m_-^2 = \tfrac{1}{2} |\widetilde{b}| - \tfrac{1}{2} \sqrt{\, \widetilde{b}^2 - 4 \widetilde{c} \; } < m_+^2 = \tfrac{1}{2} |\widetilde{b}| + \tfrac{1}{2} \sqrt{\, \widetilde{b}^2 - 4 \widetilde{c} \; }, \] yielding the characteristic speeds \[ \zeta_1 = u - \sqrt{m_+^2} < \zeta_2 = u - \sqrt{m_-^2} < \zeta_3 = u + \sqrt{m_-^2} < \zeta_4 = u + \sqrt{m_+^2}. \] We conclude that system \eqref{hypsyst} is strictly hyperbolic. 
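The completed-square form of the discriminant used above can be confirmed symbolically; a minimal sketch:

```python
import sympy as sp

# Positive symbols for the thermodynamic quantities entering b-tilde and c-tilde
p_rho, p_theta, e_theta, rho, theta, kappa, tau = sp.symbols(
    'p_rho p_theta e_theta rho theta kappa tau', positive=True)

b = -(rho * kappa + rho**2 * p_rho * e_theta * tau
      + theta * p_theta**2 * tau) / (rho**2 * e_theta * tau)
c = p_rho * kappa / (rho * e_theta * tau)

# Completed-square form of Delta = b^2 - 4c from the computation above
delta = (p_rho - kappa / (rho * e_theta * tau))**2 \
    + (theta * p_theta**2 / (rho**2 * e_theta)) \
    * (2 * p_rho + 2 * kappa / (rho * e_theta * tau)
       + theta * p_theta**2 / (rho**2 * e_theta))

residual = sp.expand(b**2 - 4 * c - delta)
print(residual)  # 0: the two expressions for the discriminant coincide
```

Since every term of the completed-square form is positive, $\Delta > 0$ and the two $m^2$-roots are real and distinct, as claimed.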
We gather these observations into the following \begin{lemma} \label{lemcharspeed} Under assumptions \eqref{H1} - \eqref{H3} and for each $U = (\rho, u, \theta, q)^\top \in \mathcal{U} \subset \mathbb{R}^4$, the first order system \eqref{hypsyst} is strictly hyperbolic at $U \in \mathcal{U}$ and the characteristic speeds are given by \[ \begin{aligned} \zeta_1(U) &= u - \frac{1}{\sqrt{2}} {\sqrt{ p_\rho + \frac{\kappa}{\rho e_\theta \tau} + \frac{\theta p_\theta^2}{\rho^2 e_\theta} \, + \, \sqrt{\left( p_\rho + \frac{\kappa}{\rho e_\theta \tau} + \frac{\theta p_\theta^2}{\rho^2 e_\theta} \right)^2 - \frac{4 \kappa p_\rho}{\rho e_\theta \tau} \;\;} \;\;}}, \\ \zeta_2(U) &= u - \frac{1}{\sqrt{2}} {\sqrt{ p_\rho + \frac{\kappa}{\rho e_\theta \tau} + \frac{\theta p_\theta^2}{\rho^2 e_\theta} \, - \, \sqrt{\left( p_\rho + \frac{\kappa}{\rho e_\theta \tau} + \frac{\theta p_\theta^2}{\rho^2 e_\theta} \right)^2 - \frac{4 \kappa p_\rho}{\rho e_\theta \tau} \;\;} \;\;}}, \\ \zeta_3(U) &= u + \frac{1}{\sqrt{2}} {\sqrt{ p_\rho + \frac{\kappa}{\rho e_\theta \tau} + \frac{\theta p_\theta^2}{\rho^2 e_\theta} \, - \, \sqrt{\left( p_\rho + \frac{\kappa}{\rho e_\theta \tau} + \frac{\theta p_\theta^2}{\rho^2 e_\theta} \right)^2 - \frac{4 \kappa p_\rho}{\rho e_\theta \tau} \;\;} \;\;}}, \\ \zeta_4(U) &= u + \frac{1}{\sqrt{2}} {\sqrt{ p_\rho + \frac{\kappa}{\rho e_\theta \tau} + \frac{\theta p_\theta^2}{\rho^2 e_\theta} \, + \, \sqrt{\left( p_\rho + \frac{\kappa}{\rho e_\theta \tau} + \frac{\theta p_\theta^2}{\rho^2 e_\theta} \right)^2 - \frac{4 \kappa p_\rho}{\rho e_\theta \tau} \;\;} \;\;}}. \end{aligned} \] \end{lemma} In the case of the standard model for inviscid compressible fluid flow (namely, Euler equations), it is well-known \cite{Da4e,Smo94} that the three characteristic speeds (in one spatial dimension) are $u-c$, $u$ and $u + c$, where the positive quantity $c = \sqrt{p_\rho} > 0$ is known as \emph{the speed of sound}. 
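The formulas of Lemma \ref{lemcharspeed} can be cross-checked numerically against the eigenvalues of $(A^0)^{-1}A^1$; a minimal sketch at a hypothetical ideal-gas state ($R = 1$, $\gamma = 1.4$):

```python
import numpy as np

# Hypothetical ideal-gas state: R = 1, gamma = 1.4, rho = theta = 1, u = 0.3,
# with relaxation time tau = 1 and conductivity kappa = 1.
rho, u, theta, tau, kappa = 1.0, 0.3, 1.0, 1.0, 1.0
p_rho, p_theta, e_theta = 1.0, 1.0, 2.5      # R*theta, R*rho, R/(gamma - 1)

# Roots of m^4 + b*m^2 + c = 0, m = u - zeta (formulas of the lemma)
b = -(rho * kappa + rho**2 * p_rho * e_theta * tau
      + theta * p_theta**2 * tau) / (rho**2 * e_theta * tau)
c = p_rho * kappa / (rho * e_theta * tau)
mp = np.sqrt(0.5 * (-b + np.sqrt(b**2 - 4 * c)))  # sqrt(m_+^2), fast speed
mm = np.sqrt(0.5 * (-b - np.sqrt(b**2 - 4 * c)))  # sqrt(m_-^2), slow speed
speeds = np.sort([u - mp, u - mm, u + mm, u + mp])

# Cross-check: the same numbers must be the eigenvalues of (A^0)^{-1} A^1
A0 = np.diag([1.0, rho, rho * e_theta, tau])
A1 = np.array([[u, rho, 0.0, 0.0],
               [p_rho, rho * u, p_theta, 0.0],
               [0.0, theta * p_theta, rho * u * e_theta, 1.0],
               [0.0, 0.0, kappa, tau * u]])
eigs = np.sort(np.linalg.eigvals(np.linalg.solve(A0, A1)).real)
print(np.max(np.abs(speeds - eigs)))         # agreement to machine precision
```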
In the present case we have two ``sound speeds'', $c_1 = \sqrt{m_-^2}$ and $c_2 = \sqrt{m_+^2}$, and the characteristic speeds of the system split into $u-c_2 < u - c_1 < u + c_1 < u+ c_2$. These sound speeds convey both thermal and mechanical contributions due to the rate of change of the pressure with respect to changes in density and in temperature, respectively. Notice that when thermal effects are neglected, formally, in the limit when $\kappa \to 0^+$ and $p_\theta \to 0^+$, we have that $c_1 \to 0$ and $c_2 \to \sqrt{p_\rho}$: the thermal wave disappears and only the mechanical sound speed $c$ survives. On the other hand, if we take the (non-rigorous) limit when $p_\rho \to 0^+$ and $p_\theta \to 0^+$ then $c_1 \to 0$ and $c_2 \to \sqrt{\kappa/(\rho e_\theta \tau)}$; this last value is the thermal wave speed in the absence of mechanical effects as computed by Lindsay and Straughan (see equation (4.29) in \cite{LiSt78}; see also \cite{Strau10a}). The significance of the characteristic speeds of Lemma \ref{lemcharspeed} is that they comprise the exact way in which mechanical and thermal effects are combined. \subsection{Symmetrizability} We now show that system \eqref{eqfullsyst} can be put into symmetric form. Let us denote \[ D(U) := d_U Q(U) = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & -1 \end{pmatrix}, \qquad U \in \mathcal{U}. \] \begin{definition} We say a quasilinear system of the form \eqref{eqfullsyst} is \emph{symmetrizable} provided that there exists a matrix function $S \in C^\infty(\mathcal{U}; \mathbb{R}^{4 \times 4})$, $S = S(U)$, symmetric and positive definite, such that the matrices $S(U)A^0(U), S(U)A^1(U), S(U)B(U)$ and $S(U)D(U)$ are symmetric for all $U \in \mathcal{U}$.
\end{definition} \begin{lemma} Under assumptions \eqref{H1} - \eqref{H3}, Cattaneo-Christov system \eqref{eqfullsyst} is symmetrizable and the symmetrizer $S \in C^\infty(\mathcal{U}; \mathbb{R}^{4 \times 4})$ is given by \begin{equation} \label{symm} S(U) := \begin{pmatrix} \displaystyle{\frac{p_\rho}{\rho}} & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & \displaystyle{\frac{1}{\theta}} & 0 \\ 0 & 0 & 0 & \displaystyle{\frac{1}{\kappa \theta}} \end{pmatrix}, \qquad U \in \mathcal{U}. \end{equation} \end{lemma} \begin{proof} Clearly, $S$ is smooth in the convex open set $\mathcal{U}$. Moreover, $S$ is symmetric (diagonal) and positive definite in view of \eqref{H1} - \eqref{H3}. That $S$ symmetrizes system \eqref{eqfullsyst} follows from straightforward computations that yield \begin{equation} \label{defhA0} \hat{A}^0(U) := S(U) A^0(U) = \begin{pmatrix} \displaystyle{\frac{p_\rho}{\rho}} & 0 & 0 & 0 \\ 0 & \rho & 0 & 0 \\ 0 & 0 & \displaystyle{\frac{\rho e_\theta}{\theta}} & 0 \\ 0 & 0 & 0 & \displaystyle{\frac{\tau}{\kappa \theta}} \end{pmatrix}, \end{equation} \begin{equation} \label{defhA1} \renewcommand\arraystretch{2} \hat{A}^1(U) := S(U) A^1(U) = \begin{pmatrix} \displaystyle{\frac{u p_\rho}{\rho}} & p_\rho & 0 & 0 \\ p_\rho & \rho u & p_\theta & 0 \\ 0 & p_\theta & \displaystyle{\frac{\rho u e_\theta}{\theta}} & \displaystyle{\frac{1}{\theta}} \\ 0 & 0 & \displaystyle{\frac{1}{\theta}} & \displaystyle{\frac{\tau u}{\kappa \theta}} \end{pmatrix}, \end{equation} \begin{equation} \label{defhB} \hat{B}(U) := S(U) B(U) = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & \nu & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}, \end{equation} \begin{equation} \label{defhD} \hat{D}(U) := S(U) D(U) = \begin{pmatrix} 0 & 0 & 0& 0\\ 0& 0 & 0& 0\\ 0& 0& 0 & 0\\ 0& 0& 0& - \displaystyle{\frac{1}{\kappa \theta}} \end{pmatrix}, \end{equation} which are smooth symmetric matrix functions of $U \in \mathcal{U}$.
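A quick numerical cross-check of these symmetry relations at a hypothetical ideal-gas state ($R = 1$, $\gamma = 1.4$); a minimal sketch:

```python
import numpy as np

# Coefficient matrices A^0, A^1, B, D and symmetrizer S at a hypothetical
# ideal-gas state (R = 1, gamma = 1.4, rho = theta = 1, u = 0.3).
rho, u, theta, tau, kappa, nu = 1.0, 0.3, 1.0, 1.0, 1.0, 0.1
p_rho, p_theta, e_theta = 1.0, 1.0, 2.5

A0 = np.diag([1.0, rho, rho * e_theta, tau])
A1 = np.array([[u, rho, 0.0, 0.0],
               [p_rho, rho * u, p_theta, 0.0],
               [0.0, theta * p_theta, rho * u * e_theta, 1.0],
               [0.0, 0.0, kappa, tau * u]])
B = np.diag([0.0, nu, 0.0, 0.0])
D = np.diag([0.0, 0.0, 0.0, -1.0])
S = np.diag([p_rho / rho, 1.0, 1.0 / theta, 1.0 / (kappa * theta)])

# Largest deviation from symmetry among S*A0, S*A1, S*B, S*D
defect = max(np.max(np.abs(S @ M - (S @ M).T)) for M in (A0, A1, B, D))
print(defect)  # ~0: all four products are symmetric, and S > 0 is diagonal
```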
\end{proof} \begin{remark} \label{remnoentropy} It is well-known \cite{Da4e} that symmetrizability implies hyperbolicity of system \eqref{hypsyst}. Also, since the works of Friedrichs \cite{Frd54} and Godunov \cite{Godu61a}, symmetrizability has established itself as an important property. It plays a key role, for example, in performing energy estimates and in studying the existence and stability of solutions. For systems in conservation form the symmetrizer is usually the Hessian of a convex entropy function. Even in the case of quasi-linear systems not in conservation form (where the coefficients $A^j$ are not necessarily Jacobians of flux functions $f^j$) it is possible to define a convex entropy, as shown by Kawashima and Yong \cite{KY04}: if the symmetrizer is the Jacobian of a diffeomorphic change of variables, $S(U) = D_U \Psi(U)$, then a convex entropy function can be introduced. For Cattaneo-Christov systems, however, the symmetrizer \eqref{symm} is not the Jacobian of a particular diffeomorphism and the system is not necessarily endowed with a convex entropy function. \end{remark} \section{The genuine coupling condition} \label{secgencoup} \subsection{Strict dissipativity and genuine coupling} In order to define the strict dissipativity of the system, let us consider solutions around a constant equilibrium state \[ \overline{U} = (\overline{\rho}, \overline{u}, \overline{\theta}, 0)^\top \in \mathcal{V}, \] for which $Q(\overline{U}) = 0$. If $\overline{U} + U$ is a solution to \eqref{eqfullsyst} then we can recast the system as \[ A^0(\overline{U}) U_t + A^1(\overline{U}) U_x = B(\overline{U}) U_{xx} + D(\overline{U}) U + \mathcal{N}(U, U_x, U_t), \] where $\mathcal{N}$ comprises the nonlinear terms.
Multiply on the left by the constant, symmetric, positive definite matrix $S(\overline{U})$ to arrive at the following symmetric system \begin{equation} \label{nlsyst} A^0 U_t + A^1 U_x + L U = B U_{xx} + \overline{\mathcal{N}}, \end{equation} where, \[ \begin{aligned} A^0 &:= S(\overline{U}) A^0(\overline{U}) = \hat{A}^0 (\overline{U}),\\ A^1 &:= S(\overline{U}) A^1(\overline{U}) = \hat{A}^1 (\overline{U}),\\ B &:= S(\overline{U}) B(\overline{U}) = \hat{B}(\overline{U}),\\ L &:= - S(\overline{U}) D(\overline{U}) = \hat{D}(\overline{U}), \end{aligned} \] and, once again, $\overline{\mathcal{N}} = S(\overline{U}) \mathcal{N}$ contains the nonlinear terms. Notice that $A^j$, $j =0,1$, $B$ and $L$ are real symmetric constant matrices, with $A^0 > 0$ (positive definite) and $B$, $L \geq 0$ (positive semi-definite). Let us consider the linear part of \eqref{nlsyst}, namely, the linear symmetric system \begin{equation} \label{lind} A^0 U_t + A^1 U_x + L U = B U_{xx}, \end{equation} which is the symmetric version of \eqref{eqfullsyst}, linearized around an equilibrium state $\overline{U} \in \mathcal{V}$. Since it is a system with constant coefficients the solution can be determined by its Fourier transform with respect to the spatial variable $x \in \mathbb{R}$. The resulting equation is \begin{equation} \label{Foulin} A^0 \widehat{U}_t + i \xi A^1 \widehat{U} + L \widehat{U} + \xi^2 B \widehat{U} = 0, \qquad t > 0, \;\, \xi \in \mathbb{R}, \end{equation} where $\widehat{U} = \widehat{U}(\xi,t)$ denotes the Fourier transform of $U$. The fact that $A^0 > 0$ and $L,B \geq 0$ is not enough to guarantee the decay of solutions to the linear problem \eqref{lind}. We resort to the following sufficient condition for the essential spectrum of the linear constant coefficient differential operator to be stable. 
For each $\xi \in \mathbb{R}$, $\xi \neq 0$, let $\lambda = \lambda(\xi) \in \mathbb{C}$ denote the eigenvalues of the corresponding characteristic equation, namely, the roots of the following dispersion relation, \begin{equation} \label{chareq} \det \big( \lambda A^0 + i \xi A^1 + L + \xi^2 B \big) = 0. \end{equation} \begin{definition}[strict dissipativity] System \eqref{lind} is said to be \emph{strictly dissipative} if $\Re \lambda(\xi) < 0$ for all $\xi \in \mathbb{R}$, $\xi \neq 0$. \end{definition} Closely related to the dissipativity condition is the following \begin{definition}[genuine coupling] System \eqref{lind} satisfies the \emph{genuine coupling condition} at any state $\overline{U} \in \mathcal{U}$ if for any $V \in \mathbb{R}^4$, $V \neq 0$, with $BV = LV = 0$ we have that $(\lambda A^0 + A^1)V \neq 0$ for all $\lambda \in \mathbb{R}$. \end{definition} \begin{remark} This condition basically expresses that no eigenvector of the hyperbolic part of the operator lies in the kernel of the dissipative terms. Such a property is physically relevant. For instance, loss of genuine coupling results in hyperbolic directions whereby traveling wave solutions to system \eqref{hypsyst} are not dissipated by the viscous and relaxation terms. In other words, wave solutions to \eqref{hypsyst} (hence satisfying the spectral equation \eqref{tws}) are also solutions to \eqref{lind} if the eigenvector $\varphi'$ lies in $\ker B \cap \ker L$. Genuine coupling also has deep consequences for the time-asymptotic smoothing behavior of solutions to viscous and relaxation systems of conservation laws (see, for example, \cite{Hoff92}). This condition is also known in the literature as \emph{the Kawashima-Shizuta condition}, or simply, \emph{the Kawashima condition} (see \cite{LoRu06,MaN2,RuSe04} and some of the references therein).
\end{remark} Let us now recall the concept of a compensating function in the sense of Kawashima and Shizuta \cite{ShKa85}, specialized to the present one-dimensional case. \begin{definition} \label{defK} A matrix $K$ is a \emph{compensating function} for system \eqref{lind} provided that \begin{itemize} \item[(a)] $K A^0$ is skew-symmetric, and \item[(b)] $\tfrac{1}{2} \big( KA^1 + (KA^1)^\top \big) + B + L$ is positive definite. \end{itemize} \end{definition} In the case of symmetric systems, the properties of genuine coupling, strict dissipativity and the existence of a compensating function are equivalent. This fact was first proved by Shizuta and Kawashima \cite{ShKa85} and fully characterizes the stability condition for system \eqref{lind} in the symmetric case (see also Humpherys \cite{Hu05} for an extension to higher order systems). \begin{theorem}[Shizuta-Kawashima \cite{ShKa85}] \label{theoequiv} Assume $A^j, B, L$, $j =0,1$, are real symmetric matrices, with $A^0 > 0$, $B, L \geq 0$. Then the following statements are equivalent: \begin{itemize} \item[(a)] System \eqref{lind} is strictly dissipative. \item[(b)] System \eqref{lind} satisfies the genuine coupling condition at $\overline{U} \in \mathcal{U}$. \item[(c)] There exists a compensating function $K$ for system \eqref{lind}. \item[(d)] There exists a positive constant $k > 0$ such that for any $\xi \in \mathbb{R}$, $\xi \neq 0$, and any root $\lambda = \lambda(\xi)$ of the characteristic equation \eqref{chareq} there holds \begin{equation} \label{lambdabd} \Re \lambda(\xi) \leq - \frac{k \xi^2}{1 + \xi^2}. \end{equation} \end{itemize} \end{theorem} \begin{remark} Notice that property (d) automatically implies property (a). It is easy to prove that genuine coupling is a necessary condition for strict dissipativity, i.e., that (a) implies (b). The equivalence theorem establishes the existence of a compensating function once the genuine coupling condition has been verified.
It is worth mentioning that the general proof in \cite{ShKa85} (see also \cite{Hu05}) is constructive. It provides a formula for $K$ in terms of the eigenprojections of the hyperbolic part ($K$ is, in fact, a Drazin inverse of the commutator operator; see Humpherys \cite{Hu05} for further information). \end{remark} \subsection{Genuine coupling of Cattaneo-Christov systems} We now show that Cattaneo-Christov systems are genuinely coupled. In the sequel, for any fixed state $\overline{U} = (\overline{\rho}, \overline{u}, \overline{\theta}, \overline{q})^\top \in \mathcal{U}$ we shall denote \[ \overline{p} := p(\overline{\rho},\overline{\theta}), \;\;\; \overline{e} := e(\overline{\rho},\overline{\theta}),\;\;\; \overline{\kappa} := \kappa(\overline{\rho},\overline{\theta}), \;\;\; \overline{\nu} := \nu (\overline{\rho},\overline{\theta}), \] \[ \overline{p}_\rho := p_\rho(\overline{\rho},\overline{\theta}), \;\;\; \overline{p}_\theta := p_\theta(\overline{\rho},\overline{\theta}), \;\;\; \overline{e}_\theta := e_\theta(\overline{\rho},\overline{\theta}). \] \begin{lemma} Under assumptions \eqref{H1} - \eqref{H3}, Cattaneo-Christov systems \eqref{eqfullsyst} satisfy the genuine coupling condition at any fixed state $\overline{U} = (\overline{\rho}, \overline{u}, \overline{\theta}, \overline{q})^\top \in \mathcal{U}$. \end{lemma} \begin{proof} As before, we denote $A^j = \hat{A}^j(\overline{U})$, $B = \hat{B}(\overline{U})$, $L = - \hat{D}(\overline{U})$, $j =0,1$. From the expression for $L$ in \eqref{defhD}, we see that any $V \in \ker L$ is of the form $V = (v_1, v_2, v_3, 0)^\top$, with $v_j \in \mathbb{R}$.
Therefore, from \eqref{defhA0} and \eqref{defhA1} and for any $\lambda \in \mathbb{R}$ we have \[ (\lambda {A}^0 + {A}^1) V = \begin{pmatrix} \displaystyle{\frac{\overline{p}_\rho}{\overline{\rho}}(\lambda + \overline{u}) v_1} + \overline{p}_\rho v_2 \\ \overline{p}_\rho v_1 + \overline{\rho}(\lambda + \overline{u})v_2 + \overline{p}_\theta v_3 \\ \displaystyle{\frac{\overline{\rho} \, \overline{e}_\theta}{\overline{\theta}}(\lambda + \overline{u}) v_3 + \overline{p}_\theta v_2}\\ \displaystyle{ \frac{v_3}{\overline{\theta}}} \end{pmatrix}. \] Suppose that $V \in \ker L$, $V \neq 0$ and $(\lambda {A}^0 + {A}^1) V = 0$ for some $\lambda \in \mathbb{R}$. From $\overline{\theta} > 0$ we deduce that $v_3 = 0$. This yields $v_2 = 0$ as $\overline{p}_\theta > 0$. Finally, from $\overline{p}_\rho > 0$ we get $v_1 = 0$. Thus, we conclude that $V = 0$, a contradiction. \end{proof} \begin{remark} It is to be observed that the genuine coupling condition holds at any state $\overline{U} \in \mathcal{U}$ (not necessarily an equilibrium state). Also, notice that both the viscous, thermally relaxed Cattaneo-Christov system \eqref{CCvisc} with $\nu > 0$ and the relaxation system \eqref{CCrel} with $\nu \equiv 0$, are genuinely coupled. Indeed, in the viscous case with $V \in \ker B \cap \ker L$ the proof is exactly the same. \end{remark} Although genuine coupling readily implies the existence of a compensating function (thanks to Theorem \ref{theoequiv}), it is often possible to provide a formula for it by direct inspection. 
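The genuine coupling condition just verified can also be checked numerically for particular parameter values: $(\lambda A^0 + A^1)V = 0$ means that $V$ is an eigenvector of $(A^0)^{-1}A^1$, and no such eigenvector may lie in $\ker L = \{ v_4 = 0 \}$. A minimal sketch in Python/NumPy; the numerical values for $\overline{\rho}$, $\overline{u}$, $\overline{\theta}$, etc., are arbitrary choices consistent with \eqref{H1} - \eqref{H3}, not values taken from the text:

```python
import numpy as np

# Arbitrary sample values consistent with hypotheses (H1)-(H3):
# all thermodynamic coefficients strictly positive.
rho, u, theta = 1.0, 0.5, 1.0
p_rho, p_theta, e_theta = 1.2, 0.8, 1.1
kappa, tau = 1.0, 0.7

# Symmetrized coefficient matrices of the one-dimensional system.
A0 = np.diag([p_rho / rho, rho, rho * e_theta / theta, tau / (kappa * theta)])
A1 = np.array([
    [u * p_rho / rho, p_rho,   0.0,                       0.0],
    [p_rho,           rho * u, p_theta,                   0.0],
    [0.0,             p_theta, rho * u * e_theta / theta, 1.0 / theta],
    [0.0,             0.0,     1.0 / theta,               tau * u / (kappa * theta)],
])

# (lambda*A0 + A1) V = 0 iff V is an eigenvector of (A0)^{-1} A1;
# genuine coupling demands that no eigenvector has vanishing fourth component.
_, eigvecs = np.linalg.eig(np.linalg.solve(A0, A1))
min_fourth = np.min(np.abs(eigvecs[3, :]))
print(min_fourth)  # strictly positive: no eigenvector lies in ker L
```

Of course, this only tests one sample state; the lemma above covers every state in $\mathcal{U}$.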
\begin{lemma} \label{lemviscK} Under assumptions \eqref{H1} - \eqref{H3} and in the viscous case ($\nu > 0$ for all $(\rho, \theta) \in \mathcal{D}$), for every equilibrium state $\overline{U} \in \mathcal{V}$ there exists a compensating function for system \eqref{lind}, which is given explicitly by \begin{equation} \label{viscK} K = \delta \begin{pmatrix} 0 & \overline{p}_\rho & 0 & 0 \\ - \overline{p}_\rho & 0 & - \overline{p}_\theta & 0 \\ 0 & \overline{p}_\theta & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} \big( A^0 \big)^{-1}, \end{equation} for some $0 < \delta \ll 1$ sufficiently small. \end{lemma} \begin{proof} We verify directly that \eqref{viscK} is a compensating function for system \eqref{lind}. First observe from expression \eqref{viscK} that $KA^0$ is clearly skew-symmetric. Let us now compute \[ \begin{aligned} KA^1 &= \delta \begin{pmatrix} 0 & \overline{p}_\rho & 0 & 0 \\ - \overline{p}_\rho & 0 & - \overline{p}_\theta & 0 \\ 0 & \overline{p}_\theta & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} \displaystyle{\frac{\overline{\rho}}{\overline{p}_\rho}} & 0& 0& 0\\ 0& \displaystyle{\frac{1}{\overline{\rho}}} & 0& 0\\ 0& 0& \displaystyle{\frac{\overline{\theta}}{\overline{\rho} \overline{e}_\theta}} &0 \\ 0& 0& 0& \displaystyle{\frac{\bar{\kappa} \overline{\theta}}{\tau}}\end{pmatrix} \renewcommand\arraystretch{2} \begin{pmatrix} \displaystyle{\frac{\overline{u} \, \overline{p}_\rho}{\overline{\rho}}} & \overline{p}_\rho & 0& 0\\ \overline{p}_\rho & \overline{\rho} \, \overline{u} & \overline{p}_\theta & 0\\ 0& \overline{p}_\theta & \displaystyle{\frac{\overline{\rho} \, \overline{u} \, \overline{e}_\theta}{\overline{\theta}}} & \displaystyle{\frac{1}{\overline{\theta}}} \\ 0& 0& \displaystyle{\frac{1}{\overline{\theta}}} & \displaystyle{\frac{\tau \overline{u}}{\bar{\kappa} \overline{\theta}}}\end{pmatrix} \\ &= \delta \renewcommand\arraystretch{2} \begin{pmatrix} \displaystyle{\frac{\overline{p}_\rho^2}{\overline{\rho}}} & \overline{u} \, 
\overline{p}_\rho & \displaystyle{\frac{\overline{p}_\rho \, \overline{p}_\theta}{\overline{\rho}}} & 0 \\ -\overline{u} \, \overline{p}_\rho & \displaystyle{- \Big( \overline{\rho} \, \overline{p}_\rho + \frac{\overline{\theta} \, \overline{p}_\theta^2}{\overline{\rho} \, \overline{e}_\theta}\Big)} & -\overline{u} \overline{p}_\theta & \displaystyle{- \, \frac{\overline{p}_\theta}{\overline{\rho} \, \overline{e}_\theta}} \\ \displaystyle{\frac{\overline{p}_\rho \, \overline{p}_\theta}{\overline{\rho}}} & \overline{u} \, \overline{p}_\theta & \displaystyle{\frac{\overline{p}_\theta^2}{\overline{\rho}}} & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}. \end{aligned} \] Its symmetric part is \[ \tfrac{1}{2} \big( KA^1 + (KA^1)^\top \big) = \delta \renewcommand\arraystretch{2} \begin{pmatrix} \displaystyle{\frac{\overline{p}_\rho^2}{\overline{\rho}}} & 0 & \displaystyle{\frac{\overline{p}_\rho \, \overline{p}_\theta}{\overline{\rho}}} & 0 \\ 0 & \displaystyle{- \Big( \overline{\rho} \, \overline{p}_\rho + \frac{\overline{\theta} \, \overline{p}_\theta^2}{\overline{\rho} \, \overline{e}_\theta}\Big)} & 0 & \displaystyle{- \, \frac{\overline{p}_\theta}{2 \overline{\rho} \, \overline{e}_\theta}} \\ \displaystyle{\frac{\overline{p}_\rho \, \overline{p}_\theta}{\overline{\rho}}} & 0 & \displaystyle{\frac{\overline{p}_\theta^2}{\overline{\rho}}} & 0 \\ 0 & \displaystyle{- \, \frac{\overline{p}_\theta}{2 \overline{\rho} \, \overline{e}_\theta}} & 0 & 0 \end{pmatrix}. 
\] Therefore, for any $X = (x_1, \, x_2, \, x_3, \, x_4)^\top \in \mathbb{R}^4$, $X \neq 0$, we have the following quadratic form \[ \begin{aligned} Q(X) &:= X^\top \Big( \tfrac{1}{2} \big( KA^1 + (KA^1)^\top \big) + B + L \Big) X \\ &=\delta \frac{\overline{p}_\rho^2}{\overline{\rho}} x_1^2 + 2\delta \frac{\overline{p}_\theta \overline{p}_\rho}{\overline{\rho}} x_1 x_3 - \delta \frac{\overline{p}_\theta}{\overline{\rho} \, \overline{e}_\theta} x_2 x_4 + \delta \frac{\overline{p}_\theta^2}{\overline{\rho}} x_3^2 + \Big( \bar{\nu} - \delta \Big( \overline{\rho} \, \overline{p}_\rho + \frac{\overline{\theta} \, \overline{p}_\theta^2}{\overline{\rho} \, \overline{e}_\theta}\Big)\Big) x_2^2 + \frac{1}{\bar{\kappa} \overline{\theta}} x_4^2 \\ &\geq \frac{\delta}{2} \frac{\overline{p}_\rho^2}{\overline{\rho}} x_1^2 + \delta \frac{\overline{p}_\theta^2}{\overline{\rho}} x_3^2 + \Big( \bar{\nu} - \delta \Big( \overline{\rho} \, \overline{p}_\rho + \frac{\overline{\theta} \, \overline{p}_\theta^2}{\overline{\rho} \, \overline{e}_\theta} + \frac{\overline{p}_\theta}{2 \overline{\rho} \, \overline{e}_\theta}\Big)\Big) x_2^2 + \Big( \frac{1}{\bar{\kappa} \overline{\theta}} - \delta \frac{\overline{p}_\theta}{2 \overline{\rho} \, \overline{e}_\theta} \Big) x_4^2. \end{aligned} \] Thanks to hypotheses \eqref{H1} - \eqref{H3} and since $\bar{\nu} > 0$, one can choose $\delta > 0$ sufficiently small such that \[ 0 < \delta < \frac{2 \overline{\rho} \, \overline{e}_\theta}{\bar{\kappa} \overline{\theta} \overline{p}_\theta} \quad \text{and} \quad 0 < \delta < \bar{\nu} \Big( \overline{\rho} \, \overline{p}_\rho + \frac{\overline{\theta} \, \overline{p}_\theta^2}{\overline{\rho} \, \overline{e}_\theta} + \frac{\overline{p}_\theta}{2 \overline{\rho} \, \overline{e}_\theta}\Big)^{-1}, \] yielding \[ Q(X) \geq C_\delta |X|^2 > 0, \] for some $C_\delta > 0$ and all $X \neq 0$. 
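Matrix products of this size are easy to mis-transcribe, so a numerical sanity check may be useful: rebuild $K = \delta M (A^0)^{-1}$ from \eqref{viscK}, confirm property (a) of Definition \ref{defK} (skew-symmetry of $KA^0$), and compare $KA^1$ entrywise against the closed-form matrix displayed in the proof. The parameter values below are arbitrary illustrative choices, not taken from the text:

```python
import numpy as np

# Arbitrary illustrative values consistent with (H1)-(H3); delta small.
rho, u, theta = 1.0, 0.5, 1.0
p_rho, p_theta, e_theta = 1.2, 0.8, 1.1
kappa, tau, delta = 1.0, 0.7, 0.01

A0 = np.diag([p_rho / rho, rho, rho * e_theta / theta, tau / (kappa * theta)])
A1 = np.array([
    [u * p_rho / rho, p_rho,   0.0,                       0.0],
    [p_rho,           rho * u, p_theta,                   0.0],
    [0.0,             p_theta, rho * u * e_theta / theta, 1.0 / theta],
    [0.0,             0.0,     1.0 / theta,               tau * u / (kappa * theta)],
])
M = np.array([
    [0.0,    p_rho,   0.0,      0.0],
    [-p_rho, 0.0,     -p_theta, 0.0],
    [0.0,    p_theta, 0.0,      0.0],
    [0.0,    0.0,     0.0,      0.0],
])
K = delta * M @ np.linalg.inv(A0)   # K as in the lemma

# Property (a): K A0 = delta*M is skew-symmetric by construction.
KA0 = K @ A0
skew_defect = np.max(np.abs(KA0 + KA0.T))

# Closed-form K A1 as displayed in the proof above.
KA1_closed = delta * np.array([
    [p_rho**2 / rho, u * p_rho, p_rho * p_theta / rho, 0.0],
    [-u * p_rho,
     -(rho * p_rho + theta * p_theta**2 / (rho * e_theta)),
     -u * p_theta,
     -p_theta / (rho * e_theta)],
    [p_rho * p_theta / rho, u * p_theta, p_theta**2 / rho, 0.0],
    [0.0, 0.0, 0.0, 0.0],
])
product_defect = np.max(np.abs(K @ A1 - KA1_closed))
print(skew_defect, product_defect)  # both at round-off level
```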
\end{proof} In the case without viscosity the form of $K$ differs considerably, due to the fact that the only dissipation term is the thermal relaxation one. \begin{lemma} \label{remrelaxK} Under assumptions \eqref{H1} - \eqref{H3} and in the pure thermal relaxation case ($\nu \equiv 0$ for all $(\rho, \theta) \in \mathcal{D}$), for every equilibrium state $\overline{U} \in \mathcal{V}$ there exists a compensating function for system \eqref{lind}, which is given explicitly by \begin{equation} \label{relaxK} K = \delta \renewcommand\arraystretch{2} \begin{pmatrix} 0 & \displaystyle{\frac{\delta^2 \tau \overline{\theta}^2 \overline{p}_\theta^2 \overline{p}_\rho}{\overline{\rho}^2}} & 0 & 0 \\ - \displaystyle{\frac{\delta^2 \tau \overline{\theta}^2 \overline{p}_\theta^2 \overline{p}_\rho}{\overline{\rho}^2}} & 0 & \delta \overline{p}_\theta & 0 \\ 0 & - \delta \overline{p}_\theta & 0 & \displaystyle{\frac{\overline{\rho} \overline{e}_\theta}{\bar{\kappa} \overline{\theta}^2}} \\ 0 & 0 & - \displaystyle{\frac{\overline{\rho} \overline{e}_\theta}{\bar{\kappa} \overline{\theta}^2}} & 0\end{pmatrix} \big( A^0 \big)^{-1}, \end{equation} for some $0 < \delta \ll 1$ sufficiently small. \end{lemma} \begin{proof} We propose to take $K$ of the form \[ K = \begin{pmatrix} 0 & \alpha & 0 & 0 \\ - \alpha & 0 & - \beta & 0 \\ 0 & \beta & 0 & - \gamma \\ 0 & 0 & \gamma & 0 \end{pmatrix}\big( A^0 \big)^{-1}, \] and to appropriately choose constants $\alpha, \beta$ and $\gamma$. 
Performing the product yields the matrix \[ K A^1 = \renewcommand\arraystretch{2} \begin{pmatrix} \displaystyle{\frac{\alpha \overline{p}_\rho}{\overline{\rho}}} & \alpha \bar{u} & \displaystyle{\frac{\alpha \overline{p}_\theta}{\overline{\rho}}} & 0 \\ - \alpha \bar{u} & - \displaystyle{\big( \alpha \overline{\rho} + \frac{\beta \overline{\theta} \overline{p}_\theta}{\overline{\rho} \overline{e}_\theta}\big)} & - \beta \bar{u} & - \displaystyle{\frac{\beta}{\overline{\rho} \overline{e}_\theta}} \\ \displaystyle{\frac{\beta \overline{p}_\rho}{\overline{\rho}}} & \beta \bar{u} & \displaystyle{\frac{\beta \overline{p}_\theta}{\overline{\rho}} - \frac{\gamma \bar{\kappa}}{\tau}} & - \gamma \bar{u} \\ 0 & \displaystyle{\frac{\gamma \overline{\theta} \overline{p}_\theta}{\overline{\rho} \overline{e}_\theta}} & \gamma \bar{u} & \displaystyle{\frac{\gamma}{\overline{\rho} \overline{e}_\theta}} \end{pmatrix}, \] whose symmetric part is \[ \begin{aligned} \tfrac{1}{2} \big( KA^1 &+ (KA^1)^\top \big) = \\ &=\begin{pmatrix} \displaystyle{\frac{\alpha \overline{p}_\rho}{\overline{\rho}}} & 0 & \displaystyle{\frac{1}{2\overline{\rho}} \big( \beta \overline{p}_\rho + \alpha \overline{p}_\theta \big)} & 0 \\ 0 & - \displaystyle{\big( \alpha \overline{\rho} + \frac{\beta \overline{\theta} \overline{p}_\theta}{\overline{\rho} \overline{e}_\theta}\big)} & 0 & \displaystyle{\frac{1}{2\overline{\rho} \overline{e}_\theta} \big( \gamma \overline{\theta} \overline{p}_\theta - \beta\big)} \\ \displaystyle{\frac{1}{2\overline{\rho}} \big( \beta \overline{p}_\rho + \alpha \overline{p}_\theta \big)} & 0 & \displaystyle{\frac{\beta \overline{p}_\theta}{\overline{\rho}} - \frac{\gamma \bar{\kappa}}{\tau}} &0 \\ 0 & \displaystyle{\frac{1}{2\overline{\rho} \overline{e}_\theta} \big( \gamma \overline{\theta} \overline{p}_\theta - \beta\big)} & 0 & \displaystyle{\frac{\gamma}{\overline{\rho} \overline{e}_\theta}} \end{pmatrix}. 
\end{aligned} \] Thus, in view that $B = 0$, we have for any $X = (x_1, x_2, x_3, x_4)^\top$, $X \neq 0$, that the corresponding quadratic form is \[ \begin{aligned} Q(X) &:= X^\top \big( \tfrac{1}{2} ( KA^1 + (KA^1)^\top ) + L \big) X \\ &= \frac{\alpha \overline{p}_\rho}{\overline{\rho}} x_1^2 - \Big( \alpha \overline{\rho} + \frac{\beta \overline{\theta} \overline{p}_\theta}{\overline{\rho} \overline{e}_\theta}\Big) x_2^2 + \Big( \frac{\beta \overline{p}_\theta}{\overline{\rho}} - \frac{\gamma \bar{\kappa}}{\tau}\Big) x_3^2 + \Big( \frac{\gamma}{\overline{\rho} \overline{e}_\theta} + \frac{1}{\bar{\kappa} \overline{\theta}}\Big) x_4^2 + \\ &\; + \frac{1}{\overline{\rho}} \big( \beta \overline{p}_\rho + \alpha \overline{p}_\theta \big) x_1 x_3 + \frac{1}{\overline{\rho} \overline{e}_\theta} \big( \gamma \overline{\theta} \overline{p}_\theta - \beta \big) x_2 x_4. \end{aligned} \] Let us choose $\alpha$, $\beta$ and $\gamma$ such that \[ \alpha = \delta^3 \alpha_0, \quad \beta = - \delta^2 \beta_0, \quad \gamma = - \delta \gamma_0, \] where $\alpha_0, \beta_0, \gamma_0 > 0$ and $0 < \delta \ll 1$ are constants to be determined. 
Then the quadratic form reads \[ Q(X) = a_1 x_1^2 + a_2 x_2 ^2 + a_3 x_3^2 + a_4 x_4^2 + b_{13} x_1 x_3 + b_{24}x_2 x_4, \] where, \[ \begin{aligned} a_1 &:= \delta^3 \, \frac{\alpha_0 \overline{p}_\rho}{\overline{\rho}},\\ a_2 &:= \delta^2 \left( \frac{\beta_0 \overline{\theta} \overline{p}_\theta}{\overline{\rho} \overline{e}_\theta} - \delta \alpha_0 \overline{\rho}\right),\\ a_3 &:= \delta \left( \frac{\gamma_0 \bar{\kappa}}{\tau} - \delta \frac{\beta_0 \overline{p}_\theta}{\overline{\rho}} \right),\\ a_4 &:= \frac{1}{\bar{\kappa} \overline{\theta}} - \delta \frac{\gamma_0}{\overline{\rho} \overline{e}_\theta},\\ b_{13} &:= \frac{\delta^2}{\overline{\rho}} \left( \delta \alpha_0 \overline{p}_\theta - \beta_0 \overline{p}_\rho \right),\\ b_{24} &:= \frac{\delta}{\overline{\rho} \overline{e}_\theta} \left( \delta \beta_0 - \gamma_0 \overline{\theta} \overline{p}_\theta \right). \end{aligned} \] Assuming that \begin{equation} \label{condsas} \begin{aligned} a_1 &> 0,\\ a_4 &> 0,\\ a_2 - \frac{b_{24}^2}{2 a_4} &> 0,\\ a_3 - \frac{b_{13}^2}{2a_1} &> 0, \end{aligned} \end{equation} clearly we have \[ Q(X) \geq \tfrac{1}{2}a_1 x_1^2 + \left( a_2 - \frac{b_{24}^2}{2 a_4} \right) x_2^2 + \left( a_3 - \frac{b_{13}^2}{2a_1} \right) x_3^2 + \tfrac{1}{2} a_4 x_4^2 \geq C |X|^2 > 0, \] for all $X \neq 0$, $X \in \mathbb{R}^4$ and some positive constant satisfying, \[ 0 < C < \tfrac{1}{2}\min \left\{ \tfrac{1}{2}a_1, \tfrac{1}{2}a_4, a_2 - \frac{b_{24}^2}{2a_4}, a_3 - \frac{b_{13}^2}{2a_1} \right\}. \] Therefore, we need to find values of $\alpha_0,\beta_0, \gamma_0 > 0$ and $0 < \delta \ll 1$ sufficiently small such that conditions \eqref{condsas} hold. First, notice that under assumptions \eqref{H1} - \eqref{H3} and $\alpha_0 > 0$, the first condition in \eqref{condsas} is already satisfied. 
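For completeness, the displayed lower bound on $Q$ follows by absorbing each cross term with an elementary Young-type inequality (valid whenever $a_1, a_4 > 0$):

```latex
% Absorption of the cross terms via Young's inequality:
\[
b_{13}\, x_1 x_3 \geq - \tfrac{1}{2} a_1 x_1^2 - \frac{b_{13}^2}{2 a_1}\, x_3^2,
\qquad
b_{24}\, x_2 x_4 \geq - \tfrac{1}{2} a_4 x_4^2 - \frac{b_{24}^2}{2 a_4}\, x_2^2.
\]
```

Substituting these bounds into $Q$ produces exactly the coefficients $\tfrac{1}{2}a_1$, $a_2 - b_{24}^2/(2a_4)$, $a_3 - b_{13}^2/(2a_1)$ and $\tfrac{1}{2}a_4$ appearing above.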
If we further choose parameter values $\alpha_0$, $\beta_0$ and $\gamma_0$ such that \begin{equation} \label{212} \frac{\gamma_0 \bar{\kappa}}{\tau} - \frac{\beta_0^2 \overline{p}_\rho}{2 \alpha_0 \overline{\rho}} > 0, \end{equation} then, for $\delta > 0$ sufficiently small such that \begin{equation} \label{condE} 0 < \delta < \frac{2 \overline{\rho} \overline{p}_\rho}{\alpha_0 \overline{p}_\theta^2} \left( \frac{\gamma_0 \bar{\kappa}}{\tau} - \frac{\beta_0^2 \overline{p}_\rho}{2 \alpha_0 \overline{\rho}} \right), \end{equation} we can assure that the fourth condition in \eqref{condsas} also holds, as the reader may easily verify. For small $\delta$ we write \[ \frac{1}{2a_4} = \frac{1}{2} \left( \frac{1}{\bar{\kappa} \overline{\theta}} - \delta \frac{\gamma_0}{\overline{\rho} \overline{e}_\theta}\right)^{-1} = \frac{1}{2} \bar{\kappa}\overline{\theta} + \delta \frac{\bar{\kappa}^2 \overline{\theta}^2 \gamma_0}{2 \overline{\rho} \overline{e}_\theta} + O(\delta^2). \] Hence, it suffices to take $\delta$ small enough such that \begin{equation} \label{condF} 0 < \delta < \frac{\overline{\rho} \overline{e}_\theta}{\bar{\kappa} \overline{\theta} \gamma_0}, \end{equation} and to choose values of $\beta_0$ and $\gamma_0$ satisfying \begin{equation} \label{216} \frac{\overline{\theta} \overline{p}_\theta}{\overline{\rho} \overline{e}_\theta} \left( \beta_0 - \gamma_0^2 \frac{\bar{\kappa} \overline{\theta}^2 \overline{p}_\theta}{2 \overline{\rho} \overline{e}_\theta}\right) > 0, \end{equation} in order to obtain \begin{equation} \label{215} a_2 - \frac{b_{24}^2}{2 a_4} = \frac{\overline{\theta} \overline{p}_\theta}{\overline{\rho} \overline{e}_\theta} \left( \beta_0 - \gamma_0^2 \frac{\bar{\kappa} \overline{\theta}^2 \overline{p}_\theta}{2 \overline{\rho} \overline{e}_\theta}\right) + O(\delta) > 0, \end{equation} that is, the third condition in \eqref{condsas}. Finally, the second inequality in \eqref{condsas} follows from \eqref{condF}. 
Hence, it suffices to choose positive values of $\alpha_0, \beta_0, \gamma_0$ such that conditions \eqref{216} and \eqref{212} hold. For instance, we can define \[ \begin{aligned} \alpha_0 &:= \frac{\tau^2 \overline{\theta}^2 \overline{p}_\theta^2 \overline{p}_\rho}{\overline{\rho}^2} > 0,\\ \beta_0 &:= \overline{p}_\theta > 0,\\ \gamma_0 &:= \frac{\overline{\rho} \overline{e}_\theta}{\bar{\kappa} \overline{\theta}^2} > 0 \end{aligned} \] (all positive because of \eqref{H1} - \eqref{H3}). Once these values are determined, we can always find $0 < \delta \ll 1$ sufficiently small such that \eqref{condE}, \eqref{condF} and \eqref{215} hold as well. Substitute $\alpha = \delta^3 \alpha_0$, $\beta = - \delta^2 \beta_0$ and $\gamma = - \delta \gamma_0$ back into the expression of $K$ to obtain the result. \end{proof} \section{Linear decay rates} \label{seclindec} In this section we describe how to obtain decay rates for solutions to the linearized system \eqref{lind} using the properties of the compensating function $K$. We gloss over some details, because the arguments are very similar to those in the case of hyperbolic conservation laws with relaxation (see section 3 in \cite{KY09}), with a slight modification due to the presence of viscous and relaxation terms combined. It is also to be noticed that we are not applying the equivalence result (Theorem \ref{theoequiv}) inasmuch as we are explicitly providing the form of $K$. The estimates hold for both the pure relaxation ($\nu \equiv 0$) and the viscous, thermally relaxed ($\nu > 0$) cases. Let us denote the standard inner product in $\mathbb{C}^n$ as $\< \, , \, \>$ and let $[A]^s := \tfrac{1}{2} (A + A^\top)$ be the symmetric part of any real matrix $A$.
Under the previous assumptions, namely, that \begin{itemize} \item[(i)] $A^j$, $L$, $B$, $j = 0,1$, are real symmetric matrices; \item[(ii)] $A^0 > 0$, $L, B \geq 0$; and \item[(iii)] there exists a compensating function $K$, \end{itemize} let $U$ be the solution to linearized system \eqref{lind}. \begin{lemma} There exists $k > 0$ such that the solutions $U$ to the linear system \eqref{lind} satisfy \begin{equation} \label{est12} |\widehat{U}(\xi,t)| \leq C |\widehat{U}(\xi,0)| \exp \left( - \frac{k \xi^2 t}{1 + \xi^2}\right), \end{equation} for all $t \geq 0$, $\xi \in \mathbb{R}$ and some uniform constant $C > 0$. \end{lemma} \begin{proof} Take the Fourier transform to get equation \eqref{Foulin}. Since the coefficient matrices are symmetric, if we take the inner product of \eqref{Foulin} with $\widehat{U}$ and take the real part we obtain \begin{equation} \label{la4} \tfrac{1}{2} \partial_t \< \widehat{U}, A^0 \widehat{U} \> + \< \widehat{U}, L\widehat{U} \> + \xi^2 \<\widehat{U}, B\widehat{U}\> = 0. \end{equation} Now multiply \eqref{Foulin} by $-i\xi K$ and take the inner product with $\widehat{U}$. The result is \[ - \< \widehat{U}, i\xi KA^0 \widehat{U}_t \> + \xi^2 \< \widehat{U}, KA^1 \widehat{U} \> - \< \widehat{U}, i\xi KL \widehat{U} \> - \< \widehat{U}, i \xi^3 KB \widehat{U} \> = 0. \] Use the fact that $KA^0$ is skew-symmetric to verify that \[ \Re \< \widehat{U}, i\xi KA^0 \widehat{U}_t \> = \tfrac{1}{2} \xi \partial_t \< \widehat{U}, iKA^0 \widehat{U} \>. \] Thus, taking the real part of the previous equation yields \[ - \tfrac{1}{2} \xi \partial_t \< \widehat{U}, iKA^0 \widehat{U} \> + \xi^2 \< \widehat{U}, [KA^1]^s \widehat{U} \> = \Re \big( i \xi \< \widehat{U}, KL \widehat{U} \> \big) + \Re \big( i\xi^3 \< \widehat{U}, KB \widehat{U} \> \big). 
\] Since $L, B \geq 0$ and by symmetry, we obtain the estimate \begin{equation} \label{la7} - \tfrac{1}{2} \xi \partial_t \< \widehat{U}, iKA^0 \widehat{U} \> + \xi^2 \< \widehat{U}, [KA^1]^s \widehat{U} \> \leq \epsilon \xi^2 |\widehat{U}|^2 + C_\epsilon \big( \< \widehat{U}, L\widehat{U} \> + \xi^4 \< \widehat{U}, B\widehat{U} \> \big), \end{equation} for any $\epsilon > 0$ and where $C_\epsilon > 0$ is a uniform constant depending only on $\epsilon > 0$, $|K L^{1/2}|$ and $|KB^{1/2}|$. Now multiply equation \eqref{la4} by $1 + \xi^2$, equation \eqref{la7} by $\delta > 0$ and add them up. The result is \begin{equation} \label{la8} \begin{aligned} \tfrac{1}{2} \partial_t &\left( (1 + \xi^2) \< \widehat{U}, A^0 \widehat{U} \> - \delta \xi \< \widehat{U}, iKA^0 \widehat{U} \> \right) + \< \widehat{U}, L\widehat{U} \> + \xi^4 \< \widehat{U}, B \widehat{U} \> + \\ &\; + \xi^2 \left( \delta \< \widehat{U}, [KA^1]^s \widehat{U} \> + \< \widehat{U}, L\widehat{U} \> + \< \widehat{U}, B \widehat{U} \> \right) \\ &\leq \epsilon \delta \xi^2 |\widehat{U}|^2 + \delta C_\epsilon \big( \< \widehat{U}, L\widehat{U} \> + \xi^4 \< \widehat{U}, B \widehat{U} \>\big). \end{aligned} \end{equation} Now define \[ M:= \< \widehat{U}, A^0 \widehat{U} \> - \frac{\delta \xi}{1 + \xi^2} \< \widehat{U}, iKA^0 \widehat{U} \>. \] Notice that $M$ is real because $A^0$ is symmetric and $KA^0$ is skew-symmetric. Since $A^0 > 0$ there exists $C_0 > 0$ such that $\< \widehat{U}, A^0 \widehat{U} \> \geq C_0|\widehat{U}|^2$. It is then easy to show that there exists $\delta_0 > 0$, sufficiently small, such that if $0 < \delta < \delta_0$ then \[ \frac{1}{C_1}|\widehat{U}|^2 \leq M \leq C_1 |\widehat{U}|^2, \] for some uniform $C_1 > 0$. Now from property (b) of the compensating function $K$ (see Definition \ref{defK}), there exists $\gamma > 0$ such that $\< \widehat{U}, ([KA^1]^s + L + B) \widehat{U} \> \geq \gamma |\widehat{U}|^2$. 
Therefore, by taking $0 <\delta < 1$ we arrive at \[ \< \widehat{U}, (\delta [KA^1]^s + L + B) \widehat{U} \> \geq \delta \gamma |\widehat{U}|^2. \] Choose $\epsilon = \gamma/2$ and $0 < \delta < \min \{ 1, \delta_0, 1/C_\epsilon \}$ to obtain \[ \tfrac{1}{2} \partial_t M + \tfrac{1}{2} \left( \frac{\xi^2}{1 + \xi^2} \right) \delta \gamma |\widehat{U}|^2 + \frac{(1 - \delta C_\epsilon)}{1 + \xi^2} \big( \< \widehat{U}, L\widehat{U} \> + \xi^4 \< \widehat{U}, B \widehat{U} \>\big) \leq 0. \] This yields \[ \tfrac{1}{2} \partial_t M + \frac{2k \xi^2}{1 + \xi^2} M \leq 0, \] with $k = \tfrac{1}{2} \delta \gamma/C_1 > 0$. This inequality readily implies the desired estimate \eqref{est12}. \end{proof} \begin{theorem}[linear decay rates] \label{thmlindecr} Under the assumptions (i) - (iii) suppose that $U_0 \in H^s(\mathbb{R}) \cap L^1(\mathbb{R})$, with $s \geq 2$. Then the solution to the Cauchy problem for linear system \eqref{lind} with $U(x,0) = U_0$ satisfies the decay rate \begin{equation} \label{lindec} \| \partial_x^l U \|_{L^2}^2 \leq C \left( e^{-k t} \| \partial_x^l U_0 \|_{L^2}^2 + (1 + t)^{-(l + 1/2)} \| U_0 \|_{L^1}^2 \right), \end{equation} for $0 \leq l \leq s-1$ and some uniform $C > 0$. \end{theorem} \begin{proof} Multiply estimate \eqref{est12} by $\xi^{2l}$ to obtain \[ \int_\mathbb{R} \xi^{2l} |\widehat{U}(\xi,t)|^2 \, d\xi \leq C \int_{\mathbb{R}} \xi^{2l} |\widehat{U}(\xi,0)|^2 \exp \left( - \frac{2k \xi^2 t}{1 + \xi^2}\right) \, d\xi =: C(I_1(t) + I_2(t)), \] where $I_1$ denotes the integral on the right hand side computed on the set $\xi \in (-1,1)$ and $I_2$ is the integral on $|\xi| > 1$. 
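The estimate of $I_1$ below relies on the uniform boundedness in $t$ of quantities of the form $(1+t)^{l+1/2}\int_{-1}^{1}\xi^{2l} e^{-k\xi^2 t}\,d\xi$, which the text verifies by standard calculus. This is also easy to confirm numerically; the values of $k$ and $l$ in this sketch are arbitrary:

```python
import numpy as np

# Arbitrary sample values: any k > 0 and integer l >= 0 would do.
k, l = 0.3, 2
xi = np.linspace(-1.0, 1.0, 20001)
dx = xi[1] - xi[0]

def A(t):
    # A(t) = (1+t)^{l+1/2} * int_{-1}^{1} xi^{2l} exp(-k xi^2 t) dxi
    f = xi ** (2 * l) * np.exp(-k * xi**2 * t)
    integral = np.sum(0.5 * (f[:-1] + f[1:])) * dx  # trapezoidal rule
    return (1.0 + t) ** (l + 0.5) * integral

ts = np.linspace(0.0, 2000.0, 400)
values = np.array([A(t) for t in ts])
sup_A = values.max()
print(sup_A)  # stays bounded as t grows, consistent with the claim
```

The boundedness reflects the scaling $\int_{-1}^{1}\xi^{2l}e^{-k\xi^2 t}\,d\xi \sim t^{-(l+1/2)}$ as $t \to \infty$.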
Since $\xi^2/(1+\xi^2) \geq \tfrac{1}{2}\xi^2$ for $\xi \in (-1,1)$, we have the estimate \[ \begin{aligned} I_1(t) &= \int_{-1}^1 \xi^{2l} |\widehat{U}(\xi,0)|^2 \exp \left( - \frac{2k \xi^2 t}{1 + \xi^2}\right) \, d\xi \\&\leq \Big( \sup_{\xi \in \mathbb{R} } |\widehat{U}_0(\xi)|^2 \Big) \int_{-1}^1 \xi^{2l} e^{-k\xi^2 t} \, d\xi \\ &\leq \| U_0 \|_{L^1}^2 \int_{-1}^1 \xi^{2l} e^{-k\xi^2 t} \, d\xi. \end{aligned} \] Using standard calculus tools it is easy to verify that \[ A(t) := (1 + t)^{l + 1/2} \int_{-1}^1 \xi^{2l} e^{-k\xi^2 t} \, d\xi \] is continuous and uniformly bounded above for all $t \geq 0$. Therefore we arrive at \[ I_1(t) \leq C (1 + t)^{-(l + 1/2)} \| U_0 \|_{L^1}^2, \] for some $C > 0$ and all $t \geq 0$. Now, if $\xi^2 \geq 1$ then clearly $\exp( -2k\xi^2 t /(1+\xi^2)) \leq e^{-kt}$. Together with Plancherel's theorem, this yields the estimate \[ \begin{aligned} I_2(t) &= \int_{|\xi| \geq 1} \xi^{2l} |\widehat{U}(\xi,0)|^2 \exp \left( - \frac{2k \xi^2 t}{1 + \xi^2}\right) \, d\xi \\&\leq e^{-kt} \int_\mathbb{R} \xi^{2l} |\widehat{U}_0(\xi)|^2 \, d\xi \\ &\leq e^{-kt} \|\partial_x^l U_0 \|_{L^2}^2. \end{aligned} \] Combining both estimates we arrive at \eqref{lindec}. \end{proof} \begin{corollary} \label{corCC} Under hypotheses \eqref{H1} - \eqref{H3} for a compressible fluid, let $\overline{U} = (\overline{\rho}, \overline{u}, \overline{\theta}, 0)^\top \in \mathcal{V}$ be a constant equilibrium state. 
If $U_0 - \overline{U} \in H^s(\mathbb{R}) \cap L^1(\mathbb{R})$, with $s \geq 2$, is an initial perturbation (with finite energy and finite mass) of the equilibrium state $\overline{U}$ then the solutions $U - \overline{U}$ to the linearized equations around $\overline{U}$ satisfy the decay estimates \begin{equation} \label{lindecU} \| \partial_x^l (U - \overline{U}) \|_{L^2}^2 \leq C \left( e^{-k t} \| \partial_x^l (U_0 - \overline{U}) \|_{L^2}^2 + (1 + t)^{-(l + 1/2)} \| U_0 - \overline{U} \|_{L^1}^2 \right), \end{equation} for $0 \leq l \leq s-1$ and some uniform $C, k > 0$. These linear decay rates hold for solutions to the linearization of both the viscous Cattaneo-Christov system \eqref{CCvisc} (for which $\nu > 0$) and the inviscid Cattaneo-Christov model \eqref{CCrel} (for which $\nu \equiv 0$). \end{corollary} \begin{proof} Both systems \eqref{CCvisc} and \eqref{CCrel} can be recast in the quasilinear symmetric form \eqref{nlsyst}, where the solutions are written as $U - \overline{U}$, that is, as perturbations of the equilibrium state. Under hypotheses \eqref{H1} - \eqref{H3}, the coefficients $A^0$, $A^1$, $B$ and $L$ satisfy assumptions (i) - (iii), where the compensating function $K$ is given by \eqref{viscK} in the viscous case ($\overline{\nu} > 0$), and by \eqref{relaxK} in the pure thermal relaxation case ($\overline{\nu} \equiv 0$). Thus, the hypotheses of Theorem \ref{thmlindecr} are satisfied and any solution $U - \overline{U}$ to the linearized system \eqref{lind} with initial condition $U_0 - \overline{U}$ obeys the desired linear decay rates, as claimed. \end{proof} \section{Discussion} In this paper we have shown that one-dimensional Cattaneo-Christov systems for compressible fluid flow are strictly dissipative. This property holds for the case in which viscous and thermal relaxation effects are combined, as well as for the case where viscosity is neglected and the only dissipation terms are due to thermal relaxation. 
We have proved strict dissipativity for these systems by verifying the genuine coupling condition, as well as by providing explicit forms for the compensating functions which, in turn, allow us to establish energy estimates leading to the decay structure of solutions to the linearized problem around equilibrium states. In the process, we have shown, for instance, that Cattaneo-Christov systems in one dimension are symmetrizable. As we have pointed out, symmetrizability is a fundamental property in the theory. It is natural to ask whether multi-dimensional Cattaneo-Christov systems are strictly dissipative. With respect to this problem, it is important to remark, however, that not even the existence of a symmetrizer in several space dimensions is yet clear. As the seasoned reader might readily have noticed, the material derivative in Christov's constitutive law in more than one dimension prevents expression \eqref{symm} from being a symmetrizer for the multi-dimensional case. This is the subject of current investigations. Finally, even though the estimates performed to obtain the decay rates in Theorem \ref{thmlindecr} are very similar (at the linear level) to those for hyperbolic balance laws \cite{KY09} (see also \cite{KaTh83}), we draw the reader's attention to the fact that the statement of Corollary \ref{corCC} should not be taken for granted. For instance, the analyses pertaining to the local existence of solutions for viscous systems of conservation laws \cite{KaTh83,Ser10b}, the global existence of solutions for hyperbolic balance laws \cite{HaNa03,Y04}, as well as the global stability of constant equilibrium states for dissipative balance laws \cite{RuSe04}, all rely on the existence of a convex entropy structure, which is lacking in the present case because the system is not in conservation form.
Therefore, the linear decay rates around equilibrium states for Cattaneo-Christov systems constitute a first step toward showing that constant equilibrium states are asymptotically stable under small perturbations even in the absence of a convex entropy. \section*{Acknowledgements} RGP thanks Jeffrey Humpherys for useful discussions. The work of FA was partially supported by CONACyT (Mexico), through a scholarship for doctoral studies, grant no. 465484. The work of RGP was partially supported by DGAPA-UNAM, program PAPIIT, grant IN-100318.
\section{Introduction} The CMB temperature anisotropies detected by COBE in 1992 are believed to result from inhomogeneities in the matter distribution at the recombination epoch \cite{cobe}. Because Thomson scattering is an isotropic process, any primordial anisotropies (as opposed to inhomogeneities) should have been smoothed out before decoupling \cite{Gaw}. This supports the interpretation of the observed anisotropies as the result of density perturbations, which can act as seeds for the formation of galaxies and clusters. The temperature anisotropies discovered by COBE can thus be taken as evidence that such density inhomogeneities existed in the early universe \cite{Gaw,kosowsky19991,kosowsky19992}. Gravitational collapse of these primordial density inhomogeneities appears to have formed the large-scale structures of clusters, super-clusters and galaxies observed today \cite{Gaw}.\\ Due to the anisotropic Compton scattering around the recombination epoch, the generation of a significant linear polarization (about 10 percent) in the CMB radiation is expected \cite{cosowsky1994,zal,hu}, and these polarization fluctuations should be smaller than the temperature fluctuations \cite{nature}. The most relevant Planck results on cosmological parameters, which include $r$ (the tensor-to-scalar ratio) and the linear polarization map, are reported in \cite{planck, planck1,planck2}. On the other hand, according to the standard scenario of cosmology (with Compton scattering as the main interaction between the CMB and cosmic matter), there is no physical mechanism to generate circularly polarized radiation at the last scattering surface. It should be noted that circular polarization measurements can provide valuable information to test the standard cosmological model and the physics beyond the standard model of elementary particles. Experimental results, however, do not exclude a circular polarization contribution in the CMB anisotropy.
Yet, there are relatively few published limits on the CMB circular polarization \cite{exp}. Almost all the experimental results report an upper limit for circular polarization (V-mode) around $\Delta_V/T_{CMB}<10^{-4}$. \\ Several mechanisms can generate circular polarization. In a renormalizable and gauge-invariant extension of the standard model, the coupling of the photon to an external vector field via a Chern-Simons term, which arises as a radiative correction when gravitational torsion couples to fermions, can be a source of circular polarization of the CMB radiation \cite{Alexander}. The linear polarization of the CMB in the presence of a large-scale magnetic field B can be converted to circular polarization within the formalism of the generalized Faraday rotation (FR) \cite{Jones, Cooray}, known as Faraday conversion (FC); the V-mode can also be produced by this mechanism \cite{Massim, Massimo}. In a background magnetic field, or when the quantum electrodynamics sector of the standard model is extended by Lorentz non-invariant operators or by non-commutativity, the CMB polarization acquires a small amount of circular polarization \cite{Bavarsad}. Photon-photon interactions mediated by the neutral hydrogen background, $\gamma+\gamma+atom \rightarrow \gamma+\gamma+atom$, through forward scattering \cite{Sawyer}, photon-neutrino scattering \cite{Mohammadi}, and the Euler-Heisenberg effective Lagrangian given in \cite{Euler} can also produce circular polarization. See also other interesting mechanisms (such as photon-graviton interaction, magneto-optic effects, ...) \cite{other}. \\ The scattering of a photon from a polarized electron is another mechanism that can be important for generating circular polarization. A description of polarization phenomena for both electromagnetic radiation and elementary particles in terms of a matrix representation of polarization is given in \cite{McMaster}.
Several works have investigated polarization effects in processes involving electrons and photons, such as Compton scattering and bremsstrahlung \cite{Lipp, Lipp2}; they studied the detection and production of circularly polarized gamma radiation by Compton scattering. The production of polarized electrons by photoionization of a polarized atomic beam, a useful source of polarized electrons, has been reported \cite{Long}. A Compton scattering based polarimeter for measuring the linear polarization of hard X-rays (100-300 keV) from astrophysical sources has been developed \cite{Mc}. A Monte Carlo method has been described for the multi-pole scattering of linearly polarized gamma rays in non-magnetized solid state targets, together with the cross section and Stokes parameters for spin-polarized targets \cite{Bell}. The investigation of $\gamma$-ray polarization also allows constraints to be placed on Planck-scale violations of special relativity \cite{Kirk}. The final electron polarization has been calculated for the scattering of a polarized photon by a polarized electron \cite{Kotkin}.\\ In this work, it is shown that Compton scattering of photons from polarized electrons\footnote{In the remainder of the paper we refer to this as ``polarized Compton scattering''.} can generate circular polarization, in contrast to ordinary Compton scattering \cite{cosowsky1994}. An asymmetry between the left- and right-handed number densities of electrons can arise from several sources. One example is beta decay in neutron stars \cite{neutronstar}: since nature admits only left-handed neutrinos, the electron flux produced in beta decay is predominantly left-handed. As another example, axions, as one of the dark-matter candidates, can couple to fermions during inflation and produce the two helicity states of the electron in unequal amounts \cite{axion}.
In the presence of a magnetic field, electrons fill Landau levels \cite{landau}; the lowest Landau level can be occupied only by left-handed electrons, while higher levels are filled by both helicity states. This causes an asymmetry between the left- and right-handed electron distributions of order $ \sim \dfrac{e B}{p^2} $, where $ B $ is the amplitude of the magnetic field and $p$ is the linear momentum of the electrons. From a study of the chiral magnetic instability for electrons with only electromagnetic interactions, the chiral charge density $ n_{5} $ is of order $ \backsimeq 10^{-14} n_{e} $, where $ n_{e} $ is the number density of electrons \cite{landau2}. Also, the electromagnetic interaction of massive spin-$ 1/2 $ Dirac particles can flip their helicity \cite{Accioly}. The above mechanisms motivated us to investigate the generation of CMB circular polarization via polarized Compton scattering.\\ \section{CMB Interaction with Polarized Electrons} To describe an ensemble of photons such as the CMB radiation, one can start with the density matrix: \begin{eqnarray}\label{rho} \hat{\rho}=\dfrac{1}{tr(\hat{\rho})}\int \dfrac{d^{3}k}{(2\pi)^{3}}\rho_{\rm ij}(\bold{k})D_{\rm ij}(\bold{k}) \end{eqnarray} where $ D_{\rm ij} (\bold{k})\equiv a_{\rm i}^{\dagger}(\bold{k}) a_{\rm j}(\bold{k}) $ and $ \rho_{\rm ij} $ are the photon number operator and the general density-matrix components in the space of polarization states, and $ \bold{k} $ indicates the momentum of the photons.
$I$, $Q$, $U$ and $V$ are the Stokes parameters, which are related to $\rho_{\rm ij}(\bold{k})$ as follows \begin{equation} \hat{\rho}=\frac{1}{2}\left( \begin{matrix} I+Q &\,\, U-iV \\ U+iV&\,\, I-Q \\ \end{matrix} \right)\label{matrix} \end{equation} The time evolution of $\rho_{\rm ij}(\bold{k})$, and hence of the Stokes parameters, is given by \cite{cosowsky1994} \begin{eqnarray}\label{h0} (2\pi)^3 \delta^3(0)2k^0 \frac{d}{dt}\rho_{\rm ij}(\bold{k}) \!\!&=& i\langle[H^0_{\rm I}(t),D^0_{\rm ij}(\bold{k})]\rangle-\frac{1}{2}\int dt\langle\left[H^0_{\rm I}(t),[H^0_{\rm I}(0),D^0_{\rm ij}(\bold{k})]\right]\rangle \end{eqnarray} where $k^0=|\bold{k}|$ and $H^0_{\rm I}(t)$ is the first order of the interacting Hamiltonian. The first term on the right-hand side of Eq.(\ref{h0}) is a forward scattering term, and the second one is a higher order collision term. Using standard calculations of Quantum Electrodynamics (QED), the interacting Hamiltonian for electron-photon scattering ($\gamma(p)+e(q)\rightarrow \gamma(p')+e(q')$) is given by \begin{eqnarray}\label{h0interaction} H^0_{\rm I}(t)=\int d\bold{q}d\bold{q'}d\bold{p}d\bold{p'}(2\pi)^3\delta^3(\bold{q'}+\bold{p'}-\bold{q}-\bold{p})exp{[it(q'^0+p'^0-q^0-p^0)]}\nonumber\\ \times[b^\dagger_{ r'}(\bold{q'})a^\dagger_{s'}(\bold{p'})\,\mathcal{M}(q'r',ps_1,qr,p's'_1)\,a_s(\bold{p})b_r(\bold{q})], \end{eqnarray} where $a_{\rm s}, a^{\dagger}_{\rm s'}$ and $b_{\rm r}, b^{\dagger}_{\rm r'}$ are the annihilation and creation operators of the quantized photon and electron fields, respectively.
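As a quick consistency check of Eq.(\ref{matrix}), the map between the Stokes parameters and the $2\times 2$ polarization density matrix can be inverted by tracing $\hat\rho$ against the Pauli basis. The following Python sketch (an illustration only, not part of the derivation) builds $\hat\rho$ from $(I,Q,U,V)$ and recovers them:

```python
import numpy as np

# Pauli matrices; s0 is the identity.
s0 = np.eye(2, dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

def rho_from_stokes(I, Q, U, V):
    """Photon polarization density matrix of Eq. (matrix)."""
    return 0.5 * np.array([[I + Q, U - 1j * V],
                           [U + 1j * V, I - Q]])

def stokes_from_rho(rho):
    """Invert the map via traces: I=Tr(rho), Q=Tr(rho s3), U=Tr(rho s1), V=Tr(rho s2)."""
    I = np.trace(rho @ s0).real
    Q = np.trace(rho @ s3).real
    U = np.trace(rho @ s1).real
    V = np.trace(rho @ s2).real
    return I, Q, U, V
```

The round trip confirms that $\hat\rho=\frac{1}{2}(I\sigma_0+U\sigma_1+V\sigma_2+Q\sigma_3)$ is an equivalent way of writing Eq.(\ref{matrix}).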
$\mathcal {M}$ is the Compton scattering amplitude \begin{eqnarray}\label{12} \mathcal{M}(q'r',ps_{1},qr,p's'_{ 1})=-ie^2\bar{U}_{\rm r'}(\bold{q'})\Big [\frac{\epsilon\!\!\!/_{\rm s'_1}({p'})\ (p\!\!\!/+q\!\!\!/+m)\epsilon\!\!\!/_{\rm s_1}(p)}{2q.p}-\frac{\epsilon\!\!\!/_{\rm s_1}(p)\ (q\!\!\!/-p'\!\!\!\!/+m)\epsilon\!\!\!/_{\rm s'_1}(p')}{2p'.q}\Big]U_r(\bold{q}), \end{eqnarray} where the $r, r'$ and $s_1, s'_1$ indices run over electron and photon spin states. Note that the phase space elements are defined as \begin{eqnarray}\label{phase} d\bold{q}=\frac{d^3\bold{q}}{(2\pi)^3}\frac{m}{q^0},~~~~~~~~~d\bold{p}=\frac{d^3p}{(2\pi)^32p^0}. \end{eqnarray} \subsection{Forward Scattering terms} The usual assumption for forward scattering is that the fields begin and end as free fields, with the interactions isolated from each other. Under this assumption one can show (as done in \cite{cosowsky1994}) \begin{eqnarray} \bar{U}_r(q)\epsilon\!\!\!/_{s}(q\!\!\!/+m)\epsilon\!\!\!/_{s'}U_r(q)&&= \bar{U}_r(q)(2q\cdot\epsilon_s-q\!\!\!/\epsilon\!\!\!/_{s}+m\epsilon\!\!\!/_{s})\epsilon\!\!\!/_{s'}U_r(q)\nonumber\\ &&=(2q.\epsilon_s)\bar{U}_r(q)\epsilon\!\!\!/_{s'}U_r(q)\nonumber\\ &&=\frac{2}{m}(q\cdot\epsilon_s)(q\cdot\epsilon_{s'})\nonumber\\ &&=\bar{U}_r(q)\epsilon\!\!\!/_{s'}(q\!\!\!/+m)\epsilon\!\!\!/_{s}U_r(q) \end{eqnarray} so the two terms in Eq.(\ref{12}) cancel each other. Thus, the forward scattering term of ordinary Compton scattering makes no contribution to the generation of CMB circular polarization when the electrons are unpolarized. This term does, however, have a non-zero contribution in the presence of neutrino-photon scattering \cite{rmohammadisadegh}. Polarized electrons are treated in the same way as neutrinos or any other particle species governed by the Boltzmann equations.
Evaluating the forward scattering term for Compton scattering, one finds that it vanishes independently of whether the electrons are polarized or unpolarized (for more details see \cite{cosowsky1994}). \subsection{Damping terms} The contribution of the damping term (the usual cross section) of Compton scattering to the generation of the CMB polarization has been studied in many works [see for example \cite{cosowsky1994} and its references]. First, we review the result presented in \cite{cosowsky1994} for the case of Compton scattering of photons from unpolarized electrons. The Boltzmann equation is \begin{eqnarray}\label{eq:Boltz} 2k^0\dot{\rho}_{\rm ij}(\mathbf{k})&=& \frac{1}{4}\int d\bold{q}d\bold{q'}d\bold{p}(2\pi)^4\delta^4(q'+p-q-k) \mathcal{M}(q'r',ps'_1,qr,ks_1)\mathcal{M^{\dagger}}(qr,ks'_2,q'r',ps_2)\nonumber\\ &\times&{\Big[n_e(\bold x,\bold q)\delta_{\rm s_2\rm s'_1}(\delta_{\rm i\rm s_1}\rho_{\rm s'_2\rm j}(\mathbf{k})+\delta_{\rm js'_2}\rho_{\rm i\rm s_1}(\mathbf{k}))-2n_e(\bold x, \bold q')\delta_{\rm i\rm s_1}\delta_{\rm j\rm s'_2}\rho_{\rm s'_1\rm s_2}(\mathbf p)\Big]} \end{eqnarray} where $n_e(\bold x, \bold q)$ is the electron distribution function. The distribution function of cosmic electrons, which is a thermal Maxwell-Boltzmann distribution \cite{cosowsky1994}, is \begin{eqnarray} n_e(\bold x,\bold q)=n_e(\bold x)(\frac{2\pi}{mT_e})^{3/2}exp{\big[-\frac{(\bold{\bold q}-m\bold{\bold v}(\bold x))^2}{2mT_e}\big]} \end{eqnarray} where $n_e(\bold x)$, $m$, $T_e$ and $\bold{\bold v}(\bold x)=v_e(\bold x)\hat{\bold v}$ are the electron number density, electron mass, electron temperature and electron bulk velocity, respectively.
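The normalization and first moment of this Maxwell-Boltzmann distribution (the two integrals used below) can be checked numerically. Since the distribution factorizes over Cartesian components, a one-dimensional check suffices; the sketch below uses hypothetical nonrelativistic values in natural units, chosen only for illustration:

```python
import numpy as np

# Illustrative values (natural units): electron mass, temperature, bulk velocity.
m, T_e, v = 1.0, 0.01, 0.05

sigma = np.sqrt(m * T_e)                      # thermal momentum spread
q = np.linspace(m * v - 10 * sigma, m * v + 10 * sigma, 20001)

# One Cartesian component of n_e(x, q)/n_e(x), including the
# (2*pi/(m*T_e))^{1/2} prefactor and the 1/(2*pi) phase-space measure.
ne_q = (np.sqrt(2 * np.pi / (m * T_e))
        * np.exp(-(q - m * v) ** 2 / (2 * m * T_e)) / (2 * np.pi))

norm = np.trapz(ne_q, q)               # -> 1:     int d^3q/(2pi)^3 n_e = n_e(x)
first_moment = np.trapz(q * ne_q, q)   # -> m*v:   int d^3q/(2pi)^3 q_i n_e = m v_i n_e(x)
```

This confirms the two "useful integrals" quoted in the text: the distribution integrates to $n_e(\bold x)$, and its first moment gives $m v_{\rm i}(\bold x) n_e(\bold x)$.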
Let us also write the following useful integrals \begin{eqnarray} && \int \frac{d^3\bold q}{(2\pi)^3}n_e(\bold x,\bold q)=n_e(\bold x), \\ && \int \frac{d^3\bold q}{(2\pi)^3}q_{\rm i} n_e(\bold x,\bold q)=m v_{\rm i}(\bold x)n_e(\bold x). \end{eqnarray} With these ingredients we now evaluate Eq.(\ref{eq:Boltz}). First we can simplify the Compton scattering amplitude Eq.(\ref{12}) as follows \begin{eqnarray}\label{simple12} \mathcal{M}(q'r',ks_1,qr,ps'_1)=-ie^2\bar{U}_{\rm r'}(q')\Big [\frac{\epsilon\!\!\!/_{\rm s'_1}(p)(2q\cdot\epsilon_{\rm s_1}(k)- \epsilon\!\!\!/_{\rm s_1}(k)k\!\!\!/)}{2q\cdot k}-\frac{\epsilon\!\!\!/_{\rm s_1}(k)\ (2q\cdot\epsilon_{\rm s'_1}(p)+\epsilon\!\!\!/_{\rm s'_1}(p)p\!\!\!/)}{2p\cdot q}\Big]U_r(q)\nonumber\\ \end{eqnarray} Then the squared Compton amplitude in abbreviated form is \begin{eqnarray} \mathcal{M}(q'r',ps'_1,qr,ks_1)\mathcal{M}(qr,ks'_2,q'r',ps_2)=e^4\sum\Big\{\bar U_{r'}(q')T(s_1,s'_1)U_r(q)\bar U_r(q)\tilde T(s_2,s'_2)U_{r'}(q')\Big\}, \label{SA} \end{eqnarray} where \begin{eqnarray} &&T(s_1,s'_1)=\frac{\epsilon\!\!\!/_{\rm s'_1}(p)}{2q\cdot k}[2q \cdot \epsilon_{\rm s_1}(k)-\epsilon\!\!\!/_{\rm s_1}(k)k\!\!\!/]-\frac{\epsilon\!\!\!/_{\rm s_1}(k)}{2q \cdot p}[2q \cdot \epsilon_{\rm s'_1}(p)+\epsilon\!\!\!/_{\rm s'_1}(p)p\!\!\!/]\label{ts1}\\ &&\tilde T(s_2,s'_2)=\frac{1}{2q\cdot k}[2q\cdot \epsilon_{\rm s'_2}(k)-k\!\!\!/\epsilon\!\!\!/_{\rm s'_2}(k)]\epsilon\!\!\!/_{\rm s_2}(p)-\frac{1}{2q\cdot p}[2q\cdot \epsilon_{\rm s_2}(p)+p\!\!\!/\epsilon\!\!\!/_{\rm s_2}(p)]\epsilon\!\!\!/_{\rm s'_2}(k)\label{ts2} \end{eqnarray} Note that in Compton scattering off unpolarized electrons one averages over the initial and sums over the final electron helicity states in Eq.(\ref{SA}), which allows the use of the ordinary completeness relation $\sum_{r}U_{r}(\bold{q})\bar U_{r}(\bold{q})=\frac{q\!\!\!/+m}{m}$ for both ingoing and outgoing electrons. Here, however, we consider a small polarization for the ingoing electrons, in which case the completeness relation of the Dirac spinors is modified as \cite{kleiss} \begin{eqnarray}\label{completeness} U_{r}({q})\bar U_{r}({q})=\Big[\frac{q\!\!\!/+m}{2m}\frac{1+\gamma_5{S}\!\!\!/_r(\bold q)}{2}\Big] \end{eqnarray} where the spin four-vector $S_r$, with $r=L,R$, is defined as \begin{eqnarray}\label{HO} S_R(\bold q)=(\frac{{\mid\bold q\mid}}{m},\frac{E}{m}\frac{\bold q}{\mid{\bold q}\mid}),~~~~~~~~S_L(\bold q)=-S_R(\bold q). \end{eqnarray} Let us consider a small fraction $\delta_L $ of left-handed polarization for the ingoing cosmic electrons, while applying no constraint on the outgoing electrons, which interact freely with the CMB photons. Hence, we have \begin{eqnarray}\label{squared} \mathcal{M}(q'r',ps'_1,qr,ks_1)\mathcal{M}(qr,ks'_2,q'r',ps_2)=e^4Tr\Bigg\{\frac{(q\!\!\!/'+m)}{2m}T(s_1,s'_1)\frac{(q\!\!\!/+m)}{2m}\Big[\frac{1+\gamma_5 {S}\!\!\!/_L(\bold q)}{2}\Big]\tilde T(s_2,s'_2)\Bigg\} \end{eqnarray} It should be noted that in the above equation $ q$ and $q'$ are the ingoing and outgoing electron momenta, respectively.
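The defining properties of the spin four-vector of Eq.(\ref{HO}) and of the helicity projector $(1+\gamma_5 {S}\!\!\!/_r)/2$ in Eq.(\ref{completeness}) can be verified numerically: $S_r\cdot S_r=-1$, $S_r\cdot q=0$, and the projector is idempotent because $(\gamma_5{S}\!\!\!/)^2=-S\cdot S=1$. The sketch below uses the Dirac representation of the gamma matrices with metric $(+,-,-,-)$ and arbitrary illustrative numbers:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def block(a, b, c, d):
    return np.block([[a, b], [c, d]])

# Gamma matrices in the Dirac representation.
g0 = block(I2, 0 * I2, 0 * I2, -I2)
g = [block(0 * I2, s, -s, 0 * I2) for s in (sx, sy, sz)]
g5 = block(0 * I2, I2, I2, 0 * I2)

def slash(v):
    """v^mu gamma_mu with metric (+,-,-,-)."""
    return v[0] * g0 - v[1] * g[0] - v[2] * g[1] - v[3] * g[2]

def dot(a, b):
    """Minkowski product a.b."""
    return a[0] * b[0] - a[1] * b[1] - a[2] * b[2] - a[3] * b[3]

# Illustrative electron momentum along z.
m, qz = 1.0, 0.7
E = np.sqrt(qz ** 2 + m ** 2)
q = np.array([E, 0.0, 0.0, qz])
SR = np.array([qz / m, 0.0, 0.0, E / m])   # Eq. (HO); S_L = -S_R

P_R = (np.eye(4) + g5 @ slash(SR)) / 2     # right-handed projector
P_L = (np.eye(4) - g5 @ slash(SR)) / 2     # left-handed projector
```

Summing the two projectors gives the identity, so summing Eq.(\ref{completeness}) over $r=L,R$ reproduces the unpolarized completeness relation.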
One can rewrite Eq.(\ref{squared}) as follows \begin{eqnarray}\label{msquared} \mathcal{M}(q'r',ps'_1,qr,ks_1)\mathcal{M}(qr,ks'_2,q'r',ps_2)&=&\frac{e^4}{2}Tr\Bigg\{\frac{q\!\!\!/'+m}{2m}T(s_1,s'_1)\frac{q\!\!\!/+m}{2m}\tilde{T}(s_2,s'_2)\Bigg\}\\ &+&\frac{e^4}{2}Tr\Bigg\{\frac{q\!\!\!/'+m}{2m}T(s_1,s'_1)\frac{q\!\!\!/+m}{2m}(\gamma_5{S}\!\!\!/_L(\bold q))\tilde{T}(s_2,s'_2)\Bigg\},\nonumber \end{eqnarray} where the first term is the amplitude for Compton scattering off unpolarized electrons, investigated in the standard scenario, whereas the second term is the contribution of Compton scattering off polarized electrons, denoted by $\mid\mathcal M\mid_{\rm P}^{~2}$. With straightforward calculations (using the Mathematica package \cite{mertig}) and keeping the dominant contribution, we have \begin{eqnarray} \mid\mathcal M\mid_{\rm P}^{~2}\approx \frac{e^4}{4(q\cdot k)^2}&\Bigg\{&q\cdot \epsilon_{\rm s'_2}(k)\Big(k\cdot \epsilon_{\rm s'_1}(p)\hat{q}\cdot \epsilon_{\rm s_1}(k)\times\epsilon_{\rm s_2}(p)+p\cdot \epsilon_{\rm s_1}(k)\hat{q}\cdot\epsilon_{\rm s'_1}(p)\times\epsilon_{\rm s_2}(p)\Big)\nonumber\\ &&+q\cdot\epsilon_{\rm s_2}(p)\Big(p\cdot\epsilon_{\rm s_1}(k)\hat{q}\cdot\epsilon_{\rm s'_2}(k)\times\epsilon_{\rm s'_1}(p)+\hat{q}\cdot\epsilon_{\rm s_1}(k)\epsilon_{\rm s'_2}(k)\cdot p\times\epsilon_{\rm s'_1}(p)\Big)\nonumber\\ &&+\hat{q}\cdot\epsilon_{\rm s'_1}(p)\Big(q\cdot\epsilon_{\rm s_2}(p)k\cdot\epsilon_{\rm s_1}(k)\times\epsilon_{\rm s'_2}(k)-q\cdot\epsilon_{\rm s'_2}(k)\epsilon_{\rm s_2}(p)\cdot k\times\epsilon_{\rm s_1}(k)\Big)\nonumber\\ &&-q\cdot\epsilon_{\rm s'_2}(k)\hat{q}\cdot\epsilon_{\rm s_1}(k)p\cdot\epsilon_{\rm s'_1}(p)\times\epsilon_{\rm s_2}(p)\nonumber\\ &&+\epsilon_{\rm s_1}(k)\cdot\epsilon_{\rm s'_1}(p)\Big(q\cdot\epsilon_{\rm s_2}(p)\hat{q}\cdot k\times\epsilon_{\rm s'_2}(k)-q\cdot\epsilon_{\rm s'_2}(k)\hat{q}\cdot k\times\epsilon_{\rm s_2}(p)\nonumber\\
&&+q\cdot\epsilon_{\rm s_2}(p)\hat{q}\cdot p\times\epsilon_{\rm s'_2}(k)-q\cdot\epsilon_{\rm s'_2}(k)\hat{q}\cdot p\times\epsilon_{\rm s_2}(p)\Big) \nonumber\\ &&+\epsilon_{\rm s_1}(k)\cdot\epsilon_{\rm s_2}(p)q\cdot\epsilon_{\rm s'_2}(k)\hat{q}\cdot p\times\epsilon_{\rm s'_1}(p)+\epsilon_{\rm s'_1}(p)\cdot\epsilon_{\rm s'_2}(k)q\cdot\epsilon_{\rm s_2}(p)\hat{q}\cdot k\times\epsilon_{\rm s_1}(k)\nonumber\\ &&-\delta_{\rm s_2\rm s'_1}q\cdot\epsilon_{\rm s'_2}(k)\hat{q}\cdot k\times\epsilon_{\rm s_1}(k)-\delta_{\rm s_1\rm s'_2}q\cdot\epsilon_{\rm s_2}(p)\hat{q}\cdot p\times\epsilon_{\rm s'_1}(p)\Bigg\},\label{mp1} \end{eqnarray} where $\hat{q}=\bold q/\mid \bold q\mid$. The Boltzmann equation for $\rho_{ij}(\bold{x},\bold{k})$ is then given by \begin{eqnarray}\label{cegamma} \frac{d}{dt}\rho_{\rm ij}(\bold{x},\bold{k})&=&\frac{e^4\delta_L}{2k^0}\int d\bold{q}d\bold{p}\frac{m}{E(\bold{q}+\bold{k}-\bold{p})}(2\pi)\delta\big(E(\bold{q}+\bold{k}-\bold{p})+p-E(\bold{q})-k\big)\nonumber\\ &\times&\bigg(n_e(\bold x, \bold q)\delta_{\rm s_2\rm s'_1}(\delta_{\rm i\rm s_1}\rho_{\rm s'_2\rm j}(\mathbf{k})+\delta_{\rm j\rm s'_2}\rho_{\rm i\rm s_1}(\mathbf{k}))-2n_e(\bold x, \bold q')\delta_{\rm i\rm s_1}\delta_{\rm j\rm s'_2}\rho_{\rm s'_1\rm s_2}(\mathbf p)\bigg)\mid\mathcal M\mid_P^{~2}\nonumber\,, \end{eqnarray} where we introduce $\delta_{L}=n_{e,L}/n_e$ and $\delta_{R}=n_{e,R}/n_e$ as the fractions of the polarized electron number density, with net left- or right-handed polarization, to the total one. By running over all indices, ignoring the recoil momentum of the final electrons, and using the approximations \begin{equation}\label{1} \delta\big(E(\bold{q}+\bold{k}-\bold{p})+p-E(\bold{q})-k\big)\sim\delta\big(p-k\big), \end{equation} \begin{eqnarray} E(\bold q+\bold Q)\sim m \big[1+\frac{\bold q^2}{2m^2}+\frac{\bold q\cdot\bold Q}{m^2}+....\big] \end{eqnarray}
\begin{eqnarray} n_e(\bold q+\bold Q)\sim n_e(\bold q)\big[1-\frac{\bold Q\cdot(\bold q-m\bold v)}{m T_e}+....\big], \end{eqnarray} the time evolution of the Stokes parameters takes the following form \begin{eqnarray}\label{idotk} \dot{I}(\mathbf{k})=\dot{\tau}_{_{\rm PC}}\int \frac{d\Omega}{4\pi}\sum_{_{\rm S}}\bigg[ f_{IS}{(\hat k,\hat p)}S(\bold k)+g_{IS}{(\hat k,\hat p)}S(\bold p)\bigg], \end{eqnarray} \begin{eqnarray}\label{qdotk} \dot{Q}(\mathbf{k})=\dot{\tau}_{_{\rm PC}}\int \frac{d\Omega}{4\pi}\sum_{_{\rm S}}\bigg[ f_{QS}{(\hat k,\hat p)}S(\bold k)+g_{QS}{(\hat k,\hat p)}S(\bold p)\bigg] \end{eqnarray} \begin{eqnarray}\label{udotk} \dot{U}(\mathbf{k})=\dot{\tau}_{_{\rm PC}}\int \frac{d\Omega}{4\pi}\sum_{_{\rm S}}\bigg[ f_{US}{(\hat k,\hat p)}S(\bold k)+g_{US}{(\hat k,\hat p)}S(\bold p)\bigg] \end{eqnarray} \begin{eqnarray}\label{vdotk} \dot{V}(\mathbf{k})=\dot{\tau}_{_{\rm PC}}\int \frac{d\Omega}{4\pi}\sum_{_{\rm S}}\bigg[ f_{VS}{(\hat k,\hat p)}S(\bold k)+g_{VS}{(\hat k,\hat p)}S(\bold p)\bigg], \end{eqnarray} where $S\in\{I,Q,U,V\}$ and \begin{equation}\label{optical-pc1} \dot{\tau}_{_{\rm PC}}=\frac{3}{2} \frac{m\,v_e(\bold x) }{k^0} \sigma_T\,\delta_L\,n_e(\bold x)\,. \end{equation} All the coefficients $f_{IS}$, $f_{QS}$, $f_{US}$ and $f_{VS}$ (and likewise the $g$ coefficients) can be obtained from Eqs.(\ref{matrix}) and (\ref{mp1}). As we are interested in the calculation of the circular polarization, i.e. Eq.\eqref{vdotk}, we disregard the time evolution of $ \dot{I}(\mathbf{k}) $, $ \dot{Q}(\mathbf{k}) $ and $ \dot{U}(\mathbf{k}) $ in what follows.
Also, in Eq.(\ref{vdotk}) the coefficients $ f_{VQ}$, $ g_{VQ}$, $ f_{VU}$ and $ g_{VU} $ are not considered, because $ Q $ and $ U $ are at least one order of magnitude smaller than $ I $ in the case of the CMB radiation. \section{Power Spectrum of the Circular Polarization} We continue the calculation in the presence of the primordial scalar perturbations, indicated by $(S)$, which we expand in Fourier modes characterized by a wave number $\mathbf{K}$. For each given wave number $\mathbf{K}$, it is useful to select a coordinate system with $\mathbf{K} \parallel \hat{\mathbf{z}}$ and $(\hat{\mathbf{e}}_1,\hat{\mathbf{e}}_2)=(\hat{\mathbf{e}}_\theta, \hat{\mathbf{e}}_\phi)$. The baryon bulk velocity $v$ at linear order is irrotational, meaning that it is the gradient of a potential, and thus in Fourier space it is parallel to the wave number $\bold{K}$ (see \cite{hu2000}), \begin{equation}\label{BV} \bold{v}||\bold{K}\,\, , \,\,\,\,\,\,v=|\bold{v}|\approx(1+z)^{-1/2}10^{-3}. \end{equation} The temperature anisotropy $\Delta^{(S)}_{I}$ and circular polarization $\Delta^{(S)}_{V}$ of the CMB radiation depend on the conformal time $\eta$ and can be expanded in multipole moments as follows \begin{eqnarray} \Delta_{I,V}(\eta,\mathbf{K},\mu)=\sum^{\infty}_{l=0}(2 l+1)(-i)^l\Delta^l_{I,V}(\eta,\mathbf{K})P_{l}(\mu) \end{eqnarray} where $\mu = \hat{n}\cdot\hat{\mathbf{K}} = \cos \theta$, $\theta$ is the angle between the CMB photon direction $\hat{n} = \mathbf{k}/|\mathbf{k}|$ and the wave vector $\mathbf{K}$, and $P_l(\mu)$ is the Legendre polynomial of rank $l$.
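The multipole expansion can be inverted with the Legendre orthogonality relation $\int_{-1}^{1}P_l P_{l'}d\mu = \frac{2}{2l+1}\delta_{ll'}$, giving $\Delta^l = \frac{i^l}{2}\int_{-1}^{1}d\mu\,\Delta(\mu)P_l(\mu)$. The following Python sketch (hypothetical low-$l$ coefficients, for illustration only) verifies this round trip numerically:

```python
import numpy as np
from numpy.polynomial.legendre import legval, leggauss

# Hypothetical multipole coefficients Delta^l for l = 0..3.
delta_l = np.array([1.0, 0.3, 0.05, 0.01])
L = len(delta_l)

mu, w = leggauss(64)   # Gauss-Legendre nodes and weights on [-1, 1]

# Delta(mu) = sum_l (2l+1)(-i)^l Delta^l P_l(mu), evaluated as a Legendre series.
coeffs = (2 * np.arange(L) + 1) * (-1j) ** np.arange(L) * delta_l
delta_mu = legval(mu, coeffs)

# Invert via orthogonality: Delta^l = (i^l / 2) * int dmu Delta(mu) P_l(mu).
recovered = np.array([
    (1j ** l / 2) * np.sum(w * delta_mu * legval(mu, np.eye(L)[l]))
    for l in range(L)
])
```

The recovered coefficients match the input, which is the inversion used implicitly when projecting the Boltzmann hierarchy onto multipoles.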
Thus we continue with the definition\footnote{This is a source of confusion in the literature; we note that with this definition the right-hand side of $\Delta^{(S)}_I$ is dimensionless, and we proceed with it.} \begin{eqnarray} \Delta^{(S)}_{I}(\mathbf{K},\mathbf{k},\eta)\equiv\left(4k\frac{\partial I_0}{\partial k}\right)^{-1} \Delta^{(S)}_I(\mathbf{K},\mathbf{k},\eta). \end{eqnarray} Here the $\frac{d}{dt}$ on the left-hand side of Eq.(\ref{cegamma}) should be defined so as to take into account the space-time structure and gravitational effects such as the red-shift. For each plane wave, each scattering and interaction can be described as transport through a plane-parallel medium \cite{mukh,chandra}, and finally the Boltzmann equation in the presence of the primordial scalar perturbations is given as \begin{eqnarray} &&\frac{d}{d\eta}\Delta _{V}^{(S)} +iK\mu \Delta _{V}^{(S)} = -\dot\tau_{e\gamma}\Big[\Delta _{V}^{(S)}-\frac{3}{2}\mu \Delta _{V1}^{(S)}\Big]-i\frac{2}{3}\dot{\tau}_{pc}\Big[P_2(\mu)\Delta_I^{(S)}-\Delta_{I2}^{(S)}\Big] \label{Boltzmann2} \end{eqnarray} where $\dot{\tau}_{e\gamma}\equiv \frac{d\tau_{e\gamma}}{d\eta}$, $\tau_{e\gamma}$ is the Compton scattering optical depth and $a(\eta)$ is the normalized scale factor.\\ The values of $\Delta _{I}^{ (S)}(\eta_0,\hat{n})$ and $\Delta _{V}^{(S)}(\eta_0,\hat{n})$ at the present time $\eta_0$ and in the direction $\hat{n}$ can be obtained in the following general form by integrating the Boltzmann equation (\ref{Boltzmann2}) along the line of sight \cite{zal} and summing over all the Fourier modes $\mathbf{K}$, \begin{eqnarray} \Delta _{V}^{ (S)}(\hat{\bf{n}}) &=&\int d^3 \bf{K} \xi(\bf{K})\Delta _{V}^{(S)} (\mathbf{K},\mathbf{k},\eta_0),\,\,\,\,\,\label{Boltzmann3} \end{eqnarray} where $\xi(\mathbf{K})$ is a random variable used to characterize the initial amplitude of each primordial scalar perturbation mode, and
then the values of $\Delta _{V}^{(S)}(\mathbf{K},\mathbf{k},\eta_0)$ are given as \begin{eqnarray} \Delta _{V}^{(S)} (\mathbf{K},\mu,\eta_0) &\approx&\int_0^{\eta_0} d\eta\, \dot\tau_{e\gamma}\,e^{ix \mu -\tau_{e\gamma}}\,\,\Big[ \frac{3}{2}\mu\Delta _{V1}^{(S)}-i\frac{2\dot\tau_{pc}}{3\dot\tau_{e\gamma}}(P_2(\mu)\Delta_I^{(S)}-\Delta_{I2}^{(S)})\Big],\label{VS} \end{eqnarray} where $x=K(\eta_0 - \eta)$. The differential optical depth $\dot\tau_{e\gamma}(\eta)$ and the total optical depth $\tau_{e\gamma}(\eta)$ due to Thomson scattering at time $\eta$ are defined as follows \begin{equation}\label{optical} \dot{\tau}_{e\gamma}=a\,n_e\,\sigma_T,\,\,\,\,\,\,\,\tau_{e\gamma}(\eta)=\int_\eta^{\eta_0}\dot{\tau}_{e\gamma}(\eta) d\eta. \end{equation} The power spectrum $ C_{l}^{V(S)} $, due to Compton scattering in the presence of the scalar perturbation, is \begin{eqnarray} C_{l}^{V(S)}=\langle \Delta _{Vl}^{(S)\dagger} \Delta _{Vl}^{(S)}\rangle.
\end{eqnarray} Therefore, the circular power spectrum of the CMB radiation, $ C_{l}^{V} $, due to Compton scattering is \begin{eqnarray} C_{l}^{V}&=&\langle a_{Vl}^{\ast}a_{Vl}\rangle \nonumber\\ &\approx & \dfrac{1}{2l+1} \int d^{3} \bold{K} P_{\varphi}^{(S)}(\bold{K},\tau)\int \vert d\Omega P_{l}^{\ast} \int_{0}^{\tau_{0}}d\tau \dot{\tau}_{e\gamma} e^{ikx-\tau_{e\gamma}} [ \frac{2\dot{\tau}_{pc}}{3\dot\tau_{e\gamma}}(P_2(\mu)\Delta_I^{(S)}-\Delta_{I2}^{(S)}) ] \vert^{2},\nonumber\\ \end{eqnarray} and with this approximation the power spectrum of circular polarization for $l<2$ can be estimated as \begin{eqnarray} C_{l}^{V (S)}\approx\Big(\dfrac{\dot{\tau}_{PC}}{\dot{\tau}_{e\gamma}}\Big|_{\rm av}\Big)^{2}C_{l}^{I(S)}=10^{8}\delta_L^2\,C_{l}^{I(S)}, \end{eqnarray} where $C_{l}^{I(S)}$ is the power spectrum of the temperature fluctuations. Also, using Eq.(\ref{BV}), we have \begin{equation}\label{av} \dfrac{\dot{\tau}_{PC}}{\dot{\tau}_{e\gamma}}\Big|_{\rm av}\simeq\frac{m v_{e0}}{k^0}\frac{\delta_L}{z_{lss}}\int_0^{z_{lss}}\frac{dz}{(1+z)^{3/2}}\approx10^{4}\delta_L, \end{equation} where $ v_{e0}$ is the bulk velocity at the present time and $z_{lss}$ indicates the red-shift at the last scattering surface. \section{Conclusion} In this work, motivated by the assumption of an asymmetry between the number densities of left- and right-handed electrons in the universe, we have calculated the dominant contribution of this asymmetry to the power spectrum of circular polarization $C_l^{V(S)}$ of the CMB radiation. Our calculations build on the techniques of Refs.~\cite{McMaster}-\cite{Kotkin}, within the Quantum Boltzmann Equation approach. The forward scattering term of polarized Compton scattering has no contribution to the CMB polarizations.
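The order of magnitude of Eq.(\ref{av}) can be checked numerically. The sketch below assumes illustrative values not fixed by the text: $m_e = 511\,$keV, $v_{e0}=10^{-3}$ from Eq.(\ref{BV}) at $z=0$, a typical present-day CMB photon energy $k^0\sim 6\times 10^{-4}\,$eV, and $z_{lss}\simeq 1100$; the redshift integral also has the closed form $2[1-(1+z_{lss})^{-1/2}]$:

```python
import numpy as np

# Assumed illustrative values (not specified in the text):
m_e = 511e3     # electron mass in eV
v_e0 = 1e-3     # present-day bulk velocity, Eq. (BV) at z = 0
k0 = 6e-4       # typical CMB photon energy today in eV (~2.7 k_B T_CMB)
z_lss = 1100    # redshift of the last scattering surface

z = np.linspace(0.0, z_lss, 200001)
integral = np.trapz((1.0 + z) ** -1.5, z)        # int_0^{z_lss} dz (1+z)^{-3/2}
analytic = 2.0 * (1.0 - (1.0 + z_lss) ** -0.5)   # closed form of the same integral

# Averaged ratio of Eq. (av), divided by delta_L.
ratio_over_deltaL = (m_e * v_e0 / k0) * integral / z_lss
```

With these inputs the ratio comes out in the $10^3$-$10^4$ range, consistent at the order-of-magnitude level with the $\approx 10^4\,\delta_L$ quoted in Eq.(\ref{av}); the exact prefactor depends on the assumed $k^0$.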
We have shown that the damping term of polarized Compton scattering in the presence of the scalar perturbation can generate circular polarization in the CMB radiation, such that $C_l^{V(S)}$ is proportional to $C_l^{I(S)}$ and to $\delta_L^2$. As our results show, to generate circular polarization for the CMB, the bulk velocity of the cosmic electrons must be non-zero. An interesting point is the conversion of the anisotropy intensity $\Delta_I$ into circular polarization. Most observational groups have reported an upper limit of around $\Delta_V/T_{CMB}<10^{-4}\sim \Delta T/T_{CMB}$, which means $C_l^{V(S)}\leq C_l^{I(S)}$. If we apply this upper limit, the fraction of the polarized electron number density to the total one should satisfy $\delta_L<10^{-4}$. As Eqs.(\ref{idotk})-(\ref{udotk}) show, polarized Compton scattering can also generate B-mode polarization in the presence of the scalar perturbation, and can affect the value of the E-mode polarization and the anisotropy of the CMB temperature \cite{next}.
\section{Introduction} Magnetars are strongly magnetized neutron stars~\cite{DT} with emissions powered by the dissipation of magnetic energy. According to one of the conjectures, magnetars can be the source of the extremely powerful short-duration $\gamma$-ray bursts~\cite{U,KR,HBS,CK}. The magnetic field strength at the surface of a magnetar is of about $10^{14}$-$10^{15}$~G~\cite{TD,IShS}. Such huge magnetic fields can be inferred from observations of magnetar periods and spin-down rates, or from hydrogen spectral lines. In the interior of a magnetar the magnetic field strength may be even larger, reaching values of about $10^{18}$~G~\cite{CBP,BPL}. Under such circumstances, the issue of interest is the behavior of neutron star matter in a strong magnetic field~\cite{CBP,BPL,CPL,PG,IY4}. A realistic description of neutron star matter should include, at least, neutrons, protons, electrons and muons subject to the charge neutrality and beta-equilibrium conditions. The magnetic field then influences the system properties through Pauli paramagnetism as well as via Landau quantization of the energy levels of charged particles. Nevertheless, because the neutron fraction is usually considered to be dominant, neutron star matter can be approximated by pure neutron matter as a first step towards a more realistic description of neutron stars. Such an approximation was used in the recent study~\cite{PG} in model calculations with effective nuclear forces. It was shown that the behavior of spin polarization of neutron matter in the high density region in a strong magnetic field crucially depends on whether neutron matter develops a spontaneous spin polarization (in the absence of a magnetic field) at several times nuclear matter saturation density, or the appearance of a spontaneous polarization is not allowed at the relevant densities (or is delayed to much higher densities).
The first case is usual for the Skyrme forces~\cite{R,S,O,VNB,RPLP,ALP,MNQN,KW94,I,IY,I06}, while the second one is characteristic of the realistic nucleon-nucleon (NN) interaction~\cite{PGS,BK,H,VPR,FSS,KS,S11,BB,B}. In the former case, a ferromagnetic transition to a totally spin polarized state occurs, while in the latter case a ferromagnetic transition is excluded at all relevant densities and the spin polarization remains quite low even in the high density region. The scenario for the evolution of spin polarization at high densities in which the spontaneous ferromagnetic transition in neutron matter is absent was considered for magnetic fields up to $10^{18}$~G~\cite{PG}. Such an estimate for the limiting value of the magnetic field strength in the core of a magnetar is usually obtained from the scalar virial theorem~\cite{LS91} based on Newtonian gravity. However, the density in the core of a magnetar is so large that the effects of general relativity might become of importance. Then a further increase of the core magnetic field above $10^{18}$~G is expected~\cite{ST}. By comparing with the observational X-ray data, it was argued that the interior magnetic field strength can be as large as $10^{19}$~G~\cite{YZ}. Also, it was shown in the recent study~\cite{FIKPS} that in the core of a magnetar the magnetic field strength could reach values up to $10^{20}$~G, if one assumes an inhomogeneous distribution of the matter density and magnetic field inside a neutron star, or allows the formation of a quark core in the high-density interior of a neutron star (concerning the last point, see also Ref.~\cite{T}). Under such circumstances, if one admits interior magnetic fields with strengths $H>10^{18}$~G, a different scenario is possible in which a field-induced ferromagnetic phase transition of neutron spins occurs in the magnetar core.
This idea was investigated in the recent article~\cite{BRM}, where it was shown, within the framework of a lowest constrained variational approach with the Argonne $V_{18}$ NN potential, that a fully spin polarized state in neutron matter could be formed in the magnetic field $H\gtrsim 10^{19}$~G. Note, however, that, as was pointed out in Refs.~\cite{FIKPS,Kh}, in such ultrastrong magnetic fields the breaking of the ${\cal O}(3)$ rotational symmetry by the magnetic field results in an anisotropy of the total pressure, which is smaller parallel than perpendicular to the field direction. The possible outcome could be the gravitational collapse of a magnetar along the magnetic field, if the magnetic field strength is large enough. Thus, in exploring the possibility of a field-induced ferromagnetic phase transition in neutron matter in a strong magnetic field, the effect of the pressure anisotropy has to be taken into account, because this kind of instability could prevent the formation of a fully polarized state in neutron matter. This effect was not considered in Ref.~\cite{BRM}, thus leaving open the possibility of the formation of a fully polarized state of neutron spins in a strong magnetic field. The degree of spin polarization is an important issue for determining the neutrino cross sections in the matter, and, hence, it is relevant for the adequate description of the neutrino transport and thermal evolution of a neutron star~\cite{RPLP}. In the given study, we provide a fully self-consistent calculation of the thermodynamic quantities of spin polarized neutron matter at finite temperature, taking into account the appearance of the pressure anisotropy in a strong magnetic field. We consider spin polarization phenomena in a degenerate magnetized system of strongly interacting neutrons within the framework of a Fermi liquid formalism~\cite{AKPY,AIP,AIPY,IY3}, unlike the previous works~\cite{FIKPS,Kh}, where interparticle interactions were disregarded.
Note that recently new parametrizations of Skyrme forces were suggested, BSk19-BSk21~\cite{GCP}, aimed at avoiding the spontaneous spin instability of nuclear matter at densities beyond the nuclear saturation density for the case of zero temperature. This is achieved by adding different density-dependent terms to the standard Skyrme interaction. The BSk19 parametrization was constrained to reproduce the equation of state (EoS) of nonpolarized neutron matter~\cite{FP} obtained in a variational calculation with the use of the realistic Urbana $v_{14}$ NN potential and the three-body force called there TNI. The BSk20 force corresponds to the stiffer EoS~\cite{APR}, obtained in a variational calculation with the use of the realistic Argonne $V_{18}$ two-body potential and the semiphenomenological UIX$^*$ three-body force, which includes also a relativistic boost correction. An even stiffer neutron matter EoS was suggested in the Brueckner-Hartree-Fock calculation of Ref.~\cite{LS}, based on the same $V_{18}$ two-body potential and a more realistic three-body force containing different meson-exchange contributions. This EoS is the underlying one for the BSk21 Skyrme interaction. The advantage of all of these newly developed Skyrme forces is that they preserve the high-quality fits to the mass data obtained with the conventional Skyrme forces. An important quantity allowing one to distinguish between the different representatives of a generalized Skyrme interaction is the symmetry energy, defined as the difference between the energies per nucleon in neutron matter and symmetric nuclear matter (an alternative definition of the symmetry energy is also discussed in Ref.~\cite{GCP}). In the high density region, the symmetry energy decreases with density for the BSk19 force, while it increases with density for the BSk20 (moderately) and BSk21 (steeply) forces.
As was clarified in Ref.~\cite{SMK} by testing almost 90 parametrizations of the conventional Skyrme forces, the Skyrme interactions predicting an increasing behavior of the symmetry energy with density give neutron star models in broad agreement with observations (e.g., providing a satisfactory description of the minimum rotation period, the gravitational mass-radius relation, and the binding energy released in supernova collapse). Based on these arguments, we consider the scenario in which the symmetry energy increases with density in the high density region as the more realistic one, and in this study we choose the BSk20 Skyrme parametrization for carrying out the numerical calculations. Nevertheless, as emphasized in Ref.~\cite{GCP}, only direct experimental evidence related to high densities will allow one to ultimately decide which of the BSk19-BSk21 parametrizations of a generalized Skyrme interaction is more appropriate for the description of neutron-rich nuclear systems of astrophysical interest. At this point, it is worth noting that we consider thermodynamic properties of spin polarized states in neutron matter in a strong magnetic field up to the high density region relevant for astrophysics. Nevertheless, we take into account the nucleon degrees of freedom only, although other degrees of freedom, such as pions, hyperons, kaons, or even quarks, could be important at such high densities. \section{Basic equations} The normal (nonsuperfluid) states of neutron matter are described by the normal distribution function of neutrons $f_{\kappa_1\kappa_2}=\mbox{Tr}\,\varrho a^+_{\kappa_2}a_{\kappa_1}$, where $\kappa\equiv({\bf{p}},\sigma)$, ${\bf p}$ is the momentum, $\sigma$ is the projection of spin on the third axis, and $\varrho$ is the density matrix of the system~\cite{I,IY,I06}.
The energy of the system is specified as a functional of the distribution function $f$, $E=E(f)$, and determines the single particle energy \begin{eqnarray} \varepsilon_{\kappa_1\kappa_2}(f)=\frac{\partial E(f)}{\partial f_{\kappa_2\kappa_1}}. \label{1} \end{eqnarray} The self-consistent matrix equation for determining the distribution function $f$ follows from the minimum condition of the thermodynamic potential~\cite{AKPY,AIP} and reads \begin{align}\label{2} f&=\left\{\mbox{exp}(Y_0\varepsilon+Y_i\cdot \mu_n\sigma_i+ Y_4)+1\right\}^{-1}\\ &\equiv \left\{\mbox{exp}(Y_0\xi)+1\right\}^{-1}.\nonumber \end{align} Here the quantities $\varepsilon, Y_i$ and $Y_4$ are matrices in the space of $\kappa$ variables, with $\bigl(Y_{i,4}\bigr)_{\kappa_1\kappa_2}=Y_{i,4}\delta_{\kappa_1\kappa_2}$; $Y_0=1/T$, $Y_i=-H_i/T$ and $ Y_{4}=-\mu_0/T$ are the Lagrange multipliers, $\mu_0$ being the chemical potential of neutrons and $T$ the temperature. In Eq.~\p{2}, $\mu_n=-1.9130427(5)\mu_N$ is the neutron magnetic moment~\cite{A} ($\mu_N$ being the nuclear magneton), and $\sigma_i$ are the Pauli matrices. Note that, unlike in Refs.~\cite{IY4,IY10}, the term with the external magnetic field $\bf H$ is not included in the single particle energy $\varepsilon$ but is introduced separately in the exponent of the Fermi distribution~\p{2}. In the following it will be assumed that the third axis is directed along the external magnetic field $\bf{H}$. Given the possibility of alignment of neutron spins along or opposite to the magnetic field $\bf H$, the normal distribution function of neutrons and the single particle energy $\varepsilon$ can be expanded in the Pauli matrices $\sigma_i$ in spin space \begin{align} f({\bf p})&= f_{0}({\bf p})\sigma_0+f_{3}({\bf p})\sigma_3,\label{7.2}\\ \varepsilon({\bf p})&= \varepsilon_{0}({\bf p})\sigma_0+\varepsilon_{3}({\bf p})\sigma_3.
\nonumber \end{align} Using Eqs.~\p{2} and \p{7.2}, one can explicitly express the distribution functions $f_{0},f_{3}$ in terms of the quantities $\varepsilon$: \begin{align} f_{0}&=\frac{1}{2}\{n(\omega_{+})+n(\omega_{-}) \},\label{2.4} \\ f_{3}&=\frac{1}{2}\{n(\omega_{+})-n(\omega_{-})\}.\label{2.5} \end{align} Here $n(\omega)=\{\exp(Y_0\omega)+1\}^{-1}$ and \bal \omega_{\pm}&=\xi_{0}\pm\xi_{3},\label{omega}\\ \xi_{0}&=\varepsilon_{0}-\mu_{0},\; \xi_{3}=-\mu_nH+\varepsilon_{3}.\nonumber\end{align} The quantities $\omega_{\pm}$, which appear in the exponent of the Fermi distribution function $n$, play the role of the quasiparticle spectrum. The branches $\omega_{\pm}$ correspond to neutrons with spin up and spin down, respectively. The distribution functions $f$ satisfy the normalization conditions \begin{align} \frac{2}{\cal V}\sum_{\bf p}f_{0}({\bf p})&=\varrho,\label{3.1}\\ \frac{2}{\cal V}\sum_{\bf p}f_{3}({\bf p})&=\varrho_\uparrow-\varrho_\downarrow\equiv\Delta\varrho.\label{3.2} \end{align} Here $\varrho=\varrho_{\uparrow}+\varrho_{\downarrow}$ is the total density of neutron matter, and $\varrho_{\uparrow}$ and $\varrho_{\downarrow}$ are the neutron number densities with spin up and spin down, respectively. The quantity $\Delta\varrho$ may be regarded as the neutron spin order parameter, which determines the magnetization of the system $M=\mu_n \Delta\varrho$. The spin ordering of neutrons can also be characterized by the spin polarization parameter \begin{align*} \Pi=\frac{\Delta\varrho}{\varrho}. \end{align*} The magnetization may contribute to the internal magnetic field $\tbf{B}=\tbf{H}+4\pi \tbf{M}$.
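The relations~\p{2.4}--\p{omega} are straightforward to evaluate numerically. The following minimal Python sketch (the input values are purely illustrative and are not taken from the present calculation) builds the scalar and vector parts $f_0$, $f_3$ of the distribution function from the quasiparticle branches $\omega_{\pm}=\xi_0\pm\xi_3$:

```python
import math

def fermi(omega, T):
    # Fermi distribution n(omega) = 1 / (exp(omega/T) + 1)
    return 1.0 / (math.exp(omega / T) + 1.0)

def f0_f3(xi0, xi3, T):
    """Scalar (f0) and vector (f3) parts of the distribution function,
    built from the spin-up/spin-down branches omega_pm = xi0 +/- xi3."""
    n_plus = fermi(xi0 + xi3, T)    # omega_+ branch (spin up)
    n_minus = fermi(xi0 - xi3, T)   # omega_- branch (spin down)
    return 0.5 * (n_plus + n_minus), 0.5 * (n_plus - n_minus)

# Illustrative values (MeV): xi0 = -5 below the chemical potential,
# xi3 = -2 mimicking the Zeeman-like shift -mu_n H + eps_3.
f0, f3 = f0_f3(-5.0, -2.0, 10.0)
```

For $\xi_3=0$ the two branches coincide and $f_3$ vanishes, i.e., the polarization order parameter $\Delta\varrho$ is driven entirely by the splitting $\xi_3$.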
However, we will assume, analogously to previous studies~\cite{PG,BPL,IY4}, that, because of the tiny value of the neutron magnetic moment, the contribution of the magnetization to the inner magnetic field $\bf{B}$ remains small for all relevant densities and magnetic field strengths, and, hence, \bal \bf{B}\approx \bf{H}.\label{approx}\end{align} In order to get the self-consistent equations for the components of the single particle energy, one has to specify the energy functional of the system. It represents the sum of the matter and field energy contributions \begin{equation}\label{en} E(f,H)=E_m(f)+E_f(H),\;E_f(H)=\frac{H^2}{8\pi}{\cal V}. \end{equation} The matter energy is the sum of the kinetic and Fermi-liquid interaction energy terms~\cite{IY,I06} \begin{align} E_m(f)&=E_0(f)+E_{int}(f),\label{enfunc} \\ {E}_0(f)&=2\sum\limits_{ \bf p}^{} \underline{\varepsilon}_{\,0}({\bf p})f_{0}({\bf p}),\nonumber \\ {E}_{int}(f)&=\sum\limits_{ \bf p}^{}\{ \tilde\varepsilon_{0}({\bf p})f_{0}({\bf p})+ \tilde\varepsilon_{3}({\bf p})f_{3}({\bf p})\},\nonumber \end{align} where \begin{align}\tilde\varepsilon_{0}({\bf p})&=\frac{1}{2\cal V}\sum_{\bf q}U_0^n({\bf k})f_{0}({\bf q}),\;{\bf k}=\frac{{\bf p}-{\bf q}}{2}, \label{flenergies}\\ \tilde\varepsilon_{3}({\bf p})&=\frac{1}{2\cal V}\sum_{\bf q}U_1^n({\bf k})f_{3}({\bf q}). \end{align} Here $\underline\varepsilon_{\,0}({\bf p})=\frac{{\bf p}^{\,2}}{2m_{0}}$ is the free single particle spectrum, $m_0$ is the bare mass of a neutron, $U_0^n({\bf k}), U_1^n({\bf k})$ are the normal Fermi liquid (FL) amplitudes, and $\tilde\varepsilon_{0},\tilde\varepsilon_{3}$ are the FL corrections to the free single particle spectrum.
Using Eqs.~\p{1} and \p{enfunc}, we get the self-consistent equations for the components of the single particle energy in the form \bal\xi_{0}({\bf p})&=\underline{\varepsilon}_{\,0}({\bf p})+\tilde\varepsilon_{0}({\bf p})-\mu_0,\; \xi_{3}({\bf p})=-\mu_nH+\tilde\varepsilon_{3}({\bf p}).\label{14.2} \end{align} Taking into account expressions~\p{2.4} and \p{2.5} for the distribution functions $f_0$ and $f_3$, the solutions of the self-consistent Eqs.~\p{14.2} should be found jointly with the normalization conditions~\p{3.1}, \p{3.2}. The pressures (longitudinal and transverse with respect to the direction of the magnetic field) in the system are related to the diagonal elements of the stress tensor, whose explicit expression reads~\cite{LLP} \begin{equation}\label{sigma} \sigma_{ik}=\biggl[\tilde{ \mathfrak{f}}-\varrho\biggl(\frac{\partial \tilde{ \mathfrak{f}}}{\partial \varrho}\biggr)_{{\bf H},T}\biggr]\delta_{ik}+\frac{H_iB_k}{4\pi}. \end{equation} Here \begin{equation}\label{Ft} \tilde{ \mathfrak{f}}=\mathfrak{f}_H-\frac{H^2}{4\pi}, \end{equation} where $\mathfrak{f}_H=\frac{1}{\cal V}(E-TS)-\tbf{HM}$ is the Helmholtz free energy density, and the entropy $S$ is given by the formula \begin{eqnarray} S&=&-\sum_{\bf p}\,\sum_{\sigma=+,\,-}\{n(\omega_{\sigma})\ln n(\omega_{\sigma})\label{entr}\\ &&+\bar n(\omega_{\sigma})\ln \bar n(\omega_{\sigma})\}, \;\bar n(\omega)=1-n(\omega).\nonumber \end{eqnarray} For an isotropic medium, the stress tensor~\p{sigma} is symmetric. The transverse $p_{t}$ and longitudinal $p_{l}$ pressures are determined from the formulas \begin{equation*} p_{t}=-\sigma_{11}=-\sigma_{22},\; p_{l}=-\sigma_{33}.
\end{equation*} Hence, using Eqs.~\p{en}, \p{sigma}, one can get \begin{align}\label{press} p_t&=\varrho\Bigl(\frac{\partial f_m}{\partial \varrho}\Bigr)_{H,T}-f_m+\frac{H^2}{8\pi},\\ p_l&=\varrho\Bigl(\frac{\partial f_m}{\partial \varrho}\Bigr)_{H,T}-f_m-\frac{H^2}{8\pi},\label{p_l} \end{align} where $f_m=\frac{1}{\cal V}(E_m-TS)$ is the matter free energy density, and in Eqs.~\p{press}, \p{p_l} we have disregarded the terms proportional to $M$. The structures of the pressures $p_t$ and $p_l$ are different, which reflects the breaking of the rotational symmetry in the magnetic field. In ultrastrong magnetic fields, the term quadratic in the magnetic field (the Maxwell term) dominates, increasing the transverse pressure and decreasing the longitudinal one. Hence, at some critical magnetic field, the longitudinal pressure vanishes, resulting in the longitudinal instability of neutron matter. Obviously, at finite temperature the pressures $p_t$ and $p_l$ are larger than in the zero temperature case, and, hence, an increase of the temperature leads to an increase of the critical magnetic field. Here we would like to find the magnitude of the critical field at temperatures of about a few tens of MeV, which can be relevant for protoneutron stars, and also to determine the corresponding maximum degree of spin polarization in neutron matter. \section{Spin polarization at $H=0, T\not=0$} \label{spinpol} For the numerical calculations, we use the BSk20 Skyrme interaction~\cite{GCP} developed to reproduce the zero temperature microscopic EoS of nonpolarized neutron matter~\cite{APR}. Although spontaneous spin polarization at zero temperature is absent for this parametrization at all relevant densities, it is not excluded that a spontaneous ferromagnetic phase transition could occur at finite temperature. As will be shown later, this is actually the case.
In the model calculations of this section we consider temperatures somewhat higher than those expected to be reachable in the interiors of protoneutron stars~\cite{MP}. This will help us to find the critical temperature above which spontaneous polarization appears, and will also allow us to determine the relevant temperature range for studying spin polarization at $H\not=0$. The recently developed parametrizations BSk19-BSk21 of the Skyrme effective forces are a generalization of the Skyrme effective NN interaction of the conventional form. In the conventional case, the amplitude of the Skyrme NN interaction reads~\cite{VB} \bal\hat v({\bf p},{\bf q})&=t_0(1+x_0P_\sigma)+\frac{1}{6}t_3(1+x_3P_\sigma)\varrho^\alpha \label{49}\\&+\frac{1}{2\hbar^2} t_1(1+x_1P_\sigma)({\bf p}^2+{\bf q}^2) +\frac{t_2}{\hbar^2}(1+x_2P_\sigma){\bf p}{\bf q},\nonumber\end{align} where $P_\sigma=(1+{{\boldsymbol\sigma_1\boldsymbol\sigma_2}})/2$ is the spin exchange operator, and $t_i, x_i$ and $\alpha$ are phenomenological parameters specifying a given parametrization of the Skyrme interaction. The Skyrme interaction used in Ref.~\cite{GCP} has the form \bal \hat v'({\bf p},{\bf q})&=\hat v({\bf p},{\bf q})+ \frac{\varrho^\beta}{2\hbar^2} t_4(1+x_4P_\sigma)({\bf p}^2+{\bf q}^2)\label{comSk}\\&\quad+\frac{\varrho^\gamma}{\hbar^2}t_5(1+x_5P_\sigma){\bf p}{\bf q}.\nonumber\end{align} In Eq.~\p{comSk}, the two additional terms are density-dependent generalizations of the $t_1$ and $t_2$ terms of the usual form. Specific values of the parameters $t_i, x_i, \alpha, \beta$ and $\gamma$ for the Skyrme forces BSk19-BSk21 are given in Table~\ref{tab1}~\cite{GCP}. The normal FL amplitudes $U_0,U_1$ can be expressed in terms of the Skyrme force parameters. For conventional Skyrme force parametrizations, their explicit expressions are given in Refs.~\cite{AIP,IY3}.
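These expressions are easy to transcribe into code. The sketch below evaluates the normal FL amplitudes of a conventional Skyrme force in pure neutron matter (with $\hbar=1$ in the ${\bf k}^2$ terms); it is an assumption-level rendering for illustration, and the parameter values used in the example are simply the BSk20 central values with the density-dependent $t_4$, $t_5$ terms dropped:

```python
def fl_amplitudes_conventional(k2, rho, t0, t1, t2, t3, x0, x1, x2, x3, alpha):
    """Normal FL amplitudes U0^n(k^2), U1^n(k^2) of a conventional Skyrme
    force in pure neutron matter (hbar = 1 in the k^2 terms)."""
    c = 2.0 * t0 * (1 - x0) + (t3 / 3.0) * rho**alpha * (1 - x3)
    a = t1 * (1 - x1)          # even (t1-type) combination
    b = t2 * (1 + x2)          # odd  (t2-type) combination
    u0 = c + 2.0 * (a + 3.0 * b) * k2     # density channel
    u1 = -c + 2.0 * (b - a) * k2          # spin channel
    return u0, u1
```

Note that the momentum-independent parts of $U_0^n$ and $U_1^n$ are equal and opposite, a property that carries over to the generalized force below.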
As follows from Eqs.~\p{49} and \p{comSk}, in order to obtain the corresponding expressions for the generalized Skyrme interaction~\p{comSk}, one should use the substitutions \begin{align} t_1\rightarrow t_1+t_4\varrho^\beta,\; t_1x_1\rightarrow t_1x_1+t_4x_4\varrho^\beta,\\ t_2\rightarrow t_2+t_5\varrho^\gamma,\; t_2x_2\rightarrow t_2x_2+t_5x_5\varrho^\gamma.\end{align} Therefore, the FL amplitudes are related to the parameters of the Skyrme interaction~\p{comSk} by formulas~\cite{IY10a} \bal U_0^n({\bf k})&=2t_0(1-x_0)+\frac{t_3}{3}\varrho^\alpha(1-x_3) +\frac{2}{\hbar^2}[t_1(1-x_1)\label{101}\\ +t_4(1-x_4)\varrho^\beta+3t_2(1+x_2)+ 3t_5(1+x_5)\varrho^\gamma]{\bf k}^{2}, \nonumber\\ U_1^n({\bf k})&=-2t_0(1-x_0)-\frac{t_3}{3}\varrho^\alpha(1-x_3)+\frac{2}{\hbar^2}[t_2(1+x_2) \label{102}\\&\quad +t_5(1+x_5)\varrho^\gamma-t_1(1-x_1)- t_4(1-x_4)\varrho^\beta]{\bf k}^{2}.\nonumber \end{align} \begin{table} \centering \caption{The parameters of the BSk19-BSk21 Skyrme forces, according to Ref.~\cite{GCP}. The value of the nuclear saturation density $\varrho_0$ is shown in the bottom line.
} \label{tab1} \begin{ruledtabular} \begin{tabular}{lccc} &BSk19&BSk20&BSk21\\ \hline $t_0$ { [MeV fm$^3$]}&-4115.21 &-4056.04&-3961.39 \\ $t_1$ { [MeV fm$^5$]}&403.072 & 438.219&396.131 \\ $t_2$ { [MeV fm$^5$]}&0 &0&0 \\ $t_3$ { [MeV fm$^{3+3\alpha}$]}&23670.4&23256.6&22588.2 \\ $t_4$ { [MeV fm$^{5+3\beta}$]}&-60.0 &-100.000 &-100.000 \\ $t_5$ { [MeV fm$^{5+3\gamma}$]}&-90.0&-120.000&-150.000 \\ $x_0$ &0.398848 & 0.569613 &0.885231 \\ $x_1$ &-0.137960 &-0.392047 &0.0648452 \\ $t_2x_2$ { [MeV fm$^5$]}&-1055.55 &-1147.64 &-1390.38 \\ $x_3$ &0.375201 &0.614276&1.03928 \\ $x_4$ &-6.0 &-3.00000 &2.00000 \\ $x_5$ &-13.0 &-11.0000 &-11.0000 \\ $\alpha$ &1/12 &1/12 &1/12 \\ $\beta$ &1/3 &1/6&1/2 \\ $\gamma$ &1/12 &1/12&1/12 \\ $\varrho_0$ { [1/fm$^{3}$]} &0.1596 &0.1596&0.1582 \\ \end{tabular} \end{ruledtabular} \end{table} Now we present the results of the numerical solution of the self-consistent equations at $H=0$ with the BSk20 Skyrme force. Fig.~\ref{fig1} shows the spin polarization parameter of neutron matter as a function of the density for a few fixed temperatures of several tens of MeV. At zero temperature, there is no spontaneous polarization at any relevant density, because the two additional terms in the generalized form~\p{comSk} of the Skyrme interaction were constrained precisely with the aim of excluding nonzero polarization at vanishing temperature. Spontaneous polarization does not appear below some critical temperature $T_c$, which is larger than $35\,\textrm{MeV}$. Above $T_c$, spontaneous spin polarization exists in a finite density interval $(\varrho_{c_1},\varrho_{c_2})$. The unexpected point is that temperature promotes spontaneous spin polarization, increasing both the width of the density domain where nonzero polarization exists and the magnitude of the spin polarization parameter.
In particular, when the density interval $(\varrho_{c_1},\varrho_{c_2})$ is approached from below, the left critical point $\varrho_{c_1}$, at which spontaneous polarization appears, decreases with temperature, contrary to the intuition that temperature should act against spin polarization and, hence, delay its appearance. Analogously, intuition suggests that with increasing temperature the right critical point $\varrho_{c_2}$ for the disappearance of spontaneous polarization should decrease, contrary to what actually occurs: the critical density $\varrho_{c_2}$ increases with temperature. \begin{figure}[tb] \begin{center} \includegraphics[width=8.6cm,keepaspectratio]{fig1.eps} \end{center} \vspace{-2ex} \caption{Neutron spin polarization parameter as a function of the density for the BSk20 Skyrme force at $H=0$ and several fixed values of the temperature. } \label{fig1}\vspace{-0ex} \end{figure} In order to clarify whether a spontaneously spin polarized state is thermodynamically preferable over the nonpolarized state, one should compare the corresponding free energies. Fig.~\ref{fig2} shows the difference between the free energies per neutron of the spin polarized and nonpolarized states, $\delta F/A=(F(\varrho,T,\Pi(\varrho,T))-F(\varrho,T,\Pi=0))/A$, as a function of the density at the same fixed temperatures considered above. It is seen that a spontaneously polarized state is preferable over the nonpolarized state at all relevant densities and temperatures where spontaneous polarization exists. With increasing temperature, the minimum of the difference $\delta F/A$ becomes more pronounced, and, hence, a spontaneously polarized state becomes more stable with respect to the nonpolarized one. Thus, the state with spontaneous polarization, described by a spin polarization parameter with such unusual properties (cf. Fig.~\ref{fig1}), is supported thermodynamically by the balance of the free energies.
\begin{figure}[tb] \begin{center} \includegraphics[width=8.6cm,keepaspectratio]{fig2.eps} \end{center} \vspace{-2ex} \caption{ The difference between the free energies per neutron of spin polarized and nonpolarized states as a function of the density at $H=0$ and several fixed values of the temperature for the BSk20 Skyrme force. The difference is shown only for the density domains where spontaneous polarization exists. } \label{fig2}\vspace{-0ex} \end{figure} In order to get a deeper insight into the problem, let us consider the separate contributions to the difference between the free energies per neutron, $\delta F/A=\delta E/A-T\delta S/A$. Fig.~\ref{fig3} shows the difference between the energies per neutron of spin polarized and nonpolarized states, $\delta E/A=(E(\varrho,T,\Pi(\varrho,T))-E(\varrho,T,\Pi=0))/A$, as a function of the density at the same fixed temperatures considered above. It is seen that the energy per neutron of a spin polarized state is always larger than that of the nonpolarized state in the density domain where spontaneous spin polarization exists. This is because an increase of the temperature and of the spin polarization increases the kinetic energy term in the energy functional of the system. The sign of the difference $\delta E/A$ could, in principle, be inverted by the negative contribution of the term in the energy functional~\p{enfunc} describing spin correlations in neutron matter with nonzero polarization; this is, however, not the case. Therefore, the inequality $\delta F/A<0$ can hold only because of the inequality $\delta S/A>0$ in the density range where spontaneous polarization exists. Fig.~\ref{fig4} shows that this is actually true: the entropy per neutron of a spin polarized state is larger than that of the nonpolarized state at the corresponding temperatures and densities.
This unexpected behavior contradicts the intuition that the entropy of a more ordered, spin polarized state should be less than that of the nonpolarized state. Note that such an unusual behavior of the entropy of a spin polarized state was found earlier for neutron matter with the Skyrme effective interaction~\cite{RPV} and for symmetric nuclear matter with the Gogny effective interaction~\cite{IY2,I07} (in the latter case, for antiferromagnetically ordered nucleon spins). The difference, however, is that in these earlier studies the instability with respect to spontaneous spin ordering occurred already at zero temperature, whereas in the given case it appears only at temperatures larger than the critical one. Also, it was clarified earlier~\cite{RPV,I07} that the unusual behavior of the entropy of a spin polarized state should be traced back to its dependence on the effective masses of spin-up and spin-down nucleons and to a violation of a certain constraint on them at the corresponding temperatures and densities. In Ref.~\cite{RPV}, this constraint was formulated for totally polarized neutron matter, and in Ref.~\cite{I07} for symmetric nuclear matter with arbitrary antiferromagnetic spin polarization. \begin{figure}[tb] \begin{center} \includegraphics[width=8.6cm,keepaspectratio]{fig3.eps} \end{center} \vspace{-2ex} \caption{ Same as in Fig.~\ref{fig2} but for the difference between the energies per neutron of spin polarized and nonpolarized states. } \label{fig3}\vspace{-0ex} \end{figure} Let us verify now whether this holds true in our case. In the low-temperature limit the entropy per neutron is given by the expression \begin{equation}S/A=\frac{\pi^2}{2}\,T\sum_{\sigma=+,\,-}\frac{\varrho_\sigma}{\varrho}\, \frac{1}{\varepsilon_{F\sigma}}, \label{lowlim}\end{equation} where $\varepsilon_{F\sigma}=\frac{\hbar^2k_{F\sigma}^2}{2m_\sigma}$ is the Fermi energy of neutrons with spin up or spin down, and $k_{F\sigma}=(6\pi^2\varrho_\sigma)^{1/3}$ is the respective Fermi momentum.
The low-temperature expansion~\p{lowlim} is valid as long as $T/\varepsilon_{F\sigma}\ll 1$. By requiring the difference between the entropies of the spin polarized and nonpolarized states to be negative, one can derive the following constraint on the effective masses $m_{n\uparrow}$ and $m_{n\downarrow}$ of neutrons with spin up and spin down in a spin polarized state~\cite{IY10}: \begin{equation}D\equiv \frac{m_{n\uparrow}}{m_n}(1+\Pi)^\frac{1}{3}+ \frac{m_{n\downarrow}}{m_n}(1-\Pi)^\frac{1}{3}-2<0,\label{lowtemD}\end{equation} where \begin{align} \frac{\hbar^2}{2m_{n\uparrow(\downarrow)}}&=\frac{\hbar^2}{2m_0} +\frac{\varrho_{\uparrow(\downarrow)}}{2} [t_2(1+x_2)+t_5(1+x_5)\varrho^\gamma]\label{m_ud}\\&\quad+\frac{\varrho_{\downarrow(\uparrow)}} {4}[t_1(1-x_1)+t_4(1-x_4)\varrho^\beta\nonumber\\ &\quad+t_2(1+x_2)+t_5(1+x_5)\varrho^\gamma].\nonumber \end{align} In the constraint~\p{lowtemD}, the effective mass $m_n$ of a neutron in nonpolarized neutron matter is given by~\cite{IY10a} \begin{align} \frac{\hbar^2}{2m_{n}}&=\frac{\hbar^2}{2m_0}+\frac{\varrho}{8} [t_1(1-x_1)+t_4(1-x_4)\varrho^\beta\label{mn}\\&\quad+3t_2(1+x_2)+3t_5(1+x_5)\varrho^\gamma]. \nonumber\end{align} \begin{figure}[tb] \begin{center} \includegraphics[width=8.6cm,keepaspectratio]{fig4.eps} \end{center} \vspace{-2ex} \caption{ Same as in Fig.~\ref{fig2} but for the difference between the entropies per neutron of spin polarized and nonpolarized states. } \label{fig4}\vspace{-0ex} \end{figure} After the self-consistent determination of the spin polarization parameter, one can check whether the inequality~\p{lowtemD} is satisfied at the corresponding densities and temperatures. Fig.~\ref{fig5} shows the left-hand side $D$ of the constraint~\p{lowtemD} for the branch $\Pi(\varrho,T)$ of spontaneous polarization as a function of the density at the temperatures $T=37$~MeV and $T=40$~MeV, at which the accuracy of the approximation $T/\varepsilon_{F\sigma}\ll 1$ is still satisfactory.
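The effective masses~\p{m_ud}, \p{mn} and the constraint~\p{lowtemD} are easy to probe numerically. The sketch below is a minimal transcription for the BSk20 force (parameter values from the table; $\hbar^2/2m_0\approx20.7$~MeV\,fm$^2$ is a standard constant taken as an input, and the chosen $\varrho$, $\Pi$ values are purely illustrative, not the self-consistent solutions):

```python
# BSk20 parameters from the table (MeV fm^n units);
# hbar^2/(2 m_0) ~ 20.7 MeV fm^2 for the bare neutron mass.
HBAR2_2M0 = 20.72
BSK20 = dict(t1=438.219, t2=0.0, t4=-100.0, t5=-120.0,
             x1=-0.392047, t2x2=-1147.64, x4=-3.0, x5=-11.0,
             beta=1/6, gamma=1/12)

def inv_mass_terms(p, rho):
    """Combinations t1(1-x1)+t4(1-x4)rho^beta and t2(1+x2)+t5(1+x5)rho^gamma;
    t2 x2 enters as the single tabulated parameter t2x2 (t2 = 0 for BSk20)."""
    a = p['t1'] * (1 - p['x1']) + p['t4'] * (1 - p['x4']) * rho**p['beta']
    b = p['t2'] + p['t2x2'] + p['t5'] * (1 + p['x5']) * rho**p['gamma']
    return a, b

def effective_masses(rho, Pi, p=BSK20):
    """Ratios m_{n up}/m_0, m_{n down}/m_0 and m_n/m_0 at polarization Pi."""
    rho_up, rho_dn = 0.5 * rho * (1 + Pi), 0.5 * rho * (1 - Pi)
    a, b = inv_mass_terms(p, rho)
    inv_up = HBAR2_2M0 + 0.5 * rho_up * b + 0.25 * rho_dn * (a + b)
    inv_dn = HBAR2_2M0 + 0.5 * rho_dn * b + 0.25 * rho_up * (a + b)
    inv_n = HBAR2_2M0 + 0.125 * rho * (a + 3.0 * b)
    return HBAR2_2M0 / inv_up, HBAR2_2M0 / inv_dn, HBAR2_2M0 / inv_n

def entropy_constraint_D(rho, Pi, p=BSK20):
    """LHS of the low-temperature constraint: D < 0 means the polarized
    state has the lower entropy (the 'usual' ordering)."""
    m_up, m_dn, m_n = effective_masses(rho, Pi, p)
    return (m_up / m_n) * (1 + Pi)**(1/3) + (m_dn / m_n) * (1 - Pi)**(1/3) - 2.0
```

Note that for equal effective masses the concavity of $x^{1/3}$ alone would give $D<0$ for any $\Pi\neq0$; a violation of the constraint therefore requires a sufficiently strong splitting of the spin-up and spin-down effective masses, in line with Fig.~\ref{fig5a}.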
It is seen that the inequality~\p{lowtemD} is violated, implying that the entropy of a spontaneously polarized state is larger than the entropy of the nonpolarized state at the respective densities and temperatures. Hence, the unusual behavior of the entropy of a spontaneously polarized state mentioned above can be related to the peculiarities of its dependence on the effective masses of neutrons with spin up and spin down. The nontrivial character of the density dependence of the effective masses $m_{n\uparrow}$ and $m_{n\downarrow}$ in neutron matter with spontaneous polarization at different temperatures is clearly seen from Fig.~\ref{fig5a}. In the subsequent analysis, following the scenario in which spontaneous polarization is avoided at the relevant densities and temperatures, we confine ourselves to temperatures up to 30 MeV, which are definitely less than the critical temperature $T_c\gtrsim 35$~MeV. Such a choice of the relevant temperature interval is consistent with the results of a completely independent study~\cite{MP} of hybrid stars in the context of relativistic mean-field theory, according to which the maximum temperature attainable in their interiors does not exceed 35~MeV. \begin{figure}[tb] \begin{center} \includegraphics[width=8.6cm,keepaspectratio]{fig5.eps} \end{center} \vspace{-2ex} \caption{ The difference $D$ in constraint~\p{lowtemD} for the branch $\Pi$ of spontaneous polarization as a function of density at $T=37\,\textrm{MeV}$ and $T=40\,\textrm{MeV}$ for the BSk20 Skyrme force.
} \label{fig5}\vspace{-0ex} \end{figure} \begin{figure}[tb] \begin{center} \includegraphics[width=8.6cm,keepaspectratio]{fig6.eps} \end{center} \vspace{-2ex} \caption{The ratio of the effective mass of a neutron with spin up (upper dashed curves) and spin down (lower dotted curves) in a spontaneously polarized state to the bare neutron mass as a function of density at $T=37\,\textrm{MeV}$ and $T=40\,\textrm{MeV}$ for the BSk20 Skyrme force. } \label{fig5a}\vspace{-0ex} \end{figure} \section{Longitudinal and transverse pressures at finite temperature. Anisotropic EoS} In this section, we study the influence of finite temperatures on the thermodynamic quantities of spin polarized neutron matter in an ultrastrong magnetic field. We take into account the effects of the pressure anisotropy and, in particular, clarify to what extent the critical magnetic field, at which the longitudinal instability occurs in magnetized neutron matter, increases due to the impact of finite temperatures. \begin{figure}[tb] \begin{center} \includegraphics[width=8.6cm,keepaspectratio]{fig7.eps} \end{center} \vspace{-2ex} \caption{Neutron spin polarization parameter as a function of the magnetic field strength for the BSk20 Skyrme force at $T=0$ and $T=30$~MeV, and at two fixed densities, $\varrho=3\varrho_0$ and $\varrho=4\varrho_0$. The vertical arrows indicate the maximum magnitude of spin polarization attainable at the given temperature and density; see further details in the text.} \label{fig6}\vspace{-0ex} \end{figure} First, we present the results of the numerical solution of the self-consistent equations. Fig.~\ref{fig6} shows the spin polarization parameter of neutron matter as a function of the magnetic field $H$ at two different temperatures, $T=0$ and $T=30$~MeV, and at two different values of the neutron matter density, $\varrho=3\varrho_0$ and $\varrho=4\varrho_0$, which can be relevant for the central regions of a magnetar.
With increasing density, the effect produced by the magnetic field on the spin polarization of neutron matter becomes smaller. It is seen that the impact of the magnetic field remains insignificant up to the field strength $H\sim10^{17}$~G. At the magnetic field $H=10^{18}$~G, usually considered as the maximum magnetic field strength in the core of a magnetar (according to a scalar virial theorem~\cite{LS91}), the magnitude of the spin polarization parameter does not exceed $45\%$ at $\varrho=3\varrho_0$ and $19\%$ at $\varrho=4\varrho_0$ (for the temperatures under consideration). However, the situation changes if larger magnetic fields are allowed: with a further increase of the magnetic field strength, the magnitude of the spin polarization parameter grows, and the spin polarization approaches its limiting value $\Pi=-1$, corresponding to a fully spin polarized state. For example, a fully polarized state is formed at $H\approx 1.3\cdot 10^{19}$~G for the temperature $T=0$~MeV and at $H\approx 2.3\cdot 10^{19}$~G for $T=30$~MeV at $\varrho=3\varrho_0$, i.e., certainly for magnetic fields $H\gtrsim10^{19}$~G. Note that we speak about a fully polarized state at finite temperature although some neutrons with spin up are always present at $T\not=0$. Nevertheless, their number can be made arbitrarily small by further increasing the magnetic field, and we consider a fully polarized state to be formed if the deviation from the limiting value $\Pi=-1$ is less than $10^{-4}$. With increasing temperature, the value of the magnetic field at which a fully polarized state occurs increases, as one could expect. However, practically up to magnetic fields of about $10^{19}$~G, the spin polarization demonstrates unusual behavior and increases with temperature. It will be shown below that this behavior is thermodynamically supported by the corresponding balance of the Helmholtz free energies.
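For orientation, the field dependence of $\Pi$ can be mimicked by a free neutron gas at $T=0$, in which the two Fermi energies are split by the Zeeman term $2|\mu_n|H$; it is the interaction (the FL amplitude $U_1^n$) that amplifies this response in the full calculation. The following sketch uses approximate standard constants and a simple bisection (an illustrative toy model, not the self-consistent scheme of the text):

```python
import math

HBAR2_2M = 20.72     # hbar^2/(2 m_n) in MeV fm^2 (approximate)
ABS_MU_N = 6.03e-18  # |mu_n| in MeV/G (approximate)

def polarization_free_gas(rho, H, iters=80):
    """T=0 polarization Pi of a free neutron gas: find Pi in (-1, 0] such
    that eps_F(down) - eps_F(up) = 2 |mu_n| H, with rho_sigma = rho(1+-Pi)/2.
    Assumes H is below the field of complete polarization."""
    def mismatch(Pi):
        # 6 pi^2 rho_sigma = 3 pi^2 rho (1 +- Pi); eps_F = (hbar k_F)^2/2m
        ef_up = HBAR2_2M * (3.0 * math.pi**2 * rho * (1.0 + Pi))**(2.0 / 3.0)
        ef_dn = HBAR2_2M * (3.0 * math.pi**2 * rho * (1.0 - Pi))**(2.0 / 3.0)
        return ef_dn - ef_up - 2.0 * ABS_MU_N * H
    lo, hi = -1.0 + 1e-12, 0.0
    if mismatch(hi) >= 0.0:      # H = 0: no Zeeman splitting, Pi = 0
        return 0.0
    for _ in range(iters):       # bisection on the monotone mismatch
        mid = 0.5 * (lo + hi)
        if mismatch(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

At $\varrho=3\varrho_0\approx0.48$~fm$^{-3}$ and $H=10^{18}$~G this toy model yields only $|\Pi|\sim0.07$, far below the value quoted above for the interacting system, which illustrates how strongly the interaction terms enhance the field-induced polarization.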
The meaning of the vertical arrows in Fig.~\ref{fig6} is explained later in the text. Now we should check whether a fully spin polarized state of neutrons in a strong magnetic field can indeed be formed, by calculating the anisotropic pressure in dense neutron matter. Fig.~\ref{fig7}a shows the pressures (longitudinal and transverse) in neutron matter as functions of the magnetic field $H$ at the same fixed temperatures and densities considered above. The upper branches of the branching curves correspond to the transverse pressure, the lower ones to the longitudinal pressure. First, it is clearly seen that up to some threshold magnetic field the difference between the transverse and longitudinal pressures is insignificant, which corresponds to the isotropic regime. Beyond this threshold field strength, the anisotropic regime holds, in which the transverse pressure increases with $H$ while the longitudinal pressure decreases. An increase of the temperature leads to an increase of both the transverse pressure $p_t$ and the longitudinal pressure $p_l$. An increase of the density has the same effect on the pressures $p_t$ and $p_l$ as an increase of the temperature. The most important feature is that the longitudinal pressure vanishes at some critical magnetic field $H_c$, marking the onset of the longitudinal instability in neutron matter. For example, $H_c\approx1.56\cdot 10^{18}$~G for $T=0$~MeV and $H_c\approx1.64\cdot 10^{18}$~G for $T=30$~MeV at $\varrho=3\varrho_0$, and $H_c\approx2.42\cdot 10^{18}$~G for $T=0$~MeV and $H_c\approx2.48\cdot 10^{18}$~G for $T=30$~MeV at $\varrho=4\varrho_0$. Hence, at finite temperatures relevant for protoneutron stars, the critical magnetic field is increased compared to the zero temperature case, but this increase is, in fact, insignificant. Even with the finite temperature effects taken into account, the critical field does not exceed $10^{19}$~G for the density range under consideration.
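These numbers can be rationalized by a simple estimate. If the matter contribution $p_m=\varrho(\partial f_m/\partial\varrho)_{H,T}-f_m$ is treated as field-independent (a simplifying assumption: in the full calculation it also changes with $H$ through the polarization), Eqs.~\p{press} and \p{p_l} give $H_c=\sqrt{8\pi p_m}$ in consistent Gaussian units; a sketch in arbitrary pressure units:

```python
import math

def pressures(p_matter, H):
    """Transverse and longitudinal pressures with the Maxwell term
    H^2/(8 pi); p_matter stands for rho*(d f_m/d rho) - f_m, treated here
    as field-independent (an approximation)."""
    maxwell = H**2 / (8.0 * math.pi)
    return p_matter + maxwell, p_matter - maxwell   # (p_t, p_l)

def critical_field(p_matter):
    # Longitudinal pressure vanishes at H_c = sqrt(8 pi p_m)
    return math.sqrt(8.0 * math.pi * p_matter)

Hc = critical_field(1.0)        # arbitrary units
p_t, p_l = pressures(1.0, Hc)   # p_l -> 0, p_t -> 2 * p_matter
```

Since $p_m$ grows with temperature and density, this estimate also makes plausible the mild growth of $H_c$ with $T$ and $\varrho$ found numerically above.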
\begin{figure}[tb] \begin{center} \includegraphics[width=8.6cm,keepaspectratio]{fig8ab.eps} \end{center} \vspace{-2ex} \caption{ Same as in Fig.~\ref{fig6} but for: (a) the pressures, longitudinal (descending branches) and transverse (ascending branches). (b) Same as in the top panel but for the normalized difference between the transverse and longitudinal pressures. The vertical arrows in the lower panel indicate the points corresponding to the onset of the longitudinal instability in neutron matter. } \label{fig7}\vspace{-0ex} \end{figure} The magnitude of the spin polarization parameter $\Pi$ also cannot exceed some limiting value corresponding to the critical field $H_c$. These maximum magnitudes of $\Pi$ are shown in Fig.~\ref{fig6} by the vertical arrows. In particular, $\Pi_c\approx-0.46$ for $T=0$~MeV and $\Pi_c\approx-0.58$ for $T=30$~MeV at $\varrho=3\varrho_0$, and $\Pi_c\approx-0.38$ for $T=0$~MeV and $\Pi_c\approx-0.41$ for $T=30$~MeV at $\varrho=4\varrho_0$. As can be inferred from these values, the appearance of the negative longitudinal pressure in an ultrastrong magnetic field prevents the formation of a fully polarized spin state in the core of a magnetar. Therefore, only the onset of a field-induced ferromagnetic phase transition, or its near vicinity, can be reached with increasing magnetic field strength in dense neutron matter at finite temperature. Complete spin polarization in the magnetar core is prevented by the appearance of a negative pressure along the direction of the magnetic field, contrary to the conclusion of Ref.~\cite{BRM}, where the pressure anisotropy in a strong magnetic field was disregarded. Fig.~\ref{fig7}b shows the difference between the transverse and longitudinal pressures normalized to the value of the pressure $p_0$ in the isotropic regime (which corresponds to the weak field limit with $p_l=p_t=p_0$): \begin{align*} \delta=\frac{p_{t}-p_{l}}{p_0}.
\end{align*} Applying the criterion $\delta\simeq 1$ for the transition from the isotropic regime to the anisotropic one, we find that the transition occurs at the threshold field $H_{th}\approx 1.15\cdot 10^{18}$~G for $T=0$~MeV and $H_{th}\approx1.22\cdot 10^{18}$~G for $T=30$~MeV at $\varrho=3\varrho_0$, and at $H_{th}\approx1.83\cdot 10^{18}$~G for $T=0$~MeV and $H_{th}\approx1.86\cdot 10^{18}$~G for $T=30$~MeV at $\varrho=4\varrho_0$. In all cases under consideration, the threshold field $H_{th}$ is greater than $10^{18}$~G, and, hence, the isotropic regime holds for fields up to $10^{18}$~G. For comparison, the threshold field for a relativistic dense gas of free charged fermions at zero temperature was found to be about $10^{17}$~G~\cite{FIKPS} (without including the anomalous magnetic moments of the fermions). For a degenerate gas of free neutrons at zero temperature, a model-dependent estimate gives $H_{th}\simeq4.5\cdot 10^{18}$~G~\cite{Kh} (including the neutron anomalous magnetic moment). The normalized splitting of the transverse and longitudinal pressures increases more rapidly with the magnetic field at the smaller density and/or the lower temperature. The vertical arrows in Fig.~\ref{fig7}b indicate the points corresponding to the onset of the longitudinal instability in neutron matter. Since the threshold field $H_{th}$ is less than the critical field $H_c$ for the appearance of the longitudinal instability, the anisotropic regime can be relevant for the core of a magnetar. The maximum allowable normalized splitting of the pressures, corresponding to the critical field $H_c$, is $\delta\sim 2$. If the anisotropic regime sets in, a neutron star acquires an oblate shape. Thus, as follows from the preceding discussion, in the anisotropic regime the pressure anisotropy plays an important role in determining the spin structure and configuration of a neutron star. 
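As a quick numerical illustration, the two field scales introduced above can be read off from sampled pressure curves. The sketch below uses a hand-made table of $(H, p_l, p_t)$ values that only mimics the qualitative trends of Fig.~\ref{fig7}a (it is not the BSk20 EoS); $H_{th}$ is located from the criterion $\delta\simeq1$ and $H_c$ from $p_l=0$ by linear interpolation.

```python
# Locate the threshold field (delta ~= 1) and the critical field (p_l = 0)
# from sampled pressure curves. The numbers below are illustrative only,
# loosely mimicking the trends of Fig. 7a; they are NOT the BSk20 EoS.

def crossing(hs, ys, target):
    """Linearly interpolate the field H at which y(H) crosses `target`."""
    for (h1, y1), (h2, y2) in zip(zip(hs, ys), zip(hs[1:], ys[1:])):
        if (y1 - target) * (y2 - target) <= 0 and y1 != y2:
            return h1 + (target - y1) * (h2 - h1) / (y2 - y1)
    return None

# H in units of 10^18 G; pressures in MeV fm^-3 (hypothetical samples)
H   = [0.5, 1.0, 1.5, 2.0, 2.5]
p_l = [58.0, 52.0, 38.0, 16.0, -14.0]    # longitudinal: decreases with H
p_t = [62.0, 78.0, 118.0, 180.0, 265.0]  # transverse: increases with H
p0  = 60.0                               # isotropic (weak-field) pressure

delta = [(pt - pl) / p0 for pl, pt in zip(p_l, p_t)]

H_th = crossing(H, delta, 1.0)  # onset of the anisotropic regime
H_c  = crossing(H, p_l, 0.0)    # onset of the longitudinal instability

print(f"H_th ~ {H_th:.2f}e18 G, H_c ~ {H_c:.2f}e18 G")
```

As in the text, the resulting $H_{th}$ lies below $H_c$, so a window of anisotropic but stable configurations exists between the two fields.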
\begin{figure}[tb] \begin{center} \includegraphics[width=8.6cm,keepaspectratio]{fig9ab.eps} \end{center} \vspace{-2ex} \caption{ (a) Same as in Fig.~\ref{fig6} but for the Helmholtz free energy density of the system. (b) Same as in Fig.~\ref{fig6} but for the ratio of the magnetic field energy density to the Helmholtz free energy density of the system. The meaning of the vertical arrows is the same as in Fig.~\ref{fig7}b. } \label{fig8}\vspace{-0ex} \end{figure} At the given thermodynamic variables $\varrho,T$ and $H$, the Helmholtz free energy is the relevant thermodynamic function, whose minimum determines the state of thermodynamic equilibrium. Fig.~\ref{fig8}a shows the Helmholtz free energy density of the system as a function of the magnetic field $H$ at two fixed temperatures, $T=0$ and $T=30$~MeV, and at two different densities, $\varrho=3\varrho_0$ and $\varrho=4\varrho_0$. It is seen that magnetic fields up to $H \sim10^{18}$~G have only a small effect on the Helmholtz free energy density $f_H$, but beyond this field strength the contribution of the magnetic field energy to the free energy $f_H$ rapidly increases with $H$. However, this increase is limited by the values of the critical magnetic field corresponding to the onset of the longitudinal instability in neutron matter. The respective points on the curves are indicated by the vertical arrows. Fig.~\ref{fig8}b shows the ratio of the magnetic field energy density $e_f=\frac{H^2}{8\pi}$ to the Helmholtz free energy density under the same assumptions as in Fig.~\ref{fig8}a. The intersection points of the respective curves in this panel with the line $e_f/f_H=0.5$ correspond to the magnetic fields at which the matter and field contributions to the Helmholtz free energy density are equal. 
This happens at $H\approx1.18\cdot10^{18}$~G for $T=0$~MeV and $H\approx1.08\cdot10^{18}$~G for $T=30$~MeV at $\varrho=3\varrho_0$, and at $H\approx1.81\cdot10^{18}$~G for $T=0$~MeV and $H\approx1.76\cdot10^{18}$~G for $T=30$~MeV at $\varrho=4\varrho_0$. These values are quite close to the respective values of the threshold field $H_{th}$, and, hence, the transition to the anisotropic regime occurs at the magnetic field strength at which the field and matter contributions to the Helmholtz free energy density become equally important. It is also seen from Fig.~\ref{fig8}b that, in all cases, when the longitudinal instability occurs at the critical field $H_c$, the contribution of the magnetic field energy density to the Helmholtz free energy density of the system dominates over the matter contribution. \begin{figure}[tb] \begin{center} \includegraphics[width=8.0cm,keepaspectratio]{fig10ab.eps} \end{center} \vspace{-2ex} \caption{ The Helmholtz free energy density of the system as a function of: (a) the transverse pressure $p_t$, (b) the longitudinal pressure $p_l$ at $T=0$ (solid lines) and $T=30$~MeV (dashed lines), and at two fixed densities, $\varrho=3\varrho_0$ and $\varrho=4\varrho_0$. The meaning of the vertical arrows in the top panel is the same as in Fig.~\ref{fig7}b. In the bottom panel, the physical region corresponds to $p_l>0$.} \label{fig9}\vspace{-0ex} \end{figure} Because of the pressure anisotropy, the EoS of neutron matter in a strong magnetic field is also anisotropic. Fig.~\ref{fig9} shows the dependence of the Helmholtz free energy density $f_H$ on the transverse pressure (top panel) and on the longitudinal pressure (bottom panel), after excluding the dependence on $H$ in these quantities. Since the dominant Maxwell term enters the pressure $p_t$ and the free energy density $f_H$ with a positive sign, and the pressure $p_l$ with a negative sign, the free energy density $f_H$ is an increasing function of $p_t$ and a decreasing function of $p_l$. 
In the case of the $f_H(p_t)$ dependence, at a given density, the same $p_t$ corresponds to a larger magnetic field $H$ at the temperature $T=0$~MeV than in the $T=30$~MeV case (see Fig.~\ref{fig7}a). The overall effect of the two factors (temperature and magnetic field) is a larger value of the free energy density $f_H$ at the given $p_t$ and density for the temperature $T=0$~MeV compared with the $T=30$~MeV case (see Fig.~\ref{fig9}a). Analogous arguments show that, at a given temperature and $p_t$, the Helmholtz free energy density is larger for the smaller density. In the case of the $f_H(p_l)$ dependence, at a given density, the same $p_l$ corresponds to a smaller magnetic field $H$ for the temperature $T=0$~MeV than in the $T=30$~MeV case (see Fig.~\ref{fig7}a). Hence, the free energy density $f_H$ at the given $p_l$ and density is larger for the temperature $T=30$~MeV than for the $T=0$~MeV case (see Fig.~\ref{fig9}b). Analogously, at a given temperature and $p_l$, the free energy density $f_H$ is larger for the larger density. In the bottom panel, the physical region corresponds to positive values of the longitudinal pressure. It is worth noting at this point that since the EoS of neutron matter becomes essentially anisotropic in an ultrastrong magnetic field, the usual scheme for finding the mass-radius relationship, based on the Tolman-Oppenheimer-Volkoff (TOV) equations~\cite{TOV} for a spherically symmetric and static neutron star, should be revised. Instead, the corresponding relationship should be found by a self-consistent treatment of the anisotropic EoS and the axisymmetric TOV equations, which replace the conventional TOV equations in the case of an axisymmetric neutron star. 
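The scale at which the Maxwell term becomes important can be checked with a quick unit conversion. The sketch below evaluates $e_f=H^2/8\pi$ in Gaussian units and expresses it in MeV\,fm$^{-3}$; the comparison against a matter free energy density of a few tens of MeV\,fm$^{-3}$ is only a rough illustrative scale, not a BSk20 value.

```python
import math

MEV_IN_ERG = 1.602176634e-6   # 1 MeV in erg
FM3_IN_CM3 = 1.0e-39          # 1 fm^3 in cm^3

def field_energy_density(H_gauss):
    """Magnetic field energy density e_f = H^2/(8 pi), returned in MeV fm^-3."""
    e_f_cgs = H_gauss**2 / (8.0 * math.pi)     # erg cm^-3 (Gaussian units)
    return e_f_cgs * FM3_IN_CM3 / MEV_IN_ERG   # convert to MeV fm^-3

for H in (1e17, 1e18, 2e18):
    print(f"H = {H:.0e} G : e_f = {field_energy_density(H):8.2f} MeV fm^-3")
```

Since $e_f$ scales as $H^2$, it is negligible at $10^{17}$~G but reaches about $25$~MeV\,fm$^{-3}$ at $10^{18}$~G and about $100$~MeV\,fm$^{-3}$ at $2\cdot10^{18}$~G, consistent with the crossing fields read off Fig.~\ref{fig8}b lying just above $10^{18}$~G.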
\section{Unusual behavior of the entropy at $H\not=0$} \begin{figure}[tb] \begin{center} \includegraphics[width=8.0cm,keepaspectratio]{fig11.eps} \end{center} \vspace{-2ex} \caption{ Same as in Fig.~\ref{fig6} but for the matter part $F_{Hm}/A$ of the Helmholtz free energy per neutron. The meaning of the vertical arrows is the same as in Fig.~\ref{fig7}b.} \label{fig10}\vspace{-0ex} \end{figure} As was discussed in the previous section, the magnitude of the spin polarization parameter increases with temperature in fields up to about $10^{19}$~G. The Helmholtz free energy density $f_H$, whose minimum at the given $\varrho,T,H$ determines the state of thermodynamic equilibrium, decreases with temperature (cf. Fig.~\ref{fig8}a), and, hence, such an unusual behavior of spin polarization with temperature is supported thermodynamically. The Helmholtz free energy density $f_H$ can be decomposed into the matter and field contributions, $$f_H=f_{Hm}+e_f,$$ with the matter contribution being $f_{Hm}=\frac{1}{\cal V}(E_m-TS)-HM$. The decrease of the Helmholtz free energy with temperature is, therefore, to be attributed to its matter part. Fig.~\ref{fig10} explicitly demonstrates this point. \begin{figure}[tb] \begin{center} \includegraphics[width=8.0cm,keepaspectratio]{fig12.eps} \end{center} \vspace{-2ex} \caption{ The difference between the entropies per neutron of magnetized neutron matter and nonpolarized neutron matter (with $\Pi=0$ at $H=0$) as a function of the magnetic field strength for the BSk20 Skyrme force at $T=15$ and $T=30$~MeV, and at two fixed densities, $\varrho=3\varrho_0$ and $\varrho=4\varrho_0$. The meaning of the vertical arrows is the same as in Fig.~\ref{fig7}b.} \label{fig11}\vspace{-0ex} \end{figure} An unexpected feature appears if we consider separately the behavior of the entropy of neutron matter with a generalized Skyrme interaction in a strong magnetic field. 
In Fig.~\ref{fig11}, the difference between the entropy per neutron of magnetized neutron matter and that of the nonpolarized state (with $\Pi=0$ at $H=0$) is presented as a function of the magnetic field at the temperatures $T=15$~MeV and $T=30$~MeV, and at the same densities considered above. It is seen that this difference is positive for all relevant magnetic field strengths. It appears that a spin polarized state is less ordered than the nonpolarized one, contrary to intuitive expectation. In section~\ref{spinpol}, we showed that the unusual behavior of the entropy of a spontaneously polarized state is related to its dependence on the effective masses of neutrons with spin up and spin down, and to the violation of the criterion~\p{lowtemD}. The entropy of magnetized neutron matter is given by the same general expression~\p{entr}, and, after performing the low-temperature expansion, we would arrive at the same constraint~\p{lowtemD} on the effective masses in a spin polarized state guaranteeing that its entropy is less than that of the nonpolarized state. Fig.~\ref{fig12} shows the left-hand side $D$ of the constraint~\p{lowtemD} as a function of the magnetic field strength at the temperature $T=15$~MeV and the densities $\varrho=3\varrho_0$ and $\varrho=4\varrho_0$, at which the accuracy of the approximation $T/\varepsilon_{F\sigma}\ll 1$ is acceptable. It is seen that the criterion~\p{lowtemD} is violated, and this explains the unusual behavior of the entropy of dense neutron matter in a strong magnetic field shown in Fig.~\ref{fig11}. Note that the unconventional behavior of the entropy of magnetized neutron matter with the Skyrme interaction was found earlier in Ref.~\cite{IY10}. The difference is that for the SLy7 Skyrme interaction used in that work a spontaneously polarized state appears already at zero temperature, while in the present work with the newly developed BSk20 Skyrme force spontaneous polarization appears only at temperatures above the critical one. 
We have checked that this last feature is also characteristic of the BSk19 and BSk21 Skyrme forces. If the appearance of a spontaneously polarized state is considered a weak point of a given Skyrme parametrization (this very argument was used in Ref.~\cite{GCP} as the motivation for developing a new series of Skyrme forces), then this underlines the need to focus further efforts on building a new generation of Skyrme forces free of such spin instabilities. Such an attempt was made in the recent article~\cite{CG} by drawing on ideas from nuclear energy density functional theory. However, the constraints obtained in that study on the Skyrme force parameters lead to the unrealistic consequence that the effective masses of nucleons with spin up and spin down in a polarized state should be equal, contrary to the results of calculations with realistic NN interactions~\cite{KS,BB}. On the other hand, the observational data still do not rule out the existence of a ferromagnetic hadronic core inside a neutron star caused by spontaneous ordering of hadron spins (in this respect, see, e.g., Refs.~\cite{HB,K}). In any case, the recently developed generalized Skyrme parametrizations BSk19-BSk21 are currently among the most competitive Skyrme forces for neutron star calculations, and they are certainly suitable for obtaining a qualitative estimate of the effects of the pressure anisotropy in strongly magnetized neutron matter at finite temperature. \begin{figure}[tb] \begin{center} \includegraphics[width=8.0cm,keepaspectratio]{fig13.eps} \end{center} \vspace{-2ex} \caption{ The difference $D$ in the constraint~\p{lowtemD} as a function of the magnetic field strength for the BSk20 Skyrme force at the temperature $T=15$~MeV, and densities $\varrho=3\varrho_0$ and $\varrho=4\varrho_0$. 
The meaning of the vertical arrows is the same as in Fig.~\ref{fig7}b.} \label{fig12}\vspace{-0ex} \end{figure} In summary, we have considered spin polarized states in dense neutron matter in the model with the recently developed BSk20 Skyrme interaction at finite temperature in the presence of strong magnetic fields up to $10^{20}$~G. Although the BSk20 Skyrme force was developed with the aim of avoiding spontaneous spin instability at zero temperature, it has been shown that spontaneous instability appears at temperatures above the critical one, which is larger than at least 35~MeV. For this reason, we limited our consideration to temperatures up to 30~MeV. For a spontaneously polarized state at finite temperature, the entropy demonstrates unusual behavior, being larger than that of the nonpolarized state. This feature has been related to the dependence of the entropy of a spin polarized state on the effective masses of spin-up and spin-down neutrons, and to the violation of a certain constraint on them at the corresponding densities and temperatures. In the strong magnetic fields considered in this study, the total pressure in neutron matter becomes anisotropic. It has been shown that for magnetic fields $H>H_{th}\sim10^{18}$~G the pressure anisotropy has a significant impact on the thermodynamic properties of neutron matter. In particular, the vanishing of the pressure along the direction of the magnetic field at the critical field $H_c>H_{th}$ leads to the appearance of the longitudinal instability of neutron matter. With increasing density and temperature of neutron matter, the threshold $H_{th}$ and critical $H_c$ magnetic fields also increase. In the limiting case considered in this study, corresponding to a density of about four times the nuclear saturation density and a temperature of a few tens of MeV, the critical field $H_c$ does not exceed $10^{19}$~G. 
This value can be considered as an upper bound on the magnetic field strength inside a magnetar. Our calculations show that the appearance of the longitudinal instability prevents the formation of a fully spin polarized state in neutron matter, and only states with moderate spin polarization can develop. In the anisotropic regime, the field contribution to the Helmholtz free energy density becomes comparable to, and even dominates over, the matter contribution. The longitudinal and transverse pressures and the anisotropic EoS of neutron matter in a strong magnetic field have been determined at densities and temperatures relevant for the interior of a magnetar. It has been clarified that the entropy of strongly magnetized neutron matter with the Skyrme BSk20 force demonstrates unusual behavior similar to that of the entropy of a spontaneously polarized state. In both cases, the same reason, discussed above, is responsible for such behavior. The obtained results can be of importance in studies of the cooling history and structure of strongly magnetized neutron stars. J.Y. was supported by grant 2010-0011378 from the Basic Science Research Program through NRF of Korea funded by MEST and by grant R32-10130 from the WCU project of MEST and NRF.
\section{Introduction} Speaker diarization, the problem of unsupervised segmentation of a temporal sequence into speaker-specific regions, is one of the first processing steps in the conversational analysis of multi-talker audio. The performance of a speaker diarization system is adversely influenced by factors like short speaker turns, overlaps between multiple speakers, far-field effects in the audio recording and environmental artifacts. The DIHARD evaluations explored various challenging environments for benchmarking diarization performance \cite{ryant2018first,ryant2019second,ryant2020dihard}. In the past decade, the dominant approach to speaker diarization has involved a two-step process. The first step consists of deriving embeddings from relatively short windowed segments of speech (typically $1-2$s of audio), while the second step involves the clustering of the embeddings. The early breakthroughs were reported with i-vector embedding extraction using unsupervised factor analysis modeling \cite{sell2014speaker}. The availability of large amounts of speaker-supervised speech recordings, along with the advancements in deep learning, has propelled the use of deep neural network (DNN) based embedding extractors like the x-vectors \cite{snyder2018x}. In terms of clustering, the common approach to speaker diarization is bottom-up clustering, where the goal is to successively merge clusters to achieve a one-to-one correspondence between the ground-truth speakers and the clusters~\cite{anguera2006robust}. The most popular approach is agglomerative hierarchical clustering (AHC) \cite{day1984efficient}. The inputs to the clustering algorithms commonly employ pre-processing techniques on the embeddings like length normalization \cite{garcia2011analysis}, principal component analysis (PCA) \cite{zhu2016online} and PLDA based affinity matrix computation~\cite{sell2014speaker}. Another common approach to clustering is spectral clustering~\cite{ning2006spectral}. 
In most of these approaches, the affinity matrix computation and the clustering are performed as two independent steps with different cost functions. A neural diarization approach, termed end-to-end neural diarization (EEND)~\cite{shinji2019ASRU, shinji2020Interspeech}, proposed recently, explores transformer models for speaker diarization. The key successes reported have been for recordings with $2-3$ speakers. However, training the EEND system with more speakers is challenging because of the permutation-invariant loss computation. Further, the performance on recordings with more than $4$ speakers does not improve over the clustering based approaches~\cite{leapDIHARD3}. Self-supervision in diarization can provide effective representations for downstream tasks without requiring the ground-truth labels~\cite{hendrycks2019using}. In this paper, we extend our previous work, based on path integral clustering (PIC) \cite{singh2021pic,Singh2020}, with a metric learning framework inspired by neural PLDA \cite{Ramoji2020}. The previous work explored graph based clustering with self-supervised representation learning. The learned representations were used to derive a cosine affinity score based adjacency matrix. This adjacency matrix, containing pair-wise similarity scores, was used in path integral clustering for speaker diarization~\cite{singh2021pic}. In the proposed work, we further explore a learnable metric based on neural PLDA in the self-supervised learning framework. In particular, both the embeddings and the adjacency matrix for graph based clustering are jointly learned. Using this joint learning, we show significant performance improvements over the baseline systems and the previous models based on self-supervised graph clustering~\cite{Singh2020,singh2021pic}. \section{Related Work And Contributions}\label{sec:relatedWork} The most common clustering approach used in speaker diarization is based on agglomerative hierarchical clustering (AHC)~\cite{day1984efficient}. 
The affinity measures explored for AHC consist of the cosine similarity score~\cite{silovsky2012speaker} or PLDA \cite{ioffe2006probabilistic, sell2014speaker}. Other methods for clustering include k-means \cite{shum2011exploiting} and spectral clustering~\cite{wang2018speaker}. The long short term memory (LSTM) network used for affinity computation, proposed by Lin et al.~\cite{Lin2019}, and a fully supervised speaker diarization approach, using unbounded interleaved-state recurrent neural networks (RNN), proposed by Zhang et al.~\cite{zhang2019fully}, have been investigated recently. The approaches based on re-segmentation~\cite{diez2018speaker,singh2019leap} consist of using the clustering based results to initialize a hidden Markov model (HMM) based clustering. Recently, the x-vector embedding based HMM variational Bayes approach, termed VBx, has shown promising results~\cite{landini2020bayesian}. For self-supervised clustering, loss functions based on k-means \cite{yang2017towards}, spectral clustering~\cite{shaham2018spectralnet} and agglomerative clustering~\cite{yang2016joint} have been investigated for image and text data. A recent work based on neural modeling of a discriminatively trained PLDA model \cite{Ramoji2020} was proposed for speaker verification. The neural formulation of PLDA allows the learning of the parameters in a Siamese network. This paper extends our previous works on self-supervised learning and graph-based clustering \cite{Singh2020,singh2021pic}. The previous works proposed representation learning and graph based clustering in an iterative self-supervised learning framework. A triplet loss based on cosine similarity was used for the representation learning. The final embeddings were used with agglomerative hierarchical clustering (AHC)~\cite{Singh2020} or with a more robust graph based clustering called path integral clustering (PIC) \cite{singh2021pic}. The affinity measure in both approaches was based on the cosine similarity score. 
In this work, we include the PLDA parameters as a learnable metric in the self-supervised learning framework, inspired by Ramoji et al.~\cite{Ramoji2020}. The PLDA parameters are learned along with the neural network parameters for the embedding extraction. The entire model is trained using the binary cross entropy (BCE) loss. The advantage of the proposed approach over the previous work is the direct optimization of the adjacency matrix used in the graph based clustering. We call the proposed approach self-supervised PLDA based metric learning with path integral clustering (SelfSup-PLDA-PIC). \section{Background} This section describes the pre-processing steps, the probabilistic linear discriminant analysis (PLDA) model and the path integral clustering algorithm used in our approach. \begin{figure*}[t!] \centering \includegraphics[width=15cm]{blockSchematic.jpg} \caption{Block schematic of the proposed self-supervised metric learning approach to speaker diarization. } \label{fig:blockSchematic} \end{figure*} \subsection{Pre-processing steps}\label{sec:preprocess} The x-vectors of each segment in a given recording, extracted using the TDNN network~\cite{snyder2018x}, are mean normalized and whitened. The whitened x-vector features are then processed using length normalization \cite{garcia2011analysis}. Further, a principal component analysis (PCA) based dimensionality reduction is also applied at the recording level \cite{zhu2016online}. \subsection {PLDA model} The PLDA model is a generative model which factorizes the input into a speaker factor and a channel factor. The simplified and widely used model is a linear-Gaussian generative model \cite{ioffe2006probabilistic}, where the x-vector ${\mathbf x}\in \mathbb{R}^D$ represents the segment embedding. The vector $\boldsymbol{y}\in \mathbb{R}^D$ represents the speaker factor. 
The distribution of ${\mathbf x}$ given the speaker factor $\boldsymbol{y}$ is assumed to be Gaussian, \begin{equation} p({\mathbf x}|\boldsymbol{y}) = N({\mathbf x};\boldsymbol{y},\bm{\Phi_w}) \end{equation} where $\bm{\Phi_w}\in \mathbb{R}^{D\times D}$ is the within-class covariance matrix. The latent vector $\boldsymbol{y}$ is assumed to be distributed according to the prior distribution: \begin{equation} p(\boldsymbol{y}) = N(\boldsymbol{y};\boldsymbol{m},\bm{\Phi_b}) \end{equation} where $\boldsymbol{m} \in \mathbb{R}^D$ and $\bm{\Phi_b}\in \mathbb{R}^{D\times D}$ are the global mean and the between-speaker covariance matrix, respectively. Further, $\bm{\Phi_w}$ and $\bm{\Phi_b}$ can be simultaneously diagonalized using a diagonalizing transform $\bm{V}\in \mathbb{R}^{D\times D}$: \begin{eqnarray}\label{eq:diagonalize} \boldsymbol{V}\bm{\Phi_w}\boldsymbol{V}^T=\bm{I},~~ \boldsymbol{V}\bm{\Phi_b}\boldsymbol{V}^T=\bm{\Psi} \end{eqnarray} where $\bm{\Psi}$ is a diagonal covariance matrix. Given this model, the generative graphical model can be expressed in terms of $\bm{v}$ and $\bm{u}$, {\color{black}the latent vectors} representing the speaker variable and the speaker embedding in the projected space, respectively. If $\bm{A}=\boldsymbol{V}^{-1}$, the generative model is expressed as: \begin{eqnarray}\label{eq:genModel} p(\boldsymbol{v}) &=& N(.|\bm{0},\bm{\Psi}) \\ \nonumber p(\boldsymbol{u}|\boldsymbol{v}) & =& N(.|\boldsymbol{v},\bm{I}), \\ {\mathbf x}&=&\boldsymbol{m}+\bm{A}\bm{u} \nonumber \end{eqnarray} \subsubsection{PLDA score computation} \label{sec:pldascoring} The embeddings from the PLDA model are used to obtain the pair-wise similarity score matrix $\boldsymbol{S}$, which captures the similarity between embeddings in the speaker space. 
The similarity score between a pair of embeddings ${\mathbf x}_i$ and ${\mathbf x}_j$, denoted as $s(i,j)$, is based on the log-likelihood ratio between the same-speaker hypothesis $\mathcal{H}_s$ and the different-speaker hypothesis $\mathcal{H}_d$, and can be computed using the PLDA model. We project the embeddings ${\mathbf x}$ into the latent space using Equation (\ref{eq:genModel}), \begin{equation}\label{eq:projected} \bm{u}=\bm{A}^{-1}({\mathbf x}-\boldsymbol{m})=\boldsymbol{V}({\mathbf x}-\boldsymbol{m})=\boldsymbol{V}{\mathbf x}-\bm{b} \end{equation} The similarity score can then be computed as \cite{ioffe2006probabilistic}: \begin{eqnarray}\label{eq:plda_score} s(i,j)&=-\frac{1}{2}\sum_{k=1}^{d}\Big[\log(\bm{\Psi}[k]+\frac{1}{2})-2\log(\bm{\Psi}[k]+1)\\\nonumber &+\frac{(\Bar {\bm{u}}[k])^2}{\bm{\Psi}[k]+\frac{1}{2}}+\sum_{l\in\{i,j\}}\Big((\bm{u}_l[k]-\Bar {\bm{u}}[k])^2-\frac{(\bm{u}_l[k])^2}{\bm{\Psi}[k]+1}\Big)\Big] \end{eqnarray} where $\bm{\Psi}[k]$ is the $k$-th diagonal element of $\bm{\Psi}$, $\bm{u}_i[k]$ is the $k$-th dimension of $\bm{u}_i$, and $\bar{\bm{u}}[k]=\frac{\bm{u}_i[k]+\bm{u}_j[k]}{2}$. \begin{algorithm}[t!] \SetAlgoLined \textbf{Initialize:} Construct a graph $G=(V,E)$ where the vertices $V$ are the input data $\bm{X}=\{x_1,x_2,...,x_{N_r}\}$. 
The weighted adjacency matrix $\bm{W}$ is computed and the transition probability matrix $\bm{P}$ is obtained by normalizing $\bm{W}$;\\ Form $n_c$ initial clusters $\mathcal{C}=\{\mathcal{C}_1,...,\mathcal{C}_{n_c}\}$ by assigning each sample $x_i$ to a cluster using nearest neighbor merging; $N^*=~$required number of speakers \\ \While{$n_c>N^*$}{ \begin{enumerate} \item Merge $\mathcal{C}_a$ and $\mathcal{C}_b$, if $\{\mathcal{C}_a,\mathcal{C}_b\}=\argmax\limits_{\mathcal{C}_a,\mathcal{C}_b\in \mathcal{C}} \mathcal{A}(\mathcal{C}_a,\mathcal{C}_b)$\\ where $ \mathcal{A}(\mathcal{C}_a,\mathcal{C}_b)$ is given in Equation (\ref{eq:affnty_pic}) \item $\mathcal{C}_c\leftarrow\{\mathcal{C}_c\backslash\{\mathcal{C}_a,\mathcal{C}_b\}\cup\{\mathcal{C}_a\cup\mathcal{C}_b\}\}$ and $n_c=n_c-1$ \item Recompute $\mathcal{A}$ \end{enumerate} } \textbf{Termination:}\vspace{0.5em} $\mathcal{C}_c$ \caption{Path Integral Clustering (Sec. \ref{sec:pic})} \label{algo:pic} \end{algorithm} \subsection{The path integral clustering}\label{sec:pic} The path integral clustering (PIC)~\cite{zhang2013agglomerative} is a graph-based agglomerative clustering algorithm, introduced for speaker diarization in \cite{singh2021pic}. In PIC, a directed graph is created such that the vertices represent the input features, connected by a set of edges. Let the x-vector embeddings from a recording $r$ be denoted as $\bm{X}=\{{\mathbf x}_1,{\mathbf x}_2,...,{\mathbf x}_{N_r}\}$, ${\mathbf x}_i \in \mathbb{R}^D$, where $N_r$ is the total number of embeddings present. We compute an adjacency matrix $\bm{W}$ using the similarity scores (cosine or PLDA scores). In each row of the adjacency matrix, only the scores of the $K$ nearest neighbors of $\bm{x}_i$ are retained. The matrix $\bm{W}$ is converted to the transition probability matrix $\bm{P}$ by dividing each row by its row sum. 
The PIC involves the computation of the path integral $\mathcal{S}_{\mathcal{C}_a}$ and the conditional path integral $\mathcal{S}_{\mathcal{C}_{a} \mid \mathcal{C}_{a} \cup \mathcal{C}_{b}}$ for every cluster pair $\mathcal{C}_a$ and $\mathcal{C}_b$ at each step of merging, as follows: \begin{eqnarray} \mathcal{S}_{\mathcal{C}_a} &=& \frac{1}{|\mathcal{C}_a|^2}\bm{1}^T\left(\bm{I}-\sigma\bm{P}_{\mathcal{C}_a}\right)^{-1}\bm{1} \label{eq:pic_ca}\\ \mathcal{S}_{\mathcal{C}_{a} \mid \mathcal{C}_{a} \cup \mathcal{C}_{b}} &=& \frac{1}{|\mathcal{C}_a|^2}\bm{1}_{\mathcal{C}_a}^T\left(\boldsymbol{I}-\sigma\bm{P}_{\mathcal{C}_{a} \cup \mathcal{C}_{b}}\right)^{-1}\bm{1}_{\mathcal{C}_{a}} \label{eq:pic_ca_cb} \end{eqnarray} where $|\mathcal{C}_a|$, the cardinality of $\mathcal{C}_a$, is used for the normalization of the path integrals, $\bm{P}_{\mathcal{C}_a}$ and $\bm{P}_{\mathcal{C}_{a} \cup \mathcal{C}_{b}}$ are sub-matrices of the transition probability matrix $\bm{P}$, the column vector $\bm{1}$ is a vector of all ones of size $|\mathcal{C}_a|$, and $\bm{1}_{\mathcal{C}_a}\in \mathbb{R}^{|\mathcal{C}_a \cup \mathcal{C}_b|}$ is a binary column vector containing ones and zeros corresponding to the nodes of $\mathcal{C}_a$ and $\mathcal{C}_b$, respectively. The scalar $0<\sigma<1$ discounts longer paths. The cluster affinity measure for the PIC algorithm is computed as, \begin{equation}\label{eq:affnty_pic} \mathcal{A}\left(\mathcal{C}_a,\mathcal{C}_b\right)= \mathcal{S}_{\mathcal{C}_{a} \mid \mathcal{C}_{a} \cup \mathcal{C}_{b}}-\mathcal{S}_{\mathcal{C}_{a}} + \mathcal{S}_{\mathcal{C}_{b} \mid \mathcal{C}_{a} \cup \mathcal{C}_{b}}-\mathcal{S}_{\mathcal{C}_{b}} \end{equation} where $\mathcal{S}_{\mathcal{C}_{a} \mid \mathcal{C}_{a} \cup \mathcal{C}_{b}}-\mathcal{S}_{\mathcal{C}_{a}}$ is the incremental path integral of $\mathcal{C}_{a}$. 
It represents the sum of the weighted paths between $\mathcal{C}_{a}$ and $\mathcal{C}_{b}$ whose starting and ending vertices are in $\mathcal{C}_{a}$. Thus, a higher affinity indicates denser connections between the clusters. The cluster pair with the maximum affinity is merged at each step. The pseudocode is given in Algorithm \ref{algo:pic}. \section{Proposed Approach}\label{sec:proposedWork} The block schematic of the proposed self-supervised metric learning with graph based clustering algorithm (SelfSup-PLDA-PIC) is given in Figure \ref{fig:blockSchematic}. The model consists of a representation learning network and a metric learning network. The x-vectors extracted from short overlapping audio segments are used as inputs to the model. The model generates the adjacency matrix which is used in PIC. The SelfSup-PLDA-PIC\footnote[1]{github code link: \url{https://github.com/iiscleap/SelfSup_PLDA.git}} jointly performs representation learning and metric learning using the initial clustering results. The output of the clustering generates speaker labels. These labels are used to form same-speaker and different-speaker score level targets for the adjacency matrix. The model training is performed using the binary cross entropy (BCE) loss with the target adjacency matrix. {\color{black}Since the model updates its parameters based on the unsupervised clustering labels, this is known as self-supervised training. Our previous work involved self-supervised representation learning with fixed cosine similarity scoring~\cite{Singh2020}, whereas here we introduce a metric learning block inspired by PLDA scoring (Sec. \ref{sec:pldascoring}). The similarity score is computed using Equation (\ref{eq:plda_score}), where, along with the embeddings $\bm{u}$, the PLDA parameters $\bm{\Psi}$ are also learned. Therefore, it is also referred to as neural PLDA~\cite{Ramoji2020}.} In the following sub-section, we discuss the model architecture and the joint representation learning and metric learning approach. 
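To make the scoring step concrete, the sketch below computes the pairwise PLDA log-likelihood ratio directly from the generative model of Equation (\ref{eq:genModel}), i.e., by evaluating the same-speaker and different-speaker Gaussian likelihoods dimension by dimension, rather than by transcribing the closed-form score expression, and squashes the scores with a sigmoid to form the adjacency matrix. This is an illustrative NumPy re-implementation, not the training code from the repository; in the actual model, $\bm{\Psi}$ is a learnable parameter.

```python
import numpy as np

def plda_llr(u_i, u_j, psi):
    """Pairwise PLDA log-likelihood ratio for diagonalized embeddings.

    Under the generative model p(v) = N(0, Psi), p(u|v) = N(v, I), each
    dimension k of a pair (u_i[k], u_j[k]) is jointly Gaussian:
      same speaker : covariance [[psi+1, psi], [psi, psi+1]]
      diff speaker : independent N(0, psi+1)
    The LLR is accumulated over dimensions.
    """
    a, b = u_i, u_j
    # log p(a, b | same speaker), per dimension (2x2 Gaussian)
    det_s = 2.0 * psi + 1.0  # determinant of the 2x2 covariance
    quad_s = ((psi + 1.0) * (a**2 + b**2) - 2.0 * psi * a * b) / det_s
    log_same = -np.log(2 * np.pi) - 0.5 * np.log(det_s) - 0.5 * quad_s
    # log p(a) + log p(b) under the different-speaker hypothesis
    log_diff = (-np.log(2 * np.pi) - np.log(psi + 1.0)
                - (a**2 + b**2) / (2.0 * (psi + 1.0)))
    return np.sum(log_same - log_diff)

def adjacency(U, psi):
    """Sigmoid-squashed pairwise PLDA scores, forming the matrix W."""
    n = len(U)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            W[i, j] = 1.0 / (1.0 + np.exp(-plda_llr(U[i], U[j], psi)))
    return W
```

The resulting symmetric matrix can then be sparsified to the $K$ nearest neighbors and row-normalized into $\bm{P}$ for the PIC step of Sec.~\ref{sec:pic}.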
\subsection{Model architecture and training} The representation learning network is a three-layer DNN with $\{D,d,d\}$ units, where $D$ is the x-vector dimension. It takes the x-vectors of a recording $r$ as input and generates $d$-dimensional embeddings $\bm{u}=\{\bm{u}_1,...,\bm{u}_{N_r}\}$. Let $\{\bm{Q},\bm{\Gamma},\bm{V}\}$ denote the learnable weights of the three layers and let $\bm{b}$ denote the bias of the last layer. The embeddings $\bm{u}$ are passed to the metric learning network, which performs pairwise neural PLDA scoring (discussed in Sec. \ref{sec:pldascoring}) using the learnable parameter $\bm{\Psi}\in \mathbb{R}^{d \times d}$. In the forward pass, we generate an adjacency matrix $\bm{W}$ using the pairwise PLDA scores and perform graph based path integral clustering (discussed in Sec.~\ref{sec:pic}). In the backward pass, we compute the ideal adjacency matrix using the clustering solution from the PIC step. The ideal adjacency matrix is a binary matrix consisting of target (label of $1$) and non-target speaker similarity (label of $0$) scores. Using the target and the model output, a binary cross entropy (BCE) based loss function is used to update the learnable parameters $\{\bm{Q},\bm{\Gamma},\bm{V},\bm{b},\bm{\Psi}\}$. A sigmoid non-linearity is applied on the neural PLDA scores before the BCE loss computation. \section{Experiments}\label{sec:experiment} \subsection{Evaluation data} \begin{itemize}[leftmargin=*] \item \textbf{AMI}: The AMI dataset~\cite{mccowan2005ami} contains meeting recordings from four different sites (Edinburgh, Idiap, TNO, Brno). The official speech recognition partition of the AMI dataset comprises development and evaluation sets of $18$ and $16$ recordings respectively, sampled at $16$kHz. We use the single distant microphone (SDM) condition of the AMI dataset for experiments. We also compare results on beamformed multi-distant microphone (MDM) recordings with other published results.
The number of speakers ranges from $3$-$5$ and the duration from $20$-$60$ mins. For the AMI dataset experiments, we use the diarization error rate (DER) metric with a $250$ms collar and by ignoring the overlap regions (as is the common practice on the AMI dataset). \item \textbf{DIHARD III}: The DIHARD III dataset \cite{ryant2020dihard} was released as the third in the series of DIHARD speech diarization challenges. It consists of development and evaluation sets of recordings with durations of $0.5$-$10$ mins. These recordings are drawn from $11$ domains, including audio-books, telephone recordings, clinical interviews, restaurant conversations and web videos. The number of speakers varies from $1$-$10$ with diverse regions of overlapping speech and speaker turn behavior. There are $254$ and $259$ recordings in the development and evaluation sets respectively. For the DIHARD dataset experiments, we use the DER metric with overlaps included and without a collar region. \end{itemize} \subsection{Baseline Model} Our baseline model is based on the DIHARD III baseline recipe \cite{ryant2020dihard}. It involves feature extraction followed by x-vector embedding extraction. The x-vectors are extracted using the extended-TDNN (ETDNN) \cite{snyder2019speaker} network. For training the ETDNN model, we use $40$D mel-filterbank features computed using a $25$ms window with a $10$ms shift. The $13$-layer ETDNN model follows the architecture described in \cite{snyder2019speaker,zeinali2019but}. The ETDNN model is trained on the VoxCeleb1 \cite{nagrani2017voxceleb} and VoxCeleb2 \cite{Chung2018} datasets, for the speaker identification task, to discriminate among the $7,146$ speakers. It has $4$ TDNN layers which alternate with $4$ fully connected layers of size $1024$D. This is followed by $2$ feed forward layers containing $\{1024,2000\}$ units. The segment pooling layer is $4000$D, containing the mean and standard deviation of the $1500$D layer computed at the segment level.
From the segment level features, the $512$-dimensional output of the affine component of the $11^{th}$ layer is taken as the x-vector embedding. The pre-processing steps mentioned in Section \ref{sec:preprocess} are applied to the x-vectors; these include a whitening transform obtained from the DIHARD development set, length normalization and recording level PCA. We preserve $30$ dimensions for the AMI dataset. For the DIHARD dataset, we choose the number of dimensions which preserves $30$\% of the total variance. The PLDA model is trained using $3$sec x-vectors extracted from a subset of VoxCeleb-1 and 2. These x-vectors are whitened using a PCA transform learned from the DIHARD development set. We use the same PLDA model for the DIHARD and AMI datasets. However, due to the longer recordings, we extract the embeddings using a segment size of $1.5$~s and a temporal shift of $0.75$~s for the AMI dataset, while segments of size $1.5$~s are extracted with a shift of $0.25$~s for the DIHARD dataset. \begin{figure*}[t!] \centering \includegraphics[trim={2.8cm 2.1cm 0.5cm 2.4cm},clip,width=\linewidth]{speaker_activity_score_mat.pdf} \vspace{-1.2cm} \caption{Similarity score matrices using PLDA, SSC-Cosine-PIC \cite{singh2021pic} and SelfSup-PLDA-PIC (proposed) for a 4-speaker recording from the AMI development set. The ground truth labels are plotted across time on top of the affinity matrices for comparison.} \label{fig:score_comparision} \end{figure*} \subsection{Model Initialization} The initialization is a critical step for self-supervised training to generate reliable labels. We initialize the model parameters $\{\bm{Q},\bm{\Gamma}\}$ using the whitening transform and the recording level PCA from the baseline system respectively.
The third layer's weight and bias $\{\bm{V},\bm{b}\}$ of the representation learning network are initialized with the diagonalizing transform $\boldsymbol{V}$ and bias $\bm{b}$ from the PLDA Equation (\ref{eq:projected}), obtained after applying the recording level PCA transform. Similarly, we initialize the metric learning parameter $\bm{\Psi}$ using the diagonal between-class covariance matrix defined in Equation (\ref{eq:diagonalize}). We perform initial clustering (AHC/PIC) until $N^0$ clusters remain. The value of $N^0$ is based on a threshold applied to the similarity scores for the AHC system. For the PIC system, we use a stopping threshold applied on the eigenvalues of the PIC affinity matrix (Equation (\ref{eq:affnty_pic})) \cite{singh2021pic} to estimate $N^0$. We select a threshold higher than the optimal threshold to avoid over-clustering. For the AMI dataset, the AHC threshold is set as $th=0.0$ to obtain $N^0$ for SelfSup-PLDA-AHC training. For the SelfSup-PLDA-PIC system, the eigenvalue based threshold is set at $th=0.7$ for initial clustering in the self-supervised training. For the DIHARD dataset, the threshold for the SelfSup-PLDA-AHC system is set at $th=-0.7$. Since the DIHARD dataset has a large variation in the number of speakers ($1$-$10$), the number of speakers estimated by AHC is used as $N^0$ in the SelfSup-PLDA-PIC experiments. \subsection{Choice of hyper-parameters} The hyper-parameters involved in our approach are selected based on the performance on the development sets of both datasets. The nearest neighbour count $K$ and the scaling factor $\sigma$ are the hyper-parameters for PIC. The values $K=30$ and $\sigma=0.1$ performed best for the AMI dataset; similarly, for the DIHARD dataset, the best values are $K=40$ and $\sigma=0.5$. After model training, a temporal continuity of the similarity scores can be incorporated \cite{singh2021pic}.
This is done by multiplying the similarity score $s({i,j})$ with an exponential decay given by, \begin{equation} s'(i,j) = s(i,j)\beta^{\min(n_b,|i-j|)} \label{eq:temp_continuity} \end{equation} where $\beta<1$ is a positive decay factor, $|i-j|$ is the absolute difference between the segment indices of the $i$th and $j$th embeddings, and $n_b$ is the maximum value of the exponent applied to $\beta$. \section{Results} \subsection{AMI dataset} The results for various system configurations on the AMI development and evaluation datasets are reported in Table~\ref{tab:ami_known}. These experiments use the ETDNN based x-vectors. We consider two cases in evaluation, with the known number of speakers $N^*$ and with the unknown number of speakers. The baseline system is the x-vector with PLDA scoring and AHC. The use of graph based clustering with PIC improves the baseline system significantly. The self-supervised clustering (SSC) with a cosine based affinity matrix proposed previously \cite{singh2021pic} further improves over the PIC based system. The joint metric learning with representation learning proposed in this paper, denoted as SelfSup-PLDA, is shown to provide significant improvements over the previously proposed SSC-Cosine model. The relative DER improvement over the baseline system for SelfSup-PLDA with temporal continuity is $66$\% and $60$\% for the AMI development and evaluation datasets respectively, for the condition with an unknown number of speakers. Further, the application of VBx re-segmentation~\cite{landini2020but} on the outputs from the SelfSup-PLDA-PIC system provides final DER values of $2.9$\% and $4.2$\% on the development and evaluation datasets. Our VBx setup is based on the baseline ETDNN x-vectors with a PLDA adapted on the DIHARD dev set. To the best of our knowledge, these results on the SDM data of the AMI corpus constitute the lowest DER reported in the literature.
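The temporal continuity weighting of Equation (\ref{eq:temp_continuity}) is straightforward to apply to a full score matrix. The sketch below is a vectorized numpy version; the default values of $\beta$ and $n_b$ are placeholders, since the paper tunes these on the development sets:

```python
import numpy as np

def temporal_continuity(S, beta=0.95, n_b=20):
    # s'(i,j) = s(i,j) * beta ** min(n_b, |i-j|), applied to the
    # whole similarity matrix S at once. beta and n_b defaults are
    # illustrative placeholders, not the tuned values.
    n = S.shape[0]
    idx = np.arange(n)
    gap = np.minimum(np.abs(idx[:, None] - idx[None, :]), n_b)
    return S * beta ** gap
```

Scores between temporally distant segments are damped, while the cap $n_b$ prevents very distant same-speaker pairs from being suppressed entirely.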
The results using residual network (ResNet101) based x-vector embeddings \cite{landini2020bayesian} are shown in Table~\ref{tab:ami_unknown}. For these experiments, we use the ResNet101 architecture. The first layer is a 2D convolutional layer. This is followed by four residual blocks. Each residual block consists of $3$ residual convolutions. The output of the residual blocks is fed to the statistics pooling layer (from each of the $8$ heads). The dense network layer following the pooling layer is used to extract the ResNet x-vectors. The training data and the cost function used to train the ResNet model are similar to those of the ETDNN framework. Comparing the system with PLDA scoring and PIC for the ETDNN x-vectors (Table~\ref{tab:ami_known}) and the ResNet x-vectors (Table~\ref{tab:ami_unknown}), we find that the ResNet x-vectors improve significantly over the ETDNN based x-vectors although both models contain a similar number of parameters ($\sim 10^6$). The performance on the evaluation data for this system is $6.2$\% DER. Further, even with this improved baseline model, the proposed approach of SelfSup-PLDA with graph based clustering and the incorporation of temporal continuity yields significant improvements. With self-supervised metric learning, either of the x-vector models achieves a similar final DER on the AMI evaluation dataset. \subsection{Adjacency matrix analysis} The similarity score matrices (adjacency matrices used in graph clustering) for the baseline x-vector PLDA system (left), the self-supervised representation learning with cosine scoring~\cite{singh2021pic} (middle) and the proposed self-supervised metric learning (right) are shown in Figure~\ref{fig:score_comparision}. The adjacency score matrix used in the proposed approach is processed with a sigmoid non-linearity for training with the BCE loss. The ground truth speaker activity for the four speakers in this recording is also shown in this figure.
The same-speaker regions of the similarity matrix from the baseline x-vector PLDA system are not well pronounced. The self-supervised embedding learning with cosine scoring improves the contrast between the scores from same-speaker and cross-speaker segments. The proposed approach of metric learning with self-supervised principles is seen to provide the best contrast between the scores from same-speaker and cross-speaker regions of the given audio recording. This increase in contrast partly explains the significantly improved DER results observed in Table~\ref{tab:ami_known} for the proposed SelfSup-PLDA-PIC model. \begin{table}[t!] \caption{\color{black}{DER (\%) using ETDNN x-vectors on the AMI dataset.}} \vspace{0.2cm} \label{tab:ami_known} \centering \resizebox{\columnwidth}{!}{\begin{tabular}{|l|c|c|c|c|} \hline \multirow{2}{*}{\textbf{System}} & \multicolumn{2}{c|}{\textbf{Known $N^*$}}& \multicolumn{2}{c|}{\textbf{Unknown $N^*$}} \\ \cline{2-5} & Dev. & Eval. & Dev. & Eval. \\ \hline\hline x-vec + PLDA + AHC &15.9 & 12.2 &13.1 &12.3 \\ x-vec + PLDA + PIC & \color{black} 5.1 & \color{black}10.2 & \color{black}5.8 & \color{black}11.4 \\ SSC-Cosine-PIC \cite{singh2021pic} &5.3 &6.2 &6.5 &8.4 \\ \midrule SelfSup-PLDA-AHC & 7.9 & 7.3 & 7.7 & 9.4 \\ SelfSup-PLDA-PIC & \color{black} 4.2 & \color{black} 6.2 & \color{black} \textbf{4.4} & \color{black} 6.9 \\ + Temporal continuity & {\color{black}\textbf{4.2}} & {\color{black}\textbf{4.2}} & {\color{black}\textbf{4.4}} & {\color{black}\textbf{4.9}} \\ SelfSup-PLDA-PIC + VBx~\cite{landini2020but} & - &- & \textbf{\color{black}2.9} & \textbf{\color{black}4.2} \\ \hline \end{tabular}} \vspace{-0.4cm} \end{table} \begin{table}[t!]
\caption{\color{black}{DER (\%) using the ResNet x-vectors on the AMI dataset.}} \vspace{0.2cm} \label{tab:ami_unknown} \centering \resizebox{0.75\columnwidth}{!}{\begin{tabular}{|l|c|c|} \hline \multirow{2}{*}{\textbf{System}} & \multicolumn{2}{c|}{\textbf{Unknown $N^*$}} \\ \cline{2-3} & Dev. & Eval. \\ \hline\hline x-vec + PLDA + PIC & 6.0 & 6.2 \\ SelfSup-PLDA-PIC &4.6 & 6.0 \\ + Temporal continuity &4.4 & \textbf{4.3} \\ SelfSup-PLDA-PIC + VBx~\cite{landini2020but} & \textbf{3.4} & 4.5 \\ \hline \end{tabular}} \vspace{-0.5cm} \end{table} \begin{table}[t!] \caption{\color{black}{DER (\%) on the MDM recordings of the AMI dataset (without TNO recordings).}} \vspace{0.2cm} \label{tab:ami_comparison} \centering \resizebox{0.85\columnwidth}{!}{\begin{tabular}{|l|c|c|} \hline \multirow{2}{*}{\textbf{System}} & \multicolumn{2}{c|}{\textbf{Unknown $N^*$}} \\ \cline{2-3} & Dev. & Eval. \\ \hline\hline x-vec(ResNet101)+AHC+VBx \cite{landini2020bayesian} & 2.78 & 3.09 \\ ECAPA-TDNN \cite{dawalatabad2021ecapa} & 3.66 & \textbf{3.01} \\ x-vec(ETDNN)+ SelfSup-PLDA-PIC &5.38 &4.63 \\ -- + VBx~\cite{landini2020but} &\textbf{2.18} &3.27 \\ \hline \end{tabular}} \end{table} \subsection{Comparison with prior literature} We compare the recent works of Landini et al. \cite{landini2020bayesian} and Dawalatabad et al.~\cite{dawalatabad2021ecapa} with the work proposed in this paper. These previous works report results using the beamformed audio from the AMI corpus (multi-distant microphone or MDM), while all the previous results reported in this work used the more challenging single distant microphone (SDM) condition. To enable a direct comparison, we did not perform any adaptation or fine-tuning on the MDM data. Rather, the same models used for the SDM evaluations are employed to perform diarization on the MDM recordings. Further, in line with the prior works, we have also omitted the TNO recordings from the development and evaluation sets in these results.
The comparative analysis is shown in Table~\ref{tab:ami_comparison}. The addition of VBx based re-segmentation to the proposed approach provides the best performance on the AMI development set compared to the prior works, with a final DER of $2.18$\%. Further, the result on the evaluation set (DER of $3.27$\%) is slightly inferior to the best reported result of $3.01$\%. This analysis highlights that, even without fine-tuning or adapting the self-supervised model parameters to the MDM condition, the techniques reported in this paper can match the best state-of-the-art results for the beamformed audio recordings. \subsection{DIHARD dataset} The results on the DIHARD dataset are reported in Table~\ref{tab:dihard}. The PIC approach improves slightly over the AHC approach used in the baseline system~\cite{ryant2020third}. The self-supervised learning approach with cosine similarity \cite{singh2021pic} degraded the performance relative to the baseline system. This was analyzed to be partly due to the reduced duration of the recordings, the large number of speakers within a given recording, and the lack of robustness of the simple cosine similarity scoring. The proposed SelfSup-PLDA approach improves over both the AHC and PIC systems. Without VBx re-segmentation, the best results are achieved by the SelfSup-PLDA-AHC model. However, the VBx re-segmentation did not improve over the re-segmentation applied on the baseline model. As seen here, the improvements on the DIHARD dataset are less significant than those observed on the AMI dataset. The primary reason for this reduction is the shorter recordings ($0.5$-$10$ mins) in the DIHARD dataset compared to the $20$-$60$ min AMI recordings. The self-supervised metric learning approaches proposed in this work rely on recording level labels to improve the adjacency matrix used in the graph based clustering.
With a reduced number of embeddings, the training of the SelfSup-PLDA model is compromised. Secondly, the DIHARD dataset covers diverse domains, some of which have a large number of speakers (more than $7$) in a given recording. \begin{table}[t!] \caption{\color{black}{DER (\%) when the number of speakers ($N^*$) is unknown for the DIHARD dataset.}} \vspace{0.25cm} \label{tab:dihard} \centering \resizebox{0.76\columnwidth}{!}{\begin{tabular}{|l|c|c|} \hline \multirow{2}{*}{\textbf{System}} & \multicolumn{2}{c|}{\textbf{Unknown $N^*$}} \\ \cline{2-3} & Dev. & Eval. \\ \hline\hline x-vec + PLDA + AHC \cite{ryant2020third} &19.7 & 19.5 \\ -- + VBx \cite{landini2020but} & 17.0 & 16.6 \\ x-vec + PLDA + PIC & \color{black} 19.7 & \color{black} 18.9 \\ -- + VBx \cite{landini2020but} & \color{black} \textbf{16.8} & \textbf{16.3} \\ SSC-Cosine-PIC \cite{singh2021pic} &23.9 &21.1 \\ \midrule SelfSup-PLDA-AHC & \textbf{18.9} & \textbf{18.2} \\ SelfSup-PLDA-PIC & \color{black} 19.2 & \color{black} \textbf{18.2} \\ -- + VBx \cite{landini2020but} & 17.5 & 17.2 \\ \hline \end{tabular}} \end{table} The large number of speakers also decreases the quality of the pseudo cluster labels used in the self-supervised learning. In order to analyze the impact of the increased number of speakers, we split the results reported in Table~\ref{tab:dihard} into two conditions: recordings having less than or equal to $7$ speakers and those with more than $7$ speakers. This analysis is reported in Table~\ref{tab:dihard-7spk}. As seen here, the self-supervised metric learning model provides consistent performance improvements for the recordings having less than $8$ speakers. On the other hand, for recordings with more than $7$ speakers, the self-supervised metric learning results in a performance degradation. As hypothesized earlier, the degradation is attributed to the errors in the pseudo-labels used in the self-supervised learning.
In future work, a confidence measure will be explored in the self-supervised learning framework to avoid over-fitting to the noisy cluster labels. \begin{table}[t!] \caption{\color{black}{Average DER (\%) on the DIHARD dataset for recordings with $\leq 7$ speakers and $>7$ speakers}} \vspace{0.1cm} \label{tab:dihard-7spk} \centering \resizebox{0.85\columnwidth}{!}{\begin{tabular}{|l|c|c|c|c|} \hline \multirow{2}{*}{\textbf{System}} & \multicolumn{2}{c|}{\textbf{$\leq 7$ speakers}}& \multicolumn{2}{c|}{\textbf{$>7$ speakers}} \\ \cline{2-5} & Dev. & Eval. & Dev. & Eval. \\ \hline\hline x-vec + PLDA + AHC &18.0 & 19.3 &36.6 & 27.1 \\ x-vec + PLDA + PIC & \color{black} 17.7 & 17.8 & \textbf{36.5} & \textbf{24.0} \\ \midrule SelfSup-PLDA-PIC & \textbf{17.0} & \textbf{17.2} & 39.5 & 28.1 \\ \hline \end{tabular}} \end{table} \section{Summary} We have proposed an approach to perform metric learning and clustering jointly for the task of diarization. The metric learning is performed in a self-supervised manner by updating the neural PLDA model using cluster identities provided by graph based path integral clustering. Using an iterative procedure of metric learning and clustering, we show that the proposed algorithm provides improved similarity scores and precise speaker clusters. On challenging diarization datasets, we have illustrated the performance improvements obtained using the proposed approach. In particular, the self-supervised metric learning algorithm provides the best results reported thus far for the AMI single distant microphone condition. On the more challenging DIHARD dataset, the proposed approach did not show improvements when the number of speakers in the given recording was greater than $7$. However, for the recordings with {\color{black}less than $8$ speakers}, the model showed consistent performance improvements over the baseline systems.
\section{Acknowledgements} The authors would like to thank Abhishek Anand and Michael Free of British Telecom Research for their inputs in the model development. The authors would also like to acknowledge the efforts of Rajat Varma in DIHARD experiments as well as in the manuscript preparation. \bibliographystyle{IEEEbib}
\section*{Introduction} \label{intro} Innovation is widely viewed as essential for achieving long-run economic development and sustainability. Measuring and monitoring innovation is therefore key for our understanding of trends in technology and structural transformation over time and between countries. Innovation is typically measured through various indicators covering different stages of the innovation process \cite{dziallas2019}, including scientific output, R\&D expenditures and patents, or compound indices of these \cite{edquist2018, soumitra2020, wipo2021}. Patents remain the most widely used innovation indicator and for good reason. Patent records are publicly and readily available, and contain detailed information about inventors and inventing firms globally and for long time periods. In addition, patent records offer ways of assessing the value of patents through patent citation counts \cite{trajtenberg1990, jaffe2002, hall2005}, and the insights into knowledge flows that can be gained from analysis of patent citations are unparalleled \cite{jaffe2002}. There are important sources of discrepancies, however, between innovations and patents, and the extent to which the patent and innovation systems overlap is not settled \cite{fontana2013, higham2021}. The question of how much the patent and innovation systems actually overlap also haunts policy discussions. The patent system has come under fire for being inefficient \cite{stiglitz2007, jaffe2011, derassenfosse2021}, distorting incentives or blocking innovation \cite{heller1998,cohen2016}, while the number of innovations affected by patents or patent laws has been argued to be low or uncertain \cite{moser2005, moser2012, fontana2013, lerner2009}. Gaining a better understanding of this overlap is therefore key also for policy debates. This study presents evidence about the amount and quality of information on innovation that is possible to identify within the patent system.
In doing so, this study addresses methodological issues and supplies evidence for broader discussions about the relationship between the patent and innovation systems. This effort is based on a literature-based innovation output (LBIO) database \cite{taalbi2017what, kander2019} containing 4,460 commercialized innovations in Sweden, one of the world's highest-ranked innovative economies \cite{wipo2021}. These innovations, commercialized between 1970 and 2015, were linked to 13,561 patents across various national and supra-national patent offices, through manual and machine-learning-assisted searches in Google Patents. The matching methodology is detailed in the supplementary materials. This data enables examination of the information overlap between patents and commercialized innovations from multiple angles. This study views the problem of measuring innovation as analogous to information transmission from a source to a receiver through a noisy channel (Fig. \ref{fig:information}). The key question is how much information the receiver (patent analyst) has about the source (innovations). This study phrases this question as follows: what fraction of innovations can be correctly identified by a patent analyst, based on data available within the patent system? This fraction is circumscribed by three factors (Fig. \ref{fig:information}). First, not all innovations are patented, but some fraction $\rho$ that is determined by property laws and appropriability strategies. Previous research proposes varying estimates of the percentage of innovations that are patented, from 9.6\% \cite{fontana2013} and 36\% \cite{arundel1998} to almost half \cite{cohen2000}. A second important issue is that patents, strictly speaking, reflect invention, which may, or may not, lead to \emph{innovation}, viz. new combinations that are commercialized or otherwise come into economic use \cite{oecd2018}.
Patenting is also often the outcome of strategic decisions to protect intellectual advances, rather than reflecting innovation activity \cite{dernis2001}. Therefore, to weed out less important or less valuable patents, the usual strategy is to use patent citations. However, while some studies suggest patent citations to be a good measure of economic value \cite{trajtenberg1990, jaffe2002,hall2005}, others find that patent citation counts are ``noisy'', heterogeneous over time, across sectors and countries \cite{criscuolo2008, gambardella2008, roach2013, higham2021, lerner2022}. Such quality-adjusted indicators must balance two aspects: the amount of actual innovations (true positives) that are captured, the recall $\alpha$, and the fraction of true positives among all patents identified, the precision $\beta$. The information about innovations in the final patent selection is then defined by the fraction of innovations covered and the precision of the selection: $\rho \times \alpha \times \beta $. This measure is further motivated in the supplementary materials. 
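In code, the measure is a single product. The sketch below (the function name is our own) reproduces the worked example of Fig. \ref{fig:information}:

```python
def information_fraction(rho, alpha, beta):
    # Fraction of all innovations a patent analyst can correctly
    # identify: patented share (rho) x recall (alpha) x precision (beta).
    return rho * alpha * beta

# Example of Fig. 1: half of the innovations are patented, half of
# those are retrieved, and one in three retrieved patents is a true
# innovation, giving 1/12 of the source.
frac = information_fraction(0.5, 0.5, 1 / 3)
```

The product structure makes clear that a loss at any stage, whether in patenting, recall or precision, scales down the analyst's information multiplicatively.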
\begin{figure*}[h] \centering \begin{subfigure}[b]{.50\linewidth} \begin{tikzpicture}[ bigcircle/.style={ text width=1.6cm, align=center, line width=1mm, draw, rounded corners, minimum width = 2.5cm, minimum height = 2cm, font=\sffamily\footnotesize }, desc/.style 2 args={ text width=2.5cm, font=\sffamily\scriptsize\RaggedRight, label={[#1,yshift=-1.5ex,font=\sffamily\footnotesize]above:#2} }, node distance=10mm and 15mm ] \node[anchor=north] at (2.2,2.5) {$\rho$ patented}; \node[anchor=north] at (6.2,2.5) {\makecell{recall $\alpha$ \\ with precision $\beta$}}; \node [bigcircle] (circ1) {Innovations}; \node [bigcircle,black,right=of circ1] (circ2) {Patent data}; \node [bigcircle,black,right=of circ2] (circ3) {Identification of patented innovations}; \draw [black!80] (circ1) -- (circ2) -- (circ3) ; \matrix [ matrix of nodes, draw=black, line width=0.8mm, below = of circ1, nodes={font=\ttfamily}, every node/.style={anchor=base,text depth=.5ex,text height=2ex,text width=4em} ] { Skype \\ Spotify \\ NMT \\ SBR \\ }; \matrix [ matrix of nodes, draw=black, line width=0.8mm, below = of circ2, nodes={font=\ttfamily}, every node/.style={anchor=base,text depth=.5ex,text height=2ex,text width=4em} ] { Skype \\ Spotify \\ |[fill=black!10]| \\ |[fill=black!10]| \\ Patent 1\\ Patent 2\\ Patent 3\\ }; \matrix [ matrix of nodes, draw=black, line width=0.8mm, below = of circ3, nodes={font=\ttfamily}, every node/.style={anchor=base,text depth=.5ex,text height=2ex,text width=4em} ] { Skype \\ |[fill=black!10]| \\ |[fill=black!10]| \\ |[fill=black!10]| \\ Patent 1\\ Patent 2\\ |[fill=black!10]| \\ }; \node[anchor=north] at (2.1,-4) {$\rho = 1/2$ }; \node[anchor=north] at (6.2,-4) {\makecell{$\alpha=1/2$, \\ $\beta=1/3$}}; \end{tikzpicture} \end{subfigure} \caption{(a) Innovations and patents viewed as information transmission through a noisy channel. A fraction $\rho$ of $N$ innovations enter into the patent system. 
The patent system also contains noise in the form of non-commercialized inventions that can be reduced through patent quality measures. The quality of the information about innovations depends on the fraction of true positives identified (recall $\alpha$), and the probability that a patent identified is truly an innovation (precision $\beta$). (b) Example. If asked to name the patented innovation, the patent analyst would on average identify it correctly a third of the time. The information about the original source, as a fraction of the total, is $\rho \times \alpha \times \beta$ or $1/12$ in the example.} \label{fig:information} \end{figure*} \section*{Results} \paragraph*{Patent propensity} \begin{figure*}[h] \begin{subfigure}[b]{.45\linewidth} \caption{} \label{fig:patprop_cum} \includegraphics[width=7cm, height=7cm, keepaspectratio]{patprop_cum.png} \end{subfigure} \begin{subfigure}[b]{.50\linewidth} \centering \caption{} \label{fig:proptot} \includegraphics[width=7cm, height=7cm, keepaspectratio]{proptot.png} \end{subfigure} \begin{subfigure}[b]{.50\linewidth} \caption{} \label{fig:propPO} \includegraphics[width=7cm, height=7cm, keepaspectratio]{propPO.png} \centering \end{subfigure} \begin{subfigure}[b]{.50\linewidth} \centering \caption{} \label{fig:sectoral} \includegraphics[width=7cm, height=7cm, keepaspectratio]{sectoral.png} \end{subfigure} \caption{(a) Patent propensity by patent office. (b) Total patent propensity, by commercialization year 1970-2015, (c) EPO, USPTO, Sweden, Japan, (d) Patent propensity across sectors (ISIC Rev. 3), all patent offices.
Note: Results not given when sectoral counts are below five.} \label{fig:breakdown} \end{figure*} \begin{figure*}[h] \begin{subfigure}[b]{.50\linewidth} \centering \caption{} \label{fig:decomposition} \includegraphics[width=7cm, height=7cm, keepaspectratio]{decomposition.png} \end{subfigure} \begin{subfigure}[b]{.50\linewidth} \centering \caption{} \label{fig:counterfactual} \includegraphics[width=7cm, height=7cm, keepaspectratio]{counterfactual_mvprobit.png} \end{subfigure} \caption{(a) Decomposition of changes in average patent propensity from previous decade. Percentage point contributions from changes within sectors, between sectors and an interaction effect. (b) Patent propensity and estimates of upper bound of the share of innovations dependent on IPR policy changes for five countries.} \label{fig:patentpropensity} \end{figure*} We turn first to the propensity to patent ($\rho$ in Fig. \ref{fig:information}). The main results are given in Fig. \ref{fig:breakdown} and further detailed in Table \ref{tab:sumstat}. The results show that 43.9 percent of all innovations launched in 1970-2015 were patented in at least one patent office, while the patent propensity for any single patent office was highest for the Swedish and US patent offices. Combining data from two or three patent offices, however, suffices to capture a near-complete set of all patented innovations (Fig. \ref{fig:patprop_cum}). Looking beneath the aggregate reveals stark differences over time and across sectors. The patent propensity to the Swedish Patent Office increased from a low of 13.9\% in 1970 to 57.7\% in 2000, before falling back to 21.1\% in 2015. The patent propensity to the USPTO increased from 26.6\% in 1970 to 49.0\% in 2000, before falling back to 25.3\% in 2015 (Fig. \ref{fig:propPO}). These results are in line with the emergence of a pro-patent era in the late 1980s \cite{granstrand2000, granstrand2012}.
Some studies have suggested that this may have been driven by an increased emphasis on high-tech industries where patents are an important means of rent appropriation \cite{kim2004}. Table \ref{tab:sectoral} and Figure \ref{fig:sectoral} suggest that patenting propensity differs substantially across sectors. In line with earlier studies \cite{arundel1998}, we see that high-tech industries like R\&D services and pharmaceuticals have had an especially high patent propensity. At the other end of the spectrum, several industries, including paper and pulp, foodstuff, wood, and ICT sectors like computer equipment and software, have had a patenting propensity below 40\%, and some below 30\%. These differences are statistically robust in logistic regressions that predict the propensity to patent an innovation (Table \ref{tab:logistic}). At the level of individual innovations, we also observe generic statistical associations between the developmental complexity of an innovation, its radicalness and the propensity to patent it (Table \ref{tab:logistic}). Meanwhile, it is also evident from Fig. \ref{fig:sectoral} that patent propensity has increased in most sectors, including low-tech sectors. To better understand the drivers of these patterns, a simple decomposition of the change in average patent propensity between decades (1970s, 1980s, 1990s, 2000s and 2010s) was carried out. It is clear from this analysis (Figure \ref{fig:decomposition}) that the trends observed mainly reflect generic changes in the propensity to patent, and that the patterns are not driven by any one especially patent-intensive sector. In fact, during the decades when patent propensity increased, the ``between effect'' was negative, suggesting that growing sectors tended to be sectors with relatively low patent propensity. This is in contrast to the notion that the patent-intensive ICT sectors drove increases in patent propensity \cite{kim2004, holgersson2018}.
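The decomposition into within, between and interaction terms is a standard shift-share form. Since the exact variant used is described in the supplementary materials, the formula below, along with the function and variable names, is our own assumption:

```python
import numpy as np

def decompose_change(w0, p0, w1, p1):
    # Decompose the change in average patent propensity between two
    # decades (0 and 1) into within-sector, between-sector and
    # interaction terms. w: sectoral shares of innovations,
    # p: sectoral patent propensities.
    w0, p0, w1, p1 = map(np.asarray, (w0, p0, w1, p1))
    within = np.sum(w0 * (p1 - p0))       # propensity change at fixed shares
    between = np.sum((w1 - w0) * p0)      # compositional shift at fixed propensities
    interaction = np.sum((w1 - w0) * (p1 - p0))
    return within, between, interaction
```

By construction, the three terms sum to the total change in the average propensity, $\sum_s w^1_s p^1_s - \sum_s w^0_s p^0_s$, so a negative between term indicates growth in sectors with below-average propensity.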
As these patterns are generic, one must look for structural explanations. For this reason, econometric tests were carried out, analyzing whether the propensity to patent a Swedish innovation in a given country depends on the country's patent laws, including patent duration, coverage, and enforcement \cite{ginarte1997,park2008}. Applying a multivariate probit approach, the findings (Fig. \ref{fig:counterfactual} and Table \ref{tab:mvprobit}) confirm that these patterns are partially explained by strengthened international patent laws, affecting all industries. According to these results, 8\% of all innovations since 1970 can be linked to strengthened patent laws (Fig. \ref{fig:counterfactual}). After the TRIPS agreement, the estimated effect was 9.8\%, reaching at most 15.5\% in a single year in the late 1990s. These estimates can also be understood as an upper bound of the fraction of innovations that were forthcoming due to strengthened intellectual property rights (see supplementary text for further discussion). Interpreted as such, the results indicate a limited impact of patent laws on innovation, in line with previous work \cite{moser2005, moser2012, lerner2009}.
\paragraph*{Prediction of innovations from patent quality statistics} \begin{figure*}[htbp] \centering \footnotesize \begin{subfigure}[b]{.45\linewidth} \caption{} \label{fig:precisionrecall} \includegraphics[width=7cm, height=7cm, keepaspectratio]{precisionrecall.png} \end{subfigure} \begin{subfigure}[b]{.45\linewidth} \centering \footnotesize \caption{} \label{fig:stripplot_cits} \includegraphics[width=7cm, height=7cm, keepaspectratio]{vioplot_cits.png} \end{subfigure} \begin{subfigure}[b]{.45\linewidth} \centering \caption{} \label{fig:sshare_cits} \footnotesize \includegraphics[width=7cm, height=7cm, keepaspectratio]{sshare_cits.png} \end{subfigure} \begin{subfigure}[b]{.45\linewidth} \centering \footnotesize \caption{} \label{fig:frequency_cits} \includegraphics[width=7cm, height=7cm, keepaspectratio]{frequency_cits.png} \end{subfigure} \begin{subfigure}[b]{.45\linewidth} \centering \footnotesize \caption{} \label{fig:PCscatter} \includegraphics[width=7cm, height=7cm, keepaspectratio]{PCscatter.png} \end{subfigure} \begin{subfigure}[b]{.45\linewidth} \centering \footnotesize \caption{} \label{fig:stripplot_pc1} \includegraphics[width=7cm, height=7cm, keepaspectratio]{stripplot_pc1.png} \end{subfigure} \caption{(a) Precision-recall for predicting blockbuster innovations, with and without controls for technology fields. (b) Violin plot of patent citations within 7 years (log scale) with minimum, first quartile, median, third quartile and maximum, (c) share of patents that have a link to LBIO or blockbuster innovations, by number of citations.
(d) Frequency distribution of patent citations within 7 years, (e) Principal components for Non-LBIO, LBIO and blockbuster groups of patents, (f) Distribution of first principal component for the Non-LBIO, LBIO and blockbuster subsets.} \end{figure*} The second point of interest is to what extent patent citations and other patent quality measures can, under ideal circumstances, be used to separate out radical innovations from less radical innovations. To this end, this study analyzes how well patent quality measures predict four different measures of significant innovation (supplementary materials, Fig. \ref{fig:precisionrecall_all} and Tables \ref{tab:predictsuper_without}-\ref{tab:predictsuper}). The benchmark compares patents linked to 40 innovations known to have been highly successful, here called ``blockbuster innovations'', to patents linked to a set of innovations known to have been incremental and of little or no economic importance (the methodology is further described in supplementary materials). This approach assumes that the patent analyst has access to training data or has knowledge of the parameters for patent quality data from the OECD Patent Quality Indicators database \cite{squicciarini2013} and heterogeneity across technology fields. The patent quality variables are the number of citations received in the 7 years since the patent was published, patent renewal, family size, originality and radicalness \cite{squicciarini2013}, further detailed in supplementary methods and data. Using EPO patent data linked to these innovations, logistic regressions are estimated to find the best-performing models. The findings suggest that while there is a robust positive correlation between patent quality measures and significant innovations, this association is noisy and has low predictive power without the addition of controls for technology fields.
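The evaluation logic can be sketched as follows, assuming a feature matrix of patent quality indicators and a binary label for significant innovations; the data here are simulated and the variable names illustrative, not the actual estimation:

```python
# Minimal sketch: fit a logistic classifier on patent quality features and
# score it by precision (beta) and recall (alpha). Simulated data only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score

rng = np.random.default_rng(0)
n = 400
X = rng.normal(size=(n, 4))            # stand-ins for citations, family size, ...
# Synthetic labels loosely correlated with the first feature
y = (X[:, 0] + rng.normal(scale=1.0, size=n) > 1.0).astype(int)

model = LogisticRegression().fit(X, y)
pred = model.predict(X)
precision = precision_score(y, pred)   # beta in the paper's notation
recall = recall_score(y, pred)         # alpha in the paper's notation
```

In the actual analysis, dummies for technology fields would be added to the feature matrix, which is what lifts the predictive power reported above.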
In the best case uncovered with the present data, the models could identify a fraction $\alpha =0.525$ of all patented innovations, with a precision of $\beta=0.753$ (Fig. \ref{fig:precisionrecall}). Predictions for other definitions of significant innovations have similar or poorer performance (Fig. \ref{fig:precisionrecall_all} and Table \ref{tab:predictsuper_without}-\ref{tab:predictsuper}). Descriptive statistics are informative as to the somewhat weak discriminatory power of patent quality measures. Table \ref{tab:citations} shows that patents not linked to the innovation database have a mean number of citations of 0.761. Innovations have on average more than double the number of citations, while the blockbuster innovations have, on average, only a marginally higher number of citations than the average LBIO innovation. Fig. \ref{fig:frequency_cits} shows that LBIO and blockbuster innovations tend to have higher patent citation counts. The citation distributions of patents connected to the LBIO database are skewed to the right, as are those of the blockbuster innovations. Moreover, the more citations a patent has, the more likely it is to be linked to an innovation (Fig. \ref{fig:sshare_cits}). Similar patterns hold for the other patent quality measures. To illustrate, Fig. \ref{fig:PCscatter} shows principal components of the patent quality measures for LBIO, blockbuster innovations and non-LBIO patents. Fig. \ref{fig:stripplot_pc1} shows the distribution for the first principal component. These results show that there is considerable overlap in the distribution of the three sets, but it is also clear that LBIO and blockbuster innovations score higher on average. Meanwhile, Fig. \ref{fig:frequency_cits} also shows that the vast majority of patents, including patents linked to blockbuster innovations, have a fairly low number of citations. 72\% of all innovations and 67\% of all blockbuster patents had zero or only one citation.
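The principal-component projection used in the comparison above can be sketched as follows, with simulated quality indicators standing in for the OECD measures:

```python
# Sketch: standardize patent quality indicators and project them onto the
# first two principal components. Data are simulated; names illustrative.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# Simulated, right-skewed indicators: citations, family size, renewals, originality
quality = rng.lognormal(mean=0.0, sigma=1.0, size=(300, 4))

scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(quality))
```

Plotting such scores separately for non-LBIO, LBIO and blockbuster patents would reproduce the kind of overlap-with-shifted-means pattern described in the text.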
Considering the above, one must conclude that patent citation counts are a noisy indicator of significance \cite{criscuolo2008}. In other words, the results suggest that patents with high citation counts tend to capture significant innovations. However, the converse does not hold: a low patent citation count does not imply that a patent was insignificant (compare \cite{abrams2018}). \paragraph*{Information loss} Taken together, an optimistic estimate of the information content about actual innovations that can be identified from within the EPO patent system is a fraction of 0.13 of the total information for innovations commercialized in 1977-2015. Using two or three patent office sources could increase the expected information content to 0.17. These figures, however, assume that correct controls can be used, accounting for variations in patenting behavior across technology classes and over time. This is implausible without some type of training data. The results support the notion that measuring innovation requires going beyond single dimensions of innovation indicators. \section*{Discussion} The present study offers new evidence about the overlap in the patent and innovation systems for one of the world's most innovative economies, 1970-2015. A general result is that patents must be viewed as offering a glimpse of innovation activity, but the overlap between the patent and innovation systems is limited. Some studies \cite{fontana2013} have warned about the risks of using patent data without a sound grasp of the extent to which patents provide a partial and perhaps biased representation of innovation activity. The results of this study are in certain respects encouraging, especially concerning patent propensity in the most recent decades, in sectors where patents are the dominant means of appropriation, and especially if one considers the overall patenting level.
The importance of patenting has increased over time with increased international patent law stringency, and the results as regards patent propensity are close to some of the more optimistic figures reported previously \cite{arundel1998, cohen2000}. On the other hand, the results suggest that available patent quality indicators on their own are unlikely to be robust proxies of the quality of innovations. With the use of machine learning \cite{lerner2022} and training data, such as commercialized innovations and appropriate controls for sectoral and temporal variations in patenting behavior, models can be fitted to achieve better results. However, if the goal of the analysis is to capture more significant innovation activity, even the idealized circumstances considered here suggest an information loss of at least 83\%. Overall, these results have implications for our use of patent data, and clearly indicate the necessity of a versatile approach to measuring innovation. Available innovation indicators cover several steps in the innovation process, from scientific production and R\&D expenditure to patenting activity. In this regard, one must emphasize that patent data remains unparalleled as a window on inventor activity, firm collaborations and non-commercialized invention processes. Similarly, patent citation data remains unparalleled as a window on knowledge flows. While there is great variety in indicators in the pipeline of innovation processes \cite{dziallas2019}, a missing link is consistent, long-term data on innovation output. Self-reported innovation data, such as the community innovation surveys \cite{arundel2013}, offer one important alternative for recent years. The innovation database used in this study is based on trade journal sources and is one of two long-term innovation output databases to date \cite{kander2019,have2009}.
Policy makers and researchers interested in monitoring long-term innovation trends should investigate ways of constructing consistent innovation indicators. One possible cost-efficient route is to use alternative methodologies, such as literature-based innovation output with patent matching, to train matching algorithms on patents. The results of this study suggest, however, that these exercises still risk being imprecise. Another route, though more costly and time-consuming, involves prioritizing the construction of similar long-term databases based on expert opinion, trade journals, or other literature. There are also clear implications for innovation policy. The fraction of innovations responding to strengthened patent laws was on average 8\% during the period, peaking at 15.5\% in the 1990s. Although there is no consensus, patent policy instruments may have beneficial effects on knowledge accumulation, but should, if the current results are generalizable, not be viewed as a substitute for innovation policy. \printbibliography \section*{Acknowledgments} I wish to thank Markus Isaksson, Mathias Johansson, Asli Kilicaslan, and Jakob Nyqvist for excellent research assistance, and Frank van der Most for creating the infrastructure that made the patent matching process possible. I also thank Walter G. Park for generously sharing data on intellectual property rights. \textbf{Funding:} I gratefully acknowledge funding support from Sweden's governmental agency for innovation systems, Vinnova (grant no. 2020-01963). \end{refsection} \renewcommand{\theequation}{S\arabic{equation}} \setcounter{equation}{0} \setcounter{table}{0} \setcounter{figure}{0} \renewcommand{\thefigure}{S\arabic{figure}} \renewcommand{\thetable}{S\arabic{table}} \clearpage \begin{refsection} \section*{Supplementary materials} Materials and Methods\\ Supplementary Text\\ Figs.
\ref{fig:matchingprocess} to \ref{fig:precisionrecall_all} \\ Tables \ref{tab:sumstat} to \ref{tab:rounds-stats}\\ References \textit{(38-72)} \subsection*{Materials and methods\footnote{The matching methodology is further described in an unpublished working paper \cite{johansson2022}. Figs. \ref{fig:matchingprocess} and \ref{fig:MLmatching}-\ref{fig:zipf} and Tables \ref{tab:features}-\ref{tab:rounds-stats} reproduced here with consent.}} \label{sec:methods} \paragraph*{Innovation data} A contribution of this paper lies in combining a Swedish innovation output database (SWINNO) with patent data for the period 1970-2015 \cite{Sjoo2013SI,kander2019SI,taalbi2021SI}. The data describes a nationwide longitudinal matching between literature-based innovation output and patents \cite{arundel1998SI, brouwer1999SI, cohen2000SI, fontana2013SI, kleinknecht2002SI}. The innovation output database is based on the literature-based innovation output (LBIO) methodology, proposed in the late 1980s, owing in part to the recognition of methodological issues with patents and R\&D as innovation indicators \cite{Kleinknecht1993}. The major advantages of the LBIO approach include capturing commercialized innovations, as well as enabling long-run time series of innovation activity in firms, industries or countries. There is now a sizeable number of studies based on this approach, including studies on firms or industries in Spain, Japan, Netherlands and the US \cite{acs1990SI, alegrevidal2004SI, coombs1996SI, edwards1984SI, greve2003aSI, Grawe2009SI, santarelli1996SI, Sjoo2013SI, walker2002SI}. However, to date, national databases with a long-run ambition and comprehensive industry coverage exist only for Sweden \cite{Sjoo2013SI, kander2019SI, taalbi2017whatSI, taalbi2021SI} and Finland \cite{palmberg1999SI, Saarinen2005SI, makkonen2013SI, kander2019SI}. 
The LBIO database is based on a selection of 15 trade journals covering product innovations and processes sold on a market for the manufacturing industry and ICT services. The database currently covers in total about 1,200 innovations launched between 1908 and 1969 \cite{taalbi2021SI} and about 4,800 innovations from 1970 to 2019 \cite{kander2019SI}. The trade journals are independent of private interests and were screened for articles written by journalists who made an editorial choice to include an innovation because of its novelty or significance. Hence, the database captures commercialized innovations, whose degree of novelty or significance was clearly stated in independent trade journal articles. \paragraph*{Matching methodology} The matching and linking of a patent to an innovation was carried out by the SWINNO research team \cite{johansson2022SI} using Google Patents, a search engine that indexes patents and patent applications from the EPO, USPTO, the Swedish Patent Office and other countries' patent offices. Fig. \ref{fig:matchingprocess} gives an overview of the matching process. In a first step, innovations were matched. Innovations developed by small and medium-sized firms and innovations with low complexity were easy to screen and match, and this process was therefore done manually. This is particularly the case for firms that have filed only a small number of patents. The vast majority of LBIO innovations, roughly 3,600, were manually screened. However, manual matching is inadequate to ensure patent matching quality for big firms and highly complex innovations, such as systems consisting of many (patentable) parts. For such cases, the research team developed a machine-learning assisted methodology (step 2 in Fig. \ref{fig:matchingprocess}). This step started from keywords taken from trade journal articles, and subsequently trained a machine learning model to match keywords and other information to patent documents.
The next step (step 3) was to perform manual checks on innovations for which the machine-learning model suggested fewer than 20 patents. The final database describes manually matched patents and patent-innovation matches suggested by machine-learning. The overall results are summarized in Fig. \ref{fig:treemap}. The manual matching resulted in 43.5\% of all innovations classified as non-matches, and 33.4\% classified as positive matches. 12.9\% of all innovations could be concluded to be non-patented after the machine learning model failed to produce relevant patents. The remaining innovations were suggested by the machine-learning models to have a patent match. After screening most of these, another 6.0\% of all innovations could be confirmed to be patented, while 1.7\% were found to be false positives. The last two percent were innovations with more than 20 patents. These were not manually screened, but were assumed to have at least one correct patent match. The following sections give further detail about definitions and procedures to assure the quality of the manual and machine-learning assisted matching of innovations and patents. \paragraph*{Manual matching} The matching process was based on trade journal articles and patent documents. To be considered a match, any given patent had to meet the following three criteria: \begin{enumerate} \item The patent must be directly related to the innovation and/or the novel feature of the innovation, as described in the trade journal article(s) \item The patent document must contain a description (not just a title) linking the patent to the innovation \item The patent must be filed within ten years before or after the commercialization year. \end{enumerate} Trade journal articles were essential to determine the relevance of a certain patent.
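Criteria 2 and 3 are mechanical and could be applied as a pre-filter before the human judgment required by criterion 1. A minimal sketch, with illustrative field names:

```python
# Pre-filter implementing criteria 2 and 3: a description must exist, and the
# filing year must fall within +/- `window` years of commercialization.
# Criterion 1 (topical relevance) still requires manual reading.
def passes_prefilter(patent, commercialization_year, window=10):
    if not patent.get("description"):
        return False
    return abs(patent["filing_year"] - commercialization_year) <= window

ok  = passes_prefilter({"description": "A novel gripper...", "filing_year": 1995}, 1990)
bad = passes_prefilter({"description": "", "filing_year": 1995}, 1990)        # no description
far = passes_prefilter({"description": "x", "filing_year": 2005}, 1990)       # outside window
```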
These articles almost always provide detailed information about the innovation, including technical characteristics, the names of firm(s), inventors and personnel involved in the development of the innovation, the year the innovation was commercialized, and background information, including whether the innovation was patented or had an associated patent application. The second and third criteria were enforced to ensure the quality of the matching. Patents lacking abstracts or descriptions were excluded since their relevance could not be decided from the title alone. In deciding what patents to match or rule out, the context of the patent abstract, description, and claim(s) were important. Technological background sections of patent documents could \emph{prima facie} appear related to a particular technology field without being directly related to the patent claim(s). Hence, such sections of patent documents were read with caution. Since technological knowledge ages fast, we also applied a window of ten years before or after the innovation's commercialization year. Although a few exceptions could be found, this window implies a broad definition of the patent-innovation relationship, emphasizing the presence of technical links and allowing for well-known long development times of certain products, e.g., energy innovations and pharmaceuticals. Other studies have argued for smaller windows \cite{fontana2013SI}. \paragraph*{Machine-learning assisted matching} After the manual matching, the remaining challenge was to match innovations from big firms and innovations with high complexity to relevant patents. The patents of large firms, such as ABB, Scania, Volvo, and Ericsson, may range in the order of thousands or several thousands per year, which renders manual screening time consuming and impractical. For this reason, a subset of innovations were identified for which manual classification was deemed impractical or unreliable.
This subset included innovations from large firms with over 200 granted patents, and more complex innovations that contain many patentable parts or require the combination of many development processes, examples of which are biotechnology, robotic systems, and airplane technology. For these innovations, the trade journal sources were read in order to assign a set of keywords. The keywords were selected in isolation from the machine-learning model and there was no upper bound on the number of keywords that could be registered. In the process of finding and matching patents for the above-mentioned innovations, three different subsets of innovations from the SWINNO database were used: first, a set of 99 innovations (commercialized between 1990 and 2015) with (manually) matched patents; second, a set of 645 innovations (1985-2015) that were used to further augment the model; and finally a set of 464 innovations (1970-1985). The machine-learning part of this process used scikit-learn \cite{scikit2011SI}. To solve the machine learning problem, features needed to be constructed. An overview of the features used for machine learning is given in Table \ref{tab:features}. Apart from features involving information about the innovation's characteristics and its commercialization, the keyword information was used in several variables, in terms of the number and share of keywords that occurred in the patent's title, abstract and description. The process also made use of inventors and contact persons mentioned in trade journal articles. \paragraph*{Finding viable patent candidates} To find viable patent candidates, the Google Patents search engine was queried for each innovation with the firm name as well as the inventors (using all registered spelling variants from the SWINNO database) within 5 years of the year of commercialization.
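The keyword-overlap features described above (counts and shares of keywords occurring in the title, abstract and description) might be sketched as follows; the field names and example texts are illustrative, not the actual feature pipeline:

```python
# Count and share of an innovation's keywords found in each patent text field
# (case-insensitive substring matching, as an illustrative simplification).
def keyword_features(keywords, patent):
    features = {}
    for field in ("title", "abstract", "description"):
        text = patent.get(field, "").lower()
        hits = sum(1 for kw in keywords if kw.lower() in text)
        features[f"{field}_hits"] = hits
        features[f"{field}_share"] = hits / len(keywords) if keywords else 0.0
    return features

patent = {
    "title": "Industrial robot arm",
    "abstract": "A robot arm with improved gripper control.",
    "description": "The invention relates to robotic welding systems.",
}
feats = keyword_features(["robot", "gripper", "welding"], patent)
```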
In cases where more than 1,000 matches were found, the selection of patents was reduced by narrowing the year span until the query either returned no more than 1,000 patents or the year span reached 0 (only searching for patents in the year of commercialization). After this limitation, the set of 645 innovations for 1985-2015 yielded 195,000 potential innovation-patent pairs. \paragraph*{Training models} Models were trained on the annotated data in three rounds, as shown in Figure \ref{fig:MLmatching}, and further detailed elsewhere \cite{johansson2022SI}. The results from these rounds are summarized in Table \ref{tab:rounds} and Table \ref{tab:rounds-stats}. Initially, we used a sample of innovations matched manually to patents, for which we had the highest possible degree of confidence in a correct matching. For the second round, we let the model predict innovation-patent pairings from the 645 innovations from larger companies and went through the 300 most likely matches manually. With this additional data we trained a new model, and similarly processed 1,000 pairings, followed by 800, as model performance decreased with this additional data. With this final dataset, we arrived at a model that we used to create short-lists of potential patents. \paragraph*{Manually processing the machine learning assisted matches} To verify the robustness and reliability of the machine-learning-assisted matches, all innovations with fewer than 20 predicted patents were manually processed a final time. Manually processing the suggested pairings resulted in an assessment of whether the patent-innovation match, suggested by the machine-learning model, could be considered a true positive or a false positive. These patents were compared to the description of the innovation and how the constructed keywords were contextually used in the abstract, description, and claim(s), using the same methodology as in the previous manual matching procedure. The results (Fig.
\ref{fig:machinecontrol}) clearly suggest a positive correlation between the fraction of manually verified positives and the machine-learning model's P value, viz. the model's estimated rate of false positives. The results also suggest that the machine-learning model tended to be slightly more confident in the matching than warranted. This overconfidence follows since the selection of potential patents was made to minimize false negatives, whereas false positives were considered less problematic, as they can be dealt with by manual checks. \paragraph*{Manually and machine-learning matched data} \label{sec:results} Figs. \ref{fig:matchstatus}-\ref{fig:matchstatus_share} show the number and share of patented innovations by commercialization year and matching method. The overall addition achieved by the machine-learning approach is only a minor share of the innovations (Fig. \ref{fig:treemap}), and the vast majority of the patented innovations were matched manually. However, the yearly fraction of innovations matched wholly or partially through machine learning could amount to some ten percentage points. Fig. \ref{fig:zipf} shows the number of patents matched to innovations versus the ranking of innovations from most patented to least, following the familiar Zipf's law pattern. Without machine learning (the red observations) it would not be feasible to match the 100 innovations associated with the highest number of patents. \paragraph*{Measure of information content} The main question of interest to this study is how much information a patent analyst has about the members of the set of innovations. To measure this, the present study uses the expected number of patented innovations that a patent analyst can identify. This is measured as the product of patent propensity $\rho$, recall $\alpha$ and precision $\beta$, for reasons detailed below.
The situation can be represented as follows. There are two sets: $X$, the set of innovations, and $Y$, the set of patents. The intersection $X \cap Y$ is the set of patented innovations. The problem for the patent analyst is to tell which ones among the set of patents are innovations. Using patent quality data, the patent analyst sets off to construct a model and makes a prediction of ``identified patents'' $Z$. The set of true positives is the intersection $Z \cap X$, and the share of true positives in the total number of patented innovations that could be identified is a measure of the quantity of information captured, the recall. The true positives are not, due to the presence of false positives, identified with complete certainty. An arbitrary patent drawn from $Z$ will identify an innovation with probability $\vert Z \cap X \vert / \vert Z \vert$, the precision. If the patent analyst is asked to give a list of the $\vert Z \cap X \vert$ true positives, on average $\vert Z \cap X \vert \times \vert Z \cap X \vert / \vert Z \vert$ will be correctly identified. Expressed as a fraction of the total number of innovations, this dimensionless measure of information is \begin{equation} \frac{\vert Z \cap X \vert^{2}}{\vert Z \vert \, \vert X \vert} \end{equation} which is the product of patent propensity, recall and precision, \begin{equation} \rho \times \alpha \times \beta. \end{equation} Expressed in general terms, this measure says how much correct information a receiver has about a particular message sent from a source, as a fraction of the total information in the source. Note that the standard in binary classification is to use the harmonic mean of precision and recall, known as the $F_1$-score or its generalized variants ($F_\beta$), or sometimes the geometric mean, also known as the Fowlkes-Mallows index. These indices quantify the trade-off between precision and recall, but do not offer a quantification of the information content.
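The equivalence between the set-size form and the product $\rho \times \alpha \times \beta$ can be checked numerically; the set sizes below are toy values, not the study's data:

```python
# Information measure: |Z ∩ X|^2 / (|Z| * |X|), equivalently rho*alpha*beta.
def information_content(n_X, n_Z, n_ZX):
    return n_ZX * n_ZX / (n_Z * n_X)

# Toy set sizes: 100 innovations, of which 44 are patented; the model flags
# 50 patents, 40 of which are true positives.
n_X, n_XY, n_Z, n_ZX = 100, 44, 50, 40
rho = n_XY / n_X          # patent propensity |X ∩ Y| / |X|
alpha = n_ZX / n_XY       # recall |Z ∩ X| / |X ∩ Y|
beta = n_ZX / n_Z         # precision |Z ∩ X| / |Z|
info = information_content(n_X, n_Z, n_ZX)
```

Plugging in the figures reported in the main text ($\rho = 0.439$ for any office, $\alpha = 0.525$, $\beta = 0.753$) gives a product of roughly 0.17, consistent with the multi-office information content discussed there.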
In standard information theory, the information obtained about one variable by observing another is typically measured in terms of mutual information \cite{shannon1948SI}. Put otherwise, this measures how much knowing one of these variables reduces the uncertainty, or entropy, of the other. The entropy of one variable is $H(X) = -\sum_{x \in X} p(x) \log p(x)$. The conditional entropy measures the amount of information needed to describe a variable $X$ given another one $Y$, $H(X \vert Y) = - \sum_{x \in X, y \in Y} p(x,y) \log \frac{p(x,y)}{p(y)}$. Mutual information is defined as $I(X;Y) = H(X)-H(X \vert Y)$ or, equivalently, $I(X;Y) = \sum_{x \in X, y \in Y} p(x,y) \log \frac{p(x,y)}{p(x)\,p(y)}$. This measure, while theoretically attractive and widely used, has drawbacks in the current context. Firstly, mutual information does not distinguish between negative and positive associations between variables. Hence, the extreme cases $X=Y$ and $X=1-Y$ have the same mutual information. Secondly, mutual information also takes into account true negatives. This is of no immediate interest in the current analysis. \paragraph*{Logistic model} To analyze the determinants of the propensity to patent an innovation $i$, logistic regression models were estimated, with the dependent variable being whether an innovation was patented (Y/N): \begin{equation} \mathrm{logit}\left(E\left[P_{i}\mid X_{it}\right]\right) = \sum_k \alpha_k {comp}_{ik} + \sum_l \beta_l {nov}_{il} + \gamma {collab}_i + \delta_{s(i)} + \epsilon_{i} \end{equation} where ${comp}_{ik}$ is a set of variables measuring the complexity of the innovation, ${nov}_{il}$ a set of variables measuring its novelty, ${collab}_i$ measures whether the innovation has a collaboration, and $\delta_{s(i)}$ are dummy variables for the sector $s$ of an innovation (ISIC Rev. 3, 2-digit level groups). We also use time dummies in all models (not shown in results).
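The first drawback of mutual information noted above, namely that it cannot distinguish a perfect positive association ($X=Y$) from a perfect negative one ($X=1-Y$), is easy to verify numerically:

```python
# Check that mutual information is identical for X = Y and X = 1 - Y,
# using simulated binary labels.
import numpy as np
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(2)
y = rng.integers(0, 2, size=1000)

mi_pos = mutual_info_score(y, y)      # perfect positive association
mi_neg = mutual_info_score(y, 1 - y)  # perfect negative association
```

Both calls return the entropy of $y$, which is why a signed, directional measure such as the information content defined above is preferred here.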
As a first measure of complexity, we distinguish between simple products and complex systems based on the description of the product innovation in the trade journal. We also classify innovations by the complexity of the knowledge base involved in developing the innovation, distinguishing between low complexity, medium complexity and high complexity. Low complexity involves only one major knowledge type, whereas high developmental complexity involves more than two types of knowledge. The detailed information contained in trade journal articles also puts us in a position to assess the novelty of innovations. On the one hand, articles frequently state whether the innovation is a novelty on the world market. We also assess novelty in relation to the knowledge base of the firm. Radical innovations are those that have required a fundamental reorientation of the firm's knowledge base and/or those that were described in trade journal articles as a radical breakthrough from the perspective of the firm. A major improvement implies that the innovation meant a significant step, but did not require a radical reconfiguration of the firm's knowledge base. As a proxy for the significance of an innovation, we also use the number of times that an innovation was mentioned in distinct edited trade journal articles. \paragraph*{Data on intellectual property rights} To examine the role of generic developments for the propensity to patent in a given country, the analysis uses data on export shares to a given country (Statistics Sweden), and intellectual property rights \cite{ginarte1997SI,park2008SI}. The data on intellectual property rights measures five aspects of property rights protection quinquennially (every fifth year) for 110 countries during the period 1960-2015. Each aspect is measured as an index between 0 and 1.
Patent coverage measures the patentability of eight types of inventions: software, pharmaceuticals, chemicals, food, plant and animal varieties, surgical products, micro-organisms and utility models \cite{park2008SI}. Patent duration is measured as a fraction of 20 years of protection from the date of application or, for grant-based patent systems, 17 years from the date of grant. Patent membership measures whether a country partakes in international treaties, including the Paris convention and its revisions, the Patent Cooperation Treaty (PCT), the protection of new plant varieties (UPOV), the Budapest treaty (microorganism deposits) and the agreement on Trade-Related Intellectual Property Rights (TRIPS). The patent system's enforcement is scored according to whether it provides for (1) preliminary (pre-trial) injunctions, protecting patentees from infringement before trials, (2) contributory infringement, protecting against actions that do not in themselves infringe a patent, or (3) burden-of-proof reversals, under which the burden of proof of non-infringement is placed on another party than the patentee, e.g., a company producing a product. The ``loss of rights'' variable measures the absence of three types of restrictions on patent rights: (1) working requirements, viz. requirements to put the patent into use to enjoy patent protection, (2) compulsory licensing, requiring patentees to share exploitation with third parties, and (3) revocation of patents for non-working. If there are no restrictions, the index takes value 1, and 0 if all are present \cite{ginarte1997SI,park2008SI}. \paragraph*{Methods for analysis of patenting across countries} In order to further our understanding of the determinants of patenting behavior, regressions were carried out to explain the propensity to patent an innovation $i$, commercialized in year $t$, in country $j$. Two sets of regressions were carried out. The appropriate approach depends on the properties of the outcome variable.
Since the (binary) choice to patent an innovation in one country $j$ is neither mutually exclusive, nor generally independent from the choice to patent in another, ordinary logistic or probit regressions will be unsuitable for statistical inference. Instead, a multivariate probit approach was used to account for correlation of the binary outcomes. The propensity of an innovation $i$ to be patented in country $j$ was analyzed through: \begin{equation} y_{ij}^{*} = \beta_{j}^{'} X_{ij} + \epsilon_{ij} \end{equation} with $y_{ij} = 1$ if $y_{ij}^{*} > 0$, otherwise $0$. Specifically, the model used is \begin{equation} y_{ij}^{*}= \sum_k \alpha_k {comp}_{ik} + \sum_l \beta_l {nov}_{il} + \gamma {collab}_i + \delta_{s(i)} + \sum_m \zeta_m {IPR}_{jmt} + \eta X_{jt} + \epsilon_{ijt} \end{equation} where ${IPR}_{jmt}$ are the set of variables that capture aspects of patent laws in country $j$ and year $t$, as detailed above. ${comp}_{ik}$ is a set of variables measuring the complexity of the innovation, ${nov}_{il}$ a set of variables measuring its novelty, ${collab}_i$ measures whether the innovation involved a collaboration, and $\delta_{s(i)}$ are dummy variables for the sector $s$ of an innovation (ISIC Rev. 3, 2-digit level groups). \paragraph*{Decomposition} To understand whether the changes in patenting propensity were driven by generic, sectoral or structural changes it is possible to carry out a straightforward decomposition of the aggregate patent propensity. If we define the average patent propensity $\bar{p}$ as $\bar{p}=\sum_i w_i p_i$ with $w_i$ the share of innovations of a sector $i$ and $p_i$ the patent propensity in sector $i$, we can easily derive the percentage point change as the sum of three components.
\begin{equation} \Delta \bar{p} \equiv \underbrace{\sum_i \Delta w_i p_i}_{{between}} + \underbrace{\sum_i \Delta p_i w_i}_{{within}} + \underbrace{\sum_i \Delta p_i \Delta w_i }_{{interaction}} \end{equation} Apart from the within and between effects, the ``interaction effect'' captures the effect that changes in patent propensity might be (positively or negatively) correlated with changes in sectoral shares (compare the celebrated Price equation; see \cite{frank1997SI}). \subsection*{Supplementary text} \paragraph*{Determinants of the propensity to patent} To gain insight into what patent indicators capture, and fail to capture, logistic regressions were applied to analyze the determinants of whether an innovation $i$ has a patent. The first set of models used variables on innovation characteristics, sectoral dummies, and the role of intellectual property rights, as detailed in materials and methods. The results are shown in Table \ref{tab:logistic}. The baseline model examines the impact of the variables on the overall patent propensity. The first panel includes only sectoral dummies, showing overall a similar picture as in Table \ref{tab:sectoral}. The second panel tested for the impact of complexity, showing that patented innovations tend not to be complex systems, but tend, on the other hand, to have higher developmental complexity. This may seem contradictory, but rather reflects different preconditions and means of appropriation. On the one hand, complex system innovations may be more difficult and costly to copy, there may be significant barriers to entry, and lead time advantages are likely to be a more efficient means of appropriation \cite{arundel1998SI}. On the other hand, high-tech industries such as pharmaceuticals, machinery and electronic engineering are known to have low costs of copying innovations, and therefore high incentives for securing returns from innovation \cite{arundel1998SI, breschi2000SI}.
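Returning to the shift-share decomposition $\Delta \bar{p}$ defined in the materials and methods, the identity can be checked numerically. The sector shares and propensities below are made up for illustration only; they are not figures from the data.

```python
import numpy as np

# Hypothetical sector shares (w) and patent propensities (p) in two periods.
w0 = np.array([0.5, 0.3, 0.2])     # shares of innovations by sector, period 0
w1 = np.array([0.4, 0.3, 0.3])     # shares in period 1
p0 = np.array([0.40, 0.50, 0.10])  # propensity to patent by sector, period 0
p1 = np.array([0.45, 0.55, 0.20])  # period 1

dw, dp = w1 - w0, p1 - p0
between = np.sum(dw * p0)      # shifts in sectoral composition
within = np.sum(dp * w0)       # changes in propensity within sectors
interaction = np.sum(dp * dw)  # cross term: correlated changes in p and w

# The three components sum exactly to the change in the aggregate propensity.
total = np.sum(w1 * p1) - np.sum(w0 * p0)
assert np.isclose(between + within + interaction, total)
print(between, within, interaction)
```

Because $w_1 p_1 = (w_0 + \Delta w)(p_0 + \Delta p)$, the identity holds exactly, not just approximately.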
The third panel included measures of novelty. The results unanimously suggest that patented innovations are associated with all kinds of novelty and significance. We observe significant and robust positive associations with the variables measuring whether the innovation is new to the world, radical or a major improvement, as well as the number of sources mentioning an innovation, thought to be a proxy for significance. Innovations that result from collaborations between many firms are, overall, less likely to be patented, but the finding is less robust. Panels 4-6 use as dependent variable whether the innovation has a patent in EPO, USPTO or Sweden. The results are overall very similar and hence robust across different patent indicators. Panels 7 and 8 run complementary regressions. Panel 7 runs a negative binomial count model regression to predict the number of patents an innovation has, with similar results, except that complex systems are now insignificant. Panel 8 predicts the number of countries in which the innovation is patented, again using a negative binomial count model. Again, the results are very similar to the other specifications. Notably, the number of patents an innovation has, and the number of countries in which an innovation has patents, are positively correlated with our measures of novelty. This is in line with the notion that patent family size captures significance \cite{lanjouw2004SI}. \paragraph*{Determinants of patenting across countries} Since the multivariate probit model is computationally intensive, the analysis focused on the five most common national patent offices: Sweden, the US, Germany, Japan and Canada. Together these five patent offices account for virtually all patented innovations (98.7\%, compare Fig. \ref{fig:patprop_cum}). For completeness, Table \ref{tab:countrylevel} also reports ordinary logistic models for a panel of all countries with available IPR data.
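To illustrate why correlated office-level choices call for a joint model, the following sketch simulates the latent-variable mechanism $y_{ij} = 1$ iff $y_{ij}^{*} > 0$ for two patent offices. The error correlation and linear indices are invented for illustration; real estimation uses simulated maximum likelihood.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

rho = 0.6  # assumed cross-office correlation of the latent errors
cov = [[1.0, rho], [rho, 1.0]]
eps = rng.multivariate_normal([0.0, 0.0], cov, size=n)

xb = np.array([0.2, -0.1])   # assumed linear indices beta_j' X_ij
ystar = xb + eps             # latent propensities y*_ij
y = (ystar > 0).astype(int)  # patent filed in office j iff y*_ij > 0

# The binary outcomes inherit part of the latent correlation, which
# separate single-equation logit/probit models would ignore.
phi = np.corrcoef(y[:, 0], y[:, 1])[0, 1]
print(round(phi, 2))
```

With a latent correlation of 0.6 and thresholds near zero, the induced correlation between the binary outcomes is around 0.4, visibly far from the independence assumed by office-by-office regressions.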
Table \ref{tab:mvprobit} and Table \ref{tab:countrylevel} show overall similar results, but the multivariate probit regressions suggest the presence of cross-country heterogeneity in the effects of IPR on patent propensity. Membership in international treaties (e.g., PCT and TRIPS) has the most consistent positive effect on patent propensity. The overall effect of duration (Table \ref{tab:countrylevel}) is positive, but of the five focal countries, only Sweden and Japan saw changes in patent duration (Table \ref{tab:mvprobit}). The effect of patent coverage (the patentability of software, pharmaceuticals, chemicals, food, plant and animal varieties, and so on) is less consistent, both in the multivariate probit and logistic regressions. Loss of rights, measuring the absence of restrictions on patent rights, e.g., compulsory licensing, has a negative impact on patent propensity in general, but a positive coefficient for Sweden. The negative signs observed for patent coverage and loss of rights could reflect an anticommons effect \cite{heller1998SI}. Using these numbers, it is possible to calculate an upper bound of the importance of IPR for innovation activity. A somewhat generous interpretation of the results is that a patented innovation that depends greatly on changes in IPR would not have been forthcoming without them. This is ``generous'', because a great importance of IPR does not necessarily imply that patents were immediately important to the development or commercialization of the innovation, but may instead imply that innovators have found it of greater importance to employ defensive patents, due to a generally increased use of patents \cite{blind2009SI}. For this reason, the coefficients should be viewed as estimating an \emph{upper bound} of the importance of IPR for innovation activity. Conversely, innovations that were not patented or whose patenting choice does not depend on IPR ought to be viewed as relatively independent of policy changes.
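The mechanics of such an upper-bound calculation are simple: given fitted probit coefficients, recompute the predicted patenting probability with the IPR covariates held at earlier values and take the difference. The sketch below uses only the standard library; all coefficients and index values are invented, not estimates from the paper.

```python
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# Invented probit coefficients for two IPR indices (say, membership and
# duration) and the combined contribution of all other covariates.
beta_ipr = [0.8, 1.9]
xb_other = -0.6

ipr_actual = [1.0, 0.85]  # observed index values, one innovation-country pair
ipr_early = [0.4, 0.55]   # counterfactual earlier index values

def predict(ipr):
    return norm_cdf(xb_other + sum(b * x for b, x in zip(beta_ipr, ipr)))

p_actual, p_counter = predict(ipr_actual), predict(ipr_early)

# Summing such differences over innovations gives the expected number of
# "lost" patented innovations under the counterfactual IPR regime.
print(round(p_actual - p_counter, 3))
```

The difference in predicted probabilities, aggregated over all innovation-country pairs, is the counterfactual quantity discussed below.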
To illustrate the magnitude of the importance of IPR, one may assume that the internationally strengthened IPR in the 1970s, 1980s and 1990s never happened and estimate counterfactual rates of patenting for each innovation and country, using the IPR indices for 1970. Using the multivariate probit model, the probability that an innovation had at least one patent was estimated for the ordinary regression and a regression with counterfactual 1970 levels of IPR (coverage, membership, loss of rights, patent duration and enforcement). The number of ``lost'' innovations is the difference in the expected number of patented innovations with the historically accurate IPR levels and the counterfactual. The results are shown in Fig. \ref{fig:counterfactual}. Under the above interpretation, an estimated 8\% of all innovations would not have been forthcoming without the strengthened patent laws. After the TRIPS agreement, signed in 1994, this figure was 9.8\%, reaching a peak of 15.5\% in the late 1990s. As an indication, Fig. \ref{fig:counterfactual} also shows the share of patents whose filing date precedes the commercialization date. One may reason that patents filed after commercialization were either strategic, or less important for the development of the innovation \cite{blind2009SI}. The results do not indicate an increase of post-commercialization patenting over the period, as could be expected if patenting activity had largely shifted towards predominantly defensive strategies. \paragraph*{Predicting innovations from patent data} To investigate the ability of patent quality data to predict significant innovations, this study uses the EPO patents extracted from Google Patents and matched to the OECD Patent Quality Indicators database (January 2021; \cite{squicciarini2013SI}). The analysis departs from classifying patents into (1) those linked to significant innovations and (2) a set of non-significant patents.
Since no such classification is beyond criticism, we test the discriminatory capacity of patent quality measures on four classifications. A basic comparison would be between all patents linked to the LBIO innovation database, and patents that have no such link. A problem with this comparison, however, is that it is plausible that there are significant patents among those not linked to the LBIO innovation database. For this reason, patents with more than 5 years of renewal were excluded from the analysis, and LBIO innovations were compared with those patents that were only renewed for 5 years or less. As the benchmark, the analysis makes use of a set of 40 ``blockbuster innovations'', widely known to have been major success stories with a major economic impact. These are pooled together from written sources and interviews with major innovating firms. A first source lists 100 major Swedish patented innovations from 1945 to 1980, selected on the basis of having generated a turnover of at least 3.5 million Swedish krona in 1980 prices \cite{wallmark1991SI}. Another source, an extended list of major Swedish innovations during the 20th century up until 2002 \cite{sedig2002SI}, was used to identify more recent major successes including Skype, Spotify and the bicycle helmet Hövding. In addition, interviews were carried out with research directors in major companies to separate highly successful from incremental or unimportant products. The companies are Ericsson, AGA, Atlas Copco and Sandviken. These innovations correspond to 923 EPO patents. Two other definitions of significant innovations are also used as robustness controls: \begin{itemize} \item ``New to the world'', 795 patents linked to LBIO innovations, described as new on the world market and mentioned in at least 3 journal articles, excluding innovations that are incremental from the firm perspective. \item ``PCA'', selection of 807 patents.
An index (principal components) based on the number of trade journal sources of an innovation, its market novelty and firm novelty. \end{itemize} The last subset is constructed from principal components analysis (PCA) on the variables market novelty, firm novelty and the number of sources in which the innovation is mentioned to construct an index of overall significance. The innovations included are those that score in the highest decile. Incremental patents are identified by using trade journal information and interviews. Non-significant innovations are innovations that were described in trade journal articles as incremental from the firms' perspective, viz. a product improvement, were not indicated as new to the Swedish or world market, and were only mentioned in one trade-journal source. This set also includes innovations that were confirmed by interviewees to have been of lesser or no importance. All selections of significant innovations are compared against the identified incremental patents. An issue is that the performance of the regressions, including the precision and recall, is influenced by the ratio of positives (significant patents) to negatives (incremental patents). One way of approaching this issue would be to assume that the ratio of positive to negative outcomes is the same as the ratio of LBIO to Non-LBIO patents. The fraction of LBIO patents is 4.8\%. This assumption, however, risks underestimating the performance since there may be significant patents among the Non-LBIO patents. Instead the approach is to identify the most significant innovations among the LBIO data and estimate upper bounds. The criteria of the above selections are chosen so as to achieve a fraction of the ca 8-9\% most significant patents in the LBIO data, and the ratio of positive to negative outcomes in the models is constructed such that \emph{at least} as many of the observations are positive outcomes.
In practice, the fraction of positive outcomes is above 10\%, implying that the results are possibly upward biased. As predictive variables, the regressions use the number of patent citations (forward citations) within 7 years from the publication date, the number of years for which a patent was renewed, and the patent family size \cite{squicciarini2013SI}. In addition, the regressions include two indices. The originality of a patent captures the breadth of knowledge (technology fields) that a patent relies on. The originality of a patent $p$ is calculated as \cite{squicciarini2013SI} \begin{equation} {Originality}_p = 1 - \sum_{j}^{n_p} s_{pj}^2 \end{equation} where $s_{pj}$ is the share of citations made by patent $p$ to patent class $j$ out of the $n_p$ patent codes contained in the patents cited by patent $p$. The analysis also makes use of an index of radicalness \cite{squicciarini2013SI} to capture the diversity in the technologies that a patent relies upon, calculated as \begin{equation} {Radicalness}_p = \sum_{j}^{n_p} {CT}_{j}/n_p \end{equation} with $CT_j$ the number of technology classes of a patent $j$ cited by patent $p$ and $n_p$ the number of patent codes contained in the patents cited by patent $p$. Descriptive statistics show differences in the number of patent citations received depending on whether the patent was connected to the LBIO, a blockbuster patent, new to the world or the PCA index (Table \ref{tab:citations}). Patents not connected to the LBIO database have a mean number of citations of 0.76. LBIO innovations have on average more than double the number of citations (1.57), while the blockbuster innovations, selected as being the most successful Swedish innovations, have, on average, only a somewhat higher number of citations than the average LBIO innovation (1.82). Figs. \ref{fig:PCscatter} and \ref{fig:stripplot_pc1} use principal components to compare the overall properties of the non-LBIO, LBIO and blockbuster samples.
These are based on citations (log) when technology fields and filing dates are controlled for, originality, radicalness, renewal and family size (see Table \ref{tab:pca}). Regressions are run without and with dummies for the patents' technology field (Tables \ref{tab:predictsuper_without} and \ref{tab:predictsuper}, respectively). The results suggest first of all that LBIO innovations (model 1) are, again, linked to several quality measures. A high number of forward citations, high originality, patent family size and patent renewal are all positively associated with LBIO innovations. Models 2-5 compare patents linked to significant innovations with non-significant patents as outlined above. From these models it is clear that none of the patent quality measures are, on their own, a consistent predictor of significant innovations. The results vary between the definitions used to identify significance. The LBIO data has expected (positive) signs in all coefficients, but for the blockbuster benchmark only family size has a significant positive effect. The most consistent predictor is patent renewal, although it has no significant positive effect for blockbuster patents. The predictive power of the model may also be evaluated in terms of precision and recall, shown for Models 2-5 in Figure \ref{fig:precisionrecall_all}. The best performing model is the one predicting blockbuster innovations, achieving a maximum recall $\alpha$ of 0.561 and precision $\beta$ of 0.7 when controls for technology field are included. This results in a maximum product of 0.393. Similar results are achieved for the models predicting new-to-the-world innovations and the principal components.
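The precision/recall evaluation used above can be sketched generically: score patents with a fitted model, sweep a classification threshold, and compute recall ($\alpha$) and precision ($\beta$) at each point. The labels and scores below are synthetic, chosen only so that the share of positives is near the 10\% range discussed in the text.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# Synthetic labels: roughly 12% "significant" patents.
y = rng.random(n) < 0.12
# Synthetic model scores: significant patents score higher on average.
scores = np.where(y, rng.normal(1.0, 1.0, n), rng.normal(0.0, 1.0, n))

results = {}
for t in (0.5, 1.0, 1.5):
    pred = scores > t
    tp = (pred & y).sum()
    recall = tp / y.sum()                # alpha: share of significant patents found
    precision = tp / max(pred.sum(), 1)  # beta: share of flagged patents that are significant
    results[t] = (recall, precision)
    print(t, round(recall, 3), round(precision, 3), round(recall * precision, 3))
```

Raising the threshold trades recall for precision; the product $\alpha \times \beta$ reported in the text summarizes each point on this trade-off with a single number.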
\clearpage \subsection*{Supplementary Figures} \begin{figure*}[hptb] \centering \begin{tikzpicture}[ bigcircle/.style={ text width=1.6cm, align=center, line width=1mm, draw, circle, font=\sffamily\footnotesize }, desc/.style 2 args={ text width=4cm, font=\sffamily\scriptsize\RaggedRight, label={[#1,yshift=-1.5ex,font=\sffamily\footnotesize]above:#2} }, node distance=3mm and 5mm ] \node [bigcircle] (circ1) {1. Manual matching}; \node [desc={black}{ },below=of circ1] (list1) { \begin{itemize} \setlength\itemsep{0pt} \item Manual matching of innovations to patents \item Selection of innovations with high complexity or developed by large firms \end{itemize} }; \node [bigcircle,black!50!red,right=of list1] (circ2) {2. Machine-learning assisted method}; \node [desc={black!50!red}{ },text=black!50!red, above=of circ2] (list2) { \begin{itemize} \setlength\itemsep{0pt} \item Assignment of keywords \item Feature engineering \item Selection and training of ML model \end{itemize} }; \node [bigcircle,black!20!red,right=of list2] (circ3) {3. Manual checks on result}; \node [desc={black!20!red}{},text=black!20!red, below=of circ3] (list3) { \begin{itemize} \setlength\itemsep{0pt} \item Manual checks of innovations with less than 20 patent matches \end{itemize} }; \node [bigcircle,black!50!blue,right=of list3] (circ4) {4. 
Final patent-innovation pairing}; \node [desc={black!50!blue}{},text=black!50!blue, above=of circ4] (h) { }; \draw [black!80] (circ1) -- (circ2) -- (circ3) -- (circ4); \end{tikzpicture} \caption{Overview of matching process} \label{fig:matchingprocess} \end{figure*} \begin{figure*} \centering \footnotesize \begin{tikzpicture}[xscale = 2.5, font=\sffamily, mystyle/.style={draw=white, row sep=-\pgflinewidth, thick, text=black, font=\sffamily\bfseries}, ] \pie[square, style={mystyle}, color={ yellow!20!red, blue!70!white, yellow!50!red, blue!30!white, yellow!80!red, blue!10!white }, text=inside, ]{43.5/{Manually, No}, 33.4/{Manually, Yes}, 12.9/{ML, No}, 6.0/{Both, Yes},1.7/{Both, No}, 2.6/{ML, Yes}} \end{tikzpicture} \caption{Share of innovations by matching method and matching status. Checked manually, using machine-learning (ML), or both (ML + manually). Matched to at least one patent (Yes) or matched to no patents (No). Innovations commercialized in 1970-2015.} \label{fig:treemap} \end{figure*} \begin{figure*} \centering \begin{tikzpicture}[ bigcircle/.style={ text width=1.6cm, align=center, line width=2mm, draw, circle, font=\sffamily\footnotesize }, desc/.style 2 args={ text width=3cm, font=\sffamily\scriptsize\RaggedRight, label={[#1,yshift=-1.5ex,font=\sffamily\footnotesize]above:#2} }, node distance=10mm and 2mm ] \node [bigcircle] (circ1) {Round 1}; \node [desc={black}{ },below=of circ1] (list1) { \begin{itemize} \setlength\itemsep{0pt} \item Train using 293 verified pairings (99 innovations) \end{itemize} }; \node [bigcircle,black!50!red,right=of list1] (circ2) {Round 2}; \node [desc={black!50!red}{ },text=black!50!red, above=of circ2] (list2) { \begin{itemize} \setlength\itemsep{0pt} \item Manual assessment of 300 matches \item 474 pairings \end{itemize} }; \node [bigcircle,black!20!red,right=of list2] (circ3) {Round 3}; \node [desc={black!20!red}{},text=black!20!red, below=of circ3] (list3) { \begin{itemize} \setlength\itemsep{0pt} \item 2145 pairings
\item 885 manually classified pairings for training \end{itemize} }; \node [bigcircle,green!60!blue,right=of list3] (circ4) {Final step: Prediction }; \node [desc={green!60!blue}{},text=green!60!blue, above=of circ4] (h) { \begin{itemize} \setlength\itemsep{0pt} \item 2,604 matches 1970-1984 \item 11,487 matches 1985-2015 \end{itemize} }; \draw [dashed,black!80] (circ1) -- (circ2) -- (circ3) -- (circ4); \end{tikzpicture} \caption{Overview of machine-learning process} \label{fig:MLmatching} \end{figure*} \begin{figure*}[htbp] \centering \begin{subfigure}[b]{.45\linewidth} \caption{} \label{fig:machinecontrol} \includegraphics[width=7cm, height=7cm, keepaspectratio]{MLP.png} \end{subfigure} \centering \begin{subfigure}[b]{.45\linewidth} \centering \caption{} \label{fig:matchstatus} \includegraphics[width=7cm, height=7cm, keepaspectratio]{matchstatus.png} \end{subfigure} \begin{subfigure}[b]{.45\linewidth} \centering \caption{} \label{fig:matchstatus_share} \includegraphics[width=7cm, height=7cm, keepaspectratio]{matchstatus_share.png} \end{subfigure} \begin{subfigure}[b]{.45\linewidth} \centering \caption{} \label{fig:zipf} \includegraphics[width=7cm, height=7cm, keepaspectratio]{zipf.png} \end{subfigure} \caption{ (a) Manual validation. P-value from machine-learning (binned to two decimals) versus share of correct matchings in manual controls for 2,155 patent-innovation pairings, 1970-2015. Only suggested patent-innovation pairings with $P > 0.5$ were considered. (b) Number of patented innovations by commercialization year and matching method, (c) Share of innovations patented, by commercialization year and matching method. 
(d) Number of matched patents per innovation (y axis) and innovation rank, from the innovation with the most patents to the least (x axis).} \end{figure*} \begin{figure*}[htbp] \centering \includegraphics[scale=0.4]{precisionrecall_all.png} \caption{(a) Precision and recall for prediction of LBIO (adjusted), (b) Blockbuster innovations, (c) new-to-the-world, and (d) principal components index. Predictions including dummies for technology field in black, without in red. The convex lines are points that achieve $\alpha \times \beta$ of 0.2, 0.3 and 0.4.} \label{fig:precisionrecall_all} \end{figure*} \clearpage \subsection*{Supplementary tables} \begin{table*}[htbp] \centering \caption{Summary statistics of patent propensity, by commercialization year, 1970-1989 (top panel), 1990-2015 (second panel) and total (bottom panel) \label{tab:sumstat}} \footnotesize \begin{tabular}{l c c c}\hline\hline \multicolumn{1}{c}{\textbf{Variable}} & \textbf{Mean} & \textbf{Std. Dev.} & \textbf{N}\\ \hline \emph{1970-1989} & & & \\ Patented (Y/N) & 0.377 & 0.485 & 1968\\ EPO$^{***}$ & 0.181 & 0.385 & 1350\\ USPTO & 0.266 & 0.442 & 1968\\ SE & 0.308 & 0.462 & 1968\\ JPO & 0.195 & 0.396 & 1968\\ \hline \emph{1990-2015} & & & \\ Patented (Y/N) & 0.488 & 0.5 & 2492\\ EPO & 0.414 & 0.493 & 2492\\ USPTO & 0.378 & 0.485 & 2492\\ SE & 0.353 & 0.478 & 2492\\ JPO & 0.255 & 0.436 & 2492\\ \hline \emph{1970-2015} & & & \\ Patented (Y/N) & 0.439 & 0.496 & 4460\\ EPO$^{***}$ & 0.332 & 0.471 & 3842\\ USPTO & 0.328 & 0.47 & 4460\\ SE & 0.333 & 0.471 & 4460\\ JPO & 0.229 & 0.42 & 4460\\ \hline \end{tabular} \caption*{\footnotesize{ $^{***}$ EPO was established in 1977. The figures therefore are based on innovations commercialized 1977-2015.
} } \end{table*} \begin{table*}[hptb] \footnotesize \centering \begin{tabular}{lHccccH} \hline & & 1970-1989 & & 1990-2015 & & \\ Sector & period1 & Innovations & Share patented & Innovations & Share patented & \\ \hline Foodstuff & 1 & 11 & 0.273 & 36 & 0.417 & 12 \\ Textiles & 1 & 11 & 0.545 & 17 & 0.471 & 6 \\ Wood & 1 & 27 & 0.296 & 56 & 0.393 & 12 \\ Pulp \& paper & 1 & 19 & 0.211 & 45 & 0.489 & 10 \\ Publishing & 1 & - & 0.500 & 6 & 0.333 & 12 \\ Petroleum & 1 & - & - & - & - & 12 \\ Chemicals & 1 & 50 & 0.420 & 80 & 0.600 & 12 \\ Pharmaceuticals & 1 & 18 & 0.556 & 52 & 0.846 & 11 \\ Plastics & 1 & 100 & 0.420 & 115 & 0.504 & 12 \\ Other non-metallic & 1 & 27 & 0.481 & 14 & 0.357 & 7 \\ Basic metals & 1 & 49 & 0.469 & 54 & 0.519 & 3 \\ Fabricated metals & 1 & 121 & 0.455 & 118 & 0.449 & 12 \\ Machinery & 1 & 657 & 0.451 & 486 & 0.525 & 12 \\ Computers & 1 & 140 & 0.214 & 81 & 0.346 & 12 \\ Electrical app. & 1 & 80 & 0.338 & 102 & 0.549 & 1 \\ Telecom. eq. & 1 & 88 & 0.227 & 244 & 0.492 & 2 \\ Electronic eq. & 1 & 272 & 0.316 & 402 & 0.557 & 10 \\ Automotive & 1 & 68 & 0.368 & 73 & 0.438 & 12 \\ Other transp. eq. & 1 & 58 & 0.190 & 41 & 0.244 & 12 \\ Other manufacturing & 1 & 16 & 0.188 & 27 & 0.444 & 12 \\ Recycling & 1 & 12 & 0.583 & 13 & 0.462 & 12 \\ Telecommunication services & 1 & - & - & 16 & 0.625 & 12 \\ Software & 1 & 40 & 0.0750 & 295 & 0.322 & 3 \\ R\&D & 1 & - & - & 36 & 0.806 & 12 \\ Other business services & 1 & 75 & 0.533 & 60 & 0.383 & 5 \\ \hline \end{tabular} \caption{Patent propensity across sectors (ISIC Rev. 3), all patent offices.
\footnotesize{Note: Results not given when sectoral counts are below five.}} \label{tab:sectoral} \end{table*} \begin{table}[hptb] \footnotesize \centering \begin{longtable*}{p{2.5cm}ccccccHcHcH} \hline & (1) & (2) & (3) & (4) & (5) & (6) & (7) & (7) & (7) & (8) & (8) \\ VARIABLES & Baseline & Complexity & Novelty & EPO & USPTO & SE & WO & \# patents & \# patents & \# countries & \# countries \\ \hline & & & & & & & & & & & \\ Simple product & & 0.0366 & -0.0235 & -0.0540 & -0.153 & -0.160 & -0.0105 & -0.0364 & & 0.176 & \\ & & (0.104) & (0.108) & (0.126) & (0.115) & (0.112) & (0.131) & (0.0794) & & (0.116) & \\ Complex system & & -0.321*** & -0.295*** & -0.0673 & -0.322*** & -0.199** & -0.0429 & -0.119* & & -0.188* & \\ & & (0.0922) & (0.0949) & (0.111) & (0.101) & (0.0984) & (0.116) & (0.0701) & & (0.102) & \\ Low dev. complexity & & -0.226** & -0.132 & -0.294** & -0.170 & -0.0496 & -0.255* & -0.0873 & & -0.242** & \\ & & (0.0964) & (0.0992) & (0.125) & (0.109) & (0.105) & (0.130) & (0.0789) & & (0.109) & \\ High dev.
complexity & & 0.520*** & 0.317*** & 0.356*** & 0.505*** & 0.324*** & 0.406*** & 0.271*** & & 0.244** & \\ & & (0.0941) & (0.0981) & (0.108) & (0.0995) & (0.0986) & (0.111) & (0.0670) & & (0.103) & \\ New to the world & & & 0.161* & 0.0872 & 0.332*** & 0.136 & 0.0774 & 0.157*** & & 0.263*** & \\ & & & (0.0836) & (0.0948) & (0.0860) & (0.0849) & (0.0988) & (0.0595) & & (0.0929) & \\ Radical & & & 0.745*** & 0.662*** & 0.692*** & 0.643*** & 0.872*** & 0.716*** & & 0.709*** & \\ & & & (0.114) & (0.139) & (0.126) & (0.121) & (0.146) & (0.0929) & & (0.125) & \\ Major improvement & & & 0.233** & 0.164 & 0.245** & 0.266** & 0.213 & 0.271*** & & 0.367*** & \\ & & & (0.109) & (0.137) & (0.122) & (0.117) & (0.144) & (0.0916) & & (0.118) & \\ Sources & & & 0.349*** & 0.304*** & 0.313*** & 0.208*** & 0.327*** & 0.160*** & & 0.143*** & \\ & & & (0.0367) & (0.0355) & (0.0335) & (0.0293) & (0.0370) & (0.0173) & & (0.0329) & \\ Collaboration & & -0.0656 & -0.184** & -0.161* & -0.287*** & -0.199** & -0.132 & -0.129** & & -0.216** & \\ & & (0.0798) & (0.0832) & (0.0969) & (0.0897) & (0.0868) & (0.100) & (0.0625) & & (0.0909) & \\ Textiles & 0.796** & 0.504 & 0.677* & 0.636 & 0.857** & 0.357 & 0.315 & 0.483* & & 0.280 & \\ & (0.377) & (0.371) & (0.385) & (0.458) & (0.391) & (0.398) & (0.492) & (0.280) & & (0.418) & \\ Chemicals & 0.875*** & 0.525** & 0.515** & 0.569** & 0.456* & 0.158 & 0.536** & 0.259 & & 0.217 & \\ & (0.227) & (0.217) & (0.226) & (0.256) & (0.234) & (0.233) & (0.266) & (0.164) & & (0.241) & \\ Pharmaceuticals & 1.906*** & 1.393*** & 1.210*** & 1.501*** & 1.388*** & 0.0179 & 1.425*** & 1.063*** & & 0.885*** & \\ & (0.321) & (0.313) & (0.320) & (0.325) & (0.313) & (0.285) & (0.327) & (0.172) & & (0.301) & \\ Plastics \& rubber & 0.658*** & 0.463** & 0.507*** & 0.314 & -0.127 & 0.274 & 0.298 & 0.103 & & 0.130 & \\ & (0.197) & (0.180) & (0.185) & (0.222) & (0.208) & (0.194) & (0.231) & (0.145) & & (0.200) & \\ Basic metals & 0.722*** & 0.481** & 0.579** & 0.654** & 
0.592** & 0.297 & 0.284 & 0.227 & & 0.346 & \\ & (0.245) & (0.234) & (0.242) & (0.285) & (0.249) & (0.252) & (0.303) & (0.188) & & (0.262) & \\ Fabricated metals & 0.567*** & 0.385** & 0.453*** & 0.390* & 0.106 & 0.394** & 0.289 & 0.189 & & 0.223 & \\ & (0.192) & (0.171) & (0.174) & (0.209) & (0.189) & (0.179) & (0.220) & (0.137) & & (0.192) & \\ Machinery & 0.747*** & 0.562*** & 0.623*** & 0.399*** & 0.324** & 0.485*** & 0.432*** & 0.297*** & & 0.261* & \\ & (0.151) & (0.123) & (0.126) & (0.151) & (0.134) & (0.130) & (0.158) & (0.0984) & & (0.140) & \\ Computers & -0.266 & -0.520*** & -0.503** & -0.576** & -0.670*** & -0.764*** & -0.407 & -0.331** & & -0.850*** & \\ & (0.210) & (0.191) & (0.197) & (0.246) & (0.219) & (0.211) & (0.255) & (0.156) & & (0.207) & \\ Electrical app. & 0.570*** & 0.317* & 0.216 & 0.327 & -0.0504 & -0.0839 & 0.379 & 0.162 & & -0.0929 & \\ & (0.206) & (0.187) & (0.193) & (0.227) & (0.206) & (0.202) & (0.236) & (0.143) & & (0.211) & \\ Electronic eq. & 0.543*** & 0.247* & 0.223 & 0.285* & 0.116 & 0.0213 & 0.356** & 0.113 & & -0.200 & \\ & (0.160) & (0.133) & (0.137) & (0.159) & (0.145) & (0.142) & (0.166) & (0.105) & & (0.151) & \\ Software & -0.388** & -0.643*** & -0.750*** & -0.916*** & -0.600*** & -1.323*** & -0.756*** & -0.529*** & & -1.106*** & \\ & (0.187) & (0.166) & (0.172) & (0.196) & (0.181) & (0.201) & (0.197) & (0.131) & & (0.183) & \\ R\&D & 2.104*** & 1.719*** & 1.471*** & 1.788*** & 1.757*** & -0.123 & 1.547*** & 0.937*** & & 0.565 & \\ & (0.447) & (0.440) & (0.450) & (0.455) & (0.454) & (0.381) & (0.443) & (0.218) & & (0.396) & \\ Constant & -1.397*** & -1.144*** & -1.886*** & -3.986*** & -1.823*** & -2.569*** & -5.629*** & -1.633*** & & -0.0821 & \\ & (0.282) & (0.272) & (0.297) & (0.493) & (0.315) & (0.367) & (1.038) & (0.244) & & (0.308) & \\ Observations & 4,460 & 4,460 & 4,460 & 3,842 & 4,460 & 4,460 & 4,126 & 4,460 & 4,460 & 4,460 & 4,460 \\ R-squared & 0.0485 & 0.0541 & 0.0938 & 0.150 & 0.105 & 0.0844 & 0.217 & 0.0768 
& 0.0768 & 0.0206 & 0.0206 \\ \hline \multicolumn{12}{c}{ Standard errors in parentheses} \\ \multicolumn{12}{c}{ *** p$<$0.01, ** p$<$0.05, * p$<$0.1} \\ \end{longtable*} \caption{Logistic regression (log odds ratios). Dependent variables as described in text. Selected sectors shown.} \label{tab:logistic} \end{table} \begin{table*}[hptb] \footnotesize \centering \begin{tabular}{lcccccHHHHHHHHHH} \hline & (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) & (10) & (11) & (12) & (13) & (14) & (15) \\ VARIABLES & Sweden & US & Germany & Japan & Canada & Baseline & Baseline & Baseline & Baseline & Baseline & Baseline & Baseline & Baseline & Baseline & Baseline \\ \hline Simple product & -0.0662 & -0.0652 & -0.0191 & -0.0575 & -0.0159 & & & & & & & & & & \\ & (0.0651) & (0.0660) & (0.0674) & (0.0681) & (0.0712) & & & & & & & & & & \\ Complex system & -0.131** & -0.201*** & -0.169*** & -0.209*** & -0.136** & & & & & & & & & & \\ & (0.0576) & (0.0578) & (0.0590) & (0.0595) & (0.0633) & & & & & & & & & & \\ Low dev. complexity & -0.0501 & -0.115* & -0.151** & -0.126* & -0.268*** & & & & & & & & & & \\ & (0.0618) & (0.0630) & (0.0662) & (0.0668) & (0.0724) & & & & & & & & & & \\ High dev. 
complexity & 0.220*** & 0.342*** & 0.192*** & 0.303*** & 0.178*** & & & & & & & & & & \\ & (0.0583) & (0.0585) & (0.0586) & (0.0586) & (0.0617) & & & & & & & & & & \\ New to the world & 0.0782 & 0.180*** & 0.162*** & 0.150*** & 0.175*** & & & & & & & & & & \\ & (0.0504) & (0.0504) & (0.0514) & (0.0515) & (0.0543) & & & & & & & & & & \\ Radical & 0.341*** & 0.380*** & 0.293*** & 0.336*** & 0.386*** & & & & & & & & & & \\ & (0.0708) & (0.0722) & (0.0749) & (0.0768) & (0.0844) & & & & & & & & & & \\ Major improvement & 0.101 & 0.106 & 0.104 & 0.0894 & 0.192** & & & & & & & & & & \\ & (0.0679) & (0.0695) & (0.0717) & (0.0741) & (0.0815) & & & & & & & & & & \\ Sources & 0.103*** & 0.132*** & 0.0872*** & 0.129*** & 0.106*** & & & & & & & & & & \\ & (0.0147) & (0.0147) & (0.0152) & (0.0143) & (0.0147) & & & & & & & & & & \\ Collaboration & -0.111** & -0.173*** & -0.152*** & -0.148*** & -0.0955* & & & & & & & & & & \\ & (0.0507) & (0.0512) & (0.0526) & (0.0532) & (0.0561) & & & & & & & & & & \\ Pharmaceuticals & -0.0542 & 0.862*** & 0.691*** & 0.876*** & 0.998*** & & & & & & & & & & \\ & (0.173) & (0.182) & (0.170) & (0.175) & (0.178) & & & & & & & & & & \\ Machinery & 0.303*** & 0.214*** & 0.306*** & 0.273*** & 0.0518 & & & & & & & & & & \\ & (0.0779) & (0.0786) & (0.0805) & (0.0818) & (0.0847) & & & & & & & & & & \\ Computers & -0.385*** & -0.320*** & -0.453*** & -0.197 & -0.445*** & & & & & & & & & & \\ & (0.120) & (0.121) & (0.127) & (0.127) & (0.140) & & & & & & & & & & \\ Biotechnology & 0.134 & 0.343*** & 0.351*** & 0.386*** & 0.371*** & & & & & & & & & & \\ & (0.117) & (0.119) & (0.117) & (0.117) & (0.119) & & & & & & & & & & \\ Coverage & -2.541*** & -1.720** & 3.207*** & 0.799*** & 0.120 & & & & & & & & & & \\ & (0.621) & (0.827) & (0.484) & (0.208) & (0.366) & & & & & & & & & & \\ Loss of rights & 2.035*** & & 0.612*** & 0.273 & 1.594 & & & & & & & & & & \\ & (0.510) & & (0.225) & (0.227) & (1.131) & & & & & & & & & & \\ Duration & 1.464** & & & 0.0343 & & & & & 
& & & & & & \\ & (0.661) & & & (0.296) & & & & & & & & & & & \\ Enforcement & -0.940** & & -0.442* & 0.0380 & 1.902* & & & & & & & & & & \\ & (0.391) & & (0.236) & (0.203) & (1.082) & & & & & & & & & & \\ Membership & 2.431*** & 0.800** & 1.782*** & 0.111 & -0.814* & & & & & & & & & & \\ & (0.457) & (0.338) & (0.263) & (0.203) & (0.481) & & & & & & & & & & \\ Export (log) & & -0.0913 & 0.0755 & 0.0387 & -0.103 & & & & & & & & & & \\ & & (0.0819) & (0.231) & (0.0747) & (0.191) & & & & & & & & & & \\ Year & -0.00158 & 0.000580 & -0.0636*** & -0.0139** & 0.00134 & & & & & & & & & & \\ & (0.00645) & (0.00372) & (0.00646) & (0.00612) & (0.00793) & & & & & & & & & & \\ Constant & 0.131 & -1.622 & 121.4*** & 25.50** & -6.792 & 1.128*** & 1.163*** & 1.033*** & 0.928*** & 1.473*** & 1.455*** & 1.259*** & 1.359*** & 1.154*** & 1.244*** \\ & (12.89) & (7.360) & (12.31) & (11.67) & (15.87) & (0.0352) & (0.0363) & (0.0349) & (0.0357) & (0.0445) & (0.0445) & (0.0455) & (0.0417) & (0.0411) & (0.0421) \\ & & & & & & & & & & & & & & & \\ Observations & 4,460 & 4,460 & 4,460 & 4,460 & 4,460 & 4,460 & 4,460 & 4,460 & 4,460 & 4,460 & 4,460 & 4,460 & 4,460 & 4,460 & 4,460 \\ \hline \multicolumn{16}{c}{ Standard errors in parentheses} \\ \multicolumn{16}{c}{ *** p$<$0.01, ** p$<$0.05, * p$<$0.1} \\ \end{tabular} \caption{Multivariate probit regressions for the propensity of Swedish innovators to patent in five countries. Selected sectors shown.} \label{tab:mvprobit} \end{table*} \begin{table*}[hptb] \footnotesize \centering \begin{tabular}{lccc} \hline & (1) & (2) & (3) \\ VARIABLES & Baseline & IPR & IPR DID \\ \hline & & & \\ Simple product & 0.0954*** & 0.0967*** & 0.0974*** \\ & (0.0321) & (0.0323) & (0.0323) \\ Complex system & -0.140*** & -0.142*** & -0.142*** \\ & (0.0299) & (0.0301) & (0.0301) \\ Low dev. complexity & -0.264*** & -0.265*** & -0.264*** \\ & (0.0333) & (0.0335) & (0.0335) \\ High dev.
complexity & 0.296*** & 0.301*** & 0.302*** \\ & (0.0279) & (0.0281) & (0.0280) \\ New to the world & 0.279*** & 0.284*** & 0.283*** \\ & (0.0243) & (0.0244) & (0.0244) \\ Radical & 0.584*** & 0.591*** & 0.591*** \\ & (0.0393) & (0.0395) & (0.0395) \\ Major improvement & 0.339*** & 0.343*** & 0.343*** \\ & (0.0382) & (0.0384) & (0.0384) \\ Sources & 0.167*** & 0.168*** & 0.167*** \\ & (0.00633) & (0.00637) & (0.00635) \\ Collaboration & -0.188*** & -0.190*** & -0.189*** \\ & (0.0262) & (0.0263) & (0.0263) \\ Coverage & & 0.0771 & 0.378*** \\ & & (0.109) & (0.105) \\ Loss of rights & & -1.220*** & -1.277*** \\ & & (0.102) & (0.102) \\ Duration & & 1.932*** & 1.931*** \\ & & (0.193) & (0.195) \\ Enforcement & & 0.0530 & 0.113 \\ & & (0.0753) & (0.0746) \\ Export (log) & & 0.404*** & 0.444*** \\ & & (0.0357) & (0.0354) \\ PCT & & & 0.235*** \\ & & & (0.0509) \\ TRIPS & & & 0.578*** \\ & & & (0.132) \\ Budapest & & & -0.0830 \\ & & & (0.0615) \\ Biotechnology & 0.337*** & 0.346*** & 0.375*** \\ & (0.0492) & (0.0496) & (0.0546) \\ Membership & & 1.175*** & \\ & & (0.109) & \\ Domestic & & -1.948*** & \\ & & (0.452) & \\ Constant & -5.284*** & -4.173*** & -3.808*** \\ & (0.158) & (0.320) & (0.320) \\ Observations & 245,300 & 243,727 & 243,727 \\ Country FE & YES & YES & YES \\ Year FE & YES & YES & YES \\ Pseudo $R^2$ & 0.305 & 0.315 & 0.315 \\ Log-lik & -35104 & -34499 & -34541 \\ \hline \multicolumn{4}{c}{ Standard errors in parentheses} \\ \multicolumn{4}{c}{ *** p$<$0.01, ** p$<$0.05, * p$<$0.1} \\ \end{tabular} \caption{Logistic regressions for the propensity to patent in country $i$} \label{tab:countrylevel} \end{table*} \begin{table*}[hptb] \footnotesize \centering \caption{Logistic regressions (log odds) for prediction of significant innovations from patent quality data, without dummies for technology field.} \label{tab:predictsuper_without} \begin{tabular}{lccccc} \hline & (1) & (2) & (3) & (4) & (5) \\ VARIABLES & LBIO & LBIO adj. 
& Blockbuster & New to the world & PCA \\ \hline & & & & & \\ Citations & 0.0332*** & 0.244*** & 0.0157 & 0.0398*** & 0.0398*** \\ & (0.00405) & (0.0114) & (0.0112) & (0.0110) & (0.0109) \\ Originality & 0.281*** & 0.147* & -1.511*** & -1.770*** & -1.637*** \\ & (0.0781) & (0.0851) & (0.158) & (0.165) & (0.167) \\ Radicalness & 0.207*** & 0.265*** & 0.432*** & -0.0327 & 0.0308 \\ & (0.0700) & (0.0756) & (0.154) & (0.171) & (0.169) \\ Renewal (years) & 0.0479*** & & -0.0229*** & -0.00694 & -0.00471 \\ & (0.00332) & & (0.00705) & (0.00790) & (0.00786) \\ Family size & 0.0307*** & 0.114*** & 0.0530*** & 0.0167** & 0.0167** \\ & (0.00287) & (0.00372) & (0.00604) & (0.00831) & (0.00826) \\ Filing date & -0.0134*** & -0.0200*** & 0.0133** & 0.0559*** & 0.0569*** \\ & (0.00193) & (0.00173) & (0.00647) & (0.00789) & (0.00792) \\ Constant & 22.99*** & 37.22*** & -27.58** & -112.8*** & -114.9*** \\ & (3.863) & (3.446) & (12.95) & (15.82) & (15.87) \\ & & & & & \\ Observations & 74,465 & 25,879 & 6,147 & 6,171 & 6,177 \\ R-squared & 0.0252 & 0.0986 & 0.0300 & 0.0381 & 0.0332 \\ Tech. field dummies & NO & NO & NO & NO & NO \\ \hline \multicolumn{6}{c}{ Standard errors in parentheses} \\ \multicolumn{6}{c}{ *** p$<$0.01, ** p$<$0.05, * p$<$0.1} \\ \end{tabular} \end{table*} \begin{table*}[hptb] \footnotesize \centering \caption{Logistic regressions (log odds) for prediction of significant innovations from patent quality data, with dummies for technology field.} \label{tab:predictsuper} \begin{tabular}{lccccc} \hline & (1) & (2) & (3) & (4) & (5) \\ VARIABLES & LBIO & LBIO adj. 
& Blockbuster & New to the world & PCA \\ \hline & & & & & \\ Citations & 0.0437*** & 0.254*** & 0.0479*** & 0.0973*** & 0.0851*** \\ & (0.00430) & (0.0121) & (0.0151) & (0.0154) & (0.0149) \\ Originality & 0.304*** & 0.562*** & -0.660*** & -1.048*** & -0.846*** \\ & (0.0825) & (0.0941) & (0.234) & (0.251) & (0.253) \\ Radicalness & 0.0832 & -0.157* & 0.249 & 0.118 & 0.240 \\ & (0.0734) & (0.0831) & (0.207) & (0.231) & (0.228) \\ Renewal (years) & 0.0499*** & & -0.0164* & 0.0324*** & 0.0392*** \\ & (0.00343) & & (0.00953) & (0.0111) & (0.0110) \\ Family size & 0.0415*** & 0.160*** & 0.0627*** & 2.33e-05 & -0.000826 \\ & (0.00340) & (0.00465) & (0.00852) & (0.0110) & (0.0109) \\ Filing date & -0.00477** & -0.0144*** & 0.00165 & 0.0704*** & 0.0732*** \\ & (0.00200) & (0.00184) & (0.00748) & (0.00879) & (0.00889) \\ Constant & 6.140 & 26.00*** & -3.739 & -142.5*** & -148.3*** \\ & (4.015) & (3.673) & (14.98) & (17.63) & (17.83) \\ & & & & & \\ Observations & 74,436 & 25,865 & 5,975 & 6,095 & 6,171 \\ R-squared & 0.0592 & 0.153 & 0.387 & 0.422 & 0.412 \\ Tech. field dummies & YES & YES & YES & YES & YES \\ \hline \multicolumn{6}{c}{ Standard errors in parentheses} \\ \multicolumn{6}{c}{ *** p$<$0.01, ** p$<$0.05, * p$<$0.1} \\ \end{tabular} \end{table*} \begin{table*} \footnotesize \centering \begin{tabular}{l c c c}\hline\hline \multicolumn{1}{c}{\textbf{Variable}} & \textbf{Mean} & \textbf{Std. 
Dev.} & \textbf{N}\\ \hline Non-LBIO & 0.761 & 2.231 & 85730\\ LBIO & 1.572 & 4.417 & 4337\\ Blockbuster & 1.817 & 3.641 & 923\\ New to the world & 1.935 & 3.964 & 795\\ PCA & 1.931 & 3.936 & 807\\ \hline\end{tabular} \caption{Patent citations received within 7 years by EPO patents with a link to LBIO innovations, blockbuster innovations, and patents not linked to LBIO} \label{tab:citations} \end{table*} \begin{table*} \footnotesize \begin{tabular}{l*{1}{ccccc}} \hline & Component 1 & Component 2 & Component 3& Component 4 & Component 5\\ \hline \makecell{Citations \\ (field and time \\ controls)}& 0.488& 0.261& -0.698& -0.445& 0.098\\ & & & & & \\ Originality & -0.152& 0.696& 0.195& -0.205& -0.642\\ & & & & & \\ Radicalness & -0.307& 0.633& -0.089& 0.313& 0.632\\ & & & & & \\ Renewal & 0.592& 0.145& -0.046& 0.755& -0.238\\ & & & & & \\ Family size & 0.542& 0.160& 0.682& -0.305& 0.350\\ & & & & & \\ Eigenvalue & 1.572& 1.411& 0.802& 0.679& 0.535\\ \hline \end{tabular} \caption{Principal component loadings and eigenvalues for Figs. \ref{fig:PCscatter} and \ref{fig:stripplot_pc1}} \label{tab:pca} \end{table*} \begin{table*} \footnotesize \centering \begin{threeparttable} \begin{tabular}{p{3cm} p{8cm}} \hline Feature & Description \\ \hline \verb|art_comp| & Artefactual complexity \\ \verb|dev_comp| & Developmental complexity \\ novelty & Novelty \\ \verb|year_delta| & Difference between year of commercialization and the year the patent was granted. \\ & Patent text fields -- for each of the three text fields of the patent (title, abstract, and description), the same type of calculation was applied to generate a quantitative feature:
\\ \verb|Title_count| & A count of how many of the keywords can be detected in the title \\ \verb|Abst_count| & A count of how many of the keywords can be detected in the abstract \\ \verb|desc_count| & A count of how many of the keywords can be detected in the description \\ \verb|Title_share| & The ratio of the number of keywords found in the field to the total number of keywords. \\ \verb|Abst_share| & Same as above. \\ \verb|desc_share| & Same as above. \\ \hline \multicolumn{2}{c}{} \emph{From the second round onward, the features below were added to the textual information} \\ \hline \verb|Fulltext_share| & Fulltext share. By concatenating the three text fields into the complete text, both above-mentioned operations were used to generate two new features. \\ \verb|fulltext_count| & Fulltext count. See above. \\ \verb|Top_1-10_field| & These features were created by counting the presence of each keyword in the respective text field and keeping the ten highest counts. The \verb|top_1| contains the most commonly occurring keyword and \verb|top_10| the 10th most commonly occurring. \\ \hline Vectorizing Names\tnote{*} & \\ \hline \verb|Inv_count| & Inventor count. Indicates how many of the names with the respective relationship were found among the patent-inventors. \\ \verb|Cont_count| & Contact count \\ \verb|Inv_s_share| & The share of inventors found in the patent. \\ \verb|cont_s_share| & The share of contact persons found in the patent. \\ \verb|inv_p_share| & The share of inventors from the patent that were identified. \\ \verb|cont_p_share| & The share of contact persons from the patent that were identified. \\ \hline
The former indicate that they have been mentioned in an article as having invented or developed the innovation, and the latter that they have been interviewed/cited by a journal. On the patent side, multiple names can be recorded as inventors of the patent. \end{tablenotes} \end{threeparttable} \caption{Description of features.} \label{tab:features} \end{table*} \begin{table*} \centering \footnotesize \begin{tabular}{l c c c c} \hline & \multicolumn{2}{c}{Input} & \multicolumn{2}{c}{Output} \\ & Innovations & Patents with classification & Innovation-patent pairs & Predicted matches \\ \hline Round 1 & 99 & 293 & 195,040 & 9,132 \\ Round 2 & 171 & 470 & 194,859 & 11,694 \\ Round 3 & 239 & 2,145 & 193,169 & 11,487 \\ 1970-1984 & & & 122,785 & 2,604 \\ \hline \end{tabular} \caption{Basic statistics for each round of training the machine-learning model. The input is the number of innovations used and the number of patents with manual validation of the link to an innovation (Yes or No). The result of the methodology is a number of identified potential innovation-patent pairs and a number of suggested matches. Rounds 1-3 are based on innovations for the period 1985-2015. The matches for 1970-1984 are based on the ML model for round 3.} \label{tab:rounds} \end{table*} \begin{table*} \centering \footnotesize \begin{tabular}{l c c c c c c} \hline Round & Model & Accuracy & $F_1$-score & FP(Type-I) & FN(Type-II) & N \\ \hline 1 & RandomForest & 0.8077 & 0.7999 & 0.3103 & 0.0435 & 293 \\ 2 & RandomForest & 0.7619 & 0.7826 & 0.28 & 0.1765 & 209 \\ 3 & RandomForest & 0.7882 & 0.7151 & 0.2237 & 0.2113 & 885 \\ 3 & MLP & 0.847 & 0.795 & 0.2623 & 0.0796 & 885 \\ \hline \end{tabular} \caption{Statistics for each round and model. Accuracy, $F_1$-score and shares of false positives (FP) and false negatives (FN). Accuracy is calculated as the share of true positives (TP) and true negatives (TN) in the total number of predictions.
False positives (negatives) are calculated as shares in the total number of positives (negatives). The $F_1$-score is calculated as $F_1 = 2TP/\left(2TP + FP + FN\right)$. } \label{tab:rounds-stats} \end{table*} \clearpage \newpage \DeclareFieldFormat{labelnumber}{ \ifinteger{#1} {\number\numexpr#1+37\relax} {#1}} \printbibliography[resetnumbers=false] \end{refsection} \end{document}
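For reference, the accuracy and $F_1$ definitions used in the table captions above can be sketched as follows. This is a minimal illustration with made-up confusion-matrix counts, not the code used to produce the tables:

```python
# Hedged sketch of the evaluation metrics defined in the captions above;
# the counts passed in below are made-up examples, not the real confusion matrices.
def metrics(tp, tn, fp, fn):
    accuracy = (tp + tn) / (tp + tn + fp + fn)  # share of correct predictions
    f1 = 2 * tp / (2 * tp + fp + fn)            # F1 as defined in the caption
    return accuracy, f1

acc, f1 = metrics(tp=40, tn=45, fp=10, fn=5)
assert acc == 0.85
assert abs(f1 - 80 / 95) < 1e-12
```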
\section{Introduction} To handle notions of differentiation that have become more prominent in computer science, two categorical structures have been useful: monoidal differential categories \cite{blute2006differential} and Cartesian differential categories \cite{blute2009cartesian}. Each axiomatizes a different aspect of differentiation: monoidal differential categories axiomatize the linear maps and then derive the smooth maps from them; conversely, Cartesian differential categories axiomatize the smooth maps and derive the linear maps from them. While these structures have been very useful, they both only represent the ``forward'' aspect of differentiation. For uses of the derivative in supervised learning, the ``reverse'' derivative is more relevant. To understand the difference between forward and reverse differentiation, let us provide a simple example. Consider the smooth map $f: \mathbb{R}^2 \to \mathbb{R}$ defined by $f(x_1,x_2) = x_1^2x_2 + \sin(x_2)$. The Jacobian matrix of $f$, at $(x_1, x_2)$, is the $1 \times 2$ matrix whose components are the partial derivatives of $f$: \[ \mathbf{J}_f(x_1,x_2) := \begin{bmatrix} 2x_1x_2 & x_1^2 + \cos(x_2) \end{bmatrix} \] The directional (forward) derivative of $f$ is the map $\mathsf{D}[f]: \mathbb{R}^2 \times \mathbb{R}^2 \to \mathbb{R}$ given by multiplying the Jacobian matrix of $f$ at the first input vector by the second input vector (seen as a $2 \times 1$ matrix): \[ \mathsf{D}[f]\left ((x_1, x_2), (v_1, v_2) \right) = \mathbf{J}_f(x_1,x_2) \begin{bmatrix} v_1 \\ v_2\end{bmatrix} = 2x_1x_2v_1 + (x_1^2 + \cos(x_2))v_2 \] Note that this ``pushes vectors forwards'': at a point of $\mathbb{R}^2$, the directional derivative $\mathsf{D}[f]$ takes a vector in $\mathbb{R}^2$ to a vector in $\mathbb{R}$, that is, vectors are moved in the same direction as the map $f$ itself. Conversely, the reverse derivative moves vectors in the opposite direction. 
The reverse derivative uses the \emph{transpose} of the Jacobian of $f$ at $(x_1, x_2)$, which is the $2 \times 1$ matrix: \[ \mathbf{J}^T_f(x_1,x_2) := \begin{bmatrix} 2x_1x_2 \\ x_1^2 + \cos(x_2) \end{bmatrix} \] Then the reverse derivative of $f$ is $\mathsf{R}[f]: \mathbb{R}^2 \times \mathbb{R} \to \mathbb{R}^2$, defined by multiplying the transpose of the Jacobian of $f$ at the first input vector by the second input vector (this time seen as a $1 \times 1$ matrix): \[ \mathsf{R}[f]\left ((x_1, x_2), t \right) = \mathbf{J}^T_f(x_1,x_2) t = \begin{bmatrix} 2x_1x_2t \\ \left( x_1^2 + \cos(x_2) \right) t \end{bmatrix} \] Thus this operation indeed moves vectors in the opposite direction to $f$; that is, it takes vectors from the codomain of $f$ and returns vectors in the domain. The reverse derivative is better suited to supervised learning situations, in which one knows a change in the codomain (e.g., the error of some function) and wants to know how much adjustment to make in the domain. Thus, a natural question is how to modify monoidal and Cartesian differential categories to handle reverse differentiation. For the Cartesian side of the picture, this was already accomplished in \cite{cockett_et_al:LIPIcs:2020:11661}. While a Cartesian differential category (CDC) involves a category which comes equipped with an operator $\mathsf{D}$ which for any map $f: A \to B$ outputs a map $\mathsf{D}[f]: A \times A \to B$, a Cartesian \emph{reverse} differential category (CRDC) comes equipped with an operator $\mathsf{R}$ which for any map $f: A \to B$ outputs a map $\mathsf{R}[f]: A \times B \to A$. It was shown in \cite{cockett_et_al:LIPIcs:2020:11661} that a CRDC can be seen as a CDC with additional structure. Specifically, a CRDC is equivalent to a CDC in which the subcategory of linear maps in each simple slice has a transpose operator, which categorically speaking is a special type of dagger structure.
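Returning to the running example, the relationship between the two derivatives can be checked numerically: for any point $x$, vector $v$, and scalar $t$, one has $\mathsf{D}[f](x,v) \cdot t = \langle v, \mathsf{R}[f](x,t) \rangle$, since the reverse derivative applies the transposed Jacobian. The following sketch (ours, not part of the development above) verifies this for $f(x_1,x_2) = x_1^2x_2 + \sin(x_2)$:

```python
import math

# Running example from the text: f(x1, x2) = x1^2 * x2 + sin(x2).
def f(x1, x2):
    return x1 ** 2 * x2 + math.sin(x2)

# Forward derivative D[f]: R^2 x R^2 -> R,
# the Jacobian of f at (x1, x2) applied to the vector (v1, v2).
def D(x, v):
    (x1, x2), (v1, v2) = x, v
    return 2 * x1 * x2 * v1 + (x1 ** 2 + math.cos(x2)) * v2

# Reverse derivative R[f]: R^2 x R -> R^2,
# the transposed Jacobian of f at (x1, x2) applied to the scalar t.
def R(x, t):
    x1, x2 = x
    return (2 * x1 * x2 * t, (x1 ** 2 + math.cos(x2)) * t)

x, v, t = (1.0, 2.0), (0.3, -0.5), 0.7

# D[f] agrees with a finite-difference approximation of the directional derivative.
h = 1e-6
fd = (f(x[0] + h * v[0], x[1] + h * v[1]) - f(*x)) / h
assert abs(D(x, v) - fd) < 1e-4

# R[f] is the adjoint of D[f]: <D[f](x, v), t> = <v, R[f](x, t)>.
rx = R(x, t)
assert abs(D(x, v) * t - (v[0] * rx[0] + v[1] * rx[1])) < 1e-12
```

The final assertion is exactly the transpose relationship between $\mathbf{J}_f$ and $\mathbf{J}^T_f$ described above, stated as an inner-product identity.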
The explicit connection with supervised learning was then made in \cite{gradientBasedLearning}, which showed how to describe several supervised-learning techniques in the abstract setting of a CRDC. However, the first CRDC paper \cite{cockett_et_al:LIPIcs:2020:11661} left open the question of what a \emph{monoidal} reverse differential category (MRDC) should be. The goal of this paper is to fill in this gap by defining \emph{monoidal reverse differential categories} and establishing their fundamental relationships to the existing categorical differential structures described above. \begin{center} \begin{tabular}{ |c|c|c| } \hline & Cartesian & Monoidal \\ \hline Forward & CDC \cite{blute2009cartesian} & MDC \cite{blute2006differential} \\ Reverse & CRDC \cite{cockett_et_al:LIPIcs:2020:11661} & MRDC (this paper) \\ \hline \end{tabular} \end{center} What should this structure look like? As mentioned above, CDCs axiomatize smooth maps, while MDCs axiomatize linear maps. However, as noted above, the subcategory of linear maps of a CRDC has dagger structure. So at a minimum, an MRDC should have dagger structure. In fact, we argue in this paper that an MRDC should satisfy something even stronger: it should be \emph{self-dual compact closed}. Why do we ask for this additional structure? There are two important requirements we ask of an MRDC. \begin{enumerate} \item Just as every Cartesian reverse differential category (CRDC) gives a Cartesian differential category (CDC), so should a monoidal reverse differential category (MRDC) give a monoidal differential category (MDC); moreover, we should be able to characterize precisely what structure is required of an MDC to make it an MRDC (as we can in the Cartesian case \cite[Theorem 41]{cockett_et_al:LIPIcs:2020:11661}). \item Just as the coKleisli category of an MDC is a CDC \cite[Proposition 3.2.1]{blute2009cartesian}, so should the coKleisli category of an MRDC be a CRDC.
\end{enumerate} We shall see in Section \ref{sec:mrdc_is_sdcc} that these requirements force an MRDC to be self-dual compact closed. To prove these results, it will be helpful to investigate the basic structure of an MDC more closely. In Section \ref{sec:diffcats}, we review monoidal differential categories, and add a new aspect to their story: a ``context fibration'' which helps to relate the structure of MDCs to CDCs (and then similarly between MRDCs and CRDCs). Thus, the main contributions of this paper are as follows: \begin{itemize} \item Give the basic definition of a monoidal reverse differential category (MRDC), along with examples, including some unexpected ones in quantum computation. \item Prove theorems that describe the relationships of MRDCs to CDCs, CRDCs, and MDCs. \item Provide additional material about the relationship of MDCs to CDCs via a ``context fibration''. \end{itemize} This work leaves open many future avenues for exploration; we describe some of these in Section \ref{sec:future_work}. \paragraph{Related Work:} In this paper, we study the linear logic categorical semantics for reverse differentiation. This was also studied by Smeding and V{\'a}k{\'a}r, who provided the categorical semantics for CHAD, their programming language for automatic differentiation (which includes both forward and reverse differentiation) \cite{vakar2021chad}. The work in this paper is also related to work done in categorical quantum mechanics. Indeed, the categorical semantics of (differential) linear logic that we consider in this paper also comes with the added assumption of dagger-compact closed structure. Compact closed categories have long been considered as models for linear logic \cite{hyland2003glueing, shirahata1996sequent}, and they form a setting that is also often studied by those in the categorical quantum community, sometimes called multiplicative categorical quantum logic \cite[Chapter 4]{duncan2006types}.
We are specifically interested in compact closed models of linear logic with exponentials. This is a setting that was studied by Selinger and Valiron when they developed a programming language for quantum computation with classical control \cite{journal:selinger-valiron-fully-abstract-quantum}, as well as by Vicary who studied categorical quantum harmonic oscillators \cite{vicary2008categorical}. Cockett, Comfort, and Srinivasan also provide a generalization of linear logic with exponentials for categorical quantum mechanics, by generalizing compact closed categories to linear distributive categories with daggers \cite{srinivasan2021dagger}. \paragraph{Outline:} A reader interested in just the definition of MRDC can skip ahead to Section \ref{sec:mrdc}. However, an important part of the paper is the justification of why we define MRDCs the way we do: in particular, why the self-dual compact-closed requirement is important. For this, it was helpful for us to expand on a number of aspects of MDCs and CRDCs. In particular, in Section \ref{sec:context_fibration}, we define a canonical ``context'' fibration associated to any MDC. Then, in Section \ref{sec:fibration_equivalence}, we show that when we can build an associated CDC from an MDC, the canonical fibration associated to the MDC is equivalent to the canonical fibration of linear maps associated to a CDC. This result is key in seeing why MRDCs must be self-dual compact closed. Thus, prior to defining an MRDC, we review the background of MDCs, CDCs, and CRDCs, but also add some important new theory of these structures, which will in turn be helpful in understanding how our definition of an MRDC comes about. Section \ref{sec:mrdc} contains the main definition of the paper, that of an MRDC. We also describe examples, and prove the required properties. We conclude the section by describing additional ways to build CRDCs. 
Finally, in Section \ref{sec:future_work}, we describe some ways this work can be expanded on in the future. \paragraph{Conventions:} In this paper, we will use the same terminology, notation, graphical calculus, and conventions as found in \cite{Blute2019}. In particular, our string diagrams are to be read from top to bottom, and we write composition diagrammatically, that is, the composition of maps $f: A \to B$ and $g: B \to C$ is denoted as $f;g: A \to C$ or simply, $fg: A \to C$. \section{Forward Differential Categories}\label{sec:diffcats} In this section, we review the theory of monoidal and Cartesian differential categories, and add an important new element to the story: a canonical fibration associated to any coalgebra modality (in particular, to any differential category); see Section \ref{sec:context_fibration}. When the differential category has products, so that we can build its associated Cartesian differential category, we show that this canonical fibration is isomorphic to the canonical linear fibration of the Cartesian differential category; see Theorem \ref{thm:fibration_equivalence}. This isomorphism will be very useful when we go from monoidal reverse differential categories to Cartesian reverse differential categories. \subsection{Coalgebra Modalities} The central structure on which a monoidal differential category rests is a coalgebra modality. 
\begin{definition}\label{coalgdef} A \textbf{coalgebra modality} \cite[Definition 2.1]{blute2006differential} on a symmetric monoidal category $\mathbb{X}$ is a quintuple $(\oc, \delta, \varepsilon, \Delta, e)$ consisting of an endofunctor $\oc: \mathbb{X} \to \mathbb{X}$ and four natural transformations: \begin{align*} \delta_A: \oc A \to \oc \oc A && \varepsilon_A: \oc A \to A && \Delta_A: \oc A \to \oc A \otimes \oc A && e_A: \oc A \to k \end{align*} such that $(\oc, \delta, \varepsilon)$ is a comonad and for each object $A$, $(\oc A, \Delta, e)$ is a cocommutative comonoid and $\delta_A$ is a comonoid morphism, that is, the diagrams found in \cite[Definition 1]{Blute2019} commute. \end{definition} Note that requiring that $\Delta$ and $e$ be natural transformations is equivalent to asking that for each map $f: A \to B$, $\oc(f): \oc A \to \oc B$ is also a comonoid morphism. In the graphical calculus, we will use functor boxes when dealing with string diagrams involving the endofunctor, that is, a mere map $f: A \to B$ will be encased in a circle while $\oc(f): \oc A \to \oc B$ will be encased in a box: \begin{align*} \begin{array}[c]{c} f \end{array}= \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=circle] (0) at (0, 2.25) {$A$}; \node [style=circle] (1) at (0, -0.25) {$B$}; \node [style={component}] (2) at (0, 1) {$f$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (0) to (2); \draw [style=wire] (2) to (1); \end{pgfonlayer} \end{tikzpicture} \end{array} && \begin{array}[c]{c} \oc(f) \end{array}= \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=circle] (0) at (0, 2.25) {$\oc A$}; \node [style=circle] (1) at (0, -0.25) {$\oc B$}; \node [style={function}] (2) at (0, 1) {$f$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (0) to (2); \draw [style=wire] (2) to (1); \end{pgfonlayer} \end{tikzpicture} \end{array} \end{align*} The remaining coalgebra modality 
structure maps are drawn as follows: \begin{align*} \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (1) at (15, 3.5) {$\oc A$}; \node [style=object] (2) at (15, 1) {$\oc\oc A$}; \node [style=component] (3) at (15, 2.25) {$\delta$}; \node [style=object] (5) at (19.75, 3.5) {$\oc A$}; \node [style=object] (7) at (19.25, 1) {$\oc A$}; \node [style=duplicate] (8) at (19.75, 2.25) {$\Delta$}; \node [style=object] (10) at (20.25, 1) {$\oc A$}; \node [style=object] (11) at (17, 3.5) {$\oc A$}; \node [style=object] (12) at (17, 1) {$A$}; \node [style=component] (13) at (17, 2.25) {$\varepsilon$}; \node [style=object] (14) at (22, 3.5) {$\oc A$}; \node [style=component] (16) at (22, 2.5) {$e$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (1) to (3); \draw [style=wire] (3) to (2); \draw [style=wire, bend right=15, looseness=1.25] (8) to (7); \draw [style=wire] (5) to (8); \draw [style=wire, in=97, out=-53] (8) to (10); \draw [style=wire] (11) to (13); \draw [style=wire] (13) to (12); \draw [style=wire] (14) to (16); \end{pgfonlayer} \end{tikzpicture} \end{array} \end{align*} We will occasionally make use of the following canonical natural transformation associated to any coalgebra modality. 
\begin{definition}\label{dcircdef} For a coalgebra modality $(\oc, \delta, \varepsilon, \Delta, e)$ on a symmetric monoidal category $\mathbb{X}$, its \textbf{coderiving transformation} \cite[Definition 2.2]{cockett_lemay_2018} is the natural transformation $\mathsf{d}^\circ_A: \oc A \to \oc A \otimes A$ defined as follows: \begin{align*} \mathsf{d}^\circ_A := \xymatrixcolsep{5pc}\xymatrix{ \oc A \ar[r]^-{\Delta_A} & \oc A \otimes \oc A \ar[r]^-{1_{\oc A} \otimes \varepsilon_A} & \oc A \otimes A } && \mathsf{d}^\circ := \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=component] (0) at (2, 1.5) {$\varepsilon$}; \node [style=duplicate] (1) at (1.25, 2.25) {$\Delta$}; \node [style=object] (2) at (0.5, 0.75) {$\oc A$}; \node [style=object] (3) at (1.25, 3.25) {$\oc A$}; \node [style=object] (4) at (2, 0.75) {$A$}; \node [style=object] (5) at (-1.5, 1) {$A$}; \node [style=object] (6) at (-2, 3) {$\oc A$}; \node [style=integral] (7) at (-2, 2) {{\bf =\!=\!=\!=}}; \node [style=object] (8) at (-2.5, 1) {$\oc A$}; \node [style=object] (9) at (-0.25, 2) {$=$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (3) to (1); \draw [style=wire, in=90, out=0, looseness=1.25] (1) to (0); \draw [style=wire] (0) to (4); \draw [style=wire, in=90, out=180] (1) to (2); \draw [style=wire, bend left] (7) to (5); \draw [style=wire] (6) to (7); \draw [style=wire, bend right] (7) to (8); \end{pgfonlayer} \end{tikzpicture} \end{array} \end{align*} \end{definition} See \cite[Proposition 2.1]{cockett_lemay_2018} for a list of identities the coderiving transformation satisfies. We now turn our attention to when our symmetric monoidal category also has finite products. 
\begin{definition} \label{Seelydef} For a coalgebra modality $(\oc, \delta, \varepsilon, \Delta, e)$ on a symmetric monoidal category $\mathbb{X}$ with finite products, the \textbf{Seely maps} consist of the natural transformations: \begin{align*} \chi_{A,B}: \oc(A \times B) \to \oc A \otimes \oc B && \chi_\top: \oc \top \to k \end{align*} defined respectively as follows: \begin{align*} \chi_{A,B} := \xymatrixcolsep{3.5pc}\xymatrix{\oc(A \times B) \ar[r]^-{\Delta_{A,B}} & \oc(A \times B) \otimes \oc(A \times B) \ar[r]^-{\oc(\pi_0) \otimes \oc(\pi_1)} & \oc A \otimes \oc B } && \chi_\top := \xymatrixcolsep{3pc}\xymatrix{ \oc\top \ar[r]^-{e_\top} & k } \end{align*} \begin{align*} \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=function2] (0) at (1.25, 1.25) {$\pi_1$}; \node [style=duplicate] (1) at (0.5, 2.25) {$\Delta$}; \node [style=object] (2) at (-0.25, 0.25) {$\oc A$}; \node [style=object] (3) at (0.5, 3.25) {$\oc (A \times B)$}; \node [style=object] (4) at (1.25, 0.25) {$\oc B$}; \node [style=object] (5) at (-1.5, 1) {$\oc B$}; \node [style=object] (6) at (-2, 3) {$\oc(A \times B)$}; \node [style=duplicate] (7) at (-2, 2) {$\chi$}; \node [style=object] (8) at (-2.5, 1) {$\oc A$}; \node [style=object] (9) at (-1, 2) {$=$}; \node [style=function2] (10) at (-0.25, 1.25) {$\pi_0$}; \node [style=component] (12) at (5.25, 1.5) {$e$}; \node [style=object] (14) at (5.25, 2.5) {$\oc \top$}; \node [style=object] (17) at (3.75, 2.5) {$\oc \top$}; \node [style=duplicate] (18) at (3.75, 1.5) {$\chi_\top$}; \node [style=object] (20) at (4.5, 2) {$=$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (3) to (1); \draw [style=wire, in=90, out=-15, looseness=1.25] (1) to (0); \draw [style=wire] (0) to (4); \draw [style=wire, bend left] (7) to (5); \draw [style=wire] (6) to (7); \draw [style=wire, bend right] (7) to (8); \draw [style=wire, in=90, out=-165, looseness=1.25] (1) to (10); \draw [style=wire] (10) to (2); \draw 
[style=wire] (14) to (12); \draw [style=wire] (17) to (18); \end{pgfonlayer} \end{tikzpicture} \end{array} \end{align*} A coalgebra modality $(\oc, \delta, \varepsilon, \Delta, e)$ on a symmetric monoidal category $\mathbb{X}$ with finite products has \textbf{Seely isomorphisms} \cite[Definition 10]{Blute2019} if the Seely maps are isomorphisms, so that $\oc(A \times B) \cong \oc A \otimes \oc B$ and $\oc \top \cong k$. \end{definition} Coalgebra modalities with Seely isomorphisms can equivalently be described as \textbf{monoidal coalgebra modalities} \cite[Definition 2]{Blute2019}, which are coalgebra modalities equipped with extra structure: a natural transformation ${\mathsf{m}_{A,B}: \oc A \otimes \oc B \to \oc(A \otimes B)}$ and a map $\mathsf{m}_k: k \to \oc k$ such that the underlying comonad $\oc$ is a symmetric monoidal comonad, and such that $\Delta$ and $\mathsf{e}$ are both monoidal transformations and $\oc$-coalgebra morphisms (which implies that $\mathsf{m}_{A,B}$ and $\mathsf{m}_k$ are comonoid morphisms). See \cite[Section 7]{Blute2019} for how to build $\mathsf{m}$ and $\mathsf{m}_k$ from the Seely isomorphisms, and vice-versa. Note, however, that monoidal coalgebra modalities can be defined without the need for finite products; as they do not play a central role in this paper, we have elected to only briefly mention them. Many examples of (monoidal) coalgebra modalities can be found throughout the literature; see, for example, \cite[Section 2.4]{hyland2003glueing} for a very nice list of various kinds of examples. We conclude this section by briefly discussing coalgebra modalities in the presence of additive structure. Indeed, the underlying categorical structure of a differential category is not only a symmetric monoidal category but an \emph{additive} symmetric monoidal category.
\begin{definition}\label{addcatdef} An \textbf{additive symmetric monoidal category} \cite[Definition 3]{Blute2019} is a symmetric monoidal category $\mathbb{X}$ such that each hom-set $\mathbb{X}(A,B)$ is a commutative monoid with zero map $0 \in \mathbb{X}(A,B)$ and addition ${+: \mathbb{X}(A,B) \times \mathbb{X}(A,B) \to \mathbb{X}(A,B)}$, $(f,g) \mapsto f + g$, and such that composition and the tensor product preserve the additive structure; that is, the following equalities hold: $k(f+g)h = kfh + kgh$ and $k0h = 0$, as well as $k \otimes (f+g) \otimes h = (k \otimes f \otimes h) + (k \otimes g \otimes h)$ and $k \otimes 0 \otimes h = 0$. \end{definition} By \cite[Theorem 1]{Blute2019}, for additive symmetric monoidal categories, monoidal coalgebra modalities can equivalently be described as \textbf{additive bialgebra modalities} \cite[Definition 3]{Blute2019}. This implies that, in the additive case, we also have two extra natural transformations $\nabla_A: \oc A \otimes \oc A \to \oc A$ and $u_A: k \to \oc A$ such that $\oc A$ is a bimonoid. In particular, this implies that $\oc A$ is a commutative monoid. In the graphical calculus: \begin{align*} \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (3) at (19.75, 1) {$\oc A$}; \node [style=object] (4) at (19.25, 3.5) {$\oc A$}; \node [style=duplicate] (5) at (19.75, 2.25) {$\nabla$}; \node [style=object] (6) at (20.25, 3.5) {$\oc A$}; \node [style=object] (10) at (22, 1) {$\oc A$}; \node [style=component] (11) at (22, 2) {$u$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire, bend left=15, looseness=1.25] (5) to (4); \draw [style=wire] (3) to (5); \draw [style=wire, in=-97, out=53] (5) to (6); \draw [style=wire] (10) to (11); \end{pgfonlayer} \end{tikzpicture} \end{array} \end{align*} If an additive symmetric monoidal category has finite products, then the product $\times$ is in fact a biproduct and the terminal object $\top$ is a zero object.
Thus, for an additive symmetric monoidal category with finite (bi)products, a coalgebra modality with Seely isomorphisms is an additive bialgebra modality and vice-versa \cite[Theorem 6]{Blute2019}. In particular, $\chi^{-1}_{A,B}: \oc A \otimes \oc B \to \oc (A \times B)$ and $\chi^{-1}_\top: k \to \oc \top$ are constructed as follows using the monoid structure of $\oc A$: \begin{align*} \chi^{-1}_{A,B} := \xymatrixcolsep{3.25pc}\xymatrix{\oc A \otimes \oc B \ar[rr]^-{\oc(\iota_0) \otimes \oc(\iota_1)} && \oc(A \times B) \otimes \oc(A \times B) \ar[r]^-{\nabla_{A \times B}} & \oc(A \times B) } && \chi^{-1}_\top := \xymatrixcolsep{3pc}\xymatrix{ k \ar[r]^-{u_\top} & \oc\top } \end{align*} \begin{align*} \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=function2] (0) at (4.25, 2.25) {$\iota_1$}; \node [style=duplicate] (1) at (3.5, 1.25) {$\nabla$}; \node [style=object] (2) at (2.75, 3.25) {$\oc A$}; \node [style=object] (3) at (3.5, 0.25) {$\oc (A \times B)$}; \node [style=object] (4) at (4.25, 3.25) {$\oc B$}; \node [style=object] (5) at (1.5, 2.5) {$\oc B$}; \node [style=object] (6) at (1, 0.5) {$\oc(A \times B)$}; \node [style=duplicate] (7) at (1, 1.5) {$\chi^{-1}$}; \node [style=object] (8) at (0.5, 2.5) {$\oc A$}; \node [style=object] (9) at (2, 1.5) {$=$}; \node [style=function2] (10) at (2.75, 2.25) {$\iota_0$}; \node [style=component] (11) at (8.25, 2) {$u$}; \node [style=object] (12) at (8.25, 1) {$\oc \top$}; \node [style=object] (13) at (6.75, 1) {$\oc \top$}; \node [style=duplicate] (14) at (6.75, 2) {$\chi^{-1}_\top$}; \node [style=object] (15) at (7.5, 1.5) {$=$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (3) to (1); \draw [style=wire, in=-90, out=15, looseness=1.25] (1) to (0); \draw [style=wire] (0) to (4); \draw [style=wire, bend right] (7) to (5); \draw [style=wire] (6) to (7); \draw [style=wire, bend left] (7) to (8); \draw [style=wire, in=-90, out=165, looseness=1.25] (1) to (10); \draw [style=wire] (10) to
(2); \draw [style=wire] (12) to (11); \draw [style=wire] (13) to (14); \end{pgfonlayer} \end{tikzpicture} \end{array} \end{align*} where $\iota_0: A \to A \times B$ and $\iota_1: B \to A \times B$ are the injection maps of the biproduct. \subsection{The context fibration associated to a coalgebra modality}\label{sec:context_fibration} In this section we describe a canonical fibration associated to any coalgebra modality, whose individual fibres were studied in \cite{ehrhard2021categorical,HYLAND1999127}. We assume the reader is familiar with the theory of fibrations (as, for example, presented in \cite[Section 2.1]{jacobs1999categorical}). The fibration in question will be over the coKleisli category of the comonad $\oc$. As we will be working with coKleisli categories, we will use the notation in \cite{blute2015cartesian}, where interpretation brackets $\llbracket - \rrbracket$ are used to translate between maps in the coKleisli category and maps in the base category. That is, for a comonad $(\oc, \delta, \varepsilon)$ on a category $\mathbb{X}$, if $\mathbb{X}_\oc$ is its coKleisli category, then a map $f: A \to B$ in $\X_{\oc}$ corresponds to a map $\llbracket f \rrbracket: \oc A \to B$ in $\X$. Using this notation, recall that composition and identities in $\mathbb{X}_\oc$ are defined as: \begin{align*} \llbracket f;g \rrbracket = \delta_{A}; \oc(\llbracket f \rrbracket); \llbracket g \rrbracket && \llbracket 1_A \rrbracket = \varepsilon_A \end{align*} There are canonical adjoint functors $\mathsf{U}_\oc: \mathbb{X}_\oc \to \mathbb{X}$ and $\mathsf{F}_\oc: \mathbb{X} \to \mathbb{X}_\oc$ defined as follows: \begin{align*} \mathsf{U}_\oc(A) = \oc A && \mathsf{U}_\oc(\llbracket f \rrbracket) = \delta_A; \oc(\llbracket f \rrbracket) && \mathsf{F}_\oc(A) = A && \llbracket \mathsf{F}_\oc(f) \rrbracket = \varepsilon_A ; f \end{align*} We now describe the canonical fibration over the coKleisli category of a coalgebra modality.
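Before doing so, the coKleisli composition formula above can be made concrete in a short sketch. The following Python fragment (our own illustration, using the simple ``coreader'' comonad $\oc A = S \times A$ on sets, which is chosen only to make the formula executable and is not one of the coalgebra modalities of this paper) implements $\llbracket f;g \rrbracket = \delta_{A}; \oc(\llbracket f \rrbracket); \llbracket g \rrbracket$:

```python
# Sketch: coKleisli composition for the "coreader" comonad !A = S x A on Set.
# A coKleisli map [[f]] : !A -> B is a Python function taking a pair (s, a).

def counit(sa):
    # epsilon_A : !A -> A, discards the environment
    s, a = sa
    return a

def comult(sa):
    # delta_A : !A -> !!A, duplicates the environment
    s, a = sa
    return (s, (s, a))

def bang(h):
    # the functor ! = S x (-) applied to a map h : X -> Y
    return lambda sx: (sx[0], h(sx[1]))

def cokleisli_compose(f, g):
    # [[f;g]] = delta_A ; !([[f]]) ; [[g]]
    return lambda sa: g(bang(f)(comult(sa)))

# Example coKleisli maps f : !int -> int and g : !int -> str
f = lambda sa: len(sa[0]) + sa[1]
g = lambda sb: sb[0] + str(sb[1])

print(cokleisli_compose(f, g)(("env", 3)))       # prints env6
print(cokleisli_compose(f, counit)(("env", 3)))  # identity law: prints 6
```

Unfolding the definitions gives the reader-style formula $\llbracket f;g \rrbracket(s,a) = \llbracket g \rrbracket(s, \llbracket f \rrbracket(s,a))$, which is also the shape of composition in the fibres $\L_\oc[X]$ described below.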
\begin{definition} Let $(\oc, \delta, \varepsilon, \Delta, e)$ be a coalgebra modality on a symmetric monoidal category $\mathbb{X}$. Define the category $\L_\oc[\mathbb{X}]$ as follows: \begin{enumerate}[{\em (i)}] \item The objects of $\L_\oc[\mathbb{X}]$ are pairs of objects $(X,A)$ of $\mathbb{X}$; that is, $Ob\left( \L_\oc[\mathbb{X}] \right) = Ob\left( \mathbb{X} \right) \times Ob\left( \mathbb{X} \right)$; \item The maps of $\L_\oc[\mathbb{X}]$ are pairs $(\llbracket f \rrbracket,g): (X,A) \to (Y,B)$ consisting of a coKleisli map ${\llbracket f \rrbracket: \oc X \to Y}$ and a map $g: \oc X \otimes A \to B$, that is, $\L_\oc[\mathbb{X}]\left( (X,A), (Y,B) \right) = \mathbb{X}_\oc(X, Y) \times \mathbb{X}(\oc X \otimes A, B)$; \item The identity map of $(X,A)$ is defined as $(\llbracket 1_X \rrbracket, e_X \otimes 1_A) = (\varepsilon_X, e_X \otimes 1_A): (X,A) \to (X,A)$; \begin{align*} \left(\begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (17) at (9.5, 1) {$\oc X$}; \node [style=object] (20) at (9.5, -1) {$X$}; \node [style=component] (21) at (9.5, 0) {$\varepsilon$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (17) to (21); \draw [style=wire] (21) to (20); \end{pgfonlayer} \end{tikzpicture} \end{array}, \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (17) at (9.75, 1.75) {$\oc X$}; \node [style=object] (19) at (10.5, 1.75) {$A$}; \node [style=object] (21) at (10.5, -0.25) {$A$}; \node [style=component] (22) at (9.75, 0.5) {$e$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (19) to (21); \draw [style=wire] (17) to (22); \end{pgfonlayer} \end{tikzpicture} \end{array} \right) \end{align*} \item The composition of maps $(\llbracket f \rrbracket,g): (X,A) \to (Y,B)$ and $(\llbracket h \rrbracket, k): (Y,B) \to (Z,C)$ is defined as follows: \[ (\llbracket f \rrbracket,g); (\llbracket h \rrbracket, k) = \left( \llbracket f;h \rrbracket,
\xymatrixcolsep{4pc}\xymatrix{ \oc X \otimes A \ar[r]^-{\Delta_X \otimes 1_A} & \oc X \otimes \oc X \otimes A \ar[r]^-{\mathsf{U}_\oc\left( \llbracket f \rrbracket \right) \otimes g} & \oc Y \otimes B \ar[r]^-{k} & C} \right) \] \begin{align*} \left(\begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (17) at (9.5, 1) {$\oc X$}; \node [style=object] (20) at (9.5, -1) {$Y$}; \node [style=component] (21) at (9.5, 0) {$f$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (17) to (21); \draw [style=wire] (21) to (20); \end{pgfonlayer} \end{tikzpicture} \end{array}, \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (17) at (9.75, 1.75) {$\oc X$}; \node [style=object] (19) at (10.75, 1.75) {$A$}; \node [style=component] (20) at (10.25, 0.75) {$g$}; \node [style=object] (21) at (10.25, -0.25) {$B$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (20) to (21); \draw [style=wire, in=165, out=-90] (17) to (20); \draw [style=wire, in=-90, out=15] (20) to (19); \end{pgfonlayer} \end{tikzpicture} \end{array} \right) ; \left(\begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (17) at (9.5, 1) {$\oc Y$}; \node [style=object] (20) at (9.5, -1) {$Z$}; \node [style=component] (21) at (9.5, 0) {$h$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (17) to (21); \draw [style=wire] (21) to (20); \end{pgfonlayer} \end{tikzpicture} \end{array}, \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (17) at (9.75, 1.75) {$\oc Y$}; \node [style=object] (19) at (10.75, 1.75) {$B$}; \node [style=component] (20) at (10.25, 0.75) {$k$}; \node [style=object] (21) at (10.25, -0.25) {$C$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (20) to (21); \draw [style=wire, in=165, out=-90] (17) to (20); \draw [style=wire, in=-90, out=15] (20) to (19); \end{pgfonlayer} 
\end{tikzpicture} \end{array} \right) = \left(\begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (17) at (9.5, 1.75) {$\oc X$}; \node [style=function2] (18) at (9.5, 0) {$f$}; \node [style=component] (19) at (9.5, -1) {$h$}; \node [style=object] (20) at (9.5, -1.75) {$Z$}; \node [style=component] (21) at (9.5, 1) {$\delta$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (18) to (19); \draw [style=wire] (19) to (20); \draw [style=wire] (17) to (21); \draw [style=wire] (21) to (18); \end{pgfonlayer} \end{tikzpicture} \end{array}, \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=component] (15) at (9, 0) {$\delta$}; \node [style=duplicate] (16) at (9.5, 1) {$\Delta$}; \node [style=object] (17) at (9.5, 1.75) {$\oc X$}; \node [style=component] (18) at (10.25, 0) {$g$}; \node [style=object] (19) at (10.75, 1.75) {$A$}; \node [style=component] (20) at (9.75, -1.75) {$k$}; \node [style=object] (21) at (9.75, -2.5) {$C$}; \node [style=function2] (23) at (9, -1) {$f$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire, bend right] (16) to (15); \draw [style=wire] (17) to (16); \draw [style=wire, in=-90, out=15, looseness=0.75] (18) to (19); \draw [style=wire, in=150, out=-30, looseness=1.25] (16) to (18); \draw [style=wire] (20) to (21); \draw [style=wire, in=30, out=-90] (18) to (20); \draw [style=wire] (15) to (23); \draw [style=wire, bend right, looseness=1.25] (23) to (20); \end{pgfonlayer} \end{tikzpicture} \end{array} \right) \end{align*} \end{enumerate} Let $\mathsf{p}_\oc: \L_\oc[\mathbb{X}] \to \mathbb{X}_\oc$ be the forgetful functor, which is defined on objects as $\mathsf{p}_\oc(X,A) = X$ and on maps as ${\mathsf{p}_\oc(\llbracket f \rrbracket,g) = \llbracket f \rrbracket}$. 
\end{definition} The following is then straightforward: \begin{proposition}\label{prop:context_fibration} Let $(\oc, \delta, \varepsilon, \Delta, e)$ be a coalgebra modality on a symmetric monoidal category $\mathbb{X}$. Then $\mathsf{p}_\oc: \L_\oc[\mathbb{X}] \to \mathbb{X}_\oc$ is a fibration where the Cartesian maps are those of the form: \begin{align*} (\llbracket f \rrbracket, e_X \otimes 1_A): (X,A) \to (Y,A) && \left(\begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (17) at (9.5, 1) {$\oc X$}; \node [style=object] (20) at (9.5, -1) {$Y$}; \node [style=component] (21) at (9.5, 0) {$f$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (17) to (21); \draw [style=wire] (21) to (20); \end{pgfonlayer} \end{tikzpicture} \end{array}, \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (17) at (9.75, 1.75) {$\oc X$}; \node [style=object] (19) at (10.5, 1.75) {$A$}; \node [style=object] (21) at (10.5, -0.25) {$A$}; \node [style=component] (22) at (9.75, 0.5) {$e$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (19) to (21); \draw [style=wire] (17) to (22); \end{pgfonlayer} \end{tikzpicture} \end{array} \right) \end{align*} \end{proposition} We now describe the fibres of this fibration. The fibres are examples of Hyland and Schalk's \textbf{comonoid indexing} \cite[Section 4]{HYLAND1999127} over the cofree $\oc$-coalgebras, which are also used by Ehrhard and Jafarrahmani for studying fixed point formulas \cite{ehrhard2021categorical}. In particular, since $\oc X$ is a comonoid, $\oc X \otimes -$ is a comonad and, furthermore, its coKleisli category is precisely the fibre over $X$. \begin{lemma}\label{lem:linfibre} Let $(\oc, \delta, \varepsilon, \Delta, e)$ be a coalgebra modality on a symmetric monoidal category $\mathbb{X}$. 
For any object $X \in \mathbb{X}$, the fibre over $X$ of the fibration $\mathsf{p}_\oc: \L_\oc[\mathbb{X}] \to \mathbb{X}_\oc$ is written as $\L_\oc[X]$ and given by \begin{enumerate}[{\em (i)}] \item The objects of $\L_\oc[X]$ are the same as the objects of $\mathbb{X}$, that is, $Ob\left( \L_\oc[X] \right) = Ob\left( \mathbb{X} \right)$; \item The maps of $\L_\oc[X]$ are maps $f: \oc X \otimes A \to B$, that is, $\L_\oc[X]\left( A,B \right) = \mathbb{X}(\oc X \otimes A, B)$; \item The identity map of $A$ is defined as $e_X \otimes 1_A: \oc X \otimes A \to A$; \[ \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (17) at (9.75, 1.75) {$\oc X$}; \node [style=object] (19) at (10.5, 1.75) {$A$}; \node [style=object] (21) at (10.5, -0.25) {$A$}; \node [style=component] (22) at (9.75, 0.5) {$e$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (19) to (21); \draw [style=wire] (17) to (22); \end{pgfonlayer} \end{tikzpicture} \end{array} \] \item The composition of maps $f: \oc X \otimes A \to B$ and $g: \oc X \otimes B \to C$ is defined as follows: \begin{align*} \xymatrixcolsep{3pc}\xymatrix{ \oc X \otimes A \ar[r]^-{\Delta_X \otimes 1_A} & \oc X \otimes \oc X \otimes A \ar[r]^-{1_{\oc X} \otimes f} & \oc X \otimes B \ar[r]^-{g} & C} && \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (17) at (9.75, 1.75) {$\oc X$}; \node [style=object] (19) at (10.75, 1.75) {$A$}; \node [style=component] (20) at (10.25, 0.75) {$f$}; \node [style=object] (21) at (10.25, -0.25) {$B$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (20) to (21); \draw [style=wire, in=165, out=-90] (17) to (20); \draw [style=wire, in=-90, out=15] (20) to (19); \end{pgfonlayer} \end{tikzpicture} \end{array}; \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (17) at (9.75, 1.75) {$\oc X$}; \node [style=object] (19) at (10.75, 1.75) {$B$}; \node 
[style=component] (20) at (10.25, 0.75) {$g$}; \node [style=object] (21) at (10.25, -0.25) {$C$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (20) to (21); \draw [style=wire, in=165, out=-90] (17) to (20); \draw [style=wire, in=-90, out=15] (20) to (19); \end{pgfonlayer} \end{tikzpicture} \end{array}= \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=duplicate] (1) at (9.5, 1) {$\Delta$}; \node [style=object] (2) at (9.5, 1.75) {$\oc X$}; \node [style=component] (3) at (10.5, 0) {$f$}; \node [style=object] (4) at (11, 1.75) {$A$}; \node [style=component] (5) at (9.75, -1.25) {$g$}; \node [style=object] (6) at (9.75, -2) {$C$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (2) to (1); \draw [style=wire, in=-90, out=15, looseness=0.75] (3) to (4); \draw [style=wire] (5) to (6); \draw [style=wire, in=30, out=-90] (3) to (5); \draw [style=wire, in=165, out=-150] (1) to (5); \draw [style=wire, in=165, out=-15, looseness=1.25] (1) to (3); \end{pgfonlayer} \end{tikzpicture} \end{array} \end{align*} \end{enumerate} For every coKleisli map $\llbracket h \rrbracket: \oc X \to Y$, define the \textbf{substitution functor} $\llbracket h \rrbracket^\ast_\oc: \L_\oc[Y] \to \L_\oc[X]$ on objects as $\llbracket h \rrbracket^\ast_\oc(A) = A$ and on maps $f: \oc Y \otimes A \to B$ as follows: \begin{align*} \llbracket h \rrbracket^\ast_\oc(f) := \xymatrixcolsep{5pc}\xymatrix{ \oc X \otimes A \ar[r]^-{\mathsf{U}_\oc\left( \llbracket h \rrbracket \right) \otimes 1_A} & \oc Y \otimes A \ar[r]^-{f} & B} && \llbracket h \rrbracket^\ast_\oc \left( \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (17) at (9.75, 1.75) {$\oc Y$}; \node [style=object] (19) at (10.75, 1.75) {$A$}; \node [style=component] (20) at (10.25, 0.75) {$f$}; \node [style=object] (21) at (10.25, -0.25) {$B$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (20) to (21); \draw [style=wire, in=165, out=-90] (17) to
(20); \draw [style=wire, in=-90, out=15] (20) to (19); \end{pgfonlayer} \end{tikzpicture} \end{array} \right) = \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=component] (0) at (9, 0.25) {$\delta$}; \node [style=object] (1) at (10.5, 1.25) {$A$}; \node [style=component] (2) at (9.75, -1.75) {$f$}; \node [style=object] (3) at (9.75, -2.75) {$B$}; \node [style=function2] (4) at (9, -0.75) {$h$}; \node [style=object] (5) at (9, 1.25) {$\oc X$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (2) to (3); \draw [style=wire, bend right, looseness=1.25] (4) to (2); \draw [style=wire] (0) to (4); \draw [style=wire, in=30, out=-90, looseness=0.75] (1) to (2); \draw [style=wire] (5) to (0); \end{pgfonlayer} \end{tikzpicture} \end{array} \end{align*} \end{lemma} Every fibre is also a symmetric monoidal category. \begin{lemma}\cite[Proposition 4.1]{HYLAND1999127} Let $(\oc, \delta, \varepsilon, \Delta, e)$ be a coalgebra modality on a symmetric monoidal category $\mathbb{X}$. 
For every object $X \in \mathbb{X}$, $\L_\oc[X]$ is a symmetric monoidal category where the tensor product $\otimes$ is defined on objects as the tensor product in $\mathbb{X}$, and on maps $f: \oc X \otimes A \to B$ and $g: \oc X \otimes C \to D$ as follows: \[ f \otimes g := \xymatrixcolsep{4.5pc}\xymatrix{ \oc X \otimes A \otimes C \ar[r]^-{\Delta_X \otimes 1_A \otimes 1_C} & \oc X \otimes \oc X \otimes A \otimes C \ar[r]^-{1_{\oc X} \otimes \sigma_{\oc X, A} \otimes 1_C} & \oc X \otimes A \otimes \oc X \otimes C \ar[r]^-{f \otimes g} & B \otimes D} \] \begin{align*} \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (17) at (9.75, 1.75) {$\oc X$}; \node [style=object] (19) at (10.75, 1.75) {$A$}; \node [style=component] (20) at (10.25, 0.75) {$f$}; \node [style=object] (21) at (10.25, -0.25) {$B$}; \node [style=object] (22) at (11.25, 0.75) {$\bigotimes$}; \node [style=object] (23) at (11.75, 1.75) {$\oc X$}; \node [style=object] (24) at (12.75, 1.75) {$C$}; \node [style=component] (25) at (12.25, 0.75) {$g$}; \node [style=object] (26) at (12.25, -0.25) {$D$}; \node [style=object] (27) at (13.25, 0.75) {$=$}; \node [style=component] (30) at (14.5, 0.25) {$f$}; \node [style=object] (31) at (14.5, -0.75) {$B$}; \node [style=object] (32) at (14.5, 2.25) {$\oc X$}; \node [style=object] (33) at (15.5, 2.25) {$A$}; \node [style=object] (34) at (16.5, 2.25) {$C$}; \node [style=duplicate] (35) at (14.5, 1.5) {$\Delta$}; \node [style=component] (36) at (15.75, 0.25) {$g$}; \node [style=object] (37) at (15.75, -0.75) {$D$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (20) to (21); \draw [style=wire, in=165, out=-90] (17) to (20); \draw [style=wire, in=-90, out=15] (20) to (19); \draw [style=wire] (25) to (26); \draw [style=wire, in=165, out=-90] (23) to (25); \draw [style=wire, in=-90, out=15] (25) to (24); \draw [style=wire] (30) to (31); \draw [style=wire] (32) to (35); \draw [style=wire] (36) to (37); \draw 
[style=wire, in=135, out=-15, looseness=0.75] (35) to (36); \draw [style=wire, in=-180, out=-150, looseness=1.50] (35) to (30); \draw [style=wire, in=0, out=-90] (33) to (30); \draw [style=wire, in=15, out=-90] (34) to (36); \end{pgfonlayer} \end{tikzpicture} \end{array} \end{align*} and where the monoidal unit is the same as in $\mathbb{X}$. Furthermore, for every coKleisli map $\llbracket h \rrbracket: \oc X \to Y$, the substitution functor $\llbracket h \rrbracket^\ast_\oc: \L_\oc[Y] \to \L_\oc[X]$ is a strict symmetric monoidal functor. \end{lemma} That is, as described in \cite[Remark 3.5]{monoidal_fibrations}, this fibration is a pseudomonoid in the 2-category of fibrations over $\mathbb{X}_\oc$. This is different from a monoidal fibration, which is defined to be a pseudomonoid in the 2-category of fibrations with non-fixed base \cite[Definition 3.1]{monoidal_fibrations}. However, if the base category $\mathbb{X}$ has finite products, then so does $\mathbb{X}_\oc$: on objects the product is defined as in $\mathbb{X}$, and the remaining data is defined as follows: \begin{align*} \llbracket \pi_0 \rrbracket = \varepsilon_{A \times B}; \pi_0 && \llbracket \pi_1 \rrbracket = \varepsilon_{A \times B}; \pi_1 && \llbracket \langle f, g \rangle \rrbracket = \left \langle \llbracket f \rrbracket, \llbracket g \rrbracket \right \rangle && \llbracket f \times g \rrbracket = \left \langle \oc(\pi_0); \llbracket f \rrbracket , \oc(\pi_1); \llbracket g \rrbracket \right \rangle \end{align*} Moreover, such a fibration in which the base category is Cartesian is a monoidal fibration: see \cite[Theorem 4.1]{monoidal_fibrations} and \cite[Theorem 12.8]{shulman_monoidal_fibrations}. In particular, this means that the total category of the fibration is monoidal, and the following corollary describes its structure. \begin{corollary} Let $(\oc, \delta, \varepsilon, \Delta, e)$ be a coalgebra modality on a symmetric monoidal category $\mathbb{X}$ with finite products.
Then $\L_\oc[\mathbb{X}]$ is a symmetric monoidal category where the tensor product $\otimes_\oc$ is defined on objects as $(X,A) \otimes (Y,B) = (X \times Y, A \otimes B)$, and on maps $(\llbracket f \rrbracket, g): (X,A) \to (Y,B)$ and $(\llbracket h \rrbracket, k): (Z,C) \to (W,D)$, $ (\llbracket f \rrbracket, g) \otimes (\llbracket h \rrbracket, k)$ is defined as follows: \[ \left(\llbracket f \times h \rrbracket, \xymatrixcolsep{4pc}\xymatrix{ \oc (X \times Z) \!\otimes\! A \!\otimes\! C \ar[r]^-{\chi_{X,Z} \otimes 1_A \otimes 1_C} & \oc X \!\otimes\! \oc Z \!\otimes\! A \!\otimes\! C \ar[r]^-{1_{\oc X} \otimes \sigma_{\oc Z, A} \otimes 1_C} & \oc X \otimes A \otimes \oc Z \otimes C \ar[r]^-{g \otimes k} & B \otimes D} \right) \] \begin{align*} \left(\begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (17) at (9.5, 1) {$\oc X$}; \node [style=object] (20) at (9.5, -1) {$Y$}; \node [style=component] (21) at (9.5, 0) {$f$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (17) to (21); \draw [style=wire] (21) to (20); \end{pgfonlayer} \end{tikzpicture} \end{array}, \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (17) at (9.75, 1.75) {$\oc X$}; \node [style=object] (19) at (10.75, 1.75) {$A$}; \node [style=component] (20) at (10.25, 0.75) {$g$}; \node [style=object] (21) at (10.25, -0.25) {$B$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (20) to (21); \draw [style=wire, in=165, out=-90] (17) to (20); \draw [style=wire, in=-90, out=15] (20) to (19); \end{pgfonlayer} \end{tikzpicture} \end{array} \right) \otimes \left(\begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (17) at (9.5, 1) {$\oc Z$}; \node [style=object] (20) at (9.5, -1) {$W$}; \node [style=component] (21) at (9.5, 0) {$h$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (17) to (21); \draw [style=wire] (21) to (20);
\end{pgfonlayer} \end{tikzpicture} \end{array}, \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (17) at (9.75, 1.75) {$\oc Z$}; \node [style=object] (19) at (10.75, 1.75) {$C$}; \node [style=component] (20) at (10.25, 0.75) {$k$}; \node [style=object] (21) at (10.25, -0.25) {$D$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (20) to (21); \draw [style=wire, in=165, out=-90] (17) to (20); \draw [style=wire, in=-90, out=15] (20) to (19); \end{pgfonlayer} \end{tikzpicture} \end{array} \right) = \left(\begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (17) at (9.5, 1.5) {$\oc (X \times Z)$}; \node [style=object] (20) at (9.5, -1.5) {$Y \times W$}; \node [style=component] (21) at (9.5, 0) {$\llbracket f \times h \rrbracket$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (17) to (21); \draw [style=wire] (21) to (20); \end{pgfonlayer} \end{tikzpicture} \end{array}, \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=component] (30) at (14.5, 0.25) {$g$}; \node [style=object] (31) at (14.5, -0.75) {$B$}; \node [style=object] (32) at (14.5, 2.25) {$\oc (X \times Z)$}; \node [style=object] (33) at (15.5, 2.25) {$A$}; \node [style=object] (34) at (16.5, 2.25) {$C$}; \node [style=duplicate] (35) at (14.5, 1.5) {$\chi$}; \node [style=component] (36) at (15.75, 0.25) {$k$}; \node [style=object] (37) at (15.75, -0.75) {$D$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (30) to (31); \draw [style=wire] (32) to (35); \draw [style=wire] (36) to (37); \draw [style=wire, in=135, out=-15, looseness=0.75] (35) to (36); \draw [style=wire, in=-180, out=-150, looseness=1.50] (35) to (30); \draw [style=wire, in=0, out=-90] (33) to (30); \draw [style=wire, in=15, out=-90] (34) to (36); \end{pgfonlayer} \end{tikzpicture} \end{array} \right) \end{align*} and where the monoidal unit is $(\top, k)$.
Furthermore, $\mathsf{p}_\oc: \L_\oc[\mathbb{X}] \to \mathbb{X}_\oc$ is a monoidal fibration in the sense of \cite[Definition 3.1]{monoidal_fibrations}. \end{corollary} It is also interesting to note that for monoidal coalgebra modalities, each fibre also comes equipped with monoidal coalgebra modality structure \cite{ehrhard2021categorical,HYLAND1999127}. Furthermore, if one also assumes finite products, we can extend this to a monoidal coalgebra modality on the whole fibration. However, these results are not necessary for the rest of this paper. \subsection{Monoidal Differential Categories} We now recall one of the central structures of this paper: monoidal differential categories (these were originally simply called differential categories, but here we add ``monoidal'' to help differentiate the various structures we are considering). For a more detailed introduction to monoidal differential categories, we refer the reader to \cite{Blute2019,blute2006differential}. \begin{definition}\label{def:diffcat} A \textbf{monoidal differential category} \cite[Definition 2.4]{blute2006differential} is an additive symmetric monoidal category $\mathbb{X}$ with a coalgebra modality $(\oc, \delta, \varepsilon, \Delta, e)$ which comes equipped with a \textbf{deriving transformation} \cite[Definition 7]{Blute2019}; that is, a natural transformation $\mathsf{d}_A: \oc A \otimes A \to \oc A$, which is drawn in the graphical calculus as: \[\mathsf{d}:= \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (0) at (1.75, 2.75) {$A$}; \node [style=object] (1) at (1.25, 1.25) {$\oc A$}; \node [style=integral] (2) at (1.25, 2) {{\bf =\!=\!=\!=}}; \node [style=object] (3) at (0.75, 2.75) {$\oc A$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire, bend right] (2) to (0); \draw [style=wire] (1) to (2); \draw [style=wire, bend left] (2) to (3); \end{pgfonlayer} \end{tikzpicture} \end{array}\] and such that the following axioms
hold: \begin{description} \item[{\bf [d.1]}] Constant Rule: \begin{align*} \begin{array}[c]{c} \xymatrixcolsep{5pc}\xymatrix{\oc A \otimes A \ar[dr]_-{0} \ar[r]^-{\mathsf{d}_A} & \oc A \ar[d]^-{e_A} \\ & k} \end{array}&& \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=component] (0) at (1.25, 0) {$e$}; \node [style=object] (1) at (2, 2) {$A$}; \node [style=differential] (2) at (1.25, 1) {{\bf =\!=\!=\!=}}; \node [style=object] (3) at (0.5, 2) {$\oc A$}; \node [style=port] (4) at (2.25, 1) {$=$}; \node [style=port] (5) at (3, 1) {$0$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire, bend right] (2) to (1); \draw [style=wire, bend left] (2) to (3); \draw [style=wire] (2) to (0); \end{pgfonlayer} \end{tikzpicture} \end{array} \end{align*} \item[{\bf [d.2]}] Leibniz Rule (or Product Rule): \begin{align*} \begin{array}[c]{c} \xymatrixcolsep{5pc}\xymatrix{\oc A \otimes A \ar[d]_-{\Delta_A \otimes 1_A} \ar[rr]^-{\mathsf{d}_A} && \oc A \ar[d]^-{\Delta_A} \\ \oc A \otimes \oc A \otimes A \ar[rr]_-{(1_{\oc A} \otimes \mathsf{d}_A) + (1_{\oc A} \otimes \sigma_{\oc A, A})(\mathsf{d}_A \otimes 1_{\oc A})} && \oc A \otimes \oc A } \end{array} && \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=differential] (6) at (7, 1) {{\bf =\!=\!=\!=}}; \node [style=object] (7) at (7.75, 2) {$A$}; \node [style=object] (8) at (6.25, 2) {$\oc A$}; \node [style=object] (9) at (6.25, -0.75) {$\oc A$}; \node [style=duplicate] (10) at (7, 0.25) {$\Delta$}; \node [style=object] (11) at (7.75, -0.75) {$\oc A$}; \node [style=object] (12) at (9.25, 2) {$\oc A$}; \node [style=differential] (13) at (9, 0) {{\bf =\!=\!=\!=}}; \node [style=object] (14) at (10.75, 2) {$A$}; \node [style=duplicate] (15) at (9.25, 1) {$\Delta$}; \node [style=object] (16) at (9, -0.75) {$\oc A$}; \node [style=object] (17) at (10.5, -0.75) {$\oc A$}; \node [style=object] (18) at (13.5, 2) {$A$}; \node [style=differential] (19) at (13, 0) 
{{\bf =\!=\!=\!=}}; \node [style=object] (20) at (13, -0.75) {$\oc A$}; \node [style=object] (21) at (11.75, -0.75) {$\oc A$}; \node [style=object] (22) at (12.25, 2) {$\oc A$}; \node [style=duplicate] (23) at (12.25, 1) {$\Delta$}; \node [style=port] (24) at (8, 0.5) {$=$}; \node [style=port] (25) at (11.25, 0.5) {$+$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire, bend right] (6) to (7); \draw [style=wire, bend left] (6) to (8); \draw [style=wire, bend right] (10) to (9); \draw [style=wire, bend left] (10) to (11); \draw [style=wire] (6) to (10); \draw [style=wire, in=-90, out=45] (13) to (14); \draw [style=wire, in=150, out=-150, looseness=1.50] (15) to (13); \draw [style=wire] (12) to (15); \draw [style=wire] (13) to (16); \draw [style=wire, bend left, looseness=1.25] (15) to (17); \draw [style=wire, in=-90, out=60, looseness=1.25] (19) to (18); \draw [style=wire, in=91, out=-135, looseness=0.75] (23) to (21); \draw [style=wire, in=150, out=-30] (23) to (19); \draw [style=wire] (22) to (23); \draw [style=wire] (19) to (20); \end{pgfonlayer} \end{tikzpicture} \end{array} \end{align*} \item[{\bf [d.3]}] Linear Rule: \begin{align*} \begin{array}[c]{c} \xymatrixcolsep{5pc}\xymatrix{\oc A \otimes A \ar[r]^-{\mathsf{d}_A} \ar[dr]_-{e_A \otimes 1_A} & \oc A \ar[d]^-{\varepsilon_A} \\ & A } \end{array} && \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (0) at (3.25, 1.75) {$\oc A$}; \node [style=object] (1) at (4, 1.75) {$A$}; \node [style=object] (2) at (4, -1) {$A$}; \node [style=component] (3) at (3.25, 0.5) {$e$}; \node [style=object] (4) at (2, 1.75) {$A$}; \node [style=object] (5) at (0.5, 1.75) {$\oc A$}; \node [style=component] (6) at (1.25, 0) {$\varepsilon$}; \node [style=differential] (7) at (1.25, 0.75) {{\bf =\!=\!=\!=}}; \node [style=object] (8) at (1.25, -1) {$A$}; \node [style=port] (9) at (2.5, 0.5) {$=$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (0) to (3); \draw 
[style=wire] (1) to (2); \draw [style=wire, bend right] (7) to (4); \draw [style=wire, bend left] (7) to (5); \draw [style=wire] (7) to (6); \draw [style=wire] (6) to (8); \end{pgfonlayer} \end{tikzpicture} \end{array} \end{align*} \item[{\bf [d.4]}] Chain Rule: \begin{align*} \begin{array}[c]{c} \xymatrixcolsep{5pc}\xymatrix{\oc A \otimes A \ar[d]_-{\Delta_A \otimes 1_A} \ar[rr]^-{\mathsf{d}_A} && \oc A \ar[d]^-{\delta_A} \\ \oc A \otimes \oc A \otimes A \ar[r]_-{\delta_A \otimes \mathsf{d}_A} & \oc \oc A \otimes \oc A \ar[r]_-{\mathsf{d}_{\oc A}} & \oc \oc A } \end{array}&& \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (10) at (7.75, 1.75) {$A$}; \node [style=differential] (11) at (7, 0.75) {{\bf =\!=\!=\!=}}; \node [style=object] (12) at (7, -1.25) {$\oc \oc A$}; \node [style=object] (13) at (6.25, 1.75) {$\oc A$}; \node [style=component] (14) at (7, -0.25) {$\delta$}; \node [style=component] (15) at (9, 0) {$\delta$}; \node [style=duplicate] (16) at (9.5, 1) {$\Delta$}; \node [style=object] (17) at (9.5, 1.75) {$\oc A$}; \node [style=differential] (18) at (10.25, 0) {{\bf =\!=\!=\!=}}; \node [style=object] (19) at (10.5, 1.75) {$A$}; \node [style=differential] (20) at (9.75, -1) {{\bf =\!=\!=\!=}}; \node [style=object] (21) at (9.75, -1.75) {$\oc \oc A$}; \node [style=port] (22) at (8, 0.25) {$=$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire, bend right] (11) to (10); \draw [style=wire, bend left] (11) to (13); \draw [style=wire] (11) to (14); \draw [style=wire] (14) to (12); \draw [style=wire, bend right] (16) to (15); \draw [style=wire] (17) to (16); \draw [style=wire, bend right] (18) to (19); \draw [style=wire, in=150, out=-30, looseness=1.25] (16) to (18); \draw [style=wire] (20) to (21); \draw [style=wire, in=30, out=-90] (18) to (20); \draw [style=wire, in=150, out=-90] (15) to (20); \end{pgfonlayer} \end{tikzpicture} \end{array} \end{align*} \item[{\bf [d.5]}] Interchange Rule: 
\begin{align*} \begin{array}[c]{c} \xymatrixcolsep{5pc}\xymatrix{\oc A \otimes A \otimes A \ar[d]_-{\mathsf{d} _A \otimes 1_A}\ar[r]^-{1_{\oc A} \otimes \sigma_{A,A}} & \oc A \otimes A \otimes A \ar[r]^-{\mathsf{d}_A \otimes 1_A} & \oc A \otimes A \ar[d]^-{\mathsf{d}_A} \\ \oc A \otimes A \ar[rr]_-{\mathsf{d}_A} && \oc A } \end{array}&& \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (23) at (14, 1.75) {$A$}; \node [style=object] (24) at (13.5, 1.75) {$A$}; \node [style=object] (25) at (13.5, -0.75) {$\oc A$}; \node [style=object] (26) at (12.5, 1.75) {$\oc A$}; \node [style=codifferential] (27) at (13.5, 0) {{\bf =\!=\!=\!=}}; \node [style=codifferential] (28) at (13, 1) {{\bf =\!=\!=\!=}}; \node [style=object] (29) at (15.25, 1.75) {$\oc A$}; \node [style=codifferential] (30) at (15.75, 1) {{\bf =\!=\!=\!=}}; \node [style=codifferential] (31) at (16.25, 0) {{\bf =\!=\!=\!=}}; \node [style=object] (32) at (16.25, -0.75) {$\oc A$}; \node [style=object] (33) at (16.25, 1.75) {$A$}; \node [style=object] (34) at (16.75, 1.75) {$A$}; \node [style=port] (35) at (14.75, 0.75) {$=$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire, bend left=15, looseness=1.25] (27) to (28); \draw [style=wire, bend right] (27) to (23); \draw [style=wire] (25) to (27); \draw [style=wire, bend left] (28) to (26); \draw [style=wire, bend right] (28) to (24); \draw [style=wire, bend left=15, looseness=1.25] (31) to (30); \draw [style=wire] (32) to (31); \draw [style=wire, bend left] (30) to (29); \draw [style=wire, in=45, out=-90] (34) to (30); \draw [style=wire, in=45, out=-90, looseness=1.50] (33) to (31); \end{pgfonlayer} \end{tikzpicture} \end{array} \end{align*} \end{description} \end{definition} For lists of many examples of differential categories, we invite the reader to see \cite{Blute2019,blute2006differential}. 
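In the prototypical models, the deriving transformation is ordinary differentiation, and the rules above abstract familiar identities of elementary calculus: [d.1] says constants have zero derivative, [d.2] is the product rule, [d.3] says differentiating a linear map returns the map itself, and [d.4] is the chain rule. A minimal numerical sketch of this intuition (illustrative only; `diff` is a finite-difference stand-in for $\mathsf{d}$ and is not part of the categorical formalism):

```python
# Numerical sketch (illustrative): with `diff` a finite-difference stand-in for
# the deriving transformation, the Leibniz rule [d.2] and chain rule [d.4]
# specialize to the classical product and chain rules of calculus.

def diff(f, x, h=1e-6):
    """Central-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

f = lambda x: x**3 + 2 * x
g = lambda x: 5 * x**2 - 1
x0 = 1.5

# [d.2] Leibniz rule: (f * g)' = f' * g + f * g'
leibniz_lhs = diff(lambda x: f(x) * g(x), x0)
leibniz_rhs = diff(f, x0) * g(x0) + f(x0) * diff(g, x0)
assert abs(leibniz_lhs - leibniz_rhs) < 1e-3

# [d.4] Chain rule: d/dx g(f(x)) = g'(f(x)) * f'(x)
chain_lhs = diff(lambda x: g(f(x)), x0)
chain_rhs = diff(g, f(x0)) * diff(f, x0)
assert abs(chain_lhs - chain_rhs) < 1e-3
```

The interchange rule [d.5] similarly reflects the symmetry of mixed second derivatives in this reading.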
\begin{definition} A \textbf{monoidal differential storage category} \cite[Section 4]{blute2006differential} is a monoidal differential category with finite products whose coalgebra modality has Seely isomorphisms. \end{definition} For a monoidal differential storage category, the differential structure can also equivalently be described in terms of a \textbf{codereliction} \cite[Definition 4.11]{blute2006differential}, which is a natural transformation $\eta_A: A \to \oc A$ satisfying the axioms found in \cite[Definition 9]{Blute2019}. By \cite[Theorem 4]{Blute2019}, for coalgebra modalities with Seely isomorphisms (or more generally monoidal coalgebra modalities), there is a bijective correspondence between coderelictions and deriving transformations. Starting with a deriving transformation $\mathsf{d}$, we construct a codereliction as follows: \begin{align*} \eta_A := \xymatrixcolsep{5pc}\xymatrix{A \ar[r]^-{u_A \otimes 1_A} & \oc A \otimes A \ar[r]^-{\mathsf{d}_A} & \oc A } && \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=circle] (0) at (0, 2.25) {$A$}; \node [style=circle] (1) at (0, -0.25) {$\oc A$}; \node [style={component}] (2) at (0, 1) {$\eta$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (0) to (2); \draw [style=wire] (2) to (1); \end{pgfonlayer} \end{tikzpicture} \end{array}= \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=component] (0) at (0.5, 2) {$u$}; \node [style=differential] (1) at (1, 1) {{\bf =\!=\!=}}; \node [style=object] (2) at (1.75, 2.5) {$A$}; \node [style=object] (3) at (1, 0.25) {$\oc A$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire, bend left] (1) to (0); \draw [style=wire, bend right] (1) to (2); \draw [style=wire] (1) to (3); \end{pgfonlayer} \end{tikzpicture} \end{array} \end{align*} Conversely, starting with a codereliction $\eta$, we construct a deriving transformation as follows: \begin{align*} \mathsf{d}_A := 
\xymatrixcolsep{5pc}\xymatrix{\oc A \otimes A \ar[r]^-{1_{\oc A} \otimes \eta_A} & \oc A \otimes \oc A \ar[r]^-{\nabla_A} & \oc A } && \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (0) at (1.75, 2.75) {$A$}; \node [style=object] (1) at (1.25, 1.25) {$\oc A$}; \node [style=integral] (2) at (1.25, 2) {{\bf =\!=\!=\!=}}; \node [style=object] (3) at (0.75, 2.75) {$\oc A$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire, bend right] (2) to (0); \draw [style=wire] (1) to (2); \draw [style=wire, bend left] (2) to (3); \end{pgfonlayer} \end{tikzpicture} \end{array}= \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=component] (0) at (2, 2.5) {$\eta$}; \node [style=duplicate] (1) at (1.25, 1.5) {$\nabla$}; \node [style=object] (2) at (0.5, 3.25) {$\oc A$}; \node [style=object] (3) at (1.25, 0.75) {$\oc A$}; \node [style=object] (4) at (2, 3.25) {$A$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (3) to (1); \draw [style=wire, in=-90, out=0, looseness=1.25] (1) to (0); \draw [style=wire] (0) to (4); \draw [style=wire, in=-90, out=180] (1) to (2); \end{pgfonlayer} \end{tikzpicture} \end{array} \end{align*} These constructions are inverses of each other. \subsection{Cartesian Differential Categories} Another central structure of this paper is a \emph{Cartesian} differential category. For a full detailed introduction to Cartesian differential categories, see \cite{blute2009cartesian,garner2021cartesian}. The underlying structure of a Cartesian differential category is that of a Cartesian left additive category. A category is said to be \emph{left} additive if it is \emph{skew}-enriched over the category of commutative monoids \cite[Section 2.1]{garner2021cartesian}, or in other words, if each hom-set is a commutative monoid such that pre-composition preserves the additive structure. 
This allows one to have zero maps and sums of maps while allowing for maps which do not preserve the additive structure. Maps which do preserve the additive structure are called \emph{additive} maps. \begin{definition}\label{CLACdef} A \textbf{Cartesian left additive category} \cite[Definition 1.2.1]{blute2009cartesian} is a category $\mathbb{X}$ with finite products such that each hom-set $\mathbb{X}(A,B)$ is a commutative monoid with zero map $0 \in \mathbb{X}(A,B)$ and addition ${+: \mathbb{X}(A,B) \times \mathbb{X}(A,B) \to \mathbb{X}(A,B)}$, $(f,g) \mapsto f + g$, such that: \begin{enumerate}[{\em (i)}] \item Pre-composition preserves the additive structure; that is, the following equalities hold: $f;0 = 0$ and $f;(g+h) = f;g + f;h$. \item Post-composition by the projection maps preserves the additive structure; that is, the following equalities hold: $0;\pi_i= 0$ and $(f+g);\pi_i = f;\pi_i + g;\pi_i$. \end{enumerate} In a Cartesian left additive category, a map $f: A \to B$ is \textbf{additive} \cite[Definition 1.1.1]{blute2009cartesian} if post-composition by $f$ preserves the additive structure; that is, the following equalities hold: $(g+h);f = g;f + h;f$ and $0;f = 0$ (note that the projection maps are additive). \end{definition} We now record some important maps for Cartesian differential categories which can be defined in any Cartesian left additive category. In a Cartesian left additive category $\mathbb{X}$: \begin{enumerate}[{\em (i)}] \item \label{injdef} For each pair of objects $A$ and $B$, define the \textbf{injection maps} $\iota_0: A \to A \times B$ and $\iota_1: B \to A \times B$ respectively as $\iota_0 := \langle 1_A, 0 \rangle$ and $\iota_1 := \langle 0, 1_B \rangle$. \item \label{nabladef} For each object $A$, define the \textbf{sum map} $+_A: A \times A \to A$ as $+_A := \pi_0 + \pi_1$.
\item \label{elldef} For each object $A$, define the \textbf{lifting map} $\ell_A: A \times A \to (A \times A) \times (A \times A)$ as follows: $\ell_A := \iota_0 \times \iota_1$. \item \label{cdef} For each object $A$, define the \textbf{interchange map} $c_A: (A \times A) \times (A \times A) \to (A \times A) \times (A \times A)$ as follows: $c_A : = \left \langle \pi_0 \times \pi_0, \pi_1 \times \pi_1 \right \rangle$. \end{enumerate} Observe that while $c$ is natural in the obvious sense, the same cannot be said for the rest. Indeed, the injection maps $\iota_j$, the sum map $+$, and the lifting map $\ell$ are not natural transformations. In particular, since the injection maps are not natural, they do not make the product a coproduct, and therefore the product is not a biproduct. However, the well-known biproduct identities still hold in a Cartesian left additive category. \begin{definition}\label{cartdiffdef} A \textbf{Cartesian differential category} (CDC) \cite[Definition 2.1.1]{blute2009cartesian} is a Cartesian left additive category $\mathbb{X}$ equipped with a \textbf{differential combinator} $\mathsf{D}$, which is a family of operators: \begin{align*} \mathsf{D}: \mathbb{X}(A,B) \to \mathbb{X}(A \times A,B) && (f: A\to B) \mapsto (\mathsf{D}[f]: A \times A \to B) \end{align*} where $\mathsf{D}[f]$ is called the \textbf{derivative} of $f$, such that the following seven axioms hold: \begin{enumerate}[{\bf [CD.1]}] \item Additivity of differentiation: $\mathsf{D}[f+g] = \mathsf{D}[f] + \mathsf{D}[g]$ and $\mathsf{D}[0]=0$; \item Additivity of the derivative in its second variable: $(1_A \times +_A); \mathsf{D}[f] = (1_A \times \pi_0); \mathsf{D}[f] + (1_A \times \pi_1);\mathsf{D}[f]$ and $\iota_0; \mathsf{D}[f]=0$; \item Coherence with identities and projections: $\mathsf{D}[1_A]=\pi_1$, $\mathsf{D}[\pi_0] = \pi_1;\pi_0$ and $\mathsf{D}[\pi_1] = \pi_1;\pi_1$; \item Coherence with pairings: $\mathsf{D}[\langle f, g \rangle] = \langle
\mathsf{D}[f] , \mathsf{D}[g] \rangle$; \item Chain rule: $\mathsf{D}[f;g] = \langle \pi_0; f, \mathsf{D}[f] \rangle; \mathsf{D}[g]$; \item Linearity of the derivative in its second variable: $\ell_A; \mathsf{D}\!\left[\mathsf{D}[f] \right] = \mathsf{D}[f]$; \item Symmetry of mixed partial derivatives: $c_A; \mathsf{D}\!\left[\mathsf{D}[f] \right]= \mathsf{D}\left[\mathsf{D}[f] \right]$. \end{enumerate} \end{definition} More discussion of the intuition behind the differential combinator axioms can be found in \cite[Remark 2.1.3]{blute2009cartesian}. There are many interesting (and sometimes very exotic) examples of Cartesian differential categories in the literature: see \cite{cockett2020linearizing,garner2021cartesian}. \subsection{Linear Fibration of a Cartesian Differential Category} Just as any monoidal differential category has a canonical fibration associated to it, so too does a Cartesian differential category. To understand this fibration, we begin by describing what it means for a map in a Cartesian differential category to be linear. \begin{definition}\label{def:linear-def} In a Cartesian differential category $\mathbb{X}$ with differential combinator $\mathsf{D}$, a map $f: A \to B$ is \textbf{linear} \cite[Definition 2.2.1]{blute2009cartesian} if the following diagram commutes: \[ \xymatrixcolsep{5pc}\xymatrix{A \times A \ar[dr]_-{\pi_1} \ar[rr]^-{\mathsf{D}[f]} && B \\ & A \ar[ur]_-{f} } \] or equivalently \cite[Lemma 12]{cockett_et_al:LIPIcs:2020:11661}, if the following diagram commutes: \[ \xymatrixcolsep{5pc}\xymatrix{A \ar[dr]_-{\iota_1} \ar[rr]^-{f} && B \\ & A \times A \ar[ur]_-{\mathsf{D}[f]} } \] Define the subcategory of linear maps $\mathsf{LIN}[\mathbb{X}]$ to be the category whose objects are those of $\mathbb{X}$ and whose maps are the linear maps of $\mathbb{X}$. \end{definition} A modification of this notion allows one to describe maps which are only ``linear in one variable''.
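Concretely, in the standard smooth example where $\mathsf{D}[f](x,v)$ is the directional derivative of $f$ at the point $x$ along $v$, a map is linear in the above sense precisely when $\mathsf{D}[f](x,v) = f(v)$ independently of the base point $x$; the definition that follows relaxes this to linearity in one argument only. A numerical sketch of this interpretation (illustrative only; `D` is a finite-difference stand-in for the differential combinator):

```python
# Numerical sketch (illustrative): with D[f](x, v) the directional derivative
# of f at x along v, linearity says D[f](x, v) = f(v) for all base points x.

def D(f, x, v, h=1e-6):
    """Finite-difference stand-in for the differential combinator D[f](x, v)."""
    return (f(x + h * v) - f(x - h * v)) / (2 * h)

linear = lambda x: 3.0 * x     # an R-linear map
nonlinear = lambda x: x * x    # not linear

x0, v0 = 2.0, 0.7
# Linear map: the derivative forgets the base point and returns f(v).
assert abs(D(linear, x0, v0) - linear(v0)) < 1e-3
# Nonlinear map: D[f](x, v) = 2*x*v genuinely depends on the base point x.
assert abs(D(nonlinear, x0, v0) - nonlinear(v0)) > 1e-1
```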
\begin{definition}\label{defn:linear_fibration} In a Cartesian differential category $\mathbb{X}$ with differential combinator $\mathsf{D}$, a map ${f\!: X \!\times\! A \to B}$ is \textbf{linear in its second argument $A$} (or \textbf{linear in context $X$}) \cite[Definition 9]{cockett_et_al:LIPIcs:2020:11661} if the following diagram commutes: \[ \xymatrixcolsep{5pc}\xymatrix{X \times (A \times A) \ar[r]^-{\left \langle 1_X \times \pi_0, 0 \times \pi_1 \right \rangle} \ar[d]_-{1_X \times \pi_1} & (X \times A) \times (X \times A) \ar[d]^-{\mathsf{D}[f]} \\ X \times A \ar[r]_-{f} & B } \] or equivalently \cite[Lemma 12]{cockett_et_al:LIPIcs:2020:11661}, if the following diagram commutes: \[ \xymatrixcolsep{5pc}\xymatrix{X \times A \ar[dr]_-{\iota_0 \times \iota_1} \ar[rr]^-{f} && B \\ & (X \times A) \times (X \times A) \ar[ur]_-{\mathsf{D}[f]} } \] \end{definition} With such maps we can define the canonical fibration associated to a Cartesian differential category. \begin{definition} For a Cartesian differential category $\mathbb{X}$ with differential combinator $\mathsf{D}$, define the category $\L[\mathbb{X}]$ as follows: \begin{enumerate}[{\em (i)}] \item The objects of $\L[\mathbb{X}]$ are pairs of objects $(X,A)$ of $\mathbb{X}$, that is, $Ob\left( \L[\mathbb{X}] \right) = Ob\left( \mathbb{X} \right) \times Ob\left( \mathbb{X} \right)$; \item The maps of $\L[\mathbb{X}]$ are pairs of maps $(f,g): (X,A) \to (Y,B)$ consisting of an arbitrary map ${f: X \to Y}$ and a map $g: X \times A \to B$ which is linear in context $X$; \item The identity map of $(X,A)$ is the pair $(1_X, \pi_1): (X,A) \to (X,A)$; \item The composition of maps $(f,g): (X,A) \to (Y,B)$ and $(h, k): (Y,B) \to (Z,C)$ is defined as follows: \[ (f,g); (h, k) = \left( (f;h) , \xymatrixcolsep{4pc}\xymatrix{ X \times A \ar[r]^-{\langle \pi_0, 1_{X \times A} \rangle } & X \times (X \times A) \ar[r]^-{f \times g} &Y \times B \ar[r]^-{k} & C} \right) \] \end{enumerate} Let $\mathsf{p}: \L[\mathbb{X}]
\to \mathbb{X}$ be the forgetful functor defined on objects as $\mathsf{p}(X,A) = X$ and on maps as $\mathsf{p}(f,g) = f$. \end{definition} Note that this is a subcategory of the simple fibration over $\X$ \cite[Definition 1.3.1]{jacobs1999categorical}. It is then straightforward to show that: \begin{proposition}\label{prop:linear_fibration} Let $\mathbb{X}$ be a Cartesian differential category with differential combinator $\mathsf{D}$. Then the forgetful functor ${\mathsf{p}: \L[\mathbb{X}] \to \mathbb{X}}$ is a fibration where the Cartesian maps are those of the form $(f, \pi_1): (X,A) \to (Y,A)$. \end{proposition} It will be useful to have an explicit description of the fibres of this fibration: \begin{lemma}\label{lemma:L_fibres} Let $\mathbb{X}$ be a Cartesian differential category with differential combinator $\mathsf{D}$. For any object $X \in \mathbb{X}$, the fibre over $X$ of the fibration $\mathsf{p}: \L[\mathbb{X}] \to \mathbb{X}$ is written as $\L[X]$ and given by \begin{enumerate}[{\em (i)}] \item The objects of $\L[X]$ are the same as the objects of $\mathbb{X}$, that is, $Ob\left( \L[X] \right) = Ob\left( \mathbb{X} \right)$; \item The maps of $\L[X]$ are maps $f: X \times A \to B$ which are linear in context $X$; \item The identity map of $A$ is defined as $\pi_1: X \times A \to A$; \item The composition of maps $f: X \times A \to B$ and $g: X \times B \to C$ is defined as follows: \begin{align*} \xymatrixcolsep{5pc}\xymatrix{ X \times A \ar[r]^-{\langle \pi_0, 1_{X \times A} \rangle} &X \times (X \times A) \ar[r]^-{1_X \times f} &X \times B \ar[r]^-{g} & C} \end{align*} \end{enumerate} For every map $h: X \to Y$, define the \textbf{substitution functor} $h^\ast: \L[Y] \to \L[X]$ on objects as $h^\ast(A) = A$ and on maps $f: Y \times A \to B$ as follows: \begin{align*} h^\ast(f) := \xymatrixcolsep{5pc}\xymatrix{ X \times A \ar[r]^-{h \times 1_A} & Y \times A \ar[r]^-{f} & B} \end{align*} Furthermore, note that for the terminal object $\top$, there 
is an isomorphism $\L[\top] \cong \mathsf{LIN}[\mathbb{X}]$. \end{lemma} \subsection{The coKleisli construction}\label{cokleislisection} In this section we review a very important source of Cartesian differential categories: the coKleisli categories of monoidal differential categories. Before constructing the differential combinator, we must first describe the additive structure of the coKleisli category. So let $(\oc, \delta, \varepsilon)$ be a comonad on a category $\mathbb{X}$ with finite biproducts. Then $\mathbb{X}_\oc$ is a Cartesian left additive category \cite[Proposition 1.3.3]{blute2009cartesian} where the additive structure is defined as follows: \begin{align*} \llbracket f+g \rrbracket = \llbracket f \rrbracket + \llbracket g \rrbracket && \llbracket 0 \rrbracket = 0 \end{align*} Furthermore, the injection maps, sum maps, lifting maps, and interchange maps in the coKleisli category are easily computed to be: \begin{align*} \llbracket \iota_0 \rrbracket = \varepsilon_A; \iota_0 && \llbracket \iota_1 \rrbracket = \varepsilon_B; \iota_1 && \llbracket +_A \rrbracket = \varepsilon_{A \times A}; +_A && \llbracket \ell_A \rrbracket = \varepsilon_{A \times A}; \ell_A && \llbracket c_A \rrbracket = \varepsilon_{(A \times A) \times (A \times A)}; c_A \end{align*} If one starts with a differential category, then the deriving transformation can be used to construct a differential combinator for the coKleisli category. \begin{proposition}\label{coKleisliCDC} \cite[Proposition 3.2.1]{blute2009cartesian} Let $\mathbb{X}$ be a monoidal differential category with coalgebra modality $(\oc, \delta, \varepsilon, \Delta, e)$, deriving transformation $\mathsf{d}: \oc A \otimes A \to \oc A$, and finite (bi)products (which we denote here using the product notation).
Then the coKleisli category $\mathbb{X}_\oc$ is a Cartesian differential category with Cartesian left additive structure defined above and differential combinator $\mathsf{D}$ defined as follows on a coKleisli map $\llbracket f \rrbracket: \oc A \to B$: \begin{align*} \llbracket \mathsf{D}[f] \rrbracket := \xymatrixcolsep{3pc}\xymatrix{\oc(A \times A) \ar[r]^-{\chi_{A \times A}} & \oc A \otimes \oc A \ar[r]^-{1_{\oc A} \otimes \varepsilon_A} & \oc A \otimes A \ar[r]^-{\mathsf{d}_A} & \oc A \ar[r]^-{\llbracket f \rrbracket} & B } && \begin{array}[c]{c} \llbracket \mathsf{D}[f] \rrbracket \end{array}= \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=component] (0) at (7.5, 1.75) {$\varepsilon$}; \node [style=differential] (1) at (7, 1) {{\bf =\!=\!=\!=}}; \node [style=object] (2) at (7, -0.5) {$B$}; \node [style=component] (4) at (7, 0.25) {$f$}; \node [style=duplicate] (6) at (7, 2.5) {$\chi$}; \node [style=object] (7) at (7, 3.25) {$\oc (A \times A)$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire, in=-90, out=30, looseness=1.50] (1) to (0); \draw [style=wire] (1) to (4); \draw [style=wire] (4) to (2); \draw [style=wire] (7) to (6); \draw [style=wire, in=90, out=-30, looseness=1.25] (6) to (0); \draw [style=wire, in=150, out=-150] (6) to (1); \end{pgfonlayer} \end{tikzpicture} \end{array} \end{align*} where $\chi_{A \times A}: \oc (A \times A) \to \oc A \otimes \oc A$ is defined as in Definition \ref{Seelydef}. \end{proposition} It is important to note that the above proposition does not require the coalgebra modality to be monoidal or, equivalently, to have Seely isomorphisms. 
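To connect Proposition \ref{coKleisliCDC} with calculus: in the classical smooth intuition, a coKleisli map $\oc A \to B$ behaves like a smooth map $A \to B$, coKleisli composition is ordinary function composition, and the induced combinator is the directional derivative $\mathsf{D}[f](x,v) = f'(x) \cdot v$, so that axiom [CD.5] becomes the usual chain rule. A numerical sketch of this reading (illustrative only; `D` is a finite-difference stand-in for the differential combinator):

```python
# Numerical sketch (illustrative, not part of the formal development): in the
# smooth-maps reading of the coKleisli construction, the induced differential
# combinator is D[f](x, v) = f'(x) * v, and coKleisli composition is ordinary
# function composition, so axiom [CD.5] is the classical chain rule.

def D(f, h=1e-6):
    """Finite-difference stand-in for the differential combinator."""
    return lambda x, v: (f(x + h * v) - f(x - h * v)) / (2 * h)

f = lambda x: x**2 + 1
g = lambda x: 3 * x - x**3
comp = lambda x: g(f(x))        # the (coKleisli-style) composite of f and g

x0, v0 = 0.8, 1.3
# [CD.5]: D[f;g](x, v) = D[g](f(x), D[f](x, v))
lhs = D(comp)(x0, v0)
rhs = D(g)(f(x0), D(f)(x0, v0))
assert abs(lhs - rhs) < 1e-3
```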
In addition, we could have also expressed the differential combinator of the coKleisli category in terms of the coderiving transformation $\mathsf{d}^\circ$ (Definition \ref{dcircdef}) as follows on a coKleisli map $\llbracket f \rrbracket: \oc A \to B$: \begin{align*} \llbracket \mathsf{D}[f] \rrbracket := \xymatrixcolsep{2.25pc}\xymatrix{\oc(A \times A) \ar[r]^-{\mathsf{d}^\circ_{A \times A}} & \oc (A \times A) \otimes (A \times A) \ar[r]^-{\oc(\pi_0) \otimes \pi_1} & \oc A \otimes A \ar[r]^-{\mathsf{d}_A} & \oc A \ar[r]^-{\llbracket f \rrbracket} & B } && \begin{array}[c]{c} \llbracket \mathsf{D}[f] \rrbracket \end{array}= \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=differential] (1) at (7, 0.75) {{\bf =\!=\!=\!=}}; \node [style=object] (2) at (7, -0.75) {$B$}; \node [style=component] (4) at (7, 0) {$f$}; \node [style=object] (7) at (7, 3.5) {$\oc (A \times A)$}; \node [style=differential] (8) at (7, 2.75) {{\bf =\!=\!=\!=}}; \node [style=component] (9) at (7.75, 1.75) {$\pi_1$}; \node [style=function2] (10) at (6.25, 1.75) {$\pi_0$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (1) to (4); \draw [style=wire] (4) to (2); \draw [style=wire] (7) to (8); \draw [style=wire, in=90, out=-150, looseness=1.25] (8) to (10); \draw [style=wire, in=135, out=-90] (10) to (1); \draw [style=wire, in=-90, out=45, looseness=1.25] (1) to (9); \draw [style=wire, in=90, out=-30, looseness=1.25] (8) to (9); \end{pgfonlayer} \end{tikzpicture} \end{array} \end{align*} We now turn our attention to giving an explicit description of the linear maps in the coKleisli category. \begin{lemma}\label{lem:cokleisli-linear} Let $\mathbb{X}$ be a monoidal differential category with coalgebra modality $(\oc, \delta, \varepsilon, \Delta, e)$ and deriving transformation $\mathsf{d}: \oc A \otimes A \to \oc A$, and finite (bi)products. 
Then: \begin{enumerate}[{\em (i)}] \item\label{lem:cokleisli-linear.i} A coKleisli map $\llbracket f \rrbracket: \oc A \to B$ is linear in $\mathbb{X}_\oc$ if and only if the following diagram commutes: \begin{align*} \begin{array}[c]{c} \xymatrixcolsep{5pc}\xymatrix{ \oc A \ar[rr]^-{\llbracket f \rrbracket} \ar[d]_-{\mathsf{d}^\circ_A} && B \\ \oc A \otimes A \ar[r]_-{\oc(0) \otimes 1_A} & \oc A \otimes A \ar[r]_-{\mathsf{d}_A} & \oc A \ar[u]_-{\llbracket f \rrbracket} } \end{array} && \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=differential] (0) at (6.75, 1) {{\bf =\!=\!=\!=}}; \node [style=object] (1) at (6.75, -0.75) {$B$}; \node [style=component] (2) at (6.75, 0.25) {$f$}; \node [style=object] (3) at (6.75, 3.25) {$ \oc A$}; \node [style=function2] (6) at (6.25, 1.75) {$0$}; \node [style=object] (7) at (4.75, 2.75) {$\oc A$}; \node [style=object] (8) at (4.75, 0.75) {$B$}; \node [style=component] (9) at (4.75, 1.75) {$f$}; \node [style=object] (10) at (5.5, 1.75) {$=$}; \node [style=differential] (11) at (6.75, 2.5) {{\bf =\!=\!=\!=}}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (0) to (2); \draw [style=wire] (2) to (1); \draw [style=wire, in=150, out=-90, looseness=1.25] (6) to (0); \draw [style=wire] (7) to (9); \draw [style=wire] (9) to (8); \draw [style=wire] (3) to (11); \draw [style=wire, in=90, out=-150, looseness=1.25] (11) to (6); \draw [style=wire, in=30, out=-30, looseness=1.50] (11) to (0); \end{pgfonlayer} \end{tikzpicture} \end{array} \end{align*} \item \label{lem:cokleisli-linear.ii} For every map $g: A \to B$ in $\mathbb{X}$, $\llbracket \mathsf{F}_\oc(g) \rrbracket = \varepsilon_A ; g: \oc A \to B$ is linear in $\mathbb{X}_\oc$. 
Therefore there is a functor $\mathsf{F}_{\mathsf{L}}: \mathbb{X} \to \mathsf{LIN}[\mathbb{X}_\oc]$ defined on objects as $\mathsf{F}_{\mathsf{L}}(A) = A$ and on maps $g: A \to B$ as $\llbracket \mathsf{F}_{\mathsf{L}}(g) \rrbracket = \llbracket \mathsf{F}_\oc(g) \rrbracket = \varepsilon_A ; g$. \end{enumerate} \end{lemma} \begin{proof} For $(i)$, first observe that for any coKleisli map $\llbracket k \rrbracket: \oc (A \times A) \to B$, precomposing with $\llbracket \iota_1 \rrbracket = \varepsilon_A; \iota_1$ gives $\llbracket \iota_1; k \rrbracket = \oc(\iota_1); \llbracket k \rrbracket$. Therefore, for any coKleisli map $\llbracket f \rrbracket: \oc A \to B$, we compute the following: \begin{align*} \begin{array}[c]{c} \llbracket \iota_1; \mathsf{D}[f] \rrbracket \end{array}= \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=differential] (0) at (5.5, 2.5) {{\bf =\!=\!=\!=}}; \node [style=object] (1) at (5.5, 0.75) {$B$}; \node [style=component] (2) at (5.5, 1.75) {$f$}; \node [style=object] (3) at (5.5, 4.75) {$ \oc A$}; \node [style=function2] (6) at (5, 3.25) {$0$}; \node [style=object] (10) at (1.25, 3.25) {$=$}; \node [style=differential] (11) at (5.5, 4) {{\bf =\!=\!=\!=}}; \node [style=differential] (12) at (-0.25, 2.25) {{\bf =\!=\!=\!=}}; \node [style=object] (13) at (-0.25, 0.75) {$B$}; \node [style=component] (14) at (-0.25, 1.5) {$f$}; \node [style=object] (15) at (-0.25, 5.75) {$\oc A$}; \node [style=differential] (16) at (-0.25, 4.25) {{\bf =\!=\!=\!=}}; \node [style=component] (17) at (0.5, 3.25) {$\pi_1$}; \node [style=function2] (18) at (-1, 3.25) {$\pi_0$}; \node [style=function2] (19) at (-0.25, 5) {$\iota_1$}; \node [style=object] (23) at (2.75, 5.5) {$\oc A$}; \node [style=differential] (28) at (2.75, 4.75) {{\bf =\!=\!=\!=}}; \node [style=component] (29) at (3.5, 3.75) {$\iota_1$}; \node [style=function2] (30) at (2, 3.75) {$\iota_1$}; \node [style=differential] (31) at (2.75, 1.75) {{\bf =\!=\!=\!=}};
\node [style=object] (32) at (2.75, 0.25) {$B$}; \node [style=component] (33) at (2.75, 1) {$f$}; \node [style=component] (34) at (3.5, 2.75) {$\pi_1$}; \node [style=function2] (35) at (2, 2.75) {$\pi_0$}; \node [style=object] (36) at (4.25, 3.25) {$=$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (0) to (2); \draw [style=wire] (2) to (1); \draw [style=wire, in=150, out=-90, looseness=1.25] (6) to (0); \draw [style=wire] (3) to (11); \draw [style=wire, in=90, out=-150, looseness=1.25] (11) to (6); \draw [style=wire, in=30, out=-30, looseness=1.50] (11) to (0); \draw [style=wire] (12) to (14); \draw [style=wire] (14) to (13); \draw [style=wire, in=90, out=-150, looseness=1.25] (16) to (18); \draw [style=wire, in=135, out=-90] (18) to (12); \draw [style=wire, in=-90, out=45, looseness=1.25] (12) to (17); \draw [style=wire, in=90, out=-30, looseness=1.25] (16) to (17); \draw [style=wire] (15) to (19); \draw [style=wire] (19) to (16); \draw [style=wire, in=90, out=-150, looseness=1.25] (28) to (30); \draw [style=wire, in=90, out=-30, looseness=1.25] (28) to (29); \draw [style=wire] (23) to (28); \draw [style=wire] (31) to (33); \draw [style=wire] (33) to (32); \draw [style=wire, in=135, out=-90] (35) to (31); \draw [style=wire, in=-90, out=45, looseness=1.25] (31) to (34); \draw [style=wire] (30) to (35); \draw [style=wire] (29) to (34); \end{pgfonlayer} \end{tikzpicture} \end{array} \end{align*} So $\llbracket \iota_1; \mathsf{D}[f] \rrbracket = \mathsf{d}^\circ_A; \left( \oc(0) \otimes 1_A \right); \mathsf{d}_A; \llbracket f \rrbracket$. Therefore, by definition, $\llbracket f \rrbracket: \oc A \to B$ is linear if and only if $ \mathsf{d}^\circ_A; \left( \oc(0) \otimes 1_A \right); \mathsf{d}_A; \llbracket f \rrbracket = \llbracket \iota_1; \mathsf{D}[f] \rrbracket = \llbracket f \rrbracket$. 
For $(ii)$, we use the linear rule \textbf{[d.3]} to compute: \begin{align*} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=differential] (0) at (-4.25, 3) {{\bf =\!=\!=\!=}}; \node [style=object] (1) at (-4.25, 0.75) {$B$}; \node [style=object] (3) at (-4.25, 5.25) {$ \oc A$}; \node [style=function2] (6) at (-4.75, 3.75) {$0$}; \node [style=differential] (11) at (-4.25, 4.5) {{\bf =\!=\!=\!=}}; \node [style=object] (36) at (-3.25, 3.75) {$=$}; \node [style=component] (47) at (-4.25, 2.25) {$\varepsilon$}; \node [style=component] (48) at (-4.25, 1.5) {$g$}; \node [style=object] (49) at (-2, 5.25) {$ \oc A$}; \node [style=function2] (50) at (-2.5, 3.75) {$0$}; \node [style=differential] (51) at (-2, 4.5) {{\bf =\!=\!=\!=}}; \node [style=component] (52) at (-2.5, 3) {$e$}; \node [style=object] (53) at (-1.5, 2.25) {$B$}; \node [style=component] (54) at (-1.5, 3.75) {$g$}; \node [style=object] (55) at (0.5, 5.25) {$ \oc A$}; \node [style=component] (56) at (0, 3.75) {$e$}; \node [style=differential] (57) at (0.5, 4.5) {{\bf =\!=\!=\!=}}; \node [style=object] (59) at (1, 2.5) {$B$}; \node [style=component] (60) at (1, 3.75) {$g$}; \node [style=object] (61) at (-0.75, 3.75) {$=$}; \node [style=object] (62) at (2.5, 2.75) {$B$}; \node [style=component] (63) at (2.5, 4.25) {$\varepsilon$}; \node [style=component] (64) at (2.5, 3.5) {$g$}; \node [style=object] (65) at (1.75, 3.75) {$=$}; \node [style=object] (66) at (2.5, 5) {$ \oc A$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire, in=150, out=-90, looseness=1.25] (6) to (0); \draw [style=wire] (3) to (11); \draw [style=wire, in=90, out=-150, looseness=1.25] (11) to (6); \draw [style=wire, in=30, out=-30, looseness=1.50] (11) to (0); \draw [style=wire] (0) to (47); \draw [style=wire] (47) to (48); \draw [style=wire] (48) to (1); \draw [style=wire] (49) to (51); \draw [style=wire, in=90, out=-150, looseness=1.25] (51) to (50); \draw [style=wire] (50) to (52); \draw [style=wire] (54) to 
(53); \draw [style=wire, in=90, out=-30, looseness=1.25] (51) to (54); \draw [style=wire] (55) to (57); \draw [style=wire, in=90, out=-150, looseness=1.25] (57) to (56); \draw [style=wire] (60) to (59); \draw [style=wire, in=90, out=-30, looseness=1.25] (57) to (60); \draw [style=wire] (63) to (64); \draw [style=wire] (64) to (62); \draw [style=wire] (66) to (63); \end{pgfonlayer} \end{tikzpicture} \end{align*} Therefore, $\llbracket \mathsf{F}_\oc(g) \rrbracket = \varepsilon_A ; g$ is linear. As a consequence, $\mathsf{F}_{\mathsf{L}}: \mathbb{X} \to \mathsf{LIN}[\mathbb{X}_\oc]$ is well-defined and is a functor since $\mathsf{F}_\oc: \mathbb{X} \to \mathbb{X}_\oc$ is a functor. \end{proof} It is important to note that for an arbitrary differential category $\mathbb{X}$ with finite products, not every linear map in the coKleisli category is of the form $\varepsilon_A; g$. Therefore, $\mathsf{LIN}[\mathbb{X}_\oc]$ is not necessarily isomorphic to the base category $\mathbb{X}$. However, for differential storage categories, the desired isomorphism does hold; this isomorphism is a fundamental feature of differential linear logic. \begin{corollary}\label{cor:seely-lin} Let $\mathbb{X}$ be a monoidal differential storage category with coalgebra modality $(\oc, \delta, \varepsilon, \Delta, e)$ with Seely isomorphisms, deriving transformation $\mathsf{d}: \oc A \otimes A \to \oc A$ (or equivalently codereliction $\eta_A: A \to \oc A$), and finite (bi)products.
Then: \begin{enumerate}[{\em (i)}] \item A coKleisli map $\llbracket f \rrbracket: \oc A \to B$ is linear in $\mathbb{X}_\oc$ if and only if the following diagram commutes: \begin{align*} \begin{array}[c]{c} \xymatrixcolsep{5pc}\xymatrix{ \oc A \ar[r]^-{\llbracket f \rrbracket} \ar[d]_-{\varepsilon_A} & B \\ A \ar[r]_-{\eta_A} & \oc A \ar[u]_-{\llbracket f \rrbracket} } \end{array} && \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=component] (0) at (6.25, 1.75) {$\eta$}; \node [style=object] (1) at (6.25, 0) {$B$}; \node [style=component] (2) at (6.25, 0.75) {$f$}; \node [style=object] (3) at (6.25, 3.5) {$ \oc A$}; \node [style=component] (4) at (6.25, 2.75) {$\varepsilon$}; \node [style=object] (5) at (4.75, 2.75) {$\oc A$}; \node [style=object] (6) at (4.75, 0.75) {$B$}; \node [style=component] (7) at (4.75, 1.75) {$f$}; \node [style=object] (8) at (5.5, 1.75) {$=$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (0) to (2); \draw [style=wire] (2) to (1); \draw [style=wire] (3) to (4); \draw [style=wire] (5) to (7); \draw [style=wire] (7) to (6); \draw [style=wire] (4) to (0); \end{pgfonlayer} \end{tikzpicture} \end{array} \end{align*} \item $\mathsf{F}_{\mathsf{L}}: \mathbb{X} \to \mathsf{LIN}[\mathbb{X}_\oc]$ is an isomorphism with inverse $\mathsf{F}^{-1}_{\mathsf{L}}: \mathsf{LIN}[\mathbb{X}_\oc] \to \mathbb{X}$ defined on objects as $\mathsf{F}^{-1}_{\mathsf{L}}(A) = A$ and on maps $\llbracket f \rrbracket: \oc A \to B$ as $\mathsf{F}^{-1}_{L}\left( \llbracket f \rrbracket \right) = \eta_A; \llbracket f \rrbracket$. \end{enumerate} In other words, a coKleisli map $\llbracket f \rrbracket: \oc A \to B$ is linear in $\mathbb{X}_\oc$ if and only if $\llbracket f \rrbracket = \varepsilon_A; g$ for some (necessarily unique) map $g: A \to B$ in $\mathbb{X}$. \end{corollary} \begin{proof} For $(i)$, recall that in an additive bialgebra modality, $\oc(0) = e_A; u_A$. 
Therefore, for any coKleisli map $\llbracket f \rrbracket: \oc A \to B$ we have that: \begin{align*} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=component] (0) at (-1.75, 4.5) {$\eta$}; \node [style=object] (1) at (-1.75, 2.5) {$B$}; \node [style=component] (2) at (-1.75, 3.5) {$f$}; \node [style=object] (3) at (-1.75, 6.25) {$ \oc A$}; \node [style=component] (4) at (-1.75, 5.5) {$\varepsilon$}; \node [style=object] (8) at (-5.25, 4.5) {$=$}; \node [style=differential] (9) at (-6.25, 3.75) {{\bf =\!=\!=\!=}}; \node [style=object] (10) at (-6.25, 2) {$B$}; \node [style=component] (11) at (-6.25, 3) {$f$}; \node [style=object] (12) at (-6.25, 6) {$ \oc A$}; \node [style=function2] (13) at (-6.75, 4.5) {$0$}; \node [style=differential] (18) at (-6.25, 5.25) {{\bf =\!=\!=\!=}}; \node [style=differential] (19) at (-4, 3.25) {{\bf =\!=\!=\!=}}; \node [style=object] (20) at (-4, 1.75) {$B$}; \node [style=component] (21) at (-4, 2.5) {$f$}; \node [style=object] (22) at (-4, 6.5) {$ \oc A$}; \node [style=differential] (24) at (-4, 5.75) {{\bf =\!=\!=\!=}}; \node [style=component] (25) at (-4.5, 5) {$e$}; \node [style=component] (26) at (-4.5, 4) {$u$}; \node [style=object] (27) at (-2.5, 4.5) {$=$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (0) to (2); \draw [style=wire] (2) to (1); \draw [style=wire] (3) to (4); \draw [style=wire] (4) to (0); \draw [style=wire] (9) to (11); \draw [style=wire] (11) to (10); \draw [style=wire, in=150, out=-90, looseness=1.25] (13) to (9); \draw [style=wire] (12) to (18); \draw [style=wire, in=90, out=-150, looseness=1.25] (18) to (13); \draw [style=wire, in=30, out=-30, looseness=1.50] (18) to (9); \draw [style=wire] (19) to (21); \draw [style=wire] (21) to (20); \draw [style=wire] (22) to (24); \draw [style=wire, in=30, out=-30, looseness=1.25] (24) to (19); \draw [style=wire, in=150, out=-90] (26) to (19); \draw [style=wire, in=90, out=-150] (24) to (25); \end{pgfonlayer} \end{tikzpicture} 
\end{align*} So $\mathsf{d}^\circ_A; \left( \oc(0) \otimes 1_A \right); \mathsf{d}_A; \llbracket f \rrbracket = \eta_A; \varepsilon_A; \llbracket f \rrbracket$. Then by Lemma \ref{lem:cokleisli-linear}.(\ref{lem:cokleisli-linear.i}), $\llbracket f \rrbracket: \oc A \to B$ is linear if and only if $\llbracket f \rrbracket = \mathsf{d}^\circ_A; \left( \oc(0) \otimes 1_A \right); \mathsf{d}_A; \llbracket f \rrbracket = \eta_A; \varepsilon_A; \llbracket f \rrbracket$. For $(ii)$, usually we would first have to check that $\mathsf{F}^{-1}_{\mathsf{L}}$ is a functor; that is, that $\mathsf{F}^{-1}_{\mathsf{L}}$ preserves composition and identities. However, it turns out that there is a way around this by applying \cite[Chapter IV, Theorem 2]{mac2013categories} to isomorphisms. Briefly, if $\mathsf{F}: \mathbb{X} \to \mathbb{Y}$ is a functor and $\mathsf{G}: \mathbb{Y} \to \mathbb{X}$ is a well-defined mapping on objects and maps such that $\mathsf{F} \circ \mathsf{G} = 1_\mathbb{Y}$ and $\mathsf{G} \circ \mathsf{F} = 1_\mathbb{X}$, then $\mathsf{G}$ is a functor and so $\mathsf{F}$ is an isomorphism with inverse $\mathsf{G}$. By Lemma \ref{lem:cokleisli-linear}.(\ref{lem:cokleisli-linear.ii}), $\mathsf{F}_{\mathsf{L}}: \mathbb{X} \to \mathsf{LIN}[\mathbb{X}_\oc]$ is a functor, so it remains to show that $\mathsf{F}^{-1}_{\mathsf{L}} \circ \mathsf{F}_\mathsf{L} = 1_\mathbb{X}$ and $\mathsf{F}_{\mathsf{L}} \circ \mathsf{F}^{-1}_\mathsf{L} = 1_{\mathsf{LIN}[\mathbb{X}_\oc]}$. Starting with the former, on objects this is immediate: $\mathsf{F}^{-1}_{\mathsf{L}}\mathsf{F}_\mathsf{L} (A) = A$. On maps, recall that the linear rule for the codereliction says that $\eta_A; \varepsilon_A = 1_A$, and therefore: \[\mathsf{F}^{-1}_{\mathsf{L}}\mathsf{F}_\mathsf{L} (f) = \eta_A; \varepsilon_A; f = f\] So $\mathsf{F}^{-1}_{\mathsf{L}} \circ \mathsf{F}_\mathsf{L} = 1_\mathbb{X}$. For the other direction, on objects this is again immediate: $\mathsf{F}_{\mathsf{L}}\mathsf{F}^{-1}_\mathsf{L} (A) = A$.
For a linear coKleisli map $\llbracket f \rrbracket: \oc A \to B$, by Lemma \ref{lem:cokleisli-linear}.(\ref{lem:cokleisli-linear.i}), we have that: \[\mathsf{F}_{\mathsf{L}}\mathsf{F}^{-1}_\mathsf{L} \left( \llbracket f \rrbracket \right) = \varepsilon_A ; \eta_A ; \llbracket f \rrbracket = \llbracket f \rrbracket\] So $\mathsf{F}_{\mathsf{L}} \circ \mathsf{F}^{-1}_\mathsf{L} = 1_{\mathsf{LIN}[\mathbb{X}_\oc]}$. Therefore, $\mathsf{F}^{-1}_{\mathsf{L}}: \mathsf{LIN}[\mathbb{X}_\oc] \to \mathbb{X}$ is a functor and is an inverse of $\mathsf{F}_{\mathsf{L}}: \mathbb{X} \to \mathsf{LIN}[\mathbb{X}_\oc]$. So we conclude that $\mathbb{X} \cong \mathsf{LIN}[\mathbb{X}_\oc]$.\end{proof} \subsection{Equivalence of Linear Fibrations} \label{sec:fibration_equivalence} Consider a monoidal differential category with finite products. On the one hand, we have the fibration ${\mathsf{p}_\oc: \L_\oc[\mathbb{X}] \to \mathbb{X}_\oc}$ of Proposition \ref{prop:context_fibration} associated to any coalgebra modality. On the other hand, by Proposition \ref{coKleisliCDC}, $\X_{\oc}$ is a Cartesian differential category, and so we also have its associated linear fibration ${\mathsf{p}: \L[\mathbb{X}_\oc] \to \mathbb{X}_\oc}$ of Proposition \ref{prop:linear_fibration}. The objective of this section is to show that they are in fact isomorphic (as fibrations over $\X_{\oc}$). We begin by providing an explicit description of maps which are linear in context in the coKleisli category. \begin{lemma}\label{lem:cokleisli-linearcontext} Let $\mathbb{X}$ be a differential category with coalgebra modality $(\oc, \delta, \varepsilon, \Delta, e)$ and deriving transformation $\mathsf{d}: \oc A \otimes A \to \oc A$, and finite (bi)products. 
Then: \begin{enumerate}[{\em (i)}] \item \label{lem:cokleisli-linearcontext.i} For every object $X$ and $A$, the following diagram commutes: \begin{align*} \begin{array}[c]{c} \xymatrixcolsep{2.5pc}\xymatrix{ \oc X \otimes A \ar[d]_-{\oc(\iota_0) \otimes \iota_1} \ar@{=}[rr]^-{} & & \oc X \otimes A \\ \oc(X \times A) \!\otimes\! (X \times A) \ar[r]_-{\mathsf{d}_{X \times A}} & \oc (X \times A) \ar[r]_-{\mathsf{d}^\circ_{X \times A}} & \oc(X \times A) \!\otimes\! (X \times A) \ar[u]_-{\oc(\pi_0) \otimes \pi_1} } \end{array} && \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=differential] (20) at (0.75, 4.5) {{\bf =\!=\!=\!=}}; \node [style=component] (21) at (1.5, 3.5) {$\pi_1$}; \node [style=function2] (22) at (0, 3.5) {$\pi_0$}; \node [style=differential] (23) at (0.75, 5.25) {{\bf =\!=\!=\!=}}; \node [style=component] (26) at (1.5, 6.25) {$\iota_1$}; \node [style=function2] (27) at (0, 6.25) {$\iota_0$}; \node [style=object] (43) at (2.25, 4.75) {$=$}; \node [style=object] (67) at (0, 7) {$\oc X$}; \node [style=object] (68) at (1.5, 7) {$A$}; \node [style=object] (69) at (0, 2.75) {$\oc X$}; \node [style=object] (70) at (1.5, 2.75) {$A$}; \node [style=object] (77) at (3.25, 7) {$\oc X$}; \node [style=object] (78) at (4.75, 7) {$A$}; \node [style=object] (79) at (3.25, 2.75) {$\oc X$}; \node [style=object] (80) at (4.75, 2.75) {$A$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire, in=90, out=-150, looseness=1.25] (20) to (22); \draw [style=wire, in=90, out=-30, looseness=1.25] (20) to (21); \draw [style=wire, in=135, out=-90] (27) to (23); \draw [style=wire, in=-90, out=45, looseness=1.25] (23) to (26); \draw [style=wire] (23) to (20); \draw [style=wire] (22) to (69); \draw [style=wire] (21) to (70); \draw [style=wire] (67) to (27); \draw [style=wire] (68) to (26); \draw [style=wire] (77) to (79); \draw [style=wire] (78) to (80); \end{pgfonlayer} \end{tikzpicture} \end{array} \end{align*} 
\item\label{lem:cokleisli-linearcontext.ii} A coKleisli map $\llbracket f \rrbracket: \oc(X \times A) \to B$ is linear in context $X$ if and only if the following diagram commutes: \begin{align*} \begin{array}[c]{c} \xymatrixcolsep{2pc}\xymatrix{ \oc (X \times A) \ar[d]_-{\mathsf{d}^\circ_{X \times A}} \ar[rrr]^-{\llbracket f \rrbracket} & && B \\ \oc(X \times A) \!\otimes\! (X \times A) \ar[r]_-{\oc(\pi_0) \otimes \pi_1} & \oc X \otimes A \ar[r]_-{\oc(\iota_0) \otimes \iota_1} & \oc(X \times A) \!\otimes\! (X \times A) \ar[r]_-{\mathsf{d}_{X \times A}} & \oc(X \times A) \ar[u]_-{\llbracket f \rrbracket} } \end{array} && \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (15) at (-1.75, 5.5) {$\oc(X \times A)$}; \node [style=differential] (16) at (-1.75, 4.75) {{\bf =\!=\!=\!=}}; \node [style=component] (17) at (-1, 3.75) {$\pi_1$}; \node [style=function2] (18) at (-2.5, 3.75) {$\pi_0$}; \node [style=differential] (19) at (-1.75, 1.75) {{\bf =\!=\!=\!=}}; \node [style=object] (20) at (-1.75, 0) {$B$}; \node [style=component] (21) at (-1.75, 1) {$f$}; \node [style=component] (22) at (-1, 2.75) {$\iota_1$}; \node [style=function2] (23) at (-2.5, 2.75) {$\iota_0$}; \node [style=object] (32) at (-4, 4.25) {$\oc (X \times A)$}; \node [style=object] (33) at (-4, 2.25) {$B$}; \node [style=component] (34) at (-4, 3.25) {$f$}; \node [style=object] (35) at (-3.25, 3.25) {$=$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire, in=90, out=-150, looseness=1.25] (16) to (18); \draw [style=wire, in=90, out=-30, looseness=1.25] (16) to (17); \draw [style=wire] (15) to (16); \draw [style=wire] (19) to (21); \draw [style=wire] (21) to (20); \draw [style=wire, in=150, out=-90] (23) to (19); \draw [style=wire, in=-90, out=30] (19) to (22); \draw [style=wire] (18) to (23); \draw [style=wire] (17) to (22); \draw [style=wire] (32) to (34); \draw [style=wire] (34) to (33); \end{pgfonlayer} \end{tikzpicture} \end{array} \end{align*} 
\item\label{lem:cokleisli-linearcontext.iii} For every map $g: \oc X \otimes A \to B$ in $\mathbb{X}$, the composite: \begin{align*} \begin{array}[c]{c} \xymatrixcolsep{3pc}\xymatrix{\oc(X \times A) \ar[r]^-{\mathsf{d}^\circ_{X \times A}} & \oc (X \times A) \otimes (X \times A) \ar[r]^-{\oc(\pi_0) \otimes \pi_1} & \oc X \otimes A \ar[r]^-{g} & B } \end{array} && \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=component] (0) at (7, 0.75) {$g$}; \node [style=object] (2) at (7, 0) {$B$}; \node [style=object] (3) at (7, 3.5) {$\oc (X \times A)$}; \node [style=differential] (4) at (7, 2.75) {{\bf =\!=\!=\!=}}; \node [style=component] (5) at (7.75, 1.75) {$\pi_1$}; \node [style=function2] (6) at (6.25, 1.75) {$\pi_0$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (0) to (2); \draw [style=wire] (3) to (4); \draw [style=wire, in=90, out=-150, looseness=1.25] (4) to (6); \draw [style=wire, in=135, out=-90] (6) to (0); \draw [style=wire, in=-90, out=45, looseness=1.25] (0) to (5); \draw [style=wire, in=90, out=-30, looseness=1.25] (4) to (5); \end{pgfonlayer} \end{tikzpicture} \end{array} \end{align*} is linear in context $X$ in $\mathbb{X}_\oc$.
\end{enumerate} \end{lemma} \begin{proof} For $(i)$, recall the following useful compatibility relation between the deriving transformation and coderiving transformation \cite[Proposition 4.1]{cockett_lemay_2018}: \begin{align*} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=differential] (20) at (0.75, 4.5) {{\bf =\!=\!=\!=}}; \node [style=object] (21) at (1.5, 3.5) {$A$}; \node [style=object] (22) at (0, 3.5) {$\oc A$}; \node [style=differential] (23) at (0.75, 5.25) {{\bf =\!=\!=\!=}}; \node [style=object] (26) at (1.5, 6.25) {$A$}; \node [style=object] (27) at (0, 6.25) {$\oc A$}; \node [style=object] (43) at (2.25, 4.75) {$=$}; \node [style=differential] (44) at (3.25, 4.25) {{\bf =\!=\!=\!=}}; \node [style=object] (45) at (4.75, 3.5) {$A$}; \node [style=object] (46) at (3.25, 3.5) {$\oc A$}; \node [style=differential] (47) at (3.25, 5.5) {{\bf =\!=\!=\!=}}; \node [style=object] (48) at (4.75, 6.25) {$A$}; \node [style=object] (49) at (3.25, 6.25) {$\oc A$}; \node [style=object] (50) at (5.5, 4.75) {$+$}; \node [style=object] (52) at (8, 3.5) {$A$}; \node [style=object] (53) at (6.5, 3.5) {$\oc A$}; \node [style=object] (55) at (8, 6.25) {$A$}; \node [style=object] (56) at (6.5, 6.25) {$\oc A$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire, in=90, out=-150, looseness=1.25] (20) to (22); \draw [style=wire, in=90, out=-30, looseness=1.25] (20) to (21); \draw [style=wire, in=135, out=-90] (27) to (23); \draw [style=wire, in=-90, out=45, looseness=1.25] (23) to (26); \draw [style=wire] (23) to (20); \draw [style=wire] (44) to (46); \draw [style=wire] (49) to (47); \draw [style=wire] (56) to (53); \draw [style=wire] (55) to (52); \draw [style=wire, in=45, out=-90] (48) to (44); \draw [style=wire, in=90, out=-60, looseness=1.50] (47) to (45); \draw [style=wire, in=135, out=-135, looseness=1.25] (47) to (44); \end{pgfonlayer} \end{tikzpicture} \end{align*} Then by using the above identity and the biproduct coherences, we compute: 
\begin{align*} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=differential] (57) at (-0.25, 9.5) {{\bf =\!=\!=\!=}}; \node [style=component] (58) at (0.5, 8.5) {$\pi_1$}; \node [style=function2] (59) at (-1, 8.5) {$\pi_0$}; \node [style=differential] (60) at (-0.25, 10.25) {{\bf =\!=\!=\!=}}; \node [style=component] (61) at (0.5, 11.25) {$\iota_1$}; \node [style=function2] (62) at (-1, 11.25) {$\iota_0$}; \node [style=object] (63) at (1, 10) {$=$}; \node [style=object] (64) at (-1, 12) {$\oc X$}; \node [style=object] (65) at (0.5, 12) {$A$}; \node [style=object] (66) at (-1, 7.75) {$\oc X$}; \node [style=object] (67) at (0.5, 7.75) {$A$}; \node [style=differential] (72) at (2, 9.25) {{\bf =\!=\!=\!=}}; \node [style=component] (73) at (3.25, 8.5) {$\pi_1$}; \node [style=function2] (74) at (2, 8.5) {$\pi_0$}; \node [style=differential] (75) at (2, 10.5) {{\bf =\!=\!=\!=}}; \node [style=component] (76) at (3.25, 11.25) {$\iota_1$}; \node [style=function2] (77) at (2, 11.25) {$\iota_0$}; \node [style=object] (78) at (2, 12) {$\oc X$}; \node [style=object] (79) at (3.25, 12) {$A$}; \node [style=object] (80) at (2, 7.75) {$\oc X$}; \node [style=object] (81) at (3.25, 7.75) {$A$}; \node [style=object] (82) at (3.75, 10) {$+$}; \node [style=component] (84) at (5.5, 9) {$\pi_1$}; \node [style=function2] (85) at (4.5, 9) {$\pi_0$}; \node [style=component] (87) at (5.5, 10.75) {$\iota_1$}; \node [style=function2] (88) at (4.5, 10.75) {$\iota_0$}; \node [style=object] (89) at (4.5, 12) {$\oc X$}; \node [style=object] (90) at (5.5, 12) {$A$}; \node [style=object] (91) at (4.5, 7.75) {$\oc X$}; \node [style=object] (92) at (5.5, 7.75) {$A$}; \node [style=object] (93) at (6.25, 10) {$=$}; \node [style=differential] (94) at (7.75, 8.5) {{\bf =\!=\!=\!=}}; \node [style=component] (95) at (9, 8.5) {$\pi_1$}; \node [style=function2] (96) at (7.25, 9.5) {$\pi_0$}; \node [style=differential] (97) at (7.75, 10.5) {{\bf =\!=\!=\!=}}; \node [style=component] (98) at (9, 
11.25) {$\iota_1$}; \node [style=function2] (99) at (7.75, 11.25) {$\iota_0$}; \node [style=object] (100) at (7.75, 12) {$\oc X$}; \node [style=object] (101) at (9, 12) {$A$}; \node [style=object] (102) at (7.75, 7.75) {$\oc X$}; \node [style=object] (103) at (9, 7.75) {$A$}; \node [style=object] (104) at (9.5, 10) {$+$}; \node [style=object] (109) at (10.25, 12) {$\oc X$}; \node [style=object] (110) at (11.25, 12) {$A$}; \node [style=object] (111) at (10.25, 7.75) {$\oc X$}; \node [style=object] (112) at (11.25, 7.75) {$A$}; \node [style=component] (113) at (9, 10.25) {$\pi_0$}; \node [style=differential] (114) at (13.25, 8.5) {{\bf =\!=\!=\!=}}; \node [style=component] (115) at (14.5, 8.5) {$\pi_1$}; \node [style=function2] (116) at (12.75, 9.5) {$\pi_0$}; \node [style=differential] (117) at (13.25, 10.5) {{\bf =\!=\!=\!=}}; \node [style=function2] (119) at (13.25, 11.25) {$\iota_0$}; \node [style=object] (120) at (13.25, 12) {$\oc X$}; \node [style=object] (121) at (14.5, 12) {$A$}; \node [style=object] (122) at (13.25, 7.75) {$\oc X$}; \node [style=object] (123) at (14.5, 7.75) {$A$}; \node [style=object] (124) at (15, 10) {$+$}; \node [style=object] (125) at (15.75, 12) {$\oc X$}; \node [style=object] (126) at (16.75, 12) {$A$}; \node [style=object] (127) at (15.75, 7.75) {$\oc X$}; \node [style=object] (128) at (16.75, 7.75) {$A$}; \node [style=component] (129) at (14.5, 10.25) {$0$}; \node [style=object] (130) at (11.75, 10) {$=$}; \node [style=object] (131) at (17.25, 10) {$=$}; \node [style=object] (132) at (19.25, 12) {$\oc X$}; \node [style=object] (133) at (20.25, 12) {$A$}; \node [style=object] (134) at (19.25, 7.75) {$\oc X$}; \node [style=object] (135) at (20.25, 7.75) {$A$}; \node [style=object] (136) at (20.75, 10) {$=$}; \node [style=object] (137) at (21.5, 12) {$\oc X$}; \node [style=object] (138) at (22.5, 12) {$A$}; \node [style=object] (139) at (21.5, 7.75) {$\oc X$}; \node [style=object] (140) at (22.5, 7.75) {$A$}; \node [style=object] (141) 
at (18, 10) {$0$}; \node [style=object] (142) at (18.75, 10) {$+$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire, in=90, out=-150, looseness=1.25] (57) to (59); \draw [style=wire, in=90, out=-30, looseness=1.25] (57) to (58); \draw [style=wire, in=135, out=-90] (62) to (60); \draw [style=wire, in=-90, out=45, looseness=1.25] (60) to (61); \draw [style=wire] (60) to (57); \draw [style=wire] (59) to (66); \draw [style=wire] (58) to (67); \draw [style=wire] (64) to (62); \draw [style=wire] (65) to (61); \draw [style=wire] (72) to (74); \draw [style=wire] (77) to (75); \draw [style=wire, in=135, out=-135] (75) to (72); \draw [style=wire] (74) to (80); \draw [style=wire] (73) to (81); \draw [style=wire] (78) to (77); \draw [style=wire] (79) to (76); \draw [style=wire, in=-90, out=30] (72) to (76); \draw [style=wire, in=90, out=-30] (75) to (73); \draw [style=wire] (85) to (91); \draw [style=wire] (84) to (92); \draw [style=wire] (89) to (88); \draw [style=wire] (90) to (87); \draw [style=wire] (88) to (85); \draw [style=wire] (87) to (84); \draw [style=wire] (99) to (97); \draw [style=wire] (95) to (103); \draw [style=wire] (100) to (99); \draw [style=wire] (101) to (98); \draw [style=wire, in=90, out=-30] (97) to (95); \draw [style=wire] (109) to (111); \draw [style=wire] (110) to (112); \draw [style=wire, in=90, out=-135] (97) to (96); \draw [style=wire, in=135, out=-90] (96) to (94); \draw [style=wire] (94) to (102); \draw [style=wire] (98) to (113); \draw [style=wire, in=45, out=-90] (113) to (94); \draw [style=wire] (119) to (117); \draw [style=wire] (115) to (123); \draw [style=wire] (120) to (119); \draw [style=wire, in=90, out=-30] (117) to (115); \draw [style=wire] (125) to (127); \draw [style=wire] (126) to (128); \draw [style=wire, in=90, out=-135] (117) to (116); \draw [style=wire, in=135, out=-90] (116) to (114); \draw [style=wire] (114) to (122); \draw [style=wire, in=45, out=-90] (129) to (114); \draw [style=wire] (121) to (129); \draw 
[style=wire] (132) to (134); \draw [style=wire] (133) to (135); \draw [style=wire] (137) to (139); \draw [style=wire] (138) to (140); \end{pgfonlayer} \end{tikzpicture} \end{align*} For $(ii)$, first note that for any ${\llbracket k \rrbracket: \oc\left( (X \times A) \times (X \times A) \right) \to B}$, precomposing $\llbracket k \rrbracket$ with ${\llbracket \iota_0 \times \iota_1 \rrbracket = \varepsilon_{X \times A}; (\iota_0 \times \iota_1)}$ in the coKleisli category yields $\llbracket (\iota_0 \times \iota_1); k \rrbracket = \oc (\iota_0 \times \iota_1); \llbracket k \rrbracket$. Therefore, for any coKleisli map $\llbracket f \rrbracket: \oc(X \times A) \to B$, we compute: \begin{align*} \begin{array}[c]{c} \llbracket (\iota_0 \times \iota_1); \mathsf{D}[f] \rrbracket \end{array}= \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (41) at (4.75, 6.75) {$=$}; \node [style=differential] (43) at (3.25, 5.75) {{\bf =\!=\!=\!=}}; \node [style=object] (44) at (3.25, 4.25) {$B$}; \node [style=component] (45) at (3.25, 5) {$f$}; \node [style=object] (46) at (3.25, 10) {$\oc(X \times A)$}; \node [style=differential] (47) at (3.25, 7.75) {{\bf =\!=\!=\!=}}; \node [style=component] (48) at (4, 6.75) {$\pi_1$}; \node [style=function2] (49) at (2.5, 6.75) {$\pi_0$}; \node [style=function3] (50) at (3.25, 8.75) {$\iota_0 \times \iota_1$}; \node [style=object] (51) at (6.75, 9) {$\oc(X \times A)$}; \node [style=differential] (52) at (6.75, 8.25) {{\bf =\!=\!=\!=}}; \node [style=component] (53) at (7.5, 7) {$\iota_0 \times \iota_1$}; \node [style=function3] (54) at (6, 7) {$\iota_0 \times \iota_1$}; \node [style=differential] (55) at (6.75, 4.75) {{\bf =\!=\!=\!=}}; \node [style=object] (56) at (6.75, 3.25) {$B$}; \node [style=component] (57) at (6.75, 4) {$f$}; \node [style=component] (58) at (7.5, 5.75) {$\pi_1$}; \node [style=function2] (59) at (6, 5.75) {$\pi_0$}; \node [style=object] (60) at (8.75, 6.75) {$=$}; \node [style=object] (61) at (10.5, 9) {$\oc(X \times A)$}; \node
[style=differential] (62) at (10.5, 8.25) {{\bf =\!=\!=\!=}}; \node [style=component] (63) at (11.25, 7.25) {$\pi_1$}; \node [style=function2] (64) at (9.75, 7.25) {$\pi_0$}; \node [style=differential] (65) at (10.5, 5.25) {{\bf =\!=\!=\!=}}; \node [style=object] (66) at (10.5, 3.75) {$B$}; \node [style=component] (67) at (10.5, 4.5) {$f$}; \node [style=component] (68) at (11.25, 6.25) {$\iota_1$}; \node [style=function2] (69) at (9.75, 6.25) {$\iota_0$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (43) to (45); \draw [style=wire] (45) to (44); \draw [style=wire, in=90, out=-150, looseness=1.25] (47) to (49); \draw [style=wire, in=135, out=-90] (49) to (43); \draw [style=wire, in=-90, out=45, looseness=1.25] (43) to (48); \draw [style=wire, in=90, out=-30, looseness=1.25] (47) to (48); \draw [style=wire] (46) to (50); \draw [style=wire] (50) to (47); \draw [style=wire, in=90, out=-150, looseness=1.25] (52) to (54); \draw [style=wire, in=90, out=-30, looseness=1.25] (52) to (53); \draw [style=wire] (51) to (52); \draw [style=wire] (55) to (57); \draw [style=wire] (57) to (56); \draw [style=wire, in=135, out=-90] (59) to (55); \draw [style=wire, in=-90, out=45, looseness=1.25] (55) to (58); \draw [style=wire] (54) to (59); \draw [style=wire] (53) to (58); \draw [style=wire, in=90, out=-150, looseness=1.25] (62) to (64); \draw [style=wire, in=90, out=-30, looseness=1.25] (62) to (63); \draw [style=wire] (61) to (62); \draw [style=wire] (65) to (67); \draw [style=wire] (67) to (66); \draw [style=wire, in=135, out=-90] (69) to (65); \draw [style=wire, in=-90, out=45, looseness=1.25] (65) to (68); \draw [style=wire] (64) to (69); \draw [style=wire] (63) to (68); \end{pgfonlayer} \end{tikzpicture} \end{array} \end{align*} So $\llbracket (\iota_0 \times \iota_1); \mathsf{D}[f] \rrbracket = \mathsf{d}^\circ_{X \times A}; (\oc(\pi_0) \otimes \pi_1); (\oc(\iota_0) \otimes \iota_1); \mathsf{d}_{X \times A}; \llbracket f \rrbracket$.
Therefore, by definition, $\llbracket f \rrbracket: \oc(X \times A) \to B$ is linear in context $X$ if and only if $\llbracket f \rrbracket = \llbracket (\iota_0 \times \iota_1); \mathsf{D}[f] \rrbracket = \mathsf{d}^\circ_{X \times A}; (\oc(\pi_0) \otimes \pi_1); (\oc(\iota_0) \otimes \iota_1); \mathsf{d}_{X \times A}; \llbracket f \rrbracket$. For $(iii)$, it is automatic by $(i)$ that we have that: \[ \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=component] (0) at (7, 0.75) {$g$}; \node [style=object] (1) at (7, 0) {$B$}; \node [style=differential] (3) at (7, 2.75) {{\bf =\!=\!=\!=}}; \node [style=component] (4) at (7.75, 1.75) {$\pi_1$}; \node [style=function2] (5) at (6.25, 1.75) {$\pi_0$}; \node [style=object] (6) at (7, 7.5) {$\oc(X \times A)$}; \node [style=differential] (7) at (7, 6.75) {{\bf =\!=\!=\!=}}; \node [style=component] (8) at (7.75, 5.75) {$\pi_1$}; \node [style=function2] (9) at (6.25, 5.75) {$\pi_0$}; \node [style=differential] (10) at (7, 3.75) {{\bf =\!=\!=\!=}}; \node [style=component] (13) at (7.75, 4.75) {$\iota_1$}; \node [style=function2] (14) at (6.25, 4.75) {$\iota_0$}; \node [style=object] (15) at (8.25, 3.5) {$=$}; \node [style=component] (16) at (10, 2.5) {$g$}; \node [style=object] (17) at (10, 1.75) {$B$}; \node [style=differential] (18) at (10, 4.5) {{\bf =\!=\!=\!=}}; \node [style=component] (19) at (10.75, 3.5) {$\pi_1$}; \node [style=function2] (20) at (9.25, 3.5) {$\pi_0$}; \node [style=object] (21) at (10, 5.25) {$\oc(X \times A)$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (0) to (1); \draw [style=wire, in=90, out=-150, looseness=1.25] (3) to (5); \draw [style=wire, in=135, out=-90] (5) to (0); \draw [style=wire, in=-90, out=45, looseness=1.25] (0) to (4); \draw [style=wire, in=90, out=-30, looseness=1.25] (3) to (4); \draw [style=wire, in=90, out=-150, looseness=1.25] (7) to (9); \draw [style=wire, in=90, out=-30, looseness=1.25] (7) to (8); \draw [style=wire] (6) to (7); \draw
[style=wire, in=150, out=-90] (14) to (10); \draw [style=wire, in=-90, out=30] (10) to (13); \draw [style=wire] (9) to (14); \draw [style=wire] (8) to (13); \draw [style=wire] (10) to (3); \draw [style=wire] (16) to (17); \draw [style=wire, in=90, out=-150, looseness=1.25] (18) to (20); \draw [style=wire, in=135, out=-90] (20) to (16); \draw [style=wire, in=-90, out=45, looseness=1.25] (16) to (19); \draw [style=wire, in=90, out=-30, looseness=1.25] (18) to (19); \draw [style=wire] (21) to (18); \end{pgfonlayer} \end{tikzpicture}\] Then by $(ii)$, it follows that $\mathsf{d}^\circ_{X \times A}; (\oc(\pi_0) \otimes \pi_1); g$ is linear in context $X$. \end{proof} Below we will show that, in fact, a coKleisli map $\llbracket f \rrbracket: \oc(X \times A) \to B$ is linear in context $X$ if and only if it is of the form $\llbracket f \rrbracket = \mathsf{d}^\circ_{X \times A}; (\oc(\pi_0) \otimes \pi_1); g$ for some (necessarily unique) map $g: \oc X \otimes A \to B$. If we have the Seely isomorphisms, we may also re-express linearity in context using the codereliction. \begin{corollary} Let $\mathbb{X}$ be a differential storage category with coalgebra modality $(\oc, \delta, \varepsilon, \Delta, e)$ with Seely isomorphisms, deriving transformation $\mathsf{d}: \oc A \otimes A \to \oc A$ (or equivalently codereliction $\eta_A: A \to \oc A$), and finite (bi)products. 
Then a coKleisli map $\llbracket f \rrbracket: \oc(X \times A) \to B$ is linear in context $X$ if and only if the following diagram commutes: \begin{align*} \begin{array}[c]{c} \xymatrixcolsep{3pc}\xymatrix{ \oc (X \times A) \ar[d]_-{\chi_{X,A}} \ar[rrr]^-{\llbracket f \rrbracket} & && B \\ \oc X \otimes \oc A \ar[r]_-{1_{\oc X} \otimes \varepsilon_A} & \oc X \otimes A \ar[r]_-{1_{\oc X} \otimes \eta_A} & \oc X \otimes \oc A \ar[r]_-{\chi^{-1}_{X, A}} & \oc(X \times A) \ar[u]_-{\llbracket f \rrbracket} } \end{array} && \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=component] (0) at (7, 1) {$\chi$}; \node [style=object] (1) at (7, -0.75) {$B$}; \node [style=component] (2) at (7, 0) {$f$}; \node [style=object] (3) at (7, 4.25) {$\oc(X \times A)$}; \node [style=component] (4) at (7, 3.25) {$\chi$}; \node [style=component] (5) at (7.75, 1.75) {$\eta$}; \node [style=object] (6) at (4.75, 2.75) {$\oc (X \times A)$}; \node [style=object] (7) at (4.75, 0.75) {$B$}; \node [style=component] (8) at (4.75, 1.75) {$f$}; \node [style=object] (9) at (5.5, 1.75) {$=$}; \node [style=component] (10) at (7.75, 2.5) {$\varepsilon$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (0) to (2); \draw [style=wire] (2) to (1); \draw [style=wire] (3) to (4); \draw [style=wire, in=-90, out=15, looseness=1.50] (0) to (5); \draw [style=wire] (6) to (8); \draw [style=wire] (8) to (7); \draw [style=wire] (10) to (5); \draw [style=wire, in=90, out=-15] (4) to (10); \draw [style=wire, in=165, out=-150] (4) to (0); \end{pgfonlayer} \end{tikzpicture} \end{array} \end{align*} \end{corollary} \begin{proof} By expressing the deriving transformation in terms of the multiplication and codereliction, for any coKleisli map $\llbracket f \rrbracket: \oc(X \times A) \to B$, we compute: \begin{align*} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (19) at (0.75, 9) {$\oc(X \times A)$}; \node [style=differential] (20) at (0.75, 
8.25) {{\bf =\!=\!=\!=}}; \node [style=component] (21) at (1.5, 7.25) {$\pi_1$}; \node [style=function2] (22) at (0, 7.25) {$\pi_0$}; \node [style=differential] (23) at (0.75, 5.25) {{\bf =\!=\!=\!=}}; \node [style=object] (24) at (0.75, 3.5) {$B$}; \node [style=component] (25) at (0.75, 4.5) {$f$}; \node [style=component] (26) at (1.5, 6.25) {$\iota_1$}; \node [style=function2] (27) at (0, 6.25) {$\iota_0$}; \node [style=object] (28) at (4, 10.25) {$\oc(X \times A)$}; \node [style=object] (33) at (4, 2.25) {$B$}; \node [style=component] (34) at (4, 3.25) {$f$}; \node [style=component] (35) at (4.75, 6.25) {$\iota_1$}; \node [style=function2] (36) at (3.25, 6) {$\iota_0$}; \node [style=component] (37) at (4.75, 5.25) {$\eta$}; \node [style=duplicate] (38) at (4, 4.25) {$\nabla$}; \node [style=component] (39) at (4.75, 7.25) {$\pi_1$}; \node [style=function2] (40) at (3.25, 7.75) {$\pi_0$}; \node [style=component] (41) at (4.75, 8.25) {$\varepsilon$}; \node [style=duplicate] (42) at (4, 9.25) {$\Delta$}; \node [style=object] (43) at (2.25, 6.75) {$=$}; \node [style=object] (44) at (7.25, 10.25) {$\oc(X \times A)$}; \node [style=object] (45) at (7.25, 2.25) {$B$}; \node [style=component] (46) at (7.25, 3.25) {$f$}; \node [style=component] (47) at (8, 6.25) {$\eta$}; \node [style=function2] (48) at (6.5, 5.25) {$\iota_0$}; \node [style=function2] (49) at (8, 5.25) {$\iota_1$}; \node [style=duplicate] (50) at (7.25, 4.25) {$\nabla$}; \node [style=component] (51) at (8, 7.25) {$\varepsilon$}; \node [style=function2] (52) at (6.5, 8.25) {$\pi_0$}; \node [style=function2] (53) at (8, 8.25) {$\pi_1$}; \node [style=duplicate] (54) at (7.25, 9.25) {$\Delta$}; \node [style=object] (55) at (5.5, 6.75) {$=$}; \node [style=component] (56) at (10.5, 6) {$\chi$}; \node [style=object] (57) at (10.5, 4) {$B$}; \node [style=component] (58) at (10.5, 5) {$f$}; \node [style=object] (59) at (10.5, 9.25) {$\oc(X \times A)$}; \node [style=component] (60) at (10.5, 8.25) {$\chi$}; \node 
[style=component] (61) at (11.25, 6.75) {$\eta$}; \node [style=object] (65) at (9, 6.75) {$=$}; \node [style=component] (66) at (11.25, 7.5) {$\varepsilon$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire, in=90, out=-150, looseness=1.25] (20) to (22); \draw [style=wire, in=90, out=-30, looseness=1.25] (20) to (21); \draw [style=wire] (19) to (20); \draw [style=wire] (23) to (25); \draw [style=wire] (25) to (24); \draw [style=wire, in=135, out=-90] (27) to (23); \draw [style=wire, in=-90, out=45, looseness=1.25] (23) to (26); \draw [style=wire] (22) to (27); \draw [style=wire] (21) to (26); \draw [style=wire] (34) to (33); \draw [style=wire, in=-90, out=0, looseness=1.25] (38) to (37); \draw [style=wire] (38) to (34); \draw [style=wire] (35) to (37); \draw [style=wire, in=-90, out=180] (38) to (36); \draw [style=wire, in=90, out=0, looseness=1.25] (42) to (41); \draw [style=wire] (39) to (41); \draw [style=wire, in=90, out=-180] (42) to (40); \draw [style=wire] (40) to (36); \draw [style=wire] (39) to (35); \draw [style=wire] (28) to (42); \draw [style=wire] (46) to (45); \draw [style=wire, in=-90, out=0, looseness=1.25] (50) to (49); \draw [style=wire] (50) to (46); \draw [style=wire] (47) to (49); \draw [style=wire, in=-90, out=180] (50) to (48); \draw [style=wire, in=90, out=0, looseness=1.25] (54) to (53); \draw [style=wire] (51) to (53); \draw [style=wire, in=90, out=-180] (54) to (52); \draw [style=wire] (52) to (48); \draw [style=wire] (51) to (47); \draw [style=wire] (44) to (54); \draw [style=wire] (56) to (58); \draw [style=wire] (58) to (57); \draw [style=wire] (59) to (60); \draw [style=wire, in=-90, out=15, looseness=1.50] (56) to (61); \draw [style=wire] (66) to (61); \draw [style=wire, in=90, out=-15] (60) to (66); \draw [style=wire, in=165, out=-150] (60) to (56); \end{pgfonlayer} \end{tikzpicture} \end{align*} So we have that: \[\mathsf{d}^\circ_{X \times A}; (\oc(\pi_0) \otimes \pi_1); (\oc(\iota_0) \otimes \iota_1);
\mathsf{d}_{X \times A}; \llbracket f \rrbracket = \chi_{X,A}; (1_{\oc X} \otimes \varepsilon_A); (1_{\oc X} \otimes \eta_A); \chi^{-1}_{X,A}; \llbracket f \rrbracket \] Then by Lemma \ref{lem:cokleisli-linearcontext}.(\ref{lem:cokleisli-linearcontext.ii}), $\llbracket f \rrbracket: \oc (X \times A) \to B$ is linear if and only if the following equality holds: \[ \llbracket f \rrbracket = \mathsf{d}^\circ_{X \times A}; (\oc(\pi_0) \otimes \pi_1); (\oc(\iota_0) \otimes \iota_1); \mathsf{d}_{X \times A}; \llbracket f \rrbracket = \chi_{X,A}; (1_{\oc X} \otimes \varepsilon_A); (1_{\oc X} \otimes \eta_A); \chi^{-1}_{X,A}; \llbracket f \rrbracket\] So the desired equivalence holds. \end{proof} We now prove the main result of this section: that we have an isomorphism of fibrations. It is important to note that the following result does not require the Seely isomorphisms. \begin{theorem}\label{thm:fibration_equivalence} Let $\mathbb{X}$ be a monoidal differential category with coalgebra modality $(\oc, \delta, \varepsilon, \Delta, e)$ and deriving transformation $\mathsf{d}: \oc A \otimes A \to \oc A$, and finite (bi)products.
Then the fibrations ${\mathsf{p}_\oc: \L_\oc[\mathbb{X}] \to \mathbb{X}_\oc}$ and ${\mathsf{p}: \L[\mathbb{X}_\oc] \to \mathbb{X}_\oc}$ are isomorphic via the functors $\mathsf{E}: \L_\oc[\mathbb{X}] \to \L[\mathbb{X}_\oc]$ and $\mathsf{E}^{-1}: \L[\mathbb{X}_\oc] \to \L_\oc[\mathbb{X}]$ where: \begin{enumerate}[{\em (i)}] \item $\mathsf{E}$ is defined on objects as $\mathsf{E}(X,A) = (X,A)$, and on maps $(\llbracket f \rrbracket, g): (X,A) \to (Y,B)$ as follows: \begin{align*} \mathsf{E}(\llbracket f \rrbracket, g) &= \left( \llbracket f \rrbracket, \xymatrixcolsep{3pc}\xymatrix{\oc(X \times A) \ar[r]^-{\mathsf{d}^\circ_{X \times A}} & \oc (X \times A) \otimes (X \times A) \ar[r]^-{\oc(\pi_0) \otimes \pi_1} & \oc X \otimes A \ar[r]^-{g} & B } \right) \end{align*} \begin{align*} \mathsf{E}\left(\begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (17) at (9.5, 1) {$\oc X$}; \node [style=object] (20) at (9.5, -1) {$Y$}; \node [style=component] (21) at (9.5, 0) {$f$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (17) to (21); \draw [style=wire] (21) to (20); \end{pgfonlayer} \end{tikzpicture} \end{array}, \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (17) at (9.75, 1.75) {$\oc X$}; \node [style=object] (19) at (10.75, 1.75) {$A$}; \node [style=component] (20) at (10.25, 0.75) {$g$}; \node [style=object] (21) at (10.25, -0.25) {$B$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (20) to (21); \draw [style=wire, in=165, out=-90] (17) to (20); \draw [style=wire, in=-90, out=15] (20) to (19); \end{pgfonlayer} \end{tikzpicture} \end{array} \right) = \left(\begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (17) at (9.5, 1) {$\oc X$}; \node [style=object] (20) at (9.5, -1) {$Y$}; \node [style=component] (21) at (9.5, 0) {$f$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (17) to (21); \draw
[style=wire] (21) to (20); \end{pgfonlayer} \end{tikzpicture} \end{array}, \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=component] (0) at (7, 0.75) {$g$}; \node [style=object] (2) at (7, 0) {$B$}; \node [style=object] (3) at (7, 3.5) {$\oc (X \times A)$}; \node [style=differential] (4) at (7, 2.75) {{\bf =\!=\!=\!=}}; \node [style=component] (5) at (7.75, 1.75) {$\pi_1$}; \node [style=function2] (6) at (6.25, 1.75) {$\pi_0$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (0) to (2); \draw [style=wire] (3) to (4); \draw [style=wire, in=90, out=-150, looseness=1.25] (4) to (6); \draw [style=wire, in=135, out=-90] (6) to (0); \draw [style=wire, in=-90, out=45, looseness=1.25] (0) to (5); \draw [style=wire, in=90, out=-30, looseness=1.25] (4) to (5); \end{pgfonlayer} \end{tikzpicture} \end{array} \right) \end{align*} \item $\mathsf{E}^{-1}$ is defined on objects as $\mathsf{E}^{-1}(X,A) = (X,A)$, and on maps $(\llbracket f \rrbracket, \llbracket g \rrbracket): (X,A) \to (Y,B)$ as follows: \begin{align*} \mathsf{E}^{-1}(\llbracket f \rrbracket, \llbracket g \rrbracket) = \left( \llbracket f \rrbracket, \xymatrixcolsep{3pc}\xymatrix{\oc X \otimes A \ar[r]^-{\oc(\iota_0) \otimes \iota_1} & \oc(X \times A) \otimes (X \times A) \ar[r]^-{\mathsf{d}_{X \times A}} & \oc (X \times A) \ar[r]^-{\llbracket g \rrbracket} & B } \right)\end{align*} \begin{align*} \mathsf{E}^{-1}\left(\begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (17) at (9.5, 1) {$\oc X$}; \node [style=object] (20) at (9.5, -1) {$Y$}; \node [style=component] (21) at (9.5, 0) {$f$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (17) to (21); \draw [style=wire] (21) to (20); \end{pgfonlayer} \end{tikzpicture} \end{array}, \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (17) at (9.5, 1) {$\oc (X \times A)$}; \node [style=object] (20) at (9.5, -1) {$B$};
\node [style=component] (21) at (9.5, 0) {$g$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (17) to (21); \draw [style=wire] (21) to (20); \end{pgfonlayer} \end{tikzpicture} \end{array} \right) = \left(\begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (17) at (9.5, 1) {$\oc X$}; \node [style=object] (20) at (9.5, -1) {$Y$}; \node [style=component] (21) at (9.5, 0) {$f$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (17) to (21); \draw [style=wire] (21) to (20); \end{pgfonlayer} \end{tikzpicture} \end{array}, \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=differential] (0) at (7, 1) {{\bf =\!=\!=\!=}}; \node [style=object] (1) at (7, -0.5) {$B$}; \node [style=component] (2) at (7, 0.25) {$g$}; \node [style=component] (5) at (7.75, 1.75) {$\iota_1$}; \node [style=function2] (6) at (6.25, 1.75) {$\iota_0$}; \node [style=object] (7) at (6.25, 2.5) {$\oc X$}; \node [style=object] (8) at (7.75, 2.5) {$A$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (0) to (2); \draw [style=wire] (2) to (1); \draw [style=wire, in=135, out=-90] (6) to (0); \draw [style=wire, in=-90, out=45, looseness=1.25] (0) to (5); \draw [style=wire] (7) to (6); \draw [style=wire] (8) to (5); \end{pgfonlayer} \end{tikzpicture} \end{array} \right) \end{align*} \end{enumerate} \end{theorem} \begin{proof} For this proof, we will follow the same strategy as in the proof of Corollary \ref{cor:seely-lin}. We will first prove that $\mathsf{E}: \L_\oc[\mathbb{X}] \to \L[\mathbb{X}_\oc]$ is a well-defined morphism of fibrations (that is, a functor that preserves Cartesian maps). Then we will prove that $\mathsf{E} \circ \mathsf{E}^{-1} = 1_{ \L[\mathbb{X}_\oc]}$ and $\mathsf{E}^{-1} \circ \mathsf{E} = 1_{ \L_\oc[\mathbb{X}]}$. 
Therefore, it follows that ${\mathsf{E}^{-1}: \L[\mathbb{X}_\oc] \to \L_\oc[\mathbb{X}]}$ is also a morphism of fibrations, and that $\mathsf{E}$ and $\mathsf{E}^{-1}$ are mutually inverse isomorphisms. By Lemma \ref{lem:cokleisli-linearcontext}.(\ref{lem:cokleisli-linearcontext.iii}), $\mathsf{E}$ is well-defined. To show that $\mathsf{E}$ preserves composition, first observe that by using Lemma \ref{lem:cokleisli-linearcontext}.(\ref{lem:cokleisli-linearcontext.i}) we see that composition in $\L[\mathbb{X}_\oc]$ can be expressed as follows (which we leave as an exercise for the reader): \[ \left(\begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (17) at (9.5, 1) {$\oc X$}; \node [style=object] (20) at (9.5, -1) {$Y$}; \node [style=component] (21) at (9.5, 0) {$f$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (17) to (21); \draw [style=wire] (21) to (20); \end{pgfonlayer} \end{tikzpicture} \end{array}, \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (17) at (9.5, 1) {$\oc (X \times A)$}; \node [style=object] (20) at (9.5, -1) {$B$}; \node [style=component] (21) at (9.5, 0) {$g$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (17) to (21); \draw [style=wire] (21) to (20); \end{pgfonlayer} \end{tikzpicture} \end{array} \right) ; \left( \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (17) at (9.5, 1) {$\oc Y$}; \node [style=object] (20) at (9.5, -1) {$Z$}; \node [style=component] (21) at (9.5, 0) {$h$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (17) to (21); \draw [style=wire] (21) to (20); \end{pgfonlayer} \end{tikzpicture} \end{array}, \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (17) at (9.5, 1) {$\oc (Y \times B)$}; \node [style=object] (20) at (9.5, -1) {$C$}; \node [style=component] (21) at
(9.5, 0) {$k$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (17) to (21); \draw [style=wire] (21) to (20); \end{pgfonlayer} \end{tikzpicture} \end{array} \right) = \left(\begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (17) at (9.5, 1.75) {$\oc X$}; \node [style=function2] (18) at (9.5, 0) {$f$}; \node [style=component] (19) at (9.5, -1) {$h$}; \node [style=object] (20) at (9.5, -1.75) {$Z$}; \node [style=component] (21) at (9.5, 1) {$\delta$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (18) to (19); \draw [style=wire] (19) to (20); \draw [style=wire] (17) to (21); \draw [style=wire] (21) to (18); \end{pgfonlayer} \end{tikzpicture} \end{array}, \begin{array}[c]{c}\resizebox{!}{4.75cm}{ \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=component] (0) at (4.75, 0) {$\delta$}; \node [style=object] (2) at (6, 3.75) {$\oc (X \times A)$}; \node [style=object] (6) at (5.5, -4.75) {$C$}; \node [style=function2] (7) at (4.75, -1.25) {$f$}; \node [style=differential] (23) at (6, 2.75) {{\bf =\!=\!=\!=}}; \node [style=component] (24) at (6.75, 1.75) {$\pi_1$}; \node [style=function2] (25) at (5.25, 1.75) {$\pi_0$}; \node [style=duplicate] (26) at (5.25, 0.75) {$\Delta$}; \node [style=differential] (27) at (6.25, -0.75) {{\bf =\!=\!=\!=}}; \node [style=component] (28) at (6.75, 0) {$\iota_1$}; \node [style=function2] (29) at (5.75, 0) {$\iota_0$}; \node [style=differential] (30) at (5.5, -3.25) {{\bf =\!=\!=\!=}}; \node [style=component] (31) at (6.25, -2.5) {$\iota_1$}; \node [style=function2] (32) at (4.75, -2.5) {$\iota_0$}; \node [style=component] (33) at (5.5, -4) {$k$}; \node [style=component] (34) at (6.25, -1.5) {$g$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (0) to (7); \draw [style=wire, in=90, out=-150, looseness=1.25] (23) to (25); \draw [style=wire, in=90, out=-30, looseness=1.25] (23) to (24); \draw [style=wire] (2) to (23); \draw
[style=wire] (25) to (26); \draw [style=wire, in=90, out=-150] (26) to (0); \draw [style=wire, in=135, out=-90] (29) to (27); \draw [style=wire, in=-90, out=45, looseness=1.25] (27) to (28); \draw [style=wire, in=135, out=-90] (32) to (30); \draw [style=wire, in=-90, out=45, looseness=1.25] (30) to (31); \draw [style=wire] (7) to (32); \draw [style=wire] (30) to (33); \draw [style=wire] (33) to (6); \draw [style=wire] (27) to (34); \draw [style=wire] (34) to (31); \draw [style=wire, in=90, out=-30, looseness=1.25] (26) to (29); \draw [style=wire] (24) to (28); \end{pgfonlayer} \end{tikzpicture} } \end{array} \right)\] Then by Lemma \ref{lem:cokleisli-linearcontext}.(\ref{lem:cokleisli-linearcontext.ii}), we compute that: \begin{align*} &\mathsf{E}\left(\begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (17) at (9.5, 1) {$\oc X$}; \node [style=object] (20) at (9.5, -1) {$Y$}; \node [style=component] (21) at (9.5, 0) {$f$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (17) to (21); \draw [style=wire] (21) to (20); \end{pgfonlayer} \end{tikzpicture} \end{array}, \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (17) at (9.75, 1.75) {$\oc X$}; \node [style=object] (19) at (10.75, 1.75) {$A$}; \node [style=component] (20) at (10.25, 0.75) {$g$}; \node [style=object] (21) at (10.25, -0.25) {$B$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (20) to (21); \draw [style=wire, in=165, out=-90] (17) to (20); \draw [style=wire, in=-90, out=15] (20) to (19); \end{pgfonlayer} \end{tikzpicture} \end{array} \right) ; \mathsf{E}\left(\begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (17) at (9.5, 1) {$\oc Y$}; \node [style=object] (20) at (9.5, -1) {$Z$}; \node [style=component] (21) at (9.5, 0) {$h$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (17) to (21); \draw [style=wire] (21) to (20); 
\end{pgfonlayer} \end{tikzpicture} \end{array}, \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (17) at (9.75, 1.75) {$\oc Y$}; \node [style=object] (19) at (10.75, 1.75) {$B$}; \node [style=component] (20) at (10.25, 0.75) {$k$}; \node [style=object] (21) at (10.25, -0.25) {$C$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (20) to (21); \draw [style=wire, in=165, out=-90] (17) to (20); \draw [style=wire, in=-90, out=15] (20) to (19); \end{pgfonlayer} \end{tikzpicture} \end{array} \right) = \left(\begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (17) at (9.5, 1) {$\oc X$}; \node [style=object] (20) at (9.5, -1) {$Y$}; \node [style=component] (21) at (9.5, 0) {$f$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (17) to (21); \draw [style=wire] (21) to (20); \end{pgfonlayer} \end{tikzpicture} \end{array}, \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=component] (0) at (7, 0.75) {$g$}; \node [style=object] (2) at (7, 0) {$B$}; \node [style=object] (3) at (7, 3.5) {$\oc (X \times A)$}; \node [style=differential] (4) at (7, 2.75) {{\bf =\!=\!=\!=}}; \node [style=component] (5) at (7.75, 1.75) {$\pi_1$}; \node [style=function2] (6) at (6.25, 1.75) {$\pi_0$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (0) to (2); \draw [style=wire] (3) to (4); \draw [style=wire, in=90, out=-150, looseness=1.25] (4) to (6); \draw [style=wire, in=135, out=-90] (6) to (0); \draw [style=wire, in=-90, out=45, looseness=1.25] (0) to (5); \draw [style=wire, in=90, out=-30, looseness=1.25] (4) to (5); \end{pgfonlayer} \end{tikzpicture} \end{array} \right) ; \left(\begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (17) at (9.5, 1) {$\oc Y$}; \node [style=object] (20) at (9.5, -1) {$Z$}; \node [style=component] (21) at (9.5, 0) {$h$}; \end{pgfonlayer} 
\begin{pgfonlayer}{edgelayer} \draw [style=wire] (17) to (21); \draw [style=wire] (21) to (20); \end{pgfonlayer} \end{tikzpicture} \end{array}, \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (17) at (9.75, 1.75) {$\oc Y$}; \node [style=object] (19) at (10.75, 1.75) {$B$}; \node [style=component] (20) at (10.25, 0.75) {$k$}; \node [style=object] (21) at (10.25, -0.25) {$C$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (20) to (21); \draw [style=wire, in=165, out=-90] (17) to (20); \draw [style=wire, in=-90, out=15] (20) to (19); \end{pgfonlayer} \end{tikzpicture} \end{array} \right) \\&= \left(\begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (17) at (9.5, 1.75) {$\oc X$}; \node [style=function2] (18) at (9.5, 0) {$f$}; \node [style=component] (19) at (9.5, -1) {$h$}; \node [style=object] (20) at (9.5, -1.75) {$Z$}; \node [style=component] (21) at (9.5, 1) {$\delta$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (18) to (19); \draw [style=wire] (19) to (20); \draw [style=wire] (17) to (21); \draw [style=wire] (21) to (18); \end{pgfonlayer} \end{tikzpicture} \end{array}, \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=component] (0) at (4.75, 1.5) {$\delta$}; \node [style=object] (2) at (6, 5.25) {$\oc (X \times A)$}; \node [style=object] (6) at (5.5, -6.5) {$C$}; \node [style=function2] (7) at (4.75, 0) {$f$}; \node [style=differential] (23) at (6, 4.25) {{\bf =\!=\!=\!=}}; \node [style=component] (24) at (6.75, 3.25) {$\pi_1$}; \node [style=function2] (25) at (5.25, 3.25) {$\pi_0$}; \node [style=duplicate]
(26) at (5.25, 2.25) {$\Delta$}; \node [style=differential] (27) at (6.25, 0.75) {{\bf =\!=\!=\!=}}; \node [style=component] (28) at (6.75, 1.5) {$\iota_1$}; \node [style=function2] (29) at (5.75, 1.5) {$\iota_0$}; \node [style=differential] (30) at (5.5, -3.25) {{\bf =\!=\!=\!=}}; \node [style=component] (31) at (6.25, -2.5) {$\iota_1$}; \node [style=function2] (32) at (4.75, -2.5) {$\iota_0$}; \node [style=component] (35) at (5.5, -5.75) {$k$}; \node [style=differential] (38) at (5.5, -3.75) {{\bf =\!=\!=\!=}}; \node [style=component] (39) at (6.25, -4.75) {$\pi_1$}; \node [style=function2] (40) at (4.75, -4.75) {$\pi_0$}; \node [style=component] (41) at (6.25, -1.75) {$g$}; \node [style=differential] (44) at (6.25, 0.25) {{\bf =\!=\!=\!=}}; \node [style=component] (45) at (7, -0.75) {$\pi_1$}; \node [style=function2] (46) at (5.5, -0.75) {$\pi_0$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (0) to (7); \draw [style=wire, in=90, out=-150, looseness=1.25] (23) to (25); \draw [style=wire, in=90, out=-30, looseness=1.25] (23) to (24); \draw [style=wire] (2) to (23); \draw [style=wire] (25) to (26); \draw [style=wire, in=90, out=-150] (26) to (0); \draw [style=wire, in=135, out=-90] (29) to (27); \draw [style=wire, in=-90, out=45, looseness=1.25] (27) to (28); \draw [style=wire, in=135, out=-90] (32) to (30); \draw [style=wire, in=-90, out=45, looseness=1.25] (30) to (31); \draw [style=wire] (7) to (32); \draw [style=wire, in=90, out=-30, looseness=1.25] (26) to (29); \draw [style=wire] (24) to (28); \draw [style=wire, in=90, out=-150, looseness=1.25] (38) to (40); \draw [style=wire, in=135, out=-90] (40) to (35); \draw [style=wire, in=-90, out=45, looseness=1.25] (35) to (39); \draw [style=wire, in=90, out=-30, looseness=1.25] (38) to (39); \draw [style=wire] (30) to (38); \draw [style=wire] (35) to (6); \draw [style=wire, in=90, out=-150, looseness=1.25] (44) to (46); \draw [style=wire, in=135, out=-90] (46) to (41); \draw [style=wire, 
in=-90, out=45, looseness=1.25] (41) to (45); \draw [style=wire, in=90, out=-30, looseness=1.25] (44) to (45); \draw [style=wire] (27) to (44); \draw [style=wire] (41) to (31); \end{pgfonlayer} \end{tikzpicture} \end{array} \right) = \left(\begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (17) at (9.5, 1.75) {$\oc X$}; \node [style=function2] (18) at (9.5, 0) {$f$}; \node [style=component] (19) at (9.5, -1) {$h$}; \node [style=object] (20) at (9.5, -1.75) {$Z$}; \node [style=component] (21) at (9.5, 1) {$\delta$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (18) to (19); \draw [style=wire] (19) to (20); \draw [style=wire] (17) to (21); \draw [style=wire] (21) to (18); \end{pgfonlayer} \end{tikzpicture} \end{array}, \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (2) at (6, 5.25) {$\oc (X \times A)$}; \node [style=differential] (23) at (6, 4.25) {{\bf =\!=\!=\!=}}; \node [style=component] (24) at (6.75, 3.25) {$\pi_1$}; \node [style=function2] (25) at (5.25, 3.25) {$\pi_0$}; \node [style=component] (42) at (4.75, 1.25) {$\delta$}; \node [style=duplicate] (43) at (5.25, 2.25) {$\Delta$}; \node [style=component] (45) at (6, 1.25) {$g$}; \node [style=component] (47) at (5.5, -0.5) {$k$}; \node [style=object] (48) at (5.5, -1.25) {$C$}; \node [style=function2] (49) at (4.75, 0.25) {$f$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire, in=90, out=-150, looseness=1.25] (23) to (25); \draw [style=wire, in=90, out=-30, looseness=1.25] (23) to (24); \draw [style=wire] (2) to (23); \draw [style=wire, bend right] (43) to (42); \draw [style=wire, in=150, out=-30, looseness=1.25] (43) to (45); \draw [style=wire] (47) to (48); \draw [style=wire, in=30, out=-90] (45) to (47); \draw [style=wire] (42) to (49); \draw [style=wire, bend right, looseness=1.25] (49) to (47); \draw [style=wire] (25) to (43); \draw [style=wire, in=30, out=-90] (24) to (45); 
\end{pgfonlayer} \end{tikzpicture} \end{array} \right) \\ &= \mathsf{E}\left(\begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (17) at (9.5, 1.75) {$\oc X$}; \node [style=function2] (18) at (9.5, 0) {$f$}; \node [style=component] (19) at (9.5, -1) {$h$}; \node [style=object] (20) at (9.5, -1.75) {$Z$}; \node [style=component] (21) at (9.5, 1) {$\delta$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (18) to (19); \draw [style=wire] (19) to (20); \draw [style=wire] (17) to (21); \draw [style=wire] (21) to (18); \end{pgfonlayer} \end{tikzpicture} \end{array}, \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=component] (15) at (9, 0) {$\delta$}; \node [style=duplicate] (16) at (9.5, 1) {$\Delta$}; \node [style=object] (17) at (9.5, 1.75) {$\oc X$}; \node [style=component] (18) at (10.25, 0) {$g$}; \node [style=object] (19) at (10.75, 1.75) {$A$}; \node [style=component] (20) at (9.75, -1.75) {$k$}; \node [style=object] (21) at (9.75, -2.5) {$C$}; \node [style=function2] (23) at (9, -1) {$f$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire, bend right] (16) to (15); \draw [style=wire] (17) to (16); \draw [style=wire, in=-90, out=15, looseness=0.75] (18) to (19); \draw [style=wire, in=150, out=-30, looseness=1.25] (16) to (18); \draw [style=wire] (20) to (21); \draw [style=wire, in=30, out=-90] (18) to (20); \draw [style=wire] (15) to (23); \draw [style=wire, bend right, looseness=1.25] (23) to (20); \end{pgfonlayer} \end{tikzpicture} \end{array} \right) = \mathsf{E} \left( \left(\begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (17) at (9.5, 1) {$\oc X$}; \node [style=object] (20) at (9.5, -1) {$Y$}; \node [style=component] (21) at (9.5, 0) {$f$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (17) to (21); \draw [style=wire] (21) to (20); \end{pgfonlayer} \end{tikzpicture} \end{array}, 
\begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (17) at (9.75, 1.75) {$\oc X$}; \node [style=object] (19) at (10.75, 1.75) {$A$}; \node [style=component] (20) at (10.25, 0.75) {$g$}; \node [style=object] (21) at (10.25, -0.25) {$B$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (20) to (21); \draw [style=wire, in=165, out=-90] (17) to (20); \draw [style=wire, in=-90, out=15] (20) to (19); \end{pgfonlayer} \end{tikzpicture} \end{array} \right) ; \left(\begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (17) at (9.5, 1) {$\oc Y$}; \node [style=object] (20) at (9.5, -1) {$Z$}; \node [style=component] (21) at (9.5, 0) {$h$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (17) to (21); \draw [style=wire] (21) to (20); \end{pgfonlayer} \end{tikzpicture} \end{array}, \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (17) at (9.75, 1.75) {$\oc Y$}; \node [style=object] (19) at (10.75, 1.75) {$B$}; \node [style=component] (20) at (10.25, 0.75) {$k$}; \node [style=object] (21) at (10.25, -0.25) {$C$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (20) to (21); \draw [style=wire, in=165, out=-90] (17) to (20); \draw [style=wire, in=-90, out=15] (20) to (19); \end{pgfonlayer} \end{tikzpicture} \end{array} \right) \right) \end{align*} Next we show that $\mathsf{E}$ preserves identities. First note that the identity in $\L[\mathbb{X}_\oc]$ is easily computed out to be $(\llbracket 1_X \rrbracket, \llbracket \pi_1 \rrbracket) = (\varepsilon_X, \varepsilon_{X \times A}; \pi_1)$. 
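Here we use that a map $h$ of $\mathbb{X}$ induces the co-Kleisli map $\llbracket h \rrbracket = \varepsilon; h$ in $\mathbb{X}_\oc$, so in particular: \[ (\llbracket 1_X \rrbracket, \llbracket \pi_1 \rrbracket) = (\varepsilon_X; 1_X, \varepsilon_{X \times A}; \pi_1) = (\varepsilon_X, \varepsilon_{X \times A}; \pi_1) \]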
Then we compute: \[ \mathsf{E}\left(\begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (17) at (9.5, 1) {$\oc X$}; \node [style=object] (20) at (9.5, -1) {$X$}; \node [style=component] (21) at (9.5, 0) {$\varepsilon$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (17) to (21); \draw [style=wire] (21) to (20); \end{pgfonlayer} \end{tikzpicture} \end{array}, \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (17) at (9.75, 1.75) {$\oc X$}; \node [style=object] (19) at (10.5, 1.75) {$A$}; \node [style=object] (21) at (10.5, -0.25) {$A$}; \node [style=component] (22) at (9.75, 0.5) {$e$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (19) to (21); \draw [style=wire] (17) to (22); \end{pgfonlayer} \end{tikzpicture} \end{array} \right) = \left(\begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (17) at (9.5, 1) {$\oc X$}; \node [style=object] (20) at (9.5, -1) {$X$}; \node [style=component] (21) at (9.5, 0) {$\varepsilon$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (17) to (21); \draw [style=wire] (21) to (20); \end{pgfonlayer} \end{tikzpicture} \end{array}, \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (1) at (7.75, 0) {$A$}; \node [style=object] (2) at (7, 3.5) {$\oc(X \times A)$}; \node [style=differential] (3) at (7, 2.75) {{\bf =\!=\!=\!=}}; \node [style=component] (4) at (7.75, 1.75) {$\pi_1$}; \node [style=function2] (5) at (6.25, 1.75) {$\pi_0$}; \node [style=component] (6) at (6.25, 1) {$e$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (2) to (3); \draw [style=wire, in=90, out=-150, looseness=1.25] (3) to (5); \draw [style=wire, in=90, out=-30, looseness=1.25] (3) to (4); \draw [style=wire] (5) to (6); \draw [style=wire] (4) to (1); \end{pgfonlayer} \end{tikzpicture} \end{array} \right) = \left(\begin{array}[c]{c} 
\begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (17) at (9.5, 1) {$\oc X$}; \node [style=object] (20) at (9.5, -1) {$X$}; \node [style=component] (21) at (9.5, 0) {$\varepsilon$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (17) to (21); \draw [style=wire] (21) to (20); \end{pgfonlayer} \end{tikzpicture} \end{array}, \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (1) at (7.75, 0.75) {$A$}; \node [style=object] (2) at (7, 3.5) {$\oc(X \times A)$}; \node [style=differential] (3) at (7, 2.75) {{\bf =\!=\!=\!=}}; \node [style=component] (4) at (7.75, 1.75) {$\pi_1$}; \node [style=component] (5) at (6.25, 1.75) {$e$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (2) to (3); \draw [style=wire, in=90, out=-150, looseness=1.25] (3) to (5); \draw [style=wire, in=90, out=-30, looseness=1.25] (3) to (4); \draw [style=wire] (4) to (1); \end{pgfonlayer} \end{tikzpicture} \end{array} \right) = \left(\begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (17) at (9.5, 1) {$\oc X$}; \node [style=object] (20) at (9.5, -1) {$X$}; \node [style=component] (21) at (9.5, 0) {$\varepsilon$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (17) to (21); \draw [style=wire] (21) to (20); \end{pgfonlayer} \end{tikzpicture} \end{array}, \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (1) at (7.75, 1) {$A$}; \node [style=object] (2) at (7.75, 4) {$\oc(X \times A)$}; \node [style=component] (4) at (7.75, 2) {$\pi_1$}; \node [style=component] (5) at (7.75, 3) {$\varepsilon$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (4) to (1); \draw [style=wire] (2) to (5); \draw [style=wire] (5) to (4); \end{pgfonlayer} \end{tikzpicture} \end{array} \right) \] So $\mathsf{E}$ is a functor. 
To show that $\mathsf{E}$ is a fibration morphism, we must show that $\mathsf{p} \circ \mathsf{E} = \mathsf{p}_\oc$ and that $\mathsf{E}$ preserves Cartesian maps. The former is straightforward on both objects and maps, since $\mathsf{p}\mathsf{E}(X,A) = X = \mathsf{p}_\oc(X,A)$ and $\mathsf{p}\mathsf{E}(\llbracket f \rrbracket, g) = \llbracket f \rrbracket = \mathsf{p}_\oc(\llbracket f \rrbracket,g)$. For the latter, note that by Proposition \ref{prop:linear_fibration}, the Cartesian maps in $\L[\mathbb{X}_\oc]$ are easily computed out to be the maps of the form $(\llbracket f \rrbracket, \llbracket \pi_1 \rrbracket) = (\llbracket f \rrbracket, \varepsilon_{X \times A}; \pi_1)$, while by Proposition \ref{prop:context_fibration}, the Cartesian maps in $\L_\oc[\mathbb{X}]$ are of the form $(\llbracket f \rrbracket, e_X \otimes 1_A)$. By a calculation similar to the one above, we easily compute that $\mathsf{E} (\llbracket f \rrbracket, e_X \otimes 1_A) = (\llbracket f \rrbracket, \varepsilon_{X \times A}; \pi_1)$. So $\mathsf{E}$ preserves Cartesian maps, and we conclude that $\mathsf{E}$ is a fibration morphism. Next we show that $\mathsf{E}$ and $\mathsf{E}^{-1}$ are inverses of each other.
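Unwinding the definitions of $\mathsf{E}$ and $\mathsf{E}^{-1}$ above, the two round-trips are given on second components by: \begin{align*} \mathsf{E}\mathsf{E}^{-1}(\llbracket f \rrbracket, \llbracket g \rrbracket) &= \left( \llbracket f \rrbracket, \mathsf{d}^\circ_{X \times A}; (\oc(\pi_0) \otimes \pi_1); (\oc(\iota_0) \otimes \iota_1); \mathsf{d}_{X \times A}; \llbracket g \rrbracket \right) \\ \mathsf{E}^{-1}\mathsf{E}(\llbracket f \rrbracket, g) &= \left( \llbracket f \rrbracket, (\oc(\iota_0) \otimes \iota_1); \mathsf{d}_{X \times A}; \mathsf{d}^\circ_{X \times A}; (\oc(\pi_0) \otimes \pi_1); g \right) \end{align*} So it suffices to check that these second components are equal to $\llbracket g \rrbracket$ and $g$ respectively, which we do diagrammatically.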
Starting with $\mathsf{E} \circ \mathsf{E}^{-1}$, clearly on objects $\mathsf{E}\mathsf{E}^{-1}(X,A) = (X,A)$, while on maps we use Lemma \ref{lem:cokleisli-linearcontext}.(\ref{lem:cokleisli-linearcontext.ii}): \[ \mathsf{E}\mathsf{E}^{-1}\left(\begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (17) at (9.5, 1) {$\oc X$}; \node [style=object] (20) at (9.5, -1) {$Y$}; \node [style=component] (21) at (9.5, 0) {$f$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (17) to (21); \draw [style=wire] (21) to (20); \end{pgfonlayer} \end{tikzpicture} \end{array}, \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (17) at (9.5, 1) {$\oc (X \times A)$}; \node [style=object] (20) at (9.5, -1) {$B$}; \node [style=component] (21) at (9.5, 0) {$g$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (17) to (21); \draw [style=wire] (21) to (20); \end{pgfonlayer} \end{tikzpicture} \end{array} \right) = \mathsf{E} \left(\begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (17) at (9.5, 1) {$\oc X$}; \node [style=object] (20) at (9.5, -1) {$Y$}; \node [style=component] (21) at (9.5, 0) {$f$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (17) to (21); \draw [style=wire] (21) to (20); \end{pgfonlayer} \end{tikzpicture} \end{array}, \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=differential] (0) at (7, 1) {{\bf =\!=\!=\!=}}; \node [style=object] (1) at (7, -0.5) {$B$}; \node [style=component] (2) at (7, 0.25) {$g$}; \node [style=component] (5) at (7.75, 1.75) {$\iota_1$}; \node [style=function2] (6) at (6.25, 1.75) {$\iota_0$}; \node [style=object] (7) at (6.25, 2.5) {$\oc X$}; \node [style=object] (8) at (7.75, 2.5) {$A$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (0) to (2); \draw [style=wire] (2) to (1); \draw [style=wire, in=135, out=-90] (6) to (0); 
\draw [style=wire, in=-90, out=45, looseness=1.25] (0) to (5); \draw [style=wire] (7) to (6); \draw [style=wire] (8) to (5); \end{pgfonlayer} \end{tikzpicture} \end{array} \right) = \left(\begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (17) at (9.5, 1) {$\oc X$}; \node [style=object] (20) at (9.5, -1) {$Y$}; \node [style=component] (21) at (9.5, 0) {$f$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (17) to (21); \draw [style=wire] (21) to (20); \end{pgfonlayer} \end{tikzpicture} \end{array}, \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (0) at (2.75, 5.5) {$\oc(X \times A)$}; \node [style=differential] (1) at (2.75, 4.75) {{\bf =\!=\!=\!=}}; \node [style=component] (2) at (3.5, 3.75) {$\pi_1$}; \node [style=function2] (3) at (2, 3.75) {$\pi_0$}; \node [style=differential] (4) at (2.75, 1.75) {{\bf =\!=\!=\!=}}; \node [style=object] (5) at (2.75, 0.25) {$B$}; \node [style=component] (6) at (2.75, 1) {$g$}; \node [style=component] (7) at (3.5, 2.75) {$\iota_1$}; \node [style=function2] (8) at (2, 2.75) {$\iota_0$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire, in=90, out=-150, looseness=1.25] (1) to (3); \draw [style=wire, in=90, out=-30, looseness=1.25] (1) to (2); \draw [style=wire] (0) to (1); \draw [style=wire] (4) to (6); \draw [style=wire] (6) to (5); \draw [style=wire, in=150, out=-90] (8) to (4); \draw [style=wire, in=-90, out=30] (4) to (7); \draw [style=wire] (3) to (8); \draw [style=wire] (2) to (7); \end{pgfonlayer} \end{tikzpicture} \end{array} \right) =\left(\begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (17) at (9.5, 1) {$\oc X$}; \node [style=object] (20) at (9.5, -1) {$Y$}; \node [style=component] (21) at (9.5, 0) {$f$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (17) to (21); \draw [style=wire] (21) to (20); \end{pgfonlayer} \end{tikzpicture} \end{array}, 
\begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (17) at (9.5, 1) {$\oc (X \times A)$}; \node [style=object] (20) at (9.5, -1) {$B$}; \node [style=component] (21) at (9.5, 0) {$g$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (17) to (21); \draw [style=wire] (21) to (20); \end{pgfonlayer} \end{tikzpicture} \end{array} \right) \] So $\mathsf{E} \circ \mathsf{E}^{-1} = 1_{ \L[\mathbb{X}_\oc]}$. Next for $\mathsf{E}^{-1} \circ \mathsf{E}$, again this is clear on objects since $\mathsf{E}^{-1}\mathsf{E}(X,A) = (X,A)$, while on maps we use Lemma \ref{lem:cokleisli-linearcontext}.(\ref{lem:cokleisli-linearcontext.i}): \[ \mathsf{E}^{-1}\mathsf{E}\left(\begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (17) at (9.5, 1) {$\oc X$}; \node [style=object] (20) at (9.5, -1) {$Y$}; \node [style=component] (21) at (9.5, 0) {$f$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (17) to (21); \draw [style=wire] (21) to (20); \end{pgfonlayer} \end{tikzpicture} \end{array}, \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (17) at (9.75, 1.75) {$\oc X$}; \node [style=object] (19) at (10.75, 1.75) {$A$}; \node [style=component] (20) at (10.25, 0.75) {$g$}; \node [style=object] (21) at (10.25, -0.25) {$B$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (20) to (21); \draw [style=wire, in=165, out=-90] (17) to (20); \draw [style=wire, in=-90, out=15] (20) to (19); \end{pgfonlayer} \end{tikzpicture} \end{array} \right) = \mathsf{E}^{-1} \left(\begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (17) at (9.5, 1) {$\oc X$}; \node [style=object] (20) at (9.5, -1) {$Y$}; \node [style=component] (21) at (9.5, 0) {$f$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (17) to (21); \draw [style=wire] (21) to (20); \end{pgfonlayer} \end{tikzpicture} 
\end{array}, \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=component] (0) at (7, 0.75) {$g$}; \node [style=object] (2) at (7, 0) {$B$}; \node [style=object] (3) at (7, 3.5) {$\oc (X \times A)$}; \node [style=differential] (4) at (7, 2.75) {{\bf =\!=\!=\!=}}; \node [style=component] (5) at (7.75, 1.75) {$\pi_1$}; \node [style=function2] (6) at (6.25, 1.75) {$\pi_0$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (0) to (2); \draw [style=wire] (3) to (4); \draw [style=wire, in=90, out=-150, looseness=1.25] (4) to (6); \draw [style=wire, in=135, out=-90] (6) to (0); \draw [style=wire, in=-90, out=45, looseness=1.25] (0) to (5); \draw [style=wire, in=90, out=-30, looseness=1.25] (4) to (5); \end{pgfonlayer} \end{tikzpicture} \end{array} \right) = \left(\begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (17) at (9.5, 1) {$\oc X$}; \node [style=object] (20) at (9.5, -1) {$Y$}; \node [style=component] (21) at (9.5, 0) {$f$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (17) to (21); \draw [style=wire] (21) to (20); \end{pgfonlayer} \end{tikzpicture} \end{array}, \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=component] (9) at (7, 1) {$g$}; \node [style=object] (10) at (7, 0.25) {$B$}; \node [style=differential] (12) at (7, 3) {{\bf =\!=\!=\!=}}; \node [style=component] (13) at (7.75, 2) {$\pi_1$}; \node [style=function2] (14) at (6.25, 2) {$\pi_0$}; \node [style=differential] (15) at (7, 3.75) {{\bf =\!=\!=\!=}}; \node [style=component] (16) at (7.75, 4.75) {$\pi_1$}; \node [style=function2] (17) at (6.25, 4.75) {$\pi_0$}; \node [style=object] (18) at (6.25, 5.5) {$\oc X$}; \node [style=object] (19) at (7.75, 5.5) {$A$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (9) to (10); \draw [style=wire, in=90, out=-150, looseness=1.25] (12) to (14); \draw [style=wire, in=135, out=-90] (14) to (9); \draw 
[style=wire, in=-90, out=45, looseness=1.25] (9) to (13); \draw [style=wire, in=90, out=-30, looseness=1.25] (12) to (13); \draw [style=wire, in=-90, out=150, looseness=1.25] (15) to (17); \draw [style=wire, in=-90, out=30, looseness=1.25] (15) to (16); \draw [style=wire] (15) to (12); \draw [style=wire] (18) to (17); \draw [style=wire] (19) to (16); \end{pgfonlayer} \end{tikzpicture} \end{array} \right) = \left(\begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (17) at (9.5, 1) {$\oc X$}; \node [style=object] (20) at (9.5, -1) {$Y$}; \node [style=component] (21) at (9.5, 0) {$f$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (17) to (21); \draw [style=wire] (21) to (20); \end{pgfonlayer} \end{tikzpicture} \end{array}, \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (17) at (9.75, 1.75) {$\oc X$}; \node [style=object] (19) at (10.75, 1.75) {$A$}; \node [style=component] (20) at (10.25, 0.75) {$g$}; \node [style=object] (21) at (10.25, -0.25) {$B$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (20) to (21); \draw [style=wire, in=165, out=-90] (17) to (20); \draw [style=wire, in=-90, out=15] (20) to (19); \end{pgfonlayer} \end{tikzpicture} \end{array} \right) \] So $\mathsf{E}^{-1} \circ \mathsf{E} = 1_{ \L_\oc[\mathbb{X}]}$. As a consequence, it follows that $\mathsf{E}^{-1}$ is also a fibration morphism. Therefore, $\mathsf{E}$ and $\mathsf{E}^{-1}$ are fibration isomorphisms and inverses of each other, and so we conclude that the fibrations ${\mathsf{p}_\oc: \L_\oc[\mathbb{X}] \to \mathbb{X}_\oc}$ and ${\mathsf{p}: \L[\mathbb{X}_\oc] \to \mathbb{X}_\oc}$ are isomorphic. \end{proof} As an immediate consequence, we have that fibres over the same object are isomorphic. 
\begin{corollary}\label{cor:fibres-equiv} Let $\mathbb{X}$ be a monoidal differential category with coalgebra modality $(\oc, \delta, \varepsilon, \Delta, e)$ and deriving transformation $\mathsf{d}: \oc A \otimes A \to \oc A$, and finite (bi)products. Then for each object $X$, $\L_\oc[X]$ is isomorphic to $\L[X]$ via the functors $\mathsf{E}_X: \L_\oc[X] \to \L[X]$ and $\mathsf{E}^{-1}_X: \L[X] \to \L_\oc[X]$ where: \begin{enumerate}[{\em (i)}] \item $\mathsf{E}_X$ is defined on objects as $\mathsf{E}_X(A) = A$, and on maps $g: \oc X \otimes A \to B$ as follows: \begin{align*} \mathsf{E}_X(g) = \left(\xymatrixcolsep{3pc}\xymatrix{\oc(X \times A) \ar[r]^-{\mathsf{d}^\circ_{X \times A}} & \oc (X \times A) \otimes (X \times A) \ar[r]^-{\oc(\pi_0) \otimes \pi_1} & \oc X \otimes A \ar[r]^-{g} & B } \right) \end{align*} \begin{align*} \mathsf{E}_X\left( \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (17) at (9.75, 1.75) {$\oc X$}; \node [style=object] (19) at (10.75, 1.75) {$A$}; \node [style=component] (20) at (10.25, 0.75) {$g$}; \node [style=object] (21) at (10.25, -0.25) {$B$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (20) to (21); \draw [style=wire, in=165, out=-90] (17) to (20); \draw [style=wire, in=-90, out=15] (20) to (19); \end{pgfonlayer} \end{tikzpicture} \end{array} \right) = \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=component] (0) at (7, 0.75) {$g$}; \node [style=object] (2) at (7, 0) {$B$}; \node [style=object] (3) at (7, 3.5) {$\oc (X \times A)$}; \node [style=differential] (4) at (7, 2.75) {{\bf =\!=\!=\!=}}; \node [style=component] (5) at (7.75, 1.75) {$\pi_1$}; \node [style=function2] (6) at (6.25, 1.75) {$\pi_0$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (0) to (2); \draw [style=wire] (3) to (4); \draw [style=wire, in=90, out=-150, looseness=1.25] (4) to (6); \draw [style=wire, in=135, out=-90] (6) to (0); \draw
[style=wire, in=-90, out=45, looseness=1.25] (0) to (5); \draw [style=wire, in=90, out=-30, looseness=1.25] (4) to (5); \end{pgfonlayer} \end{tikzpicture} \end{array} \end{align*} \item $\mathsf{E}_X^{-1}$ is defined on objects as $\mathsf{E}^{-1}_X(A) = A$, and on maps $\llbracket g \rrbracket: \oc(X \times A) \to B$ as follows: \begin{align*} \mathsf{E}_X^{-1}(\llbracket g \rrbracket) = \left( \xymatrixcolsep{3pc}\xymatrix{\oc X \otimes A \ar[r]^-{\oc(\iota_0) \otimes \iota_1} & \oc(X \times A) \otimes (X \times A) \ar[r]^-{\mathsf{d}_{X \times A}} & \oc (X \times A) \ar[r]^-{\llbracket g \rrbracket} & B } \right) \end{align*} \begin{align*}\mathsf{E}_X^{-1}\left( \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (17) at (9.5, 1) {$\oc (X \times A)$}; \node [style=object] (20) at (9.5, -1) {$B$}; \node [style=component] (21) at (9.5, 0) {$g$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (17) to (21); \draw [style=wire] (21) to (20); \end{pgfonlayer} \end{tikzpicture} \end{array} \right) = \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=differential] (0) at (7, 1) {{\bf =\!=\!=\!=}}; \node [style=object] (1) at (7, -0.5) {$B$}; \node [style=component] (2) at (7, 0.25) {$g$}; \node [style=component] (5) at (7.75, 1.75) {$\iota_1$}; \node [style=function2] (6) at (6.25, 1.75) {$\iota_0$}; \node [style=object] (7) at (6.25, 2.5) {$\oc X$}; \node [style=object] (8) at (7.75, 2.5) {$A$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (0) to (2); \draw [style=wire] (2) to (1); \draw [style=wire, in=135, out=-90] (6) to (0); \draw [style=wire, in=-90, out=45, looseness=1.25] (0) to (5); \draw [style=wire] (7) to (6); \draw [style=wire] (8) to (5); \end{pgfonlayer} \end{tikzpicture} \end{array} \end{align*} \end{enumerate} As such, a coKleisli map $\llbracket f \rrbracket: \oc(X \times A) \to B$ is linear in context $X$ if and only if there exists a
(necessarily unique) map $g: \oc X \otimes A \to B$ such that $\llbracket f \rrbracket = \mathsf{d}^\circ_{X \times A}; (\oc(\pi_0) \otimes \pi_1); g$. \end{corollary} \section{Cartesian reverse differential categories}\label{sec:CRDC} In this section we recall the key definitions and results on Cartesian reverse differential categories from \cite{cockett_et_al:LIPIcs:2020:11661}. \subsection{Definition} \begin{definition}\label{cartrevdiffdef}\cite[Definition 13]{cockett_et_al:LIPIcs:2020:11661} A \textbf{Cartesian reverse differential category} (CRDC) is a Cartesian left additive category $\mathbb{X}$ equipped with a \textbf{reverse differential combinator} $\mathsf{R}$, which is a family of operators ${\mathsf{R}: \mathbb{X}(A,B) \to \mathbb{X}(A \times B,A)}$, $f \mapsto \mathsf{R}[f]$, where $\mathsf{R}[f]$ is called the \textbf{reverse derivative} of $f$, such that the following seven axioms hold: \begin{enumerate}[{\bf [RD.1]}] \item Additivity of reverse differentiation: $\mathsf{R}[f+g] = \mathsf{R}[f] + \mathsf{R}[g]$ and $\mathsf{R}[0]=0$; \item Additivity of the reverse derivative in its second variable: \[ (1_A \times +_B); \mathsf{R}[f] = (1_A \times \pi_0);\mathsf{R}[f] + (1_A \times \pi_1);\mathsf{R}[f] \] and $\iota_0; \mathsf{R}[f]=0$.
\item Coherence with identities and projections: $\mathsf{R}[1_A]=\pi_1$, $\mathsf{R}[\pi_0] = \pi_1;\iota_0$ and $\mathsf{R}[\pi_1] = \pi_1;\iota_1$; \item Coherence with pairings: $\mathsf{R}[\langle f, g \rangle] = (1_A \times \pi_0); \mathsf{R}[f] + (1_A \times \pi_1);\mathsf{R}[g]$; \item Reverse chain rule: $\mathsf{R}[fg] = \left \langle \pi_0, \langle \pi_0; f, \pi_1 \rangle; \mathsf{R}[g] \right \rangle; \mathsf{R}[f]$; \item Linearity of the reverse derivative in its second variable: $(\iota_0 \times \iota_1); (\iota_0 \times 1_{A \times B}); \mathsf{R}\!\left[\mathsf{R}\!\left[\mathsf{R}[f] \right] \right]; \pi_1 = \mathsf{R}[f]$; \item Symmetry of mixed partial derivatives: \[ c_A; (\iota_0 \times 1_{A \times A}); \mathsf{R}\!\left[\mathsf{R}\!\left[(\iota_0 \times 1_A); \mathsf{R}\!\left[\mathsf{R}[f] \right]; \pi_1 \right] \right]; \pi_1 = (\iota_0 \times 1_{A \times A});\mathsf{R}\!\left[\mathsf{R}\!\left[(\iota_0 \times 1_A); \mathsf{R}\!\left[\mathsf{R}[f] \right]; \pi_1 \right] \right]; \pi_1 \] \end{enumerate} \end{definition} For more discussion on the definition and examples, see \cite{cockett_et_al:LIPIcs:2020:11661}. One of the central results of that paper is that any CRDC also has the structure of a CDC: \begin{theorem}\label{thm:crdc_to_cdc}\cite[Theorem 16]{cockett_et_al:LIPIcs:2020:11661} If $\X$ is a Cartesian reverse differential category with reverse differential combinator $\mathsf{R}$, then $\X$ is also a Cartesian differential category, where for a map $f: A \to B$, its derivative $D[f]: A \times A \to B$ is defined as follows: \[ D[f] := \xymatrixcolsep{5pc}\xymatrix{ A \times A \ar[r]^-{\iota_0 \times 1_A} & A \times B \times A \ar[r]^-{R[R[f]]} & A \times B \ar[r]^-{\pi_1} & B } \] \end{theorem} In general, however, there is no reason why a CDC should have the structure of a CRDC. In the next two sections we look at what additional structure is needed on a CDC to get a CRDC. 
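To make the reverse differential combinator concrete, the motivating example is smooth (or polynomial) maps between Euclidean spaces, where $\mathsf{R}[f](x, w) = J_f(x)^T w$ is the transposed Jacobian applied to $w$, i.e.\ reverse-mode differentiation. The following numerical sketch is our own illustration (the sample maps and the finite-difference Jacobian are assumptions, not from the text); it checks the reverse chain rule \textbf{[RD.5]} and the recovery of the forward derivative in Theorem \ref{thm:crdc_to_cdc}:

```python
import numpy as np

def jacobian(f, x, h=1e-6):
    """Finite-difference Jacobian of f at x."""
    x = np.asarray(x, dtype=float)
    fx = np.asarray(f(x), dtype=float)
    J = np.zeros((fx.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = h
        J[:, i] = (np.asarray(f(x + dx), dtype=float) - fx) / h
    return J

def R(f):
    """Reverse differential combinator: R[f](x, w) = J_f(x)^T w."""
    return lambda x, w: jacobian(f, x).T @ np.asarray(w, dtype=float)

# Sample polynomial maps (chosen only for illustration)
f = lambda x: np.array([x[0] * x[1], x[0] + x[1] ** 2])  # R^2 -> R^2
g = lambda y: np.array([y[0] ** 2 + y[1]])               # R^2 -> R^1

x, w = np.array([1.0, 2.0]), np.array([3.0])

# [RD.5] reverse chain rule: R[f;g] = <pi_0, <pi_0;f, pi_1>; R[g]>; R[f],
# i.e. R[f;g](x, w) = R[f](x, R[g](f(x), w))
lhs = R(lambda z: g(f(z)))(x, w)
rhs = R(f)(x, R(g)(f(x), w))
assert np.allclose(lhs, rhs, atol=1e-3)

# Theorem 16: the forward derivative D[f](x, v) = pi_1(R[R[f]]((x, 0), v))
Rf = lambda z: R(f)(z[:2], z[2:])            # R[f] uncurried: R^4 -> R^2
v = np.array([0.5, -1.0])
RRf = R(Rf)(np.concatenate([x, np.zeros(2)]), v)
assert np.allclose(RRf[2:], jacobian(f, x) @ v, atol=1e-3)  # pi_1 part = J_f(x) v
```

Note how setting the second component of the context to $0$ (the map $\iota_0$) kills the Hessian term, so the $B$-component of $\mathsf{R}[\mathsf{R}[f]]$ is exactly the forward derivative $J_f(x)v$.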
\subsection{Dagger fibrations} In this section, we review the structure needed to pass from a Cartesian differential category to a Cartesian reverse differential category: a \emph{dagger fibration} structure on its fibration of linear maps. A dagger fibration captures, from the fibrational point of view, the idea of each fibre being a dagger category. Since a dagger category involves a functor from a category to its opposite, a dagger fibration must involve a map from a fibration to its \emph{dual fibration}, the fibration obtained from the original by taking the opposite category in each fibre. The dual fibration can be defined directly as follows: \begin{definition}\cite[Defn. 1.10.11]{jacobs1999categorical} If $p: \E \to \B$ is a fibration, its \textbf{dual fibration} is the fibration $p^*: \E^* \to \B$ given by: \begin{enumerate}[{\em (i)}] \item The objects of $\mathbb{E}^\ast$ are the same as the objects of $\mathbb{E}$; that is, $Ob(\mathbb{E}^\ast) = Ob(\mathbb{E})$; \item A map from $E$ to $E'$ in $\E^*$ consists of an equivalence class of spans \[ \xymatrix{ & S \ar[dl]_{v} \ar[dr]^{c} & \\ E & & E'} \] where $v$ is vertical and $c$ Cartesian. Such a span is equivalent to $(S',v',c')$ if there is a vertical isomorphism $\alpha: S \to S'$ which makes the relevant triangles commute. \end{enumerate} \end{definition} The following are our two primary examples of the dual fibration construction.
\begin{example} \normalfont The dual fibration of the fibration of Proposition \ref{prop:context_fibration} has as objects pairs $(X,A)$, with a map from $(X,A)$ to $(Y,B)$ consisting of a coKleisli map $\llbracket f \rrbracket: \oc X \to Y$ and a map \[ g: \oc X \otimes B \to A \] \end{example} \begin{example} \normalfont The dual fibration of the fibration of Proposition \ref{prop:linear_fibration} has as objects pairs $(X,A)$, with a map from $(X,A)$ to $(Y,B)$ consisting of a map $f: X \to Y$ and a map \[ g: X \times B \to A \] \end{example} \begin{lemma}\cite[Lemma 1.10.12]{jacobs1999categorical} If $p: \E \to \B$ is a fibration, then: \begin{enumerate}[{\em (i)}] \item For each object $B$ of $\B$ there is an isomorphism of categories \[ [p^{-1}(B)]^{\op} \cong (p^*)^{-1}(B) \] which is natural in $B$; \item There is an isomorphism of fibrations $(\E^*)^* \cong \E$ over $\B$. \end{enumerate} \end{lemma} The following does not appear in any published accounts on the dual fibration, but is straightforward: \begin{lemma} If $(p: \E \to \B)$ and $(p': \E' \to \B')$ are fibrations and \[ \xymatrix{\E \ar[r]^{F} \ar[d]_p & \E' \ar[d]^{p'} \\ \B \ar[r]_G & \B'} \] is a morphism of fibrations, then there is a morphism of fibrations \[ \xymatrix{\E^* \ar[r]^{F^*} \ar[d]_{p^*} & (\E')^* \ar[d]^{(p')^*} \\ \B \ar[r]_G & \B'} \] which sends a span $(S, v, c): X \to X'$ to $(F(S),F(v),F(c)): FX \to FX'$. \end{lemma} We can now succinctly define what it means for a fibration to have dagger category structure in each fibre. \begin{definition}\cite[Definition 33]{cockett_et_al:LIPIcs:2020:11661} A \textbf{dagger fibration} consists of a fibration $p: \E \to \B$ together with a morphism of fibrations \[ (-)^{\dagger}: \E \to \E^* \] which is stationary on objects and ``is its own inverse''; that is, the composite \[ \E \to^{(-)^{\dagger}} \E^* \to^{((-)^{\dagger})^*} (\E^*)^* \cong \E \] is the identity functor.
\end{definition} \begin{example}\label{ex:daggerlin} \normalfont Let us consider what it would mean to have dagger structure on the linear fibration $\L[\X]$ (Definition \ref{defn:linear_fibration}) of a Cartesian differential category $\X$. In particular, this would mean that for each map $f: X \times A \to B$ which is linear in context $X$, we would need to give a map $f^{\dagger[X]}: X \times B \to A$ which is also linear in context $X$. The axioms for a dagger fibration are then equivalent to asking that each fibre $\mathcal{L}[X]$ (Lemma \ref{lemma:L_fibres}) be a dagger category with dagger $\dagger[X]: \mathcal{L}[X]^{op} \to \mathcal{L}[X]$ such that each substitution functor preserves the dagger. Explicitly, the operation $\dagger[-]$ satisfies the following: \begin{enumerate}[{\em (i)}] \item Contravariant functoriality: $(\langle \pi_0, f \rangle; g)^{\dagger[X]} = \langle \pi_0, g^{\dagger[X]} \rangle; f^{\dagger[X]}$ and $\pi_1^{\dagger[X]} = \pi_1$ \item Involutive: ${f^{\dagger[X]}}^{\dagger[X]} = f$ \item Change of Base: for every map $h: Y \to X$ in $\mathbb{X}$, its associated substitution functor $h^\ast: \mathcal{L}[X] \to \mathcal{L}[Y]$ preserves the dagger, that is, $(h^\ast(f))^{\dagger[Y]} = h^\ast\left( f^{\dagger[X]} \right)$.
\end{enumerate} \end{example} If $\X$ is a Cartesian \emph{reverse} differential category, then by Theorem \ref{thm:crdc_to_cdc}, it is also a Cartesian differential category, and in this case its associated linear fibration has dagger structure: \begin{theorem}\label{thm:crdc_dagger_fibration}\cite[Theorem 37]{cockett_et_al:LIPIcs:2020:11661} If $\X$ is a CRDC, then its associated fibration of linear maps $\L[\X]$ is a dagger fibration, where for a map $f: X \times A \to B$ which is linear in context $X$, \[ f^{\dagger[X]} := X \times B \to^{\iota_0 \times 1} X \times A \times B \to^{\mathsf{R}[f]} X \times A \to^{\pi_1} A \] \end{theorem} It will also be useful (see Lemma \ref{lemma:compactclosed_to_dagger}) to characterize when the fibration associated to any coalgebra modality has dagger fibration structure. \begin{example} \normalfont \label{ex:!daggerfibration} To give a dagger fibration structure on the fibration of Proposition \ref{prop:context_fibration} corresponds to associating every map $f: \oc X \otimes A \to B$ to a map $f^{\dagger[X]}: \oc X \otimes B \to A$, which we draw in the graphical calculus simply as: \[ f^{\dagger[X]} := \left( \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (17) at (9.75, 1.75) {$\oc X$}; \node [style=object] (19) at (10.75, 1.75) {$A$}; \node [style=component] (20) at (10.25, 0.75) {$f$}; \node [style=object] (21) at (10.25, -0.25) {$B$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (20) to (21); \draw [style=wire, in=165, out=-90] (17) to (20); \draw [style=wire, in=-90, out=15] (20) to (19); \end{pgfonlayer} \end{tikzpicture} \end{array} \right)^{\dagger[X]} = \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (17) at (9.75, 1.75) {$\oc X$}; \node [style=object] (19) at (10.75, 1.75) {$B$}; \node [style=component] (20) at (10.25, 0.75) {$f^\dagger$}; \node [style=object] (21) at (10.25, -0.25) {$A$}; \end{pgfonlayer}
\begin{pgfonlayer}{edgelayer} \draw [style=wire] (20) to (21); \draw [style=wire, in=165, out=-90] (17) to (20); \draw [style=wire, in=-90, out=15] (20) to (19); \end{pgfonlayer} \end{tikzpicture} \end{array} \] Once again, the axioms for a dagger fibration in this case are equivalent to asking that each of the fibres $\mathcal{L}_\oc[X]$ be a dagger category with dagger $\dagger[X]: \mathcal{L}_\oc[X]^{op} \to \mathcal{L}_\oc[X]$ such that each substitution functor preserves the dagger. Explicitly, the operation $\dagger[-]$ satisfies the following: \begin{enumerate}[{\em (i)}] \item Contravariant functoriality: \begin{align*} \left((\Delta_X \otimes 1_A);(1_{\oc X} \otimes f);g \right)^{\dagger[X]} = (\Delta_X \otimes 1_C);(1_{\oc X} \otimes g^{\dagger[X]}); f^{\dagger[X]} && {(e_X \otimes 1_A)^{\dagger[X]} = e_X \otimes 1_A} \end{align*} \begin{align*} \left( \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=duplicate] (1) at (9.5, 1) {$\Delta$}; \node [style=object] (2) at (9.5, 1.75) {$\oc X$}; \node [style=component] (3) at (10.5, 0) {$f$}; \node [style=object] (4) at (11, 1.75) {$A$}; \node [style=component] (5) at (9.75, -1.25) {$g$}; \node [style=object] (6) at (9.75, -2) {$C$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (2) to (1); \draw [style=wire, in=-90, out=15, looseness=0.75] (3) to (4); \draw [style=wire] (5) to (6); \draw [style=wire, in=30, out=-90] (3) to (5); \draw [style=wire, in=165, out=-150] (1) to (5); \draw [style=wire, in=165, out=-15, looseness=1.25] (1) to (3); \end{pgfonlayer} \end{tikzpicture} \end{array} \right)^{\dagger[X]} = \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=duplicate] (1) at (9.5, 1) {$\Delta$}; \node [style=object] (2) at (9.5, 1.75) {$\oc X$}; \node [style=component] (3) at (10.5, 0) {$g^\dagger$}; \node [style=object] (4) at (11, 1.75) {$C$}; \node [style=component] (5) at (9.75, -1.25) {$f^\dagger$}; \node [style=object]
(6) at (9.75, -2.25) {$A$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (2) to (1); \draw [style=wire, in=-90, out=15, looseness=0.75] (3) to (4); \draw [style=wire] (5) to (6); \draw [style=wire, in=30, out=-90] (3) to (5); \draw [style=wire, in=165, out=-150] (1) to (5); \draw [style=wire, in=165, out=-15, looseness=1.25] (1) to (3); \end{pgfonlayer} \end{tikzpicture} \end{array} && \left( \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (17) at (9.75, 1.75) {$\oc X$}; \node [style=object] (19) at (10.5, 1.75) {$A$}; \node [style=object] (21) at (10.5, -0.25) {$A$}; \node [style=component] (22) at (9.75, 0.5) {$e$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (19) to (21); \draw [style=wire] (17) to (22); \end{pgfonlayer} \end{tikzpicture} \end{array} \right)^{\dagger[X]} = \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (17) at (9.75, 1.75) {$\oc X$}; \node [style=object] (19) at (10.5, 1.75) {$A$}; \node [style=object] (21) at (10.5, -0.25) {$A$}; \node [style=component] (22) at (9.75, 0.5) {$e$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (19) to (21); \draw [style=wire] (17) to (22); \end{pgfonlayer} \end{tikzpicture} \end{array} \end{align*} \item Involution: ${f^{\dagger[X]}}^{\dagger[X]} =f$ \[ \left( \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (17) at (9.75, 1.75) {$\oc X$}; \node [style=object] (19) at (10.75, 1.75) {$B$}; \node [style=component] (20) at (10.25, 0.75) {$f^\dagger$}; \node [style=object] (21) at (10.25, -0.25) {$A$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (20) to (21); \draw [style=wire, in=165, out=-90] (17) to (20); \draw [style=wire, in=-90, out=15] (20) to (19); \end{pgfonlayer} \end{tikzpicture} \end{array} \right)^{\dagger[X]} = \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node 
[style=object] (17) at (9.75, 1.75) {$\oc X$}; \node [style=object] (19) at (10.75, 1.75) {$A$}; \node [style=component] (20) at (10.25, 0.75) {$f$}; \node [style=object] (21) at (10.25, -0.25) {$B$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (20) to (21); \draw [style=wire, in=165, out=-90] (17) to (20); \draw [style=wire, in=-90, out=15] (20) to (19); \end{pgfonlayer} \end{tikzpicture} \end{array} \] \item Change of Base: For every coKleisli map $\llbracket h \rrbracket: \oc Y \to X$, the substitution functor ${\llbracket h \rrbracket^\ast: \mathcal{L}_\oc[X] \to \mathcal{L}_\oc[Y]}$ preserves the dagger, that is, $ \left( (\delta \otimes 1); (\oc(h) \otimes 1); f \right)^{\dagger[Y]} = (\delta \otimes 1); (\oc(h) \otimes 1); f^{\dagger[X]}$ \begin{align*} \left( \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=component] (0) at (9, 0.25) {$\delta$}; \node [style=object] (1) at (10.5, 1.25) {$A$}; \node [style=component] (2) at (9.75, -1.75) {$f$}; \node [style=object] (3) at (9.75, -2.75) {$B$}; \node [style=function2] (4) at (9, -0.75) {$h$}; \node [style=object] (5) at (9, 1.25) {$\oc Y$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (2) to (3); \draw [style=wire, bend right, looseness=1.25] (4) to (2); \draw [style=wire] (0) to (4); \draw [style=wire, in=30, out=-90, looseness=0.75] (1) to (2); \draw [style=wire] (5) to (0); \end{pgfonlayer} \end{tikzpicture} \end{array} \right)^{\dagger[Y]} = \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=component] (0) at (9, 0.25) {$\delta$}; \node [style=object] (1) at (10.5, 1.25) {$B$}; \node [style=component] (2) at (9.75, -1.75) {$f^\dagger$}; \node [style=object] (3) at (9.75, -2.75) {$A$}; \node [style=function2] (4) at (9, -0.75) {$h$}; \node [style=object] (5) at (9, 1.25) {$\oc Y$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (2) to (3); \draw [style=wire, bend right, looseness=1.25]
(4) to (2); \draw [style=wire] (0) to (4); \draw [style=wire, in=30, out=-90, looseness=0.75] (1) to (2); \draw [style=wire] (5) to (0); \end{pgfonlayer} \end{tikzpicture} \end{array} \end{align*} \end{enumerate} \end{example} \subsection{Characterization theorem for CRDCs} With one extra ingredient, the structure described in Theorem \ref{thm:crdc_dagger_fibration} is enough to characterize CRDCs. \begin{definition} A CDC $\X$ has a \textbf{contextual linear dagger} if $\L[\X]$ has a dagger fibration structure for which each fibre $\mathcal{L}[X]$ has dagger biproducts. \end{definition} \begin{theorem}\label{thm:characterization_of_crdc}\cite[Theorem 42]{cockett_et_al:LIPIcs:2020:11661} A Cartesian reverse differential category is precisely the same as a Cartesian differential category $\X$ with a contextual linear dagger. \end{theorem} In particular, given such a CDC, its reverse combinator is given by taking the dagger of its (forward) derivative $\mathsf{D}$. For a map $f: A \to B$, the derivative $\mathsf{D}[f]: A \times A \to B$ is linear in its second variable by \textbf{[CD.6]}, so we define the reverse derivative as follows: \[ \mathsf{R}[f] := A \times B \to^{\mathsf{D}[f]^{\dagger[A]}} A \] \section{Monoidal Reverse Differential Categories}\label{sec:mrdc} This section introduces the main subject of the article: monoidal reverse differential categories. As noted in the introduction, we would like these structures to satisfy several requirements: \begin{enumerate} \item Just as every Cartesian reverse differential category is a Cartesian differential category, so should every monoidal reverse differential category be a monoidal differential category. \item Just as every monoidal differential storage category has Cartesian differential category structure on its coKleisli category, so should every monoidal reverse differential storage category have Cartesian reverse differential structure on its coKleisli category.
\item Examples of this structure should be interesting and varied. \end{enumerate} In the next section, we will see that requirements 1 and 2 will force monoidal reverse differential categories to be self-dual compact closed. \subsection{Monoidal reverse differential categories should be self-dual compact closed}\label{sec:mrdc_is_sdcc} First, let us recall the relevant definitions. \begin{definition}\label{SDCC} In a symmetric monoidal category, a \textbf{self-dual object} \cite[Definition 3.1]{heunen2019categories} is a triple $(A, \cup_A, \cap_A)$ consisting of an object $A$ and two maps $\cup_A: A \otimes A \to k$ and $\cap_A: k \to A \otimes A$, drawn in the graphical calculus as follows: \begin{align*} \begin{array}[c]{c} \cup_A \end{array} := \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (1) at (1.25, 1.25) {$A$}; \node [style=object] (7) at (2.5, 1.25) {$A$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire, bend right=90, looseness=2.00] (1) to (7); \end{pgfonlayer} \end{tikzpicture} \end{array} && \begin{array}[c]{c} \cap_A \end{array} := \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (1) at (1.25, 1.25) {$A$}; \node [style=object] (7) at (2.5, 1.25) {$A$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire, bend left=90, looseness=2.00] (1) to (7); \end{pgfonlayer} \end{tikzpicture} \end{array} \end{align*} such that the following diagram commutes (often called the \textbf{snake equations}): \begin{align*} \begin{array}[c]{c} \xymatrixcolsep{5pc}\xymatrix{ A \ar@{=}[dr]_-{} \ar[d]_-{\cap_A \otimes 1_A} \ar[r]^-{1_A \otimes \cap_A} & A \otimes A \otimes A \ar[d]^-{\cup_A \otimes 1_A} \\ A \otimes A \otimes A \ar[r]_-{1_A \otimes \cup_A} & A } \end{array} && \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=none] (0) at (1.75, 2.75) {}; \node [style=none] (1) at (0.5, 2.75) {}; \node [style=object] (3) at
(0.5, 4) {$A$}; \node [style=none] (4) at (3, 2.75) {}; \node [style=object] (5) at (3, 1.25) {$A$}; \node [style=none] (7) at (1.75, 2.75) {}; \node [style=object] (8) at (4, 2.75) {$=$}; \node [style=object] (9) at (5, 4) {$A$}; \node [style=object] (10) at (5, 1.25) {$A$}; \node [style=object] (11) at (6, 2.75) {$=$}; \node [style=none] (12) at (8.25, 2.75) {}; \node [style=none] (13) at (9.5, 2.75) {}; \node [style=object] (14) at (9.5, 4) {$A$}; \node [style=none] (15) at (7, 2.75) {}; \node [style=object] (16) at (7, 1.25) {$A$}; \node [style=none] (17) at (8.25, 2.75) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire, bend left=90, looseness=2.00] (0.center) to (4.center); \draw [style=wire] (4.center) to (5); \draw [style=wire, bend right=90, looseness=2.00] (1.center) to (7.center); \draw [style=wire] (3) to (1.center); \draw [style=wire] (9) to (10); \draw [style=wire, bend right=90, looseness=2.00] (12.center) to (15.center); \draw [style=wire] (15.center) to (16); \draw [style=wire, bend left=90, looseness=2.00] (13.center) to (17.center); \draw [style=wire] (14) to (13.center); \end{pgfonlayer} \end{tikzpicture} \end{array} \end{align*} A self-dual object $(A, \cup_A, \cap_A)$ is said to satisfy the \textbf{twist equations} if the cup and cap are symmetry invariant, that is, the following diagram commutes: \begin{align*} \begin{array}[c]{c} \xymatrixcolsep{5pc}\xymatrix{ A \otimes A \ar[dr]_-{\cup_A} \ar[r]^-{\sigma_{A,A}} & A \otimes A \ar[d]^-{\cup_A} \\ & k } \end{array} && \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=none] (1) at (1.5, 1) {}; \node [style=none] (7) at (2.75, 1) {}; \node [style=object] (8) at (1.5, 2.25) {$A$}; \node [style=object] (9) at (2.75, 2.25) {$A$}; \node [style=none] (10) at (-1.25, 1) {}; \node [style=none] (11) at (0, 1) {}; \node [style=object] (12) at (-1.25, 2.25) {$A$}; \node [style=object] (13) at (0, 2.25) {$A$}; \node [style=object] (14) at (0.75, 1.25) 
{$=$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire, bend right=90, looseness=2.00] (1.center) to (7.center); \draw [style=wire, in=-90, out=90] (1.center) to (9); \draw [style=wire, in=90, out=-90] (8) to (7.center); \draw [style=wire, bend right=90, looseness=2.00] (10.center) to (11.center); \draw [style=wire] (12) to (10.center); \draw [style=wire] (13) to (11.center); \end{pgfonlayer} \end{tikzpicture} \end{array} \end{align*} \begin{align*} \begin{array}[c]{c} \xymatrixcolsep{5pc}\xymatrix{ k \ar[dr]_-{\cap_A} \ar[r]^-{\cap_A} & A \otimes A \ar[d]^-{\sigma_{A,A}} \\ & A \otimes A } \end{array} && \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=none] (1) at (1.5, 2.25) {}; \node [style=none] (7) at (2.75, 2.25) {}; \node [style=object] (8) at (1.5, 1) {$A$}; \node [style=object] (9) at (2.75, 1) {$A$}; \node [style=none] (10) at (-1.25, 2.25) {}; \node [style=none] (11) at (0, 2.25) {}; \node [style=object] (12) at (-1.25, 1) {$A$}; \node [style=object] (13) at (0, 1) {$A$}; \node [style=object] (14) at (0.75, 2) {$=$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire, bend left=90, looseness=2.00] (1.center) to (7.center); \draw [style=wire, in=90, out=-90] (1.center) to (9); \draw [style=wire, in=-90, out=90] (8) to (7.center); \draw [style=wire, bend left=90, looseness=2.00] (10.center) to (11.center); \draw [style=wire] (12) to (10.center); \draw [style=wire] (13) to (11.center); \end{pgfonlayer} \end{tikzpicture} \end{array} \end{align*} A \textbf{self-dual compact closed category} \cite[Section 5]{selinger2010autonomous} is a symmetric monoidal category $\mathbb{X}$ equipped with a family of maps $\cup_A: A \otimes A \to k$ and $\cap_A: k \to A \otimes A$ such that for each object $A$, $(A, \cup_A, \cap_A)$ is a self-dual object which satisfies the twist equations.
\end{definition} Without loss of generality, in a self-dual compact closed category, we use the convention that for each pair of objects $A$ and $B$: \begin{align*} \begin{array}[c]{c} \cup_{A \otimes B} \end{array} : = \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (135) at (7, -4.5) {$A$}; \node [style=object] (136) at (8.25, -4.5) {$A$}; \node [style=object] (137) at (7.5, -4.5) {$B$}; \node [style=object] (138) at (8.75, -4.5) {$B$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire, bend right=90, looseness=2.00] (135) to (136); \draw [style=wire, bend right=90, looseness=2.00] (137) to (138); \end{pgfonlayer} \end{tikzpicture} \end{array} && \begin{array}[c]{c} \cap_{A \otimes B} \end{array} : = \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (135) at (7, -4.5) {$A$}; \node [style=object] (136) at (8.25, -4.5) {$A$}; \node [style=object] (137) at (7.5, -4.5) {$B$}; \node [style=object] (138) at (8.75, -4.5) {$B$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire, bend left=90, looseness=2.00] (135) to (136); \draw [style=wire, bend left=90, looseness=2.00] (137) to (138); \end{pgfonlayer} \end{tikzpicture} \end{array} \end{align*} and that for the unit $k$, $\cup_k = \cap_k = 1_k$ (which are just empty drawings in the graphical calculus). We also point out that the twist equations are not strictly necessary for the story of this paper, or for defining a compact closed category where $A = A^\ast$ as in \cite{selinger2010autonomous}. However, in this paper we have elected to include them in the definition as it is more practical, greatly simplifying our string diagram computations, and all of the examples of monoidal reverse differential categories that we have discovered so far satisfy this twist equation. 
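As a concrete sanity check (our illustration, not part of the text), the snake and twist equations can be verified numerically in the self-dual compact closed category of finite-dimensional real vector spaces with chosen bases, where $\cup_A$ is the standard inner product and $\cap_A$ inserts $\sum_i e_i \otimes e_i$; all names below are ours:

```python
import numpy as np

# Concrete model (illustration only): finite-dimensional real vector spaces
# with a chosen basis, where cup_A(e_i (x) e_j) = delta_ij and
# cap_A = sum_i e_i (x) e_i, so both are the identity matrix in coordinates.
n = 3
cup = np.eye(n)   # cup_A : A (x) A -> k
cap = np.eye(n)   # cap_A : k -> A (x) A

def snake_left(x):
    # (1_A (x) cap_A) ; (cup_A (x) 1_A) applied to a vector x in A
    return np.einsum('i,ij,jk->k', x, cup, cap)

def snake_right(x):
    # (cap_A (x) 1_A) ; (1_A (x) cup_A) applied to a vector x in A
    return np.einsum('ij,jk,k->i', cap, cup, x)

x = np.array([1.0, -2.0, 0.5])
assert np.allclose(snake_left(x), x)    # first snake equation
assert np.allclose(snake_right(x), x)   # second snake equation
# Twist equations: the cup and cap are invariant under the symmetry (swap).
assert np.allclose(cup, cup.T) and np.allclose(cap, cap.T)
```

In this model the twist equations hold automatically because the identity matrix is symmetric; a non-symmetric choice of bilinear form would give a self-dual object that fails them.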
Furthermore, in most of the literature on self-dual compact closed categories, the twist equations are taken as axioms, as in the ZX calculus \cite{Coecke08interactingquantum}, quantum computing (which considers the free self-dual compact closed PROP) \cite{7174913}, and hypergraph categories \cite{fong2019hypergraph}. We now discuss the induced dagger functor of a self-dual compact closed category: \begin{lemma} \cite[Remark 4.5]{selinger2010autonomous} \label{sliding} Let $\mathbb{X}$ be a self-dual compact closed category with cups $\cup$ and caps $\cap$. Then $\mathbb{X}$ is a dagger category whose dagger functor $(\_)^\ast: \mathbb{X}^{op} \to \mathbb{X}$ is defined on objects as $A^\ast = A$, and for a map $f: A \to B$, $f^\ast: B \to A$ \cite[Definition 3.9]{heunen2019categories} is defined as follows: \begin{align*} \begin{array}[c]{c} f^{\ast} := \xymatrixcolsep{5pc}\xymatrix{B \ar[r]^-{\cap_A \otimes 1_B} & A \otimes A \otimes B \ar[r]^-{1_A \otimes f \otimes 1_B} & A \otimes B \otimes B \ar[r]^-{1_A \otimes \cup_B} & A } \end{array} && \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (9) at (5, 3.25) {$B$}; \node [style=object] (10) at (5, 0.75) {$A$}; \node [style=object] (11) at (6, 2) {$=$}; \node [style=none] (12) at (8.25, 2.75) {}; \node [style=none] (13) at (9.5, 1.5) {}; \node [style=object] (14) at (9.5, 4) {$B$}; \node [style=none] (15) at (7, 2.75) {}; \node [style=object] (16) at (7, 0.5) {$A$}; \node [style=none] (17) at (8.25, 1.5) {}; \node [style=component] (18) at (5, 2) {$f^\ast$}; \node [style=component] (19) at (8.25, 2) {$f$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire, bend right=90, looseness=2.00] (12.center) to (15.center); \draw [style=wire] (15.center) to (16); \draw [style=wire, bend left=90, looseness=2.00] (13.center) to (17.center); \draw [style=wire] (14) to (13.center); \draw [style=wire] (9) to (18); \draw [style=wire] (18) to
(10); \draw [style=wire] (12.center) to (19); \draw [style=wire] (19) to (17.center); \end{pgfonlayer} \end{tikzpicture} \end{array} \end{align*} Explicitly, $1_A^\ast = 1_A$, $(f;g)^\ast = g^\ast;f^\ast$ and $f^{\ast\ast} = f$. Furthermore, for any map $f: A \to B$, the following diagrams commute \cite[Lemma 3.12 \& 3.26]{heunen2019categories}: \begin{align*} \begin{array}[c]{c} \xymatrixcolsep{5pc}\xymatrix{ k \ar[r]^-{\cap_A} \ar[d]_-{\cap_B} & A \otimes A \ar[d]^-{1_A \otimes f} \\ B \otimes B \ar[r]_-{f^\ast \otimes 1_B} & A \otimes B } \end{array} && \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (11) at (-3, 4.5) {$=$}; \node [style=none] (12) at (-4, 5.25) {}; \node [style=none] (15) at (-5.25, 5.25) {}; \node [style=object] (16) at (-5.25, 3.5) {$A$}; \node [style=component] (19) at (-4, 4.5) {$f$}; \node [style=object] (20) at (-4, 3.5) {$B$}; \node [style=none] (21) at (-2, 5.25) {}; \node [style=none] (22) at (-0.75, 5.25) {}; \node [style=object] (23) at (-0.75, 3.5) {$B$}; \node [style=component] (24) at (-2, 4.5) {$f^\ast$}; \node [style=object] (25) at (-2, 3.5) {$A$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire, bend right=90, looseness=2.00] (12.center) to (15.center); \draw [style=wire] (15.center) to (16); \draw [style=wire] (12.center) to (19); \draw [style=wire] (19) to (20); \draw [style=wire, bend left=90, looseness=2.00] (21.center) to (22.center); \draw [style=wire] (22.center) to (23); \draw [style=wire] (21.center) to (24); \draw [style=wire] (24) to (25); \end{pgfonlayer} \end{tikzpicture} \end{array} \\ \begin{array}[c]{c} \xymatrixcolsep{5pc}\xymatrix{ B \otimes A \ar[d]_-{f^\ast \otimes 1_A} \ar[r]^-{1_B \otimes f} & B \otimes B \ar[d]^-{\cup_B} \\ A \otimes A \ar[r]_-{\cup_A} & k } \end{array} && \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (0) at (2.75, 4.25) {$=$}; \node [style=none] (1) at (1.75, 3.5) {}; \node 
[style=none] (2) at (0.5, 3.5) {}; \node [style=object] (3) at (0.5, 5.25) {$A$}; \node [style=component] (4) at (1.75, 4.25) {$f$}; \node [style=object] (5) at (1.75, 5.25) {$B$}; \node [style=none] (6) at (3.75, 3.5) {}; \node [style=none] (7) at (5, 3.5) {}; \node [style=object] (8) at (5, 5.25) {$B$}; \node [style=component] (9) at (3.75, 4.25) {$f^\ast$}; \node [style=object] (10) at (3.75, 5.25) {$A$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire, bend left=90, looseness=2.00] (1.center) to (2.center); \draw [style=wire] (2.center) to (3); \draw [style=wire] (1.center) to (4); \draw [style=wire] (4) to (5); \draw [style=wire, bend right=90, looseness=2.00] (6.center) to (7.center); \draw [style=wire] (7.center) to (8); \draw [style=wire] (6.center) to (9); \draw [style=wire] (9) to (10); \end{pgfonlayer} \end{tikzpicture} \end{array} \end{align*} The above identities are sometimes referred to as the \textbf{sliding equations}. \end{lemma} We will now justify that monoidal reverse differential categories should be self-dual compact closed. We begin in the Seely case. By requirement 1. at the beginning of this section, if we start with an MRDC which satisfies the Seely requirements, then it should be a differential storage category $\mathbb{X}$, with coalgebra modality $(\oc, \delta, \varepsilon, \Delta, e)$ (with the Seely isomorphisms), and deriving transformation $\mathsf{d}: \oc A \otimes A \to \oc A$ (or equivalently codereliction $\eta_A: A \to \oc A$). Of course this means that the coKleisli category $\mathbb{X}_\oc$ is a Cartesian differential category. But then by requirement (2), $\mathbb{X}_\oc$ should be a Cartesian reverse differential category, which means its fibration of linear maps $\mathcal{L}[\mathbb{X}_\oc]$ is a dagger fibration. As explained in Example \ref{ex:daggerlin}, each of the fibres $\mathcal{L}[X]$ is then a $\dagger$-category. 
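As an aside (our illustration, assuming the finite-dimensional real vector space model with its standard self-dual structure), the dagger of Lemma \ref{sliding} is simply matrix transpose, and the sliding equations reduce to the familiar adjoint identity $\langle y, Fx\rangle = \langle F^{\mathsf{T}}y, x\rangle$:

```python
import numpy as np

# Illustration only (not from the text): with cup and cap on an n-dimensional
# object both represented by the n x n identity matrix, the induced dagger
#   f* := (cap_A (x) 1_B) ; (1_A (x) f (x) 1_B) ; (1_A (x) cup_B) : B -> A
# computes the matrix transpose.
rng = np.random.default_rng(0)
n, m = 3, 4
F = rng.standard_normal((m, n))      # a linear map f : A -> B
cap_A, cup_B = np.eye(n), np.eye(m)  # cap on A, cup on B (identity matrices)

def dagger(F, cap_dom, cup_cod):
    # Contract the cap into f's domain index and the cup into f's codomain index.
    return np.einsum('aj,bj,bc->ac', cap_dom, F, cup_cod)

F_star = dagger(F, cap_A, cup_B)
assert np.allclose(F_star, F.T)                      # the dagger is transpose
assert np.allclose(dagger(F_star, cup_B, cap_A), F)  # f** = f

# Sliding equation: (1_B (x) f);cup_B = (f* (x) 1_A);cup_A.
x, y = rng.standard_normal(n), rng.standard_normal(m)
assert np.isclose(y @ (F @ x), (F_star @ y) @ x)
```

The contravariance $(f;g)^\ast = g^\ast;f^\ast$ is then just $(GF)^{\mathsf{T}} = F^{\mathsf{T}}G^{\mathsf{T}}$ in this model.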
By Corollary \ref{cor:seely-lin}, we also have the isomorphisms $\mathcal{L}[0] \cong \mathsf{LIN}[\mathbb{X}_\oc] \cong \mathbb{X}$, which implies that the base category $\mathbb{X}$ is itself a $\dagger$-category. To distinguish between the two dagger structures, we will use $\dagger$ for the daggers in $\mathbb{X}_\oc$, and $\ast$ for the daggers in $\mathbb{X}$. If we assume that $\dagger$ is monoidal, then using the comonad counit $\varepsilon$ and the codereliction $\eta$, we can build cups and caps making every object of $\mathbb{X}$ a self-dual object. Unfortunately, since the twist equations are non-canonical, they do not appear to come for free from this approach, and so $\mathbb{X}$ is not necessarily a self-dual compact closed category as defined in this paper. Of course, this is somewhat to be expected, since we included the twist equations in the definition for practical reasons. Nevertheless, this still justifies the link between reverse differentiation and self-duality. Using Theorem \ref{thm:fibration_equivalence} to push the dagger through the fibration equivalence $\mathcal{L}[\mathbb{X}_\oc] \cong \mathcal{L}_\oc[\mathbb{X}]$, we also have that $\mathcal{L}_\oc[\mathbb{X}]$ is a dagger fibration as in Example \ref{ex:!daggerfibration}. Consider then the map $\varepsilon_A: \oc A \to A$, interpreted as a map in the fibre $\mathcal{L}_\oc[A](k,A)$. Taking its dagger, we obtain a map $\varepsilon_A^{\dagger[A]}: \oc A \otimes A \to k$.
Precomposing this map with the codereliction, we obtain our cup $\cup_A: A \otimes A \to k$: \[ \cup_A := \xymatrixcolsep{5pc}\xymatrix{ A \otimes A \ar[r]^-{\eta_A \otimes 1_A} & \oc A \otimes A \ar[r]^-{\varepsilon_A^{\dagger[A]}} & k } \] To build the cap $\cap_A: k \to A \otimes A$, we use the dagger $\ast$ on the base category $\mathbb{X}$: \[ \cap_A := \xymatrixcolsep{5pc}\xymatrix{ k \ar[r]^-{{\varepsilon_A^{\dagger[A]}}^\ast} & \oc A \otimes A \ar[r]^-{\eta^\ast_A \otimes 1_A} & A \otimes A } \] \begin{lemma} Let $\mathbb{X}$ be a differential storage category. Suppose $\mathbb{X}_\oc$ is a Cartesian reverse differential category, and therefore has a contextual linear dagger $\dagger$. If $\dagger$ is (strict) monoidal, then every object of $\mathbb{X}$ is a self-dual object, where the cups and caps are defined as above. \end{lemma} \begin{proof} We must show that the cups and caps satisfy the snake equations. To do so, we will need some simple identities. First observe that for a map $f: \oc X \otimes A \to B$ and a map $g: B \to D$, since the dagger is contravariant, it is straightforward to show that: \[(f;g)^{\dagger[X]} = (1_{\oc X} \otimes g^\ast); f^{\dagger[X]}\] Next, using the assumption that $\dagger$ is monoidal, we have that for any map $f: \oc X \otimes A \to B$ and any object $C$: \[ (f \otimes 1_C)^{\dagger[X]} = f^{\dagger[X]} \otimes 1_C\] The last required identity comes from the fact that in any Cartesian differential category with a contextual linear dagger, the dagger preserves linearity in context.
Translating this in terms of the fibration $\mathcal{L}_\oc[\mathbb{X}]$, by Corollary \ref{cor:seely-lin} we have that for any map $f: \oc X \otimes A \to B$: \[ \left( (\varepsilon_X; \eta_X) \otimes 1_A \right) ;f^{\dagger[X]} = \left( \left( (\varepsilon_X; \eta_X) \otimes 1_A \right); f \right)^{\dagger[X]} \] Now we compute one of the snake equations: \begin{align*} (1_A \otimes \cap_A);(\cup_A \otimes 1_A) &=~ (1_A \otimes {\varepsilon_A^{\dagger[A]}}^\ast); (1_A \otimes \eta^\ast_A \otimes 1_A) ;(\eta_A \otimes 1_A \otimes 1_A); (\varepsilon^{\dagger[A]}_A \otimes 1_A) \\ &=~ \eta_A; (1_{\oc A} \otimes {\varepsilon_A^{\dagger[A]}}^\ast); (1_A \otimes \eta^\ast_A \otimes 1_A); (\varepsilon^{\dagger[A]}_A \otimes 1_A) \\ &=~ \eta_A; (1_{\oc A} \otimes {\varepsilon_A^{\dagger[A]}}^\ast); (1_A \otimes (\varepsilon_A \otimes 1_A)^\ast) ; (\eta_A \otimes 1_A)^{\dagger[A]} \\ &=~ \eta_A; (1_{\oc A} \otimes {\varepsilon_A^{\dagger[A]}}^\ast); \left( (\varepsilon_A \otimes 1_A);(\eta_A \otimes 1_A) \right)^{\dagger[A]} \\ &=~ \eta_A; \left( \left( (\varepsilon_A; \eta_A) \otimes 1_A \right) ; \varepsilon_A^{\dagger[A]} \right)^{\dagger[A]} \\ &=~ \eta_A; \left( \left( \varepsilon_A; \eta_A; \varepsilon_A \right)^{\dagger[A]} \right)^{\dagger[A]} \\ &=~ \eta_A ; {\varepsilon_A^{\dagger[A]}}^{\dagger[A]} \\ &=~ \eta_A; \varepsilon_A \\ &=~ 1_A \end{align*} The proof of the other snake equation is similar. So we conclude that $(A, \cup_A, \cap_A)$ is a self-dual object, as desired. \end{proof} Even if the coalgebra modality does not have the Seely isomorphisms, it is still possible to show that every object in each of the fibres $\mathcal{L}_\oc[X]$ is self-dual. However, the computations in the proof are more complicated and not necessarily more enlightening, so we will omit the proof and simply provide the construction.
In the fibre $\mathcal{L}_\oc[X]$, the cup is a map $\cup^X_A: \oc X \otimes A \otimes A \to k$ defined as follows: \[ \cup^X_A := \xymatrixcolsep{5pc}\xymatrix{ \oc X \otimes A \otimes A \ar[r]^-{\oc(0) \otimes 1_A \otimes 1_A} & \oc A \otimes A \otimes A \ar[r]^-{\mathsf{d}_A \otimes 1_A} & \oc A \otimes A \ar[r]^-{\varepsilon_A^{\dagger[A]}} & k } \] while the cap $\cap^X_A: \oc X \to A \otimes A$ is defined as the dagger in the fibre of the cup: $\cap^X_A := {\cup^X_A}^{\dagger[X]}$. \subsection{Definition and examples} Given the discussion in the previous section, monoidal reverse differential categories should at least be self-dual compact closed. We now give the full definition. \begin{definition} \label{MRDC} A \textbf{monoidal reverse differential category} (MRDC) is an additive symmetric monoidal category $\mathbb{X}$, such that $\mathbb{X}$ is a self-dual compact closed category, equipped with a coalgebra modality $(\oc, \delta, \varepsilon, \Delta, e)$ and a \textbf{reverse deriving transformation} which is a family of maps $\mathsf{r}_A: \oc A \otimes \oc A \to A$ drawn in the graphical calculus as: \[\mathsf{r}:= \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (0) at (1.75, 2.75) {$\oc A$}; \node [style=object] (1) at (1.25, 1.25) {$A$}; \node [style=integral] (2) at (1.25, 2) {{\bf \aquarius\!\aquarius\!\aquarius}}; \node [style=object] (3) at (0.75, 2.75) {$\oc A$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire, bend right] (2) to (0); \draw [style=wire] (1) to (2); \draw [style=wire, bend left] (2) to (3); \end{pgfonlayer} \end{tikzpicture} \end{array}\] such that the following axioms hold: \begin{description} \item[{\bf [r.N]}] Reverse Naturality Rule: \begin{align*} \begin{array}[c]{c} \xymatrixcolsep{5pc}\xymatrix{\oc A \otimes \oc B \ar[d]_-{1_{\oc A} \otimes \oc(f)^\ast} \ar[r]^-{\oc(f) \otimes 1_{\oc B}} & \oc B \otimes \oc B \ar[r]^-{\mathsf{r}_B} & B \ar[d]^-{f^\ast} \\ \oc
A \otimes \oc A \ar[rr]_-{\mathsf{r}_A} && A } \end{array} && \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=integral] (28) at (0, 8) {{\bf \aquarius\!\aquarius\!\aquarius}}; \node [style=function] (29) at (-0.5, 9) {$f$}; \node [style=component] (30) at (0, 7.25) {$f^\ast$}; \node [style=object] (31) at (0, 6.25) {$A$}; \node [style=object] (32) at (-0.5, 10) {$\oc A$}; \node [style=object] (33) at (0.5, 10) {$\oc B$}; \node [style=object] (34) at (1.5, 8) {$=$}; \node [style=integral] (35) at (3, 8) {{\bf \aquarius\!\aquarius\!\aquarius}}; \node [style=component] (36) at (3.5, 9) {$\oc(f)^\ast$}; \node [style=object] (38) at (3, 6.25) {$A$}; \node [style=object] (39) at (3.5, 10) {$\oc B$}; \node [style=object] (40) at (2.5, 10) {$\oc A$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (32) to (29); \draw [style=wire, in=150, out=-90] (29) to (28); \draw [style=wire, in=-90, out=45] (28) to (33); \draw [style=wire] (28) to (30); \draw [style=wire] (30) to (31); \draw [style=wire] (39) to (36); \draw [style=wire, in=30, out=-90] (36) to (35); \draw [style=wire, in=-90, out=135] (35) to (40); \draw [style=wire] (35) to (38); \end{pgfonlayer} \end{tikzpicture} \end{array} \end{align*} \item[{\bf [r.1]}] Reverse Constant Rule: \begin{align*} \begin{array}[c]{c} \xymatrixcolsep{5pc}\xymatrix{\oc A \ar[dr]_-{0} \ar[r]^-{1_{\oc A} \otimes e^\ast_A} & \oc A \otimes \oc A \ar[d]^-{\mathsf{r}_A} \\ & A } \end{array}&& \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (41) at (5.25, 0) {$A$}; \node [style=component] (42) at (5.75, 2) {$e^\ast$}; \node [style=differential] (43) at (5.25, 1) {{\bf \aquarius\!\aquarius\!\aquarius}}; \node [style=object] (44) at (4.75, 2.5) {$\oc A$}; \node [style=port] (45) at (6.25, 1) {$=$}; \node [style=port] (46) at (7, 1) {$0$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire, in=-90, out=30] (43) to (42); \draw [style=wire, 
in=-90, out=135] (43) to (44); \draw [style=wire] (43) to (41); \end{pgfonlayer} \end{tikzpicture} \end{array} \end{align*} \item[{\bf [r.2]}] Reverse Leibniz Rule (or Reverse Product Rule): \begin{align*} \begin{array}[c]{c} \xymatrixcolsep{11pc}\xymatrix{\oc A \otimes \oc A \otimes \oc A \ar[d]_-{\Delta_A \otimes 1_{\oc A} \otimes 1_{\oc A}} \ar[rr]^-{1_{\oc A} \otimes \Delta^\ast_A} && \oc A \otimes \oc A \ar[d]^-{\mathsf{r}_A} \\ \oc A \otimes \oc A \otimes \oc A \otimes \oc A \ar[r]_-{(1_{\oc A} \otimes \cup_{\oc A} \otimes 1_{\oc A}) + (1_{\oc A} \otimes \sigma_{\oc A, \oc A} \otimes 1_{\oc A});(1_{\oc A} \otimes 1_{\oc A} \otimes \cup_{\oc A})} & \oc A \otimes \oc A \ar[r]_-{\mathsf{r}_A} & A } \end{array} \end{align*} \begin{align*} \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (41) at (-3.5, 2.75) {$A$}; \node [style=differential] (43) at (-3.5, 3.75) {{\bf \aquarius\!\aquarius\!\aquarius}}; \node [style=object] (44) at (-4.75, 6) {$\oc A$}; \node [style=object] (67) at (-3.5, 6) {$\oc A$}; \node [style=duplicate] (68) at (-2.75, 4.75) {$\Delta^\ast$}; \node [style=object] (69) at (-2, 6) {$\oc A$}; \node [style=object] (70) at (-1.75, 4.25) {$=$}; \node [style=object] (71) at (-0.5, 6) {$\oc A$}; \node [style=differential] (72) at (0, 3.25) {{\bf \aquarius\!\aquarius\!\aquarius}}; \node [style=object] (73) at (1.75, 6) {$\oc A$}; \node [style=duplicate] (74) at (-0.5, 5) {$\Delta$}; \node [style=object] (75) at (0, 2.5) {$A$}; \node [style=object] (76) at (0.75, 6) {$\oc A$}; \node [style=port] (83) at (2.25, 4.25) {$+$}; \node [style=none] (84) at (0.75, 4.5) {}; \node [style=none] (85) at (0, 4.5) {}; \node [style=object] (86) at (3.25, 6) {$\oc A$}; \node [style=differential] (87) at (3.75, 3.25) {{\bf \aquarius\!\aquarius\!\aquarius}}; \node [style=object] (88) at (4.5, 6) {$\oc A$}; \node [style=duplicate] (89) at (3.25, 5) {$\Delta$}; \node [style=object] (90) at (3.75, 2.5) {$A$}; \node [style=object] (91) at (5.5, 6) {$\oc A$}; \node [style=none] (92) at (5.5, 4.75) {};
\node [style=none] (93) at (3.75, 4.5) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire, in=-90, out=135] (43) to (44); \draw [style=wire] (43) to (41); \draw [style=wire, bend left] (68) to (67); \draw [style=wire, bend right] (68) to (69); \draw [style=wire, in=45, out=-90, looseness=1.25] (68) to (43); \draw [style=wire, in=-90, out=30, looseness=1.25] (72) to (73); \draw [style=wire, in=150, out=-135, looseness=1.50] (74) to (72); \draw [style=wire] (71) to (74); \draw [style=wire] (72) to (75); \draw [style=wire, bend left=90, looseness=2.00] (84.center) to (85.center); \draw [style=wire] (76) to (84.center); \draw [style=wire, in=90, out=-30] (74) to (85.center); \draw [style=wire, in=-90, out=45, looseness=1.25] (87) to (88); \draw [style=wire, in=150, out=-135, looseness=1.50] (89) to (87); \draw [style=wire] (86) to (89); \draw [style=wire] (87) to (90); \draw [style=wire, bend left=90, looseness=2.00] (92.center) to (93.center); \draw [style=wire] (91) to (92.center); \draw [style=wire, in=90, out=-30] (89) to (93.center); \end{pgfonlayer} \end{tikzpicture} \end{array} \end{align*} \item[{\bf [r.3]}] Reverse Linear Rule: \begin{align*} \begin{array}[c]{c} \xymatrixcolsep{5pc}\xymatrix{\oc A \otimes A \ar[r]^-{1_{\oc A} \otimes \varepsilon^\ast_A} \ar[dr]_-{e_A \otimes 1_A} & \oc A \otimes \oc A \ar[d]^-{\mathsf{r}_A} \\ & A } \end{array} && \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (94) at (-11.25, 11.5) {$\oc A$}; \node [style=object] (95) at (-10.5, 11.5) {$A$}; \node [style=object] (96) at (-10.5, 8.75) {$A$}; \node [style=component] (97) at (-11.25, 10.25) {$e$}; \node [style=port] (103) at (-12, 10.25) {$=$}; \node [style=object] (104) at (-13.25, 8.75) {$A$}; \node [style=component] (105) at (-12.75, 10.75) {$\varepsilon^\ast$}; \node [style=differential] (106) at (-13.25, 9.75) {{\bf \aquarius\!\aquarius\!\aquarius}}; \node [style=object] (107) at (-13.75, 11.5) {$\oc A$}; \node 
[style=object] (110) at (-12.75, 11.5) {$\oc A$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (94) to (97); \draw [style=wire] (95) to (96); \draw [style=wire, in=-90, out=30] (106) to (105); \draw [style=wire, in=-90, out=135] (106) to (107); \draw [style=wire] (106) to (104); \draw [style=wire] (110) to (105); \end{pgfonlayer} \end{tikzpicture} \end{array} \end{align*} \item[{\bf [r.4]}] Reverse Chain Rule: \begin{align*} \begin{array}[c]{c} \xymatrixcolsep{5pc}\xymatrix{\oc A \otimes \oc \oc A \ar[d]_-{\Delta_A \otimes 1_{\oc \oc A}} \ar[rrr]^-{1_{\oc A} \otimes \delta^\ast_A} &&& \oc A \otimes \oc A \ar[d]^-{\mathsf{r}_A} \\ \oc A \otimes \oc A \otimes \oc \oc A \ar[r]_-{1_{\oc A} \otimes \delta_A \otimes 1_{\oc \oc A}} & \oc A \otimes \oc \oc A \otimes \oc \oc A \ar[r]_-{1_{\oc A} \otimes \mathsf{r}_{\oc A}} & \oc A \otimes \oc A \ar[r]_-{\mathsf{r}_A} & A } \end{array} \end{align*} \begin{align*} \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=port] (103) at (-12, 10.25) {$=$}; \node [style=object] (104) at (-13.25, 8.75) {$A$}; \node [style=component] (105) at (-12.75, 10.75) {$\delta^\ast$}; \node [style=differential] (106) at (-13.25, 9.75) {{\bf \aquarius\!\aquarius\!\aquarius}}; \node [style=object] (107) at (-13.75, 11.5) {$\oc A$}; \node [style=object] (110) at (-12.75, 11.5) {$\oc \oc A$}; \node [style=duplicate] (117) at (-10.75, 11.75) {$\Delta$}; \node [style=object] (118) at (-10.75, 12.5) {$\oc A$}; \node [style=differential] (121) at (-10.25, 9) {{\bf \aquarius\!\aquarius\!\aquarius}}; \node [style=object] (122) at (-10.25, 8.25) {$A$}; \node [style=component] (124) at (-10.25, 10.75) {$\delta$}; \node [style=differential] (125) at (-9.75, 10) {{\bf \aquarius\!\aquarius\!\aquarius}}; \node [style=object] (126) at (-9.25, 12.5) {$\oc \oc A$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire, in=-90, out=30] (106) to (105); \draw [style=wire, in=-90, out=135] (106) to (107); 
\draw [style=wire] (106) to (104); \draw [style=wire] (110) to (105); \draw [style=wire] (118) to (117); \draw [style=wire] (121) to (122); \draw [style=wire, in=-90, out=150] (125) to (124); \draw [style=wire, in=-90, out=45] (125) to (126); \draw [style=wire, in=90, out=-30, looseness=1.25] (117) to (124); \draw [style=wire, in=150, out=-150] (117) to (121); \draw [style=wire, in=30, out=-90] (125) to (121); \end{pgfonlayer} \end{tikzpicture} \end{array} \end{align*} \item[{\bf [r.5]}] Reverse Interchange Rule: \begin{align*} \begin{array}[c]{c} \xymatrixcolsep{4pc}\xymatrix{\oc A \otimes \oc A \ar[d]_-{1_{\oc A} \otimes \cap_{\oc A} \otimes 1_{\oc A} }\ar[r]^-{1_{\oc A} \otimes \cap_{\oc A} \otimes 1_{\oc A} } & \oc A \otimes \oc A \otimes \oc A \otimes \oc A \ar[r]^-{\mathsf{r}_A \otimes \mathsf{r}_A} & A \otimes A \ar[d]^-{\sigma_{A,A}} \\ \oc A \otimes \oc A \otimes \oc A \otimes \oc A \ar[rr]_-{\mathsf{r}_A \otimes \mathsf{r}_A} && A \otimes A } \end{array} \end{align*} \begin{align*} \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=port] (103) at (3.25, 3.25) {$=$}; \node [style=object] (104) at (0.25, 2) {$A$}; \node [style=differential] (106) at (0.25, 3) {{\bf \aquarius\!\aquarius\!\aquarius}}; \node [style=object] (107) at (-0.25, 5) {$\oc A$}; \node [style=object] (127) at (2, 2) {$A$}; \node [style=differential] (128) at (2, 3) {{\bf \aquarius\!\aquarius\!\aquarius}}; \node [style=object] (129) at (2.5, 5) {$\oc A$}; \node [style=none] (130) at (1.5, 3.75) {}; \node [style=none] (131) at (0.75, 3.75) {}; \node [style=object] (132) at (4.75, 1.75) {$A$}; \node [style=differential] (133) at (4.75, 3.5) {{\bf \aquarius\!\aquarius\!\aquarius}}; \node [style=object] (134) at (4.25, 5.5) {$\oc A$}; \node [style=object] (135) at (6.5, 1.75) {$A$}; \node [style=differential] (136) at (6.5, 3.5) {{\bf \aquarius\!\aquarius\!\aquarius}}; \node [style=object] (137) at (7, 5.5) {$\oc A$}; \node [style=none] (138) at (6, 4.25) {}; 
\node [style=none] (139) at (5.25, 4.25) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire, in=-90, out=135] (106) to (107); \draw [style=wire] (106) to (104); \draw [style=wire, in=-90, out=45] (128) to (129); \draw [style=wire] (128) to (127); \draw [style=wire, bend right=90, looseness=2.00] (130.center) to (131.center); \draw [style=wire, in=45, out=-90] (131.center) to (106); \draw [style=wire, in=150, out=-90] (130.center) to (128); \draw [style=wire, in=-90, out=135] (133) to (134); \draw [style=wire, in=-90, out=45] (136) to (137); \draw [style=wire, bend right=90, looseness=2.00] (138.center) to (139.center); \draw [style=wire, in=45, out=-90] (139.center) to (133); \draw [style=wire, in=150, out=-90] (138.center) to (136); \draw [style=wire, in=90, out=-90, looseness=1.25] (136) to (132); \draw [style=wire, in=90, out=-90, looseness=1.25] (133) to (135); \end{pgfonlayer} \end{tikzpicture} \end{array} \end{align*} \end{description} \end{definition} Before we get to examples, it will be useful to describe the relationship between monoidal differential categories and monoidal reverse differential categories. In particular, it shows that our definition satisfies requirement 1 described in the introduction to this section. \begin{theorem}\label{thm:rdc_to_dc} A reverse differential category is precisely a differential category which is also self-dual compact closed. 
Explicitly: \begin{enumerate}[{\em (i)}] \item If $\mathbb{X}$ is a reverse differential category, then $\mathbb{X}$ is a differential category where the deriving transformation $\mathsf{d}_A: \oc A \otimes A \to \oc A$ is defined as: \begin{align*} \begin{array}[c]{c} \mathsf{d}_A := \xymatrixcolsep{3.5pc}\xymatrix{\oc A \otimes A \ar[r]^-{1_{\oc A} \otimes \cap_{\oc A} \otimes 1_A} & \oc A \otimes \oc A \otimes \oc A \otimes A \ar[r]^-{\mathsf{r}_A \otimes \sigma_{\oc A, A}} & A \otimes A \otimes \oc A \ar[r]^-{\cup_A \otimes 1_{\oc A}} & \oc A } \end{array} \end{align*} \begin{align*} \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (0) at (1.75, 2.75) {$A$}; \node [style=object] (1) at (1.25, 1.25) {$\oc A$}; \node [style=integral] (2) at (1.25, 2) {{\bf =\!=\!=\!=}}; \node [style=object] (3) at (0.75, 2.75) {$\oc A$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire, bend right] (2) to (0); \draw [style=wire] (1) to (2); \draw [style=wire, bend left] (2) to (3); \end{pgfonlayer} \end{tikzpicture} \end{array} = \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=none] (0) at (1.5, 3.25) {}; \node [style=none] (1) at (1, 2.25) {}; \node [style=integral] (2) at (1, 2.5) {{\bf \aquarius\!\aquarius\!\aquarius}}; \node [style=object] (3) at (0.5, 4.25) {$\oc A$}; \node [style=none] (4) at (2, 3.25) {}; \node [style=port] (5) at (2, 0.5) {$\oc A$}; \node [style=object] (6) at (2.5, 4.25) {$A$}; \node [style=none] (7) at (2.5, 2.25) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire, in=-90, out=30, looseness=1.50] (2) to (0.center); \draw [style=wire] (1.center) to (2); \draw [style=wire, in=-90, out=150] (2) to (3); \draw [style=wire, bend left=90, looseness=2.00] (0.center) to (4.center); \draw [style=wire] (4.center) to (5); \draw [style=wire, bend right=90, looseness=2.00] (1.center) to (7.center); \draw [style=wire] (6) to (7.center); \end{pgfonlayer} 
\end{tikzpicture} \end{array} \end{align*} \item If $\mathbb{X}$ is a differential category which is also a self-dual compact closed category, then $\mathbb{X}$ is a reverse differential category where the reverse deriving transformation is defined as: \begin{align*} \begin{array}[c]{c} \mathsf{r}_A := \xymatrixcolsep{3.5pc}\xymatrix{\oc A \otimes \oc A \ar[r]^-{1_{\oc A} \otimes \cap_{A} \otimes 1_{\oc A}} & \oc A \otimes A \otimes A \otimes \oc A \ar[r]^-{\mathsf{d}_A \otimes \sigma_{A, \oc A}} & \oc A \otimes \oc A \otimes A \ar[r]^-{\cup_{\oc A} \otimes 1_A} & A } \end{array} \end{align*} \begin{align*} \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (0) at (1.75, 2.75) {$\oc A$}; \node [style=object] (1) at (1.25, 1.25) {$A$}; \node [style=integral] (2) at (1.25, 2) {{\bf \aquarius\!\aquarius\!\aquarius}}; \node [style=object] (3) at (0.75, 2.75) {$\oc A$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire, bend right] (2) to (0); \draw [style=wire] (1) to (2); \draw [style=wire, bend left] (2) to (3); \end{pgfonlayer} \end{tikzpicture} \end{array} = \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=none] (0) at (1.5, 3.25) {}; \node [style=none] (1) at (1, 2.25) {}; \node [style=integral] (2) at (1, 2.5) {{\bf =\!=\!=\!=}}; \node [style=object] (3) at (0.5, 4.25) {$\oc A$}; \node [style=none] (4) at (2, 3.25) {}; \node [style=port] (5) at (2, 0.5) {$A$}; \node [style=object] (6) at (2.5, 4.25) {$\oc A$}; \node [style=none] (7) at (2.5, 2.25) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire, in=-90, out=30, looseness=1.50] (2) to (0.center); \draw [style=wire] (1.center) to (2); \draw [style=wire, in=-90, out=150] (2) to (3); \draw [style=wire, bend left=90, looseness=2.00] (0.center) to (4.center); \draw [style=wire] (4.center) to (5); \draw [style=wire, bend right=90, looseness=2.00] (1.center) to (7.center); \draw [style=wire] (6) to
(7.center); \end{pgfonlayer} \end{tikzpicture} \end{array} \end{align*} \end{enumerate} Furthermore, these constructions are inverses of each other. \end{theorem} \begin{remark}\label{twistremark} \normalfont As noted above, the story of monoidal reverse differential categories could, in theory, be told without assuming the twist equations. If we drop that axiom, then the above constructions include an extra twist (depending on the convention of whether $A^\ast$ is placed on the left or the right in the cup/cap): \begin{align*} \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (0) at (1.75, 2.75) {$A$}; \node [style=object] (1) at (1.25, 1.25) {$\oc A$}; \node [style=integral] (2) at (1.25, 2) {{\bf =\!=\!=\!=}}; \node [style=object] (3) at (0.75, 2.75) {$\oc A$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire, bend right] (2) to (0); \draw [style=wire] (1) to (2); \draw [style=wire, bend left] (2) to (3); \end{pgfonlayer} \end{tikzpicture} \end{array} = \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=none] (1) at (5.75, 0.75) {}; \node [style=integral] (2) at (5.75, 1) {{\bf \aquarius\!\aquarius\!\aquarius}}; \node [style=object] (3) at (5.25, 3.25) {$\oc A$}; \node [style=object] (5) at (7, -1.5) {$\oc A$}; \node [style=object] (6) at (8, 3.25) {$A$}; \node [style=none] (7) at (8, 0.75) {}; \node [style=none] (8) at (6.25, 2.5) {}; \node [style=none] (9) at (7, 2.5) {}; \node [style=none] (10) at (6.25, 1.5) {}; \node [style=none] (11) at (7, 1.5) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (1.center) to (2); \draw [style=wire, in=-90, out=150, looseness=0.75] (2) to (3); \draw [style=wire, bend right=90, looseness=2.00] (1.center) to (7.center); \draw [style=wire] (6) to (7.center); \draw [style=wire, bend left=90, looseness=2.00] (8.center) to (9.center); \draw [style=wire, in=90, out=-90] (8.center) to (11.center); \draw [style=wire, in=-90, out=90]
(10.center) to (9.center); \draw [style=wire, in=-105, out=30] (2) to (10.center); \draw [style=wire] (11.center) to (5); \end{pgfonlayer} \end{tikzpicture} \end{array} && \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (0) at (1.75, 2.75) {$\oc A$}; \node [style=object] (1) at (1.25, 1.25) {$A$}; \node [style=integral] (2) at (1.25, 2) {{\bf \aquarius\!\aquarius\!\aquarius}}; \node [style=object] (3) at (0.75, 2.75) {$\oc A$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire, bend right] (2) to (0); \draw [style=wire] (1) to (2); \draw [style=wire, bend left] (2) to (3); \end{pgfonlayer} \end{tikzpicture} \end{array} = \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=none] (1) at (5.75, 0.75) {}; \node [style=integral] (2) at (5.75, 1) {{\bf =\!=\!=\!=}}; \node [style=object] (3) at (5.25, 3.25) {$\oc A$}; \node [style=object] (5) at (7, -1.5) {$A$}; \node [style=object] (6) at (8, 3.25) {$\oc A$}; \node [style=none] (7) at (8, 0.75) {}; \node [style=none] (8) at (6.25, 2.5) {}; \node [style=none] (9) at (7, 2.5) {}; \node [style=none] (10) at (6.25, 1.5) {}; \node [style=none] (11) at (7, 1.5) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (1.center) to (2); \draw [style=wire, in=-90, out=150, looseness=0.75] (2) to (3); \draw [style=wire, bend right=90, looseness=2.00] (1.center) to (7.center); \draw [style=wire] (6) to (7.center); \draw [style=wire, bend left=90, looseness=2.00] (8.center) to (9.center); \draw [style=wire, in=90, out=-90] (8.center) to (11.center); \draw [style=wire, in=-90, out=90] (10.center) to (9.center); \draw [style=wire, in=-105, out=30] (2) to (10.center); \draw [style=wire] (11.center) to (5); \end{pgfonlayer} \end{tikzpicture} \end{array} \end{align*} However, as explained above, we have elected to assume the twist equation to simplify our string diagrams. 
\end{remark} \begin{proof} The axioms of a reverse deriving transformation $\mathsf{r}$ correspond precisely to the axioms of a deriving transformation $\mathsf{d}$. Naturality of $\mathsf{d}$ corresponds to the reverse naturality rule \textbf{[r.N]}, while \textbf{[d.n]} corresponds to \textbf{[r.n]} for $n = 1, 2, \hdots, 5$. The correspondence follows from using the snake equations (Definition \ref{SDCC}) and the sliding equations (Lemma \ref{sliding}). Since the computations are all similar, we will not work out the full proof in detail and will instead provide two examples, prove that the constructions are inverses of each other, and then leave the rest as an exercise for the reader. Starting with a reverse deriving transformation $\mathsf{r}$, we will show that the constructed $\mathsf{d}$ satisfies the Leibniz rule \textbf{[d.2]} by using the snake equations and sliding equations on the reverse Leibniz rule \textbf{[r.2]}: \begin{align*} \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=differential] (0) at (1.75, 7.75) {{\bf =\!=\!=\!=}}; \node [style=object] (1) at (2.5, 8.75) {$A$}; \node [style=object] (2) at (1, 8.75) {$\oc A$}; \node [style=object] (3) at (1, 6) {$\oc A$}; \node [style=duplicate] (4) at (1.75, 7) {$\Delta$}; \node [style=object] (5) at (2.5, 6) {$\oc A$}; \node [style=object] (6) at (18.25, 2.5) {$\oc A$}; \node [style=differential] (7) at (18, 0.5) {{\bf =\!=\!=\!=}}; \node [style=object] (8) at (19.75, 2.5) {$A$}; \node [style=duplicate] (9) at (18.25, 1.5) {$\Delta$}; \node [style=object] (10) at (18, -0.25) {$\oc A$}; \node [style=object] (11) at (19.5, -0.25) {$\oc A$}; \node [style=object] (12) at (22.5, 2.5) {$A$}; \node [style=differential] (13) at (22, 0.5) {{\bf =\!=\!=\!=}}; \node [style=object] (14) at (22, -0.25) {$\oc A$}; \node [style=object] (15) at (20.75, -0.25) {$\oc A$}; \node [style=object] (16) at (21.25, 2.5) {$\oc A$}; \node [style=duplicate] (17) at (21.25, 1.5) {$\Delta$}; 
\node [style=port] (18) at (2.75, 7.25) {$=$}; \node [style=port] (19) at (20.25, 1) {$+$}; \node [style=none] (20) at (4.5, 8.5) {}; \node [style=none] (21) at (4, 7.5) {}; \node [style=integral] (22) at (4, 7.75) {{\bf \aquarius\!\aquarius\!\aquarius}}; \node [style=object] (23) at (3.5, 9.5) {$\oc A$}; \node [style=none] (24) at (5, 8.5) {}; \node [style=object] (26) at (5.5, 9.5) {$A$}; \node [style=none] (27) at (5.5, 7.5) {}; \node [style=object] (28) at (4.25, 5) {$\oc A$}; \node [style=duplicate] (29) at (5, 6) {$\Delta$}; \node [style=object] (30) at (5.75, 5) {$\oc A$}; \node [style=port] (31) at (17, 1) {$=$}; \node [style=differential] (33) at (8.25, 6.75) {{\bf \aquarius\!\aquarius\!\aquarius}}; \node [style=object] (34) at (7, 9) {$\oc A$}; \node [style=none] (35) at (8.25, 9) {}; \node [style=duplicate] (36) at (9, 7.75) {$\Delta^\ast$}; \node [style=none] (37) at (9.75, 9) {}; \node [style=object] (38) at (12, 7.25) {$=$}; \node [style=object] (39) at (13.25, 9) {$\oc A$}; \node [style=differential] (40) at (13.75, 6.25) {{\bf \aquarius\!\aquarius\!\aquarius}}; \node [style=none] (41) at (15.5, 9) {}; \node [style=duplicate] (42) at (13.25, 8) {$\Delta$}; \node [style=none] (44) at (14.5, 9) {}; \node [style=port] (45) at (18, 7.25) {$+$}; \node [style=none] (46) at (14.5, 7.5) {}; \node [style=none] (47) at (13.75, 7.5) {}; \node [style=object] (48) at (19.25, 9) {$\oc A$}; \node [style=differential] (49) at (19.75, 6.25) {{\bf \aquarius\!\aquarius\!\aquarius}}; \node [style=none] (50) at (20.5, 9) {}; \node [style=duplicate] (51) at (19.25, 8) {$\Delta$}; \node [style=none] (53) at (21.5, 9) {}; \node [style=none] (54) at (21.5, 7.75) {}; \node [style=none] (55) at (19.75, 7.5) {}; \node [style=object] (56) at (10.5, 5) {$\oc A$}; \node [style=object] (57) at (11.5, 5) {$\oc A$}; \node [style=none] (58) at (10.5, 9) {}; \node [style=none] (59) at (11.5, 9) {}; \node [style=object] (60) at (16.25, 5) {$\oc A$}; \node [style=object] (61) at (17.25, 
5) {$\oc A$}; \node [style=none] (62) at (16.25, 9) {}; \node [style=none] (63) at (17.25, 9) {}; \node [style=none] (64) at (8.25, 6.5) {}; \node [style=object] (65) at (9.5, 10.75) {$A$}; \node [style=none] (66) at (9.5, 6.5) {}; \node [style=none] (67) at (13.75, 6) {}; \node [style=object] (68) at (15.25, 10.75) {$A$}; \node [style=none] (69) at (15.25, 6) {}; \node [style=none] (70) at (19.75, 6) {}; \node [style=object] (71) at (21.25, 10.75) {$A$}; \node [style=none] (72) at (21.25, 6) {}; \node [style=none] (73) at (21.5, 9) {}; \node [style=none] (74) at (20.5, 9) {}; \node [style=object] (75) at (22.25, 5) {$\oc A$}; \node [style=object] (76) at (23.25, 5) {$\oc A$}; \node [style=none] (77) at (22.25, 9) {}; \node [style=none] (78) at (23.25, 9) {}; \node [style=port] (91) at (6.75, 1) {$+$}; \node [style=object] (100) at (3, 4) {$\oc A$}; \node [style=duplicate] (101) at (3, 3) {$\Delta$}; \node [style=object] (102) at (2, -1.75) {$\oc A$}; \node [style=none] (103) at (5, 1) {}; \node [style=none] (104) at (4.5, 0) {}; \node [style=integral] (105) at (4.5, 0.25) {{\bf \aquarius\!\aquarius\!\aquarius}}; \node [style=none] (106) at (5.5, 1) {}; \node [style=port] (107) at (5.5, -1.75) {$\oc A$}; \node [style=object] (108) at (6, 4) {$A$}; \node [style=none] (109) at (6, 0) {}; \node [style=port] (110) at (1.5, 1) {$=$}; \node [style=object] (111) at (8, 4) {$\oc A$}; \node [style=duplicate] (112) at (8, 3) {$\Delta$}; \node [style=object] (113) at (10.75, -1.75) {$\oc A$}; \node [style=none] (114) at (8.5, 1) {}; \node [style=none] (115) at (8, 0) {}; \node [style=integral] (116) at (8, 0.25) {{\bf \aquarius\!\aquarius\!\aquarius}}; \node [style=none] (117) at (9, 1) {}; \node [style=port] (118) at (9, -1.75) {$\oc A$}; \node [style=object] (119) at (9.5, 4) {$A$}; \node [style=none] (120) at (9.5, 0) {}; \node [style=object] (121) at (15.25, 2.5) {$\oc A$}; \node [style=differential] (122) at (15, 0.5) {{\bf =\!=\!=\!=}}; \node [style=object] (123) at 
(16.75, 2.5) {$A$}; \node [style=duplicate] (124) at (15.25, 1.5) {$\Delta$}; \node [style=object] (125) at (15, -0.25) {$\oc A$}; \node [style=object] (126) at (16.5, -0.25) {$\oc A$}; \node [style=object] (127) at (13.25, 2.5) {$A$}; \node [style=differential] (128) at (12.75, 0.5) {{\bf =\!=\!=\!=}}; \node [style=object] (129) at (12.75, -0.25) {$\oc A$}; \node [style=object] (130) at (11.5, -0.25) {$\oc A$}; \node [style=object] (131) at (12, 2.5) {$\oc A$}; \node [style=duplicate] (132) at (12, 1.5) {$\Delta$}; \node [style=port] (133) at (14, 1) {$+$}; \node [style=port] (134) at (11, 1) {$=$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire, bend right] (0) to (1); \draw [style=wire, bend left] (0) to (2); \draw [style=wire, bend right] (4) to (3); \draw [style=wire, bend left] (4) to (5); \draw [style=wire] (0) to (4); \draw [style=wire, in=-90, out=45] (7) to (8); \draw [style=wire, in=150, out=-150, looseness=1.50] (9) to (7); \draw [style=wire] (6) to (9); \draw [style=wire] (7) to (10); \draw [style=wire, bend left, looseness=1.25] (9) to (11); \draw [style=wire, in=-90, out=60, looseness=1.25] (13) to (12); \draw [style=wire, in=91, out=-135, looseness=0.75] (17) to (15); \draw [style=wire, in=150, out=-30] (17) to (13); \draw [style=wire] (16) to (17); \draw [style=wire] (13) to (14); \draw [style=wire, in=-90, out=30, looseness=1.50] (22) to (20.center); \draw [style=wire] (21.center) to (22); \draw [style=wire, in=-90, out=150] (22) to (23); \draw [style=wire, bend left=90, looseness=2.00] (20.center) to (24.center); \draw [style=wire, bend right=90, looseness=2.00] (21.center) to (27.center); \draw [style=wire] (26) to (27.center); \draw [style=wire, bend right] (29) to (28); \draw [style=wire, bend left] (29) to (30); \draw [style=wire] (24.center) to (29); \draw [style=wire, in=-90, out=135] (33) to (34); \draw [style=wire, bend left] (36) to (35.center); \draw [style=wire, bend right] (36) to (37.center); \draw [style=wire, 
in=45, out=-90, looseness=1.25] (36) to (33); \draw [style=wire, in=-90, out=30, looseness=1.25] (40) to (41.center); \draw [style=wire, in=150, out=-135, looseness=1.50] (42) to (40); \draw [style=wire] (39) to (42); \draw [style=wire, bend left=90, looseness=2.00] (46.center) to (47.center); \draw [style=wire] (44.center) to (46.center); \draw [style=wire, in=90, out=-30] (42) to (47.center); \draw [style=wire, in=-90, out=45, looseness=1.25] (49) to (50.center); \draw [style=wire, in=150, out=-135, looseness=1.50] (51) to (49); \draw [style=wire] (48) to (51); \draw [style=wire, bend left=90, looseness=2.00] (54.center) to (55.center); \draw [style=wire] (53.center) to (54.center); \draw [style=wire, in=90, out=-30] (51) to (55.center); \draw [style=wire, bend left=90, looseness=2.00] (37.center) to (59.center); \draw [style=wire, bend left=90, looseness=1.50] (35.center) to (58.center); \draw [style=wire] (58.center) to (56); \draw [style=wire] (59.center) to (57); \draw [style=wire] (62.center) to (60); \draw [style=wire] (63.center) to (61); \draw [style=wire, bend left=90, looseness=1.75] (44.center) to (62.center); \draw [style=wire, bend left=90, looseness=1.75] (41.center) to (63.center); \draw [style=wire, bend right=90, looseness=2.00] (64.center) to (66.center); \draw [style=wire] (65) to (66.center); \draw [style=wire] (33) to (64.center); \draw [style=wire, bend right=90, looseness=2.00] (67.center) to (69.center); \draw [style=wire] (68) to (69.center); \draw [style=wire] (40) to (67.center); \draw [style=wire, bend right=90, looseness=2.00] (70.center) to (72.center); \draw [style=wire] (71) to (72.center); \draw [style=wire] (49) to (70.center); \draw [style=wire] (77.center) to (75); \draw [style=wire] (78.center) to (76); \draw [style=wire, bend left=90, looseness=1.75] (74.center) to (77.center); \draw [style=wire, bend left=90, looseness=1.75] (73.center) to (78.center); \draw [style=wire] (100) to (101); \draw [style=wire, in=-90, out=30, 
looseness=1.50] (105) to (103.center); \draw [style=wire] (104.center) to (105); \draw [style=wire, bend left=90, looseness=2.00] (103.center) to (106.center); \draw [style=wire] (106.center) to (107); \draw [style=wire, bend right=90, looseness=2.00] (104.center) to (109.center); \draw [style=wire] (108) to (109.center); \draw [style=wire, in=90, out=-150, looseness=0.75] (101) to (102); \draw [style=wire, in=150, out=-15] (101) to (105); \draw [style=wire] (111) to (112); \draw [style=wire, in=90, out=-15, looseness=1.25] (112) to (113); \draw [style=wire, in=-90, out=30, looseness=1.50] (116) to (114.center); \draw [style=wire] (115.center) to (116); \draw [style=wire, bend left=90, looseness=2.00] (114.center) to (117.center); \draw [style=wire] (117.center) to (118); \draw [style=wire, bend right=90, looseness=2.00] (115.center) to (120.center); \draw [style=wire] (119) to (120.center); \draw [style=wire, bend right=60] (112) to (116); \draw [style=wire, in=-90, out=45] (122) to (123); \draw [style=wire, in=150, out=-150, looseness=1.50] (124) to (122); \draw [style=wire] (121) to (124); \draw [style=wire] (122) to (125); \draw [style=wire, bend left, looseness=1.25] (124) to (126); \draw [style=wire, in=-90, out=60, looseness=1.25] (128) to (127); \draw [style=wire, in=91, out=-135, looseness=0.75] (132) to (130); \draw [style=wire, in=150, out=-30] (132) to (128); \draw [style=wire] (131) to (132); \draw [style=wire] (128) to (129); \end{pgfonlayer} \end{tikzpicture} \end{array} \end{align*} To show that $\mathsf{d}$ is natural and satisfies the rest of the deriving transformation axioms is similar. 
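In the motivating model of Euclidean spaces and smooth maps, the Leibniz rule just transferred has a familiar concrete shadow: the reverse derivative of a scalar function is its gradient, and the gradient of a pointwise product satisfies $\nabla(fg) = g\,\nabla f + f\,\nabla g$. The following is a hedged numerical sketch of this identity (the functions and the test point are our own choices, not taken from the text), using central finite differences:

```python
# Numerical check of the Leibniz (product) rule for gradients, i.e. for the
# reverse derivative of scalar-valued smooth maps on R^n:
#     grad(f*g)(x) = g(x)*grad(f)(x) + f(x)*grad(g)(x)

def grad(h, x, eps=1e-6):
    """Central-difference gradient of h: R^n -> R at the point x."""
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += eps
        xm[i] -= eps
        g.append((h(xp) - h(xm)) / (2 * eps))
    return g

f = lambda x: x[0] * x[0] + x[1]   # f(x, y) = x^2 + y   (our example)
g = lambda x: x[0] * x[1]          # g(x, y) = x*y       (our example)
fg = lambda x: f(x) * g(x)

x = [1.5, -0.5]
lhs = grad(fg, x)
rhs = [g(x) * df + f(x) * dg for df, dg in zip(grad(f, x), grad(g, x))]
assert all(abs(a - b) < 1e-4 for a, b in zip(lhs, rhs))
```

At the chosen point both sides evaluate to approximately $(-3.125,\ 1.875)$.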
Conversely, starting with a deriving transformation $\mathsf{d}$, we will show that the constructed $\mathsf{r}$ satisfies the reverse chain rule \textbf{[r.4]} by using the snake equations and sliding equations on the chain rule \textbf{[d.4]}: \begin{align*} \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=port] (0) at (24, 12.5) {$=$}; \node [style=object] (1) at (22.75, 11) {$A$}; \node [style=component] (2) at (23.25, 13) {$\delta^\ast$}; \node [style=differential] (3) at (22.75, 12) {{\bf \aquarius\!\aquarius\!\aquarius}}; \node [style=object] (4) at (22.25, 13.75) {$\oc A$}; \node [style=object] (5) at (23.25, 13.75) {$\oc \oc A$}; \node [style=none] (6) at (25.5, 13.5) {}; \node [style=none] (7) at (25, 12.5) {}; \node [style=integral] (8) at (25, 12.75) {{\bf =\!=\!=\!=}}; \node [style=object] (9) at (24.5, 14.5) {$\oc A$}; \node [style=none] (10) at (26, 13.5) {}; \node [style=port] (11) at (26, 10.75) {$A$}; \node [style=object] (12) at (26.5, 14.5) {$\oc A$}; \node [style=none] (13) at (26.5, 12.5) {}; \node [style=component] (14) at (26.5, 13.25) {$\delta^\ast$}; \node [style=port] (15) at (27.25, 12.5) {$=$}; \node [style=none] (16) at (28.75, 14.25) {}; \node [style=none] (17) at (28.25, 12.25) {}; \node [style=integral] (18) at (28.25, 13.5) {{\bf =\!=\!=\!=}}; \node [style=object] (19) at (27.75, 15.25) {$\oc A$}; \node [style=none] (20) at (29.25, 14.25) {}; \node [style=port] (21) at (29.25, 10.5) {$A$}; \node [style=object] (22) at (29.75, 15.25) {$\oc A$}; \node [style=none] (23) at (29.75, 12.25) {}; \node [style=component] (24) at (28.25, 12.75) {$\delta$}; \node [style=port] (25) at (30.5, 12.5) {$=$}; \node [style=component] (26) at (31.25, 13.5) {$\delta$}; \node [style=duplicate] (27) at (31.75, 14.5) {$\Delta$}; \node [style=object] (28) at (31.75, 15.25) {$\oc A$}; \node [style=differential] (29) at (32.5, 13.5) {{\bf =\!=\!=\!=}}; \node [style=differential] (30) at (32, 12.5) {{\bf =\!=\!=\!=}}; \node 
[style=none] (31) at (32, 12) {}; \node [style=none] (32) at (34.25, 12.25) {}; \node [style=none] (33) at (33, 14.25) {}; \node [style=none] (34) at (33.75, 14.25) {}; \node [style=port] (35) at (33.75, 10) {$A$}; \node [style=object] (36) at (34.25, 15.25) {$\oc A$}; \node [style=port] (37) at (35, 12.5) {$=$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire, in=-90, out=30] (3) to (2); \draw [style=wire, in=-90, out=135] (3) to (4); \draw [style=wire] (3) to (1); \draw [style=wire] (5) to (2); \draw [style=wire, in=-90, out=30, looseness=1.50] (8) to (6.center); \draw [style=wire] (7.center) to (8); \draw [style=wire, in=-90, out=150] (8) to (9); \draw [style=wire, bend left=90, looseness=2.00] (6.center) to (10.center); \draw [style=wire] (10.center) to (11); \draw [style=wire, bend right=90, looseness=2.00] (7.center) to (13.center); \draw [style=wire] (12) to (14); \draw [style=wire] (14) to (13.center); \draw [style=wire, in=-90, out=30, looseness=1.50] (18) to (16.center); \draw [style=wire, in=-90, out=150] (18) to (19); \draw [style=wire, bend left=90, looseness=2.00] (16.center) to (20.center); \draw [style=wire] (20.center) to (21); \draw [style=wire, bend right=90, looseness=2.00] (17.center) to (23.center); \draw [style=wire] (22) to (23.center); \draw [style=wire] (18) to (24); \draw [style=wire] (24) to (17.center); \draw [style=wire, bend right] (27) to (26); \draw [style=wire] (28) to (27); \draw [style=wire, in=150, out=-30, looseness=1.25] (27) to (29); \draw [style=wire, in=30, out=-90] (29) to (30); \draw [style=wire, in=150, out=-90] (26) to (30); \draw [style=wire, bend right=90, looseness=2.00] (31.center) to (32.center); \draw [style=wire] (30) to (31.center); \draw [style=wire, bend left=90, looseness=2.00] (33.center) to (34.center); \draw [style=wire, in=-90, out=30] (29) to (33.center); \draw [style=wire] (34.center) to (35); \draw [style=wire] (36) to (32.center); \end{pgfonlayer} \end{tikzpicture} \end{array} \\ 
\begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=port] (119) at (45.25, 12.5) {$=$}; \node [style=duplicate] (120) at (46.5, 14) {$\Delta$}; \node [style=object] (121) at (46.5, 14.75) {$\oc A$}; \node [style=differential] (122) at (47, 11.25) {{\bf \aquarius\!\aquarius\!\aquarius}}; \node [style=object] (123) at (47, 10.5) {$A$}; \node [style=component] (124) at (47, 13) {$\delta$}; \node [style=differential] (125) at (47.5, 12.25) {{\bf \aquarius\!\aquarius\!\aquarius}}; \node [style=object] (126) at (48, 14.75) {$\oc \oc A$}; \node [style=none] (127) at (43.25, 11.75) {}; \node [style=none] (128) at (42.75, 10.75) {}; \node [style=integral] (129) at (42.75, 11) {{\bf =\!=\!=\!=}}; \node [style=none] (130) at (43.75, 11.75) {}; \node [style=none] (131) at (44.25, 10.75) {}; \node [style=duplicate] (132) at (42, 15.25) {$\Delta$}; \node [style=object] (133) at (42, 16) {$\oc A$}; \node [style=object] (134) at (43.75, 8.75) {$A$}; \node [style=component] (135) at (42.5, 14.25) {$\delta$}; \node [style=object] (136) at (44.75, 16) {$\oc \oc A$}; \node [style=none] (137) at (43.75, 14.25) {}; \node [style=none] (138) at (43.25, 13.25) {}; \node [style=integral] (139) at (43.25, 13.5) {{\bf =\!=\!=\!=}}; \node [style=none] (140) at (44.25, 14.25) {}; \node [style=none] (141) at (44.75, 13.25) {}; \node [style=component] (143) at (35.75, 14) {$\delta$}; \node [style=duplicate] (144) at (36.75, 15.5) {$\Delta$}; \node [style=object] (145) at (36.75, 16.25) {$\oc A$}; \node [style=differential] (146) at (38.25, 14.5) {{\bf =\!=\!=\!=}}; \node [style=differential] (147) at (36.25, 12.5) {{\bf =\!=\!=\!=}}; \node [style=none] (148) at (36.25, 12) {}; \node [style=none] (149) at (40, 12) {}; \node [style=none] (150) at (38.75, 15.25) {}; \node [style=none] (151) at (39.5, 15.25) {}; \node [style=port] (152) at (39.5, 8.75) {$A$}; \node [style=object] (153) at (40, 16.25) {$\oc A$}; \node [style=port] (154) at (40.75, 12.5) {$=$}; \node 
[style=none] (155) at (37.5, 13.5) {}; \node [style=none] (156) at (38.25, 13.5) {}; \node [style=none] (157) at (37, 13.5) {}; \node [style=none] (158) at (37.5, 13.5) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (121) to (120); \draw [style=wire] (122) to (123); \draw [style=wire, in=-90, out=150] (125) to (124); \draw [style=wire, in=-90, out=45] (125) to (126); \draw [style=wire, in=90, out=-30, looseness=1.25] (120) to (124); \draw [style=wire, in=150, out=-150] (120) to (122); \draw [style=wire, in=30, out=-90] (125) to (122); \draw [style=wire, in=-90, out=30, looseness=1.50] (129) to (127.center); \draw [style=wire] (128.center) to (129); \draw [style=wire, bend left=90, looseness=2.00] (127.center) to (130.center); \draw [style=wire, bend right=90, looseness=2.00] (128.center) to (131.center); \draw [style=wire] (133) to (132); \draw [style=wire, in=90, out=-30, looseness=1.25] (132) to (135); \draw [style=wire, in=-90, out=30, looseness=1.50] (139) to (137.center); \draw [style=wire] (138.center) to (139); \draw [style=wire, bend left=90, looseness=2.00] (137.center) to (140.center); \draw [style=wire, bend right=90, looseness=2.00] (138.center) to (141.center); \draw [style=wire, in=150, out=-90] (135) to (139); \draw [style=wire] (136) to (141.center); \draw [style=wire] (140.center) to (131.center); \draw [style=wire, in=-150, out=150] (129) to (132); \draw [style=wire] (130.center) to (134); \draw [style=wire, bend right] (144) to (143); \draw [style=wire] (145) to (144); \draw [style=wire, in=150, out=-15, looseness=1.25] (144) to (146); \draw [style=wire, in=150, out=-90] (143) to (147); \draw [style=wire, bend right=90, looseness=2.00] (148.center) to (149.center); \draw [style=wire] (147) to (148.center); \draw [style=wire, bend left=90, looseness=2.00] (150.center) to (151.center); \draw [style=wire, in=-90, out=30] (146) to (150.center); \draw [style=wire] (151.center) to (152); \draw [style=wire] (153) to (149.center); 
\draw [style=wire, bend right=90, looseness=2.00] (155.center) to (157.center); \draw [style=wire, bend left=90, looseness=2.00] (156.center) to (158.center); \draw [style=wire] (146) to (156.center); \draw [style=wire, in=30, out=-90, looseness=1.25] (157.center) to (147); \end{pgfonlayer} \end{tikzpicture} \end{array} \end{align*} To show that $\mathsf{r}$ satisfies the rest of the reverse deriving transformation axioms is similar. Lastly, we use the snake equations to prove that these constructions are inverses of each other: \begin{align*} \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=none] (0) at (7, 9.75) {}; \node [style=none] (1) at (6.5, 8.25) {}; \node [style=integral] (2) at (6.5, 8.75) {{\bf \aquarius\!\aquarius\!\aquarius}}; \node [style=object] (3) at (5.75, 12) {$\oc A$}; \node [style=none] (4) at (8, 9.75) {}; \node [style=none] (5) at (8.5, 8.25) {}; \node [style=none] (6) at (8.5, 11) {}; \node [style=none] (7) at (8, 7.25) {}; \node [style=none] (8) at (9.25, 11) {}; \node [style=port] (9) at (9.25, 5.25) {$A$}; \node [style=object] (10) at (9.75, 12) {$\oc A$}; \node [style=none] (11) at (9.75, 7.25) {}; \node [style=object] (12) at (16, 8.75) {$\oc A$}; \node [style=object] (13) at (15.5, 7.25) {$A$}; \node [style=integral] (14) at (15.5, 8) {{\bf \aquarius\!\aquarius\!\aquarius}}; \node [style=object] (15) at (15, 8.75) {$\oc A$}; \node [style=port] (16) at (14.25, 8) {$=$}; \node [style=none] (17) at (12.25, 7) {}; \node [style=none] (18) at (11.5, 7) {}; \node [style=none] (19) at (13, 7) {}; \node [style=object] (20) at (13, 6) {$A$}; \node [style=none] (21) at (12.25, 7) {}; \node [style=object] (22) at (10.25, 8) {$=$}; \node [style=none] (23) at (12.75, 8.75) {}; \node [style=none] (24) at (13.5, 8.75) {}; \node [style=object] (25) at (13.5, 9.75) {$\oc A$}; \node [style=none] (26) at (12, 8.75) {}; \node [style=none] (27) at (12.75, 8.75) {}; \node [style=integral] (28) at (11.5, 7.75) {{\bf 
\aquarius\!\aquarius\!\aquarius}}; \node [style=object] (29) at (10.75, 9.75) {$\oc A$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire, in=-90, out=30] (2) to (0.center); \draw [style=wire] (1.center) to (2); \draw [style=wire, in=-90, out=150, looseness=0.75] (2) to (3); \draw [style=wire, bend left=90, looseness=2.00] (0.center) to (4.center); \draw [style=wire, bend right=90, looseness=2.00] (1.center) to (5.center); \draw [style=wire, bend left=90, looseness=2.00] (6.center) to (8.center); \draw [style=wire] (8.center) to (9); \draw [style=wire, bend right=90, looseness=2.00] (7.center) to (11.center); \draw [style=wire] (10) to (11.center); \draw [style=wire] (4.center) to (7.center); \draw [style=wire] (5.center) to (6.center); \draw [style=wire, bend right] (14) to (12); \draw [style=wire] (13) to (14); \draw [style=wire, bend left] (14) to (15); \draw [style=wire, bend left=90, looseness=2.00] (17.center) to (19.center); \draw [style=wire] (19.center) to (20); \draw [style=wire, bend right=90, looseness=2.00] (18.center) to (21.center); \draw [style=wire, bend right=90, looseness=2.00] (23.center) to (26.center); \draw [style=wire, bend left=90, looseness=2.00] (24.center) to (27.center); \draw [style=wire] (25) to (24.center); \draw [style=wire, in=-90, out=150, looseness=0.75] (28) to (29); \draw [style=wire] (28) to (18.center); \draw [style=wire, in=-90, out=30] (28) to (26.center); \end{pgfonlayer} \end{tikzpicture} \end{array}&& \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=none] (0) at (7, 9.75) {}; \node [style=none] (1) at (6.5, 8.25) {}; \node [style=integral] (2) at (6.5, 8.75) {{\bf =\!=\!=\!=}}; \node [style=object] (3) at (5.75, 12) {$\oc A$}; \node [style=none] (4) at (8, 9.75) {}; \node [style=none] (5) at (8.5, 8.25) {}; \node [style=none] (6) at (8.5, 11) {}; \node [style=none] (7) at (8, 7.25) {}; \node [style=none] (8) at (9.25, 11) {}; \node [style=port] (9) at (9.25, 5.25) {$\oc 
A$}; \node [style=object] (10) at (9.75, 12) {$A$}; \node [style=none] (11) at (9.75, 7.25) {}; \node [style=object] (12) at (16, 8.75) {$A$}; \node [style=object] (13) at (15.5, 7.25) {$\oc A$}; \node [style=integral] (14) at (15.5, 8) {{\bf =\!=\!=\!=}}; \node [style=object] (15) at (15, 8.75) {$\oc A$}; \node [style=port] (16) at (14.25, 8) {$=$}; \node [style=none] (17) at (12.25, 7) {}; \node [style=none] (18) at (11.5, 7) {}; \node [style=none] (19) at (13, 7) {}; \node [style=object] (20) at (13, 6) {$\oc A$}; \node [style=none] (21) at (12.25, 7) {}; \node [style=object] (22) at (10.25, 8) {$=$}; \node [style=none] (23) at (12.75, 8.75) {}; \node [style=none] (24) at (13.5, 8.75) {}; \node [style=object] (25) at (13.5, 9.75) {$A$}; \node [style=none] (26) at (12, 8.75) {}; \node [style=none] (27) at (12.75, 8.75) {}; \node [style=integral] (28) at (11.5, 7.75) {{\bf =\!=\!=\!=}}; \node [style=object] (29) at (10.75, 9.75) {$\oc A$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire, in=-90, out=30] (2) to (0.center); \draw [style=wire] (1.center) to (2); \draw [style=wire, in=-90, out=150, looseness=0.75] (2) to (3); \draw [style=wire, bend left=90, looseness=2.00] (0.center) to (4.center); \draw [style=wire, bend right=90, looseness=2.00] (1.center) to (5.center); \draw [style=wire, bend left=90, looseness=2.00] (6.center) to (8.center); \draw [style=wire] (8.center) to (9); \draw [style=wire, bend right=90, looseness=2.00] (7.center) to (11.center); \draw [style=wire] (10) to (11.center); \draw [style=wire] (4.center) to (7.center); \draw [style=wire] (5.center) to (6.center); \draw [style=wire, bend right] (14) to (12); \draw [style=wire] (13) to (14); \draw [style=wire, bend left] (14) to (15); \draw [style=wire, bend left=90, looseness=2.00] (17.center) to (19.center); \draw [style=wire] (19.center) to (20); \draw [style=wire, bend right=90, looseness=2.00] (18.center) to (21.center); \draw [style=wire, bend right=90, looseness=2.00] 
(23.center) to (26.center); \draw [style=wire, bend left=90, looseness=2.00] (24.center) to (27.center); \draw [style=wire] (25) to (24.center); \draw [style=wire, in=-90, out=150, looseness=0.75] (28) to (29); \draw [style=wire] (28) to (18.center); \draw [style=wire, in=-90, out=30] (28) to (26.center); \end{pgfonlayer} \end{tikzpicture} \end{array} \end{align*} So we conclude that a reverse differential category is precisely the same thing as a differential category which is also self-dual compact closed. \end{proof} We note that a self-dual compact closed differential category is also a codifferential category (the dual of a differential category). Indeed, observe that if $(\oc, \delta, \varepsilon, \Delta, e)$ is a coalgebra modality on a self-dual compact closed category $\mathbb{X}$, then we can define an algebra modality (the dual of a coalgebra modality). The monad endofunctor $\wn: \mathbb{X} \to \mathbb{X}$ is defined on objects as $\wn A = \oc A$ and on maps as $\wn f = \left( \oc(f^\ast) \right)^\ast$, while the remaining natural transformations are the duals of the coalgebra modality natural transformations. Explicitly, $(\wn, \delta^\ast, \varepsilon^\ast, \Delta^\ast, e^\ast)$ is an algebra modality on $\mathbb{X}$. It is crucial to observe that $\oc$ and $\wn$ are equal on objects but not on maps, and that $\delta^\ast$, $\varepsilon^\ast$, $\Delta^\ast$ and $e^\ast$ are not necessarily natural with respect to $\oc$. For example, $(\wn(f) \otimes \wn(f)); \Delta^\ast = \Delta^\ast; \wn(f)$, but $(\oc(f) \otimes \oc(f)); \Delta^\ast$ may not be equal to $\Delta^\ast; \oc(f)$. Furthermore, if $\mathbb{X}$ is also a differential category, where $\mathsf{d}$ is a deriving transformation for $(\oc, \delta, \varepsilon, \Delta, e)$, then $\mathbb{X}$ is also a codifferential category where $\mathsf{d}^\ast$ is a deriving transformation for $(\wn, \delta^\ast, \varepsilon^\ast, \Delta^\ast, e^\ast)$. 
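For instance, naturality of $\Delta^\ast$ with respect to $\wn$ is obtained by dualizing naturality of $\Delta$ with respect to $\oc$ (instantiated at $f^\ast$), using that dualization is contravariant on composition and, thanks to the twist assumption, compatible with the tensor product:
\begin{align*}
(\wn(f) \otimes \wn(f)); \Delta^\ast = \left( \Delta; \left( \oc(f^\ast) \otimes \oc(f^\ast) \right) \right)^\ast = \left( \oc(f^\ast); \Delta \right)^\ast = \Delta^\ast; \wn(f)
\end{align*}
and similarly for $\delta^\ast$, $\varepsilon^\ast$, and $e^\ast$.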
Before providing examples of reverse differential categories, let us first discuss the relation between the reverse deriving transformation and the coderiving transformation. Indeed, observe that if $(\oc, \delta, \varepsilon, \Delta, e)$ is a coalgebra modality on a self-dual compact closed category $\mathbb{X}$, then there is a canonical map of the desired type $\oc A \otimes \oc A \to A$ defined as the composite: \begin{align*} \begin{array}[c]{c} \xymatrixcolsep{5pc}\xymatrix{\oc A \otimes \oc A \ar[r]^-{1_{\oc A} \otimes \mathsf{d}^\circ_A} & \oc A \otimes \oc A \otimes A \ar[r]^-{\cup_{\oc A} \otimes 1_A} & A} \end{array} && \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=none] (93) at (3, -13) {}; \node [style=none] (96) at (3.75, -13) {}; \node [style=object] (101) at (3, -10.5) {$\oc A$}; \node [style=object] (117) at (4.25, -10.5) {$\oc A$}; \node [style=differential] (118) at (4.25, -11.25) {{\bf =\!=\!=\!=}}; \node [style=object] (119) at (5.25, -14) {$A$}; \node [style=none] (120) at (3.75, -12.25) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire, bend right=90, looseness=2.00] (93.center) to (96.center); \draw [style=wire] (101) to (93.center); \draw [style=wire] (117) to (118); \draw [style=wire, in=90, out=-150, looseness=1.25] (118) to (120.center); \draw [style=wire, in=90, out=-30, looseness=1.25] (118) to (119); \draw [style=wire] (120.center) to (96.center); \end{pgfonlayer} \end{tikzpicture} \end{array} \end{align*} While this is a map of the right type, it is not automatically a reverse deriving transformation. This map is a reverse deriving transformation if and only if the coderiving transformation is a deriving transformation for the induced algebra modality. 
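In the motivating example of Euclidean spaces and smooth maps, where cups and caps are given by the standard inner products, a composite of this shape is exactly how a reverse derivative arises from a forward one: it is characterised by the adjoint pairing $\langle J(x)v, w\rangle = \langle v, J(x)^{\mathsf{T}} w\rangle$, with $J(x)$ the Jacobian. The following hedged numerical sketch (the map $f$, the point, and the vectors are our own choices) checks this pairing:

```python
# Adjoint pairing between the forward derivative (Jacobian-vector product)
# and the reverse derivative (transposed-Jacobian-vector product):
#     <J(x) v, w> = <v, J(x)^T w>

def jacobian(f, x, eps=1e-6):
    """Central-difference Jacobian of f: R^n -> R^m at x, as a list of m rows."""
    n, m = len(x), len(f(x))
    J = [[0.0] * n for _ in range(m)]
    for j in range(n):
        xp, xm = list(x), list(x)
        xp[j] += eps
        xm[j] -= eps
        fp, fm = f(xp), f(xm)
        for i in range(m):
            J[i][j] = (fp[i] - fm[i]) / (2 * eps)
    return J

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

f = lambda x: [x[0] * x[1], x[0] + x[1] * x[1]]   # a smooth map R^2 -> R^2 (our example)
x, v, w = [2.0, 3.0], [1.0, -1.0], [0.5, 2.0]

J = jacobian(f, x)
Jv = [dot(row, v) for row in J]                                             # forward derivative on v
JTw = [dot([J[i][j] for i in range(len(J))], w) for j in range(len(J[0]))]  # reverse derivative on w
assert abs(dot(Jv, w) - dot(v, JTw)) < 1e-8
```

At the chosen point both pairings evaluate to $-9.5$.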
\begin{lemma}\label{lem:coder-rev} If $\mathbb{X}$ is a self-dual compact closed category, which is additive symmetric monoidal and equipped with a coalgebra modality $(\oc, \delta, \varepsilon, \Delta, e)$, the following are equivalent: \begin{enumerate} \item $\mathsf{r} := (1 \otimes \mathsf{d}^\circ);(\cup \otimes 1)$ is a reverse deriving transformation; \item $\mathsf{d}^\circ$ is a deriving transformation for the algebra modality $(\wn, \delta^\ast, \varepsilon^\ast, \Delta^\ast, e^\ast)$; that is, the dual diagrams of Definition \ref{def:diffcat} commute. \end{enumerate} Furthermore, in this case, $\mathsf{d}^\ast = \mathsf{d}^\circ$. \end{lemma} \begin{proof} For $(i) \Rightarrow (ii)$: by Theorem \ref{thm:rdc_to_dc}, we obtain a deriving transformation $\mathsf{d}_A: \oc A \otimes A \to \oc A$. We will now show that $\mathsf{d}^\ast = \mathsf{d}^\circ$. So using the snake equations and Theorem \ref{thm:rdc_to_dc}, we compute: \begin{align*} \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=none] (93) at (13, 12.25) {}; \node [style=none] (96) at (13.75, 12.25) {}; \node [style=object] (117) at (14.25, 14.75) {$\oc A$}; \node [style=differential] (118) at (14.25, 14) {{\bf =\!=\!=\!=}}; \node [style=object] (119) at (15.25, 11) {$A$}; \node [style=none] (120) at (13.75, 13) {}; \node [style=none] (121) at (5.25, 14) {}; \node [style=none] (122) at (4.75, 12.5) {}; \node [style=integral] (123) at (4.75, 13) {{\bf =\!=\!=\!=}}; \node [style=none] (125) at (6.5, 14) {}; \node [style=none] (126) at (7, 12.5) {}; \node [style=object] (127) at (7, 15.25) {$\oc A$}; \node [style=object] (128) at (6.5, 10.25) {$A$}; \node [style=object] (143) at (7.75, 12.5) {$=$}; \node [style=none] (151) at (4.25, 14) {}; \node [style=none] (152) at (5.75, 14) {}; \node [style=object] (153) at (5.75, 10.25) {$\oc A$}; \node [style=object] (166) at (17.75, 11.5) {$A$}; \node [style=object] (167) at (17.25, 13) {$\oc A$}; \node [style=integral] 
(168) at (17.25, 12.25) {{\bf =\!=\!=\!=}}; \node [style=object] (169) at (16.75, 11.5) {$\oc A$}; \node [style=port] (170) at (16, 12.25) {$=$}; \node [style=object] (180) at (10, 11) {$A$}; \node [style=none] (182) at (8.75, 13.25) {}; \node [style=object] (184) at (8.75, 11) {$\oc A$}; \node [style=none] (185) at (9.5, 13.25) {}; \node [style=none] (186) at (8.75, 13.25) {}; \node [style=integral] (187) at (10, 12.25) {{\bf \aquarius\!\aquarius\!\aquarius}}; \node [style=object] (188) at (10.75, 14.25) {$\oc A$}; \node [style=none] (190) at (12.25, 13.25) {}; \node [style=object] (191) at (12.25, 11) {$\oc A$}; \node [style=none] (192) at (13, 13.25) {}; \node [style=none] (193) at (12.25, 13.25) {}; \node [style=object] (194) at (11.5, 12.5) {$=$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire, bend right=90, looseness=2.00] (93.center) to (96.center); \draw [style=wire] (117) to (118); \draw [style=wire, in=90, out=-150, looseness=1.25] (118) to (120.center); \draw [style=wire, in=90, out=-30] (118) to (119); \draw [style=wire] (120.center) to (96.center); \draw [style=wire, in=-90, out=30] (123) to (121.center); \draw [style=wire] (122.center) to (123); \draw [style=wire, bend left=90, looseness=2.00] (121.center) to (125.center); \draw [style=wire, bend right=90, looseness=2.00] (122.center) to (126.center); \draw [style=wire] (125.center) to (128); \draw [style=wire] (126.center) to (127); \draw [style=wire, bend left=90, looseness=2.00] (151.center) to (152.center); \draw [style=wire, in=150, out=-90] (151.center) to (123); \draw [style=wire] (152.center) to (153); \draw [style=wire, bend left] (168) to (166); \draw [style=wire] (167) to (168); \draw [style=wire, bend right] (168) to (169); \draw [style=wire, bend left=90, looseness=2.00] (182.center) to (185.center); \draw [style=wire, in=-90, out=30, looseness=0.75] (187) to (188); \draw [style=wire, in=-90, out=150] (187) to (185.center); \draw [style=wire] (187) to (180); \draw 
[style=wire] (186.center) to (184); \draw [style=wire, bend left=90, looseness=2.00] (190.center) to (192.center); \draw [style=wire] (193.center) to (191); \draw [style=wire] (192.center) to (93.center); \end{pgfonlayer} \end{tikzpicture} \end{array} \end{align*} So $\mathsf{d}^\ast = \mathsf{d}^\circ$. Therefore, by the above discussion, $\mathsf{d}^\ast = \mathsf{d}^\circ$ is a deriving transformation for the algebra modality $(\wn, \delta^\ast, \varepsilon^\ast, \Delta^\ast, e^\ast)$. Conversely, for $(ii) \Rightarrow (i)$: by the dual of the above discussion, we have that ${\mathsf{d}^\circ}^\ast$ is a deriving transformation for the coalgebra modality $(\oc, \delta, \varepsilon, \Delta, e)$. Then by Theorem \ref{thm:rdc_to_dc}, we obtain a reverse deriving transformation $\mathsf{r}_A: \oc A \otimes \oc A \to A$. Expanding out the construction, we compute: \begin{align*} \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=none] (121) at (18.5, 2) {}; \node [style=none] (122) at (18, 3.5) {}; \node [style=integral] (123) at (18, 3) {{\bf =\!=\!=\!=}}; \node [style=none] (125) at (19.75, 2) {}; \node [style=none] (126) at (20.25, 3.5) {}; \node [style=none] (151) at (17.5, 2) {}; \node [style=none] (152) at (19, 2) {}; \node [style=object] (153) at (19, 5.75) {$\oc A$}; \node [style=port] (170) at (13.5, 2.5) {$=$}; \node [style=object] (194) at (16.75, 2.5) {$=$}; \node [style=none] (195) at (15.25, 3.5) {}; \node [style=none] (196) at (14.75, 2.5) {}; \node [style=integral] (197) at (14.75, 2.75) {{\bf =\!=\!=\!=}}; \node [style=object] (198) at (14.25, 4.5) {$\oc A$}; \node [style=none] (199) at (15.75, 3.5) {}; \node [style=port] (200) at (15.75, 0.75) {$A$}; \node [style=object] (201) at (16.25, 4.5) {$\oc A$}; \node [style=none] (202) at (16.25, 2.5) {}; \node [style=object] (203) at (13, 3.25) {$\oc A$}; \node [style=object] (204) at (12.5, 1.75) {$A$}; \node [style=integral] (205) at (12.5, 2.5) {{\bf 
\aquarius\!\aquarius\!\aquarius}}; \node [style=object] (206) at (12, 3.25) {$\oc A$}; \node [style=none] (207) at (20.25, 2.5) {}; \node [style=none] (208) at (22, 2.25) {}; \node [style=object] (209) at (22, 5.25) {$\oc A$}; \node [style=none] (210) at (22, 2.25) {}; \node [style=none] (211) at (19.75, 3.5) {}; \node [style=none] (212) at (21.25, 3.5) {}; \node [style=none] (213) at (19.75, 3.5) {}; \node [style=none] (214) at (21.25, 3.5) {}; \node [style=port] (215) at (21.25, 0.5) {$A$}; \node [style=object] (216) at (22.75, 2.5) {$=$}; \node [style=none] (224) at (23.5, 2) {}; \node [style=none] (225) at (24.25, 2) {}; \node [style=object] (226) at (23.5, 4.5) {$\oc A$}; \node [style=object] (227) at (24.75, 4.5) {$\oc A$}; \node [style=differential] (228) at (24.75, 3.75) {{\bf =\!=\!=\!=}}; \node [style=object] (229) at (25.75, 1) {$A$}; \node [style=none] (230) at (24.25, 2.75) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire, in=90, out=-30] (123) to (121.center); \draw [style=wire] (122.center) to (123); \draw [style=wire, bend right=90, looseness=2.00] (121.center) to (125.center); \draw [style=wire, bend left=90, looseness=2.00] (122.center) to (126.center); \draw [style=wire, bend right=90, looseness=2.00] (151.center) to (152.center); \draw [style=wire, in=-150, out=90] (151.center) to (123); \draw [style=wire] (152.center) to (153); \draw [style=wire, in=-90, out=30, looseness=1.50] (197) to (195.center); \draw [style=wire] (196.center) to (197); \draw [style=wire, in=-90, out=150] (197) to (198); \draw [style=wire, bend left=90, looseness=2.00] (195.center) to (199.center); \draw [style=wire] (199.center) to (200); \draw [style=wire, bend right=90, looseness=2.00] (196.center) to (202.center); \draw [style=wire] (201) to (202.center); \draw [style=wire, bend right] (205) to (203); \draw [style=wire] (204) to (205); \draw [style=wire, bend left] (205) to (206); \draw [style=wire, bend right=90, looseness=2.00] (207.center) to 
(208.center); \draw [style=wire] (126.center) to (207.center); \draw [style=wire] (209) to (210.center); \draw [style=wire, bend left=90, looseness=2.00] (211.center) to (212.center); \draw [style=wire] (213.center) to (125.center); \draw [style=wire] (214.center) to (215); \draw [style=wire, bend right=90, looseness=2.00] (224.center) to (225.center); \draw [style=wire] (226) to (224.center); \draw [style=wire] (227) to (228); \draw [style=wire, in=90, out=-150, looseness=1.25] (228) to (230.center); \draw [style=wire, in=90, out=-30, looseness=1.25] (228) to (229); \draw [style=wire] (230.center) to (225.center); \end{pgfonlayer} \end{tikzpicture} \end{array} \end{align*} So we conclude that $\mathsf{r} = (1 \otimes \mathsf{d}^\circ);(\cup \otimes 1)$, as required. \end{proof} In the presence of Seely isomorphisms, if all the relevant structure maps are duals of one another, then the reverse deriving transformation is necessarily of the above form. \begin{definition} A reverse differential storage category is a reverse differential category with finite products whose coalgebra modality has Seely isomorphisms and such that $\eta = \varepsilon^\ast$, $\nabla= \Delta^\ast$, and $\mathsf{u} = e^\ast$ (where $\eta$ is the induced codereliction, and $\nabla$ and $\mathsf{u}$ are the induced natural monoid structure maps). \end{definition} \begin{corollary} A reverse differential storage category is precisely a differential storage category which is also self-dual compact closed and such that $\eta = \varepsilon^\ast$, $\nabla= \Delta^\ast$, and $\mathsf{u} = e^\ast$. Furthermore, in a reverse differential storage category, the reverse deriving transformation is of the form $\mathsf{r} = (1 \otimes \mathsf{d}^\circ);(\cup \otimes 1)$. \end{corollary} \begin{proof} The first part of the statement is simply an extension of Theorem \ref{thm:rdc_to_dc}. For the second part, recall that in a differential storage category the deriving transformation is of the form $\mathsf{d} = (1 \otimes \eta); \nabla$. 
By assumption, the dual of the deriving transformation is computed to be: \[\mathsf{d}^\ast = \left( (1 \otimes \eta); \nabla \right)^\ast = \nabla^\ast ; (1 \otimes \eta)^\ast = \nabla^\ast ; (1 \otimes \eta^\ast) = \Delta ; (1 \otimes \varepsilon) = \mathsf{d}^\circ \] So $\mathsf{d}^\ast = \mathsf{d}^\circ$. Moreover, recall that since we are in the self-dual case, $\mathsf{d}^\ast = \mathsf{d}^\circ$ is a deriving transformation for the algebra modality $(\wn, \delta^\ast, \varepsilon^\ast, \Delta^\ast, e^\ast)$. Then by Lemma \ref{lem:coder-rev}, it follows that $\mathsf{r} = (1 \otimes \mathsf{d}^\circ);(\cup \otimes 1)$. \end{proof} We conclude this section with examples of reverse differential categories. \begin{example} \normalfont Let $\mathsf{REL}$ be the category of sets and relations, where recall that the objects are sets and the maps are relations between them; that is, a relation from a set $X$ to a set $Y$, denoted $R: X \to Y$, is a subset $R \subseteq X \times Y$. $\mathsf{REL}$ is a symmetric monoidal category where the monoidal product is given by the Cartesian product of sets, $X \otimes Y = X \times Y$, and where the monoidal unit is a chosen singleton $\lbrace \ast \rbrace$. With this monoidal structure, $\mathsf{REL}$ is also a self-dual compact closed category where for a set $X$, its cup $\cup_X: X \times X \to \lbrace \ast \rbrace$ and cap $\cap_X: \lbrace \ast \rbrace \to X \times X$ are the dual relations which relate the single element to all pairs of copies of elements of $X$: \begin{align*} \cup_X = \left \lbrace \left( (x,x), \ast \right) \vert~ x \in X \right \rbrace \subset (X \times X) \times \lbrace \ast \rbrace && \cap_X = \left \lbrace \left( \ast, (x,x) \right) \vert~ x \in X \right \rbrace \subset \lbrace \ast \rbrace \times (X \times X) \end{align*} $\mathsf{REL}$ is also a (monoidal) differential category. 
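As a brief aside from the formal development, the relational cups and caps above are easy to experiment with concretely. The following is a minimal Python sketch (our own illustration, not part of the paper): relations are encoded as sets of pairs, and composing the cap with the cup traces out a loop, which in $\mathsf{REL}$ is simply the total relation on the singleton.

```python
def compose(R, S):
    """Relational composition R;S for R ⊆ X×Y and S ⊆ Y×Z."""
    return {(x, z) for (x, y1) in R for (y2, z) in S if y1 == y2}

STAR = "*"  # the chosen singleton's element

def cup(X):
    """cup_X ⊆ (X×X)×{*}: relates each diagonal pair (x,x) to *."""
    return {((x, x), STAR) for x in X}

def cap(X):
    """cap_X ⊆ {*}×(X×X): relates * to each diagonal pair (x,x)."""
    return {(STAR, (x, x)) for x in X}

X = {1, 2, 3}
# The cap-then-cup loop collapses to the one relation on the singleton.
print(compose(cap(X), cup(X)))
```

Note that the self-duality of $\mathsf{REL}$ is visible here: the dual of a relation, computed via these cups and caps, is just its converse.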
The additive symmetric monoidal structure is induced by the biproduct, which is given by the disjoint union of sets, $X \sqcup Y$, and where the zero object is the empty set $\emptyset$. As such, the sum of parallel relations ${R: X \to Y}$ and $S: X \to Y$ is defined as their union $R + S := R \cup S$, while the zero map $0: X \to Y$ is the empty relation $0 := \emptyset$. The coalgebra modality on $\mathsf{REL}$ is given by finite bags (also called finite multisets). So for a set $X$, let $\oc X$ be the set of all finite bags of $X$. This coalgebra modality has the Seely isomorphisms, so $\oc(X \sqcup Y) \cong \oc X \times \oc Y$ and $ \oc \emptyset \cong \lbrace \ast \rbrace$. The deriving transformation ${\mathsf{d}_X: \oc X \times X \to \oc X}$ is defined as the relation which adds an element into the bag: \begin{align*} \mathsf{d}_X := \left \lbrace \left( (B, x), B \sqcup \llbracket x \rrbracket \right) \vert~ B \in \oc X, x \in X \right \rbrace \subset (\oc X \times X ) \times \oc X \end{align*} where $\llbracket x \rrbracket$ is the one-element bag and $\sqcup$ is the (necessarily disjoint) union of finite bags. For more details on this differential category, see \cite[Section 2.5.1]{blute2006differential}. By applying Theorem \ref{thm:rdc_to_dc}, $\mathsf{REL}$ is also a reverse differential category where the reverse deriving transformation $\mathsf{r}_X: \oc X \times \oc X \to X$ is the relation that relates two bags that differ by exactly one element to that element: \begin{align*} \mathsf{r}_X := \left \lbrace \left( (B, B \sqcup \llbracket x \rrbracket), x \right) \vert~ B \in \oc X, x \in X \right \rbrace \subset (\oc X \times \oc X ) \times X \end{align*} \end{example} \begin{example} \normalfont The above example generalizes to the weighted relational model \cite{journal:weighted-relational,ong2017quantitative}. The underlying category is the biproduct completion of a complete commutative semiring. 
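The finite-bag deriving and reverse deriving transformations of the $\mathsf{REL}$ example can be sketched concretely as follows. This is a minimal Python illustration under our own encoding (bags as sorted tuples); the paper's relations are rendered as functions that produce the related elements.

```python
from collections import Counter

def insert(B, x):
    """B ⊔ ⟦x⟧: add one copy of x to a finite bag (bags as sorted tuples)."""
    return tuple(sorted(B + (x,)))

def d(B, x):
    """The deriving transformation relates (B, x) to B ⊔ ⟦x⟧."""
    return insert(B, x)

def r(B, Bp):
    """The reverse deriving transformation relates (B, B') to x exactly
    when B' = B ⊔ ⟦x⟧; returns the set of all such x (empty or a singleton)."""
    cB, cBp = Counter(B), Counter(Bp)
    extra = cBp - cB
    if sum(extra.values()) == 1 and not (cB - cBp):
        return set(extra.elements())
    return set()
```

For instance, `r((1, 1, 2), (1, 1, 2, 3))` recovers the element `3` that was inserted, while bags that do not differ by exactly one element are related to nothing.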
Briefly recall that a complete commutative semiring is a commutative semiring where one can have sums indexed by arbitrary sets $I$, which we denote by $\sum \limits_{i \in I}$, such that these summation operations satisfy certain distributivity and partition axioms, see \cite[Section III.B]{ong2017quantitative}. For a complete commutative semiring $R$, define the category $R^\Pi$ whose objects are sets $X$ and where a map from $X$ to $Y$ is a set function $f: X \times Y \to R$, and where composition and identities are defined as in \cite[Section III.B]{ong2017quantitative}. Note that when we take the two-element Boolean algebra $B = \lbrace 0, 1 \rbrace$, then $B^\Pi$ is isomorphic to $\mathsf{REL}$. For any complete commutative semiring $R$, $R^\Pi$ is a symmetric monoidal category where the monoidal product is given by the Cartesian product of sets, $X \otimes Y = X \times Y$, and where the monoidal unit is a chosen singleton $\lbrace \ast \rbrace$. $R^\Pi$ is also a self-dual compact closed category where for a set $X$, its cup $\cup_X: X \times X \to \lbrace \ast \rbrace$ and cap $\cap_X: \lbrace \ast \rbrace \to X \times X$ are the functions defined as follows: \begin{align*} \cup_X: (X \times X) \times \lbrace \ast \rbrace &\to R & \cap_X: \lbrace \ast \rbrace \times (X \times X) &\to R \\ \left( (x,y), \ast \right) &\mapsto \begin{cases} 0 & \text{ if } x \neq y \\ 1 & \text{ if } x =y \end{cases} & \left(\ast, (x,y) \right) &\mapsto \begin{cases} 0 & \text{ if } x \neq y \\ 1 & \text{ if } x =y \end{cases} \end{align*} $R^\Pi$ is also a differential category. The additive symmetric monoidal structure is induced by the biproduct, which is given by the disjoint union of sets, $X \sqcup Y$, and where the zero object is the empty set $\emptyset$. 
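Composition in $R^\Pi$ is matrix-like: $(f;g)(x,z) = \sum_{y \in Y} f(x,y) \cdot g(y,z)$. The following Python sketch (our own illustration) takes $R$ to be the natural numbers; $\mathbb{N}$ is not a complete semiring, but over finite middle sets the indexed sums are finite, so this suffices for experimentation. It shows how the cups and caps above behave differently from $\mathsf{REL}$: the cap-then-cup loop now counts the size of $X$ rather than collapsing to a point.

```python
def compose(f, g, Y):
    """Composition in R^Π: (f;g)(x,z) = Σ_{y∈Y} f(x,y)·g(y,z),
    here with R = ℕ and a finite middle set Y."""
    return lambda x, z: sum(f(x, y) * g(y, z) for y in Y)

STAR = "*"

def cup(x_y, star):
    """cup_X((x,y),*) = 1 if x = y, and 0 otherwise."""
    x, y = x_y
    return 1 if x == y else 0

def cap(star, x_y):
    """cap_X(*,(x,y)) = 1 if x = y, and 0 otherwise."""
    x, y = x_y
    return 1 if x == y else 0

X = {1, 2, 3}
# The cap-then-cup loop sums 1 over the diagonal of X×X, giving |X|.
loop = compose(cap, cup, {(x, y) for x in X for y in X})
```

In the Boolean case $B^\Pi \cong \mathsf{REL}$, this loop value would be truncated back to $1$.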
The sum of ${f: X \to Y}$ and $g: X \to Y$ (which recall are functions $X \times Y \to R$) is defined pointwise, $(f+g)(x,y) = f(x,y) + g(x,y)$, while the zero map $0: X \to Y$ is the function which maps everything to zero, $0(x,y) = 0$. The coalgebra modality on $R^\Pi$ is again given by finite bags, that is, for a set $X$, let $\oc X$ be the set of all finite bags of $X$, and this coalgebra modality has the Seely isomorphisms as in the previous example. The deriving transformation ${\mathsf{d}_X: \oc X \times X \to \oc X}$ is defined as follows: \begin{align*} \mathsf{d}_X: (\oc X \times X) \times \oc X &\to R \\ \left( (B,x), B^\prime \right) &\mapsto \begin{cases} 0 & \text{ if } B \sqcup \llbracket x \rrbracket \neq B^\prime \\ \vert B^\prime \vert = \vert B \vert + 1 & \text{ if } B \sqcup \llbracket x \rrbracket = B^\prime \end{cases} \end{align*} where $\vert B \vert$ is the cardinality of the finite bag. The value $\vert B^\prime \vert = \vert B \vert + 1$ accounts for the fact that there are $n+1$ possible ways of inserting an element into a bag of size $n$ if one keeps track of positions. Of course, this factor disappears in the case that the semiring is additively idempotent (i.e. $1+1 = 1$), such as the two-element Boolean algebra $B$, which is why the factor does not appear in the differential structure of $\mathsf{REL}$ as described in the previous example. For more details on this differential category, see \cite[Section 6]{lemay2020convenient}. 
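The weighted deriving transformation just described can be sketched as a small Python function (an illustration of ours, again with $R = \mathbb{N}$ and bags as tuples): it returns the coefficient $\vert B \vert + 1$ when the target bag is obtained by inserting $x$, and $0$ otherwise.

```python
from collections import Counter

def d(B, x, Bp):
    """The weighted deriving transformation d_X((B,x),B') over ℕ:
    returns |B|+1 when B' = B ⊔ ⟦x⟧, and 0 otherwise."""
    if Counter(B) + Counter([x]) == Counter(Bp):
        return len(Bp)  # = |B| + 1
    return 0
```

Truncating the result with `min(d(...), 1)` recovers the additively idempotent (Boolean) case, i.e. the $\mathsf{REL}$ deriving transformation of the previous example.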
By applying Theorem \ref{thm:rdc_to_dc}, $R^\Pi$ is also a reverse differential category where the reverse deriving transformation $\mathsf{r}_X: \oc X \times \oc X \to X$ is defined as follows: \begin{align*} \mathsf{r}_X: (\oc X \times \oc X) \times X &\to R \\ \left( (B,B^\prime), x \right) &\mapsto \begin{cases} 0 & \text{ if } B \sqcup \llbracket x \rrbracket \neq B^\prime \\ \vert B^\prime \vert = \vert B \vert + 1 & \text{ if } B \sqcup \llbracket x \rrbracket = B^\prime \end{cases} \end{align*} \end{example} \begin{example} \normalfont Let $k$ be a field and $\mathsf{FVEC}_k$ the category of finite dimensional $k$-vector spaces and $k$-linear maps between them. While $\mathsf{FVEC}_k$ is a compact closed category, it is not canonically self-dual compact closed, since to give a self-dual structure corresponds to providing a basis. So let $\mathsf{FVEC}^{\mathcal{B}}_k$ be the category whose objects are pairs $(V, B_V)$ consisting of a finite dimensional $k$-vector space $V$ and a basis $B_V$ of $V$, and whose maps are arbitrary $k$-linear maps between the underlying vector spaces. $\mathsf{FVEC}^{\mathcal{B}}_k$ is a self-dual compact closed category, where the tensor product is defined as: \[ (V, B_V) \otimes (W, B_W) = (V \otimes W, B_V \otimes B_W = \lbrace v \otimes w \vert~ v \in B_V, w \in B_W \rbrace) \] the monoidal unit is $(k, \lbrace 1 \rbrace)$, and where the cup $\cup_{(V,B_V)}: (V, B_V) \otimes (V, B_V) \to (k, \lbrace 1 \rbrace)$ is defined on basis elements $v, w \in B_V$ as follows: \begin{align*} \cup_{(V,B_V)}(v,w) = \begin{cases} 1 & \text{if } v = w \\ 0 & \text{if } v \neq w \end{cases} \end{align*} and the cap $\cap_{(V,B_V)}: (k, \lbrace 1 \rbrace) \to (V, B_V) \otimes (V, B_V)$ is the $k$-linear map defined as: \begin{align*} \cap_{(V,B_V)}(1) = \sum_{v \in B_V} v \otimes v \end{align*} which is well-defined since $B_V$ is a finite set. 
$\mathsf{FVEC}^{\mathcal{B}}_k$ is also an additive symmetric monoidal category where the additive structure is induced by the direct sum of vector spaces (which is the categorical biproduct): \[ (V, B_V) \oplus (W, B_W) = (V \oplus W, B_V \oplus B_W = \lbrace v \oplus 0 \vert~ v \in B_V \rbrace \cup \lbrace 0 \oplus w \vert~ w \in B_W \rbrace ) \] and where $(0, \emptyset)$ is the zero object. Unfortunately, as explained in \cite{lemayfhilb}, $\mathsf{FVEC}^{\mathcal{B}}_k$ does not usually have a (non-trivial) differential category structure. This problem is solved when we consider $k = \mathbb{Z}_2$, as was done in \cite{hyland2003glueing,lemayfhilb}. $\mathsf{FVEC}^{\mathcal{B}}_{\mathbb{Z}_2}$ is then a differential category where the coalgebra modality is induced by the exterior algebra, which is defined as follows: \[ \oc(V, B_V) = \left( \mathsf{E}(V) = \bigoplus^{\mathsf{dim}(V)}_{n=0} \bigwedge^n V , \mathsf{E}(B_V) = \lbrace v_1 \wedge \hdots \wedge v_n \vert~ n\in \mathbb{N}, v_i \in B_V \text{ pairwise distinct} \rbrace \right) \] Recall that the wedge product satisfies $v \wedge v = 0$. Usually, one also has that the wedge product is anticommutative, that is, $v \wedge w = -w \wedge v$. But in the case of $\mathbb{Z}_2$, $1=-1$ and therefore $v \wedge w = w \wedge v$, which is key to obtaining a coalgebra modality. The deriving transformation $\mathsf{d}_{(V,B_V)}: \oc(V, B_V) \otimes (V, B_V) \to \oc(V, B_V)$ is defined on basis elements as follows: \begin{align*} \mathsf{d}_{(V,B_V)} \left( (v_1 \wedge \hdots \wedge v_n) \otimes v \right) = v_1 \wedge \hdots \wedge v_n \wedge v \end{align*} See \cite[Example 2.6.(iii)]{lemayfhilb} for more details on this differential category. 
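Since we work over $\mathbb{Z}_2$ and the wedge product is commutative, a nonzero basis monomial of $\mathsf{E}(V)$ is determined by its underlying set of distinct basis vectors. The following Python sketch (our own encoding, not the paper's) exploits this: monomials are frozensets of basis labels, `None` stands for the zero vector, and the deriving transformation simply wedges on one more factor.

```python
def wedge(S, T):
    """Wedge product of two basis monomials of E(V) over Z_2. A monomial
    v_1 ∧ … ∧ v_n is encoded as a frozenset of basis labels (order is
    irrelevant since 1 = -1 in Z_2), and None encodes the zero vector.
    Sharing a factor forces the product to zero, since v ∧ v = 0."""
    if S is None or T is None or (S & T):
        return None
    return S | T

def d(S, v):
    """The deriving transformation on basis elements:
    d((v_1 ∧ … ∧ v_n) ⊗ v) = v_1 ∧ … ∧ v_n ∧ v."""
    return wedge(S, frozenset([v]))
```

For instance, `d(frozenset({"a", "b"}), "a")` is zero because the factor `a` is repeated, mirroring $v \wedge v = 0$.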
By applying Theorem \ref{thm:rdc_to_dc}, $\mathsf{FVEC}^{\mathcal{B}}_{\mathbb{Z}_2}$ is also a reverse differential category where the reverse deriving transformation $\mathsf{r}_{(V,B_V)}: \oc(V, B_V) \otimes \oc(V, B_V) \to (V, B_V)$ is defined on basis elements as follows: \begin{align*} \mathsf{r}_{(V,B_V)} \left( (v_1 \wedge \hdots \wedge v_n) \otimes (w_1 \wedge \hdots \wedge w_m) \right) = \begin{cases} v & \text{if } v_1 \wedge \hdots \wedge v_n \wedge v = w_1 \wedge \hdots \wedge w_m \text{ for some $v \in B_V$}\\ 0 & \text{otherwise } \end{cases} \end{align*} \end{example} \begin{example}\label{ex:quantum1} \normalfont Pagani, Selinger, and Valiron's categorical model of a quantum lambda calculus, $\overline{\mathbf{CPMs}}^\oplus$ \cite[Section 4.2]{journal:selinger-valiron-fully-abstract-quantum}, is a reverse differential storage category. The objects of $\overline{\mathbf{CPMs}}^\oplus$ are families of pairs of natural numbers and subgroups of permutations, while the maps of $\overline{\mathbf{CPMs}}^\oplus$ can be interpreted as completely positive (continuous) module homomorphisms \cite[Proposition 16]{journal:selinger-valiron-fully-abstract-quantum}. $\overline{\mathbf{CPMs}}^\oplus$ is self-dual compact closed \cite[Section 4.3.3]{journal:selinger-valiron-fully-abstract-quantum} and has (in)finite biproducts \cite[Section 4.3.1]{journal:selinger-valiron-fully-abstract-quantum}, and so is an additive symmetric monoidal category as well. Furthermore, $\overline{\mathbf{CPMs}}^\oplus$ is a Lafont category, that is, $\overline{\mathbf{CPMs}}^\oplus$ has a coalgebra modality $\oc$ which is given by cofree cocommutative comonoids, and such coalgebra modalities are called free exponential modalities \cite[Section 4.3.4]{journal:selinger-valiron-fully-abstract-quantum}. 
By \cite[Theorem 21]{lemay:LIPIcs.CALCO.2021.19}, in the presence of biproducts, any free exponential modality has a (canonical) deriving transformation, and so any Lafont category with biproducts is a differential (storage) category. Therefore, $\overline{\mathbf{CPMs}}^\oplus$ is a differential (storage) category. Since $\overline{\mathbf{CPMs}}^\oplus$ is also self-dual compact closed, by applying Theorem \ref{thm:rdc_to_dc}, we conclude that $\overline{\mathbf{CPMs}}^\oplus$ is also a reverse differential category. In future work, it would be interesting to study in more detail the consequences of reverse differential structure in this model of quantum lambda calculus. \end{example} \begin{example}\label{ex:quantum2} \normalfont There is another interesting relationship between reverse differential categories and categorical quantum mechanics. Every reverse differential storage category whose coalgebra modality is a free exponential modality is a model of Vicary's categorical quantum harmonic oscillator \cite[Definition 3.1]{vicary2008categorical}. We note, however, that the converse is not necessarily true since the required base category for a categorical quantum harmonic oscillator need only be a $\dagger$-symmetric monoidal category instead of a ($\dagger$-)compact closed category. That said, as discussed in \cite[Section 6]{vicary2008categorical}, in future work it would be interesting to revisit Vicary's categorical quantum harmonic oscillators from the point of view of (reverse) differential categories. \end{example} \subsection{From MRDCs to CRDCs}\label{sec:cokleisliRDC} In this section we prove that our definition satisfies requirement 2 of an MRDC, that is, that the coKleisli category of an MRDC is a CRDC. First, note that we already have part of what we need. By Theorem \ref{thm:characterization_of_crdc}, to give a CRDC is equivalent to giving a CDC and a contextual linear dagger. 
Moreover, by Theorem \ref{thm:rdc_to_dc}, any MRDC is a differential category, and by Theorem \ref{coKleisliCDC}, if $\X$ is a differential category then its coKleisli category has the structure of a CDC. Putting this together, for any MRDC, its coKleisli category is a CDC. Thus, all that remains to show is that the coKleisli category has a contextual linear dagger, and for this, we need its linear fibration $\L[\X_{\oc}]$ to be a dagger fibration. However, Theorem \ref{thm:fibration_equivalence} showed that there is an isomorphism of fibrations $\L[\X_{\oc}] \cong \L_{\oc}[\X]$, so it suffices to give a dagger fibration structure on $\L_{\oc}[\X]$: \begin{lemma}\label{lemma:compactclosed_to_dagger} If $\mathbb{X}$ is a self-dual compact closed category, then the fibration $\L_{\oc}[\X]$ has a dagger fibration structure, where for a map $f: \oc X \otimes A \to B$, its dagger $f^{\dagger[X]}: \oc X \otimes B \to A$ is defined as the following composite: \begin{align*} \begin{array}[c]{c} f^{\dagger[X]} := \xymatrixcolsep{3.5pc}\xymatrix{\oc X \otimes B \ar[r]^-{1_{\oc X} \otimes \cap_A \otimes 1_B} & \oc X \otimes A \otimes A \otimes B \ar[r]^-{f \otimes \sigma_{A,B}} & B \otimes B \otimes A \ar[r]^-{\cup_B \otimes 1_A} & A } \end{array} && \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=none] (0) at (1.5, 3.25) {}; \node [style=none] (1) at (1, 2.25) {}; \node [style=component] (2) at (1, 2.5) {$f$}; \node [style=object] (3) at (0.5, 4.25) {$\oc X$}; \node [style=none] (4) at (2, 3.25) {}; \node [style=port] (5) at (2, 0.5) {$A$}; \node [style=object] (6) at (2.5, 4.25) {$B$}; \node [style=none] (7) at (2.5, 2.25) {}; \node [style=object] (8) at (-1.5, 3) {$\oc X$}; \node [style=object] (9) at (-0.5, 3) {$B$}; \node [style=component] (10) at (-1, 2) {$f^\dagger$}; \node [style=object] (11) at (-1, 1) {$A$}; \node [style=port] (12) at (0, 2) {$=$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire, in=-90, 
out=30, looseness=1.50] (2) to (0.center); \draw [style=wire] (1.center) to (2); \draw [style=wire, in=-90, out=150] (2) to (3); \draw [style=wire, bend left=90, looseness=2.00] (0.center) to (4.center); \draw [style=wire] (4.center) to (5); \draw [style=wire, bend right=90, looseness=2.00] (1.center) to (7.center); \draw [style=wire] (6) to (7.center); \draw [style=wire] (10) to (11); \draw [style=wire, in=165, out=-90] (8) to (10); \draw [style=wire, in=-90, out=15] (10) to (9); \end{pgfonlayer} \end{tikzpicture} \end{array} \end{align*} Furthermore, for any map $f: A \to B$ in $\mathbb{X}$, the dagger of $e_X \otimes f: \oc X \otimes A \to B$ is $(e_X \otimes f)^{\dagger[X]} = e_X \otimes f^\ast: \oc X \otimes B \to A$, where $f^\ast: B \to A$ is defined as in Lemma \ref{sliding}. \end{lemma} \begin{proof} Per Example \ref{ex:!daggerfibration}, it suffices to prove that the dagger operation satisfies contravariant functoriality, involution, and change of base. We begin by showing that the dagger is contravariant on composition: \begin{align*} \left( \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=duplicate] (1) at (9.5, 1) {$\Delta$}; \node [style=object] (2) at (9.5, 1.75) {$\oc X$}; \node [style=component] (3) at (10.5, 0) {$f$}; \node [style=object] (4) at (11, 1.75) {$A$}; \node [style=component] (5) at (9.75, -1.25) {$g$}; \node [style=object] (6) at (9.75, -2) {$C$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (2) to (1); \draw [style=wire, in=-90, out=15, looseness=0.75] (3) to (4); \draw [style=wire] (5) to (6); \draw [style=wire, in=30, out=-90] (3) to (5); \draw [style=wire, in=165, out=-150] (1) to (5); \draw [style=wire, in=165, out=-15, looseness=1.25] (1) to (3); \end{pgfonlayer} \end{tikzpicture} \end{array} \right)^{\dagger[X]} = \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=duplicate] (13) at (-2.75, -3.25) {$\Delta$}; \node [style=object] (14) at 
(-2.75, -2.5) {$\oc X$}; \node [style=component] (15) at (-1.75, -4.25) {$f$}; \node [style=component] (17) at (-2.5, -5.5) {$g$}; \node [style=none] (19) at (-1.25, -3.5) {}; \node [style=none] (20) at (-0.75, -3.5) {}; \node [style=port] (21) at (-0.75, -8.25) {$A$}; \node [style=none] (22) at (-2.5, -6) {}; \node [style=object] (23) at (-0.25, -2.5) {$B$}; \node [style=none] (24) at (-0.25, -6) {}; \node [style=none] (25) at (9, -6) {}; \node [style=none] (26) at (8.5, -7) {}; \node [style=component] (27) at (8.5, -6.75) {$f$}; \node [style=none] (29) at (9.5, -6) {}; \node [style=port] (30) at (9.5, -8.75) {$A$}; \node [style=none] (32) at (10, -7) {}; \node [style=none] (33) at (9.5, -3.25) {}; \node [style=none] (34) at (9, -4.25) {}; \node [style=component] (35) at (9, -4) {$g$}; \node [style=none] (37) at (10, -3.25) {}; \node [style=object] (39) at (10.5, -2.25) {$B$}; \node [style=none] (40) at (10.5, -4.25) {}; \node [style=duplicate] (41) at (7.75, -3) {$\Delta$}; \node [style=object] (42) at (7.75, -2.25) {$\oc X$}; \node [style=duplicate] (59) at (2.25, -2) {$\Delta$}; \node [style=object] (60) at (2.25, -1.25) {$\oc X$}; \node [style=component] (61) at (4, -3.25) {$f$}; \node [style=component] (62) at (2, -5.5) {$g$}; \node [style=none] (63) at (4.5, -2.75) {}; \node [style=none] (64) at (5, -2.75) {}; \node [style=port] (65) at (5, -9.25) {$A$}; \node [style=none] (66) at (2, -6) {}; \node [style=object] (67) at (5.75, -1.25) {$B$}; \node [style=none] (68) at (5.75, -6.25) {}; \node [style=none] (69) at (3.25, -4.25) {}; \node [style=none] (70) at (4, -4.25) {}; \node [style=none] (71) at (2.75, -4.25) {}; \node [style=none] (72) at (3.25, -4.25) {}; \node [style=object] (73) at (0.5, -5) {$=$}; \node [style=object] (74) at (6.5, -5) {$=$}; \node [style=duplicate] (75) at (12.5, -3.75) {$\Delta$}; \node [style=object] (76) at (12.5, -3) {$\oc X$}; \node [style=component] (77) at (13.5, -4.75) {$f^\dagger$}; \node [style=object] (78) at (14, -3) 
{$B$}; \node [style=component] (79) at (12.75, -6) {$g^\dagger$}; \node [style=object] (80) at (12.75, -7) {$A$}; \node [style=object] (81) at (11.25, -5) {$=$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (14) to (13); \draw [style=wire, in=30, out=-90] (15) to (17); \draw [style=wire, in=165, out=-150] (13) to (17); \draw [style=wire, in=165, out=-15, looseness=1.25] (13) to (15); \draw [style=wire, bend left=90, looseness=2.00] (19.center) to (20.center); \draw [style=wire] (20.center) to (21); \draw [style=wire, in=15, out=-90] (19.center) to (15); \draw [style=wire, bend right=90, looseness=2.00] (22.center) to (24.center); \draw [style=wire] (23) to (24.center); \draw [style=wire] (17) to (22.center); \draw [style=wire, in=-90, out=30, looseness=1.50] (27) to (25.center); \draw [style=wire] (26.center) to (27); \draw [style=wire, bend left=90, looseness=2.00] (25.center) to (29.center); \draw [style=wire] (29.center) to (30); \draw [style=wire, bend right=90, looseness=2.00] (26.center) to (32.center); \draw [style=wire, in=-90, out=30, looseness=1.50] (35) to (33.center); \draw [style=wire] (34.center) to (35); \draw [style=wire, bend left=90, looseness=2.00] (33.center) to (37.center); \draw [style=wire, bend right=90, looseness=2.00] (34.center) to (40.center); \draw [style=wire] (39) to (40.center); \draw [style=wire] (37.center) to (32.center); \draw [style=wire] (42) to (41); \draw [style=wire, in=165, out=-30] (41) to (35); \draw [style=wire, in=165, out=-135] (41) to (27); \draw [style=wire] (60) to (59); \draw [style=wire, in=165, out=-150] (59) to (62); \draw [style=wire, in=165, out=-15, looseness=1.25] (59) to (61); \draw [style=wire, bend left=90, looseness=2.00] (63.center) to (64.center); \draw [style=wire] (64.center) to (65); \draw [style=wire, in=15, out=-90] (63.center) to (61); \draw [style=wire, bend right=90, looseness=2.00] (66.center) to (68.center); \draw [style=wire] (67) to (68.center); \draw [style=wire] (62) 
to (66.center); \draw [style=wire, bend right=90, looseness=2.00] (69.center) to (71.center); \draw [style=wire, bend left=90, looseness=2.00] (70.center) to (72.center); \draw [style=wire] (61) to (70.center); \draw [style=wire, in=15, out=-90] (71.center) to (62); \draw [style=wire] (76) to (75); \draw [style=wire, in=-90, out=15, looseness=0.75] (77) to (78); \draw [style=wire] (79) to (80); \draw [style=wire, in=30, out=-90] (77) to (79); \draw [style=wire, in=165, out=-150] (75) to (79); \draw [style=wire, in=165, out=-15, looseness=1.25] (75) to (77); \end{pgfonlayer} \end{tikzpicture} \end{array} \end{align*} Next we show that the dagger preserves identities: \begin{align*} \left( \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (17) at (9.75, 1.75) {$\oc X$}; \node [style=object] (19) at (10.5, 1.75) {$A$}; \node [style=object] (21) at (10.5, -0.25) {$A$}; \node [style=component] (22) at (9.75, 0.5) {$e$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (19) to (21); \draw [style=wire] (17) to (22); \end{pgfonlayer} \end{tikzpicture} \end{array} \right)^{\dagger[X]} = \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (82) at (2.25, 9.75) {$\oc X$}; \node [style=object] (83) at (4, 9.75) {$A$}; \node [style=component] (85) at (2.25, 8.5) {$e$}; \node [style=object] (86) at (9.5, 9.75) {$\oc X$}; \node [style=object] (87) at (10.25, 9.75) {$A$}; \node [style=object] (88) at (10.25, 7.75) {$A$}; \node [style=component] (89) at (9.5, 8.5) {$e$}; \node [style=none] (90) at (3, 9) {}; \node [style=none] (91) at (3, 8.25) {}; \node [style=none] (93) at (3.5, 9) {}; \node [style=port] (94) at (3.5, 6.25) {$A$}; \node [style=none] (95) at (4, 8.25) {}; \node [style=object] (96) at (4.75, 8) {$=$}; \node [style=object] (97) at (5.5, 9.75) {$\oc X$}; \node [style=component] (98) at (5.5, 8.5) {$e$}; \node [style=none] (99) at (7, 8.5) {}; \node [style=none] (100) at 
(7.75, 8.5) {}; \node [style=object] (101) at (7.75, 9.75) {$A$}; \node [style=none] (102) at (6.25, 8.5) {}; \node [style=object] (103) at (6.25, 7) {$A$}; \node [style=none] (104) at (7, 8.5) {}; \node [style=object] (105) at (8.5, 8) {$=$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (82) to (85); \draw [style=wire] (87) to (88); \draw [style=wire] (86) to (89); \draw [style=wire, bend left=90, looseness=2.00] (90.center) to (93.center); \draw [style=wire] (93.center) to (94); \draw [style=wire, bend right=90, looseness=2.00] (91.center) to (95.center); \draw [style=wire] (90.center) to (91.center); \draw [style=wire] (83) to (95.center); \draw [style=wire] (97) to (98); \draw [style=wire, bend right=90, looseness=2.00] (99.center) to (102.center); \draw [style=wire] (102.center) to (103); \draw [style=wire, bend left=90, looseness=2.00] (100.center) to (104.center); \draw [style=wire] (101) to (100.center); \end{pgfonlayer} \end{tikzpicture} \end{array} \end{align*} Next we show that the dagger is involutive using the snake equations: \begin{align*} \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=none] (0) at (1.5, 3.25) {}; \node [style=none] (1) at (1, 2.25) {}; \node [style=component] (2) at (1, 2.5) {$f^\dagger$}; \node [style=object] (3) at (0.5, 4.25) {$\oc X$}; \node [style=none] (4) at (2, 3.25) {}; \node [style=port] (5) at (2, 0.5) {$B$}; \node [style=object] (6) at (2.5, 4.25) {$A$}; \node [style=none] (7) at (2.5, 2.25) {}; \node [style=object] (8) at (-1.5, 3.25) {$\oc X$}; \node [style=object] (9) at (-0.5, 3.25) {$A$}; \node [style=component] (10) at (-1, 2) {${f^\dagger}^\dagger$}; \node [style=object] (11) at (-1, 0.75) {$B$}; \node [style=port] (12) at (0, 2) {$=$}; \node [style=none] (106) at (4.75, 4) {}; \node [style=none] (107) at (4.25, 3) {}; \node [style=component] (108) at (4.25, 3.25) {$f$}; \node [style=object] (109) at (3.75, 5) {$\oc X$}; \node [style=none] (110) at (5.25, 4) 
{}; \node [style=none] (113) at (5.75, 3) {}; \node [style=none] (114) at (5.75, 3.5) {}; \node [style=none] (115) at (6.25, 3.5) {}; \node [style=port] (116) at (6.25, -0.5) {$B$}; \node [style=none] (117) at (5.25, 1.75) {}; \node [style=object] (118) at (6.75, 5) {$A$}; \node [style=none] (119) at (6.75, 1.75) {}; \node [style=port] (120) at (3.25, 2) {$=$}; \node [style=object] (121) at (8, 3.25) {$\oc X$}; \node [style=object] (122) at (9, 3.25) {$A$}; \node [style=component] (123) at (8.5, 2) {$f$}; \node [style=object] (124) at (8.5, 0.75) {$B$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire, in=-90, out=30, looseness=1.50] (2) to (0.center); \draw [style=wire] (1.center) to (2); \draw [style=wire, in=-90, out=150] (2) to (3); \draw [style=wire, bend left=90, looseness=2.00] (0.center) to (4.center); \draw [style=wire] (4.center) to (5); \draw [style=wire, bend right=90, looseness=2.00] (1.center) to (7.center); \draw [style=wire] (6) to (7.center); \draw [style=wire] (10) to (11); \draw [style=wire, in=165, out=-90] (8) to (10); \draw [style=wire, in=-90, out=15] (10) to (9); \draw [style=wire, in=-90, out=30, looseness=1.50] (108) to (106.center); \draw [style=wire] (107.center) to (108); \draw [style=wire, in=-90, out=150] (108) to (109); \draw [style=wire, bend left=90, looseness=2.00] (106.center) to (110.center); \draw [style=wire, bend right=90, looseness=2.00] (107.center) to (113.center); \draw [style=wire, bend left=90, looseness=2.00] (114.center) to (115.center); \draw [style=wire] (115.center) to (116); \draw [style=wire] (114.center) to (113.center); \draw [style=wire, bend right=90, looseness=2.00] (117.center) to (119.center); \draw [style=wire] (118) to (119.center); \draw [style=wire] (110.center) to (117.center); \draw [style=wire] (123) to (124); \draw [style=wire, in=165, out=-90] (121) to (123); \draw [style=wire, in=-90, out=15] (123) to (122); \end{pgfonlayer} \end{tikzpicture} \end{array} \end{align*} Lastly we 
show that the substitution functors preserve the dagger, which is automatic by definition: \begin{align*} \left( \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=component] (0) at (9, 0.25) {$\delta$}; \node [style=object] (1) at (10.5, 1.25) {$A$}; \node [style=component] (2) at (9.75, -1.75) {$f$}; \node [style=object] (3) at (9.75, -2.75) {$B$}; \node [style=function2] (4) at (9, -0.75) {$h$}; \node [style=object] (5) at (9, 1.25) {$\oc X$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (2) to (3); \draw [style=wire, bend right, looseness=1.25] (4) to (2); \draw [style=wire] (0) to (4); \draw [style=wire, in=30, out=-90, looseness=0.75] (1) to (2); \draw [style=wire] (5) to (0); \end{pgfonlayer} \end{tikzpicture} \end{array} \right)^{\dagger[X]} = \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=none] (131) at (-1.25, -4.25) {}; \node [style=none] (132) at (-1.75, -5.25) {}; \node [style=component] (133) at (-1.75, -5) {$f$}; \node [style=none] (135) at (-0.75, -4.25) {}; \node [style=port] (136) at (-0.75, -7) {$A$}; \node [style=object] (137) at (-0.25, -2.25) {$B$}; \node [style=none] (138) at (-0.25, -5.25) {}; \node [style=component] (139) at (-2.5, -3.25) {$\delta$}; \node [style=function2] (140) at (-2.5, -4.25) {$h$}; \node [style=object] (141) at (-2.5, -2.25) {$\oc X$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire, in=-90, out=30, looseness=1.50] (133) to (131.center); \draw [style=wire] (132.center) to (133); \draw [style=wire, bend left=90, looseness=2.00] (131.center) to (135.center); \draw [style=wire] (135.center) to (136); \draw [style=wire, bend right=90, looseness=2.00] (132.center) to (138.center); \draw [style=wire] (137) to (138.center); \draw [style=wire] (139) to (140); \draw [style=wire] (141) to (139); \draw [style=wire, in=165, out=-90] (140) to (133); \end{pgfonlayer} \end{tikzpicture} \end{array} = \begin{array}[c]{c}
\begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=component] (0) at (9, 0.25) {$\delta$}; \node [style=object] (1) at (10.5, 1.25) {$B$}; \node [style=component] (2) at (9.75, -1.75) {$f^\dagger$}; \node [style=object] (3) at (9.75, -2.75) {$A$}; \node [style=function2] (4) at (9, -0.75) {$h$}; \node [style=object] (5) at (9, 1.25) {$\oc X$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (2) to (3); \draw [style=wire, bend right, looseness=1.25] (4) to (2); \draw [style=wire] (0) to (4); \draw [style=wire, in=30, out=-90, looseness=0.75] (1) to (2); \draw [style=wire] (5) to (0); \end{pgfonlayer} \end{tikzpicture} \end{array} \end{align*} So we conclude that $\mathcal{L}_{\oc}[\mathbb{X}]$ has a dagger fibration. Next, for any map $f: A \to B$, using the snake equations and the sliding equations, we compute: \begin{align*} \left( \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=object] (8) at (1, 3) {$A$}; \node [style=component] (10) at (1, 2) {$f$}; \node [style=object] (11) at (1, 1) {$B$}; \node [style=object] (24) at (0, 3) {$\oc X$}; \node [style=component] (25) at (0, 2) {$e$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (10) to (11); \draw [style=wire] (8) to (10); \draw [style=wire] (24) to (25); \end{pgfonlayer} \end{tikzpicture} \end{array} \right)^{\dagger[X]} = \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=none] (0) at (-7.5, 3.25) {}; \node [style=none] (1) at (-7.5, 2.25) {}; \node [style=none] (4) at (-7, 3.25) {}; \node [style=none] (7) at (-6.75, 2.25) {}; \node [style=port] (12) at (-6, 2.75) {$=$}; \node [style=component] (21) at (-7.5, 2.75) {$f$}; \node [style=object] (24) at (-8.25, 4.5) {$\oc X$}; \node [style=component] (25) at (-8.25, 2.75) {$e$}; \node [style=object] (26) at (2.75, 3.75) {$B$}; \node [style=component] (27) at (2.75, 2.75) {$f^\ast$}; \node [style=object] (28) at (2.75, 1.75) {$A$}; \node [style=object] (29) at (1.75,
3.75) {$\oc X$}; \node [style=component] (30) at (1.75, 2.75) {$e$}; \node [style=object] (31) at (-6.75, 4.5) {$B$}; \node [style=object] (32) at (-7, 0.75) {$A$}; \node [style=none] (33) at (-4.25, 3.25) {}; \node [style=none] (34) at (-4.25, 2.25) {}; \node [style=none] (35) at (-3.75, 3.25) {}; \node [style=none] (36) at (-3.25, 2.25) {}; \node [style=component] (37) at (-3.25, 2.75) {$f^\ast$}; \node [style=object] (38) at (-5, 4.5) {$\oc X$}; \node [style=component] (39) at (-5, 2.75) {$e$}; \node [style=object] (40) at (-3.25, 4.5) {$B$}; \node [style=object] (41) at (-3.75, 0.75) {$A$}; \node [style=port] (42) at (-2.5, 2.75) {$=$}; \node [style=none] (43) at (-0.25, 3.25) {}; \node [style=none] (44) at (-0.25, 2.25) {}; \node [style=none] (45) at (-0.75, 3.25) {}; \node [style=none] (46) at (0.25, 2.25) {}; \node [style=component] (47) at (0.25, 2.75) {$f^\ast$}; \node [style=object] (48) at (-1.5, 4.5) {$\oc X$}; \node [style=component] (49) at (-1.5, 2.75) {$e$}; \node [style=object] (50) at (0.25, 4.5) {$B$}; \node [style=object] (51) at (-0.75, 0.75) {$A$}; \node [style=port] (52) at (1, 2.75) {$=$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire, bend left=90, looseness=2.00] (0.center) to (4.center); \draw [style=wire, bend right=90, looseness=2.00] (1.center) to (7.center); \draw [style=wire] (21) to (1.center); \draw [style=wire] (24) to (25); \draw [style=wire] (0.center) to (21); \draw [style=wire] (27) to (28); \draw [style=wire] (26) to (27); \draw [style=wire] (29) to (30); \draw [style=wire] (31) to (7.center); \draw [style=wire] (4.center) to (32); \draw [style=wire, bend left=90, looseness=2.00] (33.center) to (35.center); \draw [style=wire, bend right=90, looseness=2.00] (34.center) to (36.center); \draw [style=wire] (38) to (39); \draw [style=wire] (35.center) to (41); \draw [style=wire] (40) to (37); \draw [style=wire] (37) to (36.center); \draw [style=wire] (33.center) to (34.center); \draw [style=wire, bend right=90, 
looseness=2.00] (43.center) to (45.center); \draw [style=wire, bend right=90, looseness=2.00] (44.center) to (46.center); \draw [style=wire] (48) to (49); \draw [style=wire] (45.center) to (51); \draw [style=wire] (50) to (47); \draw [style=wire] (47) to (46.center); \draw [style=wire] (43.center) to (44.center); \end{pgfonlayer} \end{tikzpicture} \end{array} \end{align*} So the desired equality holds. \end{proof} The following then follows from the remarks above: \begin{corollary}\label{coKlielsicondag} Let $\mathbb{X}$ be a differential category which is self-dual compact closed. Then the coKleisli category $\mathbb{X}_\oc$ is a Cartesian differential category with a contextual linear dagger, where for a map $\llbracket f \rrbracket: \oc(X \times A) \to B$ which is linear in context $X$, its dagger $\llbracket f^{\dagger[X]} \rrbracket: \oc (X \times B) \to A$ is defined as follows: \begin{align*} \begin{array}[c]{c} \llbracket f^{\dagger[X]} \rrbracket = \mathsf{E}_X\left( \mathsf{E}^{-1}_X(\llbracket f \rrbracket)^{\dagger[X]} \right) \end{array}&& \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=none] (0) at (4.75, 2.75) {}; \node [style=none] (1) at (4, 0.25) {}; \node [style=none] (4) at (5.75, 2.75) {}; \node [style=port] (5) at (5.75, -0.75) {$A$}; \node [style=none] (7) at (5.25, 0.25) {}; \node [style=object] (8) at (1, 3) {$\oc(X \times B)$}; \node [style=component] (10) at (1, 2) {$f^\dagger$}; \node [style=object] (11) at (1, 1) {$A$}; \node [style=port] (12) at (2.25, 2) {$=$}; \node [style=object] (15) at (4.25, 5.75) {$\oc (X \times B)$}; \node [style=differential] (16) at (4.25, 5) {{\bf =\!=\!=\!=}}; \node [style=component] (17) at (5.25, 4) {$\pi_1$}; \node [style=function2] (18) at (3.25, 4) {$\pi_0$}; \node [style=differential] (19) at (4, 1.5) {{\bf =\!=\!=\!=}}; \node [style=component] (21) at (4, 0.75) {$f$}; \node [style=component] (22) at (4.75, 2.5) {$\iota_1$}; \node [style=function2] (23) at (3.25, 
2.5) {$\iota_0$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire, bend left=90, looseness=2.00] (0.center) to (4.center); \draw [style=wire] (4.center) to (5); \draw [style=wire, bend right=90, looseness=2.00] (1.center) to (7.center); \draw [style=wire] (10) to (11); \draw [style=wire] (8) to (10); \draw [style=wire] (15) to (16); \draw [style=wire, in=90, out=-150, looseness=1.25] (16) to (18); \draw [style=wire, in=90, out=-30, looseness=1.25] (16) to (17); \draw [style=wire] (19) to (21); \draw [style=wire, in=135, out=-90] (23) to (19); \draw [style=wire, in=-90, out=45, looseness=1.25] (19) to (22); \draw [style=wire] (21) to (1.center); \draw [style=wire] (0.center) to (22); \draw [style=wire] (18) to (23); \draw [style=wire] (17) to (7.center); \end{pgfonlayer} \end{tikzpicture} \end{array} \end{align*} where $\mathsf{E}_X$ and $\mathsf{E}^{-1}_X$ are defined in Corollary \ref{cor:fibres-equiv}, and the $\dagger[X]$ on the right-hand side is defined as in Lemma \ref{lemma:compactclosed_to_dagger}. \end{corollary} \begin{proof} It follows from the equivalence of Theorem \ref{thm:fibration_equivalence} that, by giving a dagger fibration on $\mathcal{L}_\oc[\mathbb{X}]$ (Lemma \ref{lemma:compactclosed_to_dagger}), we obtain a dagger fibration on $\mathcal{L}[\mathbb{X}_\oc]$ defined as follows: \[ \xymatrixcolsep{5pc}\xymatrix{ \mathcal{L}[\mathbb{X}_\oc] \ar[r]^-{\mathsf{E}^{-1}} & \mathcal{L}_\oc[\mathbb{X}] \ar[r]^-{(-)^\dagger} & \mathcal{L}_\oc[\mathbb{X}]^\ast \ar[r]^-{\mathsf{E}^\ast} & \mathcal{L}[\mathbb{X}_\oc]^\ast } \] Zooming in on the fibres, we have that the dagger on maps which are linear in context is defined as $\llbracket f^{\dagger[X]} \rrbracket = \mathsf{E}_X\left( \mathsf{E}^{-1}_X(\llbracket f \rrbracket)^{\dagger[X]} \right)$. It remains to show that each fibre also has dagger biproducts.
First note that in the fibres, the projection maps and injection maps are respectively: \begin{align*} \llbracket 1_X \times \pi_i \rrbracket = \varepsilon_{X \times (A_0 \times A_1)}; (1_X \times \pi_i): \oc\left( X \times (A_0 \times A_1) \right) \to A_i \\ \llbracket 1_X \times \iota_i \rrbracket = \varepsilon_{X \times A_i}; (1_X \times \iota_i): \oc (X \times A_i) \to A_0 \times A_1 \end{align*} By applying $\mathsf{E}^{-1}_X$ to the projection we obtain the following: \begin{align*} \mathsf{E}^{-1}_X \left( \llbracket 1_X \times \pi_i \rrbracket \right) = e_X \otimes \pi_i: \oc X \otimes (A_0 \times A_1) \to A_i \end{align*} By Lemma \ref{lemma:compactclosed_to_dagger}, its dagger is given by the dual: \begin{align*} \mathsf{E}^{-1}_X \left( \llbracket 1_X \times \pi_i \rrbracket \right)^{\dagger[X]} = (e_X \otimes \pi_i)^{\dagger[X]} = e_X \otimes \pi_i^\ast: \oc X \otimes A_i \to A_0 \times A_1 \end{align*} However, by \cite{houston2008finite}, the duals of the projections are the injections (and vice versa). So: \begin{align*} \mathsf{E}^{-1}_X \left( \llbracket 1_X \times \pi_i \rrbracket \right)^{\dagger[X]} = e_X \otimes \pi_i^\ast = e_X \otimes \iota_i \end{align*} Lastly, applying $\mathsf{E}_X$ we finally obtain that: \begin{align*} \mathsf{E}_X\left( \mathsf{E}^{-1}_X \left( \llbracket 1_X \times \pi_i \rrbracket \right)^{\dagger[X]} \right) = \mathsf{E}_X\left( e_X \otimes \iota_i \right) = \varepsilon_{X \times A_i}; (1_X \times \iota_i) = \llbracket 1_X \times \iota_i \rrbracket \end{align*} So we conclude that $\llbracket 1_X \times \pi_i \rrbracket^{\dagger[X]} = \llbracket 1_X \times \iota_i \rrbracket$, and therefore each fibre $\mathcal{L}[X]$ has dagger biproducts. Thus, $\mathbb{X}_\oc$ is a Cartesian differential category with a contextual linear dagger.
\end{proof} We then obtain one of the main results of this paper: \begin{theorem}\label{coKleisliCRDC} Let $\mathbb{X}$ be a reverse differential category with coalgebra modality $(\oc, \delta, \varepsilon, \Delta, e)$ and reverse deriving transformation $\mathsf{r}: \oc A \otimes \oc A \to A$, and finite (bi)products (which we denote here using the product notation). Then the coKleisli category $\mathbb{X}_\oc$ is a Cartesian reverse differential category with Cartesian left additive structure defined in Section \ref{cokleislisection} and reverse differential combinator $\mathsf{R}$ defined as follows on a coKleisli map $\llbracket f \rrbracket: \oc A \to B$: \begin{align*} \llbracket \mathsf{R}[f] \rrbracket := \xymatrixcolsep{3pc}\xymatrix{\oc(A \times B) \ar[r]^-{\chi_{A \times B}} & \oc A \otimes \oc B \ar[r]^-{1_{\oc A} \otimes \varepsilon_B} & \oc A \otimes B \ar[r]^-{1_{\oc A} \otimes \llbracket f \rrbracket^\ast} & \oc A \otimes \oc A \ar[r]^-{\mathsf{r}_A} & A} && \begin{array}[c]{c} \llbracket \mathsf{R}[f] \rrbracket \end{array}= \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=component] (0) at (7.5, 1.25) {$\varepsilon$}; \node [style=differential] (1) at (7, -1) {{\bf \aquarius\!\aquarius\!\aquarius}}; \node [style=object] (2) at (7, -1.75) {$A$}; \node [style=duplicate] (3) at (7, 2.5) {$\chi$}; \node [style=object] (4) at (7, 3.25) {$\oc (A \times B)$}; \node [style=component] (5) at (7.5, 0) {$f^\ast$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (4) to (3); \draw [style=wire, in=90, out=-30, looseness=1.25] (3) to (0); \draw [style=wire, in=150, out=-150] (3) to (1); \draw [style=wire] (1) to (2); \draw [style=wire] (0) to (5); \draw [style=wire, in=45, out=-90, looseness=0.75] (5) to (1); \end{pgfonlayer} \end{tikzpicture} \end{array} \end{align*} where $\chi_{A \times B}: \oc (A \times B) \to \oc A \otimes \oc B$ is defined as in Definition \ref{Seelydef} and $(\_)^\ast$ is defined as in 
Lemma \ref{sliding}. Furthermore, the induced differential combinator is precisely that of Proposition \ref{coKleisliCDC}, and the induced contextual linear dagger is precisely that of Corollary \ref{coKlielsicondag}. \end{theorem} \begin{proof} By Proposition \ref{coKleisliCDC}, $\mathbb{X}_\oc$ is a Cartesian differential category, and since $\mathbb{X}$ is compact closed, by Corollary \ref{coKlielsicondag}, $\mathbb{X}_\oc$ also has a contextual linear dagger. Therefore by Theorem \ref{thm:characterization_of_crdc}, $\mathbb{X}_\oc$ is a Cartesian reverse differential category where for a coKleisli map $\llbracket f \rrbracket: \oc A \to B$, its reverse derivative $\llbracket \mathsf{R}[f] \rrbracket: \oc (A \times B) \to A$ is defined as $\llbracket \mathsf{R}[f] \rrbracket = \llbracket \mathsf{D}[f]^{\dagger[A]} \rrbracket$. Expanding this out, we compute: \begin{align*} \begin{array}[c]{c} \llbracket \mathsf{R}[f] \rrbracket = \llbracket \mathsf{D}[f]^{\dagger[A]} \rrbracket \end{array} = \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=port] (42) at (-3, -7.5) {$=$}; \node [style=none] (53) at (-4.5, -4.5) {}; \node [style=none] (54) at (-5.25, -10) {}; \node [style=none] (55) at (-3.5, -4.5) {}; \node [style=object] (56) at (-3.5, -12.25) {$A$}; \node [style=none] (57) at (-4, -10) {}; \node [style=object] (62) at (-5, -1.5) {$\oc (A \times B)$}; \node [style=differential] (63) at (-5, -2.25) {{\bf =\!=\!=\!=}}; \node [style=component] (64) at (-4, -3.25) {$\pi_1$}; \node [style=function2] (65) at (-6, -3.25) {$\pi_0$}; \node [style=differential] (66) at (-5.25, -5.75) {{\bf =\!=\!=\!=}}; \node [style=component] (68) at (-4.5, -4.75) {$\iota_1$}; \node [style=function2] (69) at (-6, -4.75) {$\iota_0$}; \node [style=differential] (70) at (-5.25, -8.75) {{\bf =\!=\!=\!=}}; \node [style=component] (72) at (-5.25, -9.5) {$f$}; \node [style=differential] (74) at (-5.25, -6.75) {{\bf =\!=\!=\!=}}; \node [style=component] (75) at
(-4.5, -7.75) {$\pi_1$}; \node [style=function2] (76) at (-6, -7.75) {$\pi_0$}; \node [style=none] (77) at (-0.75, -6.75) {}; \node [style=none] (78) at (-1.5, -9.25) {}; \node [style=none] (79) at (0.25, -6.75) {}; \node [style=object] (80) at (0.25, -11.5) {$A$}; \node [style=none] (81) at (-0.25, -9.25) {}; \node [style=object] (82) at (-1.25, -3.75) {$\oc (A \times B)$}; \node [style=differential] (83) at (-1.25, -4.5) {{\bf =\!=\!=\!=}}; \node [style=component] (84) at (-0.25, -5.5) {$\pi_1$}; \node [style=function2] (85) at (-2.25, -5.5) {$\pi_0$}; \node [style=differential] (89) at (-1.5, -8) {{\bf =\!=\!=\!=}}; \node [style=component] (90) at (-1.5, -8.75) {$f$}; \node [style=port] (91) at (1, -7.5) {$=$}; \node [style=none] (92) at (3.25, -7.75) {}; \node [style=none] (93) at (2.5, -9.25) {}; \node [style=none] (94) at (4.25, -7.75) {}; \node [style=object] (95) at (4.25, -11.5) {$A$}; \node [style=none] (96) at (3.75, -9.25) {}; \node [style=object] (97) at (2.75, -3.75) {$\oc(A \times B)$}; \node [style=differential] (98) at (2.75, -4.5) {{\bf =\!=\!=\!=}}; \node [style=component] (99) at (3.75, -5.5) {$\pi_1$}; \node [style=function2] (100) at (1.75, -5.5) {$\pi_0$}; \node [style=differential] (101) at (2.5, -8.75) {{\bf =\!=\!=\!=}}; \node [style=component] (102) at (3.75, -6.5) {$f^\ast$}; \node [style=component] (103) at (7, -6.75) {$\varepsilon$}; \node [style=differential] (104) at (6.5, -9) {{\bf \aquarius\!\aquarius\!\aquarius}}; \node [style=object] (105) at (6.5, -9.75) {$A$}; \node [style=duplicate] (106) at (6.5, -5.5) {$\chi$}; \node [style=object] (107) at (6.5, -4.75) {$\oc (A \times B)$}; \node [style=component] (108) at (7, -8) {$f^\ast$}; \node [style=port] (109) at (5, -7.5) {$=$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire, bend left=90, looseness=2.00] (53.center) to (55.center); \draw [style=wire] (55.center) to (56); \draw [style=wire, bend right=90, looseness=2.00] (54.center) to (57.center); \draw 
[style=wire] (62) to (63); \draw [style=wire, in=90, out=-150, looseness=1.25] (63) to (65); \draw [style=wire, in=90, out=-30, looseness=1.25] (63) to (64); \draw [style=wire, in=135, out=-90] (69) to (66); \draw [style=wire, in=-90, out=45, looseness=1.25] (66) to (68); \draw [style=wire] (53.center) to (68); \draw [style=wire] (65) to (69); \draw [style=wire] (64) to (57.center); \draw [style=wire] (70) to (72); \draw [style=wire, in=90, out=-150, looseness=1.25] (74) to (76); \draw [style=wire, in=135, out=-90] (76) to (70); \draw [style=wire, in=-90, out=45, looseness=1.25] (70) to (75); \draw [style=wire, in=90, out=-30, looseness=1.25] (74) to (75); \draw [style=wire] (66) to (74); \draw [style=wire] (72) to (54.center); \draw [style=wire, bend left=90, looseness=2.00] (77.center) to (79.center); \draw [style=wire] (79.center) to (80); \draw [style=wire, bend right=90, looseness=2.00] (78.center) to (81.center); \draw [style=wire] (82) to (83); \draw [style=wire, in=90, out=-150, looseness=1.25] (83) to (85); \draw [style=wire, in=90, out=-30, looseness=1.25] (83) to (84); \draw [style=wire] (84) to (81.center); \draw [style=wire] (89) to (90); \draw [style=wire] (90) to (78.center); \draw [style=wire, in=135, out=-90, looseness=1.25] (85) to (89); \draw [style=wire, in=30, out=-90, looseness=1.50] (77.center) to (89); \draw [style=wire, bend left=90, looseness=2.00] (92.center) to (94.center); \draw [style=wire] (94.center) to (95); \draw [style=wire, bend right=90, looseness=2.00] (93.center) to (96.center); \draw [style=wire] (97) to (98); \draw [style=wire, in=90, out=-150, looseness=1.25] (98) to (100); \draw [style=wire, in=90, out=-30, looseness=1.25] (98) to (99); \draw [style=wire, in=150, out=-90, looseness=0.75] (100) to (101); \draw [style=wire, in=30, out=-90, looseness=1.25] (92.center) to (101); \draw [style=wire] (101) to (93.center); \draw [style=wire] (96.center) to (102); \draw [style=wire] (99) to (102); \draw [style=wire] (107) to (106); 
\draw [style=wire, in=90, out=-30, looseness=1.25] (106) to (103); \draw [style=wire, in=150, out=-150] (106) to (104); \draw [style=wire] (104) to (105); \draw [style=wire] (103) to (108); \draw [style=wire, in=45, out=-90, looseness=0.75] (108) to (104); \end{pgfonlayer} \end{tikzpicture} \end{array} \end{align*} So we conclude that the reverse differential combinator of $\mathbb{X}_\oc$ is induced by the reverse deriving transformation of $\mathbb{X}$. \end{proof} Similarly to the differential combinator, the reverse differential combinator can also be expressed in terms of the coderiving transformation as follows on a coKleisli map $\llbracket f \rrbracket: \oc A \to B$: \begin{align*} \llbracket \mathsf{R}[f] \rrbracket \!:=\!\! \xymatrixcolsep{2.25pc}\xymatrix{\oc(A \times B) \ar[r]^-{\mathsf{d}^\circ_{A \times B}} & \oc (A \times B) \!\otimes\! (A \times B) \ar[r]^-{\oc(\pi_0) \otimes \pi_1} & \oc A \otimes B \ar[r]^-{1_{\oc A} \otimes \llbracket f \rrbracket^\ast} & \oc A \!\otimes\! \oc A \ar[r]^-{\mathsf{r}_A} & A } && \begin{array}[c]{c} \llbracket \mathsf{R}[f] \rrbracket \end{array}\!=\!\! 
\begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=differential] (104) at (6.5, -9) {{\bf \aquarius\!\aquarius\!\aquarius}}; \node [style=object] (105) at (6.5, -9.75) {$A$}; \node [style=component] (108) at (7, -8) {$f^\ast$}; \node [style=object] (117) at (6.25, -5) {$\oc (A \times B)$}; \node [style=differential] (118) at (6.25, -5.75) {{\bf =\!=\!=\!=}}; \node [style=component] (119) at (7, -6.75) {$\pi_1$}; \node [style=function2] (120) at (5.5, -6.75) {$\pi_0$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire] (104) to (105); \draw [style=wire, in=45, out=-90, looseness=0.75] (108) to (104); \draw [style=wire] (117) to (118); \draw [style=wire, in=90, out=-150, looseness=1.25] (118) to (120); \draw [style=wire, in=90, out=-30, looseness=1.25] (118) to (119); \draw [style=wire] (119) to (108); \draw [style=wire, in=150, out=-90] (120) to (104); \end{pgfonlayer} \end{tikzpicture} \end{array} \end{align*} \subsection{Other constructions of CRDCs} The inconvenience of monoidal reverse differential categories is that the self-dual compact closed requirement is quite strong. Indeed, there are not many interesting or well-studied models of differential linear logic in the literature that are self-dual compact closed. In fact, from a linear logic perspective, such models are often considered somewhat ``degenerate'' \cite[Definition 3]{hyland2003glueing}. Therefore, examples of Cartesian reverse differential categories arising from monoidal reverse differential categories will often not appear naturally. There is, however, another way of constructing Cartesian reverse differential categories from coKleisli categories. In particular, this slightly altered construction can be done with any monoidal differential category. Instead of requiring all objects in the base category to be self-dual, we take the full subcategory of the coKleisli category spanned by the self-dual objects of the base category.
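To keep a concrete model in mind for this construction, consider finite-dimensional real vector spaces (an illustrative model assumed only for this sketch, not one developed in this paper): every object $\mathbb{R}^n$ is self-dual, with the cup given by the dot product $v \otimes w \mapsto \langle v, w \rangle$ and the cap by $1 \mapsto \sum_i e_i \otimes e_i$. The snake equations then reduce to the familiar identity $\sum_i \langle v, e_i \rangle e_i = v$, which the following sketch checks numerically:

```python
# Self-duality of R^n in finite-dimensional real vector spaces
# (an illustrative model; not code from the paper itself).
# cup : R^n (x) R^n -> R   sends v (x) w to the dot product <v, w>
# cap : R -> R^n (x) R^n   sends 1 to the sum of e_i (x) e_i

def cup(v, w):
    """The evaluation map: v (x) w |-> <v, w>."""
    return sum(vi * wi for vi, wi in zip(v, w))

def cap(n):
    """The coevaluation 1 |-> sum_i e_i (x) e_i, as a list of pairs."""
    basis = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    return [(e, e) for e in basis]

def snake(v):
    """(1 (x) cap) ; (cup (x) 1): contract v against sum_i e_i (x) e_i."""
    n = len(v)
    out = [0.0] * n
    for e_left, e_right in cap(n):
        c = cup(v, e_left)  # pair v with the left leg of the cap
        out = [o + c * x for o, x in zip(out, e_right)]  # emit the right leg
    return out

v = [2.0, -1.0, 3.0]
assert snake(v) == v  # the snake equation: the zig-zag is the identity
```

In this model restricting to self-dual objects is vacuous, since every finite-dimensional space carries such a cup and cap; in a general monoidal differential category the restriction is exactly what the full subcategory construction below enforces.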
The proof that this subcategory is a Cartesian reverse differential category is essentially the same as that of Theorem \ref{coKleisliCRDC}. A full subcategory of a Cartesian differential category that is closed under finite products is itself a Cartesian differential category. Then, using the self-duality, we can build a contextual linear dagger, and we conclude that we have a Cartesian reverse differential category. \begin{definition} Let $(\oc, \delta, \varepsilon)$ be a comonad on a symmetric monoidal category $\mathbb{X}$. Define $\mathsf{R}[\mathbb{X}_\oc]$ as the full subcategory of the coKleisli category $\mathbb{X}_\oc$ whose objects are self-dual objects (Definition \ref{SDCC}) of $\mathbb{X}$, so triples $(A, \cup_A, \cap_A)$. Recall that, since $\mathsf{R}[\mathbb{X}_\oc]$ is a full subcategory, its maps are all those of $\mathbb{X}_\oc$ between the underlying objects, that is, $\mathsf{R}[\mathbb{X}_\oc]\left( (A, \cup_A, \cap_A), (B, \cup_B, \cap_B) \right) = \mathbb{X}_\oc(A,B) = \mathbb{X}(\oc A, B)$, and both composition and identities are the same as in $\mathbb{X}_\oc$. \end{definition} Suppose that the base symmetric monoidal category $\mathbb{X}$ has finite biproducts. The zero object is self-dual, where the cup and cap are simply the zero morphisms \cite[Lemma 3.19]{heunen2019categories}, and the biproduct of self-dual objects is again self-dual \cite[Lemma 3.23]{heunen2019categories}.
Explicitly, if $(A, \cup_A, \cap_A)$ and $(B, \cup_B, \cap_B)$ are self-dual objects, then $(A \times B, \cup_{A \times B}, \cap_{A \times B})$ is also a self-dual object where the cup and cap are defined respectively as follows: \begin{align*} \begin{array}[c]{c} \cup_{A \times B} \end{array} : = \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=component] (0) at (7, -4.5) {$\pi_0$}; \node [style=component] (1) at (8.25, -4.5) {$\pi_0$}; \node [style=object] (4) at (9, -4.5) {$+$}; \node [style=object] (5) at (7, -3.5) {$A \times B$}; \node [style=object] (6) at (8.25, -3.5) {$A \times B$}; \node [style=component] (7) at (9.75, -4.5) {$\pi_1$}; \node [style=component] (8) at (11, -4.5) {$\pi_1$}; \node [style=object] (9) at (9.75, -3.5) {$A \times B$}; \node [style=object] (10) at (11, -3.5) {$A \times B$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire, bend right=90, looseness=2.00] (0) to (1); \draw [style=wire] (5) to (0); \draw [style=wire] (6) to (1); \draw [style=wire, bend right=90, looseness=2.00] (7) to (8); \draw [style=wire] (9) to (7); \draw [style=wire] (10) to (8); \end{pgfonlayer} \end{tikzpicture} \end{array} && \begin{array}[c]{c} \cap_{A \times B} \end{array} : = \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=component] (0) at (7, -3.75) {$\iota_0$}; \node [style=component] (1) at (8.25, -3.75) {$\iota_0$}; \node [style=object] (4) at (9, -3.75) {$+$}; \node [style=object] (5) at (7, -4.75) {$A \times B$}; \node [style=object] (6) at (8.25, -4.75) {$A \times B$}; \node [style=component] (7) at (9.75, -3.75) {$\iota_1$}; \node [style=component] (8) at (11, -3.75) {$\iota_1$}; \node [style=object] (9) at (9.75, -4.75) {$A \times B$}; \node [style=object] (10) at (11, -4.75) {$A \times B$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire, bend left=90, looseness=2.00] (0) to (1); \draw [style=wire] (5) to (0); \draw [style=wire] (6) to (1); \draw 
[style=wire, bend left=90, looseness=2.00] (7) to (8); \draw [style=wire] (9) to (7); \draw [style=wire] (10) to (8); \end{pgfonlayer} \end{tikzpicture} \end{array} \end{align*} Therefore it follows that $\mathsf{R}[\mathbb{X}_\oc]$ has finite products. Note that any full subcategory of a Cartesian differential category whose objects are closed under finite products is again a Cartesian differential category. Therefore, if the starting base category $\mathbb{X}$ is a differential category, $\mathsf{R}[\mathbb{X}_\oc]$ will be a Cartesian differential category. \begin{lemma} Let $\mathbb{X}$ be a differential category with finite biproducts. Then $\mathsf{R}[\mathbb{X}_\oc]$ is a Cartesian differential category where the differential combinator is defined as in Proposition \ref{coKleisliCDC}. \end{lemma} Now that we have established that $\mathsf{R}[\mathbb{X}_\oc]$ is a Cartesian differential category, to show that it is also a Cartesian reverse differential category, it remains only to show that $\mathsf{R}[\mathbb{X}_\oc]$ has a contextual linear dagger. However, we may define the dagger in the same way that it was done in Corollary \ref{coKlielsicondag}. \begin{lemma} Let $\mathbb{X}$ be a differential category with finite biproducts. Then $\mathsf{R}[\mathbb{X}_\oc]$ is a Cartesian differential category with a contextual linear dagger, where the dagger is defined in the same way as in Corollary \ref{coKlielsicondag}. \end{lemma} \begin{proof} Using essentially the same proof as throughout Section \ref{sec:cokleisliRDC}, it follows from self-duality that we obtain a contextual linear dagger. \end{proof} As a result, it follows that $\mathsf{R}[\mathbb{X}_\oc]$ is a Cartesian reverse differential category. Since the base category does not necessarily have a reverse deriving transformation, we will explicitly write the reverse differential combinator of $\mathsf{R}[\mathbb{X}_\oc]$ in terms of the deriving transformation and the cups and caps. 
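Before stating the formula, it may help to see what the cup/cap composite computes in the illustrative model of finite-dimensional real vector spaces with the dot-product self-duality (an assumption of this sketch, not part of the general development). For a linear map $f$, the dual $(\cap \otimes 1);(1 \otimes f \otimes 1);(1 \otimes \cup)$ is the transpose of its matrix, so in particular the dagger of a projection is the matching injection, in line with the dagger-biproduct computation earlier:

```python
# The dual f^* of a linear map f, computed via the cup/cap composite
# (cap (x) 1) ; (1 (x) f (x) 1) ; (1 (x) cup), in the illustrative model
# of finite-dimensional real vector spaces with dot-product self-duality.
# This is a sketch, not code from the paper.

def dot(v, w):  # cup: v (x) w |-> <v, w>
    return sum(a * b for a, b in zip(v, w))

def apply(M, v):  # f as an m-by-n matrix (list of rows) acting on v in R^n
    return [dot(row, v) for row in M]

def dual(M, w):
    """f^*(w)_i = <f(e_i), w>: push each leg of the cap through f, then cup with w."""
    n = len(M[0])
    basis = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    return [dot(apply(M, e), w) for e in basis]

def transpose(M):
    return [list(col) for col in zip(*M)]

M = [[1.0, 2.0, 0.0],
     [0.0, 1.0, 3.0]]  # f : R^3 -> R^2
w = [5.0, -1.0]
assert dual(M, w) == apply(transpose(M), w)  # the dual is the transpose

# The dagger of a projection is the matching injection: pi_0 = [I | 0]
P = [[1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0]]  # pi_0 : R^2 x R^1 -> R^2
assert transpose(P) == [[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]]  # iota_0
```

Under this reading, the reverse differential combinator below specializes to applying the transpose of the derivative to the second argument, which is the familiar reverse-mode behaviour.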
\begin{proposition} Let $\mathbb{X}$ be a differential category with finite biproducts. Then $\mathsf{R}[\mathbb{X}_\oc]$ is a Cartesian reverse differential category, where the reverse differential combinator is defined as follows for a coKleisli map ${\llbracket f \rrbracket: \oc A \to B}$: \begin{align*} \begin{array}[c]{c} \llbracket \mathsf{R}[f] \rrbracket := \xymatrixcolsep{5pc}\xymatrix{\oc(A \times B) \ar[r]^-{\chi_{A,B}} & \oc A \otimes \oc B \ar[r]^-{1_{\oc A} \otimes \varepsilon_B} & \oc A \otimes B \ar[r]^-{1_{\oc A} \otimes \cap_A \otimes 1_B} & \\ \oc A \otimes A \otimes A \otimes B \ar[r]^-{\mathsf{d}_A \otimes \sigma_{A,B}} & \oc A \otimes B \otimes A \ar[r]^-{\llbracket f \rrbracket \otimes 1_B \otimes 1_A} & B \otimes B \otimes A \ar[r]^-{\cup_B \otimes 1_A} & A } \end{array} \end{align*} \begin{align*} \begin{array}[c]{c} \llbracket \mathsf{R}[f] \rrbracket \end{array} = \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=none] (29) at (9.25, -6.75) {}; \node [style=none] (30) at (8.5, -9.25) {}; \node [style=none] (31) at (10.25, -6.75) {}; \node [style=object] (32) at (10.25, -11.5) {$A$}; \node [style=none] (33) at (9.75, -9.25) {}; \node [style=object] (34) at (8.75, -3.75) {$\oc (A \times B)$}; \node [style=component] (35) at (8.75, -4.5) {$\chi$}; \node [style=component] (36) at (9.75, -5.5) {$\varepsilon$}; \node [style=differential] (38) at (8.5, -8) {{\bf =\!=\!=\!=}}; \node [style=component] (39) at (8.5, -8.75) {$f$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire, bend left=90, looseness=2.00] (29.center) to (31.center); \draw [style=wire] (31.center) to (32); \draw [style=wire, bend right=90, looseness=2.00] (30.center) to (33.center); \draw [style=wire] (34) to (35); \draw [style=wire, in=90, out=-30, looseness=1.25] (35) to (36); \draw [style=wire] (36) to (33.center); \draw [style=wire] (38) to (39); \draw [style=wire] (39) to (30.center); \draw [style=wire, in=30, out=-90,
looseness=1.50] (29.center) to (38); \draw [style=wire, in=150, out=-150] (35) to (38); \end{pgfonlayer} \end{tikzpicture} \end{array} \end{align*} \end{proposition} \begin{proof} Since $\mathsf{R}[\mathbb{X}_\oc]$ is a Cartesian differential category with a contextual linear dagger, it follows by Theorem \ref{thm:characterization_of_crdc} that $\mathsf{R}[\mathbb{X}_\oc]$ is a Cartesian reverse differential category. By essentially the same calculations as in the proof of Theorem \ref{coKleisliCRDC}, we can show that the resulting reverse differential combinator is precisely the desired one. \end{proof} We could have also expressed the reverse differential combinator in terms of the coderiving transformation as follows: \begin{align*} \begin{array}[c]{c} \llbracket \mathsf{R}[f] \rrbracket := \xymatrixcolsep{5pc}\xymatrix{\oc(A \times B) \ar[r]^-{\mathsf{d}^\circ_{A \times B}} & \oc (A \times B) \otimes (A \times B) \ar[r]^-{\oc(\pi_0) \otimes \pi_1} & \oc A \otimes B \ar[r]^-{1_{\oc A} \otimes \cap_A \otimes 1_B} & \\ \oc A \otimes A \otimes A \otimes B \ar[r]^-{\mathsf{d}_A \otimes \sigma_{A,B}} & \oc A \otimes B \otimes A \ar[r]^-{\llbracket f \rrbracket \otimes 1_B \otimes 1_A} & B \otimes B \otimes A \ar[r]^-{\cup_B \otimes 1_A} & A } \end{array} \end{align*} \begin{align*} \begin{array}[c]{c} \llbracket \mathsf{R}[f] \rrbracket \end{array} = \begin{array}[c]{c} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=none] (18) at (5.75, -6.75) {}; \node [style=none] (19) at (5, -9.25) {}; \node [style=none] (20) at (6.75, -6.75) {}; \node [style=object] (21) at (6.75, -11.5) {$A$}; \node [style=none] (22) at (6.25, -9.25) {}; \node [style=object] (23) at (5.25, -3.75) {$\oc (A \times B)$}; \node [style=differential] (24) at (5.25, -4.5) {{\bf =\!=\!=\!=}}; \node [style=component] (25) at (6.25, -5.5) {$\pi_1$}; \node [style=function2] (26) at (4.25, -5.5) {$\pi_0$}; \node [style=differential] (27) at (5, -8) {{\bf =\!=\!=\!=}}; \node [style=component] (28) at
(5, -8.75) {$f$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=wire, bend left=90, looseness=2.00] (18.center) to (20.center); \draw [style=wire] (20.center) to (21); \draw [style=wire, bend right=90, looseness=2.00] (19.center) to (22.center); \draw [style=wire] (23) to (24); \draw [style=wire, in=90, out=-150, looseness=1.25] (24) to (26); \draw [style=wire, in=90, out=-30, looseness=1.25] (24) to (25); \draw [style=wire] (25) to (22.center); \draw [style=wire] (27) to (28); \draw [style=wire] (28) to (19.center); \draw [style=wire, in=135, out=-90, looseness=1.25] (26) to (27); \draw [style=wire, in=30, out=-90, looseness=1.50] (18.center) to (27); \end{pgfonlayer} \end{tikzpicture} \end{array} \end{align*} As mentioned above, the advantage of this construction is that we construct a Cartesian reverse differential category from any differential category, with or without the Seely isomorphisms. In most cases, the self-dual objects of a differential category are of a ``finite-dimensional'' flavour. We conclude this section by applying this construction to well-known examples of differential categories to reconstruct some of the main examples of Cartesian reverse differential categories. \begin{example} \normalfont This example recaptures the reverse differentiation of polynomials from \cite[Example 14.1]{cockett_et_al:LIPIcs:2020:11661}. For simplicity, we will work with vector spaces over a field, but we note that this example can be generalized to the category of modules over any commutative semiring. Let $k$ be a field and $\mathsf{VEC}_k$ be the category of $k$-vector spaces and $k$-linear maps between them. Then $\mathsf{VEC}^{op}_k$ is a differential category where $\oc V$ is the free symmetric algebra over $V$: \[ \oc V = \bigoplus^{\infty}_{n=0} V^{\otimes_s^n}\] where $\otimes_s^n$ is the $n$-fold symmetrized tensor product of $V$. If $X$ is a basis of $V$, then $\oc V \cong k[X]$, where $k[X]$ is the polynomial ring over $X$.
From this point of view, the deriving transformation can be described as a map $\mathsf{d}_V: k[X] \to k[X] \otimes V$ which maps a polynomial to the sum of its partial derivatives: \[ \mathsf{d}(p(\vec x)) = \sum \limits^n_{i=1} \frac{\partial p(\vec x)}{\partial x_i} \otimes x_i \] Thus $\mathsf{VEC}^{op}_{k}$ is a differential category, whose differential structure captures polynomial differentiation. For more details on this (co)differential category, see \cite[Section 2.5.3]{blute2006differential}. The self-dual objects in $\mathsf{VEC}_k$ are precisely the finite-dimensional vector spaces, and since self-duality is preserved when passing to the opposite category, the same is true in $\mathsf{VEC}^{op}_k$. Therefore, $\mathsf{R}[{\mathsf{VEC}^{op}_k}_\oc]$ is equivalent, as Cartesian reverse differential categories, to $\mathsf{POLY}_k$ from \cite[Example 14.1]{cockett_et_al:LIPIcs:2020:11661}. \end{example} \begin{example} \normalfont This example recaptures the reverse differentiation of smooth functions from \cite[Example 14.2]{cockett_et_al:LIPIcs:2020:11661}. Let $\mathbb{R}$ be the field of real numbers. While the differential structure on $\mathsf{VEC}^{op}_\mathbb{R}$ from the above example captures polynomial differentiation, $\mathsf{VEC}^{op}_\mathbb{R}$ has another differential structure where this time the deriving transformation corresponds to differentiating (real) smooth functions. The key to this example is the notion of $C^\infty$-rings, which, recall, are defined as the algebras of the Lawvere theory whose morphisms are smooth maps between the Euclidean spaces $\mathbb{R}^n$. Equivalently, a $C^\infty$-ring is a set $A$ equipped with a family of functions ${\Phi_f: A^n \to A}$ indexed by the smooth functions $f: \mathbb{R}^n \to \mathbb{R}$ and which satisfies certain coherence equations. For example, $C^\infty(\mathbb{R}^n) = {\lbrace f: \mathbb{R}^n \to \mathbb{R} \vert~ f \text{ is smooth} \rbrace}$ is a $C^\infty$-ring.
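Before continuing with the smooth case, the polynomial example above can be made concrete. The following is a small stdlib-only sketch of the reverse differential combinator on tuples of polynomials, computing $\mathsf{R}[f](x,v)$ as the transpose-Jacobian product built from the partial derivatives appearing in $\mathsf{d}(p)$; the dictionary encoding of monomials and the helper names are my own, not notation from the paper or from $\mathsf{POLY}_k$.

```python
# Polynomials in n variables encoded as {exponent-tuple: coefficient}.
def diff(p, i):
    """Partial derivative of polynomial p with respect to variable i."""
    out = {}
    for exps, c in p.items():
        if exps[i] > 0:
            e = list(exps); e[i] -= 1
            out[tuple(e)] = out.get(tuple(e), 0) + c * exps[i]
    return out

def evaluate(p, xs):
    """Evaluate polynomial p at the point xs."""
    total = 0
    for exps, c in p.items():
        term = c
        for x, e in zip(xs, exps):
            term *= x ** e
        total += term
    return total

def reverse_derivative(fs, xs, vs):
    """R[f](x, v)_i = sum_j (d f_j / d x_i)(x) * v_j  (transpose Jacobian)."""
    return [sum(evaluate(diff(f, i), xs) * v for f, v in zip(fs, vs))
            for i in range(len(xs))]

# f(x, y) = (x^2 y, x + y^3), evaluated at (x, y) = (2, 3) with v = (1, 1).
f1 = {(2, 1): 1}
f2 = {(1, 0): 1, (0, 3): 1}
# Jacobian = [[2xy, x^2], [1, 3y^2]] = [[12, 4], [1, 27]];
# J^T v = (12 + 1, 4 + 27) = (13, 31).
print(reverse_derivative([f1, f2], [2, 3], [1, 1]))  # prints [13, 31]
```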
For every $\mathbb{R}$-vector space $V$, there exists a free $C^\infty$-ring over $V$ \cite[Section 4]{cruttwell2019integral}, which we denote as $\mathsf{S}^\infty(V)$. If $V$ is finite dimensional of dimension $n$, then $\mathsf{S}^\infty(V) \cong C^\infty(\mathbb{R}^n)$ as $C^\infty$-rings, and in particular, $\mathsf{S}^\infty(\mathbb{R}^n) = C^\infty(\mathbb{R}^n)$. Then $\mathsf{VEC}^{op}_\mathbb{R}$ is a differential category with respect to the coalgebra modality $\mathsf{S}^\infty$ and whose deriving transformation is induced by differentiating smooth functions. In particular for $\mathbb{R}^n$, the deriving transformation $\mathsf{d}: C^\infty(\mathbb{R}^n) \to C^\infty(\mathbb{R}^n) \otimes \mathbb{R}^n$ maps a smooth function $f: \mathbb{R}^n \to \mathbb{R}$ to the sum of its partial derivatives: \[ \mathsf{d}(f) = \sum_{i=1}^{n} \frac{\partial f}{\partial x_i} \otimes x_i \] Hence $\mathsf{VEC}^{op}_\mathbb{R}$ is a monoidal differential category, whose differential structure captures smooth function differentiation. For more details on this differential category, see \cite{cruttwell2019integral}. As explained in the above example, the self-dual objects of $\mathsf{VEC}^{op}_\mathbb{R}$ are the finite-dimensional vector spaces. Therefore, $\mathsf{R}[{\mathsf{VEC}^{op}_\mathbb{R}}_{\mathsf{S}^\infty}]$ is equivalent as a Cartesian reverse differential category to the example $\mathsf{SMOOTH}$ from \cite[Example 14.2]{cockett_et_al:LIPIcs:2020:11661}. \end{example} \section{Conclusions and Future Work}\label{sec:future_work} In this paper we have filled in a gap in the literature on categorical differential structures by providing a definition of a \emph{monoidal reverse differential category}. We have also provided key results to relate this structure to others, showing how monoidal reverse differential categories relate to monoidal differential categories, Cartesian differential categories, and Cartesian reverse differential categories. 
This work provides many additional avenues for exploration; we briefly discuss some of them here. \begin{itemize} \item To understand what the structure of MRDCs should be, this paper started from an MDC and looked at what would happen if its associated CDC was a CRDC. However, there is another approach one could take. In \cite{blute2015cartesian}, the authors look at what additional structure on a CDC would be necessary to form an MDC. Thus, alternatively, one could start with a CRDC with such structure, and show that one gets an MRDC. We leave this for future work. \item In \cite{garner2021cartesian}, the authors describe how CDCs can be seen as a type of skew-enriched category, and use this result to demonstrate how every CDC embeds into a CDC associated to an MDC. Similar results for CRDCs and MRDCs would be very useful. \item In the world of ``reverse'' differential structures, the analog of tangent structures \cite{cockettCruttwellTangent} has yet to be described. Such a structure would axiomatize the cotangent bundle in differential geometry. Understanding such a structure's relationship to MRDCs and CRDCs will then help further bridge the gap between differential geometry and differentiation in computer science. \item All of the above items are theoretical; however, there is an important applied avenue which this work allows one to pursue. As Examples \ref{ex:quantum1} and \ref{ex:quantum2} demonstrated, several abstract models of quantum computation are MRDCs. In particular, by Theorem \ref{coKleisliCRDC}, the coKleisli category associated to these models is a CRDC. By the results of \cite{gradientBasedLearning}, this means that one could apply supervised learning techniques to these examples. This possibility of combining quantum computation with supervised learning is an exciting direction we hope will be pursued in the future. \end{itemize} \bibliographystyle{plain}
\section{Introduction} Let $L$ be a submanifold of $M$ with a unit normal vector field $N$ and constant $(r+1)$th mean curvature $S_{r+1}$. If $M$ is a manifold with constant sectional curvature (an Einstein manifold for $r=1$), then $L$ is characterized by a variational problem (see, among others, \cite{ace,bc,bs2,e}). Therefore, there is a natural question about the stability of $L$. In this paper, we give some criteria for the stability of such submanifolds in the case when they are leaves of a codimension one foliation (Theorem \ref{t:1}). Next, for a conformal vector field $U$ we obtain a formula for $L_r(f)$, where $f=\langle U,N\rangle$, in the case of an arbitrary manifold (Theorem \ref{t:2}). Using this, we show that the normal component of a Killing field is an $r$th Jacobi field of a submanifold with $S_{r+1}$ constant (Proposition \ref{p:2}). Finally, we investigate relations between $r$th Jacobi fields and vector fields preserving a foliation (Section \ref{s:3}). \par Throughout the paper, everything (manifolds, foliations, metrics, etc.) is assumed to be $C^{\infty}$-differentiable and oriented. For simplicity, we usually work with $S_r$ instead of its normalized counterpart $H_r$ (see Remark \ref{r1}). Repeated indices denote summation over their range. \section{Preliminaries} \label{s:0} Let $M$ be an $(n+1)$-dimensional Riemannian manifold, let $L$ be a codimension one submanifold of $M$, and let $\langle \cdot ,\cdot\rangle$ denote the metric on $M$. Assume that both $M$ and $L$ are oriented, and let $N$ be a unit vector field orthogonal to $L$. Let $\overline\nabla$ denote the Levi-Civita connection of the metric. Then $\overline\nabla$ induces the connection $\nabla$ on the set $\Gamma(L)$ of all vector fields tangent to $L$.
Define the second fundamental form (or shape operator) $A$ of $L$ with respect to $N$ by \[ A: \Gamma(L) \rightarrow \Gamma(L), \quad A(X)=-(\overline\nabla_XN )^\top \quad \textrm{for } X\in \Gamma(L), \] where $^\top$ denotes the orthogonal projection on the vector bundle tangent to $L$. Note that $A$ is a self-adjoint linear operator and at each point $p\in L$ it has real eigenvalues $\kappa _1(p),\ldots,\kappa_n(p)$ (the principal curvatures). Associated to the shape operator there are $n$ algebraic invariants given by \[ S_r(p)=\sigma_r(\kappa_1(p),\ldots,\kappa_n(p)), \] where $\sigma_r$ for $r=1,2,\ldots,n$ are the elementary symmetric functions given by \[ \sigma _r(x_1,\ldots,x_n) =\sum_{i_1<\cdots<i_r}x_{i_1}\cdots x_{i_r}, \] $\sigma_0=1$ and $\sigma_r=0$ for other $r$. Moreover, observe that the characteristic polynomial of $A$ can be written in terms of the $S_r$'s as \[ \det(tI-A)=\sum _{r=0}^{n} (-1)^rS_rt^{n-r}. \] The normalized $r$th mean curvature $H_r$ of $L$ is defined by \[ H_r=S_r\dbinom{n}{r}^{-1}. \] \begin{rem} \label{r1} {\rm Sometimes $H_r$, instead of $S_r$, is called the $r$th mean curvature.} \end{rem} \par Now, we introduce the Newton transformations $T_r:\Gamma(L)\rightarrow \Gamma(L)$ arising from the shape operator. They are defined inductively by \[ T_0=I,\quad T_r=S_rI-AT_{r-1}, \quad 1\leq r\leq n, \] or, equivalently, by \[ T_r = S_rI-S_{r-1}A+\cdots+(-1)^{r-1}S_1A^{r-1}+(-1)^rA^r. \] Note that, by the Cayley--Hamilton theorem, we have $T_n=0$. Furthermore, $T_r$ is also self-adjoint and $A$ together with all the $T_r$'s can be simultaneously diagonalized; if $e_1,\ldots,e_n$ are the eigenvectors of $A$ corresponding to the eigenvalues $\kappa _1(p),\ldots ,\kappa _n(p)$, respectively, then they are also eigenvectors of $T_r$ corresponding to the eigenvalues $\mu_{i,r}(p)$ of $T_r$, that is $T_r(e_i)=\mu_{i,r}(p)e_i$, where \[ \mu_{i,r}(p)=\frac{\partial \sigma_{r+1}}{\partial x_i}(\kappa_1(p),\ldots,\kappa_n(p)).
\] We say that $T_r$ is definite (semi-definite) if $T_r>0$ or $T_r<0$ on $L$ ($T_r\geq 0$ or $T_r\leq0$ on $L$). \par The following algebraic properties of $T_r$ are well known (see, for instance, \cite{ro}) and will be applied throughout this paper: \begin{align*} &\operatorname{Tr}(T_r)=(n-r)S_r =c_rH_r,\\ &\operatorname{Tr}(AT_r)=(r+1)S_{r+1}=c_rH_{r+1},\\ &\operatorname{Tr}(A^2T_r)=S_1S_{r+1}-(r+2)S_{r+2}, \end{align*} where $c_r=(n-r)\dbinom{n}{r}=(r+1)\dbinom{n}{r+1}$. \par Let $f\in C^\infty(L)$. Define operators $L_r,J_r$ as follows: \[ L_rf=\operatorname{Tr}(T_r\circ {\rm Hess} f), \] and \[ J_rf=L_rf+\operatorname{Tr}(A^2T_r)f+\operatorname{Tr}(\overline R(N)T_r)f, \] where $\overline R(N): \Gamma(L)\rightarrow \Gamma(L)$ is given by \[ \overline R(N)(X)=\overline R(X,N)N, \quad X\in\Gamma(L), \] and $\overline R$ is the curvature tensor of $\overline \nabla$. Then \[ L_rf=\divergence(T_r{\rm \nabla}f)-\langle\divergence T_r,{\rm \nabla} f\rangle, \] where $\divergence T_r =(\nabla_{e_i}T_r)e_i$, and we have the following cases (see, among others, \cite{alm,bc,bs,cr,e}). \par For $r=0$ we have $\divergence T_0=0$, thus $L_r=L_0=\Delta$ and \[ J_0f=\Delta f+\operatorname{Tr} (A^2)f +\overline{\operatorname{Ric}}(N)f. \] If $r=1$ and $M$ is an Einstein manifold, then $\divergence T_1=0$ and \[ J_1f=\divergence(T_1{\rm \nabla}f)+(S_1S_{2}-3S_{3})f+\operatorname{Tr}(\overline R(N)T_1)f. \] If $M$ is a manifold with constant sectional curvature $c$, then for arbitrary $r$ we have $\divergence (T_r)=0$ and \[ J_rf=\divergence(T_r{\rm \nabla}f)+(S_1S_{r+1}-(r+2)S_{r+2})f+(n-r)cS_rf. \] For these three cases we have the following proposition (e.g. \cite{bc}). \begin{prop} \label{p0} If $L$ is compact without boundary, or if $L$ is noncompact and $f\in C_c^\infty(L)$, then \[ \int _L L_r(f)=0 \quad {\rm and} \quad \int_L fL_r(f)=-\int_L\langle T_r\nabla f,\nabla f\rangle.
\] \hfill$\square$ \end{prop} Next, we define \[ I_r(f,g)=-\int_L fJ_rg, \] for $f,g\in C_c^\infty(L)=\{f\in C^\infty(L): f \text{ is compactly supported}\}$. \par Let us recall that a submanifold is $r$-minimal ($0\leq r\leq n-1$) if $S_{r+1}=0$. Let $\mathcal F$ be a codimension one foliation. We say that $\mathcal F$ is $r$-minimal if any leaf of $\mathcal F$ is an $r$-minimal submanifold of $M$. A foliation such that every leaf has constant $(r+1)$th mean curvature is called $r$-tense. \par Similarly as for submanifolds, we may define $S_r,H_r,T_r$ for a foliation $\mathcal F$ (e.g. \cite{aw,aw2}). In this case, the functions $S_r$ are smooth on the whole of $M$ and, for any point $p\in M$, $S_r(p)$ coincides with the $r$th mean curvature at $p$ of the leaf $L$ of $\mathcal F$ which passes through $p$; therefore we will use the same notation for the $r$th mean curvature of foliations and submanifolds. Finally, recall that a hypersurface $L$ with constant $S_{r+1}$ in a manifold with constant sectional curvature (an Einstein manifold for $r=1$) is a critical point of the variational problem of minimizing the integral \[ \mathcal A_r=\int_LF_r(S_1,\ldots,S_r), \] over compactly supported volume-preserving variations, see \cite{bc,bs2,e}. The functions $F_r$ are defined inductively by \begin{align*} &F_0=1,\\ &F_1=S_1,\\ &F_r=S_r+\frac{c(n-r+1)}{r-1}F_{r-2}, \quad 2\leq r\leq n-1. \end{align*} The second variation formula reads $\mathcal A_r''(0)=(r+1)I_r(f,f)$. Thus, we may introduce the following definition (see the discussion in \cite{ace}). \begin{defin}We say that a submanifold $L$ with constant $S_{r+1}$ is $r$-stable if $I_r(f,f)\geq 0$ for all $f\in C_c^*(L)$ or if $I_r(f,f)\leq 0$ for all $f\in C_c^*(L)$. We say $L$ is $r$-unstable if there exist functions $f,g\in C_c^*(L)$ such that $I_r(f,f)<0$ and $I_r(g,g)>0$, where \begin{equation} \label{e0} C_c^*=\{f\in C^\infty_c: \int_Lf=0\}.
\end{equation} \end{defin} $0$-minimal ($0$-stable) submanifolds are simply called minimal (stable). \section{Stability results} \label{s:1} Oshikiri \cite{o1} showed that each leaf of a minimal foliation is stable. Now, we give a generalization of this theorem for arbitrary $r>0$. In order to do this, we will need the following proposition (\cite{aw}, see also \cite{csc}). \begin{prop} \label{p1} Let $M$ be a Riemannian manifold with a unit vector field $N$ orthogonal to the foliation $\mathcal F$ of $M$. Then on a leaf $L$ we have \begin{align*} \divergence(T_r\overline\nabla_NN)&=\langle \divergence T_r,\overline\nabla_NN\rangle-N(S_{r+1})+\nonumber\\ &+\operatorname{Tr}(A^2T_r)+\operatorname{Tr}(\overline R(N)T_r)+\langle\overline\nabla_NN,T_r\overline\nabla_NN\rangle. \end{align*} \hfill$\square$ \end{prop} If $M$ is a manifold without boundary, then we have the following theorem. \begin{thm} \label{t:1} Let $M$ be an Einstein manifold if $r=1$, or a manifold of constant sectional curvature if $r>1$, and let $\mathcal F$ be a codimension one $r$-tense foliation of $M$. If on a leaf $L$, either $T_r\geq 0$ and $N(S_{r+1})\leq 0$ or $T_r\leq 0$ and $N(S_{r+1})\geq 0$, then $L$ is $r$-stable. \end{thm} {\it Proof.} In our case, from Proposition \ref{p1} we get \begin{align*} \divergence(T_r\overline\nabla_NN)&=-N(S_{r+1})+\operatorname{Tr}(A^2T_r)+\operatorname{Tr}(\overline R(N)T_r)+ \langle\overline\nabla_NN,T_r\overline\nabla_NN\rangle. \end{align*} Thus, for any $f\in C_c^*(L)$ we have \begin{align} \label{e1} &\divergence(f^2T_r\overline\nabla_NN)-(T_r\overline\nabla_NN)(f^2)= f^2\divergence(T_r\overline\nabla_NN) \nonumber\\ &=f^2\operatorname{Tr}(T_rA^2)+f^2\operatorname{Tr}(\overline R(N)T_r)+f^2\langle\overline\nabla_NN,T_r\overline\nabla_NN\rangle -f^2N(S_{r+1}). \end{align} Using Proposition \ref{p0}, Eq. (\ref{e1}) and the fact that $T_r$ is self-adjoint, we have \begin{align*} &\int_L\langle T_r(\nabla f+f\overline\nabla_NN),\nabla f+f\overline\nabla_NN\rangle -f^2N(S_{r+1})\\ &=\int_L\langle T_r\nabla f,\nabla f\rangle+2f\langle T_r\overline\nabla_NN,\nabla f\rangle + f^2\langle T_r\overline\nabla_NN,\overline\nabla_NN\rangle-f^2N(S_{r+1})\\ &=\int_L-fL_r(f)+(T_r\overline\nabla_NN)(f^2)+ f^2\langle T_r\overline\nabla_NN, \overline\nabla_NN\rangle-f^2N(S_{r+1})\\ &=\int_L-fL_r(f)-f^2\operatorname{Tr}(T_rA^2)-f^2\operatorname{Tr}(\overline R(N)T_r)+\divergence(f^2T_r\overline\nabla_NN)\\ &=I_r(f,f). \end{align*} Since $T_r$ is semi-definite and $N(S_{r+1})$ has the opposite sign, the left-hand side has a fixed sign, and hence so does $I_r(f,f)$. This ends the proof.\hfill$\square$ \medskip \par Note that, during the proof of Theorem \ref{t:1}, we did not use the condition from Eq. (\ref{e0}). \begin{cor} Let $M$ be as in Theorem \ref{t:1}. If each leaf of the foliation $\mathcal F$ has the same constant $(r+1)$th mean curvature (in particular, zero) and $T_r$ is semi-definite on $M$, then any leaf of $\mathcal F$ is $r$-stable. \end{cor} There are various conditions enforcing (semi-)definiteness of the operator $T_r$, see \cite{asz,cr}. One of them implies the following corollary. \begin{cor} Let $M$ be as in Theorem \ref{t:1} and let $\mathcal F$ be an $r$-minimal foliation of $M$. If on a leaf $L$, $S_r\neq 0$, then $L$ is $r$-stable. \end{cor} \begin{exa} Let $M=\mathbb R\times L$ be a foliated manifold each leaf of which is given by $\{t\} \times L$, where $L$ has constant negative sectional curvature $c$. We define a metric on $M$ by $\langle,\rangle= dt^2+\cosh(\sqrt{-c}t)\langle,\rangle_L$. Then $(M,\langle,\rangle)$ has constant sectional curvature $c$ and the foliation is $r$-tense. Moreover, on any leaf $L$, either $T_r\geq 0$ and $N(S_{r+1})\leq 0$ or $T_r\leq 0$ and $N(S_{r+1})\geq 0$; thus any leaf is $r$-stable. \end{exa} \begin{exa} Let $M=\mathbb R\times L$ be a foliated manifold each leaf of which is given by $\{t\} \times L$, where $L$ is a flat manifold (e.g. $\mathbb R^n$, $T^n$).
We define a metric on $M$ by $\langle,\rangle= dt^2+e^{-2at}\langle,\rangle_L$. Then $(M,\langle,\rangle)$ has constant sectional curvature $-a^2$, each leaf has the same constant $S_{r+1}$, and $T_r$ is definite; thus each leaf is $r$-stable. \end{exa} Recall that by a singular foliation of $M$, we mean a foliation $\mathcal F$ of $M\backslash S$, where $S\subset M$ is a set of Lebesgue measure zero \cite{csc}. \begin{exa} Let $\mathcal F$ be a singular foliation of $\mathbb R^{n+1}$ by the concentric cylinders $S^r(R)\times \mathbb R^{n-r}$, where $S^r(R)$ denotes the sphere with center $0$ and radius $R > 0$; the singular set of the foliation is the $(n-r)$-hyperplane $\{0\}\times \mathbb R^{n-r}$ in $\mathbb R^{n+1}$. Then $\mathcal F$ is an $r$-minimal foliation and $S_r\neq 0$; consequently, any leaf is $r$-stable. \end{exa} \section{Conformal fields} \label{s:2} Let $U$ be a conformal vector field on a manifold $M$ and $f=\langle U,N\rangle$. Recently, Barros--Sousa and Al\'{i}as--Colares \cite{ac,bc} have obtained an expression for $L_rf$ when $M$ is either a manifold with constant sectional curvature or a generalized Robertson--Walker spacetime. Now, we generalize these results to the case of arbitrary manifolds and obtain some other consequences. \begin{thm} \label{t:2} Let $L$ be a submanifold (not necessarily a leaf) of an arbitrary manifold $M$ with a unit normal vector field $N$. If $U$ is a conformal vector field on $M$ and $f=\langle U,N\rangle$, then \[ J_rf=-U^\top(S_{r+1})-(r+1)kS_{r+1}-N(k)(n-r)S_r, \] or, equivalently, \[ L_rf=-\langle U,\nabla S_{r+1}\rangle -f\operatorname{Tr}(A^2T_r)-f\operatorname{Tr}(\overline R(N)T_r)-k\operatorname{Tr}(AT_r)-N(k)\operatorname{Tr}(T_r), \] where $2k$ is the conformal factor of $U$. \end{thm} {\it Proof.} Let $X\in\Gamma(T(L))$.
Since $U$ is a conformal field, we have \begin{align*} &\langle \nabla f,X\rangle=X(f)=\langle\overline \nabla_XU,N\rangle+\langle U,\overline\nabla_XN\rangle\\ &=-\langle X,\overline\nabla_NU\rangle+\langle U^\top,\overline\nabla_XN\rangle\\ &=-\langle X,\overline\nabla_NU \rangle-\langle U^\top,AX\rangle= -\langle X,(\overline\nabla_NU)^\top+AU^\top\rangle. \end{align*} Thus, we get \begin{equation} \nabla f =-((\overline\nabla_NU)^\top+AU^\top). \end{equation} Let $p\in L$ be an arbitrary point and $\{e_i\}_{i=1}^n$ a local orthonormal frame such that $T_r(e_i(p))=\mu_{i,r}e_i(p)$. By definition of $L_r$ we have \[ (L_rf)(p)=\langle\nabla_{e_i}(\nabla f),T_re_i\rangle(p), \] where, as everywhere, repeated indices denote summation. \par Thus at the point $p$ we obtain \begin{align*} &\langle \nabla_{e_i}(\overline\nabla_NU)^\top,T_re_i\rangle = \langle \overline\nabla_{e_i}(\overline\nabla_NU)^\top,T_re_i\rangle \\ &=\langle \overline\nabla_{e_i}\overline\nabla_NU,T_re_i\rangle-\langle\overline\nabla_NU,N\rangle \langle \overline\nabla_{e_i}N,T_re_i\rangle \\ &=\langle\overline R(e_i,N)U,T_re_i\rangle+k\operatorname{Tr}(AT_r) +\langle\overline\nabla_N\overline\nabla_{e_i}U,T_re_i\rangle+ \langle\overline\nabla_{[e_i,N]}U,T_re_i\rangle\\ &=\langle\overline R(e_i,N)U,T_re_i\rangle+k\operatorname{Tr}(AT_r)+ \langle\overline\nabla_N\overline\nabla_{e_i}U,T_re_i\rangle\\ &+\langle\overline\nabla_{\overline\nabla_{e_i}N}U,T_re_i\rangle- \langle\overline\nabla_{\overline\nabla_Ne_i}U,T_re_i\rangle\\ &=\langle\overline R(e_i,N)U,T_re_i\rangle+k\operatorname{Tr}(AT_r)+ \langle\overline\nabla_{\overline\nabla_{e_i}N}U,T_re_i\rangle\\ &+\mu_{i,r}(\langle\overline\nabla_N\overline\nabla_{e_i}U,e_i\rangle- \langle\overline\nabla_{\overline\nabla_Ne_i}U,e_i\rangle).
\end{align*} Since, for a fixed $i$, we have $\langle\overline\nabla_{e_i}U,e_i\rangle=k$ and $\langle\overline\nabla_Ne_i,e_i\rangle=0$, we obtain \[ \langle\overline\nabla_N\overline\nabla_{e_i}U,e_i\rangle= -\langle\overline\nabla_{e_i}U,\overline\nabla_Ne_i\rangle+N(k) =\langle\overline\nabla_{\overline\nabla_Ne_i}U,e_i\rangle+N(k). \] Consequently, at $p$ we have \begin{align} \label{e2} &\langle \nabla_{e_i}(\overline\nabla_NU)^\top,T_re_i\rangle\nonumber\\ &=\langle\overline R(e_i,N)U,T_re_i\rangle+ \langle\overline\nabla_{\overline\nabla_{e_i}N}U,T_re_i\rangle +k\operatorname{Tr}(AT_r)+N(k)\operatorname{Tr}(T_r)\nonumber\\ &=\langle\overline R(T_re_i,N)U,e_i\rangle- \langle Ae_i,e_j\rangle \langle(\overline\nabla_{e_j}U)^\top,T_re_i\rangle +k\operatorname{Tr}(AT_r)+N(k)\operatorname{Tr}(T_r)\nonumber\\ &=\langle\overline R(e_i,U)N,T_re_i\rangle-\operatorname{Tr}(AT_r(\overline\nabla U)^\top) +k\operatorname{Tr}(AT_r)+N(k)\operatorname{Tr}(T_r) \end{align} On the other hand, from the Codazzi equation, we obtain \begin{align*} &\langle\overline R(e_i,U^\top)N,T_re_i\rangle=\langle(\nabla_{U^\top}A)e_i,T_re_i\rangle- \langle(\nabla_{e_i}A)U^\top,T_re_i\rangle\\ &=\langle (T_r\nabla_{U^\top}A)e_i,e_i\rangle-\langle\nabla_{e_i}(AU^\top),T_re_i\rangle+ \langle A(\nabla_{e_i}U^\top),T_re_i\rangle\\ &= \operatorname{Tr}(T_r\nabla_{U^\top}A)-\langle\nabla_{e_i}(AU^\top),T_re_i\rangle+ \langle \overline\nabla_{e_i}U^\top,AT_re_i\rangle\\ &=U^\top(S_{r+1})-\langle\nabla_{e_i}(AU^\top),T_re_i\rangle -\langle \overline\nabla_{e_i}(fN),AT_re_i\rangle+\langle \overline\nabla_{e_i}U,AT_re_i\rangle\\ &=U^\top(S_{r+1})-\langle\nabla_{e_i}(AU^\top),T_re_i\rangle+f\langle Ae_i,AT_re_i\rangle +\langle \overline\nabla_{e_i}U,AT_re_i\rangle\\ &=U^\top(S_{r+1})-\langle\nabla_{e_i}(AU^\top),T_re_i\rangle+f\operatorname{Tr}(A^2T_r) +\operatorname{Tr}(T_rA(\overline\nabla U)^\top).
\end{align*} Thus \begin{align} \label{e3} \langle\nabla_{e_i}(AU^\top),T_re_i\rangle&= -\langle\overline R(e_i,U^\top)N,T_re_i\rangle+U^\top(S_{r+1})\nonumber\\ &+f\operatorname{Tr}(A^2T_r) +\operatorname{Tr}(T_rA(\overline\nabla U)^\top). \end{align} Since $AT_r=T_rA$, we have \begin{equation} \label{e4} \operatorname{Tr}(T_rA(\overline\nabla U)^\top)=\operatorname{Tr}(AT_r(\overline\nabla U)^\top). \end{equation} Finally, from Eqs. (\ref{e2}), (\ref{e3}) and (\ref{e4}), we get at the point $p$ \begin{align*} L_rf=&\langle\nabla_{e_i}(\nabla f),T_re_i\rangle =-\langle\overline R(e_i,U)N,T_re_i\rangle+\langle\overline R(e_i,U^\top)N,T_re_i\rangle\\ -&U^\top(S_{r+1})-f\operatorname{Tr}(A^2T_r)-k\operatorname{Tr}(AT_r)-N(k)\operatorname{Tr}(T_r)\\ =&-f\operatorname{Tr}(\overline R(N)T_r)-f\operatorname{Tr}(A^2T_r)-U^\top(S_{r+1}) -k\operatorname{Tr}(AT_r)-N(k)\operatorname{Tr}(T_r). \end{align*} Since $p$ is arbitrary, the assertion follows.\hfill$\square$ \begin{cor} \label{c:1} When $U$ is a Killing field, we get \[ J_r(f)=-U^\top(S_{r+1})=-\langle\nabla S_{r+1},U\rangle. \] \end{cor} For further applications see Proposition \ref{p:2} and Corollary \ref{c:2}. \section{Jacobi fields} \label{s:3} Let $M$ be an arbitrary manifold and let $L$ be a submanifold of $M$ with a unit orthogonal field $N$. Then the operator $J_r$ induces a new mapping (also denoted $J_r$), $J_r:\Gamma(T(L)^\bot)\rightarrow \Gamma(T(L)^\bot)$, as follows: \[ J_r(fN)=J_r(f)N. \] \begin{defin} We say that $V\in \Gamma(T(L)^\bot)$ is an $r$th Jacobi field of $L$ if $J_r(V)=0$. We say that $V\in \Gamma(T(\mathcal F)^\bot)$ is an $r$th Jacobi field of $\mathcal F$ if it is an $r$th Jacobi field for any leaf $L$ of $\mathcal F$. \end{defin} \begin{prop} \label{p:2} Let $L$ be a submanifold of an arbitrary Riemannian manifold $M$ such that $S_{r+1}$ is constant on $L$. Then the normal component $U^\bot$ of a Killing vector field $U$ is an $r$th Jacobi vector field.
\end{prop} {\it Proof.} The proof follows immediately from Corollary \ref{c:1}.\hfill$\square$ \begin{thm} \label{t:3} Let $M$ be an arbitrary Riemannian manifold and $\mathcal F$ be a foliation of $M$ whose leaves have the same constant $(r+1)$th mean curvature (e.g. zero). If $V\in\Gamma(TM)$ preserves $\mathcal F$ (i.e. maps leaves onto leaves), then $V^\bot=fN$ is an $r$th Jacobi field of $\mathcal F$. \end{thm} {\it Proof.} Since $V$ is foliation preserving, $[V,\Gamma(T(\mathcal F))]\subset \Gamma(T(\mathcal F))$, so $\nabla f+f \overline\nabla_NN=0$ on any leaf $L$. Using this and Proposition \ref{p1}, we get \begin{align*} J_rf=&L_rf+f\operatorname{Tr}(A^2T_r)+f\operatorname{Tr}(\overline R(N)T_r)\\ =&\divergence(T_r(\nabla f))-\langle\divergence T_r,\nabla f\rangle+f\operatorname{Tr}(A^2T_r)+f\operatorname{Tr}(\overline R(N)T_r)\\ =&\divergence(T_r(\nabla f))+f\divergence(T_r\overline\nabla_NN)-\langle\divergence T_r, \nabla f+f\overline\nabla_NN\rangle\\ -&f\langle\overline\nabla_NN,T_r\overline\nabla_NN\rangle\\ =&\divergence(T_r(\nabla f+f\overline\nabla_NN))-\langle\divergence T_r, \nabla f+f\overline\nabla_NN\rangle\\ -&\langle \nabla f+f\overline\nabla_NN,T_r\overline\nabla_NN\rangle =0. \end{align*} \hfill$\square$ \begin{exa} \label{ex:1} Let $M=\mathbb R\times \mathbb R^n$ ($M=\mathbb R\times T^n$) be a foliated manifold whose leaves are $\{t\}\times \mathbb R^n$. For functions $\phi_1,\ldots,\phi_n:\mathbb R\rightarrow \mathbb R$ we may define a metric $\langle,\rangle$ on $M$ by \[ \langle,\rangle=dt^2+e^{-2\int\phi_i(t)dt}(dx^i)^2. \] Then $S_{r+1}=\sigma_{r+1}(\phi_1,\ldots,\phi_n)$, so we obtain many metrics for which $S_{r+1}$ is constant on $M$. Then a vector field $V=f(t)\frac{\partial}{ \partial t}$ preserves the foliation $\mathcal F$, and consequently $V$ is an $r$th Jacobi field. Note that $V$ is not, in general, a Killing field, so we could not use Proposition \ref{p:2}.
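The elementary symmetric functions appearing here ($S_{r+1}=\sigma_{r+1}(\phi_1,\ldots,\phi_n)$ above, and the Newton-transformation traces of Section \ref{s:0}) are easy to check numerically. The following is a small stdlib-only sketch (helper names are mine) verifying $\operatorname{Tr}(T_r)=(n-r)S_r$ and $\operatorname{Tr}(AT_r)=(r+1)S_{r+1}$ in the diagonalized case:

```python
from itertools import combinations
from math import prod

def sigma(r, ks):
    """Elementary symmetric function sigma_r of the principal curvatures."""
    if r == 0:
        return 1
    return sum(prod(c) for c in combinations(ks, r))

def newton_eigenvalues(r, ks):
    """Eigenvalues mu_{i,r} of T_r: mu_{i,r} = d sigma_{r+1} / d kappa_i,
    i.e. sigma_r of the curvatures with kappa_i removed."""
    return [sigma(r, ks[:i] + ks[i + 1:]) for i in range(len(ks))]

ks = [1.0, 2.0, 3.0]            # sample principal curvatures, n = 3
n, r = len(ks), 1
S_r = sigma(r, ks)              # S_1 = 6
S_r1 = sigma(r + 1, ks)         # S_2 = 11
mus = newton_eigenvalues(r, ks)
# Trace identities from the preliminaries:
print(sum(mus) == (n - r) * S_r)                                  # Tr(T_r) = (n-r) S_r
print(sum(k * m for k, m in zip(ks, mus)) == (r + 1) * S_r1)      # Tr(A T_r) = (r+1) S_{r+1}
```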
\end{exa} \begin{prop} \label{p3} Let $\mathcal F$ be a foliation of a Riemannian manifold $M$ whose leaves are closed and have the same constant $(r+1)$th mean curvature. If on any leaf $L$ the operator $T_r$ is definite and $\divergence(T_r)=0$, then any $r$th Jacobi field of $\mathcal F$ preserves $\mathcal F$. \end{prop} {\it Proof.} If $V=fN$ is an $r$th Jacobi field, then $J_r(V)=0$, thus $I_r(f,f)=0$ on each leaf $L$. On the other hand, as in the proof of Theorem \ref{t:1}, we get \begin{align*} &\int_L\langle T_r(\nabla f+f\overline\nabla_NN),\nabla f+f\overline\nabla_NN\rangle=I_r(f,f). \end{align*} Since $T_r$ is definite, it follows that $\nabla f+f\overline\nabla_NN=0$ on each leaf, and thus $V$ is foliation preserving. \hfill$\square$ \medskip \par For example, if $M=\mathbb R\times T^n$ and the metric and foliation are as in Example \ref{ex:1}, then $\nabla_F A=0$ for any $F\in\Gamma(T(L))$. Consequently $\divergence(T_r)=0$ on any leaf (although $M$ need not be a manifold with constant sectional curvature). \begin{cor} \label{c:2} Under the assumptions of Proposition \ref{p3}, if $U$ is a Killing field on $M$ then $U$ preserves $\mathcal F$. \end{cor}
\section{Introduction and main results} Let $V$ be a complex-valued potential on ${\Bbb R}^d$, where $d$ is odd. We study the spectral properties of the Schr\"odinger operator \begin{equation}\label{Hpic} -\Delta+V(x). \end{equation} Namely, denote by $\lambda_j$ the eigenvalues of the operator \eqref{Hpic}. We are interested in an estimate of the total number $N$ of the eigenvalues $\lambda_j$ in the case where $V$ decays exponentially fast. \iffalse \begin{center} \begin{tikzpicture} \draw[thick,->] (3,0) -- (10,0); \draw[thick,-] (5,.05) -- (10,.05); \draw[thick,->] (5,-2) -- (5,5); \node [above, black] at (7,0) {$ \sigma_{cont}(-\Delta+V)$}; \node [above, black] at (8,3) {$ N=7$}; \tikzstyle{every node}=[draw,shape=circle] \draw[xshift=-1cm] (9,-1) node[circle,fill,inner sep=1pt,label=above:$\lambda_1$](a){}; \draw[xshift=-1cm] (7,1) node[circle,fill,inner sep=1pt,label=above:$\lambda_6$](a){}; \draw[xshift=-1cm] (5,-1) node[circle,fill,inner sep=1pt,label=above:$\lambda_3$](a){}; \draw[xshift=-1cm] (4.5,3) node[circle,fill,inner sep=1pt,label=above:$\lambda_4$](a){}; \draw[xshift=-1cm] (6.5,3.5) node[circle,fill,inner sep=1pt,label=above:$\lambda_7$](a){}; \draw[xshift=-1cm] (3.5,1) node[circle,fill,inner sep=1pt,label=above:$\lambda_2$](a){}; \draw[xshift=-1cm]( 6.5,-1.5) node[circle,fill,inner sep=1pt,label=above:$\lambda_5$](a){}; \end{tikzpicture} {\bf Fig. 1. Eigenvalues of $-\Delta+V$} \end{center} \fi There has been a lot of recent activity concerning uniform bounds on eigenvalues of Schr\"odinger operators with complex-valued potentials which are decaying at infinity. By `uniform bounds' we refer to bounds which do not only hold in an asymptotic regime and which depend on the potential only through some simple and computationally easily accessible quantities like $L^p$ norms. We refer to \cite{Da} for a review of the state of the art of non-selfadjoint Schr\"odinger operators and for motivations and applications. 
Bounds on single eigenvalues were proved, for instance, in \cite{AAD,DN,FrLaSe,Fr,En} and bounds on sums of powers of eigenvalues were proved, for instance, in \cite{FrLaLiSe,LaSa,DeHaKa0,DeHaKa,BGK,FrSa,Fr3}. The latter bounds generalize the Lieb--Thirring bounds \cite{LiTh} to the non-selfadjoint setting. Despite this activity, there have been almost no results on the \emph{number} of eigenvalues of Schr\"odinger operators with complex potentials. (The only exceptions are two papers \cite{St1,St3} in one and three dimensions, whose relation to our work we discuss below.) To see that this is a subtle question we recall that, for instance, for the Schr\"odinger operator $-d^2/dx^2+V$ on the half-line with a Dirichlet boundary condition at the origin, Bargmann's bound states that for \emph{real} potentials $V$ the number of eigenvalues can be bounded by $\int |x| |V(x)|\,dx$. It is a remarkable result of Pavlov \cite{Pa2} that a similar bound cannot hold in the non-selfadjoint case. In fact, he showed that for any $0<\alpha<1/2$ and any $\lambda>0$ there is a (real) potential $V$ satisfying, for some $C,c>0$, \begin{equation} \label{eq:pavlov} |V(x)| \leq C e^{-cx^\alpha} \qquad\text{for all}\ x\in (0,\infty) \end{equation} and a complex number $\sigma$ such that the operator $-d^2/dx^2+V$ in $L^2(0,\infty)$ with boundary condition $\psi'(0)=\sigma\psi(0)$ has an infinite number of eigenvalues accumulating at $\lambda$. On the other hand, Pavlov \cite{Pa1} also showed that if, for some $C,c>0$, \begin{equation} \label{eq:pavlov2} |V(x)| \leq C e^{-cx^{1/2}} \qquad\text{for all}\ x\in (0,\infty) \,, \end{equation} then the number of eigenvalues of the operator $-d^2/dx^2+V$ in $L^2(0,\infty)$ with any boundary condition of the form $\psi'(0)=\sigma\psi(0)$, $\sigma\in\mathbb{C}$, or $\psi(0)=0$ is finite. Pavlov also proves a similar theorem in three dimensions. 
Pavlov's proofs, however, seem to give no bound on the number of eigenvalues in terms of the constants $c$ and $C$ in \eqref{eq:pavlov2}. Before Pavlov, Naimark \cite{Na} had shown that the number of eigenvalues is finite if \eqref{eq:pavlov} holds with $\alpha=1$. Similar results are known in two and three dimensions, see \cite{Ma1,Ma2,Mu}, but none of these proofs gives uniform bounds on the corresponding number of eigenvalues. Such uniform bounds, in arbitrary odd dimensions, are the main result of the present paper. More precisely, we shall prove the following two theorems. \begin{theorem} \label{main1} The number $N$ of eigenvalues of $-\frac{d^2}{dx^2}+V$ in $L^2(\mathbb{R}_+)$ with a Dirichlet boundary condition, counting algebraic multiplicities, satisfies, for any $\epsilon>0$, $$ N \leq \frac{1}{\epsilon^2} \left( \int_0^\infty e^{\epsilon x} |V(x)|\,dx \right)^2 \,. $$ \end{theorem} \begin{theorem} \label{main2} Let $d\geq 3$ be odd. Then the number $N$ of eigenvalues of $-\Delta+V$ in $L^2(\mathbb{R}^d)$, counting algebraic multiplicities, satisfies, for any $\epsilon>0$, $$ N \leq \frac{C_d}{\epsilon^2} \left( \int_{\mathbb{R}^d} e^{\epsilon |x|} |V(x)|^{(d+1)/2} \,dx \right)^2 $$ with a constant $C_d$ depending only on $d$. \end{theorem} The proofs of both theorems are based on a `trace formula approach' which consists in identifying eigenvalues with zeroes of a certain analytic function and in using bounds on the zeroes of analytic functions. This approach was used before by two of us (A.L. and O.S.) in self-adjoint problems \cite{LNS} and by one of us (O.S.) in non-selfadjoint problems \cite{Sa}. In non-selfadjoint problems, a related method is used, for instance, in \cite{BGK,DeHaKa0,DeHaKa,FrSa}. In the present paper we combine these techniques with novel resolvent bounds in trace ideals, which are our technical main results. Resolvent bounds in operator norm are due to Kenig--Ruiz--Sogge \cite{KRS}. 
In connection to eigenvalue bounds for non-selfadjoint operators they were exploited in \cite{Fr} and generalized to trace ideals in \cite{FrSa}. Here we go a significant step further and show that, if $V$ decays exponentially in the sense of the assumptions in Theorems \ref{main1} and \ref{main2}, then the Birman--Schwinger operator admits an analytic continuation and resolvent bounds, similar to those of \cite{FrSa}, remain valid for its continuation. Our proof uses complex interpolation as in \cite{KRS} and \cite{FrSa}, but the choice of the analytic family is more involved than in those papers. In dimensions one and three the resolvent kernel is explicit and this is important for the proofs in \cite{St1,St3}. In contrast, our Theorem \ref{main2} is valid in arbitrary odd dimensions $d\geq 3$, where the resolvent kernel is only given in terms of Bessel functions, which become increasingly more complicated as the dimension increases. Complex interpolation helps us to go around this obstacle. The assumption that the space dimension is odd comes from the fact that in this case the resolvent admits an analytic continuation to the lower half-plane (while there is a branch point at zero for even dimensions). Finally, we note that, while we managed to obtain rather explicit and transparent bounds for potentials decaying exponentially, the question whether there is a quantitative version of Pavlov's bound remains a challenging open question. \subsection*{Acknowledgements} The first and third author would like to thank the Mittag--Leffler Institute for hospitality. The first author acknowledges support through NSF grant DMS-1363432. AL was supported by the grant of the Russian Federation Government to support scientific research under the supervision of leading scientist at Siberian Federal University, No 14.Y26.31.0006. \section{Zeroes of analytic functions} The following proposition gives a useful bound on the zeroes of an analytic function in a half-plane. 
\begin{proposition}\label{zeroes} Let $\eta\in\mathbb{R}\setminus\{0\}$. Let $a$ be an analytic function in $\{\im k>\eta\}$ which is continuous up to the boundary and satisfies \begin{equation} \label{eq:ass1} a(k) = 1 + o(|k|^{-1}) \qquad\text{as}\ |k|\to\infty \ \text{in}\ \{\im k>\eta\} \end{equation} and, for some $A\geq 0$ and $\nu>1$, \begin{equation} \label{eq:ass2} \ln|a(k)| \leq A |k|^{-\nu} \qquad\text{if}\ \im k = \eta \,. \end{equation} Then the zeroes $k_j$ of $a$ in $\{\im k>\eta\}$, repeated according to multiplicities, satisfy \begin{equation} \label{eq:zeroes} \sum_j \left( \im k_j - \eta \right) \leq c_\nu A |\eta|^{-\nu+1} \end{equation} with $$ c_\nu = \frac{1}{2\pi} \int_\mathbb{R} \frac{dt}{(1+t^2)^{\nu/2}} \,. $$ \end{proposition} The integral appearing in $c_\nu$ can be expressed in terms of the Gamma function. For $\nu=2$, the computation is straightforward and we obtain \begin{equation} \label{eq:zeroesconst1} c_2 = 1/2 \,. \end{equation} \begin{proof} We introduce the Blaschke product $$ B(k) = \prod_j \frac{k-k_j}{k-\overline{k_j} - 2i\eta} \,, $$ so that $a(k)/B(k)$ is analytic and non-zero in $\{\im k>\eta\}$ and $\log (a(k)/B(k))$ exists and is analytic there. For $R>|\eta|$ we denote by $C_R$ the contour which consists of the interval $\{k\in {\Bbb C}:\, k=x+i\eta,\ |x|\leq \sqrt{R^2-\eta^2}\}$, traversed from left to right, and the circular part $\Gamma_R:=\{k\in {\Bbb C}:\, |k|=R \,, \im k >\eta\}$, traversed counterclockwise. \iffalse \begin{center} \begin{tikzpicture} \draw[thick,->] (-5,0) -- (5,0); \draw[thick,->] (0,-3) -- (0,5); \draw[thick, -] (-3,-.9) -- (3,-.9); \draw[thick,<->] (1,0) -- (1,-.9); \draw[thick,black] (3,-.9) arc (-10:190:3cm); \node [above, black] at (2,2) {$C_R$}; \node [above, black] at (1.2,-.7) {$\eta$}; \end{tikzpicture} {\bf Fig. 2.
Contour of integration $C_R$} \end{center} \fi Then $$ \int_{C_R} \log \frac{a(k)}{B(k)}\,dk = 0 \,, $$ and therefore \begin{equation} \label{eq:zeroesproof1} \re \int_{-\sqrt{R^2-\eta^2}}^{\sqrt{R^2-\eta^2}} \log \frac{a(x+i\eta)}{B(x+i\eta)}\,dx + \re \int_{\Gamma_R} \log\frac{a(k)}{B(k)}\,dk = 0 \,. \end{equation} We note that $|B(x+i\eta)|=1$ if $x\in\mathbb{R}$ and, therefore, \begin{align} \label{eq:zeroesproof2} \re \int_{-\sqrt{R^2-\eta^2}}^{\sqrt{R^2-\eta^2}} \log \frac{a(x+i\eta)}{B(x+i\eta)}\,dx & = \int_{-\sqrt{R^2-\eta^2}}^{\sqrt{R^2-\eta^2}} \ln \left| \frac{a(x+i\eta)}{B(x+i\eta)} \right| \,dx \notag \\ & = \int_{-\sqrt{R^2-\eta^2}}^{\sqrt{R^2-\eta^2}} \ln \left| a(x+i\eta) \right| \,dx \,. \end{align} On the other hand, by \eqref{eq:ass1} and $B(k) = 1+O(|k|^{-1})$ (note that the $|k_j|$ are contained in a bounded set as a consequence of \eqref{eq:ass1}), both $\log a(k)$ and $\log B(k)$ are well-defined for all sufficiently large $|k|$ and we have, for all sufficiently large $R$, \begin{equation} \label{eq:zeroesproof3} \re \int_{\Gamma_R} \log\frac{a(k)}{B(k)}\,dk = \re \int_{\Gamma_R} \log a(k)\,dk - \re \int_{\Gamma_R} \log B(k)\,dk \,. \end{equation} We conclude from \eqref{eq:zeroesproof1}, \eqref{eq:zeroesproof2} and \eqref{eq:zeroesproof3} that \begin{equation} \label{eq:traceformula} \re \int_{\Gamma_R} \log B(k)\,dk = \int_{-\sqrt{R^2-\eta^2}}^{\sqrt{R^2-\eta^2}} \ln \left| a(x+i\eta) \right| \,dx + \re \int_{\Gamma_R} \log a(k)\,dk \end{equation} for all sufficiently large $R$. We assume that $R$ is so large that $|k_j|<R$ and $|k_j -i\eta|<\sqrt{R^2-\eta^2}$ for all $j$. Then, by analyticity, \begin{equation} \label{eq:tf0} \int_{\Gamma_R} \log B(k)\,dk = \int_{\tilde\Gamma_R} \log B(k) \,dk \end{equation} with $\tilde\Gamma_R := \{ |k-i\eta|=\sqrt{R^2-\eta^2}\,, \im k>\eta \}$, traversed counterclockwise. 
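For completeness we verify the unimodularity of $B$ on the line $\im k=\eta$, which was used above: for $z=x+i\eta$ with $x\in\mathbb{R}$ and any $j$,
\[
|z-k_j|^2=(x-\re k_j)^2+(\eta-\im k_j)^2=|z-\overline{k_j}-2i\eta|^2\,,
\]
since $\overline{k_j}+2i\eta$ has real part $\re k_j$ and imaginary part $2\eta-\im k_j$; hence each factor in the product defining $B$ has modulus one there.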
Since $$ \log B(k) = 2i \sum_j \frac{\eta - \im k_j}{k-i\eta} + O( (k-i\eta)^{-2}) \,, $$ we get \begin{align} \label{eq:tf1} \int_{\tilde\Gamma_R} \log B(k) \,dk & = \int_{|\tilde k|=\sqrt{R^2-\eta^2},\ \im\tilde k>0} \log B(\tilde k+i\eta) \,d\tilde k \notag \\ & = -2\pi \sum_j \left(\eta - \im k_j\right) + O( (R^2-\eta^2)^{-1/2}) \qquad\text{as}\ R\to\infty \,. \end{align} On the other hand, by \eqref{eq:ass1}, \begin{align} \label{eq:tf2} \re \int_{\Gamma_R} \log a(k)\,dk = o(1) \qquad\text{as}\ R\to\infty \,. \end{align} Finally, by \eqref{eq:ass2}, \begin{align} \label{eq:tf3} \int_{-\sqrt{R^2-\eta^2}}^{\sqrt{R^2-\eta^2}} \ln \left| a(x+i\eta) \right| \,dx & \leq A \int_{-\sqrt{R^2-\eta^2}}^{\sqrt{R^2-\eta^2}} \frac{dx}{(x^2+\eta^2)^{\nu/2}} \leq A \int_\mathbb{R} \frac{dx}{(x^2+\eta^2)^{\nu/2}} \notag \\ & = A |\eta|^{-\nu+1} \int_\mathbb{R} \frac{dt}{(1+t^2)^{\nu/2}} \,. \end{align} Inequality \eqref{eq:zeroes} now follows from \eqref{eq:traceformula}, \eqref{eq:tf0}, \eqref{eq:tf1}, \eqref{eq:tf2} and \eqref{eq:tf3}. \end{proof} \begin{corollary}\label{zeroescor} Let $\eta<0$. Let $a$ be an analytic function in $\{\im k>\eta\}$ which satisfies \eqref{eq:ass1} with $\eta$ replaced by $\eta'$ for any $\eta'>\eta$. Moreover, assume that \eqref{eq:ass2} holds for some $A\geq 0$ and $\nu>1$ with $\eta$ replaced by $\eta'$ for any $\eta'>\eta$ sufficiently close to $\eta$. Then the zeroes $k_j$ of $a$ in $\{\im k\geq 0\}$, repeated according to multiplicities, satisfy $$ \#\{ j:\ \im k_j\geq 0 \} \leq c_\nu A |\eta|^{-\nu} \,. $$ \end{corollary} \begin{proof} We apply Proposition \ref{zeroes} for every $\eta'>\eta$ sufficiently close to $\eta$ and obtain $$ \sum_j \left( \im k_j - \eta' \right)_+ \leq c_\nu A |\eta'|^{-\nu+1} \,. $$ Clearly, we have $$ \sum_j \left( \im k_j - \eta'\right)_+ \geq |\eta'|\ \#\left\{ j:\ \im k_j\geq 0 \right\} \,. $$ The corollary follows by passing to the limit $\eta'\to\eta$ in the resulting inequality.
\end{proof} \section{Traces and determinants} We use the standard notation $\mathfrak S_p$ for the Schatten classes with exponent $1\leq p<\infty$. If $n\in\mathbb{N}$, $K\in\mathfrak S_n$ and $\lambda_j(K)$ denote the eigenvalues of $K$, repeated according to algebraic multiplicities, the $n$-th order regularized determinant $\det{}_n(1+K)$ is defined by $$ \det{}_n(1+K) = \prod_j \left( \left( 1+\lambda_j(K) \right) \exp\left( \sum_{m=1}^{n-1} \frac{(-1)^m}{m} \lambda_j(K)^m \right) \right) \,. $$ The following properties are well-known, but we include a proof for the sake of completeness. \begin{lemma}\label{detbounds} Let $n\in\mathbb{N}$. \begin{enumerate} \item For any $n-1\leq p\leq n$ with $p>0$ there is a $\Gamma_{n,p}$ such that $$ \ln |\det{}_n(1+K)| \leq \Gamma_{n,p} \|K\|_p^p \,. $$ \item For any $0\leq\theta<1$ and $0<p\leq n$ there is a $\Gamma_{n,p}(\theta)$ such that, if $\|K\|\leq\theta$, then $$ \left| \log \det{}_n(1+K) \right| \leq \Gamma_{n,p}(\theta)\ \|K\|_p^p \,. $$ \end{enumerate} \end{lemma} \begin{proof} To prove the first assertion, let $f(z) := (1+z) \exp \left( \sum_{m=1}^{n-1} \frac{(-1)^m}{m} z^m \right)$. Then $\ln|f(z)|$ can be bounded by a constant times $|z|^n$ for small $|z|$ and by a constant times $|z|^{n-1}$ for large $|z|$. Thus, $\ln|f(z)|\leq \Gamma_{n,p}|z|^p$ for any $n-1\leq p\leq n$, and so $$ \ln |\det{}_n(1+K)| \leq \Gamma_{n,p} \sum_j |\lambda_j(K)|^p $$ By Weyl's inequality \cite[Thm. 1.15]{Si}, the sum on the right side does not exceed $\|K\|_p^p$. To prove the second assertion, we note that since $|\lambda_j(K)|\leq \|K\|\leq\theta<1$, we have \begin{align*} \log \det{}_n(1+K) & = \sum_j \left( \log \left(1+\lambda_j(K) \right) + \sum_{m=1}^{n-1} \frac{(-1)^m}{m} \lambda_j(K)^m \right) \\ & = \sum_j \sum_{m=n}^\infty \frac{(-1)^{m-1}}{m} \lambda_j(K)^m \,. 
\end{align*} We bound $$ \left| \log \det{}_n(1+K) \right| \leq \sum_j \sum_{m=n}^\infty \frac{1}{m} |\lambda_j(K)|^n \theta^{m-n} = \gamma_n(\theta) \sum_j |\lambda_j(K)|^n $$ and obtain the assertion for $p=n$ with $\Gamma_{n,n}(\theta)=\gamma_n(\theta)$ again by Weyl's inequality. If $0<p<n$, we simply use $|\lambda_j(K)|^n \leq \theta^{n-p} |\lambda_j(K)|^p$ and get the inequality with $\Gamma_{n,p}(\theta) = \theta^{n-p} \gamma_n(\theta)$. \end{proof} The previous proof and a simple computation show that for $n=p=2$ one can take \begin{equation} \label{eq:consths} \Gamma_{2,2} = 1/2 \,. \end{equation} We next recall a version of the Birman--Schwinger principle. We state it in the setting of \cite{Fr3}, namely, where $H_0$ is a non-negative self-adjoint operator and $G_0$ and $G$ are operators with $\dom G\supset\dom H_0^{1/2}$ and $\dom G_0\supset\dom H_0^{1/2}$ and such that $G_0(H_0+1)^{-1/2}$ and $G(H_0+1)^{-1/2}$ are compact. Then the quadratic form $$ \| H_0^{1/2} u \|^2 + (G u ,G_0 u) $$ defines an $m$-sectorial operator, which we shall denote by $H$. The Birman--Schwinger principle states that $z\in\rho(H_0)$ is an eigenvalue of $H$ iff $-1$ is an eigenvalue of the Birman--Schwinger operator $G_0(H_0-z)^{-1} G^*$. Moreover, the corresponding geometric multiplicities coincide. The following lemma says that even the algebraic multiplicities of eigenvalues of $H$ can be characterized in terms of a quantity related to the Birman--Schwinger operator. \begin{lemma}\label{mult} Assume that for some $n\in\mathbb{N}$, $G_0(H_0-\zeta)^{-1}G^*\in\mathfrak S_n$ for all $\zeta\in\rho(H_0)$. Then the function $\zeta\mapsto\det{}_n(1+G_0(H_0-\zeta)^{-1}G^*)$ is analytic in $\rho(H_0)$. For $z\in\rho(H_0)$ one has $\det{}_n(1+G_0(H_0-z)^{-1}G^*)=0$ iff $z$ is an eigenvalue of $H$ and the order of the zero coincides with the algebraic multiplicity.
\end{lemma} The analyticity of the function $\zeta\mapsto\det{}_n(1+G_0(H_0-\zeta)^{-1}G^*)$ is well-known and so is the result concerning the algebraic multiplicity in the case $n=1$. The result for general $n$ is essentially due to \cite{LaSu}; see also \cite{Fr3} for an extension of their proof to the present setting. \section{Resolvent bounds}\label{sec:resbounds} In this section we collect trace ideal bounds for the Birman--Schwinger operator \begin{equation} \label{eq:bs} K(k) = \sqrt{V} (-\Delta -k^2)^{-1} \sqrt{|V|} \,. \end{equation} We use the notation $\sqrt{V(x)} = V(x)/\sqrt{|V(x)|}$ if $V(x)\neq 0$ and $\sqrt{V(x)}=0$ if $V(x)=0$. We begin with the case of the half-line, that is, $-\Delta$ in \eqref{eq:bs} denotes the Dirichlet Laplacian on $(0,\infty)$. From the explicit expression of its integral kernel it is easy to see that, if $V$ is bounded and has compact support, $K(k)$ admits an analytic continuation to an entire operator family on $L^2(\mathbb{R}_+)$. The following proposition gives a bound on the Hilbert--Schmidt norm. \begin{proposition}\label{resbound1d} For any $k\in\mathbb{C}\setminus\{0\}$, $$ \| K(k) \|_{\mathfrak S_2} \leq \frac{1}{|k|} \int_0^\infty e^{2x(\im k)_-} |V(x)|\,dx \,, $$ in the sense that $K(k)$ is Hilbert--Schmidt if the integral on the right side is finite. \end{proposition} \begin{proof} The integral kernel of $(-\Delta-k^2)^{-1}$ is the function $$ g_k(x,y) = \frac{1}{2ik} \left( e^{ik(x+y)} - e^{ik|x-y|}\right) \,, $$ which satisfies $$ |g_k(x,y)| \leq \frac{1}{|k|} e^{(x+y)(\im k)_-} \,. $$ Combining this bound with the identity $$ \| K(k) \|_{\mathfrak S_2}^2 = \int_0^\infty \int_0^\infty |V(x)| |g_k(x,y)|^2 |V(y)| \,dx\,dy $$ we obtain the claimed bound. \end{proof} We now consider the case of $\mathbb{R}^d$ with $d\geq 3$ odd. The operator $-\Delta$ in \eqref{eq:bs} denotes the Laplacian in $\mathbb{R}^d$.
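For orientation, in the simplest odd dimension $d=3$ the kernel of the free resolvent is completely explicit, namely
\[
(-\Delta-k^2)^{-1}(x,y)=\frac{e^{ik|x-y|}}{4\pi|x-y|}\,,\qquad \im k>0\,,
\]
and the right side is manifestly an entire function of $k$ for $x\neq y$, in analogy with the formula for $g_k$ on the half-line above.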
It is well-known (see, e.g., \cite[Theorem 3.1]{DZ} for a textbook proof) that, since $d$ is odd, $(-\Delta-k^2)^{-1}$ admits an analytic continuation to an entire operator family when considered as an operator from compactly supported functions in $L^2(\mathbb{R}^d)$ to $L^2_{{\rm loc}}(\mathbb{R}^d)$. Thus, if $V$ is bounded and compactly supported, $K(k)$ has an analytic continuation to an entire operator family on $L^2(\mathbb{R}^d)$. The following proposition implies, in particular, that if $V$ decays exponentially, then the Birman--Schwinger operator also admits an analytic continuation to (part of) the lower half-plane and that this continuation belongs to a certain trace ideal. \begin{proposition}\label{resboundoddd} Let $d\geq 3$ be odd. There are constants $C_d>0$, $\beta_d>0$ such that for any $k\in\mathbb{C}\setminus\{0\}$, $$ \| K(k) \|_{\mathfrak S_{d+1}} \leq C_d \left( \frac{1}{|k|} \int_{\mathbb{R}^d} e^{\beta_d |x|(\im k)_-} |V(x)|^{(d+1)/2} \,dx \right)^{2/(d+1)} \,, $$ in the sense that $K(k)\in\mathfrak S_{d+1}$ if the integral on the right side is finite. \end{proposition} The proof of this proposition is somewhat involved and, in fact, constitutes the main technical result of this paper. In order to present the idea behind the proofs of Theorems \ref{main1} and \ref{main2} more clearly, we defer the proof of Proposition \ref{resboundoddd} to Section \ref{sec:resboundproof}. \section{Proof of Theorems \ref{main1} and \ref{main2}} In this section we prove our main results, Theorems \ref{main1} and \ref{main2}. We prove them simultaneously. Let us assume that $V$ is bounded and has compact support. The bound in this case implies the bound in the general case by a simple density argument. As discussed in Section \ref{sec:resbounds}, the Birman--Schwinger operators $K(k)$ from \eqref{eq:bs} (with $-\Delta$ denoting the Dirichlet Laplacian if $d=1$ and the ordinary Laplacian if $d\geq 3$) extend to an entire family of bounded operators.
The same proof shows that they are not only entire with respect to the norm of bounded operators, but even with respect to the norm of operators in $\mathfrak S_{d+1}$. (In fact, even in $\mathfrak S_{p}$ with $p>d/2$, see \cite[Lemma 3.21]{DZ}.) We emphasize that at this point we use the restriction to bounded, compactly supported potentials; in the general case, Propositions \ref{resbound1d} and \ref{resboundoddd} do not allow us to exclude a singularity at the origin. We will apply Corollary \ref{zeroescor} to the function $$ a(k) := \det{}_{d+1}(1+K(k)) $$ with $\eta= - \epsilon/\beta_d$, where $\beta_d$ is from Proposition \ref{resboundoddd} if $d\geq 3$ is odd and $\beta_1=2$ if $d=1$. Since $K(k)$ is analytic with values in $\mathfrak S_{d+1}$, the function $a$ is analytic. It follows from the resolvent bounds in Propositions \ref{resbound1d} and \ref{resboundoddd}, combined with item (2) in Lemma \ref{detbounds} (with $p=n=d+1$), that assumption \eqref{eq:ass1} is valid. Moreover, combining them with item (1) in Lemma \ref{detbounds} (again with $p=n=d+1$), we see that assumption \eqref{eq:ass2} holds with $\nu=2$ and $$ A = \Gamma_{d+1,d+1} C_d^{d+1} \left( \int e^{\epsilon |x|} |V(x)|^{(d+1)/2}\,dx \right)^2 \,. $$ Here $C_d=1$ if $d=1$. Thus, Corollary \ref{zeroescor} implies that $$ \#\{ j:\ \im k_j \geq 0\} \leq \epsilon^{-2} \beta_d^2 c_2 \Gamma_{d+1,d+1} C_d^{d+1} \left( \int e^{\epsilon |x|} |V(x)|^{(d+1)/2}\,dx \right)^2 \,. $$ It remains to use Lemma \ref{mult}, which says that the $k_j$ with $\im k_j> 0$ coincide with the square roots of the eigenvalues of $-\Delta+V$, counting algebraic multiplicities. This proves Theorems \ref{main1} and \ref{main2}. In the case $d=1$, we can use the values of the constants $$ c_2 = 1/2 \,,\quad \Gamma_{2,2} = 1/2 \,,\quad \beta_1 = 2 \,,\quad C_1 = 1 $$ (see \eqref{eq:consths}, \eqref{eq:zeroesconst1} and Proposition \ref{resbound1d}) to get the explicit constant in Theorem~\ref{main1}. 
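Indeed, spelled out, with these values the constant in the general bound above equals
\[
\beta_1^2\, c_2\, \Gamma_{2,2}\, C_1^{2}=2^2\cdot\tfrac12\cdot\tfrac12\cdot 1=1\,,
\]
so that $N\leq \epsilon^{-2}\left(\int_0^\infty e^{\epsilon x}|V(x)|\,dx\right)^2$, which is exactly the bound stated in Theorem \ref{main1}.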
\qed \section{Proof of Proposition \ref{resboundoddd}}\label{sec:resboundproof} The bound from Proposition \ref{resboundoddd} for $\im k\geq 0$ is contained in \cite{FrSa}, so we focus on the case $\im k<0$. We are going to break the proof into two steps, according to whether $|\im k|/|\re k|$ is large or small. \begin{lemma}\label{resbound1} Let $d\geq 3$. There are $\alpha_d,\beta_d,\gamma_d>0$ such that, for any $k\in\mathbb{C}$ with $\im k< 0$ and $|\re k|> \gamma_d|\im k|$, \begin{equation} \label{eq:resbound1} \left\| K(k) \right\|_{\mathfrak S_{d+1}} \leq \alpha_d \left( \frac{\int_{\mathbb{R}^d} e^{\beta_d |\im k||x|} |V(x)|^{(d+1)/2}\,dx}{|\re k| - \gamma_d |\im k|} \right)^{2/(d+1)} \,. \end{equation} One can take $\beta_d =2(e^{(d+1)/2}-1)/(e-1)$ and $\gamma_d=e^{(d+1)/2}/(e-1)$. \end{lemma} We emphasize that this lemma does not need $d$ to be odd. For even $d$ one can still prove that $K(k)$ has an analytic continuation to the set $\mathbb{C}\setminus(-i)[0,\infty)$. \begin{proof} By a density argument we may assume that $V$ is bounded and compactly supported. As discussed above, under this assumption $K(k)$ has an analytic continuation to an entire operator family. We will show that for any $k$ as in the lemma and any finite rank operator $Q$, \begin{equation} \label{eq:resbound1proof} \left| \tr K(k)Q\right| \leq \alpha_d \left( \frac{\int_{\mathbb{R}^d} e^{\beta_d (\im k)_-|x|} |V(x)|^{(d+1)/2}\,dx}{|\re k| - \gamma_d |\im k|} \right)^{2/(d+1)} \|Q\|_{\mathfrak{S}_{(d+1)/d}} \,, \end{equation} which will imply the assertion. To prove \eqref{eq:resbound1proof} we use complex interpolation. Namely, for fixed $k$ as in the lemma we will construct an analytic family of operators $K_\zeta$ such that $K_\zeta=K(k)$ at $\zeta=1$. The construction of $K_\zeta$ proceeds as follows. If $\im k>0$ and $\re\zeta\geq 0$, then the operator $(-\Delta-k^2)^{-\zeta}$ is well-defined by the spectral theorem or, equivalently, as a multiplier in Fourier space.
Here $(\cdot)^{-\zeta}$ denotes the principal branch. If $\re\zeta>0$, this is an integral operator with integral kernel \begin{equation} \label{eq:reskernel} \left(-\Delta-k^2 \right)^{-\zeta}(x,y) = \frac{1}{(2\pi)^d} \int_{\mathbb{R}^d} \frac{e^{i\xi\cdot (x-y)}}{\left(\xi^2-k^2\right)^\zeta}\,d\xi \,. \end{equation} We recall \cite[Section III.2.8]{GS} the formula \begin{equation} \label{eq:fouriertrafo} \int_{\mathbb{R}^d} \frac{e^{i\xi\cdot x}}{\left(\xi^2-k^2\right)^\zeta}\,d\xi = (2\pi)^{d/2} \, \frac{2^{1-\zeta}}{\Gamma(\zeta)} \left( \frac{-ik}{|x|} \right)^{(d-2\zeta)/2} K_{(d-2\zeta)/2}(-ik|x|) \,, \end{equation} valid for $\im k>0$ and $\re\zeta>0$. Here $K_\nu$ denotes the Bessel function of the third kind. We will need the fact that \begin{equation} \label{eq:besselnu} K_\nu(z) = K_{-\nu}(z) \,, \end{equation} as well as the following integral representation for this function \cite[Section 7.3.4]{EMOT}, \begin{equation} \label{eq:bessel} K_\nu(z) = \frac{1}{\Gamma(\nu+1/2)} \left(\frac{\pi}{2z}\right)^{1/2} e^{-z} \int_0^\infty e^{-t} t^{\nu-1/2} \left( 1+ \frac{t}{2z}\right)^{\nu-1/2}\,dt \qquad\text{if}\ \re\nu>-1/2 \,. \end{equation} For fixed $\zeta$ with $\re\zeta>0$, the right side of \eqref{eq:fouriertrafo} has an analytic continuation with respect to $k$, with $k=0$ being possibly a branch point. This allows us to analytically continue the operator $W(-\Delta-k^2)^{-\zeta}W$ to the lower half-plane if $W$ is a bounded, compactly supported function. At the same time, for fixed $k\neq 0$ (possibly in the lower half-plane), the operator family $W(-\Delta-k^2)^{-\zeta}W$ is analytic with respect to $\zeta$ in the right half-plane $\{\re\zeta>0\}$. We now fix $k\in\mathbb{C}$ with $\im k< 0$ and $\re k\neq 0$ and set $$ k_\zeta = k + i |\im k| \frac{e-e^\zeta}{e-1} \,. $$ For fixed $\re\zeta$ this describes a circle centered at $k+i|\im k|e/(e-1)$ with radius $|\im k| e^{\re\zeta}/(e-1)$.
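Two elementary properties of this family can be checked directly: at $\zeta=1$ we recover $k_1=k$, while for $\zeta=i\tau$ with $\tau\in\mathbb{R}$,
\[
\im k_{i\tau}=-|\im k|+|\im k|\,\frac{e-\cos\tau}{e-1}=|\im k|\,\frac{1-\cos\tau}{e-1}\geq 0\,,
\]
so on the line $\re\zeta=0$ the point $k_\zeta$ lies in the closed upper half-plane.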
\iffalse \begin{center} \begin{tikzpicture} \draw[thick,->] (0,0) -- (10,0); \draw[thick,->] (0,0) -- (0,5); \draw (5,1) circle (1 cm); \node [above, black] at (5,2) {${\rm Re}\,\zeta=0$}; \draw (5,1) circle (2 cm); \node [above, black] at (5,3) {${\rm Re}\,\zeta=1$}; \draw (5,1) circle (3.2 cm); \node [above, black] at (5,4.2) {${\rm Re}\,\zeta=(d+1)/2$}; \node [above, black] at (5,-1) {${\bf k}$}; \draw (5,-1) circle (.5 mm); \end{tikzpicture} {\bf Fig. 3. Curves $t\mapsto k_{a+it}$ for $a=0$, $a=1$ and $a=(d+1)/2$} \end{center} \bigskip \fi We consider the function $$ f(\zeta):= e^{\zeta^2} \, \tr \left( S |V|^{\zeta/2} (-\Delta-k_\zeta^2)^{-\zeta} |V|^{\zeta/2} U |Q|^{(d+1-\zeta)/d} \right) \,, $$ where $S(x)=V(x)/|V(x)|$ if $V(x)\neq 0$ and $S(x)=0$ if $V(x)=0$ and where $Q=U|Q|$ is the polar decomposition of $Q$. The function $f$ is analytic in $\zeta$ and at $\zeta=1$ its absolute value coincides with $e$ times the left side of \eqref{eq:resbound1proof}. We will apply Hadamard's three lines lemma to the function $f$, where the bounding lines are given by $\re\zeta=0$ and $\re\zeta=(d+1)/2$. If $\re\zeta=0$, we use the fact that $\im k_\zeta\geq 0$. This implies that the argument of $|\xi|^2 - k_\zeta^2$ is uniformly bounded in $\xi\in\mathbb{R}^d$ and therefore $$ \left\| (-\Delta-k_\zeta^2)^{-\zeta} \right\| = \sup_{\xi\in\mathbb{R}^d} \left| (|\xi|^2-k_\zeta^2)^{-\zeta} \right| \leq C_1 e^{C_1 |\im\zeta|} \qquad\text{if}\ \re\zeta=0 $$ for some $C_1>0$. Thus, because of the superexponential decrease of the factor $e^{-(\im\zeta)^2}$, \begin{equation} \label{eq:hadamard1} |f(\zeta)| \leq C_1' \|Q\|_{\mathfrak S_{(d+1)/d}}^{(d+1)/d} \qquad\text{if}\ \re\zeta=0 \,.
\end{equation} If $\re\zeta=(d+1)/2$, we bound \begin{align*} |f(\zeta)| & \leq e^{(\re\zeta)^2-(\im\zeta)^2} \, \left\| |V|^{(d+1)/4} (-\Delta-k_\zeta^2)^{-\zeta} |V|^{(d+1)/4} \right\|_{\mathfrak S_2} \left\| |Q|^{(d+1)/(2d)} \right\|_{\mathfrak S_2} \\ & = e^{(\re\zeta)^2-(\im\zeta)^2} \, \left\| |V|^{(d+1)/4} (-\Delta-k_\zeta^2)^{-\zeta} |V|^{(d+1)/4} \right\|_{\mathfrak S_2} \left\|Q \right\|_{\mathfrak S_{(d+1)/d}}^{(d+1)/(2d)} \,. \end{align*} In order to control the Hilbert--Schmidt norm on the right side, we bound the integral kernel of $(-\Delta-k^2)^{-\zeta}$ for $\zeta=(d+1)/2+i\tau$ with $\tau\in\mathbb{R}$. According to \eqref{eq:reskernel}, \eqref{eq:fouriertrafo} and \eqref{eq:besselnu} it is given by $$ (2\pi)^{-d/2} \, \frac{2^{1-(d+1)/2 -i\tau}}{\Gamma((d+1)/2+i\tau)} \left( \frac{-ik}{|x-y|} \right)^{-1/2-i\tau} K_{1/2+i\tau}(-ik|x-y|) \,. $$ We take $k=k_\zeta$ and bound, using \eqref{eq:bessel}, \begin{align*} & \left| K_{1/2+i\tau}(-ik_\zeta|x-y|) \right| \\ & \qquad\qquad \leq \left(\frac{\pi}{2|k_\zeta||x-y|}\right)^{1/2} \frac{e^{-\im k_\zeta|x-y|}}{|\Gamma(1+i\tau)|} \int_0^\infty e^{-t} \left| \left(1+ \frac{it}{2k_\zeta|x-y|}\right)^{i\tau} \right| dt \,. \end{align*} It is easy to see that there is a constant $C>0$ such that $|\re k_\zeta| \geq C |\im k_\zeta|$ for all $\zeta$ with $\re\zeta=(d+1)/2$. This implies that the argument of $1+it/(2k_\zeta|x-y|)$ is uniformly bounded in $t\in [0,\infty)$, $|x-y|\in [0,\infty)$ and $\tau=\im\zeta\in\mathbb{R}$. We now observe that $$ \im k_\zeta = -|\im k| \frac{e^{(d+1)/2}\cos\tau-1}{e-1} \geq - \beta_d |\im k|/2 $$ with $\beta_d= 2(e^{(d+1)/2}-1)/(e-1)$ and $$ |k_\zeta| \geq \sqrt{(\re k)^2 + \frac{(\im k)^2}{(e-1)^2}} - |\im k| \frac{e^{\re \zeta}}{e-1} \geq |\re k| - \gamma_d |\im k| $$ with $\gamma_d = e^{(d+1)/2}/(e-1)$. 
Thus we conclude that $$ \left| (-\Delta-k_\zeta^2)^{-1}(x,y) \right| \leq C_2 e^{C_2|\tau|} \frac{e^{\beta_d |\im k| |x-y|/2}}{|\re k|-\gamma_d |\im k|} \,, \qquad \zeta = \frac{d+1}{2} + i\tau \,, $$ for some constant $C_2$ depending on $d$, but not on $x,y$ or $\tau$. This implies $$ \left\| |V|^{(d+1)/4} (-\Delta-k_\zeta^2)^{-\zeta} |V|^{(d+1)/4} \right\|_{\mathfrak S_2}^2 \leq C_2^2 e^{2C_2|\tau|} \left( \frac{ \int_{\mathbb{R}^d} e^{\beta_d |\im k| |x|} |V(x)|^{(d+1)/2} \,dx }{|\re k|-\gamma_d |\im k|} \right)^2 $$ and therefore, \begin{equation} \label{eq:hadamard2} |f(\zeta)| \leq C_2' \|Q\|_{\mathfrak S_{(d+1)/d}}^{(d+1)/(2d)} \frac{ \int_{\mathbb{R}^d} e^{\beta_d |\im k| |x|} |V(x)|^{(d+1)/2} \,dx }{|\re k|-\gamma_d |\im k|} \qquad\text{if} \ \re\zeta = \frac{d+1}{2} \,. \end{equation} According to Hadamard's three lines lemma we have $$ |f(1)|\leq \left( \sup_{\re\zeta=0}|f(\zeta)| \right)^{(d-1)/(d+1)} \left( \sup_{\re\zeta=(d+1)/2}|f(\zeta)| \right)^{2/(d+1)} \,. $$ Combining this with the bounds \eqref{eq:hadamard1} and \eqref{eq:hadamard2} we obtain \eqref{eq:resbound1proof} \end{proof} \begin{lemma}\label{resbound2} Let $d\geq 3$ be odd. There is a constant $\alpha_d'$ such that for any $k\in\mathbb{C}$ with $\im k<0$, \begin{equation} \label{eq:resbound2} \left\| K(k) \right\|_{\mathfrak S_{d+1}} \leq \alpha_d' \left( \frac{\int_{\mathbb{R}^d} e^{(d+1) |k||x|} |V(x)|^{(d+1)/2}\,dx}{|k|} \right)^{2/(d+1)} \,. \end{equation} \end{lemma} \begin{proof} Since $d$ is odd, \begin{align*} \left\| K(k) \right\|_{\mathfrak S_{d+1}}^{d+1} & = \tr \left( K(k)^* K(k) \cdots K^*(k) K(k) \right) \\ & = \int\cdots\int |V(x_1)| \overline{g_k(x_{d+1},x_1} |V(x_{d+1})| g_k(x_{d+1},x_d) \cdots g_k(x_4,x_3) \\ & \qquad\qquad \qquad \times |V(x_3)| \overline{g_k(x_2,x_3)} |V(x_2)| g_k(x_2,x_1) \,dx_1\cdots dx_{d+1} \,, \end{align*} where $g_k(x,y)$ is the integral kernel of the operator $(-\Delta-k^2)^{-1}$. 
It follows from formula \eqref{eq:bessel} for the Bessel function that $$ \left| K_\nu(-ia) \right| \leq e^{2|a|} K_\nu(|a|) \qquad\text{if}\ \im a \leq 0 \,. $$ This implies $$ |g_k(x,y)|\leq e^{2|k||x-y|} g_{i|k|}(x,y) \leq e^{2|k|(|x|+|y|)} g_{i|k|}(x,y) \,, $$ and therefore, in view of the above expression for $\left\| K(k) \right\|_{\mathfrak S_{d+1}}^{d+1}$, \begin{align*} \left\| K(k) \right\|_{\mathfrak S_{d+1}}^{d+1} & \leq \int\cdots\int e^{2|k||x_1|} |V(x_1)| \, g_{i|k|}(x_{d+1},x_1) \, e^{2|k||x_{d+1}|} |V(x_{d+1})| \, g_{i|k|}(x_{d+1},x_d) \cdots \\ & \qquad\qquad \qquad \times g_{i|k|}(x_4,x_3)\, e^{2|k||x_3|} |V(x_3)| \, g_{i|k|}(x_2,x_3) \, e^{2|k||x_2|} |V(x_2)| \, \\ & \qquad\qquad \qquad \times g_{i|k|}(x_2,x_1) \,dx_1\cdots dx_{d+1} \\ & = \left\| e^{2|k||x|} K(i|k|) e^{2|k||y|} \right\|_{\mathfrak S_{d+1}}^{d+1} \,. \end{align*} To bound the right side we use the Kato--Seiler--Simon bound \cite[Thm. 4.1]{Si} and get \begin{align*} & \left\| e^{|k||x|} K(i|k|) e^{|k||y|} \right\|_{\mathfrak S_{d+1}}^{(d+1)/2} \leq \left\| e^{|k||x|} K(i|k|) e^{|k||y|} \right\|_{\mathfrak S_{(d+1)/2}}^{(d+1)/2} \\ & \qquad\qquad = \left\| (-\Delta+|k|^2)^{-1/2} \sqrt{|V|} e^{|k||x|} \right\|_{\mathfrak S_{d+1}}^{d+1} \\ & \qquad\qquad\leq (2\pi)^{-d} \int_{\mathbb{R}^d} \frac{d\xi}{(|\xi|^2+|k|^2)^{(d+1)/2}} \int_{\mathbb{R}^d} |V(x)|^{(d+1)/2} e^{(d+1)|k||x|} \,dx \\ & \qquad\qquad = (2\pi)^{-d} \int_{\mathbb{R}^d} \frac{d\xi}{(1+|\xi|^2)^{(d+1)/2}} |k|^{-1} \int_{\mathbb{R}^d} |V(x)|^{(d+1)/2} e^{(d+1)|k||x|} \,dx \,. \end{align*} This proves the lemma. \end{proof} Finally, we are in a position to give the \begin{proof}[Proof of Proposition \ref{resboundoddd}] The claimed bound for $\im k\geq 0$ follows from \cite{FrSa}. The bound for $\im k<0$ and $|\re k|\geq 2\gamma_d|\im k|$ follows from Lemma \ref{resbound1}. (Note that in this case one can bound $|\re k|-\gamma_d|\im k|\geq (\gamma_d/\sqrt{1+4\gamma_d^2}) |k|$ in the denominator.)
Finally, the bound for $\im k<0$ and $|\re k|< 2\gamma_d|\im k|$ follows from Lemma \ref{resbound2}. (Note that in this case one can bound $|k|\leq \sqrt{1+4\gamma_d^2} |\im k|$ in the exponential.) This concludes the proof. \end{proof} \bibliographystyle{amsalpha}
\section{Introduction} Life-long learning finds application across a wide spectrum of domains and has been a long-standing research task. Its main goal is to update a network to adapt to new data, such as new instances or samples from a new class, without forgetting the knowledge learned on past data. In some scenarios, on the contrary, we wish to deliberately forget or delete specified knowledge stored in the model, due to privacy or copyright issues. This task, known as machine unlearning, has also attracted increasing attention from industry and the research community due to its practical value. Nevertheless, prior attempts at machine unlearning have mostly focused on deleting the specified knowledge for good, meaning that once removed, the knowledge cannot be reverted. Despite providing absolute IP protection, such a knowledge deletion scheme introduces considerable inconvenience in terms of user control and largely hinders the flexibility of model interaction. In this paper, we explore a novel learning scenario, which explicitly allows knowledge extracted from a pre-trained network to be deposited and, whenever needed, injected back into the model. Such a flexible learning strategy grants users a maximum degree of freedom in controlling task- or sample-specific knowledge, and meanwhile ensures network IP protection. Admittedly, this ambitious goal leads to a more challenging problem to tackle, since we seek a portable modulation of knowledge on and off a pre-trained network. To this end, we propose a dedicated scheme, termed Learning with Recoverable Forgetting (LIRF).
We illustrate the overall pipeline of LIRF in Fig.~\ref{fig:goal}. When there is a request for deleting specified knowledge, denoted as $\mathcal{D}_r$ (with $\overline{\mathcal{D}}_r$ preserved), due to, for example, IP issues, LIRF isolates such knowledge from the pre-trained original network and stores it in a \emph{deposit module}; the remaining network with $\mathcal{D}_r$ extracted is then denoted as the \emph{target network}. When the IP issue is resolved and the model owner requests to revert the knowledge or re-enable $\mathcal{D}_r$, LIRF withdraws the deposited knowledge and amalgamates it with the target network to produce the \emph{recover network}. Specifically, during the knowledge deposit process, we partition the knowledge of the original network, trained on the full data, into a sample-specific part and a general part. The former is deposited into a deposit module consisting of pruned blocks from the original network, while the latter is preserved in the target network. \begin{figure}[t] \centering \includegraphics[scale = 0.42]{figure/goal.pdf} \caption{Illustration of the proposed LIRF framework, comprising the knowledge deposit process and knowledge withdrawal process. } \label{fig:goal} \end{figure} Our contributions are therefore summarized as follows. \begin{itemize} \item We introduce a novel yet practical life-long learning setup, recoverable knowledge forgetting. In contrast to machine unlearning settings that delete specified knowledge for good, recoverable forgetting enables knowledge isolation and recovery from a pre-trained network, which brings network IP protection alongside user flexibility and control. \item We develop the LIRF framework that explicitly allows for knowledge deposit and withdrawal, to achieve recoverable knowledge forgetting. LIRF is time- and data-efficient, as the deposit process requires only a few epochs of fine-tuning on the specified samples. \item Experimental results verify the effectiveness of the proposed method under various settings, including class-incremental learning and machine unlearning. \end{itemize} \section{Related Work} \subsection{Life-long Learning} Life-long/online/incremental learning, which is capable of learning, retaining and transferring knowledge over a lifetime, has been a long-standing research area in many fields~\cite{Wu2018MemoryRG,Shmelkov2017IncrementalLO,Huihui21AAAI}. As the pioneering work, Li~\textit{et al.}~\cite{Li2016LearningWF} propose Learning without Forgetting (LwF), which uses only the new-coming examples for the new task's training while preserving the responses on the existing tasks to prevent catastrophic forgetting. Peng \textit{et al.}~\cite{Peng2017IncrementallyLT} propose training the hierarchical softmax function of deep language models on the new-coming tasks. FSLL~\cite{Mazumder2021FewShotLL} addresses the few-shot setting by selecting very few parameters from the model. Apart from those works that still need part of the old data, many researchers develop methods that do not store the old data, either by synthesizing old data~\cite{Choi2019AutoencoderBasedIC,Shin2017ContinualLW,Venkatesan2017ASF} or without referring to any old data at all~\cite{Sun2018ActiveLL,Shmelkov2017IncrementalLO,Nekoei2021ContinuousCA}. In addition to the above works that aim to prevent catastrophic forgetting of the old tasks, some researchers~\cite{Hou2018OnePassLW,Zhang2020LearningWF,Hou2021LearningWF,Hou2021StorageFL} pay more attention to decremental cases where some features may vanish as the feature space evolves. Hou \textit{et al.}~\cite{Hou2018OnePassLW} compress important information of vanished features into functions of survived features, and then expand to include the augmented features in a one-pass learning manner.
Zhang \textit{et al.}~\cite{Zhang2020LearningWF} propose a discrepancy measure for data with an evolving feature space and data distribution. Different from current research on life-long learning, we propose a more flexible learning scheme, which is capable of dealing with both data addition and data deletion. \subsection{Knowledge Transfer} Knowledge transfer aims at transferring knowledge from one network to another. Here, we mainly discuss related works on knowledge distillation~\cite{hinton2015distilling,Han2020NeuralCM,yang2020CVPR}, which trains a student model of a compact size by learning from a larger teacher model, or from a set of teachers handling the same task. It has been successfully applied to deep model compression~\cite{WangCVPR17}, incremental learning~\cite{Rosenfeld2020IncrementalLT}, continual learning~\cite{Lange2021ACL,ye2022safe} and tasks other than classification~\cite{Chen2017LearningEO,yang2020NeurIPS,Huang2018KnowledgeDF,Xu2018PADNetMG,Sucheng2022CVPR,Weihao22MetaFormer,YujingCVPR22,JingwenCVPR22,jing2020dynamic}. Beyond the above methods that transfer knowledge from one network to another, knowledge transfer can take many forms. For combining or amalgamating multi-source knowledge, Gao \textit{et al.}~\cite{gao2017knowledge} introduce a multi-teacher and single-student knowledge concentration approach. To handle multi-task problems in a single network, knowledge amalgamation~\cite{ye2019student} is proposed to train the student network on multiple scene understanding tasks, leading to better performance than the teachers. Going further, Ye \textit{et al.}~\cite{Ye_Amalgamating_2019} apply a two-step filter strategy to customize an arbitrary task set on TargetNet. Besides, Yuan \textit{et al.}~\cite{Yuan2020CKDCK} design a multi-stage knowledge distillation paradigm that decomposes the distillation process.
Knowledge distillation is also a reliable way to transfer knowledge from old data to new data, and several distillation-based works~\cite{Cheraghian2021SemanticawareKD,Hu2021DistillingCE,Tao2020FewShotCL,Dong2021FewShotCL} handle newly arriving data in the life-long learning setting. Cheraghian \textit{et al.}~\cite{Cheraghian2021SemanticawareKD} address the problem of few-shot class-incremental learning by utilizing semantic information. Hu \textit{et al.}~\cite{Hu2021DistillingCE} derive a distillation method to retain the old-class effect that would otherwise be overwhelmed by the new data, and thus alleviate forgetting of the old classes at test time. While these knowledge transfer methods move knowledge from one network to another, ours is the first work to filter and deposit knowledge. \subsection{Machine Unlearning} The concept of unlearning is first introduced by Bourtoule \textit{et al.}~\cite{Bourtoule2021MachineU} to eliminate the effect of data point(s) on an already trained model. Along this line, Neel \textit{et al.}~\cite{Neel2021DescenttoDeleteGM} give the first data deletion algorithms. To minimize the retraining time, data removal-enabled forests~\cite{Brophy2021MachineUF} are introduced as a variant of random forests that enables the removal of training data. Sekhari \textit{et al.}~\cite{sekhari2021remember} initiate a rigorous study of generalization in machine unlearning, where the goal is to perform well on previously unseen data points and the focus is on both computational and storage complexity. Gupta \textit{et al.}~\cite{gupta2021adaptive} give a general reduction from deletion guarantees against adaptive sequences to deletion guarantees against non-adaptive sequences, using differential privacy and its connection to max information. Nguyen \textit{et al.}~\cite{nguyen2020variational} study the problem of approximately unlearning a Bayesian model from a small subset of the training data to be erased.
As machine unlearning is studied for data privacy purposes, Chen \textit{et al.}~\cite{chen2021machine} are the first to investigate the unintended information leakage caused by machine unlearning. Previous works only consider data deletion, with the optimization objective of obtaining the same model as retraining without the deleted data. The proposed LIRF framework deletes only sample-specific knowledge, which can be stored for future use. \begin{figure}[t] \centering \includegraphics[scale = 0.4]{figure/framework.pdf} \caption{The proposed LIRF framework. The knowledge is transferred fully and partially from the original network to the deposit module and the target network. The recover network is withdrawn from the target net and the deposit module. } \label{fig:framework} \end{figure} \section{Knowledge Deposit and Withdrawal} The proposed LIRF framework focuses on the class-level life-long problem, in which samples from multiple classes may be deposited or withdrawn. We define our new problem as illustrated in Fig.~\ref{fig:goal}. Let $\mathcal{D}$ be the full original dataset, and let the original network directly trained on $\mathcal{D}$ be denoted as $\mathcal{T}_0$. In this problem, each of the learned samples is assigned to either the deposit set or the preservation set. Formally, \begin{itemize} \item \textbf{Deposit set $\mathcal{D}_r$}: A set of samples that should be forgotten by the target network $\mathcal{T}$, and remembered by the deposit module $\mathcal{T}_r$; \item \textbf{Preservation set $\overline{\mathcal{D}}_r$}: A set of samples that should be memorized by the target network (the complement of $\mathcal{D}_r$). \end{itemize} For clarity, we discuss the case where a single deposit set is required for deposit and withdrawal, which readily generalizes to multiple deposit sets. \noindent\textbf{Definition 1} (Deposit Problem).
The learning with knowledge deposit problem is defined as follows: Learn two models, one being $\mathcal{T}:\mathcal{X}\rightarrow\mathcal{Y}$, which should map an input $x$ to its correct class label $y$ if $x\in\overline{\mathcal{D}}_r$, and map $x$ to a wrong class label if $x\in\mathcal{D}_r$; the other being $\mathcal{T}_r:\mathcal{X}\rightarrow\mathcal{F}$, which stores the knowledge of the set $\mathcal{D}_r$. \textit{Constraints}: Only the original network $\mathcal{T}_0$ and the deposit set $\mathcal{D}_r$ are available. \noindent\textbf{Definition 2} (Withdrawal Problem). The learning with knowledge withdrawal problem is defined as follows: Recover a model $\widetilde{{\mathcal{T}}}:\mathcal{X}\rightarrow\mathcal{Y}$ that maps an input $x$ to its correct class label $y$ both for $x\in{\mathcal{D}}_r$ and for $x\in\overline{\mathcal{D}}_r$. \textit{Constraints}: Only the target network $\mathcal{T}$ and the deposit module $\mathcal{T}_r$ are available. \section{Learning with Recoverable Forgetting} The essence of this work is to deposit and withdraw the sample-specific knowledge of the deleted data in a recoverable manner, via what we call the LIRF framework. LIRF consists of two processes: knowledge deposit, which transfers knowledge from the original network to the target network and the deposit module, and knowledge withdrawal, which recovers the knowledge into the recover net.
These two processes can be described as: \begin{equation} \mathcal{T}_0\xrightarrow[\mathcal{D}_r]{\text{Deposit}} \{\mathcal{T}, \mathcal{T}_r\}\xrightarrow{\text{Withdraw}} \widetilde{{\mathcal{T}}}, \end{equation} where $\mathcal{T}_0$ is the original network trained on the full set $\mathcal{D}$, $\mathcal{T}$ is the target network specified for the task of the preservation set $\overline{\mathcal{D}}_r$, $\mathcal{T}_r$ is the deposit module that only serves as a knowledge container, and $\widetilde{{\mathcal{T}}}$ is the recover network that is expected to recover the full prediction capacity on the complete data set $\mathcal{D}$. Now, given the original network $\mathcal{T}_0$ and the deposit set $\mathcal{D}_r$, the goal of LIRF is to learn $\mathcal{T}$, $\mathcal{T}_r$ and $\widetilde{{\mathcal{T}}}$, which involves three steps. First, LIRF filters knowledge out of the original network to obtain the target net; meanwhile, it deposits the filtered sample-specific knowledge into the deposit module; finally, upon a recovery request, LIRF withdraws the knowledge from the deposit module into the recover net. Fig.~\ref{fig:framework} provides an overall sketch of the LIRF framework. \subsection{Filter Knowledge out of Target Net} In the process of knowledge deposit, the objective of the target net is to remove the sample-specific knowledge of $\mathcal{D}_r$ while maintaining the performance on $\overline{\mathcal{D}}_r$. To begin with, we divide the original network $\mathcal{T}_0$ into two modules at the $n$-th block, which are denoted as $\mathcal{T}_0^{(-n)}$ and $\mathcal{T}_0^{(n-)}$, respectively. The target network is divided in the same way as $\mathcal{T}=\mathcal{T}^{(-n)}\circ \mathcal{T}^{(n-)}$. As discussed in previous work~\cite{Lee2021SharingLI}, upper layers are preferable for transfer in the life-long learning setting, so $\mathcal{T}_0^{(n-)}$ is fully transferred to the target network.
That is, we fix the last few blocks ($\mathcal{T}^{(n-)}=\mathcal{T}_0^{(n-)}$) and expect this transfer configuration to benefit tasks that share high-level concepts but differ in low-level features. Thus, we fully transfer $\mathcal{T}_0^{(n-)}$ to $\mathcal{T}^{(n-)}$, and partially transfer $\mathcal{T}_0^{(-n)}$ to $\mathcal{T}^{(-n)}$, as the lower layers of the network contain more sample-specific knowledge. \subsubsection{Sample-specific knowledge removal.} This removal is conducted on two levels. One is the logit level: the target network should be incapable of making reliable predictions on the deposit set $\mathcal{D}_r$. The other is the feature level: the knowledge of $\mathcal{D}_r$ should not be distillable from $\mathcal{T}$. Thus, for each input $x\in \mathcal{D}_r$, we assign a \textit{\textbf{random}} label $y_r$ and force $\mathcal{T}$ to predict randomly on $\mathcal{D}_r$. In addition, a loss that maximizes the attention discrepancy on the intermediate features is applied to the output of $\mathcal{T}^{(-n)}$, which makes $\mathcal{T}$ undistillable for $\mathcal{D}_r$.
Thus, the loss $\mathcal{L}_{kr}$ for knowledge removal is calculated as: \begin{equation} \mathcal{L}_{kr}= \mathcal{L}_{ce}\big(\mathcal{T}(x),y_r\big)-\lambda_{at} \mathcal{L}_{at}\big(\mathcal{T}^{(-n)}(x),\mathcal{T}_0^{(-n)}(x)\big), \label{eq:kr} \end{equation} where $\lambda_{at}$ is the weight, $\mathcal{L}_{ce}(\cdot,\cdot)$ is the cross-entropy loss, and $\mathcal{L}_{at}$ is the filtered attention distillation loss~\cite{zagoruyko2016paying} that measures the activated feature-wise similarity of the intermediate features: \begin{equation} \begin{split} \mathcal{L}_{at}(\mathcal{F}_1,\mathcal{F}_2)&= \Big \|f(\frac{A(\mathcal{F}_1)}{\|A(\mathcal{F}_1)\|_2})-f(\frac{A(\mathcal{F}_2)}{\|A(\mathcal{F}_2)\|_2})\Big\|^2,\\ A(\mathcal{F})&= \sum_{i=1}^C \|\mathcal{F}_i\|^2, \quad f\big(a(i)\big)= \begin{cases} 0 & a(i)<\epsilon\\ a(i) & \text{otherwise} \\ \end{cases}, \end{split} \end{equation} where $\mathcal{F}_i\in\mathbb{R}^{H\times W}$ denotes the $i$-th channel of the feature $\mathcal{F}\in\mathbb{R}^{H\times W\times C}$ of size $H\times W\times C$, from which the $l_2$-normalized attention maps are obtained. Before the attention similarity is computed with $\mathcal{L}_{at}$, a filter function $f$ is applied to zero out the deactivated regions, which makes the intermediate knowledge undistillable only for $x\in\mathcal{D}_r$. The knowledge removal loss $\mathcal{L}_{kr}$ is calculated on the deposit set $\mathcal{D}_r$ to fine-tune $\mathcal{T}^{(-n)}$, which is initialized with $\mathcal{T}_0^{(-n)}$. The first term of $\mathcal{L}_{kr}$ enables forgetting at the logit level, while the second term enables forgetting at the feature level, which unlearns $\mathcal{D}_r$ from $\mathcal{T}$ and removes the privacy-sensitive information of $\mathcal{D}_r$.
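To make the feature-level term concrete, the filtered attention distance can be sketched in NumPy as follows; the threshold value and the tensor shapes are illustrative assumptions rather than the exact implementation.

```python
import numpy as np

def attention_map(feat):
    # A(F): sum of squared activations over the channels of a (C, H, W)
    # feature, followed by l2 normalization of the spatial map.
    a = np.sum(feat ** 2, axis=0)
    return a / (np.linalg.norm(a) + 1e-12)

def filtered_at_loss(feat_target, feat_orig, eps=0.1):
    # L_at with the filter f: zero out weakly activated regions
    # (a < eps), then take the squared distance of the attention maps.
    a1 = attention_map(feat_target)
    a2 = attention_map(feat_orig)
    f1 = np.where(a1 < eps, 0.0, a1)
    f2 = np.where(a2 < eps, 0.0, a2)
    return float(np.sum((f1 - f2) ** 2))
```

Since $\mathcal{L}_{at}$ enters $\mathcal{L}_{kr}$ with a negative sign, minimizing $\mathcal{L}_{kr}$ pushes the two attention maps apart on $\mathcal{D}_r$.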
\subsubsection{General knowledge preservation.} As illustrated in Fig.~\ref{fig:goal}, two kinds of knowledge need to be preserved by the target network: the knowledge from the preservation set $\overline{\mathcal{D}}_r$, and the general knowledge from $\mathcal{D}_r$. Since the target network $\mathcal{T}$ is initialized with the original network and the last few blocks of $\mathcal{T}$ are kept fixed during fine-tuning, part of the knowledge has already been preserved through the full transfer from $\mathcal{T}_0^{(n-)}$ to $\mathcal{T}^{(n-)}$. In addition, partial knowledge transfer with a filter $g$ is applied to the $\overline{\mathcal{D}}_r$-related knowledge so as to prevent catastrophic forgetting on $\overline{\mathcal{D}}_r$: \begin{equation} \mathcal{L}_{kp} = \mathcal{L}_{kd}\big(g(\frac{z_{\mathcal{T}}(x)}{T}),g(\frac{z_{\mathcal{T}_{0}}(x)}{T})\big), \label{eq:kp} \end{equation} where $\mathcal{L}_{kd}$ is the KL-divergence loss, $T$ is the temperature, and $z_{\mathcal{T}}$ and $z_{\mathcal{T}_0}$ are the output logits of ${\mathcal{T}}$ and ${\mathcal{T}_0}$, respectively. The filter $g$ selects the logits that correspond to the classes of the preservation set, so that the knowledge is partially transferred to the target net by minimizing $\mathcal{L}_{kp}$. Note that since only the deposit samples are accessible in the whole LIRF framework, the output probabilities on the preservation classes are expected to be low and may not suffice to maintain the performance on the preservation set. Thus, we set a higher temperature to transfer more knowledge for the preserved tasks. \subsection{Deposit Knowledge to Deposit Module} \label{sec:deposit} The key difference between the proposed LIRF and the traditional unlearning problem is that we store the sample-specific knowledge in the deposit module, whereas it is simply discarded in previous unlearning methods.
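A minimal NumPy sketch of the class-filtered distillation of Eq.~(\ref{eq:kp}); the same mechanism is reused with the complementary filter $\overline{g}$ for the deposit module. The KL direction and the numerical guard are our assumptions.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def filtered_kd_loss(z_student, z_teacher, class_idx, T=10.0):
    # Filter g: keep only the logits of the selected classes, then
    # distill via KL(p || q) over temperature-softened distributions.
    p = softmax(z_teacher[class_idx] / T)  # teacher soft targets
    q = softmax(z_student[class_idx] / T)
    return float(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12))))
```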
The deposit module should have two vital characteristics: first, it should be easy to withdraw into the recover network upon request; second, it should be lightweight to store. To obtain a better knowledge container, we initialize the deposit module with the pruned original network: \begin{equation} \mathcal{T}_r\xleftarrow{\text{initialize}} \mathcal{P}rune\big[ \mathcal{T}_0^{(-n)}\big], \end{equation} where we use a simple ranking method that scores each filter by the sum of its absolute kernel weights~\cite{2016Pruning}. The details of pruning are given in the supplementary material. Here, the deposit module is designed as a pruned version mainly for two purposes. One is model efficiency: the lightweight deposit module is more space-saving for storage ($20\%$ of the parameters of the original network). The other is knowledge filtering: pruning is better described as `selective knowledge damage'~\cite{Hooker2020WhatDC}, where only the activated filters are kept, such that we deposit only the sample-specific knowledge of $\mathcal{D}_r$ rather than the whole knowledge. Also, similar to $\mathcal{L}_{kp}$, a partial knowledge transfer loss $\mathcal{L}_{pt}$ with the filter $\overline{g}$ is applied to augment this sample-specific knowledge: \begin{equation} \mathcal{L}_{pt} = \mathcal{L}_{kd}\big(\overline{g}(\frac{z_{\mathcal{T}_r\circ\mathcal{T}^{(n-)}}(x)}{T}),\overline{g}(\frac{z_{\mathcal{T}_{0}}(x)}{T})\big), \label{eq:pt} \end{equation} where $\mathcal{L}_{kd}$ and $T$ are defined in Eq.~(\ref{eq:kp}) and the logits of the deposit branch are produced by passing $x$ through $\mathcal{T}_r$ and $\mathcal{T}^{(n-)}$. The filter $\overline{g}$ selects the logits that correspond to the classes of the deposit set, which transfers the $\mathcal{D}_r$-related knowledge from the original network to the deposit module.
By minimizing the loss $\mathcal{L}_{pt}$, the sample-specific knowledge is transferred to the deposit module; meanwhile, we also fine-tune $\mathcal{T}_r$ into an easy-to-withdraw module, meaning that the knowledge is recoverable for the recover network $\widetilde{{\mathcal{T}}}$. Hence, the recovered performance on $\mathcal{D}_r$ is accounted for in advance during the deposit process, by minimizing the classification loss of the recover net $\mathcal{L}_{re}$: \begin{equation} \mathcal{L}_{re} = \mathcal{L}_{ce}\big(\widetilde{{\mathcal{T}}}(x),y\big), \label{eq:recover} \end{equation} where $y$ is the ground-truth label of the input $x$. The deposit module obtained here only serves to store the knowledge and cannot be used as an independent prediction model; it is thus a much safer form of storage for $\mathcal{D}_r$ than the original images. \subsection{Withdraw Knowledge to Recover Net} \label{sec:recover} Once the knowledge has been successfully deposited, the proposed LIRF framework is complete, and the knowledge can be withdrawn directly without any fine-tuning, let alone any data. The recover net is re-assembled without fine-tuning, in the form: \begin{equation} \widetilde{{\mathcal{T}}}(x)=g\big (\mathcal{T}(x)\big)+ \overline{g}\big ( \mathcal{T}_r\circ\mathcal{T}^{(n-)}(x)\big), \end{equation} where the filter functions $g$ and $\overline{g}$ perform the selection operations defined in Eq.~(\ref{eq:kp}) and Eq.~(\ref{eq:pt}), respectively. Thus, the overall loss function to update the LIRF framework is: \begin{equation} \mathcal{L}_{all}= \mathcal{L}_{kr}+\lambda_{kp}\mathcal{L}_{kp}+\lambda_{re}\mathcal{L}_{re}+\lambda_{pt}\mathcal{L}_{pt}, \end{equation} where $\lambda_{kp}$, $\lambda_{pt}$ and $\lambda_{re}$ are the balancing weights.
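The withdrawal step above amounts to a parameter-free, class-wise merge of two logit vectors; a minimal sketch (the index sets are illustrative):

```python
import numpy as np

def recover_logits(z_target, z_deposit, preserve_idx, deposit_idx):
    # g and g-bar act as complementary class selections: preserved-class
    # logits come from the target net, deposit-class logits from the
    # deposit branch (T_r followed by the shared upper blocks).
    z = np.empty_like(z_target)
    z[preserve_idx] = z_target[preserve_idx]
    z[deposit_idx] = z_deposit[deposit_idx]
    return z
```

No parameter is updated at withdrawal time; the recover net is obtained purely by rewiring the two modules.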
LIRF is trained by minimizing the overall loss $\mathcal{L}_{all}$ on the deposit set $\mathcal{D}_r$; the preservation set $\overline{\mathcal{D}}_r$ does not participate in the process at all. \subsubsection{More discussions.} Note that the optimization objective $\mathcal{L}_{all}$ of knowledge deposit differs from that of machine unlearning, which aims at obtaining a target network that approximates one trained from scratch on $\overline{ \mathcal{D}}_r$. In the proposed LIRF, the knowledge capacity of the target net is larger than that of a network trained only on $\overline{ \mathcal{D}}_r$, for it contains the general knowledge filtered from the deposit set $\mathcal{D}_r$; only the sample-specific, privacy-related knowledge is stored in the deposit module. In the process of withdrawal, the recover network $\widetilde{\mathcal{T}}$ built in Eq.~(\ref{eq:recover}) is not forced to approach the original network: $\widetilde{\mathcal{T}}\neq\mathcal{T}_0$. In fact, the recover network works better than the original network thanks to the full and partial knowledge transfer. \section{Experiments} \subsection{Experimental settings} \textbf{Datasets.} We use three widely used benchmark datasets for life-long learning: CIFAR-10, CIFAR-100 and CUB200-2011~\cite{Wah2011TheCB}. For CIFAR-10 and CIFAR-100, we use an input size of $32\times 32$; for CUB200-2011, an input size of $256\times 256$. In the normal knowledge deposit and withdrawal setting, the first 30\% of classes are selected for the deposit set, while the remaining classes belong to the preservation set. \noindent \textbf{Training details.} We implement our framework in PyTorch and conduct the experiments on a ResNet-18 backbone. For optimizing the target network and the deposit module, we use stochastic gradient descent with a momentum of 0.9 and a learning rate of 0.01 for 20 epochs.
We employ a standard data augmentation strategy: random crop, horizontal flip, and rotation. For distillation, we set $T=10$ for the CIFAR-10 dataset and $T=20$ for the CUB200-2011 dataset. For the weights balancing the loss terms in $\mathcal{L}_{all}$, we set $\lambda_{kp}=\lambda_{pt}= \lambda_{re}=10$. For the normal LIRF setting, the pruning rate is set to $50\%$ and the original network $\mathcal{T}_0$ is divided into 4 blocks, where the last 2 blocks as well as the fc layers form $\mathcal{T}_0^{(n-)}$. \noindent \textbf{Evaluation metrics.} We evaluate the performance of both the target net and the recover net. For the recover net, we use the average accuracy on the preservation set (Pre Acc.), the average accuracy on the deposit set (Dep Acc.), and the average accuracy on the full set (Avg Acc.). For the target net, we use the average accuracy on the preservation set (Pre Acc.) and the average accuracy on the deposit set $\mathcal{D}_{r}$ (Dep Acc.). In addition, following the setting of LWSF~\cite{Shibata2021LearningWS}, we use the harmonic mean (H Mean) of the two standard evaluation measures for life-long learning, computed as $ H Mean=\frac{2\cdot Pre Acc\cdot F}{Pre Acc+F}$, where the forgetting measure `F' is the accuracy drop (decrease) on the deposit set before and after knowledge deposit. For the withdrawal performance, higher values are better for all metrics; the same holds for the deposit performance of the target net, except that a lower `Dep Acc.' is better. \subsection{Experimental Results} \subsubsection{Overall performance.} Table~\ref{tab:mainacc} shows the overall performance of knowledge deposit (target network) and withdrawal (recover network) on the CIFAR-10 and CUB200-2011 datasets.
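For reference, the H Mean defined above is straightforward to compute and reproduces the reported values, e.g., 85.95 on CIFAR-10 with a Pre Acc.\ of 93.41 and an F of 79.60:

```python
def h_mean(pre_acc, forgetting):
    # Harmonic mean of preservation accuracy and the forgetting measure F.
    return 2.0 * pre_acc * forgetting / (pre_acc + forgetting)
```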
In addition, we compare the proposed LIRF with `Independent' networks trained independently on the two sub-datasets (preservation set and deposit set) and with the `Original' network $\mathcal{T}_0$ trained on the full dataset. From Table~\ref{tab:mainacc}, we make several observations: \begin{itemize} \item For the original network, the accuracy on the preservation set (`Pre Acc.') is higher than that of the independently trained network (`93.77' vs `92.92' on CIFAR-10), which means that there exists positive knowledge transfer from the deposit set $\mathcal{D}_r$ to the preservation set $\overline{\mathcal{D}}_r$. Thus, it is necessary to partially transfer the general knowledge to the preserved tasks. \item While the accuracy of random prediction on CIFAR-10 and CUB200-2011 is $10\%$ and $0.5\%$, respectively, the `Dep Acc.' after depositing decreases to $15\%$ and $1.18\%$. This large accuracy drop demonstrates the logit-level forgetting of the deposit set in the target net $\mathcal{T}$. \item The recover network attains higher accuracy on both the preservation set and the deposit set than the original network, showing that the knowledge has been augmented in LIRF through partial and full knowledge transfer, as discussed in Sec.~\ref{sec:recover}. \end{itemize} \begin{table}[t] \caption{Experimental results of the proposed LIRF on the CIFAR-10 and CUB200-2011 datasets. For each dataset, we randomly select $30\%$ of classes for deposit (Dep Set), while the rest are kept in the preservation set (Pre Set).} \centering \label{tab:mainacc} \begin{tabular}{p{21mm}|p{16mm}<{\centering}|p{18mm}<{\centering}|p{18mm}<{\centering}|p{18mm}<{\centering}|p{18mm}<{\centering}} \toprule Dataset& Metrics &Independent& Original& Deposit & Withdrawal \\\hline\hline CIFAR-10& Pre Acc.$\uparrow$ & 92.92 & 93.77 & 93.41 & 94.56 \\ CIFAR-10& Dep Acc.
& 96.61 & 94.60 & 15.00 & 97.92 \\ CIFAR-10& F$\uparrow$ & - & 0 & 79.60 &- \\ CIFAR-10& H Mean $\uparrow$ & - & 0 & 85.95&- \\ CIFAR-10& Avg Acc. $\uparrow$ & 94.02 & 94.06 & - & 95.57 \\\hline\hline CUB200-2011& Pre Acc.$\uparrow$ & 48.15 & 50.33 & 51.64& 53.21 \\ CUB200-2011& Dep Acc. & 52.73 & 48.60 & 1.18 & 55.89 \\ CUB200-2011& F$\uparrow$ & - & 0 & 47.42&- \\ CUB200-2011& H Mean $\uparrow$ & - &0 & 49.44 &- \\ CUB200-2011& Avg Acc. $\uparrow$ & 49.52 & 49.81& - & 54.01 \\ \bottomrule \end{tabular} \end{table} \begin{table}[t] \centering \caption{Experimental results of the ablation study on the proposed LIRF framework.} \label{tab:ablation} \begin{tabular}{c|cc|ccc} \toprule \multicolumn{1}{c|}{\multirow{2}{*}{Method}} & \multicolumn{2}{c|}{\#Target Net} & \multicolumn{3}{c}{\#Recover Net} \\ \multicolumn{1}{c|}{}& Pre Acc. $\uparrow$ & Dep Acc. $\downarrow$ & Pre Acc. $\uparrow$ & Dep Acc. $\uparrow$ & Avg Acc. $\uparrow$ \\ \hline\hline Scratch Train & 92.92 &96.61 & 93.77& 94.60&94.06\\ IL Train & 92.92 &96.61 &90.87 &\textbf{98.37}&93.12 \\ $\mathcal{L}_{kr}$,$\mathcal{L}_{kp[w/o\mathcal{L}_{at}]}$ & 93.38 & 15.55& -&-&-\\ $\mathcal{L}_{kr},\mathcal{L}_{kp[w/o\mathcal{L}_{at}]},\mathcal{L}_{pt},\mathcal{L}_{re}$ & 93.25&14.81&94.26&97.03&95.09\\ $\mathcal{L}_{kr},\mathcal{L}_{kp},\mathcal{L}_{pt},\mathcal{L}_{re}$ & 93.25 &15.29 &94.33 &97.07&95.15\\ $\mathcal{L}_{kr},\mathcal{L}_{kp},\mathcal{L}_{pt},\mathcal{L}_{re}$+Prune& \textbf{93.42}&\textbf{14.15} &\textbf{94.55}&97.67&\textbf{95.49}\\\bottomrule \end{tabular} \end{table} \begin{figure}[t] \centering \includegraphics[scale = 0.48]{figure/tsne.pdf} \caption{t-SNE plots of the features obtained from the last layer of the network are shown. Each color in the t-SNE plot represents one category, where 3 categories are deposited and the rest 7 categories are in the preservation set (best viewed in color). 
} \label{fig:tsne} \end{figure} \subsubsection{Sensitivity analysis of LIRF.} Here we give a deeper analysis of the proposed LIRF through an ablation study of each loss term in $\mathcal{L}_{all}$. We compare the following settings: `Scratch Train': train each net from scratch with the corresponding set; `IL Train': train the target net from scratch and train the recover net with the KD loss in the incremental learning setting; $\mathcal{L}_{kr},\mathcal{L}_{kp},\mathcal{L}_{pt},\mathcal{L}_{re}$ are the loss terms defined in the LIRF framework; $\mathcal{L}_{kp[w/o\mathcal{L}_{at}]}$ is the loss without the attention distillation $\mathcal{L}_{at}$; `Prune' denotes the pruning operation used to initialize the deposit module. The experimental results are displayed in Table~\ref{tab:ablation}. As can be observed from the table: (1) The full setting with all the loss functions and the pruning strategy achieves nearly the best results on every metric. (2) The attention loss term $\mathcal{L}_{at}$ does not affect the accuracy much (rows 4 and 5), but it is of vital importance for preventing information leakage, as discussed in the following experiment. (3) The pruning strategy on the deposit module proves effective, since the pruned deposit module can be withdrawn into the recover net with the best Avg Acc. (`95.49'). The t-SNE plots are depicted in Fig.~\ref{fig:tsne}, where the features of the final layer of the original net, target net and recover net are visualized. As shown in the figure, both the original net and the recover net produce discriminative features for all 10 categories. For the target net, from which the sample-specific knowledge of the deposit set has been removed, the visualization shows that it produces highly discriminative features for the preservation set while its predictive capacity for the deposit set vanishes.
To visualize the t-SNE plots of the deposit module, we construct the network $\mathcal{T}_r\circ\mathcal{T}_0^{(n-)}$. As can be seen in the right part of the figure, the pruned deposit module produces narrower features, which we regard as the sample-specific knowledge to be deposited, supporting the `selective knowledge damage' scheme mentioned in Sec.~\ref{sec:deposit}. \begin{table}[t] \centering \caption{Experimental results of the knowledge transferability to downstream networks. This experiment is conducted on the CIFAR-10 dataset.} \label{tab:distillation} \resizebox{1.0\textwidth}{!}{ \begin{tabular}{c|c|cc|cc|cc} \toprule \multicolumn{1}{c|}{\multirow{2}{*}{Student}}&\multicolumn{1}{c|}{\multirow{2}{*}{Distillation}}& \multicolumn{2}{c|}{\#Original} & \multicolumn{2}{c|}{\#Target (w/o $\mathcal{L}_{at}$)} &\multicolumn{2}{c}{\#Target (w/ $\mathcal{L}_{at}$)}\\ \multicolumn{1}{c|}{}& & Pre Acc. & Dep Acc. & Pre Acc.$\uparrow$& Dep Acc.$\downarrow$& Pre Acc.$\uparrow$& Dep Acc.$\downarrow$\\ \hline\hline CNN&Logit-based&85.38&86.15&85.97\up{(+0.59)}&84.26\down{(-1.89)}&85.70\up{(+0.32)}&82.75\down{(-3.40)}\\ CNN&AT-based&85.27&85.94&86.01\up{(+0.74)}&85.72\down{(-0.22)}&85.54\up{(+0.27)}&81.83\down{(-4.11)}\\ ResNet18&Logit-based&94.26&95.73&94.55\up{(+0.29)}&92.70\down{(-3.03)}&94.64\up{(+0.38)}&91.49\down{(-4.24)}\\ ResNet18&AT-based& 94.09&95.24&94.15\up{(+0.06)}&94.61\down{(-0.63)}& 93.85\down{(-0.24)}&88.76\down{(-6.48)}\\\bottomrule \end{tabular} } \end{table} \subsubsection{Knowledge transferability to downstream networks.} We use two evaluations to demonstrate the success of sample-specific knowledge removal in the target net: one is the accuracy drop on the deposit set, shown in the preceding experiments; the other is the knowledge transferability of the deposit set from the target network to downstream networks, which tests the knowledge leakage risk via knowledge distillation.
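The logit-based distillation used in this leakage test is the standard temperature-softened KL loss; a minimal numpy sketch (Hinton-style knowledge distillation; the helper names and toy logits are ours, not from the paper):

```python
import numpy as np

def softmax(logits, T=1.0):
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max()                 # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def kd_loss(teacher_logits, student_logits, T=10.0):
    """Temperature-softened KL(teacher || student); the T**2 factor is the
    usual gradient rescaling, and T = 10 matches the CIFAR-10 setting."""
    p = softmax(teacher_logits, T)  # soft teacher targets
    q = softmax(student_logits, T)  # student predictions
    return float(T * T * np.sum(p * (np.log(p) - np.log(q))))

# identical logits give zero loss; mismatched logits give a positive loss
assert abs(kd_loss([2.0, 0.5, -1.0], [2.0, 0.5, -1.0])) < 1e-9
assert kd_loss([2.0, 0.5, -1.0], [-1.0, 0.5, 2.0]) > 0.0
```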
We conduct logit-based distillation (a KL-divergence loss on the output logits) and attention-based distillation (an MSE loss on the attention maps of the intermediate features). The results are displayed in Table~\ref{tab:distillation}, where we choose a plain CNN and ResNet-18 as the students. We also evaluate the necessity of the loss term $\mathcal{L}_{at}$ in $\mathcal{L}_{kr}$. Note that since the ground-truth labels of the training data are included when training with distillation, the accuracy does not drop sharply even when the knowledge is non-transferable. From the table, we observe that: (1) The knowledge transferability on the preservation set is preserved for both the original and the target networks, and is only slightly affected when distilling from the target net trained with $\mathcal{L}_{at}$ in the attention-based way; (2) When training the LIRF framework with the loss term $\mathcal{L}_{at}$, the knowledge of the deposit set is hard to distill in either the attention-based or the logit-based way. Training with $\mathcal{L}_{at}$ is thus much safer, since without it the knowledge of $\mathcal{D}_r$ is likely to be leaked through attention-based distillation. The privacy protection of the deposit set is further evaluated with data-free distillation in the supplementary material. \subsubsection{The influence of the scale of the deposit module.} Two factors determine the scale of the deposit module $\mathcal{T}_r$: the number of blocks used to divide the original network, and the pruning rate. The influence of these two factors is depicted in Fig.~\ref{fig:size}. When the division block number $n$ increases, the deposit module becomes larger and the fully transferred part of the original network ($\mathcal{T}_0^{(n-)}$) becomes smaller. Consequently, the deposit accuracy on the target network (`Dep Acc.' in the first subfigure) and the average accuracy on the recover network (`Avg Acc.' in the second subfigure) drop, because less knowledge is directly transferred from $\mathcal{T}_0^{(n-)}$ to $\mathcal{T}$ (knowledge that is also completely transferred back during recovery). Considering the performance of both the target net and the recover net, $n=2$ and $n=3$ both satisfy the requirement, and we choose $n=2$ for smaller deposit storage. When the pruning rate (the percentage of filters pruned out) increases, the deposit module becomes smaller, which has little influence on the deposit performance (third subfigure). The average accuracy on the recover network (`Avg Acc.' in the fourth subfigure) increases at first owing to the selective knowledge damage on the deposit module, but eventually drops due to the limited capacity for knowledge storage. Hence a pruning rate of around $50\%$ is a good choice. \begin{figure}[t] \centering \includegraphics[scale = 0.36]{figure/pruningsize.pdf} \caption{The performance of knowledge deposit and withdrawal with different division block numbers and pruning rates. } \label{fig:size} \end{figure} \begin{table}[t] \caption{Comparative results of incremental learning and unlearning on CIFAR-100.
Each column represents a different number of classes per incremental step.} \label{tab:incremental} \centering \begin{tabular}{l|cc|cc|cc} \toprule & \multicolumn{2}{c|}{\# Task:2, \# Class:50} & \multicolumn{2}{c|}{\# Task:5, \# Class:20} & \multicolumn{2}{c}{\# Task:20, \# Class:5} \\ & H $\uparrow$ & (A$\uparrow$, F$\uparrow$) &H $\uparrow$ & (A$\uparrow$, F$\uparrow$) & H $\uparrow$ & (A$\uparrow$, F$\uparrow$) \\\hline\hline Baseline & 55.87 & (55.21,56.55) & 51.79 &(39.66,74.62) &37.88 &(25.41,74.41) \\ LwF&9.02 &(74.69,4.80) &17.23 &(79.05,9.67) &22.50& (80.74,13.07)\\ LwF*& 54.64 &(76.44,42.52) &68.24 &(81.32,58.79)& 63.62 &(82.29,51.85)\\ EWC& 58.58& (56.73,60.55) &48.57 &(36.54,72.42)& 34.91& (23.07,71.70)\\ EWC*& 57.17& (56.25,58.13)& 49.61& (36.58,77.08)& 36.90& (23.68,83.52)\\ EWC*+LwF*& 53.51& (77.11,40.98)& 67.64 &(81.20,57.96)& 69.17& (74.11, 64.85)\\ MAS& 55.44& (54.42,56.49)& 47.46 &(34.89,74.17)& 35.26& (23.25,72.96)\\ MAS+LwF*& 56.54& (76.85,44.72)& 66.35 &(81.83,55.79)& 70.83& (74.63,67.41)\\ LWSF&70.08 &(74.89,65.84)& \textbf{73.21}& (72.61,73.83)& 71.63& (68.56,75.00)\\\hline LIRF &\textbf{77.69}&(79.24,76.19)&{73.08} &({78.24}, {68.56})& \textbf{79.48}&(80.41,78.57)\\ \bottomrule \end{tabular} \end{table} \subsubsection{Comparing with incremental learning and unlearning.} The proposed LIRF can also be applied to the incremental learning and machine unlearning tasks. Here we test LIRF on these two tasks, following the setting of LWSF~\cite{Shibata2021LearningWS}, whose goal is to unlearn several classes of samples while handling new classes. The individual experiments on incremental learning and machine unlearning are given in the supplementary material. To begin with, we train the original network on the full dataset, deposit each class subset into a set of deposit modules, and then withdraw one at each incremental step.
Table~\ref{tab:incremental} shows the comparative results of all the methods: `Baseline' (trained only with the classification loss), `LwF'~\cite{Li2016LearningWF}, `EWC'~\cite{kirkpatrick2017overcoming}, `MAS'~\cite{Aljundi_2018_ECCV}, LWSF~\cite{Shibata2021LearningWS}, and `LwF*' and `EWC*', which are modified by~\cite{Shibata2021LearningWS} to enable partial forgetting. The metrics `H Mean' (H), `Pre Acc' (A) and `F' are averaged up to the last incremental step. The proposed LIRF performs best on almost all of the listed settings, especially on the incremental performance (`A'), owing to the partial and full knowledge transfer in the framework. \section{Conclusions} In this paper, we study a novel life-long learning task, recoverable knowledge forgetting. Unlike prior life-long learning tasks that either aim to prevent the forgetting of old knowledge or delete specified knowledge for good, the investigated setting enables flexible knowledge extraction and insertion, which in turn largely enriches user control while ensuring network IP protection. To this end, we introduce a dedicated approach, termed LIRF, in which the operations of knowledge deposit and knowledge withdrawal are proposed. During deposit, the sample-specific knowledge that may lead to privacy leakage is extracted from the original network and maintained in the deposit module. Whenever needed, the deposited knowledge can be readily withdrawn to recover the original model. Experimental results demonstrate the effectiveness of the proposed LIRF under various settings, including incremental learning and machine unlearning.
\section*{Acknowledgements} This work is supported by NUS Advanced Research and Technology Innovation Centre~(Project Reference: ECT-RP2), Centre for Advanced Robotics Technology Innovation~(CARTIN) of Singapore, NUS Faculty Research Committee~(WBS: A-0009440-00-00), National Natural Science Foundation of China (No.62002318), Zhejiang Provincial Science and Technology Project for Public Welfare (LGF21F020020) and Ningbo Natural Science Foundation (202003N4318). Xinchao Wang is the corresponding author. \iffalse Most of the current methods in life-long learning tend to focus on retaining all knowledge for previous tasks when the data comes in stream to prevent or alleviate catastrophic forgetting. We opened up a new framework for life-long learning called Learning with Recoverable Forgetting (LIRF), which allows a model to temporally forget the undesirable class information, and withdraw it with the future request. Unlike the previous work, considering that some data may be temporarily unavailable due to some privacy issues, we propose the LIRF framework to deal with the data deletion and adding flexibly. For the deposit set, the general knowledge that makes positive transfer to the current tasks is augmented and preserved in the target network, while the sample-specific knowledge that may lead to privacy leakage is vanished and deposited in the deposit network. Thus for further usage of this deposit knowledge, deposit network is directly withdrawn for recovering the former performance. Experimental results testing both the model prediction and knowledge transfer performances have proven the effectiveness of the proposed framework. In addition, the comparative results are also conducted on incremental learning and machine unlearning, showing the wide applicability of the proposed framework. 
We believe that this paper will bring a new and practical direction of life-long learning to the community and give the first baseline for the knowledge deposit and withdrawal problem. In the future, we would like to explore a more effective knowledge deposit form, and makes both the processes of knowledge deposit and withdrawal free of any training data. \fi \clearpage
\section{Introduction} Consider the following abstract problem: given access to a function $f : \mc{X} \to \R$, where $\mc{X}$ is some space, find $x \in \mc{X}$ minimizing $f(x)$. We study an instantiation of this problem that trades sequential access to $f$ for large batches of parallel queries---one can query $f$ for its value over $n$ points at each of $T$ rounds. In this setting, we propose a general algorithm that effectively optimizes $f$ whenever there is a family of classifiers $h : \mc{X} \to [0,1]$ that can predict sublevel sets of $f$ with high enough accuracy. Our main motivation comes from settings in which $n$ is large---on the order of hundreds to thousands---while possibly small relative to the size of $\mc{X}$. These types of problems occur in biological assays~\cite{knight2008array}, physical simulations~\cite{marsden2004optimal}, and reinforcement learning problems~\cite{schulman2015trust} where parallel computation or high-throughput measurement systems allow efficient collection of large batches of data. More concretely, consider the optimization of protein binding affinity to DNA sequence targets from biosensor data~\cite{chevalier2017massively,knight2008array,wang2014particle}. In this case, assays measure the binding of $n \geq 1000$ sequences and are inherently parallel due to the fixed costs of setting up an experiment, while the time to measure a collection of sequences makes multiple sequential tests prohibitively time-consuming (so $T$ must be small). In such problems, it is typically difficult to compute the gradients of $f$ (if they even exist); consequently, we focus on derivative-free optimization (DFO, also known as zero-order optimization) techniques. \subsection{Problem statement and approach} The batched derivative-free optimization problem consists of a sequence of rounds $t = 1, 2, \ldots, T$ in which we propose a distribution $p\sups{t}$, draw a sample of $n$ candidates $X_i \simiid p\sups{t}$, and observe $Y_i = f(X_i)$.
The goal is to find at least one example $X_i$ for which the gap \begin{equation*} \min_i f(X_i) - \inf_{x \in \mc{X}} f(x) \end{equation*} is small. Our basic idea is conceptually simple: In each round, fit a classifier $h$ predicting whether $Y_i \lessgtr \alpha\sups{t}$ for some threshold $\alpha\sups{t}$. Then, upweight points $x$ that $h$ predicts as $f(x) < \alpha\sups{t}$ and downweight the other points $x$ for the proposal distribution $p\sups{t}$ for the next round. This algorithm is inspired by classical cutting-plane algorithms~\cite[Sec.~3.2]{Nesterov04}, which remove a constant fraction of the remaining feasible space at each iteration, and is extended into the stochastic setting based on multiplicative weights algorithms~\cite{Littlestone91,AroraHaKa12}. We present the overall algorithm as Algorithm~\ref{alg:cutplane1}. \begin{algorithm}[ht] \caption{Cutting-planes using classifiers} \label{alg:cutplane1} \begin{algorithmic}[1] \REQUIRE Objective $f$, Action space $\mathcal{X}$, hypothesis class $\mathcal{H}$. \STATE Set $p^{(0)}(x) = 1/|\mathcal{X}|$ \STATE Draw $X^{(0)} \sim p^{(0)}$. \STATE Observe $Y^{(0)} = f(X^{(0)})$ \FOR{$t\in\{1\hdots T\}$} \STATE Set $\alpha^{(t)} = \text{median}(\{Y^{(t)}_i\}_{i=1}^n)$ \STATE Set $h^{(t)}\in\mathcal{H}$ as the loss minimizer of $L$ over $(X^{(0)},Y^{(0)}>\alpha^{(t)}) \hdots (X^{(t-1)},Y^{(t-1)}>\alpha^{(t)})$. \STATE Set $p^{(t)}(x) \propto p^{(t-1)}(x) (1-\eta h^{(t)}(x))$ \STATE Draw $X^{(t)} \sim p^{(t)}$ \STATE Observe $Y^{(t)} = f(X^{(t)})$. \ENDFOR \STATE Set $i^* = \arg\min_i Y_i^{(T)}$ \RETURN $X_{i^*}^{(T)}$. \end{algorithmic} \end{algorithm} \subsection{Related work} When, as is typical in optimization, one has substantial \emph{sequential} access to $f$, meaning that $T$ can be large, there are a number of major approaches to optimization. 
Bayesian optimization~\cite{shahriari2016taking,bogunovic2016truncated} and kernel-based bandits~\cite{bubeck2016multi} construct an explicit surrogate function to minimize; often, one assumes it is possible to perfectly model the function $f$. Local search algorithms~\cite{ConnScVi09,loshchilov2013cma} emulate gradient descent via finite-difference and local function evaluations. Our work differs conceptually in two ways: first, we think of $T$ as being small, while $n$ is large, and second, we represent a function $f$ by approximating its sublevel sets. Existing batched derivative-free optimizers encounter computational difficulties for batch sizes beyond dozens of points~\cite{gonzalez2016batch}. Our sublevel set approach scales to large batches of queries by simply sampling from the current sublevel set approximation. While other researchers have considered level set estimation in the context of Bayesian optimization~\cite{gotovos2013active,bogunovic2016truncated} and evolutionary algorithms~\cite{michalski2000learnable}, these works use the level set to augment a traditional optimization algorithm. We show that good sublevel set predictions alone are sufficient to achieve linear convergence. Moreover, given the extraordinary empirical success of modern classification algorithms, e.g.\ deep networks for image classification~\cite{LeCunBeHi15}, it is natural to develop algorithms for derivative-free optimization based on fitting a sequence of classifiers. \citet{yu2016derivative} also propose optimization based on classification, but their approach assumes a classifier constrained to never misclassify near the optimum, making the problem trivial. \subsection{Contributions} We present Algorithm~\ref{alg:cutplane1}, characterize its convergence rate with appropriate classifiers, and show how it relates to measures of difficulty in active learning.
We extend this basic approach, which may be computationally challenging, to an approach based on bootstrap resampling that is empirically quite effective and---in certain nice-enough scenarios---has provable guarantees of convergence. We provide empirical results on a number of different tasks: random (simulated) problems, airfoil (device) design based on physical simulators, and finding strongly-binding proteins based on DNA assays. We show that a black-box approach with random forests is highly effective within a few rounds $T$ of sequential classification; this approach provides advantages in the large batch setting. The approach to optimization via classification has a number of practical benefits, many of which we verify experimentally. It is possible to incorporate prior knowledge in DFO through domain-specific classifiers, and in more generic optimization problems one can use black-box classifiers such as random forests. Any sufficiently accurate classifier guarantees optimization performance and can leverage the large-batch data collection biological and physical problems essentially necessitate. Finally, one does not even need to evaluate $f$: it is possible to apply this framework with pairwise comparison or ordinal measurements of $f$. \section{Cutting planes via classification} Our starting point is a collection of ``basic'' results that apply to classification-based schemes and associated convergence results. Throughout this section, we assume we fit classifiers using pairs $(x, z)$, where $z$ is a $0/1$ label of negative (low $f(x)$) or positive (high $f(x)$) class. We begin by demonstrating that two quantities govern the convergence of the optimizer: (1) the frequency with which the classifier misclassifies (and thus downweights) the optimum $x^*$ relative to the multiplicative weight $\eta$, and (2) the fraction of the feasible space each iteration removes. 
If the classifier $h^{(t)}(x)$ exactly recovers the sublevel set ($h^{(t)}(x) < 0$ iff $f(x) < \alpha^{(t)}$), $\alpha^{(t)}$ is at most the population median of $f(X^{(t)})$, and $\mc{X}$ is finite, the basic cutting plane bound immediately implies that \begin{multline*} \log \left[\P_{x\sim p^{(T)}}\left( f(x) = \min_{x^*\in\mathcal{X}} f(x^*)\right) \right]\\ \geq \min\left( T \log \left(\frac{2}{2 -\eta}\right) - \log(|\mathcal{X}|), 0\right). \end{multline*} It is not obvious that such a guarantee continues to hold for inaccurate $h^{(t)}$: it may accidentally misclassify the optimum $x^*$, and the thresholds $\alpha^{(t)}$ may not rapidly decrease the function value. To address these issues, we provide a careful analysis in the coming sections: first, we show the convergence guarantees implied by Algorithm~\ref{alg:cutplane1} as a function of classification errors (Theorem \ref{thm:comp-infeasible}), after which we propose a classification strategy directly controlling errors (Sec.~\ref{sec:css}), and finally we give a computationally tractable approximation (Sec.~\ref{sec:bootstrap}). \subsection{Cutting plane style bound} We begin with our basic convergence result. Letting $p\sups{t}$ and $h\sups{t}$ be a sequence of distributions and classifiers on $\mc{X}$, the convergence rate depends on two quantities: the coverage (number of items cut) \begin{equation*} \sum_{x \in \mc{X}} h\sups{t}(x) p\sups{t-1}(x) \end{equation*} and the number of times a hypothesis downweights item $x$ (because $f(x)$ is too large), which we denote $M_T(x) \defeq \sum_{t = 1}^T h\sups{t}(x)$. We have the following \begin{restatable}{thm}{thmbasiccuttingplane} \label{thm:comp-infeasible} Let $\gamma > 0$ and assume that for all $t$, \begin{equation*} \sum_{x \in \mathcal{X}}h^{(t)}(x) p^{(t-1)}(x) \geq \gamma \end{equation*} where $p\sups{t}(x) \propto p^{(t-1)}(x) (1-\eta h\sups{t}(x))$ as in Alg.~\ref{alg:cutplane1}. Let $\eta \in [0,1/2]$ and $p^{(0)}$ be uniform. 
Then for all $x \in \mc{X}$, \begin{equation*} \log p\sups{T}(x) \ge \frac{\gamma \eta}{\eta + 2} T - \eta(\eta + 1) M_T(x) - \log(2 |\mc{X}|). \end{equation*} \end{restatable} The theorem follows from a modification of standard multiplicative weight algorithm guarantees~\cite{AroraHaKa12}; see supplemental section \ref{sec:cuttingplanes} for a full proof. We say that our algorithm converges \emph{linearly} if $\log p\sups{t}(x) \gtrsim t$. In the context of Theorem~\ref{thm:comp-infeasible}, choice of $\eta$ maximizing $-(\eta^2+\eta)M_T(x^*)+ \frac{\eta}{\eta+2}\gamma T$ yields such convergence, as picking $\eta$ sufficiently small that \begin{equation*} T - \frac{(\eta+1)(\eta+2)}{\gamma}M_T(x^*) = \Omega(T) \end{equation*} guarantees linear convergence if $2 M_T(x^*) < T \gamma$. A simpler form of the above bound for a fixed $\eta$ shows the linear convergence behavior. \begin{restatable}{cor}{convergencecorollary} Let $x \in \mathcal{X}$, where $q_T(x) \defeq \frac{M_T(x)}{\gamma T} \leq 1 /4$. Under the conditions of Theorem \ref{thm:comp-infeasible}, \[ \log(p^{(T)}(x)) \geq \min\left(\frac{1}{5}, \frac{1}{3}-\frac{4q_T(x)}{3}\right) \frac{\gamma T}{2} -\log(2|\mathcal{X}|) \] and \begin{equation*} \frac{1}{4} - \frac{\log(2|\mathcal{X}|) }{2\gamma T} \leq q_T(x). \end{equation*} \end{restatable} \noindent The condition $q_T(x) \geq \frac{1}{4} - \frac{1}{2\gamma T} \log(2|\mathcal{X}|)$ arises because if $M_T(x)$ is small, then eventually we must have $p\sups{T}(x) \ge 1-\gamma$, and any classifier $h$ which fulfils the condition $\sum_{x \in \mathcal{X}}h^{(t)}(x) p^{(t-1)}(x) \geq \gamma$ in Thm.~\ref{thm:comp-infeasible} must downweight $x$. At this point, we can identify the optimum exactly with $O(1/(1-\gamma))$ additional draws. The corollary shows that if $M_T(x^*)=0$ and $\gamma = 1/2$, we recover a linear cutting-plane-like convergence rate~\cite[cf.][]{Nesterov04}, which makes constant progress in volume reduction in each iteration.
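To make the multiplicative-weights update of Algorithm~\ref{alg:cutplane1} concrete, here is a toy Python simulation with an idealized, exact sublevel-set classifier and the population median of $f$ under $p^{(t)}$ as the threshold (the toy objective, horizon, and all names are ours, purely for illustration); the proposal distribution concentrates on a small neighborhood of the minimizer, as the theory predicts:

```python
import numpy as np

# Toy run of the classifier-based cutting-plane update on a finite space,
# using an idealized exact sublevel-set classifier h_t(x) = 1{f(x) > alpha_t}
# with alpha_t the weighted median of f under the current distribution.
X = np.arange(100)
fX = (X - 37) ** 2                  # "unknown" objective; minimizer x* = 37
p = np.full(len(X), 1.0 / len(X))   # p^(0): uniform
eta = 0.5

for t in range(40):
    order = np.argsort(fX)
    cdf = np.cumsum(p[order])
    alpha = fX[order][np.searchsorted(cdf, 0.5)]  # weighted median threshold
    h = fX > alpha                  # exact sublevel-set classifier
    p = p * (1.0 - eta * h)         # downweight the points h cuts
    p = p / p.sum()                 # renormalize

# mass concentrates on a small neighborhood of the minimizer x* = 37
assert p[36] + p[37] + p[38] > 0.9
```

Note that with a strict median threshold the update stops separating the minimizer from its immediate neighbors once they straddle the median together, which is one reason the analysis tracks points $x$ with $f(x) \approx f(x^*)$ rather than $x^*$ alone.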
\subsection{Consistent selective strategy for strong control of error} \label{sec:css} The basic guarantee of Theorem~\ref{thm:comp-infeasible} requires relatively few mistakes on $x^*$, or at least on a point $x$ with $f(x) \approx f(x^*)$, to achieve good performance in optimization. It is thus important to develop careful classification strategies that are conservative: they do not prematurely cut out values $x$ whose performance is uncertain. With this in mind, we now show how consistent selective classification strategies~\cite{el2012active} (related to active learning techniques, and which abstain on ``uncertain'' examples similar to the Knows-What-It-Knows framework~\cite{LiLiWa09,AbernethyAmDrKe13}) allow us to achieve linear convergence when the classification problems are realizable using a low-complexity hypothesis class. The central idea is to only classify an example if all zero-error hypotheses agree on the label, and otherwise abstain. Since any hypothesis achieving zero population error must have zero training set errors, we will only label points in a way consistent with the true labels. \citet{el2012active} define the following \emph{consistent selective strategy} (CSS). \begin{defn}[Consistent selective strategy] \label{defn:css} For a hypothesis class $\mathcal{H}$ and training sample $S_m$, the \emph{version space} $\verspace_{\mathcal{H},S_m}\subset \mathcal{H}$ is the set of all hypotheses which perfectly classify $S_m$. The \emph{consistent selective strategy} is the classifier \[h(x)= \begin{cases} 1 &\text{ if }\forall g \in \verspace_{\mathcal{H},S_m}, g(x)=1 \\ 0 &\text{ if }\forall g \in \verspace_{\mathcal{H},S_m}, g(x)=0 \\ \text{no decision} & \text{ otherwise.} \end{cases} \] \end{defn} Applied to our optimizer, this strategy enables safely downweighting examples whenever they are classified as being outside the sublevel set.
Optimization performance guarantees then come from demonstrating that at each iteration the selective strategy does not abstain on too many examples. The rate of abstention for a selective classifier is related to the difficulty of disagreement based active learning, controlled by the disagreement coefficient \cite{hanneke2014theory}. \begin{defn} \label{def:disagree} The \emph{disagreement ball} of a hypothesis class $\mathcal{H}$ for distribution $P$ is \begin{equation*} B_{\mathcal{H},P}(h,r) \defeq \{h' \in \mathcal{H} \mid P(h(X)\neq h'(X)) \leq r\}. \end{equation*} The \emph{disagreement region of a subset $\mathcal{G}\subset \mathcal{H}$} is \begin{equation*} \disagree(\mathcal{G}) \defeq \{x\in\mathcal{X} \mid \exists h_1, h_2 \in \mathcal{G} \text{ s.t. } h_1(x) \neq h_2(x)\}. \end{equation*} The \emph{disagreement coefficient} $\discoeff_h$ of the hypothesis class $\mathcal{H}$ for the distribution $P$ is \begin{equation*} \discoeff_h \defeq \sup_{r > 0} \frac{P(X\in\disagree(B_{\mc{H},P}(h,r)))}{r}. \end{equation*} \end{defn} The disagreement coefficient directly bounds the abstention rate as a function of generalization error. \begin{restatable}{thm}{thmcsscover} \label{thm:csscover} Let $h$ be the CSS classifier in definition \ref{defn:css}, and let $h^*\in\mathcal{H}$ be a classifier achieving zero risk. If $\P(g(X) \neq h^*(X)) < \epsilon$ for all $g \in \verspace_{\mathcal{H},S_m}$, then CSS achieves coverage \begin{equation*} \P(h(X) = \text{no decision}) \leq \discoeff_{h^*} \epsilon \end{equation*} \end{restatable} This follows from the definition of the disagreement coefficient, and the size of the version space (Supp. section \ref{sec:cuttingplanes} contains a full proof). The dependence of our results on the disagreement coefficient implies a reduction from zeroth order optimization to disagreement based active learning~\cite{el2012active} and selective classification~\cite{wiener2011agnostic} over sublevel sets. 
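As a concrete instance of Definition~\ref{defn:css}, consider 1-D threshold classifiers $h_\theta(x) = \mathbf{1}\{x > \theta\}$: the version space is an interval of thresholds, and CSS abstains exactly on the disagreement region between the largest negative and smallest positive example. A minimal Python sketch (assuming realizable labels; the function name is ours):

```python
# CSS for the 1-D threshold class h_theta(x) = 1{x > theta}. Thresholds
# consistent with the sample form an interval, and CSS abstains exactly on
# the disagreement region between the two extreme consistent hypotheses.

def css_predict(sample, x):
    """sample: list of (x_i, z_i) pairs with z_i in {0, 1}, assumed realizable
    by some threshold. Returns 1, 0, or 'no decision'."""
    lo = max((xi for xi, zi in sample if zi == 0), default=float('-inf'))
    hi = min((xi for xi, zi in sample if zi == 1), default=float('inf'))
    if x >= hi:
        return 1                # every consistent threshold labels x positive
    if x <= lo:
        return 0                # every consistent threshold labels x negative
    return 'no decision'        # disagreement region: abstain

sample = [(0.1, 0), (0.3, 0), (0.8, 1), (0.9, 1)]
assert css_predict(sample, 0.05) == 0
assert css_predict(sample, 0.95) == 1
assert css_predict(sample, 0.5) == 'no decision'
```

For this class, the abstention region shrinks with the sample at a rate governed by the disagreement coefficient, in line with Theorem~\ref{thm:csscover}.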
Implementing the CSS classifier may be somewhat challenging: given a particular point $x$, one must verify that all hypotheses consistent with the data classify it identically. In many cases, this requires training a classifier on the current training sample $S\sups{t}$ at iteration $t$, coupled with $x$ labeled positively, and then retraining the classifier with $x$ labeled negatively~\cite{wiener2011agnostic}. This cost can be prohibitive. (Of course, implementing the multiplicative weights-update algorithm over $x \in \mc{X}$ is in general difficult as well, but in a number of application scenarios we know enough about $\mc{H}$ to be able to approximate sampling from $p\sups{t}$ in Alg.~\ref{alg:cutplane1}.) A natural strategy is to use the CSS classifier as part of Algorithm \ref{alg:cutplane1}, setting all \texttt{no decision} outputs to the zero class, only removing points confidently above the level set $\alpha\sups{t}$. That is, in round $t$ of the algorithm, given samples $S=(X\sups{t}, Z\sups{t})$, we define \begin{equation*} h^{(t)}(x) = \begin{cases} 1 &\text{ if }\forall g \in \verspace_{\mathcal{H},S}, g(x)=1\\ 0 &\text{ if }\forall g \in \verspace_{\mathcal{H},S}, g(x)=0\\ 0 & \text{ otherwise.} \end{cases} \end{equation*} There is some tension between classifying examples correctly and cutting out bad $x \in \mc{X}$, which the next theorem shows we can address by choosing large enough sample sizes $n$. \begin{restatable}{thm}{thmcsscut} \label{thm:csscut} Let $\mathcal{H}$ be a hypothesis class containing indicator functions for the sublevel sets of $f$, with VC-dimension $V$ and disagreement coefficient $\discoeff_h$. 
There exists a numerical constant $C < \infty$ such that for all $\delta \in [0, 1]$, $\epsilon \in [0, 1]$, $\gamma \in (\discoeff_h \epsilon, \half)$, and \begin{multline*} n \geq \max\Big\{C\epsilon^{-1} [V \log(\epsilon^{-1}) + \log(\delta^{-1}) + \log(2T)],\\ \frac{1}{2(\gamma-0.5)^2}(\log(\delta^{-1})+\log(2T))\Big\}, \end{multline*} with probability at least $1 - \delta$ \begin{multline*} \log(p^{(T)}(x^*)) \geq \min\Big\{(\gamma-\discoeff_h\epsilon) \frac{\eta}{\eta+2}T -\log(2|\mathcal{X}|), \\ \log(1-\gamma) \Big\} \end{multline*} after $T$ rounds of Algorithm~\ref{alg:cutplane1}. \end{restatable} The proof follows from combining the selective classification bound with standard VC dimension arguments to obtain the sample size requirement (Supp.~\ref{sec:cuttingplanes} contains a full proof). Thus if $\discoeff_h$ is small, such as $\log(|\mathcal{X}|)$, then choosing $\epsilon = \discoeff_h^{-1}$ achieves exponential improvements over random sampling. In the worst case, $\discoeff_h=O(|\mathcal{X}|)$, but small values of $\discoeff_h$ are known for many problems; for example, for linear classification with continuous $\mathcal{X}$ over densities bounded away from zero, $\discoeff_h = \text{poly}(\log(\text{Vol}(\mathcal{X})))$, which would result in linear convergence rates (Theorem 7.16, \cite{hanneke2014theory}). Using recent bounds on the disagreement coefficient for linear separators \cite{BalcanLo13}, we can show that for linear optimization over a convex domain, the CSS-based optimization algorithm above achieves linear convergence with $O(d^{3/2}\log(d^{1/2})+d^{1/2}\log(3T/\delta))$ samples with probability at least $1-\delta$ (for lack of space, we present this as Theorem~\ref{thm:linear-opt-classifier} in the supplement).
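To illustrate how these guarantees play out, the following toy sketch (our code; the objective, hypothesis class, and all constants are illustrative) runs a CSS-style cutting-plane loop with multiplicative weights on a monotone objective over a finite grid, where the above-level-set indicators form a threshold class:

```python
import numpy as np

# Toy CSS-based cutting-plane optimizer in the style of Algorithm 1 (ours).
# f(x) = x on a grid, so "above level alpha" sets are thresholds 1{x > t}.
rng = np.random.default_rng(0)

N, n, T, eta = 200, 40, 15, 1.0
grid = np.arange(N)
f = lambda x: x            # monotone objective; minimizer is x* = 0
w = np.ones(N)             # multiplicative weights over the grid

for _ in range(T):
    p = w / w.sum()
    idx = rng.choice(N, size=n, p=p)       # batch sampled from p
    alpha = np.median(f(grid[idx]))        # empirical level
    z = f(grid[idx]) > alpha               # label 1 = above the level set
    if z.any():
        # CSS for thresholds: every consistent h_t labels x "above" iff
        # x >= min{x_i : z_i = 1}; abstentions are treated as "keep".
        cut = grid >= grid[idx][z].min()
        w[cut] *= np.exp(-eta)             # downweight confident cuts

p = w / w.sum()
print(int(np.argmax(w)), round(float(p[0]), 3))
```

Because CSS never confidently mislabels the minimizer, the weight of $x^* = 0$ is never reduced, and the mass of the sampling distribution concentrates on it over the rounds.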
When the classification problem is non-realizable, but the Bayes-optimal hypothesis does not misclassify $x^*$, an analogous result holds through the agnostic selective classification framework of Wiener and El-Yaniv \cite{wiener2011agnostic}. The full result is in supplemental Theorem~\ref{thm:cssagn}. \section{Computationally efficient approximations} \label{sec:bootstrap} While selective classification provides sufficient control of error for linear convergence, it is generally computationally intractable. However, a bootstrap resampling algorithm~\cite{EfronTi93} approximates selective classification well enough to provide finite sample guarantees in parametric settings. Our analysis provides intuition for the empirical observation that selective classification via the bootstrap works well in many real-world problems~\cite{mamitsuka1998query}. Formally, consider a parametric family $\{P_\theta\}_{\theta \in \Theta}$ of conditional distributions $Z \mid X \in [0,1]$ with compact parameter space $\Theta$. Given $n$ samples $X_1,\dots,X_n$, we observe $Z_i | X_i \sim P_{\theta^*}$ with $\theta^* \in \operatorname{int} \Theta$. Let $\ell_\theta(x, z) = -\log(P_\theta(z | x))$ be the negative log likelihood of $z$, which majorizes the 0-1 loss of the linear hypothesis class: $\ell_\theta(x, z) \ge \ind{ (2z-1) x^\top \theta < 0 }$. Define the weighted likelihood \[ L_n(\theta,u) \equiv \tfrac{1}{n} \sum_{i=1}^n (1+u_i)\ell_\theta(X_i, Z_i), \] and consider the following multiplier bootstrap algorithm~\cite{EfronTi93,spokoiny2012parametric}, parameterized by $B \in \naturals$ and a variance $\sigma^2$; the parameter $\sigma$ adds \emph{additional} variation to the estimates to increase parameter coverage. \begin{enumerate} \item Draw $\{(X_i, Z_i)\}_{i=1}^n$ from $\P$. \item Compute $\mle = \arg\min_\theta L_n(\theta, 0)$. \item For $b = 1$ to $B$, \begin{enumerate} \item Draw $u_b \simiid \mbox{Uni}[-1, 1]$.
\item Compute \[\theta^\circ_{u_b} = \sigma(\arg\min_\theta L_n(\theta, u_b)-\mle)+\mle.\] \end{enumerate} \item Define the estimator \[ h^\circ(x) = \begin{cases} 1 &\text{ if }\forall b\in[B], x^\top \theta^\circ_{u_b}> 0 \\ 0 &\text{ if }\forall b\in[B], x^\top \theta^\circ_{u_b} \leq 0\\ \text{no decision} & \text{ otherwise.} \end{cases}\] \end{enumerate} For linear classifiers with strongly convex losses, this algorithm obtains selective classification guarantees under appropriate regularity conditions, as presented in the following theorem. \begin{thm} Assume $\ell_\theta$ is twice differentiable and fulfils $\|\nabla \ell_{\theta}(X, Z)\| \le R$ and $\norm{\nabla^2\ell_\theta(X,Z)}_{op} \leq S$ almost surely. Additionally, assume $L_n(\theta, 1)$ is $\gamma$-strongly convex and that $\nabla^2 L_n(\theta, 1)$ is $M$-Lipschitz with probability one. For $h^\circ$ defined above and $x\in\mathcal{X}$, \[P(x^\top \theta^* \leq 0 \text{ and } h^\circ(x) = 1) < \delta.\] Further, the abstention rate is bounded by \[\int_{x\in \reals^d}\ind{h^\circ(x)=\text{no decision}}p(x)dx \leq \epsilon \discoeff_h\] with probability $1-\delta$ whenever \[B \geq 15\log(3/\delta),\] \[\sigma=O(d^{1/2} + \log(1/\delta)^{1/2}+n^{-1/2}),\] \[\epsilon=O\left(\sigma^2 n^{-1} \log(B/\delta)\right),\] and \[n \geq 2\log(2d/\delta)S/\gamma^2.\] \end{thm} Due to length, the proof and full statement with constants appear in the appendix as Theorem \ref{thm:bootlin}, with a sketch provided here: we first show that a quadratic version space, combined with a multivariate Gaussian sample $\tq$, obtains the selective classification guarantees (Lemmas \ref{lem:quadmin}, \ref{lem:ballmax}, \ref{lem:strong-convex}). We then show that $\theta^\circ \approx \tq$ to order $n^{-1}$, which is sufficient to recover Theorem \ref{thm:bootlin}.
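A minimal sketch of the multiplier-bootstrap procedure (steps 1--4 above), using a logistic loss fit by plain gradient descent; the data, $B$, $\sigma$, and the optimization settings are our illustrative choices, not the constants of the theorem:

```python
import numpy as np

# Sketch of the multiplier-bootstrap selective classifier (steps 1-4 above),
# with logistic loss minimized by gradient descent.  All constants are
# illustrative, not those from the theorem.
rng = np.random.default_rng(1)

def fit(X, y, u, steps=500, lr=0.5):
    # minimize L_n(theta, u) = (1/n) * sum_i (1 + u_i) * logistic_loss_i
    theta = np.zeros(X.shape[1])
    wts = 1.0 + u
    for _ in range(steps):
        m = X @ theta
        grad = X.T @ (wts * (1 / (1 + np.exp(-m)) - y)) / len(y)
        theta -= lr * grad
    return theta

# Linearly separable toy data with true separator theta* along (1, 0).
n, d = 200, 2
X = rng.normal(size=(n, d))
y = (X[:, 0] > 0).astype(float)

theta_hat = fit(X, y, np.zeros(n))            # step 2: MLE
B, sigma = 20, 3.0
boots = []
for _ in range(B):                            # step 3: multiplier bootstrap
    u = rng.uniform(-1, 1, size=n)
    boots.append(sigma * (fit(X, y, u) - theta_hat) + theta_hat)

def h_circ(x):                                # step 4: consensus decision
    signs = [x @ th > 0 for th in boots]
    if all(signs):
        return 1
    if not any(signs):
        return 0
    return None                               # abstain

print(h_circ(np.array([3.0, 0.0])))   # a point far on the positive side
print(h_circ(np.array([-3.0, 0.0])))  # a point far on the negative side
```

Points deep inside one half-space receive a unanimous vote, while points near the estimated boundary trigger disagreement among the bootstrap replicates and hence abstention.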
\begin{figure}[h] \centering \subcaptionbox{Classification confidences formed by bootstrapping approximate selective classification.\label{fig:bootbound}} {\includegraphics[scale=0.27]{figs/theory/coverage.png}} \quad \subcaptionbox{Bootstrapping results in more consistent identification of minima. \label{fig:bootopt}} {\includegraphics[scale=0.24]{figs/theory/linear.png}} \caption{Bootstrap consensus provides more conservative classification boundaries, which prevents repeatedly misclassifying the minimum, compared to direct loss minimization (panel b, triangle).} \end{figure} The $d \discoeff_h$ abstention rate in this bound is $d$ times the original selective classification result. This additional factor of $d$, appearing in $\sigma^2$, arises from the difference between finding an optimum within a ball and randomly sampling it: random vectors concentrate within $O(1/d)$ of the origin, while the maximum possible value is 1. This gap forces us to scale the variance in the decision function by $\sigma$ (step 3b). We present selective classification approximation bounds analogous to Theorem~\ref{thm:csscut} for linear optimization in the Appendix as Theorem~\ref{thm:csscutboot}. To illustrate our results through simulations, consider optimizing a two-dimensional linear function in the unit box. Figure \ref{fig:bootbound} shows the set of downweighted points (colored points) for various algorithms on classifying a single superlevel set based on eight observations (black points). Observe how the plain linear classifier downweights many points (colored `x'), in contrast to exact CSS, which only downweights points guaranteed to be in the superlevel set. Errors of this type, combined with Alg.~\ref{alg:cutplane1}, result in optimizers that fail to find the true minimum depending on initialization (Figure \ref{fig:bootopt}). The bootstrapped linear classifier behaves similarly to CSS, but is looser due to the non-asymptotic setting.
Random forests, another type of bootstrapped classifier, are surprisingly good at approximating CSS, despite not making use of the linearity of the decision boundary. \section{Partial order based optimization} One benefit of optimizing via classification is that the algorithm requires only a total ordering amongst the elements. Specifically, step 6 of Algorithm \ref{alg:cutplane1} only requires threshold comparisons against a percentile selected in step 5. This enables optimization under pairwise comparison feedback. At each round, instead of observing $f(X^{(t)})$, we observe $g(X_i^{(t)}, X_j^{(t)})=1_{f(X_i^{(t)}) < f(X_j^{(t)})}$, which is a natural form of feedback in domains such as human surveys \cite{phelps2015pairwise} or matched biological experiments \cite{harwood2013microbial}. Given the pairwise comparison function $g$, the threshold test $f(X^{(t)}_i) < \alpha^{(t)}$ can be replaced with the following stochastic quantile estimator: \begin{equation}\label{eq:paircomp} \hat{f}(X^{(t)}_i) = \frac{1}{c}\sum_{k=1}^c g(X^{(t)}_{I_k}, X^{(t)}_i) \leq 0.5, \end{equation} where $I_k \sim \text{Unif}(\{1, 2, \hdots, n\})$, for $cn$ total pairwise comparisons. We find that $c > 10$ works well in practice, and more sophisticated preference aggregation algorithms may reduce the number of comparisons even further. \section{Experimental evidence} We evaluate Algorithm \ref{alg:cutplane1} as a DFO algorithm across a few real-world experimental design benchmarks, common synthetic toy optimization problems, and benchmarks that allow only pairwise function value comparisons. The small-batch (n = 1-10) nature of hyperparameter optimization problems is outside the scope of our work, even though they are common DFO problems. For constructing the classifier in Algorithm \ref{alg:cutplane1}, we apply ensembled decision trees with a consensus decision defined as 75\% of trees agreeing on the label (referred to as \textsc{classify-rf}).
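The consensus rule behind \textsc{classify-rf} can be sketched as follows; for self-containedness we use bootstrapped one-feature decision stumps in place of scikit-learn's random forest (our simplification), but the 75\% voting logic is the same:

```python
import numpy as np

# Sketch of the consensus rule used by classify-rf: an ensemble votes, and a
# point is labeled only if >= 75% of members agree.  Bootstrapped decision
# stumps stand in for scikit-learn's random forest (our simplification).
rng = np.random.default_rng(2)

def fit_stump(X, y):
    # exhaustive search over (feature, threshold, sign) minimizing training error
    best = (np.inf, 0, 0.0, 1)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for sign in (1, -1):
                pred = (sign * (X[:, j] - t) > 0).astype(int)
                err = np.mean(pred != y)
                if err < best[0]:
                    best = (err, j, t, sign)
    _, j, t, sign = best
    return lambda Z: (sign * (Z[:, j] - t) > 0).astype(int)

def consensus_ensemble(X, y, n_trees=25, agree=0.75):
    stumps = []
    for _ in range(n_trees):
        idx = rng.integers(0, len(y), size=len(y))   # bootstrap resample
        stumps.append(fit_stump(X[idx], y[idx]))
    def classify(Z):
        votes = np.mean([s(Z) for s in stumps], axis=0)
        out = np.full(len(Z), -1)                    # -1 = no consensus
        out[votes >= agree] = 1
        out[votes <= 1 - agree] = 0
        return out
    return classify

X = rng.normal(size=(120, 2))
y = (X[:, 0] > 0).astype(int)
clf = consensus_ensemble(X, y)
far = np.array([[4.0, 0.0], [-4.0, 0.0]])
print(clf(far))
```

Points far from the decision boundary receive unanimous votes and get a label; ambiguous points fall below the consensus threshold and are left undecided, mimicking the conservative behavior of selective classification.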
This particular classifier works in a black-box setting, and is highly effective across all problem domains with no tuning. We also empirically investigate the importance of well-specified hypotheses and consensus ensembling and show improved results for ensembles of linear classifiers and problem specific classifiers, which we call \textsc{classify-tuned}. In order to demonstrate that no special tuning is necessary, the same constants are used in the optimizer for all experiments, and the classifiers use off-the-shelf implementations from \textsc{scikit-learn} with no tuning. For sampling points according to the weighted distribution in Algorithm \ref{alg:cutplane1}, we enumerate for discrete action spaces $\mathcal{X}$, and for continuous $\mathcal{X}$ we perturb samples from the previous rounds using a Gaussian and use importance sampling to approximate the target distribution. Although exact sampling for the continuous case would be time-consuming, the Gaussian perturbation heuristic is fast, and seems to work well enough for the functions tested here. \begin{figure*}[h] \centering \subcaptionbox{Binding to the CRX protein\label{fig:pbm1}} {\includegraphics[scale=0.35]{figs/pbm/CRX_REF_R1_8mers.png}} \subcaptionbox{Binding to the VSX1 protein\label{fig:pbm2}} {\includegraphics[scale=0.35]{figs/pbm/VSX1_G160D_R1_8mers.png}} \subcaptionbox{High-lift airfoil design\label{fig:airfoil}} {\includegraphics[scale=0.35]{figs/airfoil/Airfoil_design1.png}} \caption{Performance on two types of real-world batched zeroth-order optimization tasks. \textsc{classify-rf} consistently outperforms baselines and even randomly sampling twice the batch size. 
The line shows the median function value over runs; the shaded area shows quartiles.} \label{fig:pbm} \end{figure} As a baseline, we compare to the following algorithms:\vspace{-2ex} \begin{itemize} \item Random sampling (\textsc{random}) \vspace{-1ex} \item Randomly sampling double the batch size (\textsc{random-2x}), which is a strong baseline recently shown to outperform many derivative-free optimizers \cite{li2016hyperband}. \vspace{-1ex} \item The evolutionary strategy (\textsc{CMA-ES}) for continuous problems, due to its high performance in black-box optimization competitions as well as its inherent applicability to the large-batch setting \cite{loshchilov2013cma}. \vspace{-3ex} \item The Bayesian optimization algorithm provided by \textsc{GpyOpt}~\cite{gpyopt2016} (\textsc{GP}) for both continuous and discrete problems, using expected improvement as the acquisition function. We use the `random' evaluator, which implements an epsilon-greedy batching strategy, since the large batch sizes (100-1000) make the use of more sophisticated evaluators completely intractable. The default RBF kernel was used in all experiments presented here. The $\sfrac{3}{2}$- and $\sfrac{5}{2}$-Matern kernels and string kernels were tried where appropriate, but did not provide any performance improvements. \vspace{-1ex} \end{itemize} In terms of runtime, all computations for \textsc{classify-rf} take less than 1 second per iteration, compared to 0.1s for \textsc{CMA-ES} and 1.5 minutes for \textsc{GpyOpt}. All experiments were replicated fifteen times to measure variability with respect to initialization. All new benchmark functions and reference implementations are made available at \url{http://bit.ly/2FgiIxA}. \subsection{Designing optimal DNA sequences} The publicly available protein binding microarray (PBM) dataset, consisting of 201 separate assays \cite{barrera2016survey}, allows us to accurately benchmark the optimization of protein binding over DNA sequences.
In each assay, the binding affinity between a particular DNA-binding protein (transcription factor) and all 8-base DNA sequences is measured using a microarray. This dataset defines 201 separate discrete optimization problems. For each protein, the objective function is the negative binding affinity (as measured by fluorescence), and the batch size is 100 (corresponding roughly to the size of a typical 96-well plate) across ten rounds. Each possible action corresponds to measuring the binding affinity of a particular 8-base DNA sequence exactly. The actions are featurized by considering the binary encoding of whether a base exists in a position, resulting in a 32-dimensional space. This emulates the task of finding the DNA binding sequence of a protein using purely low-throughput methods. Figures \ref{fig:pbm1} and \ref{fig:pbm2} show the optimization traces of two randomly sampled examples, where the lines indicate the median achieved function value over 15 random initializations, and the shading indicates quartiles. \textsc{classify-rf} shows consistent improvements over all discrete action space baselines. For evaluation, we further sample 20 problems and find that the median binding affinity found across replicates is strictly better on 16 out of 20, and tied with the Gaussian process on 2. In this case, the high performance of random forests is relatively unsurprising, as random forests are known to be high-performance classifiers for DNA sequence recognition tasks \cite{chen2012random,knight2008array}.
\begin{figure*}[h] \centering \subcaptionbox{Random linear function\label{fig:lin}} {\includegraphics[scale=0.35]{figs/linfun/Linear.png}} \subcaptionbox{Linear$+$quadratic function\label{fig:quad}} {\includegraphics[scale=0.35]{figs/linfun/Quadratic.png}} \subcaptionbox{Ensembling classifiers improves optimization performance\label{fig:ens}} {\includegraphics[scale=0.35]{figs/ensfun/CRX_REF_R1_8mers.png}} \caption{Testing the importance of ensembling and a well-specified hypothesis class on synthetic data, where the hypothesis class for \textsc{Classify-tuned} exactly matches the level sets (panel a) or matches the level sets with some error (panel b). Ensembling also consistently improves performance and reduces dependence on initialization (panel c).} \end{figure*} \subsection{Designing high-lift airfoils} Airfoil design and other simulator-based objectives are well suited to the batched, classification-based optimization framework, as 30-40 simulations can be run in parallel on modern multicore computers. In the airfoil design case, we use a 2-D aerodynamics simulator for airfoils \cite{drela1989xfoil}. The objective function is the negative of lift divided by drag (set to zero whenever the simulator throws an error), and the action space is the set of all common airfoils (NACA-series 4 airfoils). The airfoils are featurized by taking the coordinates around the perimeter of the airfoil as defined in the Selig airfoil format. This results in a highly correlated, two-hundred-dimensional feature space. The batch size is 30 (corresponding to the number of cores in our machine) and $T=10$ rounds of evaluations are performed. We find in Figure \ref{fig:airfoil} that the \textsc{classify-rf} algorithm converges to the optimal airfoil in only five rounds, and does so consistently, unlike the baselines.
The Gaussian process beat the twice-random baseline, since the radial basis kernel is well suited for this task (as lift is relatively smooth over the $\ell_2$ distance between airfoils), but did not perform as well as the \textsc{classify-rf} algorithm. \subsection{Gains from designed classifiers and ensembles} Matching the classifier and objective function generally results in large improvements in optimization performance. We test two continuous optimization problems in $[-1,1]^{300}$: optimizing a random linear function, and optimizing a random sum of a quadratic and a linear function. For this high dimensional task, we use a batch size of 1000. In both cases we compare continuous baselines with \textsc{classify-rf} and \textsc{classify-tuned}, which uses a linear classifier. We find that the use of the correct hypothesis class gives dramatic improvements over baseline in the linear case (Figure \ref{fig:lin}) and continues to give substantial improvements even when a large quadratic term is added, making the hypothesis class misspecified (Figure \ref{fig:quad}). \textsc{classify-rf} does not do as well as this custom classifier, but continues to do as well as the best baseline algorithm (\textsc{CMA-ES}). We also find that using an ensembled classifier is important for optimization. Figure \ref{fig:ens} shows an example run on the DNA binding task comparing the consensus of an ensemble of logistic regression classifiers against a single logistic regression classifier. Although both algorithms perform well in early iterations, the single logistic regression algorithm gets `stuck' earlier and finds a suboptimal local minimum, due to an accumulation of errors. Ensembling consistently reduces such behavior. \subsection{Low-dimensional synthetic benchmarks} We additionally evaluate on two common synthetic benchmarks (Figures \ref{fig:shekel} and \ref{fig:hartmann}).
Although these tasks are not the focus of this work, we show that \textsc{classify-rf} is surprisingly good as a general black-box optimizer when the batch sizes are large. We consider a batch size of 500 and ten steps, due to the moderate dimensionality and multi-modality relative to the number of steps. We find qualitatively similar results to before, with \textsc{classify-rf} outperforming the other algorithms and \textsc{CMA-ES} the best baseline. \begin{figure}[h] \centering \subcaptionbox{Shekel (4d)\label{fig:shekel}} {\includegraphics[scale=0.26]{figs/toyfun/shekel.png}} \subcaptionbox{Hartmann (6d)\label{fig:hartmann}} {\includegraphics[scale=0.26]{figs/toyfun/hartmann.png}} \caption{\textsc{classify-rf} outperforms baselines on synthetic benchmark functions with large batches} \end{figure} \subsection{Optimizing with pairwise comparisons} Finally, we demonstrate that we can optimize a function using only pairwise comparisons. In Figure \ref{fig:paircomp} we show the optimization performance when using the ordering estimator from equation \ref{eq:paircomp}. For small numbers of comparisons per element $(c=5)$ we find a substantial loss of performance, but once we observe at least 10 pairwise comparisons per proposed action, we are able to reliably optimize as well as in the full function value case. This suggests that classification-based optimization can handle pairwise feedback with little loss in efficiency. \begin{figure}[h] \centering \includegraphics[scale=0.3]{figs/pord/CRX_REF_R1_8mers.png} \caption{Optimization with pairwise comparisons between each action and a small set of $(c)$ randomly selected actions. Between 10 and 20 pairwise comparisons per action give sufficient information to fully optimize the function.} \label{fig:paircomp} \end{figure} \section{Discussion} Our work demonstrates that the classification-based approach to derivative-free optimization is effective and principled, but leaves open several theoretical and practical questions.
In terms of theory, it is not clear whether a modified algorithm can make use of empirical risk minimizers instead of perfect selective classifiers. In practice, we have left open the questions of tractably sampling from $p^{(t)}$ and of how to appropriately handle smaller-batch settings where $d > n$. \clearpage
\part{\sc{Foundations of Information}} \chapter{Information Partitions and Knowledge} Section \ref{sec:Partition} introduces the partitional model of information and three definitions of common knowledge. Section \ref{sec:AgreetoDisagree} presents \citet{Aumann}'s result that agents cannot agree to disagree, and Section \ref{sec:emailgame} presents \citet{Rubinstein}'s email game. Section \ref{sec:pBelief} defines common $p$-belief. We assume a finite state space in Sections \ref{sec:Partition}-\ref{sec:pBelief} to ease exposition, and discuss in Section \ref{sec:General} how these results extend more generally. \section{Common Knowledge} \label{sec:Partition} An unknown state $\omega$ takes values in the finite set $\Omega$. Agents $i\in \mathcal{I}$ share a \emph{common prior} that the state $\omega$ is distributed according to $P \in \Delta(\Omega)$. Each agent $i$'s \emph{information partition} $\Pi_i$ is a partition of $\Omega$, with the property that for any realization of the state $\omega$, agent $i$ is informed that the state belongs to $\Pi_i(\omega)$. \begin{assumption} Every partition element has strictly positive probability under the prior; that is, $P(\Pi_i(\omega))>0$ for every agent $i \in \mathcal{I}$ and state $\omega \in \Omega$. \end{assumption} Everything above is commonly known to all agents; in particular, all agents know the state space $\Omega$ and the information partitions $(\Pi_i)_{i \in \mathcal{I}}$.\footnote{This assumption is less strong than it might initially seem, since we can redefine states and expand the state space to accommodate uncertainty about other players' partitions (see Example \ref{ex:ExpandOmega}).} Throughout, the state space $ \Omega $ is interpreted as capturing all relevant uncertainty.
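The partitional model is straightforward to make computational. The following sketch (our illustration, not part of the text) represents a partition as a list of cells and computes the posterior an agent holds after conditioning on the cell containing the realized state:

```python
from fractions import Fraction

# A minimal computational sketch of the partitional model (our illustration):
# states are integers, a partition is a list of cells, and conditioning on
# the cell containing omega gives the agent's posterior by Bayes' rule.

def cell(partition, omega):
    """The element Pi_i(omega) of agent i's partition containing omega."""
    return next(c for c in partition if omega in c)

def posterior(prior, partition, omega, event):
    """P(event | Pi_i(omega)) under a common prior given as {state: prob}."""
    c = cell(partition, omega)
    p_cell = sum(prior[s] for s in c)
    return sum(prior[s] for s in c if s in event) / p_cell

Omega = {1, 2, 3, 4, 5, 6}
prior = {s: Fraction(1, 6) for s in Omega}   # uniform common prior
Pi1 = [{1, 2, 3}, {4, 5}, {6}]
Pi2 = [{1, 2}, {3, 4}, {5}, {6}]

print(sorted(cell(Pi1, 4)))               # [4, 5]
print(posterior(prior, Pi2, 3, {2, 3}))   # 1/2
```

Exact rational arithmetic via `Fraction` keeps the posteriors identical to the hand computations used throughout the chapter.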
\begin{definition}[Knowledge] \label{def:Knowledge} The set of states at which agent $i$ \emph{knows} the event $A \subseteq \Omega$ to be true is \[K_i(A)=\{\omega: \Pi_i(\omega)\subseteq A \}.\] \end{definition} \noindent No agent can think that an event is true if it is not; that is, $K_i(A) \subseteq A$ for every agent $i$ and event $A$. \begin{definition}[Mutual Knowledge] The set of states at which the event $A \subseteq \Omega$ is \emph{mutual knowledge} is \[K(A)=\bigcap_{i \in \mathcal{I}} \{\omega: \Pi_i(\omega)\subseteq A \}\] i.e., all agents know $A$ to be true. \end{definition} \begin{example} \label{ex:Partition} Suppose the set of states is $\Omega =\{1,2,3,4,5,6\}$, and there are two agents with information partitions $ \Pi_1 =\{\{1,2,3\},\{4,5\},\{6\}\}$ and $\Pi_2 = \{\{1,2\},\{3,4\},\{5\},\{6\}\}$. Let $A = \{3,4,5,6\}$. Then, the set of states at which agent 1 knows $A$ to be true is $K_1(A) = \{4,5,6\}$, the set of states at which agent 2 knows $A$ to be true is $K_2(A) = \{3,4,5,6\}$, and the set of states at which both agents know $A$ to be true is $K(A)=\{4,5,6\}$. \end{example} \begin{example} \label{ex:ExpandOmega} Let $\Omega = \{1,2\}$, $\mathcal{I}=\{1,2\}$, and $\Pi_1 = \Pi_2 = \{ \{1\},\{2\}\}$. Suppose we want to model the situation where agent 1 has uncertainty over whether agent 2's information is the complete partition $\Pi_2$ or the trivial partition $\Pi_2' = \{\{1,2\}\}$. One way to do this is to expand the state space: Define $\widetilde{\Omega}=\Omega \times \{c,t\} = \{(1,c),(1,t),(2,c),(2,t)\}$ and revise the agents' information partitions to be \begin{align*} \widetilde{\Pi}_1 & = \{\{(1,c),(1,t)\},\{(2,c),(2,t)\}\} \\ \widetilde{\Pi}_2 & =\{\{(1,c)\},\{(2,c)\},\{(1,t),(2,t)\}\} \end{align*} Then, for example, at state $(1,c)$ both agents know $\omega=1$ to be true, but agent 1 does not know whether agent 2 knows it.
\end{example} The knowledge operators $K_i$ and $K$ can be applied to events that themselves represent knowledge or mutual knowledge of a state, thus building up higher-order knowledge (agent 1 knows that agent 2 knows that\dots). \begin{exercise} Suppose there are two agents indexed by $i=1,2$. \begin{itemize} \item[(a)] Prove that $K_1(K_2(A)), K_2(K_1(A)) \subseteq K(A)$ for every event $A\subseteq \Omega$. \item[(b)] Provide an example in which $K(A) \nsubseteq K_1(K_2(A))$, demonstrating that even if both players know an event to be true, either can fail to know that the other knows it. \end{itemize} \end{exercise} \begin{exercise} Prove that $\neg K_i(\neg K_i(A)) = K_i(A)$ for every event $A\subseteq \Omega$ (where $\neg A$ denotes the complement of $A$). \end{exercise} The event $A$ is \emph{common knowledge} at state $\omega$ if all agents know it to be true, know the others to know it to be true, ad infinitum. We'll cover three equivalent ways to define this. \paragraph{The First Definition.} The most direct approach is to recursively define higher-order levels of knowledge. \begin{definition}[Common Knowledge, Definition 1] \label{def:CK1} For any event $A \subseteq \Omega$, define $ \mathscr{A}^1:=\bigcap_{i\in \mathcal{I}}K_i(A)$ to be the set of states at which every agent knows $A$, and recursively define $$\mathscr{A}^k:= \bigcap_{i\in \mathcal{I}}K_i(\mathscr{A}^{k-1})$$ for each $k \geq 2$. (For example, $\mathscr{A}^2$ is the set of states at which every agent knows that every agent knows $A$.) The set of states at which $A$ is \emph{common knowledge} is $ \mathscr{A}^\infty := \bigcap_{n\geq1} \mathscr{A}^n $. \end{definition} \begin{exercise} Consider the informational environment of Example \ref{ex:Partition}. Find the smallest value of $k$ with the property that $\mathscr{A}^{k'}=\mathscr{A}^{k'+1}$ for all $k' \geq k$.
\end{exercise} \paragraph{The Second Definition.} Alternatively, we can define common knowledge using the meet of the players' information partitions. If two partitions $\Pi$ and $\Pi'$ satisfy \[\Pi'(\omega) \subseteq \Pi(\omega) \quad \forall \omega \in \Omega\] then we say that $\Pi$ is a \emph{coarsening} of $\Pi'$ (corresponding to weakly less information at every state), and $\Pi'$ is a \emph{refinement} of $\Pi$ (corresponding to weakly more information at every state). If $\Pi'$ is a coarsening of both partitions $\Pi_1$ and $\Pi_2$, then it is a \emph{common coarsening} of $\Pi_1, \Pi_2$. \begin{definition} Let $ \Pi_1 \wedge \Pi_2 $ denote the finest common coarsening of $ \Pi_1,\Pi_2 $, i.e., the common coarsening of these partitions that is moreover a refinement of every other common coarsening of $\Pi_1, \Pi_2$. \end{definition} \begin{definition} For any sequence of information partitions $(\Pi_1, \dots, \Pi_{\vert \mathcal{I} \vert})$, let $\mathscr{P}_2 = \Pi_1 \wedge \Pi_2 $, and for each $k>2$, recursively define $\mathscr{P}_k = \mathscr{P}_{k-1} \wedge \Pi_k$. The \emph{meet} of $(\Pi_1, \dots, \Pi_{\vert \mathcal{I} \vert})$ is $ \bigwedge_{i \in \mathcal{I}} \Pi_i \equiv \mathscr{P}_{\vert \mathcal{I} \vert}$. \end{definition} \begin{exercise} Prove that for any sequence of information partitions $(\Pi_1, \dots, \Pi_{\vert \mathcal{I} \vert})$, the meet $\mathscr{P}_{\vert \mathcal{I} \vert}$ is invariant to permutations of the player indices. \end{exercise} \begin{example} \label{example:Ant} Consider Example \ref{ex:Partition}. Stack the two information partitions on top of one another, and suppose an ant is placed on one of the states in an agent's partition (see Figure \ref{fig:Ant}).
\begin{figure}[H] \begin{center} \includegraphics[scale=0.85]{partition.pdf} \caption{$\Pi_1 \wedge \Pi_2(\omega)$ includes all states that an ant seeded at $\omega$ can reach.}\label{fig:Ant} \end{center} \end{figure} \vspace{-4mm} \noindent The ant's movements obey two laws: the ant can move from side to side within an information partition element, and it can jump across the players' information partitions along the same state. The ant's full range of motion when seeded at any state $\omega$ then recovers the member of the meet that includes that state. So in this example, we have $ \Pi_1 \wedge \Pi_2 = \{\{1,2,3,4,5\},\{6\}\}.$ \end{example} \begin{exercise} Formalize the statements in the example above by proving that two points $x'$ and $x''$ belong to the same element of $\bigwedge_{i \in \mathcal{I}} \Pi_i$ if and only if there is a sequence $(x_0,x_1,x_2,\dots,x_n,x_{n+1})$, with $x_0=x'$ and $x_{n+1}=x''$, such that for every $0 \leq m \leq n$, $x_m$ and $x_{m+1}$ belong to the same element of $\Pi_i$ for some $i\in \mathcal{I}$. \end{exercise} \begin{definition}[Common Knowledge, Definition 2] \label{def:CK2} An event $ A \subseteq \Omega $ is common knowledge at state $ \omega \in \Omega$ if $\bigwedge_{i \in \mathcal{I}} \Pi_i(\omega) \subseteq A$. \end{definition} \begin{remark} It is immediate that the set $\Omega$ is common knowledge at every $\omega \in \Omega$. \end{remark} \paragraph{The Third Definition.} Our final definition of common knowledge starts from the definition of an \emph{evident} event which, upon its occurrence, is known to all agents. \begin{definition}[Evident Events] \label{def:Evident} The event $A \subseteq \Omega$ is \emph{evident} (or \emph{public}) if $A \subseteq K(A)$. \end{definition} \begin{definition}[Common Knowledge, Definition 3] \label{def:CK3} The event $A \subseteq \Omega$ is common knowledge at $\omega$ if and only if there is an evident event $E$ such that $\omega \in E$ and $E \subseteq K(A)$.
\end{definition} \begin{exercise} Let $\mathcal{I}=\{1,2\}$. Prove that an event $E\subseteq \Omega$ is evident if and only if it is a union of elements of the meet $\Pi_1 \wedge \Pi_2$. \end{exercise} These three definitions of common knowledge are equivalent (see for example \citet{MondererSamet}). \section{Agreeing to Disagree} \label{sec:AgreetoDisagree} Often we are interested not only in agents' knowledge (which depends only on the agents' information partitions) but also in agents' posterior beliefs (which depend additionally on the prior $P$). At any state $\omega$ and for any event $A \subseteq \Omega$, agent $i$'s posterior probability of event $A$ is pinned down by Bayes' rule (Section \ref{sec:Bayes}): \[P(A \mid \Pi_i(\omega)) = \frac{P(A\cap \Pi_i(\omega))}{P(\Pi_i(\omega))}\] Our assumption that every partition element has strictly positive prior probability ensures that this expression is well-defined. One event of interest is the one in which a player's posterior belief takes on a particular value. Fixing an event $A$ and a number $p\in [0,1]$, define $A_p= \{ \omega : P(A \mid \Pi_i(\omega)) = p\}$ to be the set of states at which player $i$ assigns posterior probability $p$ to the event $A$ being true. If player 1 announces that he assigns probability $p$ to $A$, then all other agents know that the state must belong to $A_p$. \begin{example} Consider the informational environment of Example \ref{ex:Partition} with $A = \{2,3\}$. Agent 2 has four partition elements, $\{1,2\}$, $\{3,4\}$, $\{5\}$, $\{6\}$, and assigns to $A$ a posterior probability of $1/2$, $1/2$, $0$, and $0$ (respectively) on these partition elements. So the set of states $A_{1/2}$ at which agent 2 assigns probability $1/2$ to event $A$ being true, is $A_{1/2} = \{1,2,3,4\}$. 
\end{example} The following theorem shows that whenever players' posterior beliefs about an event are common knowledge (e.g., because players have publicly announced these beliefs), then these posterior beliefs must be identical. So disagreement cannot be sustained whenever players' beliefs are commonly known. \begin{theorem}[\citet{Aumann}] \label{thm:Aumann} Suppose $\mathcal{I}=\{1,2\}$. Fix any state $ \omega\in\Omega $ and event $A \subseteq \Omega$. If it is common knowledge at $ \omega $ that agent 1 assigns (posterior) probability $q_1$ to event $A$, while agent 2 assigns (posterior) probability $q_2$ to the same event, then $q_1=q_2$. \end{theorem} \noindent The result is stated for two agents, but the proof below directly extends to an arbitrary finite number of players. \bigskip \begin{proof} Let $\mathbf{P}$ be the element of $\Pi_1\wedge\Pi_2 $ that contains $ \omega $. Then we can write $ \mathbf{P} = \cup_k \mathcal{P}^k $ where the $ \mathcal{P}^k $ are disjoint elements of $ \Pi_1 $. Since the event $\{$ agent 1's posterior belief is $ q_1 \}$ is common knowledge at $ \omega $, agent 1 must assign probability $ q_1$ to event $A$ at every partition element $ \mathcal{P}^k $. So $ q_1=P(A\cap \mathcal{P}^k)/P(\mathcal{P}^k) $ for each $ k $. This implies $q_1 \cdot P(\mathcal{P}^k) = P(A\cap \mathcal{P}^k) $. Summing over each of player 1's partition elements, we have $ q_1 \sum_k P(\mathcal{P}^k)=\sum_k P(A\cap \mathcal{P}^k) $. Thus $ q_1\cdot P(\mathbf{P}) = P(A \cap \mathbf{P})$. But repeating the same line of logic for player 2, we obtain $ q_2\cdot P(\mathbf{P}) = P(A\cap \mathbf{P}).$ So it must be that $q_1=q_2$. \end{proof} \medskip The following example explains why it is important that players' posterior beliefs are common knowledge and not simply mutual knowledge. \begin{example} Let $ \Omega = \{1,2,3,4\} $ with a uniform prior, and define $ \Pi_1 =\{\{1,2\},\{3,4\}\} $ and $ \Pi_2 =\{\{1,2,3\},\{4\}\} $.
Choose $A=\{1,4\} $ and $ \omega=2.$ Then agent 1 assigns posterior probability $1/2$ to $A$ while agent 2 assigns posterior probability $1/3$. Moreover, each agent knows the other's posterior probability. But Theorem \ref{thm:Aumann} is not violated: agent $ 2 $ does not know that agent $ 1 $ knows agent 2's posterior probability to be $\frac13 $, so posterior beliefs are mutual knowledge but not common knowledge. \end{example} The starting hypothesis of Theorem \ref{thm:Aumann}---that individuals have common knowledge of one another's beliefs---is strong. \citet{GeanakoplosPolemarchakis} show that the same result obtains under a more realistic process: Communication of posterior beliefs converges to common knowledge of identical posterior beliefs, where this convergence occurs in fewer than $ n_1+n_2 $ steps with $n_i$ the size of agent $i$'s partition. \begin{example} Let $\Omega =\{1,2,3,4,5,6,7,8,9\} $ with all states equally likely. There are two agents, Bob and Carly, with information partitions \[\Pi_B =\{\{1,2,3\},\{4,5,6\},\{7,8,9\}\}\] and \[\Pi_C =\{\{1,2,3,4\},\{5,6,7,8\},\{9\}\}.\] Suppose the true state is $\omega=1$, and the agents repeatedly communicate their beliefs about the event $A=\{3,4\}$. \textbf{Round 1:} Bob's information partition reveals to him that the state belongs to $ \{1,2,3\} $, so he assigns posterior probability $1/3$ to the event $A$. Carly's information partition reveals to her that the state belongs to $ \{1,2,3,4\} $, so she assigns posterior probability $1/2$ to the event $A$. The two agents announce these posterior beliefs. \textbf{Round 2:} That Bob assigns probability $1/3$ to $A$ reveals to Carly that Bob was either informed that the state belongs to $\{1,2,3\}$ or informed that the state belongs to $\{4,5,6\}$.
But Carly already knew in round 1 that these were the two partition elements that Bob might have been informed of (since she knew the state to be either 1, 2, 3, or 4), and so there is no information for her in this announcement. That Carly assigns probability $1/2$ to $A$ reveals to Bob that Carly observed $\{1,2,3,4\}$, but again Bob knew this in round 1. So both agents' posterior beliefs are unchanged. They again announce $1/3$ and $1/2$. \textbf{Round 3:} And now something interesting happens. That Bob sticks to his original belief of $1/3$ tells Carly that Bob must have observed $\{1,2,3\}$. If instead Bob observed $ \{4,5,6\} $, then upon hearing that Carly's belief was $1/2$ (and thus learning that Carly observed $ \{1,2,3,4\}$), Bob would have deduced that the state was 4 with certainty, and hence revised his posterior belief of $A$ to 1. So Carly now knows that the state is in $ \{1,2,3\}$ and shares Bob's posterior belief, $ \frac 13 $. The two agents' beliefs have converged, and it is straightforward to show that these beliefs will not move after subsequent communication. \end{example} Although agents' beliefs must converge, the belief that they converge to need not be the belief that agents would have held had they pooled their information: \begin{example} Let $ \Omega = \{1,2,3,4\} $ with each state equally likely. Agents' partitions are given by $ \Pi_1 =\{\{1,2\},\{3,4\}\} $ and $ \Pi_2 =\{\{1,3\},\{2,4\}\} $. Let $ \omega = 1 $ and $ A=\{1,4\} $. Both posteriors are $ 1/2 $ and the process of belief revision converges in one step. But had agents shared their information, they would have learned that $\omega \in \{1,2\}$ and also $\omega \in \{1,3\}$, leading to a (common) posterior belief that $A$ is true with probability 1. \end{example} \section{The Email Game} \label{sec:emailgame} Common knowledge assumptions appear frequently in analyses of strategic environments; for example, payoffs are assumed to be common knowledge in any complete-information game.
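The announcement dynamics of Section \ref{sec:AgreetoDisagree} are mechanical enough to simulate. The following Python sketch (our own construction; the function names are not from the text) maintains the public event generated by past announcements and iterates until no announcement reveals anything new:

```python
from fractions import Fraction

def communicate(part1, part2, prior, A, omega):
    """Iterated public announcement of posteriors about event A, in the
    spirit of Geanakoplos and Polemarchakis: each round, both agents
    announce P(A | cell & public event), and the set of states consistent
    with those announcements becomes public. Returns the announcements."""
    def cell(partition, w):
        return next(c for c in partition if w in c)

    def posterior(info):
        info = set(info)
        return (sum(prior[w] for w in info & set(A))
                / sum(prior[w] for w in info))

    F = set(prior)                       # public event: initially all states
    history = []
    while True:
        q1 = posterior(cell(part1, omega) & F)
        q2 = posterior(cell(part2, omega) & F)
        history.append((q1, q2))
        refined = {w for w in F
                   if posterior(cell(part1, w) & F) == q1
                   and posterior(cell(part2, w) & F) == q2}
        if refined == F:                 # announcements reveal nothing new
            return history
        F = refined

# Bob and Carly's example: the announcements are (1/3, 1/2), (1/3, 1/2),
# and then (1/3, 1/3), at which point beliefs agree.
prior = {w: Fraction(1, 9) for w in range(1, 10)}
history = communicate([{1, 2, 3}, {4, 5, 6}, {7, 8, 9}],
                      [{1, 2, 3, 4}, {5, 6, 7, 8}, {9}],
                      prior, {3, 4}, 1)
```

The run converges after three announcements, matching the example; note that no announcement ever moves Bob's belief, exactly as in the text.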
Do strategic predictions made under an assumption of common knowledge approximately hold when we relax the assumption of common knowledge? \citet{Rubinstein}'s email game shows that for one formalism of what ``almost common knowledge'' means, the answer is no: Strategic predictions can change discontinuously when we move from common knowledge to almost common knowledge. In this game, two agents each choose an action from $ \{A,B\}$. There are two possible payoff matrices indexed to $ \{a,b\} $ (depicted below with $a$ on the left and $b$ on the right). The agents share a common prior that assigns probability $ 1-p>\frac 12$ to the matrix indexed to $a$. \\[-12mm] \begin{center} \[\begin{array}{ccc} & A & B\\ A& M,M & 0,-L\\ B&-L,0 & 0,0 \end{array} \hspace{5em} \begin{array}{ccc} & A & B\\ A& 0,0 & 0,-L\\ B&-L,0 & M,M \end{array}\] \end{center} We assume throughout that $ L>M>0 $. Thus $ (A,A) $ yields higher payoffs for both agents when the payoff parameter is $ a $ while $ (B,B) $ yields higher payoffs when the payoff parameter is $ b $. The action $ A $ is ``safe,'' in that it never yields a negative payoff. \medskip \textbf{Communication Protocol.} Both players have an automated email server, which is the only means by which the players can communicate. Agent $ 1 $ is informed of the payoff parameter. If (and only if) the parameter is $ b $, agent $ 1 $'s email server automatically sends an email to agent $ 2 $ announcing that the parameter is $b$. All emails are independently lost with probability $ \eps>0 $, so the agents' email servers are set up to automatically send back confirmations that emails have been received, and confirmations of confirmations, etc. Each agent $i$'s type $T_i$ is the number of emails that agent $i$'s computer sends, which is privately known to agent $i$. In the special case $ T_1=T_2=\infty$, there is common knowledge that the parameter is $b$.
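The protocol is easy to simulate. A minimal sketch (our own code; the parameter is $b$ with probability $p$, and each email is lost independently with probability $\eps$) generates the pair of types $(T_1,T_2)$:

```python
import random

def run_protocol(p, eps, rng):
    """One play of the email protocol. Returns (parameter, T1, T2),
    where T_i is the number of emails agent i's server sends."""
    if rng.random() >= p:
        return ("a", 0, 0)           # parameter a: no email is ever sent
    t = [0, 0]
    sender = 0                       # agent 1's server sends the first email
    while True:
        t[sender] += 1
        if rng.random() < eps:       # this email is lost; the chain stops
            return ("b", t[0], t[1])
        sender = 1 - sender          # the other server confirms receipt
```

Every realized outcome has the form $(a,0,0)$, $(b,t,t-1)$, or $(b,t,t)$, which is exactly the state space of Remark \ref{remark:Partition} below.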
But if for example $ T_1=2 $, then agent $ 1 $ knows the parameter is $ b $, and agent 1 knows that agent $ 2 $ knows that the parameter is $ b $, but agent $ 1 $ does not know that agent $ 2 $ knows that agent $ 1 $ knows that agent 2 knows that the parameter is $ b $. In general, so long as $T_1$ and $T_2$ are finite, then higher-order knowledge of parameter $b$ must break down at some stage. \begin{remark} \label{remark:Partition} In the partitional framework of Section \ref{sec:Partition}, we would model this information environment as follows: The state space is \[ \Omega = \left\{(a,0,0),(b,1,0),(b,1,1),(b,2,1),(b,2,2),\dots \right\}\] and agents' information partitions are given by \begin{align*} \Pi_1 & =\left\{\{(a,0,0)\},\{(b,1,0),(b,1,1)\},\{(b,2,1),(b,2,2)\},\dots\right\} \\ \Pi_2 & =\left\{\{(a,0,0),(b,1,0)\},\{(b,1,1),(b,2,1)\},\dots\right\} \end{align*} where, for example, $T_1=0$ reveals to player 1 the partition element $\{(a,0,0)\}$, while $T_2=0$ reveals to player 2 the partition element $\{(a,0,0),(b,1,0)\}$. \end{remark} \begin{proposition} There is a unique Bayesian Nash equilibrium in which agent $ 1 $ plays $ A $ when the payoff parameter is $ a $. In this equilibrium, both agents play $ A $ independently of the number of messages sent. \end{proposition} \begin{proof} Let $s_i : T_i \rightarrow \Delta(\{A,B\})$ denote player $i$'s equilibrium strategy. By assumption, $ s_1(0)=A $. We will show that also $s_2(0)=A$. Agent 2 of type $T_2=0$ knows that either agent $ 1 $'s first message was never sent (the state is $(a,0,0)$), or agent $ 1 $'s first message was sent but lost (the state is $(b,1,0)$). Unconditionally, the probabilities of these states are $(1-p)$ and $p\eps$. Conditional on $T_2=0$, agent $ 2 $ assigns a posterior probability of $\frac{1-p}{1-p+p\eps}$ to $(a,0,0)$, a posterior probability of $\frac{p\eps}{1-p+p\eps}$ to $(b,1,0)$ and zero probability to all other states. 
So agent 2's expected payoff from playing $ A $ is at least \begin{equation} \label{eq:payoffA} M \cdot \left(\frac{1-p}{1-p+p\eps}\right) + 0\cdot \left(\frac{p\eps}{1-p+p\eps}\right) \end{equation} while agent 2's expected payoff from playing $ B $ is no more than \begin{equation} \label{eq:payoffB} (-L) \cdot \left(\frac{1-p}{1-p+p\eps}\right) + M \cdot \left(\frac{p\eps}{1-p+p\eps}\right). \end{equation} Since $ 1-p>\frac 12 $ and $ L>M$ by assumption, (\ref{eq:payoffA}) strictly exceeds (\ref{eq:payoffB}), and so agent 2's strategy must satisfy $s_2(0)=A $. Now suppose $ s_i(T_i)=A $ for $ i=1,2 $ and all $ T_i<t $. We'll argue that $s_1(t)=s_2(t)=A$. Suppose first that agent 1's computer sends $t$ emails exactly, i.e., $T_1=t$. Since agent 1's computer did not send a $(t+1)$-th email, it must either be that agent $ 1 $'s $ t $-th message was lost (the state is $(b,t,t-1)$), or that agent $ 1 $'s $ t $-th message was received, but its confirmation was lost (the state is $(b,t,t)$). Agent 1's posterior belief conditional on $T_1=t$ then assigns probability $ z := \frac{\eps}{\eps+(1-\eps)\eps} >\frac 12 $ to $(b,t,t-1)$ and probability $1-z$ to $(b,t,t)$. So the expected payoff to playing $B$ is at most $z(-L)+(1-z)M$, which is negative since $z>\frac 12$ and $L>M$, while the payoff to playing $A$ is zero. We conclude that agent 1's strategy must satisfy $ s_1(t)=A $, with nearly identical reasoning yielding $s_2(t) = A$. \end{proof} \medskip This result shows a sharp discontinuity in strategic predictions at common knowledge. That is, $(B,B)$ is an equilibrium when agents have common knowledge of the payoff parameter $b$, but fails to be an equilibrium when players have knowledge of $b$ to arbitrarily high (finite) orders. Whether this result is surprising depends on how natural we consider the relaxation of common knowledge to be. \citet{Rubinstein} argues that ``high $ T_i $'' is intuitively like common knowledge.
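The two inequalities that drive the proof can be verified numerically. A minimal sketch (the parameter grids are illustrative choices of ours, satisfying $1-p>\frac12$ and $L>M>0$):

```python
def type0_payoffs(p, eps, L, M):
    """Agent 2's type-0 comparison: a lower bound on the payoff from A
    and an upper bound on the payoff from B, as in the proof."""
    w = (1 - p) / (1 - p + p * eps)     # posterior weight on state (a,0,0)
    return M * w, -L * w + M * (1 - w)

def induction_bound_B(eps, L, M):
    """Upper bound on the payoff from B in the induction step."""
    z = eps / (eps + (1 - eps) * eps)   # = 1/(2 - eps) > 1/2
    return z * (-L) + (1 - z) * M

for eps in (0.01, 0.1, 0.4):
    for p in (0.05, 0.25, 0.49):
        for L, M in ((2.0, 1.0), (10.0, 0.5)):
            lo_A, hi_B = type0_payoffs(p, eps, L, M)
            assert lo_A > hi_B                       # type 0 strictly prefers A
            assert induction_bound_B(eps, L, M) < 0  # B is unprofitable
```

Both checks hold on the whole grid, consistent with the claim that the conclusion does not depend on how small $\eps$ is.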
Another view is that these are substantially different, since for arbitrarily small but strictly positive $\eps$ the informational model is the one described in Remark \ref{remark:Partition}, but for $\eps=0$ (corresponding to common knowledge of the state) the set of states with positive ex-ante probability is $\Omega = \{(a,0,0), (b,\infty,\infty)\}$ and each agent's information partition separates these two states, so that each agent knows the state. So there is a discontinuity in the informational environments as $\eps \rightarrow 0$, and in this sense small $\eps$ may be quite unlike $\eps=0$. \section{(Common) $p$-Belief} \label{sec:pBelief} We now consider an alternative approach to formalizing almost common knowledge, which defines common ``almost-knowledge'' in contrast to the above ``almost-common'' knowledge. \begin{definition} \label{def:pBelief} For any $p \in [0,1]$, say that agent $i$ \emph{$ p$-believes} $ A $ at $ \omega $ if $ P(A\mid \Pi_i(\omega))\geq p $. The set of states at which agent $ i $ $ p $-believes $ A $ is $$ \mathcal{B}_i^p(A) = \{\omega: P(A\mid \Pi_i(\omega))\geq p\}.$$ \end{definition} \begin{remark} Is the case $ p=1 $ equivalent to knowledge? Suppose $\Omega = \{1,2,3\}$ and the prior is $P = (0,1/2,1/2)$. Agent 1's partition is $\{\{1,2\},\{3\}\}$ while agent 2's partition is $\{\{1\},\{2\},\{3\}\}$. The state is $\omega=2$. Then according to Definition \ref{def:Knowledge}, agent 2 knows $\{2\}$ but agent 1 does not, while according to Definition \ref{def:pBelief}, both agents have $1$-belief of $\{2\}$. Whether knowledge and 1-belief represent distinct modes of understanding is an interesting philosophical question, but we will not have more to say on it here. \end{remark} The following construction of \emph{common $p$-belief}, due to \citet{MondererSamet}, is parallel to Definition \ref{def:CK1} for common knowledge.
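Definition \ref{def:pBelief} and the knowledge operator are simple to implement on a finite state space. A sketch reproducing the remark's example (our own code; cells of zero prior mass are skipped, since conditioning on them is undefined):

```python
from fractions import Fraction

def p_belief(A, partition, prior, p):
    """States at which the agent p-believes A: P(A | Pi(w)) >= p.
    Cells with zero prior mass are skipped."""
    A, states = set(A), set()
    for cell in partition:
        mass = sum(prior[w] for w in cell)
        if mass > 0 and sum(prior[w] for w in cell if w in A) / mass >= p:
            states |= set(cell)
    return states

def knows(A, partition):
    """States at which the agent knows A: Pi(w) is a subset of A."""
    A = set(A)
    return {w for cell in partition if set(cell) <= A for w in cell}

prior = {1: Fraction(0), 2: Fraction(1, 2), 3: Fraction(1, 2)}
Pi1 = [{1, 2}, {3}]                  # agent 1's partition
Pi2 = [{1}, {2}, {3}]                # agent 2's partition
```

At $\omega=2$ both agents $1$-believe $\{2\}$, but only agent 2 knows it, matching the remark.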
\begin{definition}[Common $p$-Belief] \label{def:CommonpBelief1} For any $p\in [0,1]$ and event $ A \subseteq \Omega$, define $ \mathscr{A}^1 = \bigcap_{i\in \mathcal{I}} \mathcal{B}_i^p(A)$ to be the set of states at which every agent $p$-believes $A$ to be true, and recursively define $\mathscr{A}^k = \bigcap_{i\in \mathcal{I}} \mathcal{B}^p_i(\mathscr{A}^{k-1})$ for every $k\geq 2$. Then $A$ is common $p$-belief at the set of states $\mathscr{A}^\infty = \cap_{n\geq 1} \mathscr{A}^n$. \end{definition} We can also define common $p$-belief by generalizing the definition of an evident event (Definition \ref{def:Evident}) to events that are evident $p$-belief. \begin{definition} For any $p\in [0,1]$, the event $A \subseteq \Omega$ is \emph{evident $p$-belief} if $A \subseteq \bigcap_{i \in \mathcal{I}}\mathcal{B}_i^p(A).$ \end{definition} \begin{definition} \label{def:CommonpBelief2} For any $p\in [0,1]$, the event $A \subseteq \Omega$ is common $p$-belief at $\omega$ if there exists an evident $p$-belief event $E$ such that \[\omega \in E \subseteq \bigcap_{i \in \mathcal{I}} \mathcal{B}_i^p(A).\] \end{definition} Definitions \ref{def:CommonpBelief1} and \ref{def:CommonpBelief2} are introduced in \citet{MondererSamet} and shown to be equivalent. \begin{exercise} Consider the email game of \citet{Rubinstein}. Let $P$ denote the common prior on $\Omega$ (as defined in Remark \ref{remark:Partition}), and define $\mathscr{C}^p$ to be the event that agents have common $p$-belief in parameter $b$. For each $\varepsilon \geq 0$, let \[ \overline{p}(\varepsilon) = \sup_{p \in [0,1]} \{p: P(\mathscr{C}^p) > 0\}\] be the supremum of the set of values of $p$ such that $\mathscr{C}^p$ has positive ex-ante probability. Is $\overline{p}(0)$ equal to the limit of $\overline{p}(\varepsilon)$ as $\varepsilon \rightarrow 0$? Discuss your answer.
\end{exercise} \section{General State Spaces} \label{sec:General} To show that the preceding insights do not require the assumption of a finite state space, we now briefly discuss two generalizations of these ideas. In each case, we begin with a probability space $(\Omega, \Sigma, P)$ where $\Omega$ is a set of states endowed with $\sigma$-algebra $\Sigma$, and $P: \Sigma \rightarrow [0,1]$ is a probability measure. \paragraph{The first generalization.} Let each information partition $\Pi_i$ be a partition of $\Omega$, where we require that each partition element is $\Sigma$-measurable and has strictly positive measure under $P$ (see e.g., \citet{MondererSamet}). All of the above definitions and proofs generalize as stated. \paragraph{The second generalization.} Alternatively, we might model each agent $i$'s information as a $\sigma$-algebra $\Pi_i$, where we assume that $\Pi_i \subseteq \Sigma$ for every $i \in \mathcal{I}$. One foundation for this approach (which we will examine in detail in subsequent chapters) is that each agent $i$ privately observes a random variable $X_i: \Omega \rightarrow \mathbb{R}$ that is measurable with respect to $\Sigma$. In this case, each agent $i$'s $\sigma$-algebra is $\sigma(X_i)$, the $\sigma$-algebra generated by $X_i$, which is indeed a sub-$\sigma$-algebra of $\Sigma$. The definition of knowledge can be extended as follows. \begin{definition} \label{def:KnowledgeGeneral} Agent $i$ \emph{knows} the event $A \subseteq \Omega$ to be true at $\omega$ if there exists some $B \in \Pi_i$ such that $\omega \in B \subseteq A$. \end{definition} Common knowledge cannot in general be iteratively constructed (\`{a} la Definition \ref{def:CK1}) using this definition of $K_i$, since the set of states at which agent $i$ knows $A$ to be true may not be $\Sigma$-measurable.
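In the finite case, the $\sigma$-algebra formalism reduces to the partitional one: the $\sigma$-algebra generated by a partition consists of all unions of its cells, and Definition \ref{def:KnowledgeGeneral} agrees with knowing via partition cells. A minimal sketch (our own code):

```python
from itertools import chain, combinations

def sigma_algebra(partition):
    """All unions of cells: the sigma-algebra generated by a finite
    partition (the empty union contributes the empty set)."""
    cells = [frozenset(c) for c in partition]
    return {frozenset(chain.from_iterable(combo))
            for r in range(len(cells) + 1)
            for combo in combinations(cells, r)}

def knows_general(omega, A, algebra):
    """Definition: the agent knows A at omega iff some B in the agent's
    sigma-algebra satisfies omega in B and B a subset of A."""
    A = frozenset(A)
    return any(omega in B and B <= A for B in algebra)
```

For the partition $\{\{1,2\},\{3,4\}\}$ the generated $\sigma$-algebra has four events, and the agent knows $\{3,4,5\}$ at $\omega=3$ but does not know $\{1,3\}$ at $\omega=1$.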
Nevertheless, similar to Definition \ref{def:CK2}, we can define $\bigwedge_{i \in \mathcal{I}} \Pi_i$ to be the finest common coarsening of the $\sigma$-algebras $\Pi_1, \dots, \Pi_n$, and say that an event $A$ is common knowledge at $\omega$ if there is an element $B$ of $\bigwedge_{i \in \mathcal{I}} \Pi_i$ such that $\omega \in B \subseteq A$. We can also generalize Definition \ref{def:CK3} as follows: \begin{definition} The event $A \in \Sigma$ is \emph{evident} if $A \in \Pi_i$ for every $i \in \mathcal{I}$, i.e., $A$ belongs to every agent's $\sigma$-algebra. \end{definition} \begin{definition} The event $A \in \Sigma$ is common knowledge at state $\omega$ if there is an evident event $E$ such that $\omega \in E$ and $E \subseteq A$. \end{definition} Theorem \ref{thm:Aumann} can also be generalized, although the previous proof does not extend (for example, there is no longer guaranteed to be a unique element of $\Pi_1 \wedge \Pi_2$ that contains $\omega$). \begin{proposition} \label{prop:AumannGeneral} Let $X \in \mathcal{L}^1(\Omega,\Sigma,P)$, and define $Y = \mathbb{E}(X \mid \Pi_1)$, $Z=\mathbb{E}(X \mid \Pi_2)$. If it is common knowledge that $Y=y$ and $Z=z$ at a state $\omega$ with strictly positive probability, then it must be that $y=z$. \end{proposition} \begin{proof} If it is common knowledge that $Y=y$ and $Z=z$, there must exist an evident event $E \in \Pi_1 \cap \Pi_2$ with $\omega \in E$, where $Y$ takes the constant value $y$ on $E$, and $Z$ takes the constant value $z$ on $E$. So \begin{align*} y \cdot P(E) & = \mathbb{E}(Y I_E) \\ & = \mathbb{E}(X I_E) \\ & = \mathbb{E}(Z I_E) = z \cdot P(E) \end{align*} using in the second and third equalities that $Y$ and $Z$ are conditional expectations of $X$. Since $P(E) > 0$ (because $E$ contains the state $\omega$, which has strictly positive probability), it follows that $y=z$ as desired.
\end{proof} \medskip This result is in fact more general than Theorem \ref{thm:Aumann}, nesting the previous result as a special case when we choose $X$ to be an indicator function on some set. \begin{exercise} Generalize Proposition \ref{prop:AumannGeneral} by demonstrating that the conclusion still holds if we assume that there is a measurable set of states $B \subseteq \Omega$ with strictly positive probability, where at every $\omega \in B$ it is common knowledge that $Y=y$ and $Z=z$. \end{exercise} \section{Additional Exercises} \begin{exercise} Two spies in an underground organization are stationed at remote locations. Each spy privately observes whether the coast is clear at their location. The spies share a common prior that the coast is clear at each location independently with probability $1/2$. \textbf{Communication protocol.} The spies communicate by email with a third-party electronic server at their home base. If and only if the coast is clear at a spy's location, that spy's computer will automatically send a message to the home base with the information that the coast is clear. If the home base electronic server receives information from both spies indicating that the coast is clear, then it will automatically send a message to both spies indicating that it has received both messages. (Otherwise, it will send no messages.) As these are dangerous times, each message has only a $1-\varepsilon$ chance of being received (again independently of everything else). If either spy receives a message from the home base, that spy will send a reply to the home base confirming receipt. The reply is lost with probability $\varepsilon$, independently of everything that's happened before. So on and so forth. Everything stated above is common knowledge. Each spy observes the number of messages he has sent, and chooses an action in $\{A,B\}$.
If the coast is clear at both locations, then payoffs are given by the \textbf{right} matrix below, and otherwise payoffs are given by the \textbf{left} matrix below. \[\begin{array}{ccc} & A & B\\ A& M,M & 0,-L\\ B&-L,0 & 0,0 \end{array} \hspace{5em} \begin{array}{ccc} & A & B\\ A& 0,0 & 0,-L\\ B&-L,0 & M,M \end{array}\] The payoff parameters satisfy $L > 3M > 0$. \begin{itemize} \item[(a)] Prove the following analogue of \citet{Rubinstein}'s result: Let $T_1 = \mathbb{Z}_+$ and $T_2=\mathbb{Z}_+$ denote the two players' type spaces. There is a unique pure-strategy equilibrium in which both players choose $A$ when the coast is \textbf{not} clear at their location, i.e. $s_1(0)=s_2(0)=A$. In this equilibrium, players choose $A$ for any number of messages sent, i.e. $s_i(t)=A$ for both players $i$ and all $t\in T_i$. \item[(b)] Suppose instead that $L=2$ while $M=1$, and demonstrate that the result in Part (a) no longer holds by finding some $\varepsilon>0$ and a pair of strategies $(s_1,s_2)$ that constitute a pure-strategy Bayesian Nash equilibrium, where $s_1(0)=s_2(0)=A$ and $s_i(t)=B$ for some player $i$ and type $t\in T_i$. \end{itemize} \end{exercise} \begin{exercise} Let $X \in \mathcal{L}^1(\Omega,\Sigma,P)$, and define $Y = \mathbb{E}(X \mid \Pi_1)$, $Z=\mathbb{E}(X \mid \Pi_2)$. Prove that if it is common knowledge that $Y \in A$ and $Z \in B$ at a state $\omega$ with strictly positive probability, then it must be that $A \cap B \neq \emptyset$. \end{exercise} \chapter{Bayesian Updating and Beliefs} Section \ref{sec:Preliminaries} introduces the canonical Bayesian framework and the definition of a signal. Section \ref{sec:Bayes} reviews Bayes' rule and key properties of Bayesian posteriors. Section \ref{sec:Gaussian} provides closed-form expressions for posterior beliefs in the special case of Bayesian updating to normal signals, with applications.
\section{Preliminaries} \label{sec:Preliminaries} There is a set of \emph{parameters} $\Theta$ endowed with a $\sigma$-algebra $\Sigma$. An agent has a \emph{prior} $p \in \Delta(\Theta)$, where $\Delta(\Theta)$ denotes the set of $\Sigma$-measurable probability measures on $\Theta$. The prior describes the agent's belief at an ``ex-ante'' stage in the absence of any information, where what is ex-ante is understood in the context of a specific model. The focus of this chapter is on the object that we will call an \emph{information structure}, an \emph{experiment}, or a \emph{signal}, which can be formalized in any of several ways: \begin{itemize} \item[(a)] We can define the signal to be a mapping $\sigma: \Theta \rightarrow \Delta(S)$ from the set of parameters to distributions over a set of signal realizations $S$. See for example \citet{henrique}. \item[(b)] We can define a signal to be an $(S,\mathcal{S})$-valued random variable $X$ on an underlying probability space $(\Omega,\Sigma,P)$, where $\Omega = \Theta \times E$ for some set $E$. For example, we might define the signal to be $X=\theta + \varepsilon$ for an $E$-valued noise term $\varepsilon$ that is independent of $\theta$, as we do in Section \ref{sec:Gaussian}. \item[(c)] We can define a signal $S$ to be a finite partition of $\Omega = \Theta \times [0,1]$, whose elements are non-empty and measurable with respect to the Lebesgue sigma-algebra on $\Omega$. Conditional on parameter $\theta$, the probability of observing $s \in S$ is the Lebesgue measure of $\{x \in [0,1] \mid (\theta,x) \in s\}$. See for example \citet{FrankelKamenica}. \end{itemize} \begin{remark} It is straightforward to see that the first two formalisms nest one another when all the relevant sets are finite. Suppose we are given a prior $p \in \Delta(\Theta)$ and a signal $\sigma: \Theta \rightarrow \Delta(S)$. Define the expanded state space to be $\Omega = \Theta \times S$ and let $P(\theta,s) = p(\theta) \sigma(s \mid \theta)$.
Then the random variable $X: \Omega \rightarrow S$ satisfying $X(\theta,s)=s$ is equivalent to $\sigma$ in the sense that posterior beliefs about $\theta$ are the same whether we condition on the realization of $X$ or the realization of $\sigma(\theta)$. In the other direction, if we start with a random variable $X: \Theta \times E \rightarrow S$ and a distribution $P\in \Delta(\Theta \times E)$, then we can define $\sigma: \Theta \rightarrow \Delta(S)$ to satisfy $\sigma(s \mid \theta) = P(X^{-1}(s) \mid \theta)$. The formalism in (c) is a special case of (b), where $E=[0,1]$, the random variable $X: \Omega \rightarrow S$ maps each $\omega$ into the partition element of $S$ to which it belongs, and the probability distribution $P$ is the Lebesgue measure. \end{remark} Example families of signals include: \begin{example}[\citet{Aumann}'s Partitional Information Structures] For each agent $i$, let $\Pi_i$ be a finite partition of $\Theta$ into measurable elements of strictly positive measure. Index these partition elements to $S=\{1,\dots,n\}$ where $n$ is the size of $\Pi_i$. Then let $\sigma$ map each $\theta$ with probability 1 to the index of the partition element to which $\theta$ belongs. \end{example} \begin{example}[Finite Information Structures] Suppose $ \lvert \Theta \rvert, \lvert S \rvert<\infty $. Then we can express $\sigma$ as a $\vert \Theta \vert \times \vert S \vert$ matrix where (1) all entries are nonnegative, and (2) all rows sum to 1. For example, suppose a drug is either good (g) or bad (b). The drug is administered to a patient who is either cured (C) or not (N). The patient is cured with probability $3/4$ if the drug is good and with probability $1/4$ if the drug is bad. Then $ \Theta=\{g,b\} $ and $ S=\{C,N\} $ and the information structure is \\[-4mm] \[\begin{array}{ccc} & C & N \\[-1mm] g & 3/4 & 1/4 \\ b & 1/4 & 3/4\end{array}\] with each row depicting the probability over the signal realizations in the associated state.
\end{example} \begin{example}[Gaussian Information] \label{ex:Gaussian} The signal is $X = \theta + \eps$, where $\theta \sim \mathcal{N}(\mu_\theta,\sigma_\theta^2)$, $\eps \sim \mathcal{N}\left(0,\sigma_\eps^2\right)$, and $\theta \perp \!\!\! \perp \eps$. \end{example} \section{Posterior Beliefs} \label{sec:Bayes} \subsection{Bayes' Rule} \label{sec:BayesRule} The agent updates his prior to the realization of the signal using Bayes' rule. \begin{definition}[Bayes' Rule, Finite Case] Suppose $\vert \Theta \vert < \infty$. Fix any distribution $P \in \Delta(\Theta)$ and any events $A, B \subseteq \Theta$ where $P(A),P(B) >0$. Then \begin{equation} \label{eq:Bayes} P(A \mid B) = \frac{P(B \mid A) P(A)}{P(B)}. \end{equation} \end{definition} \begin{remark} Rather than memorizing this formula, it is easier to remember that by the definition of conditional probability, we can rewrite $P(A \cap B)$ as $P(A \mid B) P(B) $ or as $P(B \mid A) P(A)$, so \[P(A \mid B) P(B) = P(B \mid A) P(A).\] Dividing through by $P(B)$ yields (\ref{eq:Bayes}). \end{remark} \begin{remark} Applying (\ref{eq:Bayes}) twice for the pairs of events $(A,E)$ and $(B,E)$, we have \[\frac{P(A \mid E)}{P(B \mid E)} = \frac{P(E \mid A)}{P(E \mid B)} \cdot \frac{P(A)}{P(B)}\] so the relative conditional probabilities of events $A$ and $B$ are determined by their relative probabilities under the prior, $\frac{P(A)}{P(B)}$, and the \emph{likelihood ratio} of $E$ under $A$ and $B$, $\frac{P(E \mid A)}{P(E \mid B)}$. \emph{Base-rate neglect} is the tendency to falsely equate $\frac{P(A \mid E)}{P(B \mid E)}$ with $\frac{P(E \mid A)}{P(E \mid B)}$, neglecting the prior distribution. This can lead to compelling but inaccurate statistical conclusions. For example, suppose $\Omega = \{p,n\}$, where $\omega = p$ indicates that an individual is positive for a medical condition while $\omega=n$ indicates that the individual is negative, with $P(\omega = p) = 0.01$.
Let $X \in \{+,-\}$ be the outcome of a test where $P(X = + \mid \omega = p) = 0.95$ and $P(X=+ \mid \omega = n) = 0.05$. Since the likelihood of observing $X=+$ is much higher when the individual has the condition than when he does not---indeed, the likelihood ratio is $\frac{P(X=+ \mid \omega = p)}{P(X=+ \mid \omega = n)} = 19$---it is tempting to conclude from a positive test result that the individual has the condition. But correctly applying Bayes' rule yields $\frac{P(\omega = p \mid X=+)}{P(\omega = n \mid X=+)} = 19 \cdot \frac{0.01}{0.99} = \frac{19}{99}<1$; that is, even with a positive test it is more likely that the individual is negative for the condition. \end{remark} A useful rewriting of Bayes' rule is \begin{equation} \label{eq:BayesFinite} P(\theta \mid X=x) = \frac{P(X=x \mid \theta) P(\theta)}{\sum_{\theta' \in \Theta} P(X=x \mid \theta') P(\theta')} \quad \forall \theta \in \Theta \end{equation} where the conditional distribution $P(\cdot \mid X=x)$ is precisely the agent's posterior belief upon observing $X=x$. \begin{example} \label{ex:BinaryExample} A drug is either effective ($\theta = A$) or not ($\theta=B$), where the prior probability that the drug is effective is $p \in (0,1)$. The signal is \[\begin{array}{ccc} & a & b\\ A & q & 1-q \\ B & 1-q & q \end{array}\] for some $q\in (0,1)$. Then upon observing $a$, the agent assigns to $\theta=A$ a posterior probability of \[\frac{p q}{p q + (1-p) (1-q) } = \frac{1}{1 + \frac{1-p}{p} \left(\frac{1-q}{q}\right)}\] which exceeds the prior belief of $p$ if and only if $q > \frac{1}{2}$.
\end{example} More generally, when $\theta$ and $X$ are (not necessarily finite-valued) random variables with densities $f_\theta$ and $f_X$ and conditional densities $f_{\theta \mid X=x}$ and $f_{X \mid \theta=t}$, then the posterior belief given $X=x$ is \begin{equation} \label{eq:BayesContinuous} f_{\theta \mid X=x} (t) = \frac{f_{X\mid \theta=t} (x ) f_\theta(t)}{\int_{t' \in \Theta} f_{X\mid \theta =t'}(x) f_\theta(t') \, dt'} \quad \forall t \in \Theta. \end{equation} Somewhat more generally, we may suppose that the joint distribution of $(\theta,X)$ is such that for every realization $x$ of $X$, there is a (measurable) function $q_x$ satisfying \[q_x(A) = \mathbb{E}(\mathbbm{1}_A \mid X=x) \quad \mbox{ for all events $A \subseteq \Theta$.}\] Then this $q_x$ is the posterior belief. \subsection{Bayes' Plausibility} \label{sec:BayesPlausibility} Outside of special cases (such as the one we will cover in Section \ref{sec:Gaussian}), posterior beliefs often cannot be expressed in closed form. Nevertheless, there are certain properties they must satisfy. One important property is that beliefs are a martingale, i.e., the expected posterior is equal to the prior. Intuitively, if you expect to change your mind given more information, then why haven't you done so already? \begin{fact}[Beliefs are a martingale.] \label{fact:Martingale} Let $p \in \Delta(\Theta)$ denote the agent's prior belief, and choose any event $A$. Then the posterior probability assigned to this event conditional on the realization of random variable $X$ is $\mathbb{E}(\mathbbm{1}_A \mid X)$. By the law of iterated expectations, \[\mathbb{E}(\mathbb{E}(\mathbbm{1}_A \mid X)) = \mathbb{E}(\mathbbm{1}_A)\] so the expected posterior probability of $A$ is equal to the prior probability of $A$. Since the event $A$ was arbitrarily chosen, we can conclude that the expected posterior belief is equal to the prior belief.
(In the case of a finite state space $\Theta$, choosing $A=\{\theta\}$ yields $\mathbb{E}(p(\theta \mid X)) = p(\theta)$ for every $\theta$.) \end{fact} Since any signal $X$ induces a distribution $\tau \in \Delta(\Delta(\Theta))$ over posterior beliefs, Fact \ref{fact:Martingale} implies that this distribution must average to the prior. \begin{definition} \label{def:BayesPlausible} Fixing a prior $p\in \Delta(\Theta)$, say that a distribution of posteriors $\tau$ is \emph{Bayes plausible} if \[\int q d\tau(q) = p\] i.e. the expected posterior is equal to the prior. We'll use \[\mathcal{T}(p) \equiv \left\{ \tau \in \Delta(\Delta(\Theta)) \, \mid \, \int q d\tau(q) = p\right\}\] to denote the set of Bayes plausible posterior distributions given prior $p$. \end{definition} Not only are we guaranteed that any signal induces a Bayes-plausible distribution over posterior beliefs, but also any Bayes-plausible distribution over posterior beliefs can be induced by some signal. \begin{definition} For any signal $X \sim P_X$, let $\tau_X \in \Delta(\Delta(\Theta))$ satisfy $\tau_X(q) = P_X( \{x:q_x =q \})$. Say that $\tau \in \Delta(\Delta(\Theta))$ is \emph{induced by $X$} if $\tau = \tau_X$. \end{definition} \begin{proposition} \label{prop:BayesPlausible} Suppose the prior $p$ belongs to the interior of the set $\Delta(\Theta)$. Then every Bayes-plausible distribution $\tau \in \mathcal{T}(p)$ is induced by some signal $X$. \end{proposition} The proof (demonstrated in \citet{KamenicaGentzkow} and \citet{ShmayaYariv} among others) proceeds by construction. For any distribution $\tau$, index the distinct posterior beliefs in the support of $\tau$ to be $\{q_x\}_{x\in \mathcal{X}}$, where $\mathcal{X}$ may not be finite. 
Then define $\sigma: \Theta \rightarrow \Delta(\mathcal{X})$ to satisfy \begin{equation} \label{eq:ConstructSignal} \sigma(x \mid \theta) = \frac{q_x(\theta) \tau(q_x)}{p(\theta)} \end{equation} We have constructed a signal $X$ whose realizations $x$ are identified with posterior beliefs $q_x$, where the conditional distribution over signal realizations mimics Bayes' rule $p(x \mid \theta) = \frac{p(\theta \mid x) p(x)}{p(\theta)}$, setting $q_x(\theta) = p(\theta \mid x)$ and $\tau(q_x) = p(x)$. This is a valid signal structure since \begin{align*} \int_{\mathcal{X}} \sigma(x \mid \theta) dx = \int_{\mathcal{X}} \frac{q_x(\theta) \tau(q_x)}{p(\theta)} dx = 1 \end{align*} by (\ref{eq:ConstructSignal}) and the definition of Bayes-plausibility. Moreover, \begin{align*} \frac{\sigma(x \mid \theta) p(\theta)}{\int_\Theta \sigma(x \mid \theta') p(\theta') d\theta'} & = \frac{\sigma(x \mid \theta) p(\theta)}{\tau(q_x) \int_\Theta q_x(\theta') d\theta'} = \frac{\sigma(x \mid \theta) p(\theta)}{\tau(q_x)} = q_x(\theta) \end{align*} so $q_x(\cdot)$ is precisely the posterior belief when updating in response to the signal $\sigma$. Thus the probability that the posterior belief is $q_x$ is exactly the probability that the realization of the constructed signal $\sigma$ is $x$, so $\tau$ is induced by $\sigma$ as desired. \begin{exercise} Suppose the prior over $\Theta = \{\theta_1, \theta_2\}$ is $(1/3,2/3)$. Provide a set $S$ and a signal structure $\sigma: \Theta \rightarrow \Delta(S)$ that induces the belief (0,1) with probability 1/3, and the belief (1/2,1/2) with probability 2/3. \end{exercise} Together, Fact \ref{fact:Martingale} and Proposition \ref{prop:BayesPlausible} imply: \begin{corollary} Fix any prior belief $p \in Int(\Delta(\Theta))$. Then a distribution over posteriors $\tau \in \Delta(\Delta(\Theta))$ is induced by some signal if and only if it is Bayes-plausible, i.e., $\tau \in \mathcal{T}(p)$.
\end{corollary} \subsection{Application of Bayes' Rule: Incompatibility of Fairness Definitions} Here we take a detour to demonstrate the power of Bayes' rule. Individuals in a population are each described by a covariate vector $C \in \mathcal{C}$, a group membership $ G \in \{g_1,g_2\}$, and a type $\theta \in \{0,1\}$. For example, we might interpret $\theta$ as the individual's creditworthiness (whether the individual would pay back a loan if approved), $G$ as a demographic group, and $C$ as the individual's credit history. Across individuals, the random vector $(C,G,\theta)$ is distributed according to $P$, and we use $p_g= P(\theta=1 \mid G=g)$ for the base rate of $\theta=1$ in each group $g$. A \emph{scoring rule} is any mapping $S: \mathcal{C} \rightarrow \{0,1\}$ that predicts the type given the covariate vector. \begin{definition}[Equality of False Positives] A scoring rule $S$ has equal false positive rates if \[P(S=1 \mid \theta=0, G=g_1) = P(S=1 \mid \theta=0, G=g_2)\] \end{definition} In words, the probability of being incorrectly assessed to pay back the loan is independent of group membership. Equivalently: $S \perp \!\!\! \perp G \mid \theta=0$, i.e., the score is conditionally independent of group membership given type $\theta=0$. \begin{definition}[Equality of False Negatives] A scoring rule $S$ has equal false negative rates if \[P(S=0 \mid \theta=1, G=g_1) = P(S=0 \mid \theta=1, G=g_2)\] \end{definition} In words, the probability of being incorrectly assessed to not pay back the loan is independent of group membership. Equivalently: $S \perp \!\!\! \perp G \mid \theta=1$, i.e., the score is conditionally independent of group membership given type $\theta=1$. 
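Both error-rate definitions are conditional-probability computations, so they are easy to check numerically. Below is a minimal Python sketch (the joint distribution, the scoring rule it encodes, and the function names are our own, chosen purely for illustration) that computes group-wise false positive and false negative rates from a joint pmf over $(G, \theta, S)$:

```python
# Hypothetical joint pmf over (group, type theta, score s); entries sum to 1.
# Chosen so that error rates match across groups even though base rates differ.
pmf = {
    ("g1", 0, 0): 0.20, ("g1", 0, 1): 0.05,
    ("g1", 1, 0): 0.05, ("g1", 1, 1): 0.20,
    ("g2", 0, 0): 0.24, ("g2", 0, 1): 0.06,
    ("g2", 1, 0): 0.04, ("g2", 1, 1): 0.16,
}

def cond_prob(event, given):
    """P(event | given), each argument a predicate on outcomes (g, theta, s)."""
    num = sum(p for k, p in pmf.items() if event(k) and given(k))
    den = sum(p for k, p in pmf.items() if given(k))
    return num / den

def false_positive_rate(g):
    # P(S=1 | theta=0, G=g)
    return cond_prob(lambda k: k[2] == 1, lambda k: k[0] == g and k[1] == 0)

def false_negative_rate(g):
    # P(S=0 | theta=1, G=g)
    return cond_prob(lambda k: k[2] == 0, lambda k: k[0] == g and k[1] == 1)

def base_rate(g):
    # p_g = P(theta=1 | G=g)
    return cond_prob(lambda k: k[1] == 1, lambda k: k[0] == g)
```

In this example both error-rate criteria hold (all four rates equal $0.2$) even though the base rates are $p_{g_1}=0.5$ and $p_{g_2}=0.4$.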
\begin{definition}[Calibrated] A score $S$ is calibrated if for each $s\in \{0,1\}$, \[P(\theta=1 \mid S = s, G=g_1) = P(\theta=1 \mid S=s, G=g_2)\] \end{definition} In words, among those assessed to pay back the loan (or, to not pay back the loan), the probability of paying back the loan is independent of group membership. Equivalently: $\theta \perp \!\!\! \perp G \mid S$, i.e., type is independent of group membership conditional on the score. The following impossibility result demonstrates that (outside of edge cases) these fairness criteria cannot be simultaneously satisfied. \begin{proposition}[\citet{KMR},\citet{chouldechova}] Suppose $p_{g_1} \neq p_{g_2}$. Then no scoring rule $S$ can simultaneously satisfy calibration, equal false positive rates, and equal false negative rates. \end{proposition} \begin{proof} Choose either group $g$ and define $FP_g= P(S=1 \mid \theta=0,G=g)$, $FN_g = P(S=0 \mid \theta=1,G=g)$, and $PPV_g = P(\theta=1 \mid S=1,G=g)$. We'll show that these quantities are related by the following identity: \begin{equation} \label{eq:Identity} FP_g = \frac{p_g}{1-p_g} \times \frac{1-PPV_g}{PPV_g} \times (1-FN_g). \end{equation} To simplify notation, let $Q$ denote the joint distribution over $(C,\theta,S)$ after conditioning on $G=g$.
Then, expanding (\ref{eq:Identity}), we have \[Q(S=1\mid \theta=0) = \frac{Q(\theta=1)}{Q(\theta=0)} \times \frac{Q(\theta=0 \mid S=1)}{Q(\theta=1 \mid S=1)} \times Q(S=1 \mid \theta=1)\] Multiplying both sides by $Q(\theta = 0)$ and applying Bayes' rule, \begin{align*} Q(S=1 , \theta=0) & = \frac{Q(\theta=0 \mid S=1)}{Q(\theta=1 \mid S=1)} \times Q(S=1, \theta=1) \end{align*} Thus, (\ref{eq:Identity}) is equivalent to \begin{equation} \label{eq:RatioIdentity} \frac{Q(S=1,\theta=0)}{Q(S=1,\theta=1)} = \frac{Q(\theta=0 \mid S=1)}{Q(\theta=1 \mid S=1)} \end{equation} Again using Bayes' rule, the RHS can be rewritten \[\frac{Q(\theta=0 \mid S=1)}{Q(\theta=1 \mid S=1)} = \frac{Q(\theta=0, S=1)/Q(S=1)}{Q(\theta=1, S=1)/Q(S=1)} = \frac{Q(S=1,\theta=0)}{Q(S=1,\theta=1)}\] so (\ref{eq:RatioIdentity}) is equivalent to \[ \frac{Q(S=1,\theta=0)}{Q(S=1,\theta=1)} = \frac{Q(S=1,\theta=0)}{Q(S=1,\theta=1)}\] and is therefore trivially true. The identity (\ref{eq:Identity}) holds for both groups $g\in \{g_1,g_2\}$. So if $FP_{g_1}=FP_{g_2}$ (as required by equality of false positive rates), $FN_{g_1}=FN_{g_2}$ (as required by equality of false negative rates), and also $PPV_{g_1}=PPV_{g_2}$ (as required by calibration), it must also hold that $p_{g_1}=p_{g_2}$. \end{proof} \section{Gaussian Information} \label{sec:Gaussian} Gaussian information environments are unusually tractable, since the posterior belief can be expressed in closed-form. We'll cover the main formulae for Bayesian updating in these environments, and show how these can be used to derive results in three applications. \subsection{Formulae} We'll start with the simplest case. The state is $\theta \sim \mathcal{N}(\mu ,\sigma_\theta^2)$ and the signal is $X=\theta + \eps$, where $\eps \sim \mathcal{N}(0,\sigma_\eps^2)$ and $\theta \perp \!\!\! \perp \eps$. 
Then: \begin{fact} \label{fact:BiVar} The agent's posterior belief about $\theta$ conditional on signal realization $X=x$ is normally distributed with mean \[\mathbb{E}(\theta \mid X=x) = \left(\frac{\sigma_\eps^2}{\sigma_\theta^2+\sigma_\eps^2}\right) \mu + \left(\frac{\sigma_\theta^2}{\sigma_\theta^2+\sigma_\eps^2}\right)x\] and variance \[Var(\theta \mid X=x) = \frac{\sigma_\theta^2 \sigma_\eps^2}{\sigma_\theta^2+\sigma_\eps^2}.\] \end{fact} A key property worth remembering is that the posterior mean is a linear combination of the prior mean $\mu$ and the signal realization $x$, where the weights are proportional to prior precision and signal precision. Additionally, while the posterior mean depends on the signal realization, the posterior variance is a constant. Fact \ref{fact:BiVar} is also sometimes written as: \[(\theta \mid X=x) \sim \mathcal{N}\left(\left(\frac{\tau_\theta}{\tau_\theta + \tau_\eps}\right) \mu + \left(\frac{\tau_\eps}{\tau_\theta + \tau_\eps}\right)x\, , \, \frac{1}{\tau_\theta + \tau_\eps} \right)\] where $\tau_\theta = 1/\sigma_\theta^2$ is the precision of the prior belief and $\tau_\eps = 1/\sigma_\eps^2$ is the precision of the signal. This restatement makes it apparent that the posterior precision is the sum of the prior precision and signal precision. We can use Fact \ref{fact:BiVar} to derive the distribution of the posterior mean. \begin{exercise} Suppose we write the posterior belief as $\mathcal{N}(\hat{\mu}, \hat{\sigma}^2)$, where $\hat{\mu}$ is a random variable that depends on the realization of the signal $X$. Prove that \[\hat{\mu} \sim \mathcal{N}\left(\mu, \sigma_\theta^2 - \frac{\sigma_\theta^2 \sigma_\eps^2}{\sigma_\theta^2 + \sigma_\eps^2}\right),\] i.e. 
the expected posterior mean is the prior mean, and the variance of the posterior mean is equal to the prior variance ($\sigma_\theta^2$), reduced by the posterior variance, $\left(\frac{\sigma_\theta^2 \sigma_\eps^2}{\sigma_\theta^2 + \sigma_\eps^2}\right)$.\end{exercise} \noindent This characterization implies that the more informative the signal is, the more variable the posterior mean is. \begin{remark} More generally (i.e., for $\theta$ and $X$ that are not necessarily normally distributed), the law of total variance implies that \[\Var(\mathbb{E}[\theta \mid X]) = \Var(\theta) - \mathbb{E}[\Var(\theta \mid X) ]\] so the variance of the posterior mean is equal to the difference of the prior variance and the expectation of the posterior variance. \end{remark} Similar closed forms exist for multivariate Gaussian states and signals. Suppose $Z$ is a $K\times 1$ vector distributed according to $\mathcal{N}(\mu, \Sigma)$, where $\Sigma$ has full rank. Partition the vector as follows: \[\left(\begin{array}{c} Z_1 \\ Z_2 \end{array}\right) \sim \mathcal{N}\left(\left(\begin{array}{c} \mu_1 \\ \mu_2 \end{array}\right), \left(\begin{array}{cc} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{array}\right)\right)\] \begin{fact} \label{fact:MultiVar} The conditional distribution of $Z_1$ given $Z_2=z_2$ is $\mathcal{N}(\hat{\mu},\widehat{\Sigma})$ where \begin{align*} \hat{\mu} & = \mu_1 + \Sigma_{12} \Sigma_{22}^{-1}(z_2 - \mu_2) \\ \widehat{\Sigma} &= \Sigma_{11} - \Sigma_{12} \Sigma_{22}^{-1} \Sigma_{21} \end{align*} \end{fact} \noindent Again, the posterior mean depends on the signal realization, but the posterior covariance matrix does not. \begin{example} Let $\left(\begin{array}{c} Z_1 \\ Z_2 \end{array}\right) \sim \mathcal{N}\left(\left(\begin{array}{c} \mu_1 \\ \mu_2 \end{array}\right), \left(\begin{array}{cc} \sigma_1^2 & \rho \sigma_1 \sigma_2 \\ \rho \sigma_1 \sigma_2 & \sigma_2^2 \end{array}\right)\right)$.
Then $(Z_1 \mid Z_2 = z_2) \sim \mathcal{N}(\hat{\mu}, \widehat{\Sigma})$ where \begin{align*} \hat{\mu} & = \mu_1 + \rho \frac{\sigma_1}{\sigma_2} (z_2 - \mu_2) \\ \widehat{\Sigma} & = \sigma_1^2 (1-\rho^2) \end{align*} \end{example} \begin{exercise} Let $Z_1 = \theta$ and $Z_2 = X$ where $\theta$ and $X$ are as defined at the beginning of this section. Show that Fact \ref{fact:MultiVar} implies Fact \ref{fact:BiVar}. \end{exercise} \noindent Sections \ref{sec:CareerConcerns}-\ref{sec:DataSharing} demonstrate three applications of these Bayesian updating formulae. \subsection{Application 1: Career Concerns} \label{sec:CareerConcerns} Our first application is solving the two-period version of Holmstr\"{o}m (1982)'s model of career concerns. There is a single agent and a manager. The agent has a type $\theta \sim \mathcal{N}(\mu, \sigma_\theta^2)$ that is unknown to both the agent and the manager. In period 1, the agent chooses an effort level $a \in \mathbb{R}_+$ at cost $c(a) = \frac12 a^2$. This effort is not observed by the manager. The agent's type and effort jointly determine the realization of an output signal \[X = \theta + a + \varepsilon\] where $\theta \perp \!\!\! \perp \eps$ and $\varepsilon \sim \mathcal{N}(0,\sigma_\eps^2)$. In period 2, the manager observes the realization of $X$ and forms an expectation about the agent's type. Since the manager does not observe $a$, this expectation is taken with respect to the manager's possibly misspecified perception about the distribution of $X$ (more soon). The agent receives the manager's expectation of his type. For arbitrary $a \in \mathbb{R}$, write $\mathbb{E}^a(\theta \mid X)$ for the conditional expectation of $\theta$ with respect to $X= \theta+a+\varepsilon$.
If the manager expects the agent to choose effort $a^*$ while the agent in fact chooses effort $a$, then the agent's total expected payoff is \[\mathbb{E}^a[\mathbb{E}^{a^*}(\theta \mid X)] - c(a),\] where the inner expectation $\mathbb{E}^{a^*}(\theta \mid X)$ is the manager's expectation of the agent's type, and $\mathbb{E}^a[\mathbb{E}^{a^*}(\theta \mid X)] $ is the agent's expectation of the manager's expectation. \begin{claim} \label{claim:Holmstrom} There is a unique equilibrium in which the agent chooses effort $a^* = \frac{\sigma_\theta^2}{\sigma_\theta^2 + \sigma_\eps^2}$. \end{claim} \begin{corollary} Equilibrium effort $a^*$ is decreasing in $\sigma_\eps^2$ (i.e., it is less valuable to manipulate a noisier signal) and is increasing in $\sigma_\theta^2$ (i.e., it is more valuable to manipulate information about a more uncertain unknown). \end{corollary} We'll now prove Claim \ref{claim:Holmstrom}. Equilibrium effort $a^*$ must satisfy the first-order condition \begin{equation} \label{eq:FOC} \left.\frac{\partial \mathbb{E}^a[\mathbb{E}^{a^*}(\theta \mid X)]}{\partial a}\right|_{a = a^*} = a^* \end{equation} equating the marginal value of increasing effort (over $a^*$) to the marginal cost of increasing effort (over $a^*$). Applying Fact \ref{fact:BiVar}, the manager's expectation of $\theta$ with respect to the de-biased signal $X-a^* = \theta +\eps$ is \[ \mathbb{E}^{a^*}(\theta \mid X) = \frac{\sigma_\theta^2}{\sigma_\theta^2 + \sigma_\eps^2} (X-a^*) + \frac{\sigma_\eps^2}{\sigma_\theta^2 + \sigma_\eps^2} \mu \] The agent's expectation of this expectation (with respect to $X=\theta+a+\eps$) is \begin{align*} \mathbb{E}^a\left[\mathbb{E}^{a^*}(\theta \mid X)\right] = \mu + \frac{\sigma_\theta^2}{\sigma_\theta^2 + \sigma_\eps^2} (a-a^*) \end{align*} So (\ref{eq:FOC}) implies that equilibrium effort is $a^* = \frac{\sigma_\theta^2}{\sigma_\theta^2 + \sigma_\eps^2}$. (Uniqueness follows from strict concavity of the agent's payoff function.) 
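The key step in the proof, $\mathbb{E}^a[\mathbb{E}^{a^*}(\theta \mid X)] = \mu + \frac{\sigma_\theta^2}{\sigma_\theta^2+\sigma_\eps^2}(a-a^*)$, is easy to sanity-check by Monte Carlo simulation. A minimal Python sketch (the parameter values and function names are arbitrary choices of ours):

```python
import random

random.seed(0)
mu, var_th, var_eps = 1.0, 4.0, 2.25      # prior N(1, 4), output noise N(0, 2.25)
w = var_th / (var_th + var_eps)           # weight on the de-biased signal

def managers_estimate(x, a_star):
    # E^{a*}(theta | X=x): Fact BiVar applied to the de-biased signal x - a*
    return w * (x - a_star) + (1 - w) * mu

def avg_managers_estimate(a, a_star, n=200_000):
    # Monte Carlo approximation of E^a[E^{a*}(theta | X)] when true effort is a
    total = 0.0
    for _ in range(n):
        theta = random.gauss(mu, var_th ** 0.5)
        x = theta + a + random.gauss(0.0, var_eps ** 0.5)
        total += managers_estimate(x, a_star)
    return total / n

a_star = w                                # claimed equilibrium effort level
simulated = avg_managers_estimate(a=0.9, a_star=a_star)
closed_form = mu + w * (0.9 - a_star)     # the expression derived in the text
```

The marginal effect of $a$ on the manager's average estimate is the constant $w = \sigma_\theta^2/(\sigma_\theta^2+\sigma_\eps^2)$, which is why the first-order condition pins down $a^* = w$.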
\begin{exercise} Consider a variation on \citet{Holmstrom}'s career concerns model, in which the type $\theta$ and noise term $\eps$ are correlated. Specifically, the type is decomposed as $\theta = \theta_1 + \theta_2$, the signal is $X= \theta + \varepsilon + a$, and we suppose that \begin{align*} \theta_2 & = \alpha \theta_1 + z \\ \eps & = \beta \theta_1 + w \end{align*} where $\alpha,\beta \in \mathbb{R}$ are known constants, and $\theta_1 \sim \mathcal{N}(\mu_\theta, \sigma_\theta^2)$, $z\sim \mathcal{N}(0,\sigma_z^2)$, and $w\sim \mathcal{N}(0,\sigma_w^2)$ are mutually independent and unknown to both the agent and the manager. \begin{itemize} \item[(a)] Solve for equilibrium effort. How does this compare to Claim \ref{claim:Holmstrom} in the special case $\alpha=\beta=0$? \item[(b)] Suppose $\alpha,\beta>0$. How does equilibrium effort change in the parameters $\alpha$ and $\beta$? Provide intuition. \end{itemize} \end{exercise} \subsection{Application 2: Linear-Quadratic Coordination Games} Our second application is solving for equilibrium in a two-agent linear-quadratic coordination game \citep{MorrisShin}. Let $\theta\sim\mathcal{N}(\mu,\sigma_\theta^2)$ be an unknown state. Each agent $i=1,2$ receives a private signal about the state \[X_i = \theta + \eps_i\] where $\eps_i \sim \mathcal{N}(0,\sigma_\eps^2)$ is independent of the state and across agents. Each agent chooses an action $a_i \in \mathbb{R}$ given their signal realization $x_i$. Agent $i$'s payoff is \[U_i(a_1, a_2) = - (1-\beta) (a_i - \theta)^2 - \beta (a_i - a_j)^2\] where $\beta \in (0,1)$ controls how much the agent cares about matching the state versus matching the other agent's action. We'll solve for a symmetric linear Bayesian Nash equilibrium $(a^*_1, a^*_2)$ in which each agent's strategy satisfies \begin{equation} \label{eq:Eq} a_i^*(x_i) = c x_i + \kappa \end{equation} for some constants $c, \kappa \in \mathbb{R}$. Let's first conjecture that such an equilibrium exists.
Given agent $j$'s strategy $a_j(x_j) = c x_j + \kappa$, agent $i$'s expected payoff (conditional on $X_i=x_i$) is \[\mathbb{E}[ -(1-\beta)(a_i - \theta)^2 - \beta(a_i - (cX_j + \kappa))^2 \mid X_i=x_i] \] Taking a derivative with respect to $a_i$, and using $\mathbb{E}(X_j \mid X_i = x_i) = \mathbb{E}(\theta \mid X_i = x_i)$ (since $\eps_j$ is mean-zero and independent of $(\theta, \eps_i)$), agent $i$'s best reply is \[ a_i^*(x_i) = (1-\beta) \mathbb{E}(\theta \mid X_i=x_i) + \beta ( c\mathbb{E}( \theta \mid X_i=x_i) + \kappa ).\] Plugging in the expression for $\mathbb{E}(\theta \mid X_i = x_i)$ from Fact \ref{fact:BiVar}, and matching coefficients with (\ref{eq:Eq}), we have $ c = \frac{\sigma_\theta^2 (1-\beta)}{\sigma_\eps^2 + \sigma_\theta^2(1-\beta)}$ and $\kappa = \frac{\sigma_\eps^2}{\sigma_\eps^2+\sigma_\theta^2(1-\beta)} \mu$. Thus a symmetric linear equilibrium exists in which each agent $i$ chooses \begin{equation} \label{eq:EqAction} a_i^*(x_i) = \frac{\sigma_\theta^2 (1-\beta)}{\sigma_\eps^2 + \sigma_\theta^2(1-\beta)} x_i + \frac{\sigma_\eps^2}{\sigma_\eps^2+\sigma_\theta^2(1-\beta)} \mu \end{equation} \citet{MorrisShin} further show that this is the unique pure-strategy equilibrium. Suppose we interpret the common prior $\mathcal{N}(\mu, \sigma_\theta^2)$ as informed by a public signal, where a more informative signal implies a smaller $\sigma_\theta^2$. Then we see from (\ref{eq:EqAction}) that the more informative the public signal is, the less weight agents place on their private signal. \subsection{Application 3: Data Sharing} \label{sec:DataSharing} Our final application is an example from \citet{AcemogluMakhdoumiMalekianOzdaglar} regarding why online platforms don't compensate users for the data that they give up.
There is a single platform and two agents $i=1,2$ with types distributed \[\begin{pmatrix} \theta_1 \\ \theta_2 \end{pmatrix} \sim \mathcal{N} \left( \begin{pmatrix} 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix}\right) \] Each agent $i$ privately observes the realization of a signal $X_i = \theta_i + \eps_i$, where $\eps_i \sim \mathcal{N}(0,1)$ is independent across agents and independent of both types. The platform chooses a payment $p_i$ to offer to each agent $i$ for sharing their data. After receiving these offers, each agent $i$ chooses whether to share $(a_i=1)$ or withhold $(a_i=0)$ their signal realization. Write $X_{\bold{a}}$ for the signals shared under action profile $\bold{a}=(a_1,a_2)$. For example, if $\bold{a}=(1,0)$, then $X_{\bold{a}} = X_1$, while if $\bold{a}=(1,1)$, then $X_{\bold{a}}=(X_1,X_2)$. Each agent $i$'s payoff is the platform's posterior uncertainty about his type (scaled by a known constant $v>0$) plus his payment, \[u_i(\bold{a},\bold{p})= v \cdot \Var(\theta_i \mid X_{\bold{a}}) + p_i \cdot \mathbbm{1}(a_i = 1) \] and the platform's payoff is $u_P(\bold{a},\bold{p})=-u_1(\bold{a},\bold{p})-u_2(\bold{a},\bold{p})$. So the agents prefer for the platform to be more uncertain about their types, while the platform prefers to be less uncertain. We'll show that when agent types are sufficiently correlated, i.e., $\rho$ is large, then the platform can induce both agents to share their data at a lower total payment than what is required to induce exactly one agent to share. Let's first solve for payment vectors $(p_1,p_2)$ given which it is an equilibrium for both agents to share their signals. Suppose agent $j$ chooses to share. Then if agent $i$ does not share, the platform's belief about $\theta_i$ is updated using only $X_j$.
Since \[\begin{pmatrix} \theta_i \\ X_j \end{pmatrix} \sim \mathcal{N} \left( \begin{pmatrix} 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 1 & \rho \\ \rho & 2 \end{pmatrix}\right) \] the platform's posterior variance of $\theta_i$ is $1-\rho^2/2$ (by Fact \ref{fact:MultiVar}). So agent $i$'s payoff is $ v \cdot \left(1-\rho^2/2\right) $. If agent $i$ does share, then beliefs about $\theta_i$ are further updated using the signal $X_i$, and (applying Fact \ref{fact:BiVar}, treating the interim belief with variance $1-\rho^2/2$ as the prior) the platform's posterior variance of $\theta_i$ reduces to $\frac{2-\rho^2}{4-\rho^2}$. So agent $i$'s payoff is $ v \cdot \left(\frac{2-\rho^2}{4-\rho^2}\right) + p_i. $ Thus, agent $i$'s best reply to $a_j=1$ is to share if and only if \[p_i \geq v\cdot \left( \frac{(2-\rho^2)^2}{2(4-\rho^2)}\right)\] and the action profile $(a_1,a_2)=(1,1)$ is an equilibrium if the above display holds for both agents $i$. The minimum total payment is $v\cdot \left( \frac{(2-\rho^2)^2}{(4-\rho^2)}\right)$. Let's now solve for payment vectors $(p_1,p_2)$ given which it is an equilibrium for exactly one agent to share his data. Without loss, fix $a_2=0$. If agent 1 chooses $a_1=0$, then the platform's uncertainty about $\theta_1$ is its prior uncertainty, 1, so agent $1$'s payoff is $v$. If agent $1$ chooses $a_1=1$, then the platform's belief about $\theta_1$ updates to the signal $X_1$. Applying Fact \ref{fact:BiVar}, the platform's posterior variance about $\theta_1$ is $1/2$ and so agent $1$'s payoff is $v \cdot (1/2) + p_1.$ Thus, $a_1=1$ is a best reply to $a_2=0$ if and only if $p_1 \geq v/2.$ So the platform can induce (exactly) one agent to share if it offers one agent a payment of at least $v/2$ (which is accepted) and another a payment of strictly less than $v/2$ (which is rejected), at a total payment of $v/2$. When $\rho^2 > \frac{7-\sqrt{17}}{4} \approx 0.72$, then $v \cdot \left( \frac{(2-\rho^2)^2}{(4-\rho^2)}\right) < v/2$, so the platform pays less to induce two users to share than one.
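The posterior-variance calculations and the payment comparison above can be verified numerically; here is a minimal Python sketch (normalizing $v=1$; function names are ours):

```python
import math

def var_given_xj(rho):
    # Var(theta_i | X_j): prior variance 1, Cov(theta_i, X_j) = rho, Var(X_j) = 2
    return 1 - rho ** 2 / 2

def var_given_both(rho):
    # Var(theta_i | X_i, X_j), via the multivariate normal updating formula
    return (2 - rho ** 2) / (4 - rho ** 2)

def min_total_payment_both(rho):
    # Per-agent payment compensates the incremental loss of privacy from
    # sharing when the other agent already shares; two agents in total.
    return 2 * (var_given_xj(rho) - var_given_both(rho))

# Above this correlation, inducing both agents to share costs less than the
# payment of 1/2 needed to induce a single agent to share.
rho_threshold = math.sqrt((7 - math.sqrt(17)) / 4)
```

For instance, `min_total_payment_both(0.9)` is about $0.44 < 1/2$, while `min_total_payment_both(0.5)` is about $0.82 > 1/2$.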
Intuitively, each agent's choice to share their data exerts a negative externality on the other agent: When both users share, each of their signals is less valuable in view of the signal revealed by the other. Agents paid their marginal value thus receive lower compensation, and in a limiting version of this model with a growing number of agents, the amount of compensation needed to induce all agents to share vanishes to zero. \section{Additional Exercises} \begin{exercise} Suppose $\theta \sim \mathcal{N}(0, \sigma_\theta^2)$ and \begin{align*} Y_1 & = \theta + b + \eps_1 \\ Y_2 & = b + \eps_2 \end{align*} where $\theta$, $b$, $\eps_1$, and $\eps_2$ are all independent of one another, $b \sim \mathcal{N}(0, \sigma_b^2)$, $\eps_1 \sim \mathcal{N}(0,\sigma_1^2)$, and $\eps_2 \sim \mathcal{N}(0, \sigma_2^2)$. We can interpret $Y_1$ as a biased signal about $\theta$ and $Y_2$ as a signal about the size of the bias. Your friend says: ``The only value of $Y_2$ for learning about $\theta$ is to provide information about the size of the bias $b$. Since $Y_1 - Y_2$ is an unbiased signal about $\theta$, it is equally valuable to learn the outcome of $Y_1-Y_2$ as it is to learn the pair of signals $(Y_1,Y_2)$.'' Show that your friend is wrong: The distribution of $\theta \mid Y_1, Y_2$ is different from the distribution of $\theta \mid Y_1 - Y_2$. Also provide an intuition explaining to your friend the error in their reasoning. \end{exercise} \begin{exercise} \label{ex:Average} Suppose $\theta$ is normally distributed. For each $i=1, \dots, n$, let $X_i = \theta + \eps_i$ where $\eps_i$ is independent of $\theta$, the vector $(\eps_1, \dots, \eps_n)$ is jointly normal, and the signals $X_1, \dots, X_n$ are exchangeable. Define $\overline{X} = \frac1n (X_1 + \dots + X_n)$. Prove that $\theta \mid X_1, \dots, X_n$ is identical in distribution to $\theta \mid \overline{X}$.
\begin{hint*} Recall that $\mathbb{E}(\theta \mid X)$ minimizes $\mathbb{E}[ (\hat{\theta} - \theta)^2]$ among all $\sigma(X)$-measurable random variables $\hat{\theta}$. \end{hint*} \end{exercise} \begin{exercise} Consider two processes of social learning about an unknown state $\theta \sim \mathcal{N}(0, 1)$. \\ \textbf{Scenario 1:} At $t=0$, a single agent privately observes the signal \[Y = \theta + \delta, \quad \delta \sim \mathcal{N}(0,1/\tau)\] where $\theta$ and $\delta$ are independent of one another, and the precision $\tau \in \mathbb{R}_+$ is a known constant. The agent chooses an action $y$ and receives the payoff $-\mathbb{E}[(y-\theta)^2]$. At $t=1$, each of $n$ agents, indexed by $i$, privately observes a signal \[X_i=\theta + \varepsilon_i, \quad \varepsilon_i \sim \mathcal{N}(0,1)\] as well as the action $y$ of the first agent. The error terms $\varepsilon_i$ are independent across agents, and independent of $\theta$ and $\delta$. Each agent $i$ from this generation then takes an action $a_i$ to maximize the payoff $-\mathbb{E}[(a_i - \theta)^2]$. At $t=2$, you arrive, observe the actions $(a_1, \dots, a_n)$ of the preceding generation (but \emph{not} the action of the first agent), and choose an action $a^*$ with payoff $-\mathbb{E}[(a^* - \theta)^2]$. \\ \textbf{Scenario 2:} At $t=1$, each of $m$ agents observes a private signal \[Z_i=\theta + \eta_i, \quad \eta_i \sim \mathcal{N}(0,1)\] where the error terms $\eta_i$ are independent across agents and of $\theta$. Each agent $i$ takes an action $b_i$ with payoff $-\mathbb{E}[(b_i - \theta)^2]$. At $t=2$, you arrive, observe the actions $(b_1, \dots, b_m)$ of the preceding generation, and choose an action $a^*$ with payoff $-\mathbb{E}[(a^* - \theta)^2]$. \\ Characterize the function $h(n)$ such that your expected payoff is higher in scenario 1 if and only if $m<h(n)$. As clearly as you can, write out an intuition for this result. \begin{hint*} Use the fact given in Exercise \ref{ex:Average}.
\end{hint*} \end{exercise} \chapter{Properties of Information} Many economic settings involve an unknown type or quality $\theta$ and a signal $X$ about $\theta$, where both $\theta$ and $X$ are ordered (i.e., there are ``better" qualities $\theta$ and ``higher" signal realizations $X$). In these settings, we might think that higher realizations of $X$ are good news about $\theta$---for example, that higher test scores suggest higher ability or that better reviews for a product suggest higher quality. These positive inferences are not in general justified, requiring assumptions on the joint distribution of $(\theta,X)$. Section \ref{sec:DefinePD} presents three useful definitions of positive dependence between random variables, which are applied to our motivating problem (inference about $\theta$ from observation of a signal $X$) in Section \ref{sec:Relationship}. Section \ref{sec:LL} presents an example of the kind of counterintuitive result that can obtain when these properties are not imposed on the informational environment. \section{Definitions} \label{sec:DefinePD} \subsection{Monotone Likelihood Ratio Property} Consider two random variables $Z$ and $\widetilde{Z}$ with distributions $F$ and $\widetilde{F}$ that admit densities $f$ and $\tilde{f}$, which we assume are everywhere strictly positive.\footnote{The assumption that densities are everywhere strictly positive allows us to define the monotone likelihood ratio property in terms of likelihood ratios. 
More generally, we can consider a distribution $F$ to likelihood-ratio dominate another distribution $\widetilde{F}$ if $f(z)\tilde{f}(z') \geq f(z')\tilde{f}(z)$ for all $z > z'$.} \begin{definition} \label{def:LRDominance} The distribution $F$ \emph{likelihood-ratio dominates} the distribution $\widetilde{F}$ if \[\frac{f(z)}{f(z')} \geq \frac{\widetilde{f}(z)}{\widetilde{f}(z')} \quad \quad \mbox{for all $z > z'$}\] \end{definition} \noindent Intuitively, moving up in the likelihood-ratio dominance order renders higher realizations of $z$ more likely relative to lower realizations. This definition is often specialized to conditional densities in the following way. Suppose $\theta$ and $X$ are real-valued random vectors defined on the same probability space with densities $f_\theta$ and $f_X$ and conditional densities $f_{\theta \mid X}$ and $f_{X \mid \theta}$. \begin{definition} \label{def:MLRP} The family of conditional densities $\{f_{X \mid \theta}(\cdot \mid \theta)\}_{\theta \in \Theta}$ have the \emph{monotone likelihood ratio property} (MLRP) if for every $x>x'$ and $\theta > \theta'$, \begin{equation} \label{eq:MLRP} \frac{f_{X\mid \theta} (x\mid \theta)}{f_{X\mid \theta}(x' \mid \theta)} \geq \frac{f_{X\mid \theta}(x \mid \theta')}{f_{X\mid \theta}(x' \mid \theta')}. \end{equation} If the inequality above holds strictly at every $x>x'$, then we say that $\{f_{X \mid \theta}(\cdot \mid \theta)\}$ have the \emph{strict} monotone likelihood ratio property. \end{definition} \begin{remark} If $\{f_{X \mid \theta}(\cdot \mid \theta)\}$ satisfy MLRP, then $\{f_{\theta \mid X}(\cdot \mid X)\}$ also satisfy MLRP. 
To see this, observe that by Bayes' rule, (\ref{eq:MLRP}) can be rewritten \[ \frac{f_{\theta \mid X} (\theta\mid x) f_X(x)}{f_\theta(\theta)} \frac{f_\theta(\theta)}{f_{\theta \mid X}(\theta\mid x') f_X(x')} \geq \frac{f_{\theta \mid X}(\theta'\mid x)f_X(x)}{f_\theta(\theta')} \frac{f_\theta(\theta')}{f_{\theta \mid X}(\theta'\mid x')f_X(x')}\] which simplifies to the condition that $\{f_{\theta \mid X}(\cdot \mid X)\}$ have the monotone likelihood ratio property. \end{remark} \medskip In the special case of an additive signal $X = \theta + \eps$, where $\eps$ is independent of $\theta$ and has density $f_\eps$, \[\frac{f_{X\mid \theta} ( x \mid \theta)}{ f_{X \mid \theta}( x' \mid \theta)} = \frac{f_\eps(x-\theta)}{f_\eps(x'-\theta)}\] so the MLRP condition in (\ref{eq:MLRP}) becomes \[\frac{f_\eps(x-\theta)}{f_\eps(x'-\theta)} \geq \frac{f_\eps(x-\theta')}{f_\eps(x'-\theta')} \quad \quad \mbox{for every $x>x'$ and $\theta > \theta'$},\] i.e., for every $\theta > \theta'$, the function $\frac{f_\eps(x-\theta)}{f_\eps(x-\theta')}$ is nondecreasing in $x$. It turns out that this is precisely the condition that $f_\eps$ is log-concave. \begin{definition} A function $f$ that maps a convex set into the positive reals is \emph{log-concave} if the function $\ln f$ is concave. \end{definition} \begin{proposition}[\citet{SaumardWellner}] A density function $f$ on $\mathbb{R}$ is log-concave if and only if for every $\theta > \theta'$, the ratio $\frac{f(x-\theta)}{f(x-\theta')}$ is a non-decreasing function of $x$. \end{proposition} \noindent Thus, in any model where (1) $X= \theta + \eps$, (2) the noise term $\eps$ is independent of $\theta$, and (3) $\eps$ has a log-concave density, we can be guaranteed that $\{f_{\theta \mid X}(\cdot \mid x)\}$ has the monotone likelihood ratio property (no matter the distribution of $\theta$).
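This guarantee is easy to probe numerically: on a grid of $x$ values, check whether the likelihood ratio $f_\eps(x-\theta)/f_\eps(x-\theta')$ is nondecreasing in $x$. A minimal Python sketch (our own function names; the grid check is a heuristic illustration, not a proof) contrasting the log-concave normal density with the Cauchy density, which is not log-concave:

```python
import math

def normal_pdf(x):
    # Standard normal density: log-concave
    return math.exp(-x ** 2 / 2) / math.sqrt(2 * math.pi)

def cauchy_pdf(x):
    # Standard Cauchy density: not log-concave
    return 1 / (math.pi * (1 + x ** 2))

def ratio_is_monotone(pdf, theta_hi=1.0, theta_lo=0.0, lo=-10.0, hi=10.0, steps=400):
    """Grid check of the additive-signal MLRP condition: is
    pdf(x - theta_hi) / pdf(x - theta_lo) nondecreasing in x?"""
    xs = [lo + (hi - lo) * i / steps for i in range(steps + 1)]
    ratios = [pdf(x - theta_hi) / pdf(x - theta_lo) for x in xs]
    return all(r2 >= r1 - 1e-12 for r1, r2 in zip(ratios, ratios[1:]))
```

With $\theta=1$ and $\theta'=0$, the normal ratio is $\exp(x - 1/2)$, which is increasing in $x$; the Cauchy ratio $(1+x^2)/(1+(x-1)^2)$ falls before it rises, violating the condition.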
Many distributions have log-concave densities---for example, normal distributions, exponential distributions, the uniform distribution over any convex set, the logistic distribution, and the extreme value distribution. But others do not---for example, the Pareto distribution and Cauchy distribution. See \citet{SaumardWellner} or \citet{BagnoliBergstrom} for other examples and properties of log-concave distributions. \subsection{Affiliation} Let $Z_1, \dots, Z_n$ be real-valued random variables such that the vector $(Z_1, \dots, Z_n)$ takes values in $\mathbb{R}^n$ and admits a joint density $f$, which again we'll assume to be everywhere strictly positive. For any $z,z' \in \mathbb{R}^n$, let $z \wedge z'$ (``$z$ meet $z'$'') denote the component-wise minimum of $z$ and $z'$, and $z \vee z'$ (``$z$ join $z'$'') denote the component-wise maximum, i.e., \begin{align*} z \vee z' & = (\max(z_1,z'_1), \dots, \max(z_n,z'_n)) \\ z \wedge z' & = (\min(z_1,z'_1), \dots, \min(z_n,z'_n)) \end{align*} \begin{definition} \label{def:Affiliation} The variables $Z_1, \dots, Z_n$ are \emph{affiliated} if \begin{equation} \label{eq:Affiliated} f(z \vee z')f(z \wedge z') \geq f(z)f(z') \end{equation} for all $z,z' \in \mathbb{R}^n.$ \end{definition} \noindent This condition loosely says that larger realizations of any one variable make larger realizations of the other variables more likely. Figure \ref{fig:Affiliation} depicts this relationship for two binary variables. \begin{figure}[H] \begin{center} \includegraphics[scale=0.4]{affiliation.pdf} \end{center} \caption{Two binary variables with joint density $f$ are affiliated if $f(1,1)f(0,0)\geq f(1,0)f(0,1)$.} \label{fig:Affiliation} \end{figure} \begin{remark} If $Z_1, \dots, Z_n$ are mutually independent, then they are affiliated. \end{remark} Besides Definition \ref{def:Affiliation}, there are several equivalent ways to characterize affiliation. \begin{proposition} $Z_1, Z_2, \dots, Z_n$ are affiliated if and only if $f$ is log-supermodular.
\end{proposition} \begin{proof} Taking logs of both sides of (\ref{eq:Affiliated}), we obtain \[\log f(z \vee z') + \log f(z \wedge z') \geq \log f(z) + \log f(z')\] i.e., $\log f $ is supermodular. \end{proof} \begin{proposition} \label{prop:SecondDerivative} Suppose the joint density $f$ is twice-differentiable. Then $Z_1, Z_2, \dots, Z_n$ are affiliated if and only if $\frac{\partial^2 \log f}{\partial z_i \partial z_j} \geq 0$ for every pair of distinct indices $i,j$. \end{proposition} We show the only if direction below, leaving the if direction for an exercise. \begin{proof} Without loss let $i=1$ and $j=2$. Choose any $z_1,z'_1, z_2,z_2' \in \mathbb{R}$ where $z_1 > z_1'$ and $z_2 > z_2'$. Suppose $Z_1,Z_2, \dots, Z_n$ are affiliated. Then by definition \[\log f(z_1, z_2, z_{-12}) - \log f(z_1', z_2, z_{-12}) \geq \log f(z_1, z_2', z_{-12}) - \log f(z_1', z_2', z_{-12}).\] Rewrite $z_1$ as $z_1' +\eps$ and divide both sides by $\eps$. Taking the limit as $\eps \rightarrow 0$, we have \begin{align*} \lim_{\eps \rightarrow 0} & \left(\frac{\log f(z_1' +\eps, z_2, z_{-12}) - \log f(z_1', z_2, z_{-12})}{\eps}\right) \\ & \quad \quad \quad \quad \quad \geq \lim_{\eps \rightarrow 0} \left(\frac{\log f(z_1'+\eps, z_2', z_{-12}) - \log f(z_1', z_2', z_{-12})}{\eps}\right) \end{align*} so $\frac{\partial \log f}{\partial z_1}$ is nondecreasing in $z_2$, and hence $\frac{\partial^2 \log f}{\partial z_1 \partial z_2} \geq 0$, as desired. \end{proof} \begin{exercise} Prove the `if' direction of Proposition \ref{prop:SecondDerivative}: If the joint density $f$ is twice-differentiable and satisfies $\frac{\partial^2 \log f}{\partial z_i \partial z_j} \geq 0$ for every pair of distinct indices $i,j$, then $Z_1, Z_2, \dots, Z_n$ are affiliated. \end{exercise} The next characterization simplifies (\ref{eq:Affiliated}) to a pairwise condition. Specifically, for any $(Z_i,Z_j)$ and any realization of the remaining variables $Z_{-ij}$, higher realizations of $Z_i$ must make higher realizations of $Z_j$ more likely.
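Before turning to that pairwise condition, note that the defining inequality (\ref{eq:Affiliated}) can be checked directly in the two-binary-variable case of Figure \ref{fig:Affiliation}. Below is an illustrative Python sketch (the probability values and function name are arbitrary choices of ours, not from the text):

```python
def is_affiliated_2x2(f):
    """f[z1][z2] = joint pmf of two binary variables.
    Checks the inequality f(1,1) f(0,0) >= f(1,0) f(0,1)."""
    return f[1][1] * f[0][0] >= f[1][0] * f[0][1]

positive_dep = [[0.4, 0.1], [0.1, 0.4]]  # mass concentrated on the diagonal
negative_dep = [[0.1, 0.4], [0.4, 0.1]]  # mass concentrated off the diagonal
print(is_affiliated_2x2(positive_dep))   # True
print(is_affiliated_2x2(negative_dep))   # False

# independent variables are affiliated (the remark above);
# here the inequality holds with equality
p, q = 0.5, 0.25
indep = [[(1 - p) * (1 - q), (1 - p) * q], [p * (1 - q), p * q]]
print(is_affiliated_2x2(indep))          # True
```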
\begin{proposition} \label{prop:Pairwise} $Z_1, \dots, Z_n$ are affiliated if and only if \begin{equation} \label{eq:Pairwise} f(z_i,z_j,z_{-ij}) f(z_i',z_j',z_{-ij}) \geq f(z'_i,z_j,z_{-ij}) f(z_i,z_j',z_{-ij}) \end{equation} for every pair of distinct indices $i$, $j$, and every $z_i > z_i'$, $z_j > z_j'$, and $z_{-ij} \in \mathbb{R}^{n-2}$. \end{proposition} \begin{exercise} Prove Proposition \ref{prop:Pairwise}. \end{exercise} This pairwise characterization immediately implies the following characterization, which says that $(Z_1, \dots, Z_n)$ are affiliated if and only if for every pair of variables $i,j$, and every realization of $z_{-ij}$, the family of conditional densities $\{f(\cdot \mid z_j, z_{-ij})\}_{z_j \in \mathbb{R}}$ has the monotone likelihood ratio property. \begin{proposition} \label{prop:A_MLRP}$Z_1, \dots, Z_n$ are affiliated if and only if \begin{equation} \label{eq:A_MLRP} f(z_i \mid z_j, z_{-ij})f(z'_i \mid z'_j, z_{-ij}) \geq f(z_i \mid z'_j, z_{-ij})f(z'_i \mid z_j, z_{-ij}) \end{equation} for every pair of distinct indices $i$, $j$, and every $z_i > z_i'$, $z_j > z_j'$, and $z_{-ij} \in \mathbb{R}^{n-2}$. \end{proposition} \begin{proof} The displays in (\ref{eq:Pairwise}) and (\ref{eq:A_MLRP}) are equivalent to one another by Bayes' rule, so Proposition \ref{prop:Pairwise} implies Proposition \ref{prop:A_MLRP}. \end{proof} \medskip Operations that preserve affiliation include: \begin{proposition}[Increasing Functions] Suppose $Z_1, \dots, Z_n$ are affiliated, and the functions $g_i: \mathbb{R} \rightarrow \mathbb{R}$, $1\leq i \leq n$, are either all nondecreasing or all nonincreasing. Then the variables $g_1(Z_1), \dots, g_n(Z_n)$ are affiliated. \end{proposition} \begin{proposition}[Subsets] Suppose $Z_1, \dots, Z_n$ are affiliated and let $A \subseteq \{1, \dots, n\}$ be any subset of indices. Then the variables $(Z_i)_{i \in A}$ are affiliated.
\end{proposition} \begin{proposition}[Order Statistics] Suppose $Z_1, \dots, Z_n$ are affiliated, and for each $1\leq i \leq n$, let $Z^{(i)}$ denote the $i$-th largest realization among $(Z_1, \dots, Z_n)$. Then the variables $(Z^{(1)}, \dots, Z^{(n)})$ are affiliated. \end{proposition} \begin{exercise} Show that affiliation is not preserved under arbitrary linear combinations of affiliated variables by constructing an example of random variables $Z_1,Z_2,Z_3$ where $(Z_1,Z_2,Z_3)$ are affiliated but $(Z_1+Z_2,Z_3)$ are not. \end{exercise} \subsection{First-Order Stochastic Dominance} Again consider two real-valued random variables, a parameter $\theta$ and a signal $X$, defined on the same probability space with joint distribution $F$. In many applications we may expect a higher signal realization to lead to a higher inference about the unknown parameter. We now formalize `higher inference' as a first-order stochastic dominance shift in the posterior belief. \begin{definition} A distribution $F$ \emph{first-order stochastically dominates} $\widetilde{F}$, which we denote by $F \geq_{FOSD} \widetilde{F}$, if \[\int u(\theta) dF(\theta) \geq \int u(\theta) d\widetilde{F}(\theta)\] for every nondecreasing function $u: \mathbb{R} \rightarrow \mathbb{R}$. Equivalently, $F(\theta) \leq \widetilde{F}(\theta)$ at every $\theta \in \Theta$. \end{definition} \noindent If $u$ is interpreted as a utility function over money, then a monetary gamble distributed according to $F$ is preferred over one distributed according to $\widetilde{F}$ by every agent who prefers more money over less, regardless of the specific shape of the agent's utility function. We can use this definition to compare conditional beliefs about $\theta$. \begin{definition} \label{def:FOSDProperty} Say that $F$ has the \emph{FOSD property} if $F_{\theta \mid X}(\cdot \mid X=x) \geq_{FOSD} F_{\theta \mid X}(\cdot \mid X=x')$ for all $x > x'$.
\end{definition} \citet{Milgrom} proposed a closely related property, which is imposed on conditional distributions $F_{X \mid \theta}$ rather than joint distributions $F$. (This is analogous to considering a signal $\sigma: \Theta \rightarrow \Delta(S)$ without fixing a prior on $\Theta$.) \begin{definition} Say that a signal realization $x$ is \emph{more favorable than} signal realization $x'$ if for every prior distribution $F_\theta \in \Delta(\Theta)$, the posterior distribution $F_{\theta \mid X}(\cdot \mid x)$ first-order stochastically dominates the posterior distribution $F_{\theta \mid X}(\cdot \mid x')$. \end{definition} That is, $x$ is more favorable than $x'$ if observing the realization $x$ leads to a FOSD-higher posterior belief about $\theta$ (compared to observing $x'$). If $x$ is more favorable than $x'$ for all $x>x'$, then we have a stronger version of the FOSD property (given in Definition \ref{def:FOSDProperty}) that holds not only for the specific joint distribution $F$, but for all joint distributions $F$ that are generated by $F_{X \mid \theta}$ and some choice of prior $F_\theta$. \begin{example} Recall that in the normal-updating setting with $\theta \sim \mathcal{N}(\mu, \sigma_\theta^2)$, $X= \theta + \eps$, $\eps \sim \mathcal{N}(0, \sigma_\eps^2)$, and $\theta \perp \!\!\! \perp \eps$, the agent's posterior belief about $\theta$ conditional on $X$ is \[\mathcal{N}\left(\frac{\sigma_\theta^2}{\sigma_\eps^2 + \sigma_\theta^2} X + \frac{\sigma_\eps^2}{\sigma_\eps^2 + \sigma_\theta^2} \mu , \frac{\sigma_\eps^2 \sigma_\theta^2}{\sigma_\eps^2 + \sigma_\theta^2}\right).\] This distribution is increasing (in the FOSD order) in the realization of $X$ for all parameters $\mu$ and $\sigma_\theta^2$. So $x$ is more favorable than $x'$ for every pair $x>x'$. \end{example} \section{How They are Related} \label{sec:Relationship} Let $\theta$ and $X$ be real-valued random variables defined on the same probability space.
We'll use $F$ to denote their joint distribution, and assume throughout that the densities $f_\theta$ and $f_X$ and conditional densities $f_{\theta \mid X}$ and $f_{X \mid \theta}$ exist. In this setting, our main properties from above are: \begin{itemize} \item[] \quad \textbf{A:} $(X,\theta)$ are affiliated. \item[] \quad \textbf{MLRP:} $\{f_{X \mid \theta}(\cdot \mid \theta)\}$ satisfies MLRP. \item[] \quad \textbf{FOSD:} For all $x > x'$, $F_{\theta \mid X}(\cdot \mid x) \geq_{FOSD} F_{\theta \mid X}(\cdot \mid x')$. \item[] \quad \textbf{MF:} For all $x > x'$, $x$ is more favorable than $x'$. \end{itemize} \noindent These properties are related in the following way: \[\mbox{\textbf{(A)}} \quad \Longleftrightarrow \quad \mbox{\textbf{(MLRP)}} \quad \Longleftrightarrow \quad \mbox{\textbf{(MF)}}\quad \Longrightarrow \quad \mbox{\textbf{(FOSD)}}\] where the one-directional implication from (MF) to (FOSD) is strict. See \citet{Castro} for an example of a distribution satisfying (FOSD) but not (MLRP). \begin{remark} (MLRP) is equivalent to (MF) but strictly stronger than (FOSD). Thus if a joint distribution $F$ satisfies (MLRP) then it must satisfy (FOSD), but $F$ can satisfy (FOSD) and fail (MLRP). On the other hand, a conditional distribution $F_{X \mid \theta}$ that satisfies (FOSD) for every completion to a joint distribution $F$ (i.e., for every choice of prior $F_\theta$) must also satisfy (MLRP). So ``FOSD for every prior'' is equivalent to MLRP, while ``FOSD for some prior'' is weaker. \end{remark} We've already established the equivalence between (A) and (MLRP) in Proposition \ref{prop:A_MLRP}. And since (FOSD) is just the special case of (MF) in which the prior is the marginal $F_\theta$ induced by $F$, (MF) implies (FOSD). The following result proves the equivalence of (MLRP) and (MF).
\begin{proposition}[\citet{Milgrom}] $x$ is more favorable than $x'$ if and only if for every $\theta > \theta'$, \begin{equation} \label{eq:Milgrom_MLRP} \frac{f_{X\mid \theta} (x\mid \theta)}{f_{X\mid \theta}(x' \mid \theta)} \geq \frac{f_{X\mid \theta'}(x \mid \theta')}{f_{X\mid \theta'}(x' \mid \theta')} \end{equation} \end{proposition} \begin{proof} We will first show that if (\ref{eq:Milgrom_MLRP}) is satisfied at every $\theta > \theta'$, then $x$ must be more favorable than $x'$. Fix any prior $F_\theta$ and parameter $\theta^* \in \Theta$. If $F_\theta(\theta^*) \in \{0,1\}$ then the conclusion is trivially reached. So suppose $F_\theta(\theta^*) \in (0,1)$. For any $\theta \leq \theta^*$ and $\tilde{\theta}>\theta^*$, (\ref{eq:Milgrom_MLRP}) implies \[\frac{f (x\mid \tilde{\theta})}{f(x \mid \theta)} \geq \frac{f(x' \mid \tilde{\theta})}{f(x' \mid \theta)}\] where we omit subscripts on the densities here and elsewhere in the proof to ease notation. Integrating over all $\tilde{\theta}$ such that $\tilde{\theta}>\theta^*$ (with respect to the prior distribution $F_\theta$), we obtain \[\frac{\int_{\tilde{\theta} > \theta^*} f (x\mid \tilde{\theta}) dF_\theta(\tilde{\theta})}{f(x \mid \theta)} \geq \frac{\int_{\tilde{\theta} > \theta^*} f(x' \mid \tilde{\theta}) dF_\theta(\tilde{\theta})}{f(x' \mid \theta)}\] or equivalently \[\frac{f(x \mid \theta)}{\int_{\tilde{\theta} > \theta^*} f (x\mid \tilde{\theta}) dF_\theta(\tilde{\theta})} \leq \frac{f(x' \mid \theta)}{\int_{\tilde{\theta} > \theta^*} f(x' \mid \tilde{\theta}) dF_\theta(\tilde{\theta})}.\] Integrating over all $\theta$ such that $\theta\leq\theta^*$, we obtain \[\frac{\int_{\theta \leq \theta^*} f(x \mid \theta) dF_\theta(\theta)}{\int_{\tilde{\theta} > \theta^*} f (x\mid \tilde{\theta}) dF_\theta(\tilde{\theta})} \leq \frac{\int_{\theta\leq\theta^*} f(x' \mid \theta) dF_\theta(\theta)}{\int_{\tilde{\theta} > \theta^*} f(x' \mid \tilde{\theta}) dF_\theta(\tilde{\theta})}.\] Recall that $f(x \mid 
\theta) f(\theta) = f(\theta \mid x)f(x)$, so the above display implies \[\frac{\int_{\theta \leq \theta^*} f(\theta \mid x) d\theta}{\int_{\tilde{\theta} > \theta^*} f(\tilde{\theta} \mid x) d\tilde{\theta}} \leq \frac{\int_{\theta\leq\theta^*} f(\theta \mid x') d\theta}{\int_{\tilde{\theta} > \theta^*} f(\tilde{\theta}\mid x') d\tilde{\theta}}\] or more simply \[\frac{F(\theta^* \mid x)}{1-F(\theta^* \mid x)} \leq \frac{F(\theta^* \mid x')}{1-F(\theta^* \mid x')} \] Since $\frac{y}{1-y}$ is a strictly increasing function in $y$, we have $F(\theta^* \mid x) \leq F(\theta^* \mid x')$ as desired. \medskip In the other direction, we will show that if $x$ is more favorable than $x'$, then (\ref{eq:Milgrom_MLRP}) holds everywhere. Consider any two parameter values $\theta > \theta'$, and let $F_\theta$ be a prior distribution supported on these two points with equal probability on each. Since by assumption $x$ is more favorable than $x'$, we have $F(\theta' \mid x) \leq F(\theta' \mid x')$, implying \[\frac{F(\theta'\mid x)}{ 1- F(\theta' \mid x)} \leq \frac{F(\theta'\mid x')}{ 1- F(\theta'\mid x')}\] or equivalently \[\frac{f(\theta'\mid x')}{ f(\theta \mid x')} \geq \frac{f(\theta'\mid x)}{f(\theta\mid x)}.\] Applying Bayes' rule again, we can rewrite the above as $\frac{f(x \mid \theta)}{f(x' \mid \theta)} \geq \frac{f(x \mid \theta')}{f(x' \mid \theta')}$, which is the desired conclusion. \end{proof} \begin{remark} \citet{Milgrom}'s result is not precisely the proposition above, but instead the equivalence between strict MLRP (as defined in Definition \ref{def:MLRP}) and a definition of ``more favorable'' that replaces FOSD with strict FOSD. Specifically, say that $F$ strictly first-order stochastically dominates $\widetilde{F}$ if $F(\theta) \leq \widetilde{F}(\theta)$ everywhere with strict inequality at some $\theta$.
(Equivalently, $\int u(\theta) dF(\theta) > \int u(\theta) d\widetilde{F}(\theta)$ for every strictly increasing function $u: \mathbb{R} \rightarrow \mathbb{R}$.) Say that $x$ is strictly more favorable than $x'$ if for every prior distribution $F_\theta$, the posterior distribution $F_{\theta \mid X}(\cdot \mid x)$ strictly first-order stochastically dominates $F_{\theta \mid X}(\cdot \mid x')$. Then, by substituting strict inequalities in place of weak inequalities in the proof above where appropriate, we can conclude that $\{f_{X\mid \theta}(\cdot \mid \theta)\}$ satisfies strict MLRP if and only if $x$ is strictly more favorable than $x'$.\footnote{Indeed, the same proof demonstrates a stronger (if slightly more cumbersome to state) result: $\{f_{X\mid \theta}(\cdot \mid \theta)\}$ satisfies strict MLRP if and only if $F_{\theta \mid X}(\theta \mid x) < F_{\theta \mid X}(\theta \mid x')$ at every $\theta$ such that $0<F_\theta(\theta)<1$.} \end{remark} We conclude by briefly summarizing other notions of positive dependence and placing the above properties relative to these. \begin{itemize} \item[] \textbf{Positive covariance (C):} $Cov(X,\theta) \geq 0$ \item[] \textbf{Positive quadrant dependence (QD):} $Cov(g(X),h(\theta)) \geq 0$ for all non-decreasing functions $g$ and $h$ \item[] \textbf{Association (As):} $Cov(g(X,\theta),h(X,\theta))\geq 0$ for all non-decreasing functions $g$ and $h$ \item[] \textbf{Left-Tail Decreasing (LT):} For all $x$, $\Pr(X \leq x \mid \theta \leq t)$ is non-increasing in $t$, and for all $t$, $\Pr(\theta \leq t \mid X \leq x)$ is non-increasing in $x$. \item[] \textbf{Inverse Hazard Rate Decreasing (IHR):} For all $x$, $F_{X \mid \theta} (x\mid t)/f_{X \mid \theta}(x\mid t)$ is non-increasing in $t$, and for all $t$, $F_{\theta \mid X}(t \mid x)/f_{\theta \mid X}(t\mid x)$ is non-increasing in $x$.
\end{itemize} \noindent These properties are extensively studied in, for example, \citet{Lehmann}, \citet{Esary}, \citet{Castro}, and Chapter 3 of \citet{Balakrishna2009}. The following chain of implications is summarized in \citet{Castro}: \begin{theorem} \emph{(A)} $\Longleftrightarrow$ \emph{(MLRP)} $\Longrightarrow$ \emph{(IHR)} $\Longrightarrow$ \emph{(FOSD)} $\Longrightarrow$ \emph{(LT)} $\Longrightarrow$ \emph{(As)} $\Longrightarrow$ \emph{(QD)} $\Longrightarrow$ \emph{(C)} \end{theorem} Thus the standard properties of affiliation and MLRP are in fact strong, implying all of the other properties but not in general implied by them. These properties are equivalent to one another in the special case in which the two variables are jointly normal. \begin{exercise} Suppose $(X_1, \dots, X_n)$ are jointly normal and exchangeable, where $\sigma^2=\Var(X_i)$ for each $i$, and $\rho=Cov(X_i,X_j)$ for each pair of indices $i,j$. Prove that these variables are affiliated if and only if $\rho \geq 0$. \begin{hint*} Use the fact given in Exercise \ref{ex:Average}. \end{hint*} \end{exercise} \section{When These Conditions Fail} \label{sec:LL} An example from \citet{LagzielLehrer} demonstrates the kind of counterintuitive result that can hold in settings where (A) and (MLRP) fail. An editor chooses which papers to publish. Papers have unknown quality graded on a 9-point scale (A+, A, A-, B+, B, B-, C+,C,C-), whose prior distribution is given in Figure \ref{fig:ImpactDistr}. \begin{figure}[H] \centering \includegraphics[scale=.3]{bias1.png} \caption{Distribution of Papers' Quality} \label{fig:ImpactDistr} \end{figure} The editor learns about quality via a noisy refereeing process, which generates an unbiased signal $X$ about the paper. The realization of $X$ is equal to the true quality with probability 0.8, and otherwise exactly two levels higher or lower than the true quality (each with probability 0.1). 
The distribution of $X$ is reported in Figure \ref{fig:XDistr}: \begin{figure}[H] \centering \includegraphics[scale=.35]{bias2.png} \caption{Distribution of Referee Signal} \label{fig:XDistr} \end{figure} The editor chooses a threshold and accepts all papers whose expected quality (given the referee's report) exceeds this threshold. Intuitively, we may expect that the editor faces a tradeoff between publishing more papers versus publishing higher quality papers, where a higher threshold corresponds to publishing fewer but higher quality papers. But observe that if the editor chooses to publish only papers with an expected quality that (weakly) exceeds $A$ (i.e., the top-rated 5\% of papers), then the expected value of the published work is close to $B+$. If the editor lowers the bar to $A-$ (i.e., the top-rated 13\%), then the expected value of the published work \emph{increases} to $A-$. Not only are more papers published, but their expected quality is higher. In this example, we have $\mathbb{E}(\theta \mid X=x) <\mathbb{E}(\theta \mid X=x')$ even while $x>x'$, so clearly the posterior belief at $x'$ does not first-order stochastically dominate the posterior belief at $x$. \citet{ChambersHealy} demonstrate an even stronger reversal by constructing signals such that the posterior belief at the lower signal realization first-order stochastically dominates the posterior belief at the higher signal realization. Notably, their result relies on natural-seeming signals that satisfy various reasonable properties. \begin{theorem} For every non-degenerate, bounded $\theta$ there exists a signal structure $X$ and two signal realizations $x'>x$ such that $F(\theta \mid X=x')$ is strictly first-order stochastically dominated by $F(\theta \mid X=x)$. Furthermore, $X$ can be chosen to have the following properties: i) $X$ is an additive signal structure, and ii) the noise $e:=X-\theta$ is mean-zero with a symmetric, quasiconcave density and bounded support.
\end{theorem} \noindent See \citet{Heinsalu} for a strengthening of \citet{LagzielLehrer}'s example using this result, in which lowering the threshold not only increases the expected quality, but results in a quality distribution for published papers that first-order stochastically dominates the one that would obtain at the higher threshold. \section{Additional Exercises} \begin{exercise} Let $Z_1, \dots, Z_n$ be affiliated and let $h: \mathbb{R}^n \rightarrow \mathbb{R}$ be any function that is nondecreasing in each of its coordinates. Prove that the function \[\mathbb{E}(h(Z_1, \dots, Z_n) \mid Z_1 = z_1)\] is nondecreasing in $z_1$. \end{exercise} \begin{exercise} Let $X$ be any real-valued random variable and let $f: \mathbb{R} \rightarrow \mathbb{R}$ and $g: \mathbb{R} \rightarrow \mathbb{R}$ be bounded nondecreasing functions. Prove that $Cov(f(X),g(X)) \geq 0$. (Do not apply the FKG inequality.) \begin{hint*} There are at least two short proofs, one that uses Fubini's theorem and the fact that $\mathbb{E}(XY) = \mathbb{E}(X)\mathbb{E}(Y)$ for any independent random variables $X$ and $Y$, and another which relies entirely on elementary (if not obvious) arguments. \end{hint*} \end{exercise} \begin{exercise} Ann and Bob share the same prior $p$ over an unknown real-valued state $\theta$, and observe a common realization of the signal $X$, but disagree about the distribution of $X$. Ann believes that $X = \theta + \eps$, where $\theta \perp \!\!\! \perp \eps$ and $\eps $ is a real-valued noise term with density $f_\eps$. Bob believes that $X = \theta + \eps + \Delta$ for some $\Delta > 0$. That is, Ann perceives Bob as adding $\Delta$ to the realization of the signal, while Bob perceives Ann as subtracting $\Delta$ from the realization of the signal.
Let $f_A$ denote the joint density of $(\theta, X)$ according to Ann's model and $f_B$ denote the joint density according to Bob's model, with $\mathbb{E}^A$ and $\mathbb{E}^B$ denoting their respective expectation operators. Impose the monotone likelihood ratio property on $\{f_A(\cdot \mid \theta)\}$, that is, \[\frac{f_A(x' \mid \theta')}{f_A(x \mid \theta')} \geq \frac{f_A(x' \mid \theta)}{f_A(x \mid \theta)} \quad \forall x'>x, \theta'>\theta.\] \begin{itemize} \item[(a)] Prove that $\{f_B(\cdot \mid \theta)\}$ also satisfies MLRP. \item[(b)] Prove that $\mathbb{E}^A[\mathbb{E}^B[\theta \mid X]]$ is decreasing in $\Delta$, and interpret this result. \item[(c)] Suppose that Ann and Bob now additionally observe a common vector of iid signals $(Y_1, Y_2, \dots, Y_N)$ where each $Y_i = \theta + \delta_i$ with $\theta \perp \!\!\! \perp \delta_i$ and $\delta_i$ are iid across signals. Prove that \[\mathbb{E}^A[\mathbb{E}^B[\theta \mid X, Y_1, \dots, Y_N]] \leq \mathbb{E}^A[\mathbb{E}^B[\theta \mid X, Y_1, \dots, Y_N, Y_{N+1}]]\] for every $N\geq 1$. Again, interpret the result. \end{itemize} \end{exercise} \chapter{Comparing Information I: The Blackwell Order} When an agent has access to a choice between multiple signals, we may desire to order these signals based on how informative they are. Intuition can guide us on how to define such an ordering in specific cases, for example: \begin{itemize} \item Adding noise to a signal decreases its informativeness. \item Observing the realization of $(X,Y)$ is more informative than observing the realization of $X$ alone. \end{itemize} Any informativeness ordering should satisfy these properties, but there are different ways to generalize from here. One approach is to fix a decision problem and characterize the instrumental value of the signal for that decision problem.
Alternatively, we could look for a universal informativeness ordering over signals that holds for all decision problems (as will be the focus of this chapter). Yet another approach is to quantify the ``signal content'' contained within the signal based on the physical difficulty of producing or processing that information (see Chapter \ref{sec:CostofInformation}). In this chapter we introduce the Blackwell partial order on signals, which considers one signal to be more informative than another if it is more useful for all decision problems. If $\sigma$ dominates $\sigma'$ in this Blackwell order, we will say that \emph{$\sigma$ is more informative than $\sigma'$} or that \emph{$\sigma$ Blackwell-dominates $\sigma'$}. The following sections demonstrate five perspectives on this order, culminating in Blackwell's theorem (establishing their equivalence) and the proof of this theorem. \section{Garblings} We may consider a signal to be more informative than another if the latter is a noised-up version of the former. \begin{definition}[Markov matrix] A matrix $M$ is a \emph{Markov matrix} if its entries are nonnegative and its rows sum to 1. \end{definition} Recall that when the set of states and the set of signal realizations are finite, we can represent any signal as a Markov matrix. \begin{definition}[Garblings, Finite Version] \label{def:Garbling} Markov matrix $ P $ is a \emph{garbling} of Markov matrix $ Q $ if there exists a Markov matrix $ M $ such that $ QM=P $. \end{definition} \begin{example} \label{ex:Garbling} Let $\Theta = \{\theta_1,\theta_2\}$ and consider the signals \[P = \left(\begin{array}{cc} 3/4 & 1/4 \\ 1/4 & 3/4\end{array}\right) \quad \quad \quad Q = \left(\begin{array}{cccc} 9/16 & 3/16 & 3/16 & 1/16 \\ 1/16 & 3/16 & 3/16 & 9/16 \end{array}\right) \] where as usual the rows are indexed to states and the columns are indexed to signal realizations.
Then since \[ \underbrace{\left(\begin{array}{cccc} 9/16 & 3/16 & 3/16 & 1/16 \\ 1/16 & 3/16 & 3/16 & 9/16 \end{array}\right)}_{Q} \underbrace{\left(\begin{array}{cc} 1 & 0 \\ 1 & 0 \\ 0 & 1 \\ 0 & 1 \\ \end{array}\right)}_{M} = \underbrace{\left(\begin{array}{cc} 3/4 & 1/4 \\ 1/4 & 3/4\end{array}\right)}_{P}\] where $M$ is a Markov matrix, we can conclude that $P$ is a garbling of $Q$. This example has a particularly nice intuition. Label the possible realizations of the first information structure $P$ as $s_1$ and $s_2$, and consider the signal which is two independent realizations of $P$. The set of possible realizations of this new signal is then $\{ s_1s_1, s_1s_2,s_2s_1,s_2s_2\}$ with the conditional distributions over these realizations given precisely by $Q$. So observing $P$ is statistically equivalent to observing $Q$ and forgetting the second realization. Clearly then $P$ is less informative than $Q$. \end{example} More generally, we can replace the Markov matrix $M$ in Definition \ref{def:Garbling} with a Markov kernel. \begin{definition}[Garblings, General Version] The signal $ \sigma':\Theta \to\Delta(S') $ is a \emph{garbling} of the signal $ \sigma:\Theta\to\Delta(S) $ if there exists a Markov kernel $ \gamma:S\to \Delta (S') $ such that $$ \sigma'(s'\mid \theta) = \int_{s\in S} \gamma (s'\mid s) \sigma(s\mid \theta) ds $$ \end{definition} \begin{example} Let $\theta$, $\eps$, and $\delta$ be independent real-valued random variables with densities $f_\theta$, $f_\eps$, and $f_\delta$. Then the signal $X = \theta + \eps + \delta $ is a garbling of $Y = \theta+\eps$, since \[f_{X \mid \theta}(x \mid t) = \int_{y \in \mathbb{R}} f_\delta (x - y) f_{Y \mid \theta} (y \mid t ) dy\] where $\gamma(x \mid y) := f_\delta(x-y)$ defines a Markov kernel. \end{example} \begin{example} Consider an arbitrary finite set $\Theta$ and let $I$ be the $\vert \Theta \vert \times \vert \Theta \vert$ identity matrix.
Then for any set of signal realizations $S$ and any $\vert \Theta \vert \times \vert S \vert$ Markov matrix $Q$, we have $IQ=Q$, so $Q$ is a garbling of $I$. \end{example} \begin{exercise} Is it possible for $P$ and $Q$ to both be garblings of one another if $P \neq Q$? Provide an example if so, and otherwise prove that it is not possible. \end{exercise} \begin{remark} \label{remark:Garbling} Let $X$ and $X'$ respectively denote the random realizations of the signals $\sigma$ and $\sigma'$. Then $\theta$, $X$, and $X'$ are random variables which can be defined on a common probability space. The property that $\sigma'$ is a garbling of $\sigma$ does not however pin down the joint distribution of $(\theta,X,X')$. What it guarantees is that there is a way of generating these variables such that $\theta$ is independent of $X'$ conditional on $X$, in which case $\theta \mid X$ is identical in distribution to $\theta \mid X, X'$.\footnote{First draw the state $\theta$, then draw $X$ according to its conditional distribution, and finally draw $X'$ according to the garbling kernel $\gamma$, independent of $\theta$.} Other ways of generating these variables---still consistent with $\sigma'$ being a garbling of $\sigma$---can yield different relationships. For example, suppose $\theta \sim \mathcal{N}(0,1)$ while \begin{align*} X&=\theta + \eps_1 \\ X'&=\theta + \eps_2 \end{align*} where $\eps_1 \sim \mathcal{N}(0,1)$ and $\eps_2 \sim \mathcal{N}(0,2)$ are both independent of $\theta$. Then clearly the latter signal is a garbling of the former. If we further assume that $\eps_2 = \eps_1 + \delta$ where $\delta \sim \mathcal{N}(0,1)$ is an independent noise term, then the following statements are true: \begin{itemize} \item $X'$ is independent of $\theta$ conditional on $X$. \item $X'$ is not independent of $X$ conditional on $\theta$ (since they are further related through the common component $\eps_1$). 
\end{itemize} On the other hand, if we assume that $\eps_1$ and $\eps_2$ are independent, then the statements above are reversed: \begin{itemize} \item $X'$ is not independent of $\theta$ conditional on $X$ (since $X'$ provides additional information about $\theta$ beyond what is revealed by $X$). \item $X'$ is independent of $X$ conditional on $\theta$. \end{itemize} Thus in general, the assumption that two signals are related by a garbling does not imply either conditional independence statement given above. \end{remark} \section{Decision Problems} \label{sec:DecisionProblem} Our next two definitions are based on the instrumental value of the signal for decision problems. \begin{definition} A decision problem is any pair $\bold{D} = (A,u)$ where $A$ is an action set and $u: A \times \Theta \rightarrow \mathbb{R}$ is a payoff function. \end{definition} \noindent The full decision problem is described as follows. Fix a prior $p \in \Delta(\Theta)$ and a signal $\sigma: \Theta \rightarrow \Delta(S)$. \begin{enumerate} \item The agent chooses a strategy $\alpha: S \rightarrow A$. \item The state $\theta \sim p$ and signal realization $s \sim \sigma(\cdot \mid \theta) $ are realized, and the agent takes action $\alpha(s)$. The agent's payoff is $u(\alpha(s),\theta)$. \end{enumerate} Without the benefit of further information, the best expected payoff the agent can achieve is \begin{equation} \label{payoff:NoInfo} \sup_{a \in A} \mathbb{E}\left[u(a,\theta)\right] \end{equation} With the benefit of the signal, the agent can achieve an expected payoff of \begin{equation} \label{payoff:Info} \sup_{\alpha: S \rightarrow A} \mathbb{E}\left[u(\alpha(s),\theta)\right] = \mathbb{E} \left[ \sup_{a \in A} \mathbb{E}\left[u(a,\theta) \mid s \right]\right] \end{equation} where we abuse notation on the LHS by using $s$ to denote the random variable which is the realization of the signal. 
On the RHS, the inner expectation is with respect to uncertainty about $\theta$ (conditional on the realization of $s$) and the outer expectation is with respect to uncertainty about $s$. One measure of the value of the signal is the difference in these expected payoffs, i.e., \begin{align*} V_{\bold{D},p}(\sigma) \equiv \mathbb{E} \left[ \sup_{a \in A} \mathbb{E}\left[u(a,\theta) \mid s \right]\right] - \sup_{a \in A} \mathbb{E}\left[u(a,\theta)\right] \end{align*} where $\bold{D}=(A,u)$ is the decision problem and $p \in \Delta(\Theta)$ is the agent's prior. \begin{remark} It is without loss to assume the use of pure strategies above, but in the subsequent development of the Blackwell order it will be useful to replace $a$ with a mixed strategy $\alpha \in \Delta(A)$ in (\ref{payoff:NoInfo}) and $\alpha$ with a stochastic map $\alpha: S \rightarrow \Delta(A)$ in (\ref{payoff:Info}). \end{remark} \begin{example} Suppose $\Theta = \{\theta_1, \theta_2\}$ with a uniform prior $p$. The decision problem is $(A,u)$ where $A = \{a_1,a_2\}$ and the utility function $u: A \times \Theta \rightarrow \mathbb{R}$ assigns a payoff of 1 when the action matches the state, and zero otherwise. The signal $\sigma$ is \[\begin{array}{ccc} & s_1 & s_2 \\ \theta_1 & q & 1-q \\ \theta_2 & 1-q & q \end{array}\] where $q > 1/2$. Then the agent's ex-ante payoff is maximized by choosing the strategy $\alpha$ that maps $s_1$ to action $a_1$ and $s_2$ to action $a_2$, with an expected payoff of $q$. In the absence of information the agent's best payoff is $1/2$, so $V_{\mathbf{D},p}(\sigma) = q-1/2$. \end{example} \begin{example} Suppose $\Theta = \mathbb{R}$ with a prior $\theta \sim \mathcal{N}(0,\sigma_\theta^2)$. The decision problem is $(A,u)$ where $A = \mathbb{R}$ and $u(a,\theta) = -(a-\theta)^2$. The signal is $X = \theta+\eps$ where $\eps \sim \mathcal{N}(0, \sigma_\eps^2)$ is independent of $\theta$. 
Then the agent's ex-ante payoff is maximized by choosing the strategy $\alpha(x) = \mathbb{E}(\theta \mid X=x)$, with an expected payoff of \[\mathbb{E}_X\left[ - (\mathbb{E}(\theta \mid X) - \theta)^2\right] = \mathbb{E}_X\left[-\Var(\theta \mid X)\right] = -\frac{\sigma_\theta^2 \sigma_\eps^2}{\sigma_\theta^2 + \sigma_\eps^2}\] using Fact \ref{fact:BiVar} in the final equality (and in particular, the property that posterior variance is independent of the signal realization). In the absence of information the agent's best payoff is $-\Var(\theta) = -\sigma_\theta^2$, so $V_{\mathbf{D},p}(X) = \sigma_\theta^2 - \frac{\sigma_\theta^2 \sigma_\eps^2}{\sigma_\theta^2 + \sigma_\eps^2} = \frac{\sigma_\theta^4}{\sigma_\theta^2 + \sigma_\eps^2}$. \end{example} In any specific decision problem, a signal that is informative (in the sense of moving the agent's beliefs about $\theta$) may nevertheless have no instrumental value, as demonstrated in the following exercise. \begin{exercise} \label{ex:Meyer} Suppose $\Theta = \{1,2\}$ and let $p$ assign equal probability to either state. Consider the decision problem $(A,u)$ with $A= \{1,2\}$ and $u(a,\theta)=\mathbbm{1}(a = \theta)$. Let $\sigma_P$ and $\sigma_Q$ respectively be the two signals described by $P$ and $Q$ in Example \ref{ex:Garbling}. Show that $V_{\bold{D},p}(\sigma_P) = V_{\bold{D},p}(\sigma_Q)$. That is, the second independent observation of signal $P$ has no value to the agent over the first. \end{exercise} \subsection{Uniformly Better} We'll say that a signal is more informative than another if it is more useful in every decision problem and for every prior belief. \begin{definition} \label{def:MoreInformative} The signal $ \sigma $ is more informative than $ \sigma' $ if $V_{\bold{D},p}(\sigma) \geq V_{\bold{D},p}(\sigma')$ for every decision problem $\bold{D}$ and every prior $p$. \end{definition} This is a strong condition, and we generally won't be able to order signals in this way.
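The finite examples above are easy to check by direct computation. The sketch below is ours (it assumes NumPy, and the helper name is not from the text); it evaluates $V_{\mathbf{D},p}(\sigma)$ for any finite decision problem and verifies the binary example, where the value should equal $q - 1/2$.

```python
import numpy as np

def value_of_info(prior, signal, U):
    """V_{D,p}(sigma) for a finite decision problem.

    prior:  (n_states,) prior probabilities p(theta)
    signal: (n_states, n_signals), rows are sigma(. | theta)
    U:      (n_actions, n_states) payoffs u(a, theta)
    """
    joint = prior[:, None] * signal              # P(theta, s)
    no_info = (U @ prior).max()                  # sup_a E[u(a, theta)]
    with_info = (U @ joint).max(axis=0).sum()    # E[ sup_a E[u(a, theta) | s] ]
    return with_info - no_info

# Binary example: uniform prior, symmetric signal with accuracy q,
# and u(a, theta) = 1{a = theta}.
q = 3 / 4
prior = np.array([0.5, 0.5])
signal = np.array([[q, 1 - q], [1 - q, q]])
U = np.eye(2)
assert abs(value_of_info(prior, signal, U) - (q - 0.5)) < 1e-12
```

The same routine can be used to check Exercise \ref{ex:Meyer}: feeding in the four-column matrix $Q$ from Example \ref{ex:Garbling} returns the same value as $P$.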
\begin{exercise} Let $ \Theta = \{\theta_1,\theta_2,\theta_3\}$ with a uniform prior $p$. Let $A=\{a_1,a_2\} $ and consider two utility functions: Let $u: A \times \Theta \rightarrow \mathbb{R}$ take value 1 if $ (a,\theta)\in \{ (a_1,\theta_1), (a_2, \theta_2), (a_2, \theta_3)\}$, and value 0 otherwise. Let $u': A \times \Theta \rightarrow \mathbb{R}$ take value 1 if $(a,\theta)\in \{ (a_1,\theta_1), (a_2, \theta_2), (a_1, \theta_3)\}$, and value 0 otherwise. Consider the two information structures \begin{center} $\sigma$: \begin{tabular}{ccc} & $ s_1 $ & $ s_2 $\\ $ \theta_1 $& $ 1 $ & $ 0 $\\ $ \theta_2 $& $ 0 $ & $ 1 $\\ $ \theta_3 $& $ 0 $ & $ 1 $\\ \end{tabular} \quad \quad \quad \quad $\sigma'$: \begin{tabular}{ccc} & $ s_1 $ & $ s_2 $\\ $ \theta_1 $& $ 1 $ & $ 0 $\\ $ \theta_2 $& $ 0 $ & $ 1 $\\ $ \theta_3 $& $ 1 $ & $ 0 $\\ \end{tabular} \end{center} Show that $V_{\bold{D},p}(\sigma) > V_{\bold{D},p}(\sigma')$ where $\bold{D}=(A,u)$, but $V_{\bold{D}',p}(\sigma) < V_{\bold{D}',p}(\sigma')$ where $\bold{D}' = (A,u')$, i.e. the agent prefers the first signal given payoffs $u$ and the second given payoffs $u'$. \end{exercise} The definition of uniformly better varies both the decision problem and also the prior, but the additional flexibility due to arbitrary priors is not substantial: \begin{exercise} Prove that if there is a full-support prior $p_0 \in \Delta(\Theta)$ such that \[V_{\bold{D},p_0}(\sigma) \geq V_{\bold{D},p_0}(\sigma') \quad \mbox{for every decision problem $\bold{D}$}\] then $\sigma$ is more informative than $\sigma'$. \end{exercise} \subsection{Feasible Actions} Our third definition says that a signal is more informative if observing the realization of the signal allows the agent to more effectively tailor his action to the state. \begin{definition} \label{def:Feasible} Fix any action set $A$.
A conditional distribution over actions $ d:\Theta \to \Delta(A) $ is \emph{feasible under $ \sigma: \Theta \rightarrow \Delta(S)$} if there exists a mapping $\alpha :S\to \Delta(A) $ such that $$ d(a\mid \theta) = \int_{s\in S} \alpha(a\mid s) \sigma(s\mid \theta) ds$$ We'll use $ \Lambda_\sigma (A)$ to denote the set of all feasible distributions under $\sigma$ given action set $A$. \end{definition} When $\sigma$ is a fully revealing signal (e.g., $\sigma: \Theta \rightarrow \Delta(\Theta)$ satisfying $\sigma(\theta \mid \theta) = 1$ for every $\theta$), then every mapping $d: \Theta \rightarrow \Delta(A)$ is feasible under $\sigma$. (Simply set $\alpha=d$.) When $\sigma$ is uninformative---for example, a constant---then $\Lambda_\sigma(A)$ consists of all mappings $d: \Theta \rightarrow \Delta(A)$ that take each state into the same distribution over actions. Larger sets $\Lambda_\sigma(A)$ allow the agent more flexibility in tailoring his action to the state, and in this sense are more valuable. \begin{remark} Observe that $\alpha$ is itself a Markov kernel, so $d$ can be interpreted as a garbling of $\sigma$ where $A$ is the set of signal realizations. \end{remark} \section{Dispersion of Posterior Beliefs} Our final perspective adopts the view on a signal introduced in Section \ref{sec:Bayes}, where a signal is identified with the distribution over posterior beliefs that it induces. We consider the dispersion of these posterior beliefs. Given an uninformative signal, the agent's posterior is deterministically equal to the agent's prior, so there is no dispersion. And if the signal reveals the state directly, then the posterior belief is a point mass on the true state, which ``maximally varies" depending on the realization of the signal. We may expect more informative signals to be associated with more dispersed beliefs, but the measure of dispersion is important.
For example, using variance to measure dispersion yields a complete order on signals, which cannot possibly be equivalent to the (strict) partial order described in the previous definitions. Below we define two alternative measures for dispersion---mean-preserving spreads and dominance in the convex order---which will turn out to again characterize the previous partial order on signals. \subsection{Mean-Preserving Spreads} \begin{definition} A distribution of posterior beliefs $F \in \Delta(\Delta(\Theta))$ is a \emph{mean-preserving spread} of another distribution $\widetilde{F}$ if there exist $\Delta(\Theta)$-valued random variables $Z, \widetilde{Z}$ satisfying the following conditions: \begin{enumerate} \item $Z \sim F, \widetilde{Z} \sim \widetilde{F}$ \item $\mathbb{E}(Z \mid \widetilde{Z}) = \widetilde{Z}$ (thus in particular $\mathbb{E}(Z) = \mathbb{E}(\widetilde{Z})$) \end{enumerate} \end{definition} \noindent The name ``mean-preserving spread" reflects that each realization of $\widetilde{Z}$ is spread out into a random $Z$ with the same mean. When $Z$ and $\widetilde{Z}$ are both real-valued, then the second condition can also be stated as $Z = \widetilde{Z} + \eps$ for some random variable $\eps$ satisfying $\mathbb{E}(\eps \mid \widetilde{Z}) = 0$. \begin{example} Consider the two signals \[P = \left(\begin{array}{cc} 3/4 & 1/4 \\ 1/4 & 3/4\end{array}\right) \quad \quad \quad Q = \left(\begin{array}{cccc} 9/16 & 3/16 & 3/16 & 1/16 \\ 1/16 & 3/16 & 3/16 & 9/16 \end{array}\right) \] from Example \ref{ex:Garbling}, where the set of states is $\Theta = \{\theta_1,\theta_2\}$. Let the agent's prior be uniform over these states. Then the agent has two possible posterior beliefs after observing $P$, $(3/4,1/4)$ and $(1/4,3/4)$, which are equally likely. 
We will write the distribution of posterior beliefs as \[F_P = 1/2 \cdot (3/4,1/4) + 1/2 \cdot (1/4,3/4).\] Under $Q$, the distribution of posterior beliefs is instead \[F_Q = 5/16 \cdot (9/10,1/10) + 3/8 \cdot (1/2,1/2) + 5/16 \cdot (1/10,9/10).\] We will now show that $F_Q$ is a mean-preserving spread of $F_P$. Let $\widetilde{Z}$ be a random variable satisfying $\widetilde{Z} \sim F_P$ and construct the random variable $Z$ given $\widetilde{Z}$ as follows: \begin{itemize} \item If $\widetilde{Z} = (1/4,3/4)$ then $Z=(1/10,9/10)$ with probability $5/8$ and $Z=(1/2,1/2)$ with probability $3/8$. \item If $\widetilde{Z} = (3/4,1/4)$ then $Z = (9/10,1/10)$ with probability $5/8$ and $Z= (1/2,1/2)$ with probability $3/8$. \end{itemize} Then $\mathbb{E}(Z \mid \widetilde{Z}) = \widetilde{Z}$ and also $Z \sim F_Q$, so $F_Q$ is a mean-preserving spread of $F_P$ as desired. This construction is depicted in Figure \ref{fig:MPS}. \begin{figure}[H] \begin{center} \includegraphics[scale=0.6]{mps.pdf} \end{center} \caption{Depiction of the mean-preserving spread, where the numbers represent the probability of state $\theta_1$.} \label{fig:MPS} \end{figure} \end{example} \begin{exercise} Let $\Theta = \{\theta_1, \theta_2\}$ and consider the two signals \[P = \left(\begin{array}{cc} 2/3 & 1/3 \\ 1/4 & 3/4\end{array}\right) \quad \quad \quad Q = \left(\begin{array}{ccc} 1/3 & 1/2 & 1/6 \\ 1/8 & 1/2 & 3/8 \end{array}\right) \] Define $F_P$ to be the distribution of posterior beliefs induced by $P$ and $F_Q$ to be the distribution of posterior beliefs induced by $Q$. Prove that $F_P$ is a mean-preserving spread of $F_Q$. \end{exercise} \begin{exercise} Suppose $Y_1, Y_2, \dots, Y_n$ are independent and identically distributed random variables, and define $\overline{Y}_n = \frac1n \sum_{i=1}^n Y_i$ to be their sample average. Let $n' < n$ and define $\overline{Y}_{n'} = \frac{1}{n'} \sum_{i=1}^{n'} Y_i$.
Prove that the distribution of $\overline{Y}_{n'}$ is a mean-preserving spread of the distribution of $\overline{Y}_n$. \end{exercise} \subsection{Convex Order} \label{sec:ConvexOrder} \noindent Another partial order of dispersion is the following: \begin{definition} A distribution of posterior beliefs $ F \in \Delta(\Delta(\Theta))$ \emph{dominates} another distribution $G$ \emph{in the convex order} if for every continuous convex function $h: \Delta (\Theta ) \rightarrow \mathbb{R}$, \[\int_{\Delta(\Theta)} h(p) dF(p) \geq \int_{\Delta(\Theta)} h(p) dG(p)\] \end{definition} \noindent This implies that $F$ and $G$ have the same mean (choosing $h(p)=p(\theta)$ and $h(p)=-p(\theta)$ for each $\theta$, both linear and hence convex) and that $F$ has the larger variance (choosing $h(p) = \| p \|^2$). You may recall the concept of \emph{second order stochastic dominance}: \begin{definition} For any lotteries $F$ and $G$ on $\mathbb{R}$, $F$ \emph{second-order stochastically dominates} $G$ if and only if \[\int_{\mathbb{R}} h(x) dF(x) \geq \int_{\mathbb{R}} h(x) dG(x)\] for every nondecreasing and concave function $h$. \end{definition} Dominance in the convex order is stronger than SOSD. \begin{exercise} Prove that if $F$ dominates $G$ in the convex order, then $G$ second order stochastically dominates $F$. \end{exercise} The converse is not in general true. \begin{example} Let $G$ be a distribution uniform on $[1,2]$ and let $F$ be a point mass at zero. Then $G$ second order stochastically dominates $F$ but $F$ does not dominate $G$ in the convex order. \end{example} Intuitively, second-order stochastic dominance confounds changes in the dispersion of the distribution with shifts in the distribution, while dominance in the convex order isolates the former comparison. \section{Blackwell's Theorem and Proof} \label{sec:BlackwellProof} We now state and prove \citet{Blackwell}'s theorem, which demonstrates the equivalence of these five definitions.
For the proof we will work with finite sets (in particular assuming finite $\Theta$) but several parts of the proof extend more generally. \begin{theorem} The following are equivalent: \begin{enumerate} \item $ \sigma' $ is a garbling of $ \sigma $. \item $ \sigma $ is more informative than $ \sigma' $. \item $ \Lambda_\sigma (A) \supseteq \Lambda_{\sigma'} (A) $ for every finite action set $A$. \item For any prior on $\Theta$, if we define $F$ and $F'$ to be the distributions of posterior beliefs induced by $\sigma$ and $\sigma'$ (under this prior), then $F$ is a mean-preserving spread of $F'$. \item For any prior on $\Theta$, if we define $F$ and $F'$ to be the distributions of posterior beliefs induced by $\sigma$ and $\sigma'$ (under this prior), then $F$ dominates $F'$ in the convex order. \end{enumerate} \end{theorem} Several proofs exist for different parts of this result (see e.g., \citet{Blackwell} and \citet{LeshnoSpector}). Our proof of the equivalence of (1)-(3) below is based on \citet{henrique}, which presents a particularly simple and elegant argument. \bigskip \begin{proof} Throughout, given stochastic mappings $ \alpha:X\to \Delta(Y) $ and $ \beta:Y\to \Delta(Z) $, let \[ \beta \circ \alpha (z\mid x) \equiv \sum_{y\in Y} \beta(z\mid y)\alpha (y\mid x) \quad \quad \forall x \in X, z \in Z. \] ($ 1\Rightarrow 3$) 1 implies existence of a mapping $\gamma: S \rightarrow \Delta(S')$ such that $\gamma \circ \sigma = \sigma'$, as illustrated below: \begin{center} \begin{Tabular}{l} \begin{tikzpicture} \matrix (m) [matrix of math nodes,row sep=3em,column sep=4em,minimum width=2em] { \Theta & S \\ S' & \\}; \path[-stealth] (m-1-1) edge node [left] {$\sigma'$} (m-2-1) edge node [above] {$\sigma$} (m-1-2) (m-1-2) edge node [below] {$ \gamma $} (m-2-1); \end{tikzpicture} \end{Tabular} \end{center} \noindent Consider any action set $A$ and mapping $\alpha': S' \rightarrow \Delta(A)$, where $d = \alpha' \circ \sigma'$ is a feasible distribution under $\sigma'$. 
Define $\alpha = \alpha' \circ \gamma$. Then \[\alpha \circ \sigma = (\alpha' \circ \gamma) \circ \sigma = \alpha' \circ (\gamma \circ \sigma) = \alpha' \circ \sigma' = d\] using associativity of the operation $\circ$. So $d$ is feasible also under $\sigma$, as depicted in the figure below: \begin{center} \begin{Tabular}{l} \begin{tikzpicture} \matrix (m) [matrix of math nodes,row sep=3em,column sep=4em,minimum width=2em] { \Theta & S \\ S' & A \\}; \path[-stealth] (m-1-1) edge node [left] {$\sigma'$} (m-2-1) edge node [above] {$\sigma$} (m-1-2) (m-2-1.east|-m-2-2) edge node [below] {$\alpha'$} (m-2-2) (m-1-2) edge node [right] {$\alpha$} (m-2-2) edge node [below] {$ \gamma $} (m-2-1); \end{tikzpicture} \end{Tabular} \end{center} \medskip ($ 3\Rightarrow 1 $) Let the action set be $S'$ and define $\alpha'$ to be the identity mapping $id_{S'}: S' \rightarrow \Delta(S')$ which satisfies $ id_{S'}(s')=\delta_{s'} $ for all $s' \in S'$ (where $\delta_{s'}$ denotes a point mass at $s'$). By 3, there must exist some $\alpha: S \rightarrow \Delta(S')$ such that \[\alpha \circ \sigma = id_{S'} \circ \sigma'\] The RHS reduces to $\sigma'$ since for any $s' \in S'$, \[ id_{S'} \circ \sigma'(s' \mid \theta) = \sum_{s \in S'} id_{S'}(s' \mid s) \sigma'(s \mid \theta) = \sigma'(s' \mid \theta).\] Thus $\alpha \circ \sigma = \sigma'$. But this implies that $\sigma'$ is a garbling of $\sigma$, as depicted below. \begin{center} \begin{Tabular}{l} \begin{tikzpicture} \matrix (m) [matrix of math nodes,row sep=3em,column sep=4em,minimum width=2em] { \Theta & S \\ S' & S' \\}; \path[-stealth] (m-1-1) edge node [left] {$\sigma'$} (m-2-1) edge node [above] {$\sigma$} (m-1-2) (m-2-1.east|-m-2-2) edge node [below] {$id_{S'}$} (m-2-2) (m-1-2) edge [dashed] node [right] {$\alpha$} (m-2-2); \end{tikzpicture} \end{Tabular} \end{center} \medskip ($3 \Rightarrow 2$) Clear. \medskip ($2 \Rightarrow 3$) Suppose 3 fails. 
Then there is a finite action set $A$ and a vector $\lambda' \in \Lambda_{\sigma'}(A)$ such that $ \lambda'\not \in \Lambda_\sigma(A)$. The set $ \Lambda_\sigma(A) $ is a compact and convex subset of $ \mathbb{R}^{\vert \Theta \vert \times \vert A \vert} $ (you will be asked to prove this in Exercise \ref{ex:LsA}). Thus by the Separating Hyperplane Theorem, there exists a vector $ v\in \mathbb{R}^{\vert \Theta \vert \times \vert A \vert} $ such that for all $\lambda \in \Lambda_\sigma(A)$, \begin{equation} \label{eq:SeparatingHyperplane} \sum v(a,\theta)\lambda(a, \theta) < \sum v(a,\theta)\lambda'(a, \theta) \end{equation} as depicted in Figure \ref{fig:SeparatingHyperplane}. \begin{figure}[H] \begin{center} \includegraphics[scale=0.4]{blackwellproof.pdf} \end{center} \caption{Separation of $\lambda'$ from $\Lambda_\sigma(A)$.} \label{fig:SeparatingHyperplane} \end{figure} \noindent Consider an agent with a uniform prior $p$ on $\Theta$ and utility function $v$, and define $d(a\mid \theta) \equiv \lambda(a, \theta)$ and $d'(a \mid \theta) \equiv \lambda'(a,\theta)$. Then \begin{align*} \sup_{\alpha: S \rightarrow \Delta(A)} \sum_{a,s,\theta} v(a,\theta) \alpha(a \mid s)p(\theta,s) & = \sup_{\alpha: S \rightarrow \Delta(A)} \frac{1}{\vert \Theta \vert} \sum_{\theta, a,s} \sigma(s \mid \theta) \alpha(a \mid s) v(a,\theta) \\ & = \sup_{d \in \Lambda_\sigma(A)} \frac{1}{\vert \Theta \vert} \sum_{\theta,a} d(a \mid \theta) v(a,\theta) \\ & < \frac{1}{\vert \Theta \vert} \sum_{\theta,a} d'(a \mid \theta)v(a, \theta) \end{align*} using (\ref{eq:SeparatingHyperplane}) in the final inequality. Thus there is a decision problem and a prior for which an agent can achieve a strictly higher payoff by conditioning on $\sigma'$ rather than on $\sigma$, and so 2 fails. \medskip ($1 \Rightarrow 4$) Let $X$ and $X'$ respectively denote the random realizations of the signals $\sigma$ and $\sigma'$.
Since by assumption $\sigma'$ is a garbling of $\sigma$, we can generate $\theta$, $X$, $X'$ in a way such that $X'$ is independent of $\theta$ conditional on $X$ (see Remark \ref{remark:Garbling}). On this probability space, define $Z$ to be the random posterior belief of $\theta$ given $X$, i.e., the distribution of $\theta \mid X$, and define $Z'$ to be the random posterior belief of $\theta$ given $X'$, i.e., the distribution of $\theta \mid X'$. We need to show that $\mathbb{E}[Z \mid Z'] = Z'$. For any realization $\theta_i$ of $\theta$, define $Z_i \equiv \mathbb{E}[\mathbbm{1}_{\theta_i} \mid X] = \mathbb{E}[\mathbbm{1}_{\theta_i} \mid X, X']$ (where the second equality is due to independence of $\theta$ and $X'$ conditional on $X$) and define $Z'_i \equiv \mathbb{E}[\mathbbm{1}_{\theta_i} \mid X']$. Then \begin{align} \mathbb{E}[Z_i \mid X'] & = \mathbb{E}[\mathbb{E}[\mathbbm{1}_{\theta_i} \mid X,X'] \mid X' ] \nonumber \\ & = \mathbb{E}[\mathbbm{1}_{\theta_i} \mid X' ] \nonumber \\ & = Z'_i \label{eq:reduceZ} \end{align} where the second equality follows from the law of iterated expectations (henceforth abbreviated to L.I.E.). Moreover, \begin{align*} \mathbb{E}[Z_i \mid Z'] & = \mathbb{E}[ \mathbb{E}[Z_i \mid X',Z'] \mid Z'] && \mbox{by L.I.E.}\\ & = \mathbb{E}[\mathbb{E}[Z_i \mid X'] \mid Z'] && \mbox{since $Z'$ is a function of $X'$} \\ & = \mathbb{E}[Z'_i \mid Z'] && \mbox{using (\ref{eq:reduceZ})}\\ & = Z'_i \end{align*} Repeating this argument for every $\theta_i$, we have the desired result. \medskip ($4 \Rightarrow 5$) Suppose $F$ is a MPS of $F'$ with associated random variables $Z$ and $Z'$ satisfying $\mathbb{E}(Z \mid Z') = Z'$.
Then for any continuous and convex function $h: \Delta(\Theta) \rightarrow \mathbb{R}$, \begin{align*} \int_{\Delta(\Theta)} h(p) dF(p) & = \mathbb{E}[h(Z)] \\ & = \mathbb{E}[\mathbb{E}[h(Z) \mid Z']] && \mbox{by L.I.E.}\\ & \geq \mathbb{E}[h(\mathbb{E}[Z \mid Z'])] && \mbox{by Jensen's inequality}\\ & = \mathbb{E}[h(Z')] && \mbox{by assumption of MPS}\\ & = \int_{\Delta(\Theta)} h(p) dF'(p) \end{align*} So $F$ dominates $F'$ in the convex order. \medskip ($5 \Rightarrow 2$) Fix any action set $A$ and utility function $u$, and define $h: \Delta(\Theta) \rightarrow \mathbb{R}$ by \[h(p) = \max_{a \in A} \sum_{\theta \in \Theta} p(\theta) u(a,\theta)\] to be the maximum achievable payoff under belief $p$. The function $h$ is the pointwise maximum of linear functions, and hence it is continuous and convex. Letting $p\sim F$ denote the agent's posterior belief, the maximum \emph{ex-ante} payoff is \[\int_{\Delta(\Theta)} h(p)dF(p)\] Since the no-information payoff $\sup_{a \in A} \mathbb{E}[u(a,\theta)]$ depends only on the prior, dominance of $F$ over $F'$ in the convex order implies $V_{\bold{D},p}(\sigma) \geq V_{\bold{D},p}(\sigma')$ in every decision problem. So dominance in the convex order implies ``more valuable." \end{proof} \section{Additional Exercises} \begin{exercise}[based on \citet{Meyer}] Consider the setting of Example \ref{ex:Meyer}. It turns out that we can make the second realization of $P$ strictly valuable again by biasing it in favor of the more likely signal realization. That is, let the realizations of $P$ be denoted $s_1$ and $s_2$, where \[P = \left(\begin{array}{cc} 3/4 & 1/4 \\ 1/4 & 3/4\end{array}\right)\] and modify the second signal in the following way: If the first realization is $s_1$, then the second signal realization is determined by \[Q_1 = \left(\begin{array}{cc} 3/4 + c& 1/4-c \\ 1/4 + c & 3/4-c \end{array}\right) \] and if the first realization is $s_2$, the second signal realization is determined by \[ Q_2 = \left(\begin{array}{cc} 3/4 - c& 1/4 + c \\ 1/4 - c & 3/4 + c \end{array}\right)\] where in both cases the realization of the second signal is independent of the first conditional on the state.
\begin{itemize} \item[(a)] Show that for any $c \in (0,1/4]$, the value of observing this second (biased) signal is strictly positive. \item[(b)] Solve for the size of the bias $c \in (0,1/4]$ that leads to the highest expected payoff for the agent. \end{itemize} \end{exercise} \begin{exercise} \label{ex:LsA} Let the sets $A$, $\Theta$, and $S$ be finite, and prove that the set $\Lambda_\sigma(A)$ (from Definition \ref{def:Feasible}) is compact and convex for every $\sigma: \Theta \rightarrow \Delta(S)$. \end{exercise} \begin{exercise} Consider two random variables $ X=\theta+\varepsilon$ and $Y=\theta +\varepsilon'$, where the noise pair $(\varepsilon, \varepsilon')$ is independent of $\theta$. \begin{itemize} \item[(a)] Suppose that $\theta \sim \mathcal{N}(0,1)$ and $\varepsilon,\varepsilon' \in \mathbb{R}$ are distributed $(\varepsilon, \varepsilon') \sim \mathcal{N}\left(\mu, \Sigma \right).$ Prove that $X$ and $Y$ are Blackwell comparable for all mean vectors $\mu$ and covariance matrices $\Sigma$. \item[(b)] Suppose that \[\theta \sim \mathcal{N}\left(\left(\begin{array}{c} 0 \\ 0 \end{array} \right), \left(\begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array}\right) \right)\] and $\varepsilon,\varepsilon' \in \mathbb{R}^2$ are distributed $(\varepsilon, \varepsilon') \sim \mathcal{N}\left(\mu, \Sigma \right).$ Prove that $X$ and $Y$ are not always Blackwell ranked by demonstrating a pair $(\mu,\Sigma)$ such that $X$ allows for a strictly higher expected payoff for one decision problem, and $Y$ allows for a strictly higher expected payoff given another. \end{itemize} \end{exercise} \begin{exercise} In each of the following parts, determine whether the statement is true or false and prove your claim in either case.
\begin{itemize} \item[(a)] The state $\theta$ belongs to $\{\theta_1,\theta_2\} \subseteq \mathbb{R}$ and the two signals are defined as \begin{align*} X = \theta + \eps_1, \quad \eps_1 \sim U([-1/2,1/2]) \\ \widetilde{X} = \theta + \eps_2, \quad \eps_2 \sim U([-1/3,1/3]) \end{align*} where $U$ denotes the uniform distribution. The signals $X$ and $\widetilde{X}$ can be Blackwell ranked. \item[(b)] The state $\theta$ belongs to $\{0,1/3,2/3,1\}$ and the two signals are defined as \begin{align*} X = \theta + \eps_1, \quad \eps_1 \sim U([-1/2,1/2]) \\ \widetilde{X} = \theta + \eps_2, \quad \eps_2 \sim U([-1/3,1/3]) \end{align*} The signals $X$ and $\widetilde{X}$ can be Blackwell ranked. \end{itemize} \end{exercise} \begin{exercise}[based on \citet{BrooksFrankelKamenica2}] Consider the following strengthening of the Blackwell order. Let $\theta$, $X$, and $X'$ be random variables defined on the same probability space $(\Omega, \Sigma, P)$. \begin{definition} Say that $X$ \emph{strongly Blackwell dominates} $X'$ if $(X,\widetilde{X})$ Blackwell dominates $(X',\widetilde{X})$ for every random variable $\widetilde{X}$ also defined on $(\Omega, \Sigma, P)$. \end{definition} \noindent Clearly a necessary condition is for $X$ to Blackwell dominate $X'$ (choose $\widetilde{X}$ to be null information). A sufficient condition is for the realization of $X'$ to be known from the realization of $X$, i.e., for the distribution of $X'\mid X$ to be degenerate for every realization of $X$ (what \citet{BrooksFrankelKamenica} call the \emph{refinement order}). Provide an example in between, namely a signal $X$ that strongly Blackwell dominates $X'$, where the realization of $X'$ is not known from $X$. \end{exercise} \chapter{Comparing Information II: Cost of Information} \label{sec:CostofInformation} So far we have considered decision problems in which the signal informing the agent's decision is given exogenously.
In many economic applications, agents can acquire information at a cost and thereby control the signal that they observe. The full problem the agent faces is often specified as \[\max_{\sigma: \Theta \rightarrow \Delta(S)} \int_{\Delta(\Theta)} \max_{a \in A} \mathbb{E}_q[u(a,\theta)] d\tau_\sigma(q) - \mbox{cost of acquiring $\sigma$}\] where $\tau_\sigma$ denotes the distribution over posterior beliefs induced by signal $\sigma$. This chapter discusses how to model the cost of information, and is divided into two sections. Section \ref{sec:PriorDependent} considers \emph{prior-dependent} cost functions that are a function both of the agent's prior $p \in \Delta(\Theta)$ and of the signal $\sigma: \Theta \rightarrow \Delta(S)$. Section \ref{sec:PriorIndependent} considers \emph{prior-independent} cost functions that depend only on the signal $\sigma$. The former are often interpreted as costs of information processing while the latter are often associated with a physical or exogenous cost of producing information. Both approaches draw from information theory, and we review relevant concepts in Section \ref{sec:InformationTheory}. Two useful benchmarks to keep in mind are the following. \begin{example}[Binary] \label{ex:BinaryCost} The unknown state $\theta$ is equally likely to take the value 0 or 1, and the agent chooses an action $a \in \{0,1\}$ with payoff $u(a,\theta) = \mathbbm{1}(a=\theta)$. This action is based on the signal \[\begin{array}{ccc} & s=0 & s=1 \\ \theta=0 & \varphi & 1-\varphi \\ \theta=1 & 1-\varphi & \varphi \end{array}\] where the agent chooses $\varphi \in [0,1]$. \end{example} \begin{example}[Gaussian] \label{ex:GaussianCost} An agent chooses an action $a \in \mathbb{R}$ and receives the payoff $-(a-\theta)^2$, where $\theta \sim \mathcal{N}(\mu, \sigma_\theta^2)$ is an unknown state.
This action is based on a signal $X= \theta + \eps$ where $\eps \sim \mathcal{N}(0, \sigma_\eps^2)$, and the signal noise $\sigma_\eps^2$ is chosen by the agent. \end{example} \section{Information Theoretic Preliminaries} \label{sec:InformationTheory} This section reviews the definitions of entropy and KL divergence. \subsection{Entropy} \label{sec:Entropy} First assume a finite set of states $\Theta$ with $n \equiv \vert \Theta \vert$, and consider beliefs $p=(p_1, \dots, p_n)$ defined over this set. \begin{definition}[\citet{shannon}] \label{def:Entropy} Let $\Theta = \{\theta_1, \dots, \theta_n\}$ for any $n<\infty$. The \emph{entropy} of belief $p \in \Delta(\Theta)$ is \[H(p) = - \sum_{\theta \in \Theta} p(\theta) \ln(p(\theta)) = \mathbb{E}_{\theta \sim p}[-\ln(p(\theta))]\] where $0\ln0 = 0$. \end{definition} \begin{remark} Entropy is also sometimes defined as a function of the random variable rather than its distribution, i.e., $H(\theta) = \mathbb{E}[-\ln(p(\theta))].$ \end{remark} \medskip Entropy is a quantification of uncertainty in a distribution. The higher the entropy of the distribution, the more information is contained in the realization of a random variable it governs. (Entropy is also often interpreted as the ``surprise factor" of the outcome.) \begin{example} \label{ex:BinaryEntropy} Suppose $\Theta = \{\theta_1, \theta_2\}$. The entropy of any belief $(q,1-q)$ is \begin{equation} \label{eq:BinaryEntropy} H(q) = -q\ln(q) - (1-q) \ln(1-q). \end{equation} This curve is depicted in Figure \ref{fig:Entropy} below. It is concave, minimized at the two degenerate distributions $(0,1)$ and $(1,0)$, and maximized at the uniform distribution $(1/2,1/2)$. \begin{figure}[H] \begin{center} \includegraphics[scale=0.45]{entropy.pdf} \caption{Plot of the entropy of the distribution $(q,1-q)$ as $q$ varies in $[0,1]$.} \label{fig:Entropy} \end{center} \end{figure} \end{example} Several key properties of entropy are collected below. 
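As a quick numerical companion to Example \ref{ex:BinaryEntropy} (a sketch of ours in Python, using natural logarithms; not part of the original text), one can verify that the binary entropy curve vanishes at the degenerate distributions and peaks at $\ln 2$ at the uniform distribution:

```python
import numpy as np

def binary_entropy(q):
    """H(q) = -q ln q - (1 - q) ln(1 - q), with the convention 0 ln 0 = 0."""
    return -sum(x * np.log(x) for x in (q, 1 - q) if x > 0)

# Degenerate distributions have zero entropy.
assert binary_entropy(0.0) == 0 and binary_entropy(1.0) == 0
# The maximum is ln 2, attained at the uniform distribution q = 1/2.
assert abs(binary_entropy(0.5) - np.log(2)) < 1e-12
# Over a fine grid, no point exceeds the value at q = 1/2.
grid = np.linspace(0, 1, 1001)
assert abs(max(binary_entropy(q) for q in grid) - np.log(2)) < 1e-9
```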
\medskip \begin{property}[Maximal Value] \label{propEntr:Max} $H(p) \leq H\left(\frac{1}{n}, \dots, \frac{1}{n}\right)$ for every $n<\infty$ and $p \in \Delta(\{\theta_1, \dots, \theta_n\})$; that is, entropy is maximized at the uniform distribution. \end{property} \medskip \begin{property}[Probability Zero States] $H(p) = H(p_1, \dots, p_n,0)$ for every $n<\infty$ and $p \in \Delta(\{\theta_1, \dots, \theta_n\})$; that is, entropy is unchanged by an expansion of the state space to include probability-zero outcomes. \end{property} \medskip \begin{property}[Continuity] $H$ is continuous with respect to all of its arguments. \end{property} \medskip \begin{property}[Chain Rule] \label{propEntr:Chain} Suppose $(X,Y) \in \mathcal{X} \times \mathcal{Y}$ with $\mathcal{X} = \{x_1, \dots, x_n\}$ and $\mathcal{Y} = \{y_1, \dots, y_m\}$, where the joint distribution of $(X,Y)$ is denoted $p$, the marginal distribution of $X$ is $p_X$, and the conditional distribution of $Y$ given $X$ is $p_{Y \mid X}$. Then \[H(p) = H(p_X) + \sum_{i=1}^n p_X(x_i) H(p_{Y \mid X=x_i})\] or more simply \[H(X,Y) = H(X) + H(Y \mid X)\] where $H(X,Y) \equiv H(p)$ is the entropy of the joint distribution, $H(X) \equiv H(p_X)$ is the entropy of the marginal distribution of $X$, and $$H(Y \mid X) \equiv \sum_{i=1}^n p_X(x_i) H(p_{Y \mid X=x_i})$$ is the expected entropy of the conditional distribution of $Y$ given $X$, also known as the \emph{conditional entropy} of $Y$ given $X$. \end{property} \begin{remark} In the special case where $X$ and $Y$ are independent, Property \ref{propEntr:Chain} implies $H(X,Y) = H(X) + H(Y)$. \end{remark} \medskip \begin{property}[Nonnegativity] \label{propEntr:Positive} $H(p)\geq 0$ for all distributions $p$. \end{property} \medskip \begin{property}[Degenerate Distributions] \label{propEntr:Degenerate} $H(p)=0$ for all degenerate distributions $p$. \end{property} \medskip \begin{property}[Concavity] \label{propEntr:Concave} $H$ is concave.
\end{property} \medskip \begin{property}[Relabelling of States] \label{propEntr:Relabel} $H(p_1, \dots, p_n) = H(p_{\pi(1)}, \dots, p_{\pi(n)})$ for any bijection $\pi$ from $\{1,\dots, n\}$ to itself; that is, entropy is invariant to a relabelling of states. \end{property} \medskip \begin{property}[Information Reduces Uncertainty] \label{propEntr:Information} $H(Y\mid X) \leq H(Y)$ with equality if and only if $X$ and $Y$ are independent; that is, conditioning on information reduces expected entropy. \end{property} Properties \ref{propEntr:Max}-\ref{propEntr:Chain} constitute a set of necessary and sufficient conditions for the form of $H$ given in (\ref{def:Entropy}), up to rescaling. \begin{proposition}[\citet{Khinchin}] Let $H(p_1, \dots, p_n)$ be a function defined for any $n \in \mathbb{Z}_{+}$ and for all values $p_1, \dots, p_n$ satisfying $p_i \geq 0$ for each $i=1, \dots, n$ and $\sum_{i=1}^n p_i = 1$. Then $H$ satisfies Properties \ref{propEntr:Max}-\ref{propEntr:Chain} if and only if \[H(p_1, \dots, p_n) = -\lambda \sum_{i=1}^n p_i \ln(p_i)\] for some constant $\lambda >0$.\footnote{Recalling that $\log_b(x) = \frac{\log_a(x)}{\log_a(b)}$ for any two bases $a, b >0$, changing the logarithm to a different basis simply rescales the measure. Choice of base $2$ and of base $e$ are both common.} \end{proposition} Properties \ref{propEntr:Positive}, \ref{propEntr:Degenerate}, and \ref{propEntr:Relabel} are immediate from the functional form of entropy. Property \ref{propEntr:Concave} (concavity) follows because $-x\ln(x)$ is concave, and the sum of concave functions is concave. (In fact, the same argument shows that entropy is \emph{strictly} concave, so Property \ref{propEntr:Max} can be strengthened to the statement that the uniform distribution is the unique maximum.) The following exercise asks you to prove that entropy satisfies Property \ref{propEntr:Information}.
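Before attempting the proof, it may help to see the chain rule (Property \ref{propEntr:Chain}) and Property \ref{propEntr:Information} verified numerically. The sketch below (in Python with natural logs; the joint distribution is a hypothetical choice of ours) checks both on a small table:

```python
import numpy as np

def H(p):
    """Shannon entropy with natural logs, using the convention 0 ln 0 = 0."""
    p = np.asarray(p, dtype=float).ravel()
    nz = p > 0
    return float(-(p[nz] * np.log(p[nz])).sum())

# A hypothetical joint distribution of (X, Y) on a 2 x 3 grid.
joint = np.array([[0.30, 0.10, 0.10],
                  [0.05, 0.25, 0.20]])
pX = joint.sum(axis=1)          # marginal of X
pY = joint.sum(axis=0)          # marginal of Y
H_Y_given_X = sum(pX[i] * H(joint[i] / pX[i]) for i in range(len(pX)))

assert abs(H(joint) - (H(pX) + H_Y_given_X)) < 1e-12   # chain rule
assert H_Y_given_X < H(pY)      # conditioning strictly reduces entropy here
```

The inequality is strict in this example because the joint distribution is not the product of its marginals, i.e., $X$ and $Y$ are dependent.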
\begin{exercise} Suppose $(X,Y) \in \mathcal{X} \times \mathcal{Y}$ with $\vert \mathcal{X} \vert = n$ and $\vert \mathcal{Y} \vert = m$, where $p_X$ and $p_Y$ denote the marginal distributions of $X$ and $Y$, and $p_{Y \mid X}$ denotes the conditional distribution of $Y$ given $X$. Let $H(Y) \equiv H(p_Y)$ be the entropy of the marginal distribution of $Y$, and $H(Y \mid X) \equiv \sum_{i=1}^n p_X(x_i) H(p_{Y \mid X=x_i})$ be the conditional entropy of $Y$ given $X$. Prove that $H(Y \mid X) \leq H(Y)$. \end{exercise} \citet{shannon} defines a continuous version of entropy. \begin{definition} The entropy of probability density $p$ on $\Theta \subseteq \mathbb{R}$ is \[H(p) = - \int_{\theta \in \Theta} p(\theta) \ln(p(\theta)) d\theta \] \end{definition} \begin{example} Recall that the normally distributed variable $\theta \sim \mathcal{N}(\mu,\sigma^2)$ has density $p(\theta) = \frac{1}{\sigma \sqrt{2\pi}}e^{-\frac12 \left(\frac{\theta-\mu}{\sigma}\right)^2}$. The entropy of this distribution is \begin{align} \mathbb{E}\left[ -\ln\left(\frac{1}{\sigma \sqrt{2\pi}}e^{-\frac12 \left(\frac{\theta-\mu}{\sigma}\right)^2}\right)\right] & = -\ln\left(\frac{1}{\sigma \sqrt{2\pi}}\right) + \frac{1}{2\sigma^2}\mathbb{E}\left[(\theta - \mu)^2\right] \nonumber \\ & = \frac12 \ln\left(2\pi\sigma^2\right) + \frac12 \label{eq:GaussianEntropy} \end{align} using in the second equality that $\mathbb{E}[(\theta-\mu)^2] = \sigma^2$. So entropy and variance order normal distributions in the same way. \end{example} \subsection{Kullback-Leibler Divergence} \label{sec:KL} The \emph{Kullback-Leibler divergence (KL divergence)}, also known as \emph{relative entropy}, quantifies how different two distributions are. \begin{definition}[KL-Divergence] \label{def:KL} Let $\Theta = \{\theta_1, \dots, \theta_n\}$ for any $n<\infty$, and let $p,q \in \Delta(\Theta)$.
Then the KL divergence from $q$ to $p$ is \[D(p \| q) = \sum_{\theta \in \Theta} p(\theta) \ln\left(\frac{p(\theta)}{q(\theta)}\right) = \mathbb{E}_{\theta \sim p}\left[\ln\left(\frac{p(\theta)}{q(\theta)}\right)\right]\] where $0\ln 0 =0$. \end{definition} \begin{example}[Binary] Let $\Theta = \{\theta_1,\theta_2\}$ and let $(p,1-p)$ and $(q,1-q)$ be two distributions on this set. Then \[D(p \| q) = p \ln\left(\frac{p}{q}\right) + (1-p)\ln\left(\frac{1-p}{1-q}\right).\] Intuitively, larger log likelihood ratios $\ln\left(\frac{p}{q}\right)$ and $\ln\left(\frac{1-p}{1-q}\right)$ reflect distributions that are more different. KL divergence aggregates these log likelihood ratios by weighting them with respect to their probabilities under a reference distribution; in $D(p \| q)$ the reference distribution is $p$, while weighting under $q$ instead yields $D(q \| p)$. \end{example} \begin{example}[Gaussian] \label{ex:GaussianKL} Let $p$ and $q$ denote two Gaussian densities with common variance $\sigma^2$ and different means $\mu_p$ and $\mu_q$. Then \begin{align*} D(p \| q) & = \mathbb{E}_{\theta \sim p} \left[\ln \left(\frac{e^{-\frac12 \left(\frac{\theta-\mu_p}{\sigma}\right)^2}}{e^{-\frac12 \left(\frac{\theta-\mu_q}{\sigma}\right)^2}}\right)\right] \\ & = \frac{\mu_q^2 - \mu_p^2}{2\sigma^2} - \frac{\mu_q - \mu_p}{\sigma^2} \cdot \mathbb{E}_{\theta \sim p}(\theta) = \frac{(\mu_q -\mu_p)^2}{2\sigma^2} \end{align*} So as we might expect, the farther apart the two means, the larger the KL divergence between the two distributions. \end{example} \noindent KL divergence is not in general symmetric (with Example \ref{ex:GaussianKL} being a notable exception) and hence it is not a metric. Other key properties of the KL divergence include: \begin{property}[Nonnegativity] \label{propKL:Nonnegative} $D(p \| q) \geq 0$ for all $p,q\in \Delta(\Theta)$, with equality if and only if $p=q$.
\end{property} \noindent To prove this, observe that \begin{align*} -D(p \| q) & = \mathbb{E}_{\theta \sim p}\left[ \ln\left(\frac{q(\theta)}{p(\theta)}\right) \right] \\ & \leq \ln \left( \mathbb{E}_{\theta \sim p}\left[ \frac{q(\theta)}{p(\theta)} \right]\right) && \mbox{by Jensen's inequality} \\ & = \ln(1) = 0 && \mbox{since $\sum_{\theta \in \Theta} p(\theta) \left(\frac{q(\theta)}{p(\theta)}\right) =1$} \end{align*} \begin{property}[Additivity for Independent Distributions] Suppose $p_1\in \Delta(\mathcal{X}_1)$ and $p_2 \in \Delta(\mathcal{X}_2)$ are independent distributions, with $p(x_1,x_2)=p_1(x_1)p_2(x_2)$. Likewise suppose $q_1 \in \Delta(\mathcal{X}_1)$ and $q_2 \in \Delta(\mathcal{X}_2)$ are independent distributions with $q(x_1,x_2)=q_1(x_1)q_2(x_2)$. Then \[D(p \| q) = D(p_1 \| q_1) + D(p_2 \| q_2),\] i.e., KL divergence is additive for independent distributions. \end{property} This property follows from straightforward algebra: \begin{align*} D(p \| q) & = \sum_{x_1 \in \mathcal{X}_1} \sum_{x_2 \in \mathcal{X}_2} p(x_1,x_2) \ln \left(\frac{p(x_1,x_2)}{q(x_1,x_2)}\right) \\ & = \sum_{x_1 \in \mathcal{X}_1} \sum_{x_2 \in \mathcal{X}_2} p_1(x_1)p_2(x_2)\ln \left(\frac{p_1(x_1)p_2(x_2)}{q_1(x_1)q_2(x_2)}\right) \\ & = \sum_{x_2 \in \mathcal{X}_2} p_2(x_2) \left(\sum_{x_1 \in \mathcal{X}_1} p_1(x_1) \ln \left(\frac{p_1(x_1)}{q_1(x_1)}\right)\right) \\ & \quad + \sum_{x_1 \in \mathcal{X}_1} p_1(x_1) \left(\sum_{x_2 \in \mathcal{X}_2} p_2(x_2) \ln \left(\frac{p_2(x_2)}{q_2(x_2)}\right)\right) = D(p_1 \| q_1) + D(p_2 \| q_2) \end{align*} where independence is invoked in the second equality.
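Both properties can be checked on random examples; a minimal sketch in Python (assuming \texttt{numpy}; \texttt{kl} is our helper implementing Definition \ref{def:KL}):

```python
import numpy as np

def kl(p, q):
    """D(p || q) in nats, with 0 ln 0 = 0 (assumes q > 0 wherever p > 0)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = p > 0
    return float(np.sum(p[m] * np.log(p[m] / q[m])))

rng = np.random.default_rng(1)
p1, q1 = rng.dirichlet(np.ones(3)), rng.dirichlet(np.ones(3))
p2, q2 = rng.dirichlet(np.ones(4)), rng.dirichlet(np.ones(4))

# Nonnegativity, with equality when the distributions coincide.
assert kl(p1, q1) >= 0
assert abs(kl(p1, p1)) < 1e-12

# Additivity: the divergence between product pmfs is the sum of divergences.
assert abs(kl(np.outer(p1, p2).ravel(), np.outer(q1, q2).ravel())
           - (kl(p1, q1) + kl(p2, q2))) < 1e-12
```

Here the product pmfs are built with `np.outer`, mirroring the factorization $p(x_1,x_2)=p_1(x_1)p_2(x_2)$ in the statement of the property.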
\begin{property}[Convexity] $D$ is convex: For any two pairs $(p,q)$ and $(p',q')$, and any $\alpha \in [0,1]$, we have \[D\left(\alpha p + (1-\alpha) p' \| \alpha q + (1-\alpha) q' \right) \leq \alpha D(p \| q) + (1-\alpha) D(p' \| q')\] \end{property} \begin{exercise} Prove the above property using the following fact: \begin{fact}[Log-Sum Inequality] Let $a_1, \dots a_n$ and $b_1, \dots b_n$ be nonnegative real numbers. Then \[\sum_{i=1}^n a_i \ln\left(\frac{a_i}{b_i}\right) \geq \left(\sum_{i=1}^n a_i\right) \ln\left(\frac{\sum_{i=1}^n a_i}{\sum_{i=1}^n b_i}\right).\] \end{fact} \end{exercise} There is a close relationship between KL divergence and entropy. First, the entropy of a distribution $p \in \Delta(\Theta)$ with $n \equiv \vert \Theta \vert < \infty$ can be rewritten directly in terms of KL divergence: \[H(p) = \ln n - D(p \| U)\] where $U$ denotes the uniform distribution on $\Theta$. Thus, the larger the KL divergence from the uniform distribution to $p$, the lower the entropy of $p$. This is proved by observing that \begin{align*} \ln n - D(p \| U) &= \ln n - \sum_{\theta \in \Theta} p(\theta) \ln\left(\frac{p(\theta)}{1/n}\right) \\ & = \sum_{\theta \in \Theta} p(\theta) (\ln n - \ln\left(n p(\theta) \right)) && \mbox{since $\sum_{\theta \in \Theta} p(\theta) =1$} \\ & = -\sum_{\theta \in \Theta} p(\theta) \ln(p(\theta)) = H(p) \end{align*} \begin{remark} Together with Property \ref{propKL:Nonnegative}, the above relationship implies that entropy is maximized at the uniform distribution (Property \ref{propEntr:Max}). \end{remark} KL divergence cannot be rewritten directly in terms of entropy, although \[D(p \| q) = - \sum_{\theta \in \Theta} p(\theta) \ln\left(q(\theta)\right) - H(p)\] where $-\sum_{\theta \in \Theta} p(\theta) \ln\left(q(\theta)\right)$ is the \emph{cross-entropy} of distribution $q$ relative to $p$. 
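The two identities relating entropy, KL divergence, and cross-entropy are easy to confirm numerically (Python, assuming \texttt{numpy}; the helper functions are ours):

```python
import numpy as np

def entropy(p):
    p = np.asarray(p, float)
    nz = p[p > 0]
    return float(-np.sum(nz * np.log(nz)))

def kl(p, q):
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = p > 0
    return float(np.sum(p[m] * np.log(p[m] / q[m])))

n = 5
rng = np.random.default_rng(2)
p, q = rng.dirichlet(np.ones(n)), rng.dirichlet(np.ones(n))
u = np.full(n, 1 / n)

# H(p) = ln(n) - D(p || U): larger divergence from uniform, lower entropy.
assert abs(entropy(p) - (np.log(n) - kl(p, u))) < 1e-12

# D(p || q) equals the cross-entropy of q relative to p, minus H(p).
cross = float(-np.sum(p * np.log(q)))
assert abs(kl(p, q) - (cross - entropy(p))) < 1e-12
```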
\section{Prior-Dependent Costs} \label{sec:PriorDependent} Returning to the question of how to model the cost function, we begin with \emph{prior-dependent} cost functions. Dependence on the prior belief means that the cost of absorbing the information content of a signal varies with what the agent already knows. This feature may be justified if we view the cost of information as an information processing or cognitive cost: For example, processing a news article about a proposed tax change may be relatively easy for someone who already understands this tax change well, but cognitively taxing for someone who does not. It will be convenient to represent signals as distributions over posterior beliefs, as in Section \ref{sec:BayesPlausibility}. Following Definition \ref{def:BayesPlausible}, we use $\mathcal{T}(p)$ to denote the set of Bayes plausible distributions given prior $p$, and we further define \[\mathcal{S} = \{(p, \tau) : p \in \Delta(\Theta), \tau \in \mathcal{T}(p)\}\] to be the domain of prior beliefs and Bayes plausible distributions. The cost functions in this section will take the form $C: \mathcal{S} \rightarrow \mathbb{R}$. \subsection{Uniform Posterior Separability} \label{sec:ReductionUncertainty} One popular class of cost functions are those that are \emph{uniformly posterior separable}. \begin{definition}[\citet{CaplinDean2013,CaplinDeanLeahy2022}] \label{def:UPS} The cost function $C: \mathcal{S} \rightarrow \mathbb{R}$ is \emph{uniformly posterior separable} (henceforth UPS) if there is a strictly concave function $\Phi$ such that \[C(p,\tau) = \Phi(p) - \mathbb{E}_{q \sim \tau}[\Phi(q)] \quad \forall (p,\tau) \in \mathcal{S}.\] \end{definition} We can interpret this cost of information as the expected reduction of uncertainty, where $\Phi: \Delta(\Theta) \rightarrow \mathbb{R}$ measures how uncertain the belief is. 
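A small numerical sketch of this definition with $\Phi = H$ (Python, assuming \texttt{numpy}; the binary prior, signal accuracy, and weights below are hypothetical example values):

```python
import numpy as np

def entropy(p):
    p = np.asarray(p, float)
    nz = p[p > 0]
    return float(-np.sum(nz * np.log(nz)))

def ups_cost(prior, posteriors, weights, Phi=entropy):
    """C(p, tau) = Phi(p) - E_{q ~ tau}[Phi(q)] for a finite-support tau."""
    return Phi(prior) - sum(w * Phi(q) for w, q in zip(weights, posteriors))

prior = np.array([0.5, 0.5])
phi = 0.8   # accuracy of a hypothetical binary signal
posteriors = [np.array([phi, 1 - phi]), np.array([1 - phi, phi])]
weights = [0.5, 0.5]   # Bayes plausible: the posteriors average to the prior

assert ups_cost(prior, posteriors, weights) >= 0       # information is costly
assert abs(ups_cost(prior, [prior], [1.0])) < 1e-12    # "no information" is free
full = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
assert abs(ups_cost(prior, full, weights) - entropy(prior)) < 1e-12
```

The last assertion reflects that a fully revealing signal exhausts all prior uncertainty: its entropy cost equals $H(p)$ exactly.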
\begin{remark} The cost of ``no information" is zero, since $\Phi(p) - \mathbb{E}_{q \sim \delta_p}[\Phi(q)]=\Phi(p)-\Phi(p)=0$ (with $\delta_p$ denoting the degenerate distribution at the prior $p$). \end{remark} \begin{remark} Concavity of $\Phi$ guarantees that uncertainty decreases in expectation when more information is received. Together with Bayes plausibility of $\tau$, this further implies that UPS cost functions are everywhere nonnegative: \begin{align*} \Phi(p) - \mathbb{E}_{q \sim \tau}[\Phi(q)] & \geq \Phi(p) - \Phi(\mathbb{E}_{q \sim \tau}[q]) && \mbox{by Jensen's inequality}\\ & = \Phi(p) - \Phi(p) && \mbox{by Bayes plausibility of $\tau$} \\ & = 0 \end{align*} \end{remark} \begin{remark} UPS cost functions are consistent with the Blackwell order. That is, let $\sigma$ and $\sigma'$ be arbitrary signals where $\sigma$ Blackwell dominates $\sigma'$. Fix any prior $p$, and let $\tau_\sigma$ and $\tau_{\sigma'}$ denote the distributions over posteriors that are induced by $\sigma$ and $\sigma'$. Then for any UPS cost function $C$, we have $C(p,\tau_\sigma) \geq C(p,\tau_{\sigma'})$ since \[C(p,\tau) = \int (\Phi(p) - \Phi(q))d\tau(q)\] where $\Phi(p) - \Phi(q)$ is convex in $q$, and $\tau_\sigma$ dominates $\tau_{\sigma'}$ in the convex order (see the characterization of the Blackwell order in Section \ref{sec:ConvexOrder}). \end{remark} The leading specification of $C$ is the expected reduction of the entropy of the agent's belief. \begin{example}[Entropy Reduction] Let $H$ be the entropy function given in Definition \ref{def:Entropy}. Then define \begin{equation} \label{def:EntropyCost} C_{\text{Ent}}(p, \tau) = H(p) - \mathbb{E}_{q \sim \tau}[H(q)] \quad \forall (p,\tau) \in \mathcal{S} \end{equation} to be the expected reduction in the entropy of the agent's belief.
\end{example} Initially proposed as an information cost in \citet{Sims2003}, this cost function is a cornerstone of the rational inattention literature \citep{CaplinDean2013,CaplinDean2015,HebertWoodfordAER,HebertLaO}. Various conceptual foundations for entropic costs and uniformly posterior separable cost functions (as well as the broader class of posterior separable cost functions discussed in Section \ref{sec:PosteriorSeparable}) can be found in \cite{CaplinDean2013}, \citet{MatejkaMcKay2015}, \citet{MorrisStrack}, \citet{HebertWoodford}, \citet{BloedelZhong}, and \citet{Denti2022} among others. \begin{example} In the setting of Example \ref{ex:BinaryCost}, we have \[C_{\text{Ent}}(p,\tau_\varphi) = -\ln\left(\frac{1}{2}\right) + \left(\varphi\ln(\varphi) + (1-\varphi) \ln(1-\varphi)\right)\] where $\tau_\varphi$ denotes the distribution over posterior beliefs induced by the signal indexed to $\varphi$. The cost of the signal is largest when $\varphi\in \{0,1\}$ (corresponding to a fully revealing signal) and smallest when $\varphi=1/2$ (corresponding to an uninformative signal). \end{example} \bigskip Besides entropy, another natural choice of $\Phi$ is variance. \begin{example}[Variance Reduction] Let \begin{equation} \label{def:VarCost} C_{\text{Var}}(p, \tau) = \Var(p) - \mathbb{E}_{q \sim \tau}[\Var(q)] \end{equation} be the expected reduction in the variance of the agent's belief. \end{example} \begin{exercise} Prove that variance is strictly concave, so $C_{\text{Var}}$ is a UPS cost function. \end{exercise} \begin{example} Consider the setting of Example \ref{ex:GaussianCost} (where we use $\tau_{\sigma_\eps^2}$ to denote the distribution over posterior beliefs induced by observing the signal $X=\theta +\eps$, $\eps \sim \mathcal{N}(0,\sigma_\eps^2)$). 
Applying (\ref{eq:GaussianEntropy}), \begin{align*} C_{\text{Ent}}(p,\tau_{\sigma_\eps^2}) & = \left(\frac12 \ln(2\pi\sigma_\theta^2) + \frac12 \right) - \left(\frac12 \ln\left(2\pi\left(\frac{\sigma_\theta^2\sigma_\eps^2}{\sigma_\theta^2 + \sigma_\eps^2}\right)\right) + \frac12 \right)\\ & = \frac12 \ln\left(\frac{\sigma_\theta^2 + \sigma_\eps^2}{\sigma_\eps^2}\right) \end{align*} while \begin{align*} C_{\text{Var}}(p,\tau_{\sigma_\eps^2}) & = \sigma_\theta^2 - \frac{\sigma_\theta^2 \sigma_\eps^2}{\sigma_\theta^2 + \sigma_\eps^2} = \frac{\sigma_\theta^4}{\sigma_\theta^2 + \sigma_\eps^2}. \end{align*} For every fixed prior variance $\sigma_\theta^2$, both cost functions are strictly decreasing in the noise variance $\sigma_\eps^2$, and thus correspond to different cardinal representations of the same ordering over signals. One interesting contrast is that $C_{\text{Ent}}(p,\tau_{\sigma_\eps^2}) \rightarrow \infty$ as $\sigma_\eps^2 \rightarrow 0$, while $C_{\text{Var}}(p,\tau_{\sigma_\eps^2}) \rightarrow \sigma_\theta^2$. That is, the cost of information using $C_{\text{Var}}$ is bounded above by the agent's prior uncertainty, while entropy cost is unbounded. \end{example} \subsection{Decision-Theoretic Foundations} The function $\Phi$ is interpreted in the previous section as a ``pure" measure of uncertainty, without reference to why this uncertainty matters. Parallel to Section \ref{sec:DecisionProblem}'s assessment of the value of information using decision problems, \citet{FrankelKamenica} microfound the function $\Phi$ as measuring the instrumental loss of uncertainty for a specific decision problem.
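Before developing these foundations, the two closed forms in the Gaussian example above can be verified numerically (Python, assuming \texttt{numpy}; the variance values are hypothetical):

```python
import numpy as np

def c_ent(var_theta, var_eps):
    """Entropy cost of the Gaussian signal: (1/2) ln((v_t + v_e) / v_e)."""
    return 0.5 * np.log((var_theta + var_eps) / var_eps)

def c_var(var_theta, var_eps):
    """Variance cost: prior variance minus posterior variance."""
    return var_theta - var_theta * var_eps / (var_theta + var_eps)

var_theta = 2.0   # hypothetical prior variance
# Both costs strictly decrease in the noise variance.
assert c_ent(var_theta, 0.25) > c_ent(var_theta, 1.0)
assert c_var(var_theta, 0.25) > c_var(var_theta, 1.0)

# As the noise vanishes, entropy cost diverges while variance cost
# converges to the prior variance.
assert c_ent(var_theta, 1e-12) > 10
assert abs(c_var(var_theta, 1e-12) - var_theta) < 1e-6
```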
\begin{definition} For any belief $q \in \Delta(\Theta)$ and decision problem $\mathcal{D}=(A,u)$, let \[\Phi_{\mathcal{D}}(q) = \mathbb{E}_q\left[\max_{a \in A} u(a, \theta)\right] - \max_{a \in A} \mathbb{E}_q\left[u(a,\theta)\right].\] \end{definition} The first term of this expression is the agent's best expected payoff when conditioning his action directly on the realized state (which is random and distributed according to the agent's belief $q$). The second term is the best expected payoff that the agent with belief $q$ can achieve given no additional information on which to condition his action. Thus $\Phi_{\mathcal{D}}$ quantifies the agent's payoff loss from not knowing a state which is distributed according to $q$. \begin{definition}[\citet{FrankelKamenica}] Say that $\Phi: \Delta(\Theta) \rightarrow \mathbb{R}$ is \emph{valid} if there is a decision problem $\mathcal{D}$ such that $\Phi=\Phi_{\mathcal{D}}$. \end{definition} Any function $\Phi$ that is concave and takes value zero at degenerate distributions (i.e., satisfies Properties \ref{propEntr:Degenerate} and \ref{propEntr:Concave}) can be microfounded using a decision problem in this way. \begin{proposition}[\citet{FrankelKamenica}] \label{prop:FK} $\Phi: \Delta(\Theta) \rightarrow \mathbb{R}$ is valid if and only if it satisfies Properties \ref{propEntr:Degenerate} and \ref{propEntr:Concave}. \end{proposition} This result follows from the subsequent lemma, which is of independent interest. \begin{lemma} \label{lemm:Support} Let $\Theta$ be a finite set. Then every convex function $V: \Delta(\Theta) \rightarrow \mathbb{R}$ can be represented as \begin{equation} \label{eq:VDecision} V(q) = \sup_{a \in A} \mathbb{E}_{q}[u(a,\theta)] \quad \forall q \in \Delta(\Theta) \end{equation} for some decision problem $(A,u)$, where $A$ is a set (not necessarily finite) and $u$ is a map $u: \Theta \times A \rightarrow [-\infty,+\infty]$. 
\end{lemma} \noindent The key points in the proof of this lemma are that $\mathbb{E}_q(u(a,\theta))$ is affine in $q$, and that every convex function is the supremum of affine functions lying below it. We'll prove this lemma assuming that $V$ is continuous and has a nonvertical supporting hyperplane at every point $ q\in \Delta(\Theta)$, leaving the completion of the proof when these assumptions fail as Exercise \ref{ex:Vertical}.\footnote{Under these assumptions, the supremum in (\ref{eq:VDecision}) can be replaced with maximum, as the following proof demonstrates.} \medskip \begin{proof} Our approach is to construct a set of actions indexed to beliefs, $A = \{a_q \, : \, q \in \Delta^{n}\}$, and to construct a utility function such that each action $a_{q}$ is optimal at the belief $q$. To do this, define a family of affine functions $(U_{a_{q}})_{q \in \Delta^{n}}$, where each $U_{a_{q}}: \Delta^{n} \rightarrow \mathbb{R}$ is a supporting hyperplane of the epigraph of $V$ at $q$, as depicted below in Figure \ref{fig:Support}.\footnote{Recall that the epigraph of $V$ is $\{(q,v): v \geq V(q)\}$, the set of points lying on or above $V$.} \begin{figure}[h] \begin{center} \includegraphics[scale=0.65]{supportinghyperplane.pdf} \end{center} \caption{Example construction for a binary state space $\Theta = \{\theta_0,\theta_1\}$. The action $a_q$ is optimal at belief $q$; that is, for every other belief $q'$ we have $\mathbb{E}_q[u(a_q,\theta)]\geq \mathbb{E}_q[u(a_{q'},\theta)]$, as depicted here.} \label{fig:Support} \end{figure} Since $V$ is continuous and convex, it can be represented on its domain as the supremum of all affine functions lying below it. Since each $U_{a_{q}}$ is affine and lies below $V$, we have that \[V(q) \geq U_a(q) \quad \forall a \in A,q\in \Delta^{n}.\] Moreover (by definition) $U_{a_{q}}$ supports $V$ at $q$, so $U_{a_{q}}(q) = V(q)$. 
This implies that \begin{equation} \label{eq:V} U_{a_{q}}(q) = \max_{a \in A} U_a(q) \quad \forall q \in \Delta(\Theta) \end{equation} We now need to express $U_{a_q}$ as an expected utility function. Since each belief $q'$ is a convex combination of the degenerate beliefs $(\delta_{\theta})_{\theta \in \Theta}$ (with weights given by $q'(\theta)$), and $U_{a_q}$ is affine, it follows that \begin{equation} \label{eq:SetU} U_{a_{q}}(q') = \sum_{\theta \in \Theta} q'(\theta) U_{a_q}(\delta_\theta) \quad \forall q' \in \Delta(\Theta) \end{equation} Now define the utility function $u: A \times \Theta \rightarrow \mathbb{R}$ to satisfy $u(a, \theta) = U_a(\delta_\theta)$ for every $a \in A$ and $\theta \in \Theta$. Then from (\ref{eq:SetU}), \[U_{a_q}(q') = \sum_{\theta \in \Theta} q'(\theta) u(a_q,\theta)\] and so (\ref{eq:V}) implies that \[\mathbb{E}_q[u(a_q,\theta)] \geq \mathbb{E}_q[u(a,\theta)]\] for every $q \in \Delta(\Theta)$ and $a \in A$. Thus each action $a_q$ is optimal at belief $q$, and achieves the expected utility $U_{a_q}(q) = V(q)$ as desired. \end{proof} \begin{exercise} \label{ex:Vertical} Complete the proof by showing that the statement of Lemma \ref{lemm:Support} continues to hold when $V$ is discontinuous and/or there exists a belief $q$ at which every supporting hyperplane of $V$ is vertical. \begin{hint} Observe that vertical supporting hyperplanes can only exist on the boundary of $\Delta(\Theta)$, and that discontinuities can only occur at degenerate beliefs. \end{hint} \end{exercise} We'll now use this lemma to prove Proposition \ref{prop:FK}. \medskip \begin{proof} Suppose $\Phi$ satisfies Properties \ref{propEntr:Degenerate} and \ref{propEntr:Concave}.
Then $-\Phi$ is continuous and convex, so by Lemma \ref{lemm:Support}, there is a set of actions $A$ and a utility function $u:A \times \Theta \rightarrow \mathbb{R}$ such that \begin{equation} \label{eq:ApplyLemma} -\Phi(q) = \max_{a \in A} \mathbb{E}_q[u(a,\theta)] \quad \forall q\in \Delta(\Theta). \end{equation} \noindent We need to verify that \begin{equation} \label{eq:ToShowPhi} \Phi(q) = \mathbb{E}_q\left[\max_{a \in A} u(a,\theta)\right] - \max_{a\in A} \mathbb{E}_q[u(a,\theta)] \end{equation} for every $q \in \Delta(\Theta)$. Again index the states by $\theta_1, \dots, \theta_n$ (where $n \equiv \vert \Theta \vert$), and define $\delta_{\theta_i}$ to be the belief that is degenerate at state $\theta_i$. Then for any $\theta_i \in \Theta$ \begin{align*} \max_{a \in A} u(a,\theta_i) & = \max_{a \in A} \mathbb{E}_{\delta_{\theta_i}}[u(a,\theta)] \\ & = - \Phi(\delta_{\theta_i}) && \mbox{by (\ref{eq:ApplyLemma})} \\ & = 0 && \mbox{by Property \ref{propEntr:Degenerate}} \end{align*} Thus also $\mathbb{E}_q\left[\max_{a \in A} u(a,\theta)\right]=0$ for any belief $q$, which together with (\ref{eq:ApplyLemma}) implies that (\ref{eq:ToShowPhi}) reduces to $\Phi(q) = 0 - (-\Phi(q))$ and is thus true. In the other direction, \[\Phi(\delta_{\theta}) = \max_{a \in A} u(a,\theta) - \max_{a \in A} u(a,\theta)=0 \quad \forall \theta \in \Theta\] implying Property \ref{propEntr:Degenerate}. Concavity of $\Phi$ (Property \ref{propEntr:Concave}) follows by construction of $\Phi$ since $\mathbb{E}_q\left[\max_{a \in A} u(a,\theta)\right]$ is affine while $\sup_{a\in A} \mathbb{E}_q[u(a,\theta)]$ is a pointwise supremum of affine functions, and thus convex. \end{proof} \bigskip By Proposition \ref{prop:FK}, the two example cost functions from the previous section, $C_{Ent}$ and $C_{Var}$, can be microfounded using decision problems. These decision problems are given below.
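As a numerical sanity check of this construction, the instrumental loss $\Phi_{\mathcal{D}}$ can be evaluated directly for a quadratic-loss problem, where it should recover the variance of the belief (a Python sketch assuming \texttt{numpy}; the state list, action grid, and belief below are hypothetical):

```python
import numpy as np

def phi_D(q, states, actions, u):
    """E_q[max_a u(a, theta)] - max_a E_q[u(a, theta)] at belief q."""
    informed = sum(qi * max(u(a, th) for a in actions)
                   for qi, th in zip(q, states))
    uninformed = max(sum(qi * u(a, th) for qi, th in zip(q, states))
                     for a in actions)
    return informed - uninformed

states = [0.0, 1.0, 2.0]
actions = np.linspace(0.0, 2.0, 201)   # fine action grid containing the states
u = lambda a, th: -(a - th) ** 2       # quadratic loss

q = np.array([0.2, 0.5, 0.3])
mean = float(np.dot(q, states))
var = float(np.dot(q, (np.array(states) - mean) ** 2))

assert abs(phi_D(q, states, actions, u) - var) < 1e-6     # recovers the variance
assert abs(phi_D([1.0, 0.0, 0.0], states, actions, u)) < 1e-9  # zero when degenerate
```

The grid is chosen so that the optimal actions (the states themselves and the posterior mean) lie on it; with a coarser grid the match would only be approximate.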
\begin{example}[Microfoundation for Entropy Cost] Set $A = \Delta(\Theta)$ and $u(a,\theta) = \ln(a(\theta))$, where $\ln0=-\infty$. Then the cost of uncertainty is \[\Phi_{\mathcal{D}}(q) = \mathbb{E}_q\left[\max_a\left[ \ln(a(\theta))\right]\right] - \max_a \mathbb{E}_q\left[\ln(a(\theta))\right] = H(q).\] \end{example} \begin{example}[Microfoundation for Variance Cost] Set $A = \Theta \subseteq \mathbb{R}$ and $u(a, \theta) = -(a-\theta)^2$. Then \[\Phi_{\mathcal{D}}(q) = \mathbb{E}_q\left[\max_a\left[ - (a- \theta)^2\right]\right] - \max_a \mathbb{E}_q\left[-(a-\theta)^2\right] = Var_q(\theta)\] \end{example} \subsection{Posterior Separability} \label{sec:PosteriorSeparable} A weaker requirement than uniform posterior separability is that the cost of $\tau$ can be written in a way that is separable in the realized posteriors. \begin{definition}[\citet{CaplinDean2013,CaplinDeanLeahy2022}] \label{def:PosteriorSeparable} The cost function $C: \mathcal{S} \rightarrow \mathbb{R}$ is \emph{posterior separable} if \[C(p,\tau) = \mathbb{E}_{q \sim \tau}[\Phi_p(q)]\] for some family of convex functions $(\Phi_p)_{p \in \Delta(\Theta)}$ where each $\Phi_p: \Delta(\Theta) \rightarrow \mathbb{R}$ is everywhere weakly positive, and $\Phi_p(p)=0$ for every $p$. \end{definition} \begin{remark} When the cost function is posterior separable but not uniformly posterior separable, the cost of acquiring two signals in sequence may depend on the order in which these signals are acquired. This is not true for UPS cost functions \citep{FrankelKamenica,BloedelZhong}.
\end{remark} When the cost function is posterior separable, then the agent's payoff from choosing signal $\sigma:\Theta \rightarrow \Delta(S)$ and strategy $\alpha: S \rightarrow \Delta(A)$ is \[\int_{\Delta(\Theta)} \int_{a \in A} \alpha(a \mid q) \mathbb{E}_q[u(a,\theta)] d\tau_\sigma(q) - C(p,\tau_\sigma),\] and can be rewritten as \[\int_{\Delta(\Theta)} \int_{a \in A} \alpha(a \mid q) \left( \mathbb{E}_q[u(a,\theta)] - \Phi_p(q)\right) d\tau_\sigma(q)\] where the concave function $\mathbb{E}_q[u(a,\theta)] - \Phi_p(q)$ is the ``net utility" of action $a$ under posterior $q$. So maximizing the value function is equivalent to maximizing the expected net utility over all Bayes-plausible distributions and strategies, which is an optimization problem that can be solved using standard methods. This tractability is a part of the appeal of this family of cost functions. A closely related concept appears in \citet{FrankelKamenica}, where $\Phi_p(q)$ is interpreted as the amount of information in news that moves an agent's belief from $p$ to $q$. \citet{FrankelKamenica} define the pair $(\Phi_p,\Phi)$ as \emph{coupled} if $\mathbb{E}[\Phi_p(q)] = \mathbb{E}[\Phi(p)-\Phi(q)]$, in which case the cost function is not only posterior separable but also uniformly posterior separable. That uniform posterior separability is strictly stronger than posterior separability is nearly immediate, except for the requirement in the definition of posterior separable cost functions that $\Phi_p(q)$ is everywhere weakly positive. We therefore cannot simply convert a UPS cost function $C(p,\tau) = \Phi(p) - \mathbb{E}_{q \sim \tau}[\Phi(q)]$ into a posterior separable cost function $C(p,\tau) = \mathbb{E}[\Phi_p(q)]$ by setting $\Phi_p(q) \equiv \Phi(p ) - \Phi(q)$, as this quantity may be negative for some posterior beliefs $q$. The correct construction is instead to choose $\Phi_p$ to be a \emph{Bregman divergence} of $\Phi$ \citep{FrankelKamenica,CaplinDeanLeahy2022}.
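Anticipating the formal definition below, here is a numerical sketch of this construction for $\Phi = H$, in which case the Bregman divergence turns out to be a KL divergence (Python, assuming \texttt{numpy}; beliefs are taken in the interior of the simplex so that all logarithms are finite):

```python
import numpy as np

def entropy(p):
    return float(-np.sum(p * np.log(p)))       # interior beliefs only

def kl(q, p):
    return float(np.sum(q * np.log(q / p)))    # D(q || p), interior beliefs

def bregman_entropy(p, q):
    """Phi_p(q) = Phi(p) + grad Phi(p) . (q - p) - Phi(q), with Phi = H."""
    grad = -np.log(p) - 1.0                    # gradient of entropy at p
    return entropy(p) + float(np.dot(grad, q - p)) - entropy(q)

p = np.array([0.2, 0.3, 0.5])
q = np.array([0.4, 0.4, 0.2])

assert bregman_entropy(p, q) >= 0                       # weakly positive, as required
assert abs(bregman_entropy(p, q) - kl(q, p)) < 1e-12    # equals D(q || p)
```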
\begin{definition} Let $\Phi: \Delta(\Theta) \rightarrow \mathbb{R}$ be a concave function. A \emph{supergradient} of $\Phi$ at $p \in \Delta(\Theta)$ is any vector $\nabla \Phi(p)$ such that \[\Phi(p) + \nabla \Phi(p) \cdot (q-p) \geq \Phi(q)\] for every $q \in \Delta(\Theta)$. \end{definition} \begin{remark} When $\Phi$ is concave, then a supergradient $\nabla \Phi(q)$ exists for every $q$. When $\Phi$ is smooth at $q$, then $\nabla \Phi(q)$ is unique and equal to the gradient of $\Phi$ at $q$. \end{remark} \begin{definition} Let $\Phi: \Delta(\Theta) \rightarrow \mathbb{R}$ be a concave function. A \emph{Bregman divergence} of $\Phi$ is any map $D_\Phi: \Delta(\Theta) \times \Delta(\Theta) \rightarrow \mathbb{R}$ satisfying \[D_\Phi(p,q) = \Phi(p) - \Phi(q) + \nabla \Phi(p) \cdot (q-p) \quad \forall (p,q) \in \Delta(\Theta) \times \Delta(\Theta)\] where $\nabla \Phi(p)$ is a supergradient of $\Phi$ at $p$. \label{def:Bregman} \end{definition} \noindent This is the difference between the value at $q$ of the first-order Taylor expansion of $\Phi$ around $p$, and the value $\Phi(q)$ itself. Setting $\Phi_p(q) = D_\Phi(p,q)$ from Definition \ref{def:Bregman}, we have \[\Phi_p(q) = \left(\Phi(p) + \nabla \Phi(p) \cdot (q-p)\right) - \Phi(q) \geq 0 \quad \forall q \in \Delta(\Theta)\] since $\nabla \Phi(p)$ is a supergradient of $\Phi$, and also \begin{align*} \mathbb{E}_{q \sim \tau} [\Phi_p(q)] & = \mathbb{E}_{q \sim \tau}[\Phi(p) - \Phi(q) + \nabla \Phi(p) \cdot (q-p)] \\ & = \Phi(p) - \mathbb{E}_{q\sim \tau}[\Phi(q)] \end{align*} using in the second equality that $\mathbb{E}_{q \sim \tau}[q-p]=0$ by Bayes plausibility. The relationship between $\Phi_p$ and $\Phi$ is depicted in Figure \ref{fig:Bregman}. \begin{figure}[H] \begin{center} \includegraphics[scale=0.65]{bregman.pdf} \end{center} \caption{Relationship between $\Phi_p$ and $\Phi$.} \label{fig:Bregman} \end{figure} \begin{example} Consider entropy cost $C_{\text{Ent}}(p, \tau) = H(p) - \mathbb{E}_{q \sim \tau}[H(q)]$.
The Bregman divergence of entropy is KL divergence \citep{Bregman1967}, so \[C_{\text{Ent}}(p, \tau) = H(p) - \mathbb{E}_{q \sim \tau}[H(q)] = \mathbb{E}_{q \sim \tau}[D(q \| p)].\] Thus we can view the cost of a signal that generates the distribution of beliefs $\tau$ either as the expected reduction in the entropy of the agent's belief, or as the expected KL divergence from the agent's prior to the realized posterior belief. \end{example} \section{Prior-Independent Costs} \label{sec:PriorIndependent} We now turn to cost functions that do not depend on the agent's prior belief. If the cost of information is exogenous to the agent---for example, a price determined within a market, or a physical cost of producing information---then we may expect the cost of acquiring information to be the same for all consumers regardless of their beliefs or expertise in the area, and thus prior independent. One common cost specification is the following. \begin{example} \label{ex:GaussianPrecision} In the setting of Example \ref{ex:GaussianCost}, let \begin{equation} \label{cost:Precision} C(\sigma_\eps^2) = \frac{\kappa}{\sigma_\eps^2} \end{equation} Then the cost of the signal scales linearly with the precision of the signal, $1/\sigma_{\eps}^2$. This formulation of the cost is especially sensible if we interpret $\theta$ as an unknown population parameter (for instance, the average height in a population) and the signal as a sample of individuals from this population. Modeling each observation as $X_i = \theta + \eps_i$ with $\eps_i \sim \mathcal{N}(0,\sigma^2)$ independent of $\theta$ and independent across observations, the conditional distribution of $\theta$ given the sample $(X_1, \dots, X_n)$ is the same as the conditional distribution of $\theta$ given the signal $X = \theta + \delta, \,\, \delta \sim \mathcal{N}(0, \sigma^2/n)$ (see Exercise \ref{ex:Average}). So (\ref{cost:Precision}) corresponds to a fixed cost of $\kappa/\sigma^2$ for each individual in the sample.
This cost function is used in Wald's classic model of sequential sampling \citep{Wald,ArrowBlackwellGirshick}, and is a common modeling choice in continuous-time sequential sampling problems where the signal corresponds to observation of a Brownian motion \citep{FudenbergStrackStrzalecki,LiangMuSyrgkanis}. \end{example} We now present a generalization of the above cost function due to \citet{PomattoStrackTamuz}. Let $\Theta$ be a finite set and $S$ be a set of signal realizations equipped with $\sigma$-algebra $\Sigma$, with $\Delta(S)$ denoting the set of measurable probability distributions on $S$. A signal is a mapping $ \sigma: \Theta \to \Delta(S)$, and we use $\sigma_\theta \equiv \sigma( \cdot \mid \theta) \in \Delta(S)$ to denote the conditional distribution over signal realizations when the state is $\theta$. \begin{definition} The log-likelihood ratio between states $\theta$ and $\theta'$ at signal realization $s$ is \[\ell^\sigma_{\theta,\theta'}(s) = \ln\left(\frac{d\sigma_\theta(s)}{d\sigma_{\theta'}(s)}\right)\] \end{definition} \begin{definition} For any state $\theta \in \Theta$ and map $\alpha: \Theta \rightarrow \mathbb{N}$, define \[M_\theta^\sigma(\alpha) = \int_S \left\vert \prod_{\theta' \neq \theta} \left(\ell_{\theta,\theta'}^\sigma(s)\right)^{\alpha(\theta')} \right\vert d\sigma_\theta\] \end{definition} \begin{assumption} \label{assp:FiniteMoment} The expectation $M_\theta^\sigma(\alpha)$ is finite for every $\theta$ and every $\alpha: \Theta \rightarrow \mathbb{N}$. \end{assumption} This assumption says that the log-likelihood ratios have finite moments, ruling out for example the signal structure \[\begin{array}{ccc} & s_1 & s_2 \\ \theta_1 & 0 & 1 \\ \theta_2 & \frac12 & \frac12 \end{array}\] where the signal realization $s_1$ is perfectly revealing of the state $\theta_2$. Let $\mathcal{E}$ be the class of all signals satisfying Assumption \ref{assp:FiniteMoment}. 
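Assumption \ref{assp:FiniteMoment} is easy to check for small signal structures. The sketch below (Python, assuming \texttt{numpy}; the conditional probabilities are hypothetical) computes $M^\sigma_\theta(\alpha)$ for a full-support signal, and shows that the excluded example above fails the assumption.

```python
import numpy as np

def moment(cond, theta, alpha):
    """M_theta(alpha): E over sigma_theta of |prod_{theta' != theta} ell^alpha|."""
    total = 0.0
    for s, prob in enumerate(cond[theta]):
        prod = 1.0
        for other in range(len(cond)):
            if other != theta:
                ell = np.log(cond[theta][s] / cond[other][s])  # log-likelihood ratio
                prod *= ell ** alpha[other]
        total += prob * abs(prod)
    return total

# Full-support conditionals: every log-likelihood ratio is finite.
cond = np.array([[0.7, 0.3],    # sigma(. | theta_1)
                 [0.4, 0.6]])   # sigma(. | theta_2)
for a in range(5):
    assert np.isfinite(moment(cond, 0, [0, a]))
    assert np.isfinite(moment(cond, 1, [a, 0]))

# The excluded structure: s_1 has probability zero under theta_1, so the
# log-likelihood ratio at s_1 is infinite and the moment is not finite.
bad = np.array([[0.0, 1.0],
                [0.5, 0.5]])
with np.errstate(divide="ignore"):
    assert not np.isfinite(moment(bad, 1, [1, 0]))
```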
An \emph{information cost function} is any map $C: \mathcal{E} \rightarrow [0,\infty)$. \citet{PomattoStrackTamuz} propose four axioms that such a cost function should further satisfy. \begin{axiom}[Consistency with the Blackwell order] \label{axiom:Blackwell} If $\sigma$ dominates $\sigma'$ in the Blackwell order, then $C(\sigma)\geq C(\sigma')$. \end{axiom} That is, more informative signals are more costly to acquire. \begin{definition}[Combining Independent Signals] For any two signals $\sigma: \Theta \rightarrow \Delta(S)$ and $\sigma': \Theta \rightarrow \Delta(S')$, let $\sigma \otimes \sigma'$ denote the product signal \[\sigma \otimes \sigma': \Theta \rightarrow \Delta(S \times S')\] where $(\sigma \otimes \sigma')(s,s' \mid \theta)=\sigma(s\mid \theta)\sigma'(s'\mid \theta)$. \end{definition} \begin{axiom}[Additivity with respect to independent experiments] \label{axiom:AdditiveCost} For any two signals $\sigma$ and $\sigma'$, $C(\sigma \otimes \sigma')= C(\sigma) + C(\sigma')$. \end{axiom} That is, the cost of acquiring two (conditionally) independent signals is equal to the sum of their costs. This axiom imposes a constant marginal cost on information similar to the one used to motivate Example \ref{ex:GaussianPrecision}. \begin{definition}[Diluting Signals] For any signal $\sigma$, the $\alpha$-\emph{dilution} of $\sigma$, denoted $\alpha \cdot \sigma$, is a signal where with probability $\alpha$ the realization of $\sigma$ is observed, and otherwise a completely uninformative signal is observed. Formally, $\alpha \cdot \sigma$ is a map from $\Theta$ to $\Delta(S \cup \{\emptyset\})$ where the signal outcome $\emptyset$ has a constant $1-\alpha$ probability at every state $\theta \in \Theta$, and the remaining probability is assigned to $S$ in proportion to $\sigma$.
\end{definition} \begin{axiom}[Linearity in the ``dilution'' of the experiment] \label{axiom:LinearDilution} $C(\alpha \cdot \sigma) = \alpha \cdot C(\sigma)$ for every signal $\sigma$ and weight $\alpha \in [0,1]$. \end{axiom} That is, the cost of a signal is linear in the probability that it generates information. \begin{remark} Every posterior separable cost function $C(p,\tau) = \mathbb{E}_{q \sim \tau}[\Phi_p(q)]$ satisfies Axiom \ref{axiom:LinearDilution}. To see this, observe that the distribution over posterior beliefs given the diluted signal $\alpha \cdot \sigma$, denoted $\tau_{\alpha \cdot \sigma}$, is the convex combination that puts weight $\alpha$ on the distribution $\tau_\sigma$ generated by $\sigma$, and weight $1-\alpha$ on the prior. So \begin{align*} C(p,\tau_{\alpha \cdot \sigma}) & = \mathbb{E}_{q \sim \alpha \tau_\sigma + (1-\alpha) \delta_p}[\Phi_p(q)] \\ & = \alpha \mathbb{E}_{q \sim \tau_\sigma}[\Phi_p(q)] + (1-\alpha) \Phi_p(p)\\ & = \alpha \cdot C(p,\tau_\sigma) \end{align*} where the second equality uses that $C$ is affine in $\tau$ and the third uses that $\Phi_p(p)=0$ in the definition of a posterior separable cost function. \end{remark} The final axiom imposes continuity of the cost function with respect to a nonstandard (pseudo)-metric given below.\footnote{This is a pseudometric rather than a metric, since $d_N(\sigma,\sigma')$ is equal to zero for $\sigma \neq \sigma'$ if they induce the same distribution over posterior beliefs.} \begin{definition} Given an upper bound $N \geq 1$, define \[d_N(\sigma,\sigma') = \max_{\theta \in \Theta} d_{TV}(\sigma_\theta,\sigma'_\theta) + \max_{\theta \in \Theta} \max_{\alpha \in \{0, \dots, N\}^\Theta} \vert M_\theta^\sigma(\alpha) - M_\theta^{\sigma'}(\alpha)\vert\] where $d_{TV}$ denotes the total variation distance and $\{0,\dots,N\}^\Theta$ denotes the set of maps $\alpha: \Theta \rightarrow \{0,\dots,N\}$.
\end{definition} Two signals $\sigma$ and $\sigma'$ are close under this pseudo-metric if for every state $\theta$, the conditional signal distributions $\sigma_\theta$ and $\sigma'_\theta$ are close in total-variation distance and, additionally, the log-likelihood ratios have similar moments, for any vector of moments at most $(N,\dots, N)$. \begin{axiom}[Continuity] \label{axiom:Continuity} The function $C$ is uniformly continuous with respect to $d_{N}$. \end{axiom} \begin{remark} The topology of weak convergence of likelihood ratios and the topology of convergence of likelihood ratios in total variation distance are both more standard. But no cost function which satisfies Axioms \ref{axiom:Blackwell}-\ref{axiom:LinearDilution} is continuous in these alternative topologies. To see this, let $\theta$ be the unknown bias of a coin, and let $\sigma_n$ be the signal where with probability $1/n$ the outcome of $n$ independent flips of this coin is observed, and otherwise no information is revealed. Axioms \ref{axiom:Blackwell}-\ref{axiom:LinearDilution} imply that $C(\sigma_n) = C(\sigma_{n'})$ for all finite $n,n'$: since $\sigma_n$ is the $1/n$-dilution of $n$ independent copies of a single flip, $C(\sigma_n) = \frac{1}{n} \cdot n \cdot C(\sigma_1) = C(\sigma_1)$ by Axioms \ref{axiom:AdditiveCost} and \ref{axiom:LinearDilution}. But the likelihood ratios of these signals converge in the weak topology (and in the total variation topology) to those of the signal that produces no information, and thus a stronger form of Axiom \ref{axiom:Continuity} based on either of these alternative topologies would require these signals to all have zero cost.
\end{remark} \begin{proposition} \label{prop:PST} The cost function $C: \mathcal{E} \rightarrow [0,\infty)$ satisfies Axioms \ref{axiom:Blackwell}-\ref{axiom:Continuity} if and only if there exists a unique collection of $\mathbb{R}_+$-valued parameters $(\beta_{\theta,\theta'})_{\theta \neq \theta'}$ such that \begin{equation} \label{eq:PST} C(\sigma) = \sum_{\theta \neq \theta'} \beta_{\theta,\theta'} \times \underbrace{\int_S \ln \frac{d\sigma_\theta(s)}{d\sigma_{\theta'}(s)} d\sigma_\theta(s)}_{\mbox{KL-divergence from $\sigma(\cdot \mid \theta')$ to $\sigma(\cdot \mid \theta)$}} \end{equation} \end{proposition} As discussed in Section \ref{sec:KL}, the KL-divergence from $\sigma( \cdot \mid \theta')$ to $\sigma(\cdot \mid \theta)$ is a measure of how different the distributions are. The larger this divergence is, the easier it is to reject the hypothesis that the state is $\theta'$ when it truly is $\theta$. \begin{remark} Axiom \ref{axiom:Continuity} can be dispensed with if $\Theta = \{\theta_0,\theta_1\}$, in which case Proposition \ref{prop:PST} simplifies to the statement that $C$ satisfies Axioms \ref{axiom:Blackwell}-\ref{axiom:LinearDilution} if and only if there exist parameters $\beta_{01},\beta_{10} \geq 0$ such that \[C(\sigma) = \beta_{01} D(\sigma(\cdot \mid \theta_0) \| \sigma(\cdot \mid \theta_1)) + \beta_{10} D(\sigma(\cdot \mid \theta_1) \| \sigma(\cdot \mid \theta_0)).\] \end{remark} A notable contrast with entropy cost is that this cost function permits differentiation between states. \begin{example}[\citet{PomattoStrackTamuz}] \label{ex:DistinguishStates} Suppose the unknown state $\theta$ is the US GDP per capita, and the agent holds a uniform prior over $\Theta = \{20{,}000, \dots, 80{,}000\}$. Then under entropy cost $C_{Ent}$, it is equally costly to acquire the signal that reveals whether $\theta$ is above or below \$50,000, or the signal that reveals whether $\theta$ is even or odd.
\end{example} \noindent The free parameters $\beta_{\theta,\theta'}$ in the representation in (\ref{eq:PST}) reflect potentially different costs of distinguishing between different pairs of states. Specifically, we can interpret each $\beta_{\theta,\theta'}$ as the marginal cost of increasing the expected log-likelihood ratio of a signal with respect to states $\theta$ and $\theta'$ (when $\theta$ is the true state). Thus in Example \ref{ex:DistinguishStates}, we may specify (for example) that it is easier to distinguish between states that are far apart than those that are nearby, i.e., if GDP is in fact 80,000 then it is easier to rule out that GDP is 20,000 than it is to rule out that it is 79,999. In the special case where no pair of states is a priori harder to distinguish than another, all coefficients are equal to one another. \begin{example} Returning to the setting of Example \ref{ex:GaussianCost}, where we now use $C(\sigma_\eps^2)$ to mean the cost of acquiring the signal $X=\theta +\eps$, $\eps \sim \mathcal{N}(0,\sigma_\eps^2)$, we have \[C(\sigma_\eps^2) = \sum_{\theta \neq \theta'} \beta_{\theta,\theta'} \frac{(\theta - \theta')^2}{2\sigma_\eps^2}.\] This nests the precision of the signal $(1/\sigma_\eps^2)$ as a special case when $\beta_{\theta,\theta'} = \frac{1}{(\theta - \theta')^2}$, with the interpretation that states that are closer (in squared distance) are harder to distinguish. \end{example} \begin{remark} The class of cost functions identified in Proposition \ref{prop:PST} does not presuppose that the agent is Bayesian and has a prior belief over the state space.
But if the agent does have a prior $p$, then the cost of the signal that induces distribution $\tau$ over posterior beliefs can be restated as \begin{equation} \label{eq:PSVersion} \mathbb{E}_{q \sim \tau}[\Phi_p(q)] \end{equation} where \begin{equation} \label{eq:Phi} \Phi_p(q) = \sum_{\theta,\theta'} \beta_{\theta,\theta'} \left[\frac{q_\theta}{p_\theta} \ln \left(\frac{q_\theta}{q_{\theta'}}\right) - \ln \left(\frac{p_\theta}{p_{\theta'}}\right)\right] \end{equation} (note that $\Phi_p(p)=0$, as the definition of a posterior separable cost function requires), so this family of cost functions belongs to the class of posterior-separable cost functions (Definition \ref{def:PosteriorSeparable}), although not to the class of uniform posterior separable cost functions (Definition \ref{def:UPS}).\footnote{\citet{PomattoStrackTamuz} show that a generalization of the representation in (\ref{eq:PST}), which permits the parameters $\beta_{\theta,\theta'}$ to depend on the prior, can accommodate uniformly posterior separable cost functions.} \begin{exercise} Verify that (\ref{eq:PSVersion}) is equivalent to the original representation in (\ref{eq:PST}) when $\Phi_p$ is defined according to (\ref{eq:Phi}). \begin{hint} Recall from Section \ref{sec:Bayes} that the prior $p$ and posterior $q$ at signal realization $s$ are related by $\log\left(\frac{q(\theta)}{q(\theta')}\right) = \log\left(\frac{p(\theta)}{p(\theta')}\right) + \log\left(\frac{d\sigma_\theta}{d\sigma_{\theta'}}(s)\right)$, and that conditioning on $\theta$ reweights the distribution over posteriors by the factor $q_\theta/p_\theta$. \end{hint} \end{exercise} \end{remark} \section{Additional Exercises} \begin{exercise}[Chain Rule for KL Divergence] Suppose $p,q \in \Delta(\mathcal{X} \times \mathcal{Y})$ with $p_X$ and $q_X$ denoting the marginal distributions on $\mathcal{X}$, and $p_{Y \mid X}$ and $q_{Y \mid X}$ denoting the respective conditional distributions. Prove that \[D(p \| q) = D(p_X \| q_X) + D(p_{Y \mid X} \| q_{Y \mid X}).\] \end{exercise} \begin{exercise} Prove that the entropy cost function in Definition \ref{def:EntropyCost} fails \citet{PomattoStrackTamuz}'s Axiom \ref{axiom:AdditiveCost}.
\end{exercise} \part{\sc{Learning}} \chapter{Learning} \label{sec:Learning} We now extend the Bayesian framework described in Section \ref{sec:Preliminaries} to accommodate learning from a sequence of signals. Section \ref{sec:Doob} asks whether an agent will eventually learn the state. Section \ref{sec:Merging} asks whether agents with different prior beliefs will eventually hold similar beliefs. Section \ref{sec:KLS} asks whether agents with different priors expect information to reduce their disagreement (thus studying a second-order belief). Section \ref{sec:CommonLearning} asks whether agents will commonly learn, i.e., whether agents will eventually believe that other agents believe that they ... have learned the state. \section{Preliminaries} \label{sec:LearningFramework} Let $(\Theta,d_\Theta)$ be a complete separable metric space endowed with its Borel $\sigma$-algebra $\Sigma$, and let $p \in \Delta(\Theta)$ be a probability measure on $(\Theta,\Sigma)$. As before, we interpret $\theta \sim p$ as an unknown parameter of interest. The space of signal realizations $(\mathcal{X}, d_X)$ is again a complete separable metric space endowed with its Borel $\sigma$-algebra $\mathcal{B}$. There is an infinite sequence of signal realizations $X_1, X_2, \dots$ taking values in the set $\mathcal{X}^\infty = \mathcal{X}_1 \times \mathcal{X}_2 \times \dots$ where each $\mathcal{X}_t$ is a copy of $\mathcal{X}$. Conditional on the realized $\theta$, signals $X_1, X_2, \dots$ are generated iid according to a conditional density $f_\theta$, and we refer to each $X_t$ as the period-$t$ signal. The full state space is $\Omega = \Theta \times \mathcal{X}^\infty = \Theta \times \mathcal{X}_1 \times \mathcal{X}_2 \times \dots$ and it is equipped with the product $\sigma$-algebra $\Sigma \times \mathcal{B}_1 \times \mathcal{B}_2 \times \dots$ where each $\mathcal{B}_t$ is a copy of $\mathcal{B}$.
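The generative process just described---first $\theta \sim p$, then conditionally iid signals $X_t \sim f_\theta$---can be sketched as follows for finite state and signal spaces. The two-state numbers below are hypothetical, chosen to echo the binary example of the next section:

```python
import random

def simulate(prior, densities, T, seed=0):
    """Draw theta ~ prior, then T conditionally iid signals X_t ~ f_theta.
    prior: dict state -> probability; densities: dict state -> {realization: probability}."""
    rng = random.Random(seed)
    theta = rng.choices(list(prior), weights=list(prior.values()))[0]
    f = densities[theta]
    xs = rng.choices(list(f), weights=list(f.values()), k=T)
    return theta, xs

# Hypothetical two-state instance of the framework:
prior = {"A": 0.5, "B": 0.5}
densities = {"A": {"a": 0.7, "b": 0.3}, "B": {"a": 0.3, "b": 0.7}}
theta, xs = simulate(prior, densities, T=10)
print(theta, xs)
```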
Throughout, we use $P$ to denote the measure on $\Omega$ induced by $p$ and the family $(f_\theta)_{\theta \in \Theta}$, and we use $P_\theta$ to denote the conditional measure on $\mathcal{X}^\infty$ when the parameter is $\theta$. \section{Binary Example} \label{sec:LearningExample} First consider a single-agent environment with two possible parameter values $\theta \in \{A,B\}$. Each period $t\in \mathbb{Z}_+$ a signal realization from $\{a,b\}$ is generated iid according to \[\begin{array}{ccc} & a & b\\ A & q& 1-q \\ B & 1-q & q \end{array}\] where $q>1/2$. Will an agent who holds a prior belief that the probability of $A$ is $p\in (0,1)$ eventually learn the value of the parameter? Suppose first that the parameter is $\theta=A$, in which case signals are drawn iid according to $f_A = (q,1-q)$. For any infinite sequence $\bold{x} \in \{a,b\}^\infty$ and any $t \in \mathbb{Z}_+$, let \[n_t(\bold{x}) \equiv \#\{ 1 \leq t' \leq t : x_{t'} =a\}\] denote the number of $a$-realizations among the first $t$ realizations of $\bold{x}$. By the strong law of large numbers, there is a set $\mathcal{X}_0^\infty \subseteq \mathcal{X}^\infty$ of $P_A$-measure 1 such that \begin{align*} \label{eq:LimitingQ} \lim_{t \rightarrow \infty} \frac{n_t(\bold{x})}{t} = q \quad \forall \bold{x} \in \mathcal{X}_0^\infty. \end{align*} That is, the limiting fraction of $a$-realizations is $q$ along each sequence in $\mathcal{X}_0^\infty$. Since signals are assumed to be conditionally independent, the agent's posterior belief about $A$ following any sequence $(x_1, \dots, x_t)$ depends only on the count of $a$ and $b$-realizations. Let $n$ denote the number of $a$-realizations. 
Then applying Bayes' rule (Section \ref{sec:BayesRule}), the agent's posterior belief is \begin{align*} P(\theta = A \mid x_1, \dots, x_t) & = \frac{p q^n (1-q)^{t-n}}{p q^n (1-q)^{t-n} + (1-p) (1-q)^n q^{t-n}} \\ & = \frac{1}{1 + \frac{1-p}{p} \left(\frac{1-q}{q}\right)^{2n-t}} \end{align*} Along any $\bold{x} \in \mathcal{X}_0^\infty$ we have \[\lim_{t \rightarrow \infty} P(\theta=A \mid x_1, \dots, x_t) = \lim_{t \rightarrow \infty} \left(1+\frac{1-p}{p} \left[\left(\frac{1-q}{q}\right)^{2\frac{n_t(\bold{x})}{t} - 1}\right]^t \right)^{-1} = 1\] recalling that $q>1/2$ by assumption. So the agent's posterior belief $P_A$-almost surely converges to certainty of the correct value of the parameter, $A$. An identical argument shows that when the parameter is $B$ then the agent's posterior belief $P_B$-almost surely converges to certainty of $B$. Thus the agent (eventually) learns the parameter. \section{Doob's Consistency Theorem} \label{sec:Doob} A classic result due to \citet{Doob} generalizes the individual learning result from the previous section.\footnote{Our presentation of this material follows \citet{Miller2018}.} \begin{assumption}[Identifiability] \label{assp:Identifiability} If $\theta \neq \theta'$, then $P_\theta \neq P_{\theta'}$. \end{assumption} In words, Assumption \ref{assp:Identifiability} is satisfied if no pair of parameter values induce the same distribution over signals, meaning the parameter is identifiable from its observable implications. \begin{proposition} \label{prop:Doob1} Suppose Assumption \ref{assp:Identifiability} is satisfied, and let $g:\Theta \rightarrow \mathbb{R}$ be any measurable function satisfying $\mathbb{E}\vert g(\theta)\vert < \infty$. Then \[\lim_{t \rightarrow \infty} \mathbb{E}(g(\theta) \mid X_1, X_2, \dots, X_t) = g(\theta) \quad P\text{-a.s.}\] \end{proposition} In the special case where $g(\theta) = \theta$, the result implies that the posterior expectation of $\theta$ converges to its true value almost surely. 
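Proposition \ref{prop:Doob1} can be illustrated by simulating the binary example of Section \ref{sec:LearningExample} and tracking the closed-form posterior derived above. This is an illustrative sketch; the prior $p=0.5$ and signal accuracy $q=0.7$ are hypothetical choices:

```python
import random

def posterior_path(p, q, theta, T, seed=0):
    """Posterior probability of parameter A after each of T iid draws, using the
    closed form P(A | n a-realizations out of t) = 1 / (1 + ((1-p)/p) ((1-q)/q)**(2n - t))."""
    rng = random.Random(seed)
    prob_a = q if theta == "A" else 1 - q  # per-period probability of an a-realization
    n = 0
    path = []
    for t in range(1, T + 1):
        if rng.random() < prob_a:
            n += 1
        path.append(1.0 / (1.0 + (1 - p) / p * ((1 - q) / q) ** (2 * n - t)))
    return path

path = posterior_path(p=0.5, q=0.7, theta="A", T=2000)
print(path[-1])  # close to 1: the posterior concentrates on the true parameter
```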
The following proposition is a Bayesian analogue of the above result, and says that posterior beliefs converge almost surely to a degenerate measure at the true state. \begin{proposition}[Posterior Consistency] \label{prop:PosteriorConsistency} Suppose Assumption \ref{assp:Identifiability} holds. Then, there exists a set $\Theta' \subseteq \Theta$ with $p(\Theta')=1$ such that for every $\theta_0 \in \Theta'$ and every neighborhood $B$ of $\theta_0$, \[\lim_{t \rightarrow \infty} \mathbb{P}(\theta \in B \mid X_1, X_2, \dots, X_t) = 1 \quad P_{\theta_0} \text{-a.s.}\] \end{proposition} That is, for any prior distribution, the posterior belief is guaranteed to concentrate in a neighborhood of the true parameter $\theta$---except possibly on a set of parameter values that has measure zero under the agent's prior. \begin{remark} The qualification that learning occurs except on a set of ``measure zero under the agent's prior'' is less innocuous than it might initially seem. Consider $\Theta = \mathbb{R}$ where the agent's prior $p \in \Delta(\Theta)$ is a point mass at $\theta=0$. Then the posterior is also a point mass at zero, so the agent will fail to learn any parameter which is different from $0$. But because the set $\mathbb{R} \backslash \{0\}$ has measure zero under the agent's prior, the statement of the result holds in a trivial sense. See also the subsequent discussion in Section \ref{sec:Berk}. \end{remark} \begin{remark} Proposition \ref{prop:PosteriorConsistency} implies that the agent's posterior belief converges almost surely to a point mass on the true parameter in the topology of weak convergence, i.e., there is a $P_\theta$-measure 1 set of sequences of signal realizations such that \[d(P^t,\delta_\theta) \rightarrow 0\] along each of these sequences, where $d$ denotes the L\'evy-Prokhorov metric and $P^t \in \Delta(\Theta)$ denotes the posterior belief after observing the first $t$ coordinates of the sequence.
Since $d$ is a metric, we also have that for any alternative prior $\widetilde{p} \in \Delta(\Theta)$ and corresponding posterior belief $\widetilde{P}^t \in \Delta(\Theta)$ (updating to the same $t$ realizations), \[d(P^t,\widetilde{P}^t) \leq d(P^t,\delta_\theta) + d(\delta_\theta,\widetilde{P}^t).\] Since the RHS converges to zero almost surely (by Proposition \ref{prop:PosteriorConsistency}), the two agents' posterior beliefs converge to one another almost surely in the topology of weak convergence. The subsequent section provides an even stronger version of this result. \end{remark} \section{Merging of Beliefs} \label{sec:Merging} Assume that for each $t \geq 1$, a unique conditional probability distribution $P^t(x_1, \dots, x_t)(C)$ exists for all realized sequences $x_1, \dots, x_t \in \mathcal{X}_1 \times \dots \times \mathcal{X}_t$ and future events $C \in \mathcal{B}_{t+1} \times \mathcal{B}_{t+2} \times \dots $.\footnote{\citet{BlackwellDubins} work with the more general notion of ``predictive probabilities'' $P$ where conditional probabilities can be defined.} \citet{BlackwellDubins} show that even if agents start out with different prior beliefs, their conditional beliefs will merge to one another in a strong sense. To state the result formally, recall that for any two probability measures $\mu_1,\mu_2$ defined on the same $\sigma$-algebra $\mathcal{F}$, \emph{total variation distance} and \emph{absolute continuity} are defined as follows. \begin{definition} The \emph{total variation distance} between $\mu_1$ and $\mu_2$ is \[d_{TV}(\mu_1, \mu_2) = \sup_{ D \in \mathcal{F}} \vert \mu_1(D) - \mu_2(D) \vert\] \end{definition} \begin{definition} If $\mu_2(D)=0$ implies $\mu_1(D)=0$ for every $D \in \mathcal{F}$, then $\mu_1$ is \emph{absolutely continuous} with respect to $\mu_2$, denoted $\mu_1 \ll \mu_2$.
\end{definition} Now we are ready to state the main result: \begin{proposition} \label{prop:BlackwellDubins} Suppose $p,\widetilde{p} \in \Delta(\Theta)$ are absolutely continuous with respect to one another, and define $P$, $\widetilde{P}$ to be the measures on $\Omega$ induced by the respective priors $p,\widetilde{p}$, and the family $(P_\theta)_{\theta \in \Theta}$. Then \[ \lim_{t \rightarrow \infty} d_{TV}(P^t(x_1, \dots, x_t), \widetilde{P}^t(x_1, \dots, x_t))=0 \quad P\mbox{-almost surely}\] \end{proposition} That is, if two agents hold different prior beliefs about the parameter but agree on the set of measure-0 events, then their conditional beliefs merge in a strong sense: For \emph{all} measurable future events, agents eventually assign similar probabilities. \begin{example} To clarify the difference between this result and the one examined in the previous section, consider the problem of learning the unknown bias of a coin, which is parametrized by $\theta \in [0,1]$. A coin whose bias is $\theta$ lands on Heads with probability $\theta$ and on Tails with probability $1-\theta$. Two agents have different prior beliefs on $[0,1]$ and each observe $t$ independent flips of this coin. Proposition \ref{prop:PosteriorConsistency} says that the two agents will eventually learn the bias of the coin as $t$ grows large. Proposition \ref{prop:BlackwellDubins} says instead: Suppose the two agents have observed $t$ independent flips of the coin; then, their beliefs over all events regarding the future---e.g., that over half of the remaining coin flips will turn up Heads, or that the limiting fraction of Heads realizations is 1/2---must eventually become close (uniformly across such events). \end{example} \section{(Expected) Disagreement} \label{sec:KLS} We now turn to the impact of information on agents' second-order beliefs---i.e., what they think about what others think.
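Before developing the second-order analysis, here is a numerical sketch of the merging just discussed: two agents whose priors over a discretized coin bias are mutually absolutely continuous update on a common sequence of flips, and the total variation distance between their posteriors over the bias shrinks (which, by the triangle-inequality argument of the earlier remark, drives the merging of their predictive beliefs). The grid, priors, and true bias are illustrative choices:

```python
import random

def tv(p, q):
    """Total variation distance between two finite distributions (as lists)."""
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

def update(post, grid, heads):
    """One Bayes update of a posterior over the discretized coin bias."""
    w = [p * (th if heads else 1 - th) for p, th in zip(post, grid)]
    z = sum(w)
    return [x / z for x in w]

grid = [i / 50 for i in range(1, 50)]     # coin biases in (0, 1)
post_a = [1.0 / len(grid)] * len(grid)    # first agent: uniform prior
z = sum(grid)
post_b = [th / z for th in grid]          # second agent: a different, mutually abs. continuous prior

rng = random.Random(1)
d0 = tv(post_a, post_b)                   # disagreement before any flips
for _ in range(500):
    heads = rng.random() < 0.6            # true bias 0.6
    post_a = update(post_a, grid, heads)
    post_b = update(post_b, grid, heads)
print(d0, tv(post_a, post_b))  # the second number is much smaller than the first
```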
\citet{KartikLeeSuen} show that when signals satisfy an MLRP condition, then agents with different beliefs expect information to reduce the extent of disagreement. Here we assume the set of parameters $\Theta \subseteq \mathbb{R}$ is finite and ordered. Two signals $X$ and $\widetilde{X}$ respectively take values in $\mathcal{X}$ and $\mathcal{\widetilde{X}}$, and we assume that $X$ is Blackwell more informative than $\widetilde{X}$. There are two agents, Ann and Bob, who have common knowledge of the conditional distributions $\{f_{X \mid \theta}(x \mid \theta)\}_{\theta \in \Theta}$ and $\{f_{\widetilde{X}\mid \theta}(\widetilde{x}\mid \theta)\}_{\theta \in \Theta}$. But Ann and Bob hold different prior beliefs $f_\theta^A,f_\theta^B \in \Delta(\Theta)$ about the parameter. We use $F^A$ and $F^B$ to denote their perceived joint distributions of $(\theta,X,\widetilde{X})$ (induced by the respective priors and the common knowledge signal distributions), and $\mathbb{E}_A$ and $\mathbb{E}_B$ to denote expectations with respect to these distributions. \begin{assumption} \label{assp:MLRP} There is an order $\succ$ on $\mathcal{X}$ and an order $\widetilde{\succ}$ on $\mathcal{\widetilde{X}}$ such that the families $\{f_{X\mid \theta}(\cdot \mid \theta)\}_{\theta \in \Theta}$ and $\{f_{\widetilde{X}\mid \theta}(\cdot \mid \theta)\}_{\theta \in \Theta}$ each have MLRP (see Definition \ref{def:MLRP}). \end{assumption} \begin{assumption} \label{assp:LR} Bob's prior $f_\theta^B$ likelihood-ratio dominates Ann's prior $f_\theta^A$ (see Definition \ref{def:LRDominance}). \end{assumption} The agents' prior expectations of the parameter are $\mu_A \equiv \mathbb{E}_A(\theta)$ and $\mu_B \equiv \mathbb{E}_B(\theta)$. 
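Ann's expectation of Bob's posterior mean, $\mathbb{E}_A[\mathbb{E}_B(\theta \mid X)]$, which is analyzed next, can be computed by direct enumeration when states and signals are finite. The following sketch uses hypothetical numbers: a binary state, a binary signal family with MLRP, and priors ordered by likelihood-ratio dominance:

```python
def expectations(prior_a, prior_b, likelihood, states, signals):
    """Compute (mu_A, E_A[E_B(theta|X)], mu_B) for finite states and signals.
    likelihood[theta][x] = f(x | theta); priors are dicts over states."""
    def posterior_mean_b(x):
        w = {th: prior_b[th] * likelihood[th][x] for th in states}
        z = sum(w.values())
        return sum(th * w[th] for th in states) / z
    mu_a = sum(th * prior_a[th] for th in states)
    mu_b = sum(th * prior_b[th] for th in states)
    # Ann's predictive distribution over signal realizations
    marg_a = {x: sum(prior_a[th] * likelihood[th][x] for th in states) for x in signals}
    mu_ab = sum(marg_a[x] * posterior_mean_b(x) for x in signals)
    return mu_a, mu_ab, mu_b

states, signals = (0, 1), ("l", "h")
likelihood = {0: {"l": 0.8, "h": 0.2}, 1: {"l": 0.2, "h": 0.8}}  # an MLRP family
prior_a = {0: 0.7, 1: 0.3}   # Ann's prior
prior_b = {0: 0.4, 1: 0.6}   # Bob's prior likelihood-ratio dominates Ann's
mu_a, mu_ab, mu_b = expectations(prior_a, prior_b, likelihood, states, signals)
print(mu_a, mu_ab, mu_b)  # mu_a <= mu_ab <= mu_b
```

With these numbers $\mu_A = 0.3$ and $\mu_B = 0.6$, and the computed value lies between them, consistent with the ordering established below.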
We are interested in Ann's prior expectation of Bob's posterior expectation (updated to $X$), and Bob's prior expectation of Ann's posterior expectation (updated to $X$), respectively denoted by \begin{align*} \mu_{AB}(X) &\equiv \mathbb{E}_A[\mathbb{E}_B(\theta \mid X)] \\ \mu_{BA}(X) &\equiv \mathbb{E}_B[\mathbb{E}_A(\theta \mid X)] \end{align*} \begin{proposition} \label{prop:KLS} Suppose Assumptions \ref{assp:MLRP} and \ref{assp:LR} are satisfied. If $X$ is Blackwell more informative than $\widetilde{X}$, then \[\mu_A \leq \mu_{AB}(X) \leq \mu_{AB}(\widetilde{X}) \leq \mu_B\] \[\mu_A \leq \mu_{BA}(\widetilde{X}) \leq \mu_{BA}(X) \leq \mu_B\] \end{proposition} \noindent That is, Ann expects that a more informative experiment will, in expectation, bring Bob's posterior mean closer to Ann's prior, and vice versa. These are both subjective statements, and indeed only one of Ann and Bob can be correct. We'll prove this proposition using the following relationships, which are left as an exercise. \begin{exercise} \label{exercise:KLS} Prove the following statements: \begin{itemize} \item[(a)] $F^B_{\theta \mid X}(\theta \mid X=x)$ first-order stochastically dominates $F^A_{\theta \mid X}(\theta \mid X=x)$ for every signal realization $x \in \mathcal{X}$ \item[(b)] $F^B_{X\mid \widetilde{X}}(X \mid \widetilde{X}=\tilde{x})$ first-order stochastically dominates $F^A_{X \mid \widetilde{X}}(X \mid \tilde{X}=\tilde{x})$ for every signal realization $\tilde{x} \in \widetilde{\mathcal{X}}$ \end{itemize} \end{exercise} \begin{proof} Part (a) of Exercise \ref{exercise:KLS} implies $\int \theta dF^A_{\theta \mid X}(\theta \mid x) \leq \int \theta dF^B_{\theta \mid X}(\theta \mid x)$ for every realization $x$, so also \begin{equation}\label{eq:KLS1} \int \int \theta dF^A_{\theta \mid X}(\theta \mid x) dF^A_{X}(x) \leq \int \int \theta dF^B_{\theta \mid X}(\theta \mid x) dF^A_X(x). 
\end{equation} By assumption that $\{f_{X\mid\theta}(\cdot \mid \theta)\}_{\theta \in \Theta}$ has MLRP, the integral $\int \theta dF^B_{\theta \mid X}(\theta \mid x)$ is an increasing function of $x$. Moreover, Part (b) of Exercise \ref{exercise:KLS} says that $F_X^B$ first-order stochastically dominates $F_X^A$ (taking $\widetilde{X}$ to be any constant signal). Thus \begin{equation} \label{eq:KLS2} \int \int \theta dF^B_{\theta \mid X}(\theta \mid x) dF^A_X(x) \leq \int \int \theta dF^B_{\theta \mid X}(\theta \mid x) dF^B_X(x). \end{equation} Together, (\ref{eq:KLS1}) and (\ref{eq:KLS2}) imply \[\int \int \theta dF^A_{\theta \mid X}(\theta \mid x) dF^A_{X}(x) \leq \int \int \theta dF^B_{\theta \mid X}(\theta \mid x) dF^A_X(x) \leq \int \int \theta dF^B_{\theta \mid X}(\theta \mid x) dF^B_X(x)\] which is precisely the desired inequality $\mu_A \leq \mu_{AB}(X) \leq \mu_B.$ It follows by identical arguments that $\mu_A \leq \mu_{BA}(X) \leq \mu_B$. To show that $\mu_{AB}(\widetilde{X}) \geq \mu_{AB}(X)$, we use the fact that (since $X$ Blackwell-dominates $\widetilde{X}$) we can generate the two variables in such a way that $\widetilde{X}$ is conditionally independent of $\theta$ conditional on $X$.\footnote{See Remark \ref{remark:Garbling} for further detail. Note also that the correlation between $X$ and $\widetilde{X}$ is irrelevant for the comparison of $\mu_{AB}(X)$ and $\mu_{AB}(\widetilde{X})$.} Then on this probability space \begin{align*} \mu_{AB}(\widetilde{X}) &= \mathbb{E}_A \left[\mathbb{E}_B\left(\theta \mid \widetilde{X}\right) \right] \\[2mm] & = \mathbb{E}_A\left[\mathbb{E}_B\left(\mathbb{E}_B\left(\theta \mid X, \widetilde{X}\right) \mid \widetilde{X}\right)\right] && \mbox{by L.I.E.} \\[2mm] & = \mathbb{E}_A\left[\mathbb{E}_B\left(\mathbb{E}_B\left(\theta \mid X\right) \mid \widetilde{X}\right) \right] && \mbox{since $\widetilde{X}\perp \!\!\! 
\perp \theta \mid X$} \\[2mm] &= \int \int \mathbb{E}_B(\theta \mid x) dF^B_{X \mid \widetilde{X}}(x \mid \widetilde{x}) dF^A_{\widetilde{X}}(\widetilde{x}) \\[2mm] & \geq \int \int \mathbb{E}_B(\theta \mid x) dF^A_{X \mid \widetilde{X}} (x \mid \widetilde{x}) dF^A_{\widetilde{X}}(\widetilde{x}) \\[2mm] & = \mathbb{E}_A\left[\mathbb{E}_A\left(\mathbb{E}_B\left(\theta \mid X\right) \mid \widetilde{X}\right)\right] \\ & = \mathbb{E}_A\left[\mathbb{E}_B\left(\theta \mid X\right)\right] && \mbox{by L.I.E.} \\ & = \mu_{AB}(X) \end{align*} where the crucial inequality follows by observing that $\mathbb{E}_B(\theta \mid x)$ is an increasing function of $x$ (by Assumption \ref{assp:MLRP}) while $F_{X\mid \widetilde{X}}^B(\cdot \mid \widetilde{x})$ first-order stochastically dominates $F_{X\mid \widetilde{X}}^A(\cdot \mid \widetilde{x})$ for every realization of $\widetilde{x}$ (by Part (b) of Exercise \ref{exercise:KLS}). Since the previous arguments apply to show also that $\mu_{AB}(\widetilde{X}) \leq \mu_B$, we are done. \end{proof} \section{Common Learning} \label{sec:CommonLearning} Suppose Assumption \ref{assp:Identifiability} (Identifiability) holds, so that agents eventually learn the true parameter. Does this imply that agents will eventually have \emph{common knowledge} of the true parameter? \cite{CEMS2008} adapt \citet{MondererSamet}'s definition of common $q$-belief for the present learning environment, and show that individual learning does imply common learning when the set of signal realizations is finite, but that this implication may otherwise fail. In what follows recall that each state $\omega \in \Omega = \Theta \times \mathcal{X}^\infty$ describes both the value of the parameter and the infinite sequence of signal profiles. As before, $P_\theta$ denotes the measure on $\mathcal{X}^\infty$ conditional on parameter $\theta$, and again assume that $\Theta$ is finite.
There are two agents $i=1,2$, and (different from the previous sections) we decompose $\mathcal{X} = \mathcal{X}^1 \times \mathcal{X}^2$ where $\mathcal{X}^i$ denotes the set of agent $i$'s signal realizations. Each agent privately observes their own signal each period. We use $h_{it}(\omega)=(x^i_{1}(\omega), \dots, x^i_{t}(\omega))$ for agent $i$'s history at time $t$ when $\omega$ is the realized state, and $\mathcal{H}_{it}$ to denote the $\sigma$-algebra generated by agent $i$'s time-$t$ history (so that $(\mathcal{H}_{it})_{t \geq 1}$ is agent $i$'s filtration). \begin{definition} For any $q \in [0,1]$ and (measurable) event $F$, agent $i$ \emph{$q$-believes} in $F$ at time $t$ on the event \[B_{it}^q(F) = \{\omega \in \Omega \mid P(F \mid h_{it}(\omega)) \geq q\}\] \end{definition} \begin{definition} For any $q \in [0,1]$, there is \emph{common $q$-belief} in $F$ at time $t$ on the event \[C_t^q(F) = \bigcap_{n \geq 1} [B_t^q]^n(F)\] where $B_t^q(F) = B_{1t}^q(F) \bigcap B_{2t}^q(F)$. \end{definition} \begin{definition}[Individual Learning] Agent $i$ \emph{learns} $\theta$ if for each $q \in (0,1)$ there exists $T<\infty$ such that \[P_\theta(B_{it}^q(\{\theta\} \times \mathcal{X}^\infty)) > q \quad \forall t>T\] Equivalently: $\lim_{t \rightarrow \infty} P_\theta(B_{it}^q(\{\theta\} \times \mathcal{X}^\infty))=1$ for all $q\in (0,1)$. Agent $i$ \emph{individually learns} if the agent learns each $\theta \in \Theta$. \end{definition} \begin{definition}[Common Learning] Agents \emph{commonly learn} $\theta$ if for each $q \in (0,1)$ there exists $T<\infty$ such that \[P_\theta(C_{t}^q(\{\theta\} \times \mathcal{X}^\infty)) > q \quad \forall t>T\] Equivalently: $\lim_{t \rightarrow \infty} P_\theta(C_{t}^q(\{\theta\} \times \mathcal{X}^\infty))=1$ for all $q\in (0,1)$. Agents \emph{commonly learn} if they commonly learn each $\theta \in \Theta$.
\end{definition} Clearly if signals are perfectly correlated (or public), so that $P(\theta \mid \mathcal{H}_{1t}) = P(\theta \mid \mathcal{H}_{2t})$ for all $\theta$ and $t$, then individual learning implies common learning. This result also holds at the other extreme of independent signals. \begin{proposition} \label{prop:CommonLearning} Suppose agents individually learn, and their signals are conditionally independent given the parameter. That is, there exist families $(P^i_\theta)_{\theta \in \Theta}$, with each $P_\theta^i \in \Delta(\mathcal{X}^i)$, such that $P_\theta(A \times B)=P_\theta^1(A)P_\theta^2(B)$ for each $\theta \in \Theta$ and measurable $A \subseteq \mathcal{X}^1$, $B \subseteq \mathcal{X}^2$. Then, agents commonly learn. \end{proposition} \cite{CEMS2008} prove this proposition using a result from \citet{MondererSamet} (adapted to the present learning context). \begin{lemma} \label{lemm:MS} Agents commonly learn if and only if for every $\theta \in \Theta$ and $q \in (0,1)$, there is a sequence of events $F_t$ and a period $T$ such that for all $t>T$, \begin{enumerate} \item[(a)] $F_t \subseteq B_t^q(\theta)$ (``$\theta$ is $q$-believed on $F_t$ at time $t$'') \item[(b)] $P_\theta(F_t)>q$ (``probability of $F_t$ is sufficiently high'') \item[(c)] $F_t \subseteq B_{it}^q(F_t)$ for $i=1,2$ (``$F_t$ is evident $q$-belief at time $t$'') \end{enumerate} \end{lemma} We'll now prove Proposition \ref{prop:CommonLearning}. \begin{proof} Henceforth write $\{\theta\}$ for the event $\{\theta\} \times \mathcal{X}^\infty$. Define $F_t = \{\theta\} \cap B_t^{\sqrt{q}}(\theta)$ to be the set of states at which $\theta$ is true and both agents $\sqrt{q}$-believe it. We'll verify that the conditions of Lemma \ref{lemm:MS} hold for the sequence of events $(F_t)_{t=1}^\infty$, from which Proposition \ref{prop:CommonLearning} follows.
First observe that \begin{align*} F_t & \subseteq B_t^{\sqrt{q}}(\theta) && \mbox{by definition of $F_t$}\\ & \subseteq B_t^q(\theta) && \mbox{since $q <\sqrt{q}$} \end{align*} yielding Part (a) of Lemma \ref{lemm:MS}. Part (b) holds since individual learning implies that there exists $T<\infty$ such that for both agents $i=1,2$, \[P_\theta\left(B_{it}^{\sqrt{q}}(\theta)\right)>\sqrt{q} \quad \forall t>T\] and thus \[P_\theta(F_t) = P_\theta\left(B_{1t}^{\sqrt{q}}(\theta)\right)P_\theta\left(B_{2t}^{\sqrt{q}}(\theta)\right) > q \quad \forall t>T\] from the assumption of conditional independence. It remains to show Part (c). First rewrite the set $B_{1t}^q(F_t)$ as follows: \begin{align*} B_{1t}^q(F_t) & = \left\{\omega \mid \mathbb{E}\left[\mathbbm{1}_{F_t} \mid \mathcal{H}_{1t}\right] \geq q \right\} && \mbox{by definition of $B_{1t}^q$}\\ & = \left\{\omega \mid \mathbb{E}\left[\mathbbm{1}_{B_{1t}^{\sqrt{q}}(\theta)} \mathbbm{1}_{B_{2t}^{\sqrt{q}}(\theta) \cap \{\theta\}} \mid \mathcal{H}_{1t}\right]\geq q\right\} && \mbox{by definition of $F_t$}\\ & = \left\{\omega \mid \mathbbm{1}_{B_{1t}^{\sqrt{q}}(\theta)} \mathbb{E}\left[ \mathbbm{1}_{B_{2t}^{\sqrt{q}}(\theta) \cap \{\theta\}} \mid \mathcal{H}_{1t}\right] \geq q \right\} && \mbox{since } B_{1t}^{\sqrt{q}}(\theta) \in \mathcal{H}_{1t} \\ & = B_{1t}^{\sqrt{q}}(\theta) \cap B_{1t}^q \left(B_{2t}^{\sqrt{q}}(\theta) \cap \{\theta\}\right) \end{align*} By definition we have that $F_t \subseteq B_{1t}^{\sqrt{q}}(\theta)$. As above, individual learning implies existence of $T$ sufficiently large that $P_\theta\left(B_{2t}^{\sqrt{q}}(\theta)\right)>\sqrt{q}$ for all $t>T$. Since signals are conditionally independent, agent 1's history is uninformative about agent 2's history, implying that \begin{align} P_\theta\left(B_{2t}^{\sqrt{q}}(\theta) \mid \mathcal{H}_{1t}\right) \geq \sqrt{q} \label{eq:qBound} \end{align} holds uniformly across agent 1 histories (for all $t>T$).
So on $F_t$ (for $t>T$) we have \[P(B_{2t}^{\sqrt{q}}(\theta) \cap \{\theta\} \mid \mathcal{H}_{1t}) = \underbrace{P_\theta(B_{2t}^{\sqrt{q}}(\theta) \mid \mathcal{H}_{1t} )}_{>\sqrt{q} \text{ by } (\ref{eq:qBound})} \underbrace{P(\theta \mid \mathcal{H}_{1t}) }_{>\sqrt{q} \text{ since } F_t \subseteq B_{1t}^{\sqrt{q}}(\theta)} >q.\] Apply Lemma \ref{lemm:MS} and we are done. \end{proof} \begin{remark} This proof extends to an arbitrary finite number of agents, setting $F_t = \{\theta\} \cap B_t^{\sqrt[n]{q}}(\theta)$. \end{remark} \bigskip Although common learning is implied by individual learning when agents have either perfect information or no information about the other agent's history, intermediate cases of correlation can break this result. \\ \begin{example} (A twist on \citet{Rubinstein}'s email game.) \label{ex:EmailGameTwist} The unknown parameter is $\theta \in \{\theta', \theta''\}$, where $0\leq \theta' < \theta'' \leq 1$. Suppose that every period a signal profile is independently drawn according to: \[ \begin{array}{ccc} \mbox{Probability} & \mbox{Agent-1 Signal} & \mbox{Agent-2 Signal} \\ \theta & 0 & 0 \\[-1mm] \eps(1-\theta) & 1 & 0 \\[-1mm] (1-\eps)\eps(1-\theta) & 1 & 1 \\[-1mm] (1-\eps)^2 \eps(1-\theta) & 2 & 1 \\[-1mm] (1-\eps)^3 \eps(1-\theta) & 2 & 2 \\[-1mm] (1-\eps)^4 \eps (1-\theta) & 3 & 2 \\[-1mm] (1-\eps)^5 \eps (1-\theta) & 3 & 3 \\[-1mm] \vdots & \vdots & \vdots \end{array}\] \noindent This signal structure generalizes the information structure in the email game from Section \ref{sec:emailgame}, where $\theta=1$ corresponds to state $a$ in the email game and $\theta=0$ corresponds to state $b$. Agents observe repeated independent realizations of the signal. Will they commonly learn the game parameter? When $\theta$ is restricted to values 0 and 1 (as per \citet{Rubinstein}'s email game), the answer is yes. \begin{exercise} Prove that common learning occurs if $\theta \in \{\theta',\theta''\} \equiv \{0,1\}$. 
\end{exercise} But common learning fails whenever $0 <\theta'<\theta'' < 1$ as agents cannot commonly learn $\theta''$, the parameter placing more weight on the lower signal realizations. Intuitively, when agent 1 sees the signal $k$, he believes with some probability (that can be uniformly lower bounded across histories) that agent 2 has also observed at least $k$. And if agent 2 observes $k$, then he believes with some probability (that again can be uniformly lower bounded) that agent 1 observed $k+1$. Since the number of signal realizations is infinite, there is unbounded contagion upwards: The agent always believes with some probability that the other agent believes with some probability that he has observed\dots such a large signal that he believes that the state is (very likely to be) $\theta'$. And thus we cannot establish common $q$-belief of $\theta''$ for large $q$. \end{example} The main result in \citet{CEMS2008} establishes that the infinite number of signal realizations is critical to the previous counterexample. When the number of signal realizations is finite, then individual learning always implies common learning. \begin{assumption}[Finite Signal Sets] $\vert \mathcal{X}^1 \vert, \vert \mathcal{X}^2 \vert <\infty$ \label{assp:FiniteSignal} \end{assumption} \begin{proposition} If Assumption \ref{assp:FiniteSignal} is satisfied, then individual learning implies common learning. \end{proposition} A brief idea of the proof follows. Define $\pi^\theta(ij)$ to be the probability of realization $(x^1_{t},x^2_{t})=(i,j)$ when the parameter is $\theta$, and define \[\phi^\theta(i) = \sum_{j \in \mathcal{X}^2} \pi^\theta (ij)\] to be the marginal probability of signal $i$, with $\phi^\theta \equiv (\phi^\theta(i))_{i \in \mathcal{X}^1}$. Likewise define \[\psi^\theta(j) = \sum_{i\in \mathcal{X}^1} \pi^\theta(ij)\] to be the marginal probability of signal $j$, with $\psi^\theta \equiv (\psi^\theta(j))_{j \in \mathcal{X}^2}$. 
Then (by the results in Section \ref{sec:Doob}), individual learning follows whenever $\phi^\theta \neq \phi^{\theta'}$ and $\psi^\theta \neq \psi^{\theta'}$ for every $\theta \neq \theta'$. Define $\hat{\phi}_t$ to be the empirical frequency of agent $1$ signals and $\hat{\psi}_t$ to be the empirical frequency of agent $2$ signals. By the strong law of large numbers, the empirical frequencies converge to the theoretical frequencies, i.e., for each parameter $\theta$, $\hat{\phi}_t \rightarrow \phi^\theta$ and $\hat{\psi}_t \rightarrow \psi^\theta$ $P_\theta$-almost surely. Thus each agent eventually assigns a high probability to the true $\theta$. The crucial next step is establishing that when agent 1 assigns a high probability to $\theta$, he believes that agent 2 does as well (and vice versa). To see why this might be the case, let $M_1^\theta$ be the $\vert \mathcal{X}^1 \vert \times \vert \mathcal{X}^2 \vert$ matrix whose $(i,j)$-th entry is $\frac{\pi^\theta(ij)}{\phi^\theta(i)}$, i.e. the conditional probability (under $\theta$) that agent 2 observes $j$ given that agent 1 observed $i$, and define $M_2^\theta$ analogously. Then $\hat{\phi}_t M_1^\theta$ is agent 1's expectation of agent 2's realized frequencies (conditional on $\theta$), and $\hat{\phi}_t M_1^\theta M_2^\theta $ is agent 1's expectation of agent 2's expectation of agent 1's realized frequencies (again conditional on $\theta$). Observe (by algebra) that \begin{align*} \phi^\theta M_1^\theta &= \psi^\theta \\ \psi^\theta M_2^\theta &= \phi^\theta \end{align*} so $\phi^\theta M_1^\theta M_2^\theta = \phi^\theta$. Indeed the matrix $M_{12}^\theta \equiv M_1^\theta M_2^\theta$ is a Markov transition matrix on $\mathcal{X}^1$ with stationary distribution $\phi^\theta$, and it is moreover a contraction mapping on $\Delta(\mathcal{X}^1)$. 
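These matrix identities are easy to verify numerically. The following sketch uses a hypothetical $2 \times 2$ joint signal distribution $\pi^\theta$ (the particular numbers are chosen purely for illustration) and checks that $\phi^\theta M_1^\theta = \psi^\theta$, that $\phi^\theta$ is stationary for $M_{12}^\theta$, and that iterating $M_{12}^\theta$ pulls any belief back to $\phi^\theta$:

```python
# Numerical check of the Markov-matrix properties, for a hypothetical
# joint signal distribution pi over X^1 x X^2 (values chosen for illustration).
import numpy as np

pi = np.array([[0.4, 0.1],
               [0.2, 0.3]])   # pi[i, j] = P(x^1 = i, x^2 = j | theta)
phi = pi.sum(axis=1)          # marginal distribution of agent 1's signal
psi = pi.sum(axis=0)          # marginal distribution of agent 2's signal

M1 = pi / phi[:, None]        # M1[i, j] = P(x^2 = j | x^1 = i)
M2 = pi.T / psi[:, None]      # M2[j, i] = P(x^1 = i | x^2 = j)

# phi M1 = psi and psi M2 = phi, so phi is stationary for M12 = M1 M2
assert np.allclose(phi @ M1, psi)
assert np.allclose(psi @ M2, phi)
M12 = M1 @ M2
assert np.allclose(phi @ M12, phi)

# Contraction on Delta(X^1): iterating M12 from any belief converges to phi
belief = np.array([1.0, 0.0])
for _ in range(50):
    belief = belief @ M12
assert np.allclose(belief, phi)
```

The contraction property is what prevents the chain of expectations $\hat{\phi}_t, \hat{\phi}_t M_1^\theta, \hat{\phi}_t M_1^\theta M_2^\theta, \dots$ from drifting away from the empirical frequency.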
These properties together imply that the higher-order beliefs cannot run away from the agent's first-order belief as they did in Example \ref{ex:EmailGameTwist}. \section{Additional Exercises} \begin{exercise} Let $\theta \sim \mathcal{N}(0,1)$ be an unknown parameter. Each agent $i=1,2$ observes $n$ signals $X_1^i, \dots, X_n^i$ where each \[X_m^i = \theta + \eps^i_m\] with $\eps^i_m \sim \mathcal{N}(0,1)$ independent of $\theta$, independent across agents, and independent across signals. Suppose that the true value of $\theta$ is strictly positive, and let $E_p$ be the event that the two agents have common $p$-belief that $\theta$ is positive, where $p>1/2$. What is the probability of $E_p$ under the actual data-generating process? \end{exercise} \chapter{Model Uncertainty and Misspecification} We have so far assumed that agents' model of the world is \emph{correctly specified}: Their prior belief over $\Theta$ assigns positive probability to the true parameter $\theta$ and they update on information correctly, i.e. with knowledge of the true signal-generating distribution $(P_\theta)_{\theta \in \Theta}$. Some reasons to question this model of learning include: \begin{itemize} \item We see substantial and persistent disagreement between individuals, but Sections \ref{sec:Doob} and \ref{sec:Merging} imply that agents will eventually hold similar beliefs. \item It is unclear how agents came to know $(P_\theta)_{\theta \in \Theta}$. \item The assumption that agents perceive only one signal-generating distribution $(P_\theta)_{\theta \in \Theta}$ as possible means that agents never abandon their model, even as evidence accumulates against it. As we discuss in Section \ref{sec:BinaryACY}, this dogmatism has some strange implications. \end{itemize} This chapter relaxes the standard learning model by allowing for \emph{model uncertainty} (Section \ref{sec:ModelUncertainty}) and \emph{model misspecification} (Section \ref{sec:Misspecification}). 
In the former class of models, agents hold non-degenerate beliefs over the signal generating distribution. In the second, agents assign probability zero to the true parameter. \section{Model Uncertainty} \label{sec:ModelUncertainty} \subsection{Motivation} \label{sec:BinaryACY} Recall the binary setting from Section \ref{sec:LearningExample}: There is an unknown parameter $\theta \in \{A,B\}$, and each period $t\in \mathbb{Z}_+$ a signal is generated iid according to \[\begin{array}{ccc} & a & b\\ A & q& 1-q \\ B & 1-q & q \end{array}\] where $q>1/2$. Agents may hold different (non-degenerate) prior beliefs $\pi_i \in \Delta(\Theta)$ about the parameter, but the value of $q$ is common knowledge. In Section \ref{sec:LearningExample}, we observed that these agents almost surely learn the true parameter as the sample size grows large, and moreover their disagreement about the parameter vanishes. This is because (1) agents assign probability 1 to the event in which the limiting fraction of $a$-realizations is either $q$ or $1-q$, and (2) the parameter is identified, so for either of these limiting frequencies agents (eventually) assign probability 1 to the correct parameter value. What happens along sequences in which the limiting frequency is neither $(q,1-q)$ nor $(1-q,q)$? Although agents assign probability zero to this event, sampling variation can explain any empirical frequency of $a$ and $b$ realizations (however surprising) in finite sequences. Thus Bayes' rule yields well-defined posterior beliefs. For example, suppose $q \in (1/2, 1)$ and let $\bold{x}$ be the (infinite) sequence of $a$-realizations. For any $t$, the unconditional probability of the event that all $t$ realizations are $a$ is \[\pi^i_A \cdot q^t + (1-\pi^i_A)\cdot (1-q)^t\] where $\pi^i_A$ denotes the prior probability of $A$. This expression converges to zero as $t$ grows large but is strictly positive for every $t$. 
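The two facts just noted --- the unconditional probability of the all-$a$ sequence is strictly positive at every finite $t$ yet vanishes in the limit, while the posterior on $A$ nonetheless rises toward 1 --- can be checked numerically. A minimal sketch, with illustrative values $q = 0.7$ and $\pi^i_A = 0.5$ (any $q \in (1/2,1)$ and interior prior behave the same way):

```python
# Beliefs along the all-a sequence, for illustrative values q = 0.7 and
# prior pi_A = 0.5.
q, pi_A = 0.7, 0.5

def prob_all_a(t):
    # unconditional probability that the first t realizations are all a
    return pi_A * q**t + (1 - pi_A) * (1 - q)**t

def posterior_A(t):
    # Bayes' rule after t straight a-realizations
    return pi_A * q**t / prob_all_a(t)

assert prob_all_a(100) > 0        # positive at every finite t ...
assert prob_all_a(100) < 1e-15    # ... yet vanishingly small
assert posterior_A(100) > 0.999   # while the posterior approaches 1
```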
The agent's limiting belief along $\bold{x}$ can thus be computed to be \[\lim_{t \rightarrow \infty} P^i(\theta = A \mid \bold{x}_t) = \lim_{t \rightarrow \infty} \frac{1}{1 + \frac{1-\pi^i_A}{\pi^i_A} \left(\frac{1-q}{q}\right)^{t}} = 1\] So the agent is increasingly convinced that the state is $A$, even as the observed sequence grows increasingly unlikely under the agent's model. Even more striking, as signals accumulate in the frequency $(1,0)$, the agent becomes increasingly confident that future signals will appear in the frequency $(q,1-q)$! These conclusions are a consequence of the agent's dogmatic view of the signal generating distribution---he is unwilling to abandon this model even as mounting evidence points to its error. \subsection{Expanded Framework} We can introduce \emph{model uncertainty} into this learning model by expanding the state space to $\Omega = \Theta \times \Gamma \times \mathcal{X}^\infty$ where the new parameter $\gamma$ indexes the signal-generating distribution, and the parameters $\theta$ and $\gamma$ jointly determine a family $(P_{\theta,\gamma})_{\theta \in \Theta, \gamma \in \Gamma}$ of conditional distributions over signals. The key distinction between $\theta$ and $\gamma$ is that only $\theta$ is payoff-relevant. We'll use $P^i$ to denote agent $i$'s subjective prior belief on $\Omega$, which is common knowledge to all agents. If people do not in fact have dogmatic beliefs about the signal-generating distribution, a natural question is whether modeling agents in this way is still a good abstraction, in the sense that the qualitative insights of this model are robust to introduction of a small amount of model uncertainty. \citet{ACY2015} demonstrate one important sense in which this is not so. 
\subsection{Failure of Asymptotic Agreement} For any infinite sequence $\bold{x} \in \mathcal{X}^\infty$, write \[\phi_{\theta,t}^i \equiv P^i(\theta \mid x_1, \dots x_t )\] for the posterior probability that agent $i$ assigns to $\theta$ following the first $t$ realizations of the sequence $\bold{x}$. Further define \begin{equation} \label{eq:AsymptoticBelief} \phi_{\theta,\infty}^i(\bold{x}) = \lim_{t \rightarrow \infty} \phi^i_{\theta,t}(\bold{x}) \end{equation} to be the asymptotic posterior probability that agent $i$ assigns to $\theta$ along sequence $\bold{x}$. \begin{definition} Say that \emph{asymptotic agreement} occurs if for each agent $i$, \[P^i(\phi^1_{\theta,\infty} = \phi^2_{\theta,\infty}) =1 \quad \forall \theta\in \Theta\] \end{definition} \noindent That is, both agents believe their asymptotic beliefs will be identical. When agents hold a dogmatic belief about the signal-generating distribution, asymptotic agreement occurs whenever the parameter is identified (Proposition \ref{prop:PosteriorConsistency}). But \citet{ACY2015} show that asymptotic agreement can fail when an arbitrarily small amount of model uncertainty is introduced. The basic idea behind this fragility can be seen through this following example from their paper. Let $\Theta = \{A,B\}$, with each agent $i$'s prior about the parameter denoted by $\pi^i \equiv (\pi^i_A,\pi^i_B)$. Agent $i$ believes that signals are generated iid from the set $\{a,b\}$ with state-dependent distribution \[\begin{array}{ccc} & a & b\\ A & \gamma & 1-\gamma \\ B & 1-\gamma & \gamma \end{array}\] where $\gamma$ is unknown and distributed according to $G^i$ with density \[g^i(\gamma) = \left\{ \begin{array}{cl} \eps + \frac{1-\eps}{\lambda} & \mbox{if } \gamma \in (\gamma^i - \lambda/2, \gamma^i + \lambda/2) \\ \eps & \mbox{otherwise} \end{array} \right.\] for some $\gamma^i >1/2$. Assume that $\gamma^1$ and $\gamma^2$ are different from one another. 
This density is depicted in Figure \ref{fig:DensityACY}. \begin{figure}[h] \centering \includegraphics[scale=.7]{ACY.pdf} \caption{Depiction of $g^i$.} \label{fig:DensityACY} \end{figure} The limit as $\eps \rightarrow 0$ and $\lambda \rightarrow 0$ returns the model in which each agent $i$ dogmatically believes the signal structure to be given by \[\begin{array}{ccc} & a & b\\ A & \gamma^i & 1-\gamma^i \\ B & 1-\gamma^i & \gamma^i \end{array}\] At this limit, asymptotic agreement holds. Now suppose $\eps$ and $\lambda$ are strictly positive and $\lambda$ is small (specifically, let $\lambda < \vert \gamma^1 - \gamma^2 \vert$ and suppose $\gamma^i - \frac{\lambda}{2} > \frac12$ for each agent $i$). As in Section \ref{sec:LearningExample}, define \[n_t(\bold{x}) \equiv \#\{ 1 \leq t' \leq t : \bold{x}_{t'} =a\} \quad \forall \bold{x} \in \mathcal{X}^\infty \] to be the count of $a$-realizations among the first $t$ realizations of $\bold{x}$, and let \[\rho(\bold{x})= \lim_{t \rightarrow \infty} n_t(\bold{x})/t \quad \forall \bold{x} \in \mathcal{X}^\infty\] be the asymptotic frequency of $a$-realizations along $\bold{x}$. The following lemma provides a simple expression for the agent's asymptotic belief (\ref{eq:AsymptoticBelief}) on the set of sequences $\widetilde{\mathcal{X}}^\infty \subseteq \mathcal{X}^\infty$ where the limiting frequency $\rho(\bold{x})$ exists. \begin{lemma}[\citet{ACY2015}] \label{lemm:Asymptotic} For every sequence $\bold{x} \in \widetilde{\mathcal{X}}^\infty$, \[ \phi^i_{A, \infty}(\bold{x}) = \left(1 + \frac{1-\pi^i_A}{\pi^i_A} \cdot \frac{f^i_B(\rho(\bold{x}),1-\rho(\bold{x}))}{f^i_A(\rho(\bold{x}),1-\rho(\bold{x}))}\right)^{-1} \] where $\frac{f^i_B(\rho(\bold{x}),1-\rho(\bold{x}))}{f^i_A(\rho(\bold{x}),1-\rho(\bold{x}))}$ is the asymptotic likelihood ratio under agent $i$'s subjective model. 
\end{lemma} In the running example of this section, the asymptotic likelihood ratio can be simplified to \begin{align*} \frac{f_B^i(\rho, 1-\rho)}{f_A^i(\rho,1-\rho)} = \frac{g^i(1-\rho)}{g^i(\rho)} \end{align*} This ratio takes on one of three possible values. For any $\rho \in (\gamma^i - \lambda/2, \gamma^i + \lambda/2)$, \[\frac{g^i(1-\rho)}{g^i(\rho)} = \frac{\eps \lambda}{1-\eps(1-\lambda)} \] which converges to zero as $\eps$ and $\lambda$ grow small (implying $\phi^i_{A,\infty} \rightarrow 1$). By a mirror argument, if the limiting frequency of $a$-realizations is some $\rho \in (1-\gamma^i - \lambda/2, 1-\gamma^i + \lambda/2)$, then \[\frac{g^i(1-\rho)}{g^i(\rho)} = \frac{1-\eps(1-\lambda)}{\eps \lambda} \] which converges to $\infty$ as $\eps$ and $\lambda$ grow small (implying $\phi^i_{A,\infty} \rightarrow 0$). For all other limiting frequencies, the asymptotic likelihood ratio is simply $\frac{g^i(1-\rho)}{g^i(\rho)} =1$. These unlikely signal sequences are considered possible but uninformative about the parameter. Applying Lemma \ref{lemm:Asymptotic}, Figure \ref{fig:AsymptoticPosterior} depicts agent $i$'s asymptotic posterior as a function of the limiting signal frequency. \begin{figure}[H] \centering \includegraphics[scale=0.75]{ACY1.pdf} \caption{Agent $i$'s asymptotic posterior in the limit as $\eps \rightarrow 0$.} \label{fig:AsymptoticPosterior} \end{figure} In the limit as $\eps \rightarrow 0$ and $\lambda \rightarrow 0$, each agent $i$ is increasingly sure that the limiting frequency $\rho$ will be close to either $\gamma^i$ or $1-\gamma^i$, so he believes that he will (approximately) learn the parameter. But when a sequence of signals has a long-run frequency that leads agent 1 to learn $\theta=A$ or $\theta=B$, agent 1 knows that this sequence has led agent 2 to consider the signal uninformative, in which case agent 2's limiting belief is the same as his prior. 
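The three regimes of the asymptotic posterior can be checked directly from Lemma \ref{lemm:Asymptotic}. A minimal sketch, with illustrative values $\gamma^i = 0.7$, $\eps = 0.01$, $\lambda = 0.05$, and a uniform prior (all hypothetical, chosen only to satisfy the assumptions $\lambda < \vert \gamma^1 - \gamma^2\vert$ and $\gamma^i - \lambda/2 > 1/2$):

```python
# Asymptotic posterior from the lemma, for hypothetical parameter values
# gamma_i = 0.7, eps = 0.01, lam = 0.05, prior pi_A = 0.5.
gamma_i, eps, lam, pi_A = 0.7, 0.01, 0.05, 0.5

def g(x):
    """Agent i's density over the signal-accuracy parameter gamma."""
    inside = gamma_i - lam / 2 < x < gamma_i + lam / 2
    return eps + (1 - eps) / lam if inside else eps

def posterior_A(rho):
    """phi^i_{A,infty} as a function of the limiting frequency rho."""
    ratio = g(1 - rho) / g(rho)   # asymptotic likelihood ratio
    return 1 / (1 + (1 - pi_A) / pi_A * ratio)

assert posterior_A(gamma_i) > 0.99       # rho near gamma_i: learn A
assert posterior_A(1 - gamma_i) < 0.01   # rho near 1 - gamma_i: learn B
assert posterior_A(0.5) == pi_A          # elsewhere: stay at the prior
```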
Likewise whenever agent 2 believes the signal sequence to be informative about $\theta$, he knows that agent 1 considers the signal sequence to be uninformative. So not only does asymptotic agreement fail, but we have the stronger conclusion that the limiting beliefs $\phi^1_\infty$ and $\phi^2_\infty$ are different on \emph{all} sample paths. Figure \ref{fig:AsymptoticDisagreement} depicts $\vert \phi^1_{A,\infty} - \phi^2_{A,\infty}\vert$ as a function of the limiting signal frequency. To summarize, asymptotic agreement holds in the limiting model $\eps=0,\lambda=0$ (with no model uncertainty), but fails when the model is perturbed to include an arbitrarily small amount of model uncertainty via $\eps>0,\lambda>0$. \begin{remark} As in Section \ref{sec:Learning}, there is no ground truth---whether asymptotic agreement does or doesn't hold is determined solely with respect to the agents' subjective beliefs. \end{remark} \begin{remark} In this example, the two agents' prior beliefs on $\Theta \times \Gamma$ are absolutely continuous with respect to one another. So Proposition \ref{prop:BlackwellDubins} tells us that their beliefs about future signal realizations will eventually merge. But $(\theta,\gamma)$ is not identified: For example, $(A,1)$ and $(B,0)$ identically lead to a degenerate distribution on the infinite sequence of $a$-realizations. Thus asymptotic agreement about the expanded parameter $(\theta,\gamma)$ is not guaranteed from the results of Sections \ref{sec:Doob} and \ref{sec:Merging}. 
\end{remark} \begin{figure}[h] \centering \includegraphics[scale=.8]{ACY2.pdf} \caption{Asymptotic disagreement $\vert \phi^1_{A,\infty} - \phi^2_{A,\infty}\vert$ in the limit as $\eps \rightarrow 0$, for parameter values $ \pi_B^1 > \pi_A^2 > \pi_B^2 > \pi_A^1$.} \label{fig:AsymptoticDisagreement} \end{figure} \section{Misspecified Learning} \label{sec:Misspecification} Next suppose the agent is not simply uncertain about the signal-generating distribution, but in fact rules out the true distribution. \begin{example} \label{ex:Misspecified} Let $\Theta = \{A,B,C\}$ where the conditional distributions over signal realizations $\{a,b\}$ are given as follows: \[\begin{array}{ccc} & a & b\\ A & 4/5 & 1/5 \\ B & 1/2 & 1/2 \\ C & 2/3 & 1/3 \end{array}\] The agent has a uniform prior on $\{A,B\}$, but the true parameter is $C$. Given repeated independent observations from the distribution $(2/3,1/3)$, will the agent's beliefs converge, and if so, to what limiting belief? \end{example} \subsection{Role of KL Divergence} \label{sec:Berk} Intuitively, we may expect that the agent's beliefs converge to certainty of the parameter whose distribution is ``closer'' to the true distribution. The right notion of closeness here turns out to be KL Divergence (Section \ref{sec:KL}). Here is a heuristic argument for how KL divergence emerges. Suppose the agent only considers parameter values $\theta = A$ and $\theta=B$ to be possible, where the prior probability of $\theta=A$ is $\pi \in (0,1)$. We'll use $f_\theta(x)$ to denote the conditional probability of signal realization $x$ when the parameter is $\theta$. The agent observes a sequence of signals drawn iid according to $f_{\theta^*}$, where the ``true'' parameter value $\theta^*$ may be different from both $A$ and $B$. 
For any signal sequence $\bold{x}_t = (x_1, \dots, x_t)$, the conditional probability of $A$ can be rewritten \begin{align*} \mathbb{P}&(\theta = A \mid \bold{x}_t) \\ & = \left(1 + \frac{1-\pi}{\pi} \left(\prod_{i=1}^t \frac{f_B(x_i)}{f_A(x_i)}\right)\right)^{-1} \\ & = \left(1 + \frac{1-\pi}{\pi} \left(\prod_{i=1}^t \frac{f_B(x_i)/f_{\theta^*}(x_i)}{f_A(x_i)/f_{\theta^*}(x_i)}\right)\right)^{-1} \\ &= \left(1 + \frac{1-\pi}{\pi} \exp\left( - \log \left(\prod_{i=1}^t \frac{f_{\theta^*}(x_i)}{f_B(x_i)}\right) + \log \left(\prod_{i=1}^t \frac{f_{\theta^*}(x_i)}{f_A(x_i)}\right)\right)\right)^{-1} \\ &= \left(1 + \frac{1-\pi}{\pi} \exp\left(-t \cdot \left(\frac1t \sum_{i=1}^t \log \left(\frac{f_{\theta^*}(x_i)}{f_B(x_i)}\right) - \frac1t \sum_{i=1}^t\log \left(\frac{f_{\theta^*}(x_i)}{f_A(x_i)}\right)\right)\right)\right)^{-1} \end{align*} For large $t$, the law of large numbers implies that each sample average converges to the corresponding KL divergence, so this final display is approximately equal to \begin{equation} \left(1 + \frac{1-\pi}{\pi} \exp\left(-t \cdot \left(D(f_{\theta^*} \| f_B) - D(f_{\theta^*} \| f_A)\right)\right)\right)^{-1} \label{eq:KLBerk} \end{equation} If $\theta^* \in \{A,B\}$, then either $D(f_{\theta^*}\| f_A) =0 < D(f_{\theta^*}\| f_B)$ (in which case the expression in (\ref{eq:KLBerk}) converges to 1) or $D(f_{\theta^*}\| f_B)=0 < D(f_{\theta^*}\| f_A)$ (in which case the expression in (\ref{eq:KLBerk}) converges to 0). In either case beliefs converge to certainty of the true parameter, as previously implied by Proposition \ref{prop:PosteriorConsistency} (Section \ref{sec:Doob}). Suppose now that $\theta^* \notin \{A,B\}$. Proposition \ref{prop:PosteriorConsistency} no longer applies: \citet{Doob}'s consistency result is with respect to a $P$-measure 1 set of sequences (where $P$ is the agent's prior on $\Theta \times \mathcal{X}^\infty$), but in this example $\theta^*$ falls in the $P$-measure zero set on which consistency is not guaranteed. 
Indeed, in Section \ref{sec:Doob} we made no reference to a ``true'' distribution---consistency was demonstrated within the agent's subjective model. But (\ref{eq:KLBerk}) is useful even when $\theta^*$ has zero probability under the agent's prior. Specifically, when $D(f_{\theta^*}\| f_A) < D(f_{\theta^*}\| f_B)$, then (\ref{eq:KLBerk}) converges to 1 as $t\rightarrow \infty$, yielding certainty of $\theta=A$, and when $D(f_{\theta^*}\| f_A) > D(f_{\theta^*}\| f_B)$, then (\ref{eq:KLBerk}) converges to zero as $t \rightarrow \infty$, yielding certainty of $\theta=B$. So the agent's beliefs concentrate on the parameter that induces a distribution over signals that is closest in Kullback-Leibler divergence to the true distribution. \citet{Berk1966} establishes this result more generally. We'll use the notation of Section \ref{sec:LearningFramework}, introducing $\theta^*$ as new notation for the true parameter, and assuming that the observed signals are drawn iid according to the density $f_{\theta^*}$ (with $P_{\theta^*}$ denoting the induced measure on $\mathcal{X}^\infty$). To simplify exposition, assume that $\Theta$ is finite. \begin{proposition}[\citet{Berk1966}] \label{prop:Berk} Let \[A \equiv \argmin_{\theta \in Supp(P)} D(f_{\theta^*} \| f_\theta) \] be the set of parameters in the support of the agent's prior that minimize KL divergence to the true distribution. Then \[\lim_{t \rightarrow \infty} P(A \mid X_1, \dots, X_t) =1 \quad P_{\theta^*}\mbox{-a.s.}\] \end{proposition} \begin{example} Returning to Example \ref{ex:Misspecified}, since \begin{align*} D(f_C \| f_A ) = (2/3) \cdot \log\left(\frac{2/3}{4/5}\right) +(1/3) \cdot \log\left(\frac{1/3}{1/5}\right) \approx 0.049 \\ D(f_C \| f_B) = (2/3) \cdot \log\left(\frac{2/3}{1/2}\right) +(1/3) \cdot \log\left(\frac{1/3}{1/2}\right) \approx 0.057 \end{align*} Proposition \ref{prop:Berk} implies that the agent's beliefs converge to certainty of $\theta=A$. 
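A quick check of these two divergences (using natural logarithms), straight from the distributions in the example:

```python
# Check of the KL computations in the example (natural logarithms).
from math import log

f_A = (4/5, 1/5)
f_B = (1/2, 1/2)
f_C = (2/3, 1/3)   # true signal distribution

def kl(p, q):
    """Kullback-Leibler divergence D(p || q) for distributions on {a, b}."""
    return sum(pi * log(pi / qi) for pi, qi in zip(p, q))

d_A = kl(f_C, f_A)   # ~0.049
d_B = kl(f_C, f_B)   # ~0.057
assert round(d_A, 3) == 0.049
assert round(d_B, 3) == 0.057
assert d_A < d_B     # so beliefs concentrate on theta = A
```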
\end{example} \subsection{Berk-Nash Equilibrium} Standard equilibrium concepts in game theory assume that players best-respond to correct and common beliefs. \citet{EspondaPouzo} propose a new equilibrium concept (modifying Nash equilibrium) that allows players to be misspecified. As this definition can also be applied in a single-agent setting, and as the notation is substantially lighter in this case, we start by defining Berk-Nash equilibrium with one agent. \subsubsection{Single Agent Settings} There is a finite set of payoff-relevant states $\Omega$, a finite set of signal realizations $\mathbb{S}$, and a finite set of actions $\mathbb{A}$. The agent holds a prior $p$ over $\Omega \times \mathbb{S}$. Additionally, there is a finite set of consequences $\mathbb{Y}$, which are determined by the agent's action and the state via a feedback function $f : \mathbb{A} \times \Omega \rightarrow \mathbb{Y}$. The agent's payoff function is $u : \mathbb{A} \times \mathbb{Y} \rightarrow \mathbb{R}$. The timing is as follows. First the agent chooses a strategy $\sigma: \mathbb{S} \rightarrow \Delta(\mathbb{A})$ mapping the observed signal into a distribution over actions. Then, the state and signal $(\omega, s)$ are drawn according to $p$, and the action $\sigma(s)$ is implemented. Finally, the consequence $y$ is determined given the action and state $(a,\omega)$, and the agent obtains payoff $u(a,y)$. There is an \emph{objective} mapping $Q: \mathbb{S} \times \mathbb{A} \rightarrow \Delta(\mathbb{Y})$ from actions and signals into distributions over consequences, where \[Q(y \mid s,a) = \sum_{\omega : f(\omega,a) =y} p(\omega \mid s) \quad \forall (y,s,a).\] This is the conditional distribution over consequences that a Bayesian agent with knowledge of $f$, the action $a$, and the signal realization $s$ would expect. The agent does not know $Q$ (or $f$). 
His \emph{subjective model} $\mathcal{Q}= \langle \Theta, (Q_\theta)_{\theta \in \Theta} \rangle$ is a parametrized family of mappings $Q_{\theta} : \mathbb{S} \times \mathbb{A} \rightarrow \Delta(\mathbb{Y})$. \begin{definition} The agent is \emph{correctly specified} if there exists $\theta \in \Theta$ such that $Q_{\theta}(\cdot \mid s,a) = Q(\cdot \mid s,a)$ for all $(s,a) \in \mathbb{S} \times \mathbb{A}$; otherwise the agent is \emph{misspecified}. \end{definition} The following example is adapted from \citet{EspondaPouzo}: \begin{example} A monopolist chooses a price $a$, which together with a random shock $\omega \sim \mathcal{N}(0,1)$ determines demand \[y = f(a,\omega) = \phi(a) + \omega.\] The monopolist's payoff is $u(a,y) = a\cdot y$. Under the objective mapping $f$, the conditional distribution $Q(\cdot \mid a)$ is normal with mean $\phi(a)$ and variance 1. The monopolist's subjective model is instead the family $Q_\theta(\cdot \mid a)$ of normal distributions indexed by $\theta =(\theta_0,\theta_1) \in \mathbb{R} \times \mathbb{R}$, where each $Q_\theta(\cdot \mid a)$ is normal with mean $\theta_0 + \theta_1 a$ and variance 1, corresponding to a perceived feedback function \[f_\theta(a,\omega) = \theta_0 + \theta_1 a + \omega.\] If $\phi$ is not in fact affine in $a$, then the monopolist is misspecified. (This example does not include a signal.) \end{example} For any agent strategy $\sigma: \mathbb{S} \rightarrow \Delta(\mathbb{A})$, define \[q_\sigma(s,a) \equiv p_S(s) \sigma(a\mid s)\] to be the distribution on $\mathbb{S} \times \mathbb{A}$ induced by the strategy $\sigma$ and the agent's prior $p$. 
Further define \[K(\sigma, \theta) = \sum_{(s,a) \in \mathbb{S} \times \mathbb{A}} \left(\mathbb{E}_{Q(Y \mid s, a)} \left[ \ln \frac{Q(Y \mid s, a)}{Q_{\theta}(Y \mid s, a)} \right]\right) q_\sigma(s,a)\] to be the expected Kullback-Leibler divergence between $Q_\theta(\cdot \mid s,a)$ and the objective distribution $Q(\cdot \mid s,a)$, weighted by $q_\sigma \in \Delta(\mathbb{S} \times \mathbb{A})$. Given the agent's strategy $\sigma$, the set of closest parameters (in weighted KL divergence) is \[\Theta^*(\sigma) = \arg \min_{\theta \in \Theta} K(\sigma,\theta) \] \begin{definition} \label{def:BerkNash} A strategy $\sigma$ is a \emph{Berk-Nash equilibrium} if there exists a $\mu \in \Delta(\Theta)$ such that \begin{itemize} \item[(a)] $\mu \in \Delta(\Theta^*(\sigma))$; i.e., $\mu$ has support on the set of KL-minimizers. \item[(b)] $\sigma$ is optimal given $\mu$; namely, $\sigma(a \mid s)>0$ implies that \[a \in \arg \max_{a' \in \mathbb{A}} \mathbb{E}_{\overline{Q}_{\mu}( y \mid s, a')} [u(a', y)]\] where $\overline{Q}_{\mu}(y \mid s, a) = \int_{\Theta} Q_{\theta}(y \mid s, a) \mu(\theta) d\theta$ is the conditional distribution over consequences that is induced by $\mu$. \end{itemize} \end{definition} \begin{example} \label{ex:BerkNash} A researcher's project is either good or bad, $\Omega = \{g,b\}$. The researcher observes a reaction to the project, which is either positive or negative, $\mathbb{S} = \{+,-\}$, where $(\omega,s)$ are jointly distributed according to: \[\begin{array}{ccc} & s=+ & s=-\\ \omega = g & 1/3 & 1/6 \\ \omega = b & 1/6 & 1/3 \\ \end{array}\] The researcher observes the signal $s \in \mathbb{S}$ and decides whether to exert high or low effort towards developing the project, $\mathbb{A} = \{H, L\}$. 
The unknown true quality of the project, and the researcher's effort, jointly determine a journal outcome in $\mathbb{Y} = \{A,R\}$ (accept or reject) according to the following function \[f(a,\omega) = \left\{ \begin{array}{cc} A & (a,\omega)=(H,g) \\ R & \mbox{otherwise} \end{array}\right.\] That is, the project is accepted if it is good and also the researcher's effort is high, and it is rejected otherwise. The researcher's payoff is \[u(a,y) = \left\{ \begin{array}{cc} 1 & (a,y)=(H, A) \\ -1 & (a,y)=(H, R)\\ 2 & (a,y)=(L,A) \\ 0 & (a,y)=(L,R) \end{array} \right.\] The true distribution $Q(y \mid a,s)$ is described by $Q(A \mid +,L)= Q(A \mid -, L) = 0$ (since the paper will not be accepted if effort is low) and \begin{align*} Q(A \mid +,H) & = p(\{\omega : f(H,\omega) =A\} \mid +) = p(g \mid +) = 2/3 \\ Q(A \mid -,H) & = p(\{\omega : f(H,\omega) =A\} \mid -) = p(g \mid -) = 1/3 \end{align*} since conditional on high effort, the probability of acceptance is equal to the probability that the paper is good. These conditional distributions are summarized as follows: \[\begin{array}{ccc} & A & R\\ (+,H) & 2/3 & 1/3 \\ (-,H)& 1/3 & 2/3 \\ (+,L)& 0 & 1 \\ (-,L) & 0 & 1 \end{array}\] Suppose the researcher's subjective model allows only for the parameters $\theta_1$ and $\theta_2$ which are indexed to the following conditional distributions: \[\begin{array}{ccc} & A & R\\ (+,H) & 3/4 & 1/4 \\ (-,H)& 1/2 & 1/2 \\ (+,L)& 0 & 1 \\ (-,L) & 0 & 1 \end{array} \quad \quad \begin{array}{ccc} & A & R\\ (+,H) & 2/3 & 1/3 \\ (-,H)& 1/3 & 2/3 \\ (+,L)& 1/10 & 9/10 \\ (-,L) & 1/10 & 9/10 \end{array}\] The distribution on the left, $Q_{\theta_1}$, overestimates the value of hard work, and the distribution on the right, $Q_{\theta_2}$, is overly optimistic about the probability of acceptance given low effort. Is the strategy profile $\sigma(+)=H$, $\sigma(-)=L$ (in which the researcher exerts high effort after a positive signal and low effort after a low signal) a Berk-Nash equilibrium? 
The distribution $q_\sigma$ assigns probability $1/2$ to $(+,H)$ and to $(-,L)$. So \begin{align*} K(\sigma, \theta) & = \frac12 \left(\sum_{y \in \{A,R\}} Q(y \mid +,H) \cdot \ln\left( \frac{Q(y\mid +,H)}{Q_{\theta}(y\mid +,H) } \right)\right) \\ & \quad \quad \quad + \frac12 \left(\sum_{y \in \{A,R\}} Q(y \mid -,L) \cdot \ln\left( \frac{Q(y\mid -,L)}{Q_{\theta}(y\mid -,L) } \right)\right) \end{align*} and thus \begin{align*} K(\sigma, \theta_1) & = \frac12 \cdot \left( \frac{2}{3} \ln\left(\frac{2/3}{3/4}\right) + \frac{1}{3} \ln\left(\frac{1/3}{1/4}\right) \right) \approx 0.0087 \\ K(\sigma, \theta_2) & = \frac12 \cdot \ln\left(\frac{1}{9/10}\right) \approx 0.053 \end{align*} Hence $\theta_1$ is the unique minimizer of KL divergence to the true distribution, i.e., $\Theta^*(\sigma) = \{\theta_1\}$. Only $\mu = \delta_{\theta_1}$ (a point mass at $\theta_1$) satisfies Part (a) of Definition \ref{def:BerkNash}, and the distribution $\overline{Q}_\mu$ in Part (b) of Definition \ref{def:BerkNash} simplifies to $Q_{\theta_1}$. To determine whether $\sigma$ is a Berk-Nash equilibrium, it remains to verify that $\sigma$ satisfies the optimality condition in Part (b) of Definition \ref{def:BerkNash}. Suppose the signal realization is $s=+$. Then the action $H$ yields an expected payoff of \[\mathbb{E}_{Q_{\theta_1}(y \mid +,H)}[u(H,y)] = 1 \cdot \frac{3}{4} - 1 \cdot \frac{1}{4} = \frac{1}{2}\] while the action $L$ yields an expected payoff of \[\mathbb{E}_{Q_{\theta_1}(y \mid +,L)}[u(L,y)] = 0\] so $a=H$ is indeed optimal. Suppose the signal realization is $s=-$. Then the action $H$ yields an expected payoff of \[\mathbb{E}_{Q_{\theta_1}(y \mid -,H)}[u(H,y)] = 1 \cdot \frac{1}{2} - 1 \cdot \frac{1}{2} = 0\] while the action $L$ yields an expected payoff of \[\mathbb{E}_{Q_{\theta_1}(y \mid -,L)}[u(L,y)] = 0.\] So $a=L$ is a best reply, and we conclude that $\sigma$ is a Berk-Nash equilibrium. 
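The weighted-KL computation and the optimality check can both be verified numerically (natural logarithms; terms with zero objective probability contribute zero to the divergence):

```python
# Check of the weighted-KL computation in the Berk-Nash example.
from math import log

def kl(p, q):
    # D(p || q); zero-probability terms of p contribute zero
    return sum(pi * log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

Q_true = {('+', 'H'): (2/3, 1/3), ('-', 'L'): (0, 1)}       # objective Q
Q1     = {('+', 'H'): (3/4, 1/4), ('-', 'L'): (0, 1)}       # Q_{theta_1}
Q2     = {('+', 'H'): (2/3, 1/3), ('-', 'L'): (1/10, 9/10)} # Q_{theta_2}
weight = {('+', 'H'): 1/2, ('-', 'L'): 1/2}                 # q_sigma

K1 = sum(weight[sa] * kl(Q_true[sa], Q1[sa]) for sa in weight)
K2 = sum(weight[sa] * kl(Q_true[sa], Q2[sa]) for sa in weight)
assert round(K1, 4) == 0.0087
assert round(K2, 3) == 0.053
assert K1 < K2   # theta_1 is the KL-minimizer given sigma

# Optimality of sigma under Q_{theta_1}: u(H,A)=1, u(H,R)=-1, and L yields
# expected payoff 0 since acceptance has probability 0 under theta_1
EU_H_plus  = 1 * 3/4 - 1 * 1/4   # after s = +: H beats L
EU_H_minus = 1 * 1/2 - 1 * 1/2   # after s = -: L is a best reply
assert EU_H_plus > 0 and EU_H_minus == 0
```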
In sum, we have shown that the strategy $\sigma$ is a best reply to a point mass on the unique parameter that minimizes KL divergence to the distribution over consequences induced by $\sigma$. In this sense the strategy $\sigma$ is internally consistent with respect to the agent's misspecified model. \end{example} \begin{exercise} Determine whether there are any other pure-strategy Berk-Nash equilibria in Example \ref{ex:BerkNash}. \end{exercise} \subsubsection{Simultaneous-Move Games} We turn now to the definition of Berk-Nash equilibrium in simultaneous-move games. There is a set of players $\mathcal{I}$, a set of payoff-relevant states $\Omega$, a set of signal profiles $\mathbb{S} = \times_i \mathbb{S}_i$, and a probability distribution $p$ over $\Omega \times \mathbb{S}$ whose marginals have full support. There is a set of action profiles $\mathbb{A} = \times_i \mathbb{A}_i$, a set of \emph{consequence} profiles $\mathbb{Y} = \times_i \mathbb{Y}_i$, and a profile of \emph{feedback functions} $f = (f_i)_{i \in \mathcal{I}}$ where each $f_i : \mathbb{A} \times \Omega \rightarrow \mathbb{Y}_i$ maps outcomes in $\mathbb{A} \times \Omega$ into consequences for player $i$. Agents have payoff functions $u_i : \mathbb{A}_i \times \mathbb{Y}_i \rightarrow \mathbb{R}$. The timing of the game is as follows: First, the state and signal profile $(\omega, s)$ are drawn according to $p$. Then each player $i$ privately observes his own signal $s_i$ and chooses an action $a_i$. The profile of consequences is determined via $f$ as a function of the action profile and the state, and payoffs are realized. For any player $i$, action $a_i \in \mathbb{A}_i$, and consequence $y_i \in \mathbb{Y}_i$, let \[\Lambda^i(a_i,y_i) = \{(\omega,a_{-i}): f_i(a_i,a_{-i},\omega) = y_i\}\] be the set of state and opponent action profiles that induce consequence $y_i$ given player $i$'s choice of $a_i$.
The \emph{objective distribution} over player $i$'s consequences is $Q_\sigma^i : \mathbb{S}_i \times \mathbb{A}_i \rightarrow \Delta(\mathbb{Y}_i)$, where \[Q_\sigma^i(y_i \mid s_i, a_i) = \sum_{(\omega,a_{-i}) \in \Lambda^i(a_i,y_i) } \sum_{s_{-i} \in \mathbb{S}_{-i}} \left(\prod_{j \neq i} \sigma_j (a_j \mid s_j)\right) \cdot p_{\Omega \times \mathbb{S}_{-i} \mid \mathbb{S}_i} (\omega, s_{-i} \mid s_i) \] for all $(s_i, a_i, y_i) \in \mathbb{S}_i \times \mathbb{A}_i \times \mathbb{Y}_i$. This is the conditional distribution over consequences that a Bayesian agent with knowledge of $f$, the strategy profile $\sigma$, and the signal realization $s_i$ would expect. \bigskip The subjective model $\mathcal{Q}= \langle \Theta, (Q_\theta)_{\theta \in \Theta} \rangle$, with $\Theta = \prod_{i \in \mathcal{I}} \Theta_i$ and $Q_\theta = (Q^i_{\theta_i})_{i \in \mathcal{I}}$, describes the set of distributions over consequences that each player considers possible. Each player's parameter set $\Theta_i$ indexes distributions $Q^i_{\theta_i} : \mathbb{S}_i \times \mathbb{A}_i \rightarrow \Delta(\mathbb{Y}_i)$. \begin{definition} A game is \emph{correctly specified given $\sigma$} if for all players $i$, there exists $\theta_i \in \Theta_i$ such that $Q_{\theta_i}^i(\cdot \mid s_i,a_i) = Q_\sigma^i(\cdot \mid s_i,a_i)$ for all $(s_i,a_i) \in \mathbb{S}_i \times \mathbb{A}_i$; otherwise the game is \emph{misspecified given $\sigma$}. A game is \emph{correctly specified} if it is correctly specified given every $\sigma$; otherwise it is \emph{misspecified}.
\end{definition} For any strategy profile $\sigma$, define \[q_{\sigma_i}(s_i,a_i) \equiv \sigma_i(a_i \mid s_i) p_{S_i}(s_i)\] and \[K_i(\sigma, \theta_i) = \sum_{(s_i,a_i) \in \mathbb{S}_i \times \mathbb{A}_i} \left(\mathbb{E}_{Q^i_\sigma(\cdot \mid s_i, a_i)} \left[ \ln \frac{Q_\sigma^i(Y_i \mid s_i, a_i)}{Q_{\theta_i}^i(Y_i \mid s_i, a_i)} \right]\right) q_{\sigma_i}(s_i,a_i)\] to be the expected Kullback-Leibler divergence between $Q^i_{\theta_i}(\cdot \mid s_i,a_i)$ and the objective distribution $Q_\sigma^i(\cdot \mid s_i,a_i)$, weighting $(s_i,a_i)$ pairs according to $q_{\sigma_i}(s_i,a_i)$. The set of closest parameters is \[\Theta_i(\sigma) = \arg \min_{\theta_i \in \Theta_i} K_i(\sigma,\theta_i) \] \begin{definition} A strategy profile $\sigma$ is a \emph{Berk-Nash equilibrium} if for all players $i$, there exists a $\mu_i \in \Delta(\Theta_i)$ such that \begin{itemize} \item[(a)] $\mu_i \in \Delta(\Theta_i(\sigma))$; i.e., $\mu_i$ has support on the set of KL minimizers. \item[(b)] $\sigma_i$ is optimal given $\mu_i$; namely, $\sigma_i(a_i \mid s_i)>0$ implies that \[a_i \in \arg \max_{\overline{a}_i \in \mathbb{A}_i} \mathbb{E}_{\overline{Q}^i_{\mu_i}(\cdot \mid s_i, \overline{a}_i)} [u_i(\overline{a}_i, Y_i)]\] where $\overline{Q}^i_{\mu_i}(\cdot \mid s_i, \overline{a}_i) = \int_{\Theta_i} Q^i_{\theta_i}(\cdot \mid s_i, \overline{a}_i) \mu_i(\theta_i)d\theta_i$ is the distribution over consequences of player $i$, conditional on $(s_i,\overline{a}_i) \in \mathbb{S}_i \times \mathbb{A}_i$, induced by $\mu_i$. \end{itemize} \end{definition} \begin{remark} This definition is equivalent to Nash equilibrium when (a) is replaced with the condition that players have correct beliefs; i.e., $\overline{Q}^i_{\mu_i} = Q_\sigma^i$. \end{remark} \begin{proposition}[\citet{EspondaPouzo}] A Berk-Nash equilibrium exists.
\end{proposition} Building on Proposition \ref{prop:Berk}, several authors have examined convergence of misspecified learning processes where---different from \citet{Berk1966}'s setting---signals are endogenous to the actions chosen by agents \citep{Nyarko,FudenbergRomanyukStrack,HeidhuesKoszegiStrack2021}. The stable outcomes under many of these processes turn out to correspond to Berk-Nash equilibria or a refinement of this set. Some recent works on this topic include \citet{EspondaPouzo}, \citet{EspondaPouzoYamamoto}, \citet{BohrenHauser}, \citet{FudenbergLanzaniStrack} and \citet{FrickIijimaIshii}. \section{Additional Exercises} \begin{exercise} There are two states of the world, $\theta \in \{A,B\}$. A news source receives an infinite sequence of signals about this state of the world, drawn iid according to the following signal structure \[\begin{array}{ccc} & a & b \\ \theta = A & 3/4 & 1/4 \\ \theta = B & 1/4 & 3/4 \end{array}\] This news source is biased. When it observes the signal realization $a$, it reports $a$, but conditional on observing the signal realization $b$, it reports $b$ with probability $1-\lambda$ and otherwise falsely reports $a$ (where $\lambda$ is constant across time). You are aware that the news source is biased and dogmatically believe that $\lambda = 1/2$. Suppose the true state is $\theta=B$, and you observe the infinite sequence of news reports. Provide a condition (potentially empty) on the true value of $\lambda$ such that your asymptotic belief is that the state is $\theta=A$. Interpret this result. \end{exercise} \chapter{Information Design} \section{Bayesian Persuasion} \subsection{Example} \label{sec:BPexample} Two agents, a judge and a prosecutor, are involved in a court case. The unknown payoff-relevant state is whether the defendant is \emph{innocent} ($I$) or \emph{guilty} ($G$). The judge and the prosecutor share a common prior that the defendant is guilty with probability 0.3.
Suppose the prosecutor cannot falsify or distort evidence, but can selectively choose what kind of information to present to the court (e.g., deciding who to subpoena or which forensic tests to conduct). Formally, the prosecutor chooses an information structure $\sigma: \{G,I\} \rightarrow \Delta(S)$ for some set of signal realizations $S$. The judge observes the outcome of the signal $\sigma$, updates his beliefs, and chooses whether to \emph{acquit} or \emph{convict} the defendant. The judge's and prosecutor's payoffs are determined by the judge's action and by the unknown state. The judge receives a payoff of 1 from convicting a guilty defendant or from acquitting an innocent defendant, and otherwise receives a payoff of zero. The prosecutor receives a payoff of 1 if the judge convicts the defendant and a payoff of 0 if the judge acquits the defendant, independent of the defendant's guilt. What information structure should the prosecutor choose, and what is the best expected payoff he can achieve? Let's start with some benchmarks. One option is to send a completely uninformative signal. Since innocence is more likely than guilt under the judge's prior, the judge chooses to acquit given no information, yielding a payoff of zero for the prosecutor. At the other extreme, the prosecutor can choose a perfectly informative signal that reveals the defendant's guilt. The judge convicts precisely when the defendant is guilty, yielding an expected payoff (under the prior) of 0.3 for the prosecutor. Can the prosecutor do better? The perfectly revealing signal splits defendants into two bins---one labeled ``convict" and one labeled ``acquit" (Figure \ref{fig:BPreveal}).
\begin{figure}[H] \begin{center} \includegraphics[scale=0.25]{BPreveal.png} \end{center} \caption{Depiction of the perfectly revealing signal, where each circle represents $1/10$ of the population.} \label{fig:BPreveal} \end{figure} The judge's posterior for individuals labeled ``convict" is that they are guilty with probability 1, so he optimally convicts any individual with this label. Likewise, his posterior for individuals labeled ``acquit" is that they are innocent with probability 1, so he acquits any individual with this label. Now consider moving one unit of innocent individuals from the acquit bin to the convict bin (Figure \ref{fig:BPdeviate}). \begin{figure}[H] \begin{center} \includegraphics[scale=0.25]{BPdeviate.png} \end{center} \caption{Deviation from the perfectly revealing signal.} \label{fig:BPdeviate} \end{figure} \begin{remark} Every ``bin representation" as shown in Figures \ref{fig:BPreveal} and \ref{fig:BPdeviate} corresponds to a unique signal. For each $\theta \in \Theta$ and $s \in \{\mbox{convict}, \mbox{acquit}\}$, let $P(\theta,s)$ be the mass of $\theta$-type units in bin $s$ (interpreting each circle as $1/10$ of the population). Then $P$ is a probability measure on $\Theta \times S$, and the corresponding signal $\sigma: \Theta \rightarrow \Delta(S)$ can be derived by Bayes' rule. As we see in the proof of Proposition \ref{prop:BP}, every signal also admits a bin representation.\footnote{In particular, every signal admits a ``bin representation" that consists of two bins---a convict bin and an acquit bin---where the judge optimally convicts all individuals in the convict bin and acquits all individuals in the acquit bin.} \end{remark} Following this modification of the perfectly revealing signal, the posterior probability of guilt in the acquit bin is unchanged. The posterior probability of guilt for individuals labeled ``convict" drops to $3/4$---but crucially, the judge's optimal action remains the same.
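This posterior can be checked with a two-line Bayes computation; the sketch below (variable names are ours) counts each circle as a unit of mass $1/10$:

```python
from fractions import Fraction as F

prior_G = F(3, 10)                       # 3 of the 10 circles are guilty
# After moving one innocent circle, the convict bin holds all guilty mass
# plus 1 of the 7 innocent circles: sigma(convict | I) = 1/7.
sigma_convict = {'G': F(1), 'I': F(1, 7)}

# Total mass carrying the "convict" label, and the posterior guilt it induces
p_convict = prior_G * sigma_convict['G'] + (1 - prior_G) * sigma_convict['I']
posterior_G = prior_G * sigma_convict['G'] / p_convict

assert p_convict == F(2, 5)      # 4 of the 10 circles are labeled "convict"
assert posterior_G == F(3, 4)    # guilt drops to 3/4; the judge still convicts
```

Using exact fractions avoids any rounding ambiguity at the judge's $1/2$ threshold.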
Intuitively, by pooling innocent defendants with guilty defendants (but maintaining sufficiently many guilty defendants that the judge still wants to convict), the prosecutor is able to induce the judge to wrongly convict a larger number of defendants. Iterating this logic, we can continue to move units of innocent individuals into the convict bin, up until the judge is indifferent between convicting and acquitting (Figure \ref{fig:BPoptimal}). \begin{figure}[h] \begin{center} \includegraphics[scale=0.25]{BPoptimal.png} \end{center} \caption{Depiction of the prosecutor-optimal signal structure.} \label{fig:BPoptimal} \end{figure} These bins correspond to the following signal structure: \begin{equation} \label{eq:OptimalSignal} \begin{array}{ccc} & \emph{convict} & \emph{acquit} \\ G & 1 & 0 \\ I & 3/7 & 4/7 \end{array} \end{equation} \noindent That this signal structure is optimal will follow from the results in the subsequent section. Strikingly, although the judge knows that only 30\% of defendants are guilty, he ends up convicting 60\% of them. \subsection{Model} There are two agents, a Sender and a Receiver. The unknown parameter $\theta$ takes values in the finite set $\Theta$, and agents share a common prior $\mu_0 \in \Delta(\Theta)$. A signal is any mapping $\sigma: \Theta \rightarrow \Delta(S)$ from the set of states into distributions over a finite set of signal realizations $S$. The Receiver chooses from a compact set of actions $A$. Both agents' payoffs depend on the Receiver's action and the unknown state. We'll denote the Receiver's utility function by $u_R: A \times \Theta \rightarrow \mathbb{R}$ and the Sender's utility function by $u_S: A \times \Theta \rightarrow \mathbb{R}$, where both are assumed to be continuous. The timeline is as follows: First, the Sender chooses a signal $\sigma$. The realization of this signal is then observed by the Receiver, who updates his beliefs and chooses an action $a \in A$. Finally, payoffs are realized.
The solution concept is Sender-Preferred subgame perfect equilibrium; that is, the Receiver chooses an action to maximize his expected payoffs, breaking ties between optimal actions by maximizing the Sender's payoffs.\footnote{If there are multiple such actions, the Receiver chooses any action among them.} \subsection{Solution and Geometric Representation} Consider any Sender-Preferred subgame perfect equilibrium, and let $\hat{a}(\mu)$ denote the Receiver's action given belief $\mu \in \Delta(\Theta)$ in this equilibrium. That is, \begin{equation} \label{eq:ReceiverAction} \hat{a}(\mu) \in \argmax_{a \in A(\mu)} \mathbb{E}_\mu \left[u_S(a, \theta)\right] \end{equation} where \[A(\mu) = \argmax_{a \in A} \mathbb{E}_\mu \left[u_R(a, \theta)\right]\] is the set of actions that maximize the Receiver's expected payoff given belief $\mu$. (If the RHS of (\ref{eq:ReceiverAction}) contains multiple actions, set $\hat{a}(\mu)$ to be any action in this set.) Let \[\hat{v}(\mu) := \mathbb{E}_\mu \left[u_S(\hat{a}(\mu), \theta)\right]\] be the Sender's expected payoff given belief $\mu$ and the Receiver's action $\hat{a}(\mu)$. A signal's \emph{value} is the Sender's (ex-ante) expected payoff given the choice of that signal. \begin{proposition}[\citet{KamenicaGentzkow}] The following are equivalent: \begin{itemize} \item[(i)] There exists a (finite-valued) signal with value $v^*$. \item[(ii)] There exists a (finite-valued) signal taking realizations in $S\subseteq A$ with value $v^*$. \item[(iii)] There exists a Bayes-plausible distribution over posterior beliefs, $\tau \in \Delta(\Delta(\Theta))$, such that $\mathbb{E}_\tau\left[\hat{v}(\mu)\right] = v^*$. \end{itemize} \label{prop:BP} \end{proposition} \begin{proof} The implication (ii) $\Rightarrow$ (i) is immediate. The implication (ii) $\Rightarrow$ (iii) follows from Fact \ref{fact:Martingale} (every signal induces a Bayes-plausible distribution over posterior beliefs).
To show (i) $\Rightarrow$ (ii), observe that for any signal $\sigma: \Theta \rightarrow \Delta(S)$ with value $v^*$, we can define a new signal $\widetilde{\sigma}: \Theta \rightarrow \Delta(A)$ that maps types into the recommended action under $\sigma$. That is, \[\widetilde{\sigma}(a \mid \theta) = \sum_{s : \hat{a}(\mu_s) = a} \sigma(s \mid \theta)\] for every $a \in A$ and $\theta \in \Theta$, where $\mu_s$ denotes the Receiver's posterior given signal realization $s$ under $\sigma$. (The number of distinct action recommendations cannot exceed the size of $S$ and so is finite.) Clearly the optimal action given a recommendation of $a$ remains the action $a$, so the distributions of optimal actions induced by $\widetilde{\sigma}$ and $\sigma$ are the same. The direction (iii) $\Rightarrow$ (i) is nearly immediate from Proposition \ref{prop:BayesPlausible} (every Bayes-plausible distribution over posterior beliefs can be induced by a signal), but we need to show that it is possible to construct a \emph{finite-valued} signal for arbitrary $\tau$ (even ones with infinite support).\footnote{The construction in Section \ref{sec:BayesPlausibility} chooses $S$ to be the set of all beliefs in the support of $\tau$, which need not be finite.} We'll use the following result from convex analysis. \begin{proposition}[Caratheodory's Theorem] Let $X \subseteq \mathbb{R}^n$ be a nonempty subset of finite-dimensional Euclidean space. Let $conv(X)$ denote the convex hull of $X$. Then every vector in $conv(X)$ can be represented as a convex combination of at most $n+1$ vectors from $X$. \end{proposition} Fix any $v^*$ and Bayes-plausible $\tau$ such that $\mathbb{E}_\tau [\hat{v}(\mu)] = v^*$.
Define \[C = \{(\mu, \hat{v}(\mu)) \mid \mu \in \Delta(\Theta)\}\] to be the set of all beliefs and valuations of those beliefs, noting that $C \subseteq \mathbb{R}^n$ where $n \equiv \vert \Theta \vert$.\footnote{The simplex $\Delta(\Theta)$ is a subset of $\mathbb{R}^{n-1}$ and the valuation belongs to $\mathbb{R}$, hence $C \subseteq \mathbb{R}^n$.} Moreover, by the assumption that $v^* = \mathbb{E}_\tau [\hat{v}(\mu)]$ for some Bayes-plausible distribution $\tau$ over posterior beliefs, the vector $(\mu_0, v^*)$ belongs to the convex hull of $C$. Then by Caratheodory's Theorem, there exists a sequence of beliefs $(\mu_i)_{i=1}^{n+1}$ and a sequence of nonnegative weights $(\alpha_i)_{i=1}^{n+1}$ summing to 1, such that \[(\mu_0,v^*) = \sum_{i=1}^{n+1} \alpha_i \cdot (\mu_i,\hat{v}(\mu_i))\] Let $\tau^*$ be the distribution over posterior beliefs that assigns probability $\alpha_i$ to each belief $\mu_i$, $1\leq i \leq n+1$. Then \[\mathbb{E}_{\tau^*}[\hat{v}(\mu)] = \sum_{i=1}^{n+1} \alpha_i \cdot \hat{v}(\mu_i) = v^*\] as desired. Follow the construction in Section \ref{sec:BayesPlausibility} (setting the set of signal realizations $S$ to be the posterior beliefs in the support of $\tau^*$) to complete the proof. \end{proof} \bigskip Proposition \ref{prop:BP} tells us that we can determine when the Sender benefits from persuasion by studying how $\mathbb{E}_\tau\left[\hat{v}(\mu)\right]$ varies over the set of Bayes-plausible distributions. \begin{corollary} The Sender benefits from persuasion if and only if there exists a Bayes-plausible distribution $\tau$ such that $\mathbb{E}_\tau \left[\hat{v} (\mu)\right] > \hat{v} (\mu_0)$. \end{corollary} \begin{corollary} The value of an optimal signal is \begin{align*} \max_\tau \mathbb{E}_\tau \left[\hat{v}(\mu)\right] \quad \mbox{s.t.
} \int \mu d\tau(\mu) = \mu_0 \end{align*} \end{corollary} The value of information for the Sender at any prior $\mu$ can be represented geometrically using the upper concave envelope of $\hat{v}$. \begin{definition} \label{def:ConcaveClosure} Define \[V(\mu) \equiv \sup \{z \mid (\mu,z) \in Conv(\hat{v})\} \quad \forall \mu \in \Delta(\Theta)\] where $Conv(\hat{v})$ denotes the convex hull of the graph of $\hat{v}$. That is, $V$ is the smallest concave function that is everywhere weakly greater than $\hat{v}$. \end{definition} \begin{figure}[H] \centering \includegraphics[scale=1]{BP.pdf} \caption{Illustration of Definition \ref{def:ConcaveClosure}.} \label{fig:ConcaveClosure} \end{figure} By Proposition \ref{prop:BP}, the set $\{ z \mid (\mu_0, z) \in Conv(\hat{v})\}$ is precisely the set of expected payoffs that the Sender can achieve when the prior is $\mu_0$. For example, in Figure \ref{fig:ConcaveClosure}, the value $v$ is achievable from the prior $\mu_0$ via a signal that splits the prior into two posteriors $\tilde{\mu}$ and $\tilde{\mu}'$ (setting the weights so that the expected posterior equals the prior). So $V(\mu_0) = \sup \{z \mid (\mu_0,z) \in Conv(\hat{v})\}$ is the largest payoff the Sender can achieve when the prior is $\mu_0$, and the Sender strictly benefits from persuasion if and only if $V(\mu_0) > \hat{v}(\mu_0)$. The following corollary is immediate from the previous analysis. \begin{corollary} If $\hat{v}$ is concave, then the Sender does not benefit from persuasion for any prior. If $\hat{v}$ is strictly convex, the Sender benefits from persuasion for every interior prior.
\end{corollary} \subsection{Back to the Example} Returning to the setting of Section \ref{sec:BPexample}, observe that in any Sender-Preferred subgame perfect equilibrium, the judge's action given probability of guilt $\mu$ is \[\hat{a}(\mu) = \left\{\begin{array}{cc} \mbox{\emph{convict}} & \mbox{if } \mu \geq 0.5 \\ \mbox{\emph{acquit}} & \mbox{if } \mu < 0.5 \\ \end{array}\right.\] where the tie at $\mu=0.5$ is broken in favor of the prosecutor. So the prosecutor's expected payoff is \[\hat{v}(\mu) = \left\{\begin{array}{cc} 1 & \mbox{if } \mu \geq 0.5 \\ 0 & \mbox{if } \mu < 0.5 \\ \end{array}\right.\] as depicted in Panel (a) of Figure \ref{fig:v}. \begin{figure}[h] \centering \includegraphics[scale=0.9]{BPmerged.pdf} \caption{Depiction of $\hat{v}(\mu)$ in the prosecutor-judge example.} \label{fig:v} \end{figure} The upper concave envelope of $\hat{v}$ is \[V(\mu) = \left\{\begin{array}{cc} 1 & \mbox{if } \mu \geq 0.5 \\ 2\mu & \mbox{if } \mu < 0.5 \\ \end{array}\right.\] as depicted in Panel (b) of Figure \ref{fig:v}. At the prior belief of $\mu_0 = 0.3$, we have $V(0.3)=0.6$, confirming that the signal structure in (\ref{eq:OptimalSignal}) delivers the best possible expected payoff for the prosecutor. We see moreover that the prosecutor benefits from persuasion whenever $\mu_0 <0.5$ (i.e., whenever the judge would optimally acquit under the prior), but cannot improve his expected payoff through the choice of any signal structure when $\mu_0 \geq 0.5$.
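Both claims---that the signal in (\ref{eq:OptimalSignal}) convicts 60\% of defendants at a posterior of exactly $1/2$, and that no Bayes-plausible split beats $V(0.3)=0.6$---can be checked numerically. The grid search below over two-point splits is a sketch with our own names; two posteriors suffice here because the state is binary:

```python
from fractions import Fraction as F

# The candidate optimal signal from the example: sigma(convict | state)
prior_G = F(3, 10)
sigma_convict = {'G': F(1), 'I': F(3, 7)}
p_convict = prior_G * sigma_convict['G'] + (1 - prior_G) * sigma_convict['I']
posterior_G = prior_G * sigma_convict['G'] / p_convict
assert p_convict == F(3, 5)      # 60% of defendants are convicted
assert posterior_G == F(1, 2)    # judge exactly indifferent; tie broken to convict

def v_hat(mu):
    """Prosecutor's payoff at posterior mu: the judge convicts iff mu >= 1/2."""
    return 1.0 if mu >= 0.5 else 0.0

def V(mu0, grid=501):
    """Max of E[v_hat] over two-point Bayes-plausible splits of the prior."""
    best = v_hat(mu0)                        # the uninformative signal
    pts = [i / (grid - 1) for i in range(grid)]
    for lo in pts:
        for hi in pts:
            if lo < hi and lo <= mu0 <= hi:
                w = (hi - mu0) / (hi - lo)   # weight on the low posterior
                best = max(best, w * v_hat(lo) + (1 - w) * v_hat(hi))
    return best

assert abs(V(0.3) - 0.6) < 1e-6   # matches the concave envelope V(mu) = 2*mu
```

The maximizing split puts the low posterior at 0 and the high posterior at exactly 0.5, reproducing the bin construction above.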
\section{Introduction} The experiments at the Relativistic Heavy Ion Collider (RHIC) have shown evidence for a hot and dense form of matter, also known as the quark gluon plasma (QGP), created during the early stages of the collisions~\cite{qgp1,qgp2,qgp3,qgp4}. The high temperature and baryon density of the produced matter make it a most suitable environment for the production of light nuclei (p, d, $^{3}He$, $^{4}He$), the hypertriton and dibaryons ($\Lambda\Lambda$, p$\Omega$, $\Xi\Xi$, $\Omega\Omega$) as well as their antiparticles. For a long time the study of light (anti)nuclei and (anti)hypernuclei production has remained of interest to physicists~\cite{ygm1,ygm2}. These studies are important for understanding matter-antimatter symmetry, dark matter and the structure of neutron stars~\cite{hori,ko}. The antihypertriton ($^{3}_{\bar\Lambda}\bar{H}$) and antihelium-4 ($^{4}\bar{He}$) have already been observed at RHIC~\cite{starscience, starnature} and the Large Hadron Collider~\cite{ALICE}. Very recently, the interaction between antiproton pairs has also been measured by the STAR experiment~\cite{pbarpbar-star}. The production of light (anti)nuclei and (anti)hypernuclei in heavy ion collisions is fairly well described by the thermal model~\cite{therm1,therm2} and by coalescence models based on a multiphase transport model as well as other transport models~\cite{zhang, chen, botvina, zhu}. The production of light (anti)nuclei and (anti)hypernuclei in the most central Au+Au collisions at $\sqrt{s_{NN}}=$ 200 GeV has been studied using a coalescence model and a hydrodynamic blast-wave model in~\cite{Xue, sun}. Using the same model as Ref.~\cite{Xue}, the production of light (anti)nuclei and (anti)hypernuclei in the most central Au+Au collisions at $\sqrt{s_{NN}}=$ 11.5 GeV is discussed in this article. Different quantum chromodynamics (QCD) based models have proposed the existence of dibaryons as an exotic form of matter.
The H dibaryon was first predicted by Jaffe~\cite{jaffe}, and later many other dibaryon states were predicted, e.g. p$\Omega$~\cite{pomega}, $\Xi\Xi$~\cite{xixi} and $\Omega\Omega$~\cite{omegaomega}. Recently, experiments at RHIC~\cite{HSTAR} and the LHC~\cite{HALICE} have searched for the H dibaryon. With the advancement in computational power, reasonable theoretical progress has been made toward understanding dibaryon structure~\cite{ExHIC1,ExHIC2,th-dib1,th-dib2,th-dib3}. However, information about the invariant yields of dibaryons from heavy ion collisions remains scarce, and more efforts are required in this direction. The invariant yields of the dibaryons $\Lambda\Lambda$, p$\Omega$, $\Xi\Xi$ and $\Omega\Omega$ are presented for central Au+Au collisions at $\sqrt{s_{NN}}=$ 11.5 and 200 GeV. The baryon-strangeness correlation coefficient $C_{BS}$ has been proposed as a diagnostic tool to understand the nature of the matter formed in heavy ion collisions~\cite{vkoch,haussler}. For the QGP state, $C_{BS}$ is expected to be unity; however, a significant dependence of $C_{BS}$ on the hadronic environment was observed by V. Koch, A. Majumder and J. Randrup~\cite{vkoch}. Measurement of $C_{BS}$ in experiments is a technical challenge, as one needs to measure baryon number and strangeness on an event-by-event basis. Therefore the strangeness population factor $S_{3}$ was introduced by T. A. Armstrong {\it et al.}~\cite{armstrong}, which fairly depicts the local correlation between baryon number and strangeness~\cite{zhang}. Further, we introduce $S_{2}$, which represents the local strangeness-strangeness correlations. Keeping in mind the technical challenges of measuring $C_{BS}$, in this Letter we concentrate on the strangeness population factors $S_{3}$ and $S_{2}$ for central Au+Au collisions at $\sqrt{s_{NN}}=$ 11.5 and 200 GeV. \begin{figure*} \begin{center} \epsfxsize = 4.8in \epsfysize = 3.5in \epsffile{DiLm_11GeV_n.eps} \end{center} \caption{ (color online).
Differential invariant yields versus p$_{T}$ distributions for p, $\Lambda$, $\Xi$, $\Omega$, $\Lambda\Lambda$, p$\Omega$, $\Xi^{0}\Xi^{-}$, $\Omega\Omega$, light (anti)nuclei and the hypertriton produced in central Au+Au collisions at $\sqrt{s_{NN}} = 11.5$ and 200 GeV. The filled symbols are data from the STAR experiment~\cite{star200_1,star200_2,star200_3,stardndy_11} and the different lines represent our calculations from the hydrodynamical blast-wave model plus a coalescence model.} \label{fig:spectra_11GeV} \end{figure*} \section{Coalescence Model} A naive coalescence model is used to study the production of multistrange hadrons, light nuclei and the hypertriton in central Au+Au collisions at $\sqrt{s_{NN}}=$ 11.5 and 200 GeV. It is assumed that the production of these particles occurs at the kinetic freeze-out stage. In this case the particle production probability is proportional to the primordial hadron density and can be described by the following equation~\cite{sato}: \begin{equation} \label{EA} E_{c} \frac{d^3N_{c}}{d^3p_{c}} = B ( E_a \frac{d^3N_a}{d^3p_{a}})^{n} ( E_b \frac{d^3N_b}{d^3p_{b}})^{m}, \end{equation} \noindent where $E\frac{d^3N}{d^3p}$ are the invariant yields of the particles (a, b and c) under consideration, $p_{c}, p_{a}$ and $p_{b}$ are their momenta, B is the coalescence parameter and $\vec{p_{c}}=n\vec{p_{a}}+m\vec{p_{b}}$. The phase space information from the hydrodynamic blast-wave model is used as an input to equation~(\ref{EA}) to calculate the invariant yields of $\Lambda\Lambda$, p$\Omega$, $\Xi^{0}\Xi^{-}$, $\Omega\Omega$, light nuclei and the hypertriton. In the hydrodynamic blast-wave model~\cite{bwm1}, the system is characterized by the following parameters: the kinetic freeze-out temperature $T_{kin}$, the radial flow parameter $\rho_0$ and elliptic flow parameter $\rho_2$, the spatial anisotropy a, the average transverse radius R, and the particle emission duration $\tau_0$.
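As a minimal numerical sketch of equation (\ref{EA}) in its simplest case (deuteron coalescence, $a=b=$ p, $n=m=1$), one can fold a thermal-like proton spectrum with a constant coalescence parameter $B$. The exponential spectrum, the inverse slope and the value of $B$ below are illustrative assumptions, not the fitted blast-wave output used in the text:

```python
import math

T_EFF = 0.30     # GeV, effective inverse slope of the proton spectrum (assumed)
B2 = 4e-4        # coalescence parameter for deuterons (assumed, arbitrary units)
M_PROTON = 0.938  # GeV

def proton_yield(pT):
    """Invariant yield E d^3N/d^3p for protons (arbitrary normalization)."""
    mT = math.sqrt(M_PROTON ** 2 + pT ** 2)
    return math.exp(-mT / T_EFF)

def deuteron_yield(pT_d):
    """Eq. (1) with n = m = 1: E_d d^3N/d^3p_d = B2 * [proton yield at p_d/2]^2,
    since p_c = n*p_a + m*p_b implies each proton carries half the pair momentum."""
    return B2 * proton_yield(pT_d / 2.0) ** 2

# Both spectra fall monotonically with pT, and the composite is suppressed
assert proton_yield(2.0) < proton_yield(1.0)
assert 0.0 < deuteron_yield(1.0) < proton_yield(1.0)
```

The quadratic dependence on the proton density is what makes composite yields so sensitive to the freeze-out conditions.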
It is assumed that the fireball created in a heavy ion collision is in local thermal equilibrium and moves outward with velocity $u_{\mu}$. The phase-space emission points for hadrons are described by a Wigner function: \begin{eqnarray} \label{sxp} \nonumber S(x,p) d^{4}x &= & \frac{2s+1}{(2\pi)^3}m_{t}\cosh(y-\eta)\exp\left(-\frac{p^{\mu}u_{\mu}}{T_{kin}}\right) \\ \nonumber && \times \Theta(1-\tilde{r}(r,\phi))H(\eta)\\ && \times \delta(\tau-\tau_{0})d\tau\tau d\eta r drd\phi, \end{eqnarray} \noindent where y is the rapidity, $m_{t}$ is the transverse mass, $p^{\mu}$ is the four momentum, and (2s+1) is the degeneracy due to the spin of the hadrons. $\tilde{r}$ is given by \begin{equation} \label{rtilde} \tilde{r} =\sqrt{\frac{(x^1)^2}{{R_{x}}^2}+\frac{(x^2)^2}{{R_{y}}^2}}, R_{x}=aR, R_{y}=\frac{R}{a}, \end{equation} where $(x^{1},x^{2})$ is the transverse position of the hadron in coordinate space. Then we can define the azimuthally integrated $p_{T}$ spectrum as \begin{equation} \label{dnptdpt} \frac{dN}{2\pi p_{T}dp_{T}}=\int S(x,p)d^4x. \end{equation} Results obtained for the invariant yields of multistrange hadrons, nuclei and the hypertriton using equations~(\ref{EA}) and~(\ref{dnptdpt}) are discussed in the next section. \begin{table} \scalebox{0.7}{ \begin{tabular}{ |c|| c| c| c| c| c| c| c|} \hline $\sqrt{s_{NN}}$ & $dN_{^{3}He}/dy$ & $dN_{^{3}_{\Lambda}H}/dy$ & $dN_{^{4}He}/dy$ & $dN_{\Lambda\Lambda}/dy$ & $dN_{p\Omega}/dy$ & $dN_{\Xi^{0}\Xi^{-}}/dy$ & $dN_{\Omega\Omega}/dy$\\ (GeV) & & & & & & &\\ \hline 11.5 & $1.06\times10^{-2}$ & $2.04\times10^{-3}$ & $3.63\times10^{-5}$ & $2.46\times10^{-2}$ & $2.12\times10^{-3}$ & $6.68\times10^{-4}$ &$1.63\times10^{-6}$ \\ \hline 200 & $1.65\times10^{-4}$ & $1.05\times10^{-4}$ & $3.30\times10^{-7}$ & $7.24\times10^{-3}$ & $4.24\times10^{-4}$ & $2.75\times10^{-4}$ &$3.25\times10^{-6}$ \\ \hline \end{tabular}} \caption{\label{tab:int_dndy} $p_{T}$ integrated yields of light nuclei, the hypertriton and dibaryons in Au+Au collisions.
} \end{table} \section{Results and discussion} To study dibaryon, light nuclei and hypertriton production in central Au+Au collisions at the RHIC energies of 200 GeV and 11.5 GeV, we use the following parameters derived from the STAR experiment at RHIC as inputs to the hydrodynamic blast-wave model: kinetic freeze-out temperature = 89 (120) MeV, baryo-chemical potential = 21.9 (315) MeV, strangeness chemical potential = 6.5 (68) MeV and radial flow parameter $\rho_0$ = 0.91 (0.46) for $\sqrt{s_{NN}}=200$ (11.5) GeV~\cite{kumar}. The elliptic flow parameter $\rho_2$ = 0, spatial anisotropy a = 1, average transverse radius R = 10 fm and finite longitudinal proper time = 6.2 fm/c are set the same for both $\sqrt{s_{NN}}=200$ GeV and 11.5 GeV~\cite{rtau1,rtau2,rtau3}. Similar calculations were done by K.-J. Sun and L.-W. Chen in~\cite{sun}, where the freeze-out parameters are higher than the parameters used in our calculations. The proton spectra from the PHENIX collaboration used to derive the freeze-out parameters in~\cite{sun} are not corrected for the feed-down from $\Lambda$ and $\Sigma$ baryons. The coalescence of hadrons occurs when $|\vec{r}_{i}-\vec{r}_{j}| < 2R_{0}$ and $|\vec{p}_{i}-\vec{p}_{j}| < 100$ MeV/c, where $(\vec{r}_{i},\vec{p}_{i})$ and $(\vec{r}_{j},\vec{p}_{j})$ are the phase-space positions of the two constituent hadrons, and $R_{0}$ is the nuclear force radius. For deuterons and multistrange dibaryons $R_{0}$ = 1.57 fm is used, while for the other nuclei $R_{0}$ = 1.5 fm is used. The first two panels in figure~\ref{fig:spectra_11GeV} show the differential yields of p, $\Lambda$, $\Xi$, $\Omega$, $\Lambda\Lambda$, p$\Omega$, $\Xi^{0}\Xi^{-}$ and $\Omega\Omega$ produced in central Au+Au collisions at $\sqrt{s_{NN}} = 200$ and 11.5 GeV, respectively. Our calculation can reproduce the data for p, $\Lambda$, $\Xi$ and $\Omega$ from the STAR experiment at both energies~\cite{star200_1,star200_2,star200_3,stardndy_11}.
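For an azimuthally symmetric source ($\rho_2=0$, $a=1$), the integrals over $\eta$ and the boost angle in equations (\ref{sxp})-(\ref{dnptdpt}) can be carried out analytically, leaving the familiar Schnedermann-Sollfrank-Heinz single integral over the radius. The sketch below evaluates that reduced form with the 200 GeV parameters quoted above ($T_{kin}=89$ MeV, $\rho_0=0.91$, $R=10$ fm); the linear flow profile $\rho(r)=\rho_0\,r/R$ and the crude quadratures for the Bessel functions are our own simplifying assumptions, and the normalization is arbitrary:

```python
import math

def I0(z, n=200):
    """Modified Bessel I0 via (1/pi) * integral_0^pi exp(z cos t) dt (midpoint rule)."""
    h = math.pi / n
    return sum(math.exp(z * math.cos((k + 0.5) * h)) for k in range(n)) * h / math.pi

def K1(z, n=400, tmax=12.0):
    """Modified Bessel K1 via integral_0^inf exp(-z cosh t) cosh t dt, truncated."""
    h = tmax / n
    return sum(math.exp(-z * math.cosh((k + 0.5) * h)) * math.cosh((k + 0.5) * h)
               for k in range(n)) * h

def blast_wave(pT, m=0.938, T=0.089, rho0=0.91, R=10.0, nr=50):
    """dN/(2 pi pT dpT) ∝ ∫ r dr  mT K1(mT cosh(rho)/T) I0(pT sinh(rho)/T)."""
    mT = math.sqrt(m * m + pT * pT)
    total = 0.0
    for k in range(nr):
        r = (k + 0.5) * R / nr
        rho = rho0 * r / R                 # assumed linear flow profile
        total += r * mT * K1(mT * math.cosh(rho) / T) * I0(pT * math.sinh(rho) / T)
    return total * R / nr

# The strong radial flow hardens the spectrum, but it still falls at high pT
assert blast_wave(1.0) > blast_wave(2.0) > blast_wave(3.0) > 0.0
```

Feeding such spectra into equation (\ref{EA}) is what produces the composite-particle curves shown in figure~\ref{fig:spectra_11GeV}.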
The light (anti)nuclei and hypertriton spectra for $\sqrt{s_{NN}} = 11.5$ GeV are shown in the third panel of figure~\ref{fig:spectra_11GeV}. For $\sqrt{s_{NN}} = 200$ GeV the light (anti)nuclei spectra are taken from the article~\cite{Xue}, where the same coalescence model was used. The $p_{T}$ integrated yields of light nuclei and dibaryons at central rapidity are given in Table~\ref{tab:int_dndy}. We observe that the expected yields of all the particles at $\sqrt{s_{NN}} = 11.5$ GeV are significantly higher than those at $\sqrt{s_{NN}} = 200$ GeV, except for $\Omega\Omega$, possibly because of the competition between strangeness production mechanisms at this energy. Figure~\ref{fig:rapidity} shows the rapidity distributions of p, $\Lambda$, $\Xi$, $\Omega$, $\Lambda\Lambda$, p$\Omega$, $\Xi^{0}\Xi^{-}$, $\Omega\Omega$, d, $^{3}He$ and the hypertriton in central Au+Au collisions at $\sqrt{s_{NN}} = 11.5$ GeV from the hydrodynamical blast-wave model plus a coalescence model. Since a uniform rapidity distribution is used for $\sqrt{s_{NN}} = 200$ GeV, those rapidity distributions are not shown here. \begin{figure} \begin{center} \epsfxsize = 3.3in \epsfysize = 2.5in \epsffile{rapidity.eps} \end{center} \caption{ (color online) The rapidity distributions of multistrange hadrons, light nuclei and the hypertriton in central Au+Au collisions at $\sqrt{s_{NN}} = 11.5$ GeV.} \label{fig:rapidity} \end{figure} The p$_{T}$ integrated yields {\it dN/dy} of multistrange hadrons as a function of strangeness $|S|$ for central Au+Au collisions at $\sqrt{s_{NN}} = 11.5$ GeV (left) and 200 GeV (right) are shown in figure~\ref{fig:yieldvsS}, where the filled symbols are data from the STAR experiment~\cite{star200_2,star200_3,stardndy_11} and the different lines represent our calculations from the hydrodynamical blast-wave model plus a coalescence model.
The $\Lambda\Lambda$ and $\Omega\Omega$ dibaryon production yields at the top RHIC energy were estimated by the ExHIC collaboration based on a realistic coalescence model and a statistical model~\cite{ExHIC1,ExHIC2}. Those yields are compared with our calculations in figure~\ref{fig:yieldvsS}. We observe an exponential behavior of the invariant yields of multistrange hadrons, similar to that of light nuclei~\cite{Xue}. The yields for the baryon and dibaryon systems are fitted with the function $N_{S} = N^{i}({\frac{1}{\lambda}})^{|S|-1}$, where $N^{i}$ is the number of initial strange hadrons, $\lambda$ is the penalty factor and $S$ is the strangeness. The penalty factor quantifies how much harder it is to produce a hadron with strangeness $|S|+1$ than one with strangeness $|S|$. From the model we obtain $\lambda$ = 9.86 for baryons and $\lambda$ = 4.62 for the dibaryon system in central Au+Au collisions at $\sqrt{s_{NN}}=11.5$ GeV, and $\lambda$ = 6.46 for baryons and $\lambda$ = 4.21 for the dibaryon system at $\sqrt{s_{NN}}=200$ GeV. By fitting the STAR data for baryons, we get $\lambda$ = 12.92 $\pm$ 1.04 (5.71 $\pm$ 0.34) for central Au+Au collisions at $\sqrt{s_{NN}}=11.5$ (200) GeV. \begin{figure} \begin{center} \epsfxsize = 3.5in \epsfysize = 2.50in \epsffile{YieldvsS.eps} \end{center} \caption{(color online) $p_{T}$-integrated yields {\it dN/dy} of multistrange hadrons as a function of strangeness $|S|$ for central Au+Au collisions at $\sqrt{s_{NN}} = 11.5$ GeV (left) and 200 GeV (right).
The filled symbols are data from the STAR experiment~\cite{star200_2,star200_3,stardndy_11}, the solid lines represent our calculations from the hydrodynamic blast-wave model combined with the coalescence model, and the dashed lines for the $\Lambda\Lambda$ and $\Omega\Omega$ dibaryons are from Ref.~\cite{ExHIC2}.} \label{fig:yieldvsS} \end{figure} Figure~\ref{fig:yieldvsB} shows the production rate of nuclei as a function of baryon number for central Au+Au collisions at $\sqrt{s_{NN}} = 11.5$ GeV and 200 GeV, where the solid points are our calculations using the coalescence model and the open symbols are data from the STAR experiment~\cite{starnature}. At $\sqrt{s_{NN}} = 200$ GeV, our results are consistent with the STAR measurement within the uncertainties. The production rates decrease exponentially with increasing baryon number. We obtain the reduction factor by fitting the data with the exponential function $e^{-rB}$. The fitted reduction factors are 1.2$\times10^{3}$ (1.5$\times10^{3}$) and 0.33$\times10^{3}$ (1.95$\times10^{4}$) for adding one more nucleon (antinucleon) to the system at $\sqrt{s_{NN}} = 200$ and 11.5 GeV, respectively. The reduction factor obtained from our calculation at $\sqrt{s_{NN}} = 200$ GeV is comparable with the value $1.1^{+0.3}_{-0.2}\times10^{3}$ ($1.6^{+1.0}_{-0.6}\times10^{3}$) obtained by the STAR experiment~\cite{starnature}. The production rates for nuclei at $\sqrt{s_{NN}} = 11.5$ GeV are significantly higher than at $\sqrt{s_{NN}} = 200$ GeV, while the rates for antinuclei at the same energy decrease sharply compared to $\sqrt{s_{NN}} = 200$ GeV. The difference in reduction factors between matter and antimatter shows a significant energy (or temperature) dependence, which illustrates an increasing matter-antimatter asymmetry of the yields at lower energies (temperatures).
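Both the penalty-factor fit $N_{S} = N^{i}(1/\lambda)^{|S|-1}$ and the reduction-factor fit $e^{-rB}$ are straight lines in log space, so they can be sketched with a single least-squares helper. The yields below are illustrative numbers constructed for the demonstration, not the STAR data:

```python
import numpy as np

def exp_fit_factor(x, y):
    """Least-squares fit of y = y0 * f**(-x) as a straight line in
    log space; returns the factor f by which the yield drops per
    unit step in x. Helper name is ours, not from the analysis."""
    slope, _ = np.polyfit(x, np.log(y), 1)
    return np.exp(-slope)

# Yields built with a penalty factor of 9.86 (x = |S| - 1, |S| = 1..3)
# and a reduction factor of 1.2e3 per added nucleon (x = B, B = 1..3).
penalty = exp_fit_factor(np.arange(3.0), 10.0 * 9.86 ** -np.arange(3.0))
reduction = exp_fit_factor(np.arange(1.0, 4.0), 0.1 * 1.2e3 ** -np.arange(3.0))
```

The same helper recovers the input factors exactly for noiseless data; on real spectra the fit is weighted by the measurement uncertainties.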
A rough extrapolation to the present Universe at room temperature suggests that antimatter could hardly be observed, consistent with current cosmic-ray observations, in which neither antideuterons nor antihelium have been detected~\cite{fuke, abe}. \begin{figure} \begin{center} \epsfxsize = 3.5in \epsfysize = 2.50in \epsffile{InvariantYieldvsB.eps} \end{center} \caption{(color online) Invariant yield of nuclei in the average transverse momentum region ($p_{T}/|B|$ = 0.875 GeV/c) as a function of baryon number $B$ for central Au+Au collisions at $\sqrt{s_{NN}} = 11.5$ GeV and 200 GeV. Open symbols are data from the STAR experiment~\cite{starnature}, solid points are our calculations from the coalescence model and the lines represent fits to the coalescence model and to the data from the STAR experiment~\cite{starnature}.} \label{fig:yieldvsB} \end{figure} The strangeness population factor $S_{3}=^{3}_{\Lambda}H/(^{3}He\times\frac{\Lambda}{p})$ contains the local baryon-strangeness correlation in the numerator and the baryon-baryon correlation in the denominator~\cite{zhang,sato}. Therefore $S_{3}$ is quantitatively a good representation of the lattice-QCD ratio $\chi^{BS}_{11}/\chi^{B}_{2}$, where $\chi$ is the second derivative of the free energy with respect to the chemical potential~\cite{cheng}. The ratio $S_{3}$ as a function of transverse momentum is shown in figure~\ref{fig:S3S2} (left). Similarly, we define $S_{2} = \Lambda\Lambda/(d\times(\Lambda/p)^{2})$ for the strangeness $=-2$ dibaryon, which contains the local strangeness-strangeness correlation in the numerator and the baryon-baryon correlation in the denominator. The ratios $S_{2}$ and $S_{3}$ are larger for Au+Au collisions at $\sqrt{s_{NN}}$ = 200 GeV than at $\sqrt{s_{NN}}$ = 11.5 GeV, as shown in figure~\ref{fig:S3S2} (right). The ratios $\frac{^{3}He}{^{3}H}$ for $\sqrt{s_{NN}} = 11.5$ GeV and 200 GeV are also shown in figure~\ref{fig:S3S2} (left).
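The two population factors defined above reduce to simple ratios of yields. A minimal sketch (argument names are illustrative $dN/dy$ values, not a fixed interface):

```python
def s3(y_hyp, y_he3, y_lam, y_p):
    """Strangeness population factor
    S3 = hypertriton / (3He * (Lambda / p))."""
    return y_hyp / (y_he3 * (y_lam / y_p))

def s2(y_ll, y_d, y_lam, y_p):
    """S2 = LambdaLambda / (d * (Lambda / p)**2),
    the strangeness = -2 analogue of S3."""
    return y_ll / (y_d * (y_lam / y_p) ** 2)
```

Applied bin by bin to the spectra, these give the $p_{T}$ dependence shown in figure~\ref{fig:S3S2}.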
We observe that the ratio $\frac{^{3}He}{^{3}H}$ at $\sqrt{s_{NN}} = 11.5$ GeV is below unity, indicating that isospin effects become important there compared to $\sqrt{s_{NN}} =$ 200 GeV. \begin{figure} \begin{center} \epsfxsize = 3.5in \epsfysize = 2.50in \epsffile{S3S2_linscl.eps} \end{center} \caption{(color online) Left: the ratios $S_{2}$, $S_{3}$ and $\frac{^{3}He}{^{3}H}$ as a function of transverse momentum ($p_{T}$) for central Au+Au collisions at $\sqrt{s_{NN}} = 11.5$ GeV and 200 GeV. Right: the ratios $S_{2}$ and $S_{3}$ as a function of beam energy $\sqrt{s_{NN}}$, where the open cross is data from the STAR experiment~\cite{starscience}.} \label{fig:S3S2} \end{figure} \section{Conclusion} We have presented calculations of the production of dibaryons, light (anti)nuclei and hypertriton, based on a naive coalescence model, for Au+Au collisions at $\sqrt{s_{NN}}$ = 11.5 and 200 GeV. The exponential behavior of the invariant yields versus strangeness is studied for the multistrange hadrons, and penalty factors for the baryon and dibaryon systems are derived. The ratios $S_{2}$ and $S_{3}$ are discussed for Au+Au collisions at $\sqrt{s_{NN}}$ = 11.5 and 200 GeV; both are larger at $\sqrt{s_{NN}}$ = 200 GeV than at $\sqrt{s_{NN}}$ = 11.5 GeV. Furthermore, our study indicates that the suppression factor for nuclei production at $\sqrt{s_{NN}}$ = 11.5 GeV is roughly four times smaller than that at $\sqrt{s_{NN}}$ = 200 GeV, leading to a higher probability of observing light nuclei candidates at the lower energy. Our calculation provides motivation to measure $S_{3}$, light nuclei and dibaryons during phase II of the beam energy scan program of the STAR experiment at RHIC~\cite{BES-2}. \section{Acknowledgments} This work is supported in part by the Major State Basic Research Development Program in China under Contract No.
2014CB845401, and by the National Natural Science Foundation of China under Contract Nos. 11421505, 11520101004, 11275250 and 11322547. N. Shah is supported by the Chinese Academy of Sciences (CAS) President's International Fellowship Initiative under Grant No. 2015PM029. \section*{References} \renewcommand{\bibfont}{\small}
\section{Introduction} The study of electroweak processes plays an important role in few-body physics. Effective field theory (EFT) provides a systematic way of calculating the transition amplitudes for those processes. It can also establish, through the symmetry of QCD, useful relations between the amplitudes for weak- and strong-interaction processes. Some of the important processes, {\it e.g.}, neutron $\beta$-decay\cite{aetal-plb04} and the electroweak processes involving the deuteron, have been studied in the framework of EFT\cite{this_and_that}. In this talk, we review two recent studies on neutron-neutron fusion\cite{ak-05} and $np\to d\gamma$ for big-bang nucleosynthesis (BBN)\cite{achh-05}; these studies employ pionless EFT with dibaryon fields (dEFT)\cite{ah-prc05}.\footnote{ We refer to it as ``dibaryon EFT'' (dEFT) in this talk.} As regards $nn$-fusion, we pay particular attention to the consequences of the uncertainties in the existing experimental data on the neutron-neutron scattering length and effective range. As for the $np\to d\gamma$ cross section at BBN energies, a Markov Chain Monte Carlo (MCMC) method is adopted to analyze the relevant experimental data and determine the low-energy constants (LECs) in dEFT. \section{Neutron-Neutron Fusion, $nn\to de^-\bar{\nu}_e$} Ultra-high-intensity neutron-beam facilities are currently under construction at, e.g., the Oak Ridge National Laboratory and J-PARC and are expected to bring great progress in high-precision experiments concerning the fundamental properties of the neutron. Besides these experiments that focus on the properties of a single neutron, one might consider processes that involve the interaction of two free neutrons, which allow the model-independent determination of the neutron-neutron scattering length and effective range, $a_0^{nn}$ and $r_0^{nn}$. In this talk, we first consider the $nn$-fusion process for neutrons of very low energies, such as ultra-cold neutrons and thermal neutrons.
It is worth noting that, for very low energy neutrons, the maximum energy $E_e^{max}$ of the outgoing electrons from $nn$-fusion is $E_e^{max}\simeq B+\delta_N \simeq 3.52$ MeV, where $B$ is the deuteron binding energy and $\delta_N = m_n-m_p$. The value of $E_e^{max}$ is significantly larger than the maximum energy of electrons from neutron $\beta$-decay, $E_{e,\beta\mbox{-}decay}^{max}\simeq \delta_N\simeq 1.29$ MeV, and thus the $nn$-fusion electrons with energies larger than $\delta_N$ are in principle distinguishable from the main background electrons of neutron $\beta$-decay. \begin{figure}[t] \begin{center} \epsfxsize=7cm \epsfbox{fig-nn.eps} \caption{ Diagrams for the $nn$ fusion process up to NLO in dEFT. \label{fig;nn}} \end{center} \end{figure} Diagrams for the $nn$-fusion process up to next-to-leading order (NLO) are shown in Fig.~\ref{fig;nn}, from which the cross section is calculated\cite{ak-05}. We also include the Fermi function and $\alpha$-order radiative corrections pertaining to the one-body interaction\cite{aetal-plb04} to ensure an accuracy better than 1\% in the cross section. Two low-energy constants, $e_V^R$ and $l_{1A}$, appear in our calculation. Using the formula for neutron $\beta$-decay\cite{aetal-plb04} and the recent values of $G_F$, $V_{ud}$, $g_A$, and the neutron lifetime $\tau$ in the literature, we deduce $\frac{\alpha}{2\pi}e_V^R = (2.01\pm 0.40)\times 10^{-2}$. The LEC $l_{1A}$, which also contributes to other processes, {\it e.g.}, $pp$-fusion and $\nu$-$d$ reactions, can in principle be fixed from the tritium $\beta$-decay data. However, there has been no attempt to include the weak current in the three-nucleon system in dEFT. We therefore make use of the result from pionful EFT\cite{petal-prc03}, and obtain $l_{1A} = -0.33 \pm 0.03$. Hence the uncertainties due to the errors in these LECs and the higher order terms should be less than 1\%.
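The two endpoint energies quoted at the start of this section follow directly from standard values of the deuteron binding energy and the neutron-proton mass difference (the numbers below are the standard ones, not taken from this talk's tables):

```python
# Standard values in MeV:
B_D = 2.2246        # deuteron binding energy B
DELTA_N = 1.2933    # neutron-proton mass difference m_n - m_p

e_max_fusion = B_D + DELTA_N   # nn -> d e nubar endpoint, ~3.52 MeV
e_max_beta = DELTA_N           # neutron beta-decay endpoint quoted in the text
```

The gap between the two endpoints is what makes the high-energy part of the fusion electron spectrum background-free in principle.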
The prime uncertainty in the cross section comes from $a_0^{nn}$ and $r_0^{nn}$~\cite{annrnn}: \begin{equation} a_0^{nn} = - 18.5 \pm 0.4 \mbox{[fm]} \, , \ \ r_0^{nn} = 2.80\pm 0.11 \mbox{[fm]}\, . \label{eq;annrnn} \end{equation} We are now in a position to carry out numerical calculations of the electron spectrum and the cross section. Since the $nn$-fusion cross section obeys the $1/v$ law, where $v$ is the relative velocity between the two neutrons, we may concentrate on a particular value of the incident neutron energy. We consider here a head-on collision of two ultra-cold neutrons (UCN) ($v_{UCN}\simeq 5$ m/sec), and thus $v=2v_{UCN}\sim 10$ m/sec. \begin{figure}[t] \begin{center} \epsfxsize=6cm \epsfbox{fig-dsig.eps} \caption{Spectrum of the electrons from neutron-neutron fusion, $nn\to de\nu$.} \label{fig;nnfusion} \end{center} \end{figure} In Fig.~\ref{fig;nnfusion}, we plot the calculated electron spectrum, $d\sigma/dE_e$, as a function of $E_e$. As mentioned, the electrons with $E_e>\delta_N=1.29$ MeV are in principle distinguishable from the electrons coming out of neutron $\beta$-decay. The total cross section $\sigma$ is calculated to be \begin{equation} \sigma = (38.6\pm 1.5)\times 10^{-40} \mbox{[cm$^2$]}\, . \end{equation} We find that the significant uncertainty ($\sim$4\%) in the cross section comes solely from the current experimental errors of $a_0^{nn}$ and $r_0^{nn}$. Since the cross section obtained here is very small, the experimental observation of this reaction does not seem feasible in the near future. \section{$np\to d\gamma$ at the BBN energies} Primordial nucleosynthesis takes place between 1 and $10^2$ seconds after the big bang, at temperatures ranging from $T\simeq$ 1 MeV to 70 keV. Predictions of the primordial light element abundances, D, ${}^3$He, ${}^4$He, and ${}^7$Li, and their comparison with observations are a crucial test of the standard big bang cosmology.
The uncertainties in these predictions are dominated by the nuclear physics input for the reaction cross sections. Reaction databases are continuously updated\cite{bbn}, with more attention now paid to the error budget. The cross section of the $np\to d\gamma$ process at the BBN energies has been thoroughly studied using pionless EFT up to N$^3$LO by Chen and Savage\cite{cs-prc99}, and up to N$^4$LO by Rupak\cite{r-npa00}. In this part of the talk, we present an estimation of the cross section employing a new method, {\it i.e.}, a combination of dEFT up to NLO and an MCMC analysis of the relevant experimental data. We find that this method leads to a result comparable with that obtained by Rupak, and we argue that the estimated $np\to d\gamma$ cross section at the BBN energies is reliable to within 1\%. \begin{figure}[b] \begin{center} \epsfxsize=7cm \epsfbox{fig-diagrams-npdg.eps} \caption{Diagrams for the $np\to d\gamma$ process up to NLO in dEFT. \label{fig;npdg}} \end{center} \end{figure} Diagrams for the $np\to d\gamma$ process up to NLO in dEFT are shown in Fig.~\ref{fig;npdg}. From these diagrams we calculate the amplitudes for the $S$(${}^1S_0$ and ${}^3S_1$)- and $P$-waves of the initial two-nucleon state. Since the ${}^3S_1$ amplitude is highly suppressed due to the orthogonality of the scattering and bound ${}^3S_1$ states, we neglect it in our calculations. Using these amplitudes, we can easily calculate the cross section for $np\to d\gamma$. \begin{table}[t] \tbl{Values of parameters \label{tab;parameters} } {\footnotesize \begin{tabular}{c|cc} \hline & MCMC & Prev. Method \\ \hline $a_0$ & $-23.7426\pm 0.0081$ & $-23.749\pm 0.008$ \\ $r_0$ & $2.783\pm 0.043$ & $2.81\pm0.05$ \\ $\rho_d$ & $1.7460\pm 0.0072$ & $1.760\pm0.005$ \\ $l_1$ & $0.798\pm 0.029$ & $0.782\pm 0.022$ \\ \hline \end{tabular} } \end{table} Five parameters, $a_0$, $r_0$, $\gamma$, $\rho_d$, and $l_1$, appear in the amplitudes.
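Among these parameters, $\gamma$ is the deuteron binding momentum, which is tied to the binding energy $B$ through the standard nonrelativistic relation $\gamma=\sqrt{2\mu B}$, with $\mu$ the $np$ reduced mass. A quick consistency check using standard masses (these numbers are not taken from this talk):

```python
import math

# Standard values in MeV:
M_P, M_N, B = 938.272, 939.565, 2.2246
HBARC = 197.327  # MeV fm

mu = M_P * M_N / (M_P + M_N)          # np reduced mass
gamma_mev = math.sqrt(2.0 * mu * B)   # ~45.7 MeV
gamma_fm = gamma_mev / HBARC          # ~0.23 fm^-1
```

This is why $\gamma$ can be constrained directly by the accurately known value of $B$ rather than fitted to the scattering data.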
We determine the values of the four parameters $a_0$, $r_0$, $\rho_d$, and $l_1$ by an MCMC analysis of the relevant low-energy experimental data: the total cross section of $np$ scattering at energies $\le$5 MeV (2124 data points) from the NN-OnLine web page, the $np\to d\gamma$ cross section from Suzuki {\it et al.}\cite{suzuki_etal} and Nagai {\it et al.}\cite{nagai_etal} including two thermal capture data points\cite{cox_etal}, the $d\gamma\to np$ cross section from Hara {\it et al.}\cite{hara_etal} and Moreh {\it et al.}\cite{moreh_etal}, and the photon analyzing power from Tornow {\it et al.}\cite{tornow_etal} and Schreiber {\it et al.}\cite{schreiber_etal}. Meanwhile, we constrain $\gamma$ from the accurate value of $B$. In Table~\ref{tab;parameters} we give our estimates of the parameters obtained from the present MCMC analysis, along with the values obtained by the previous method (``Prev. Method'').\footnote{ The values of the effective ranges, $a_0$, $r_0$, and $\rho_d$, are taken from Ref.\cite{kn-zpa75}, and the value of $l_1$ is obtained from the average of the two thermal capture rates\cite{cox_etal}. } We find small differences ($\le$2\%) between the values of the parameters for the two cases; we will come back to this later. In Table~\ref{table;results} the theoretical estimates of the $np\to d\gamma$ cross section at BBN energies are given as a function of the initial two-nucleon energy $E$ in the center of mass (CM) frame. The column labeled ``dEFT(MCMC)'' gives our preliminary results for the mean values and standard deviations obtained from the MCMC. Table~\ref{table;results} also shows the results of four other methods: ``dEFT(Prev. Meth.)'' based on the parameter set ``Prev. Method'' in Table~\ref{tab;parameters}, pionless EFT up to N${}^4$LO by Rupak, a high-precision potential-model calculation including the meson-exchange current by Nakamura, and an R-matrix analysis by Hale.
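The sampling machinery behind such an analysis can be illustrated with a toy version: a random-walk Metropolis sampler for a single stand-in parameter with a Gaussian likelihood. This is only a sketch of the scheme; the real analysis fits $a_0$, $r_0$, $\rho_d$ and $l_1$ simultaneously to the full dEFT cross-section formulas:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data whose mean plays the role of a single LEC "l1".
l1_true, sigma = 0.80, 0.03
data = l1_true + sigma * rng.standard_normal(200)

def log_post(l1):
    # Gaussian likelihood with a flat prior.
    return -0.5 * np.sum((data - l1) ** 2) / sigma**2

# Random-walk Metropolis: propose a step, accept with
# probability min(1, exp(delta log posterior)).
chain, l1, lp = [], 0.5, log_post(0.5)
for _ in range(20000):
    prop = l1 + 0.01 * rng.standard_normal()
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        l1, lp = prop, lp_prop
    chain.append(l1)

samples = np.array(chain[5000:])            # discard burn-in
l1_mean, l1_err = samples.mean(), samples.std()
```

The posterior mean and standard deviation of the chain play the role of the central values and errors quoted in Table~\ref{tab;parameters}.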
Good agreement is found among the different approaches, except that the results of ``dEFT(Prev. Meth.)'' at $E=0.5$ and 1 MeV and those of Hale exhibit some deviations, which are $\sim$0.6\% in the former and up to 4.5\% in the latter. The $\sim$0.6\% difference at $E=0.5$ and 1 MeV between ``dEFT(MCMC)'' and ``dEFT(Prev. Meth.)'' is significant compared to the small $\sim$0.3\% statistical errors obtained here. This difference can be accounted for by higher order terms that are not included in the amplitudes of dEFT up to NLO. By including the higher order terms associated with the $P$-wave scattering volumes\cite{cs-prc99}, we can reproduce the ``dEFT(MCMC)'' results at $E=0.5$ and 1 MeV within ``dEFT(Prev. Meth.)''. This implies that the values fitted by the MCMC mimic the roles of the higher order terms. Since our ``dEFT(MCMC)'' results agree quite well with the calculations of Rupak and Nakamura, and since various corrections due to the higher order terms have been studied in the N$^4$LO pionless EFT calculation by Rupak, we infer that the estimated $np\to d\gamma$ cross section at the BBN energies should be reliable to within 1\%. A dEFT calculation provides a systematic perturbation scheme and a simple model-independent expression for the amplitudes in terms of a finite number of LECs. As demonstrated above, the combination of a dEFT calculation and an MCMC analysis of the available experimental data is a useful method for deducing reliable cross sections for other few-body processes. \begin{table}[t] \tbl{Theoretical estimates of the $np\to d\gamma$ cross sections at the BBN energies. $E$ is the initial two-nucleon energy in the CM frame. See the text for more details. \label{table;results}} {\footnotesize \begin{tabular}{c|ccccc} \hline E(MeV) & dEFT(MCMC) & dEFT(Prev. Meth.)
& Rupak & Nakamura & Hale \\ \hline $1.265\times10^{-8}$ & 333.8(4) & 333.7(15) & 334.2(0) & 335.0 & 332.6(7) \\ $5\times10^{-4}$ & 1.667(2) & 1.666(8) & 1.668(0) & 1.674 & 1.661(7) \\ $1\times10^{-3}$ & 1.171(1) & 1.171(5) & 1.172(0) & 1.176 & 1.167(2) \\ $5\times10^{-3}$ & 0.4979(6) & 0.4976(21) & 0.4982(0) & 0.4999 & 0.4953(11) \\ $1\times10^{-2}$ & 0.3321(4) & 0.3319(14) & 0.3324(0) & 0.3335 & 0.3298(9) \\ $5\times10^{-2}$ & 0.1079(1) & 0.1079(4) & 0.1081(0) & 0.1084 & 0.1052(9) \\ 0.100 & 0.06341(7) & 0.0634(2) & 0.06352(5) & 0.06366 & 0.0605(10)\\ 0.500 & 0.03413(8) & 0.0343(1) & 0.0341(2) & 0.03416 & 0.0338(8) \\ 1.00 & 0.03502(10) & 0.0352(2) & 0.0349(3) & 0.03495 & 0.0365(8)\\ \hline \end{tabular} } \end{table} The author would like to thank K. Kubodera, R.~H. Cyburt, S.~W. Hong, and C.~H. Hyun for collaboration.
\subsection{Event mixing for non-correlated pairs and the correction for purity.} Non-correlated pairs each consist of two daughter particles. These daughters belong to two events that are carefully chosen to have similar event multiplicity and topology. The ratio $\frac{A(k^*)}{B(k^*)}$ (see above), after being normalized at a large $k^*$ (at least 0.25 $\mathrm{GeV}/c$), gives the measured CF, $C(k^*)_{\rm meas}$. Because in practice one cannot select 100\% pure (anti)protons, a correction is applied to the pairs to obtain the PID-purity-corrected CF: $C_{\mathrm{PurityCorrected}}(k^*) = \frac{ C_{\mathrm{meas}}(k^*) -1}{\mathrm{PairPurity}(k^*)} + 1$. For simplicity, in Eq.~\ref{eq:Inclusive} the subscript ``meas" is dropped, and elsewhere in this paper the subscript ``PurityCorrected" is dropped. \subsection{The transformation from $k^*_{p\Lambda}$ to $k^*_{pp}$.} $C_{p\Lambda}(k^*)$ in Eq.~\ref{eq:Inclusive} is naturally expressed as a function of $k^*_{p\Lambda}$. Thus, to use it in the above equation, one needs to transform it into a function of $k^*_{pp}$. Here $k^*_{p\Lambda}$ (and $k^*_{pp}$) is the magnitude of the three-momentum of either particle in the pair rest frame, where in the case of $k^*_{pp}$ one of the protons is the decay daughter of the $\Lambda$. This transformation is done via $C_{p\Lambda}(k^*_{pp})={\displaystyle \int} C_{p\Lambda}(k^*_{p\Lambda}) T(k^*_{p\Lambda},k^*_{pp}) dk^*_{p\Lambda} $, where $T(k^*_{p\Lambda},k^*_{pp})$ is a matrix that transforms $k^*_{p\Lambda}$ to $k^*_{pp}$~\cite{HannaThesis}. The transformation matrix is generated with the THERMINATOR2 model~\cite{THERMINATOR2}, a Monte Carlo event generator dedicated to studies of the statistical production of particles in relativistic heavy-ion collisions.
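In discretized form, the transformation above is a matrix-vector product over $k^*$ bins. A minimal sketch (the binning and the normalization convention, chosen here so that a flat correlation maps to a flat one, are assumptions of this illustration, not taken from the analysis code):

```python
import numpy as np

def transform_cf(c_plam, T, dk):
    """Discretized version of
    C(k*_pp) = integral of C(k*_pLambda) T(k*_pLambda, k*_pp) dk*_pLambda,
    with T[i, j] binned in (k*_pLambda, k*_pp) and dk the bin width."""
    return (np.asarray(c_plam) @ np.asarray(T)) * dk
```

In practice $T$ is filled from THERMINATOR2 pairs by histogramming each generated $(k^*_{p\Lambda}, k^*_{pp})$ combination.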
\subsection{The calculation of the FSI contribution to the correlation function.} The femtoscopic correlations due to the Coulomb FSI between the emitted electron and the residual nucleus in beta decay have been well known for more than 80 years; they reveal themselves in a sensitivity of the Fermi function (an analogue of the CF~\cite{Led07}) to the nuclear radius. Compared with non-interacting particles, the FSI effect in a two-particle system with total spin $S$ manifests itself in the substitution of the product of plane waves, ${\rm exp}(-ip_1X_a-ip_2X_b)$, by the non-symmetrized Bethe-Salpeter amplitudes $\Psi_{p_1p_2}^{S(-)}(X_a,X_b) = \Psi_{p_1p_2}^{S(+)*}(X_a,X_b)$~\cite{GKW79,Lednicky81,Led09,Erazmus95}. For identical particles, the symmetrization requirement in the representation of total pair spin $S$ takes the same form for both bosons and fermions: the non-symmetrized amplitude should be substituted by $[\Psi_{p_1p_2}^{S(-)}(X_a,X_b) + (-1)^S \Psi_{p_2p_1}^{S(-)}(X_a,X_b)]/\sqrt{2}$. In the pair rest frame, $X_a-X_b=\{t^*,{\bf r}^*\}$ and $p_1-p_2=\{\omega_1^*-\omega_2^*,2{\bf k}^*\}$, where $\omega_i^*=(m_i^2+k^{*2})^{1/2}$ is the energy of a particle of mass $m_i$, and $t^*$ and ${\bf r}^*$ are the relative emission time and the relative separation in the pair rest frame, respectively. In this frame, the non-symmetrized Bethe-Salpeter amplitude at equal emission times ($t^*=0$) reduces, up to an inessential phase factor, to a stationary solution of the scattering problem, $\psi_{-{\bf k}^*}^{S(+)}({\bf r}^*)$. At small relative momenta, $k^* \lesssim 1/r^*$, this solution can be used in practical calculations under the condition $|t^*| \ll m r^{*2}$~\cite{Lednicky81,Led09}. The equal-time approximation is almost exact in beta decay, and it is usually quite accurate for particles produced in high-energy collisions (to a few percent in the FSI contribution to the CFs of particles even as light as pions~\cite{Led09}).
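For non-interacting identical particles, the symmetrization rule above gives a purely quantum-statistical weight $1 + (-1)^S\cos(2\,{\bf k}^*\!\cdot{\bf r}^*)$: maximal enhancement for even $S$ and complete suppression for odd $S$ at ${\bf k}^*\!\cdot{\bf r}^*=0$. A small numerical check (the function is an illustration of the rule, not part of the analysis code):

```python
import cmath, math

def qs_weight(k_dot_r, S):
    """Weight |psi + (-1)**S * psi_swapped|**2 / 2 for identical,
    non-interacting particles, with psi = exp(-i k*.r*); swapping
    the two momenta flips the sign of k*. Analytically this reduces
    to 1 + (-1)**S * cos(2 k*.r*)."""
    psi = cmath.exp(-1j * k_dot_r)
    psi_swapped = cmath.exp(+1j * k_dot_r)
    return abs(psi + (-1) ** S * psi_swapped) ** 2 / 2.0
```

Adding the FSI replaces the plane waves by the scattering-problem solutions, which is what the next subsection evaluates.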
In collisions involving heavy nuclei, the characteristic separation of the emission points, $r^*$, can be considered substantially larger than the range of the strong-interaction potential. The FSI contribution is then independent of the actual potential form and can be calculated analytically with the help of the corresponding scattering amplitudes only~\cite{gkll86}. At small $k^*$, it is basically determined by the s-wave scattering amplitudes $f^S(k^*)$ scaled by the separation $r^*$~\cite{Lednicky81}. \subsection{The analytical calculation of the (anti)proton-(anti)proton correlation function.} The (anti)proton-(anti)proton correlation function, $C_{pp}(k^*;R_{pp})$ in equation~\ref{eq:Inclusive}, can be described by the Lednick\'{y} and Lyuboshitz analytical model~\cite{Lednicky81}. In this model, the correlation function is calculated as the square of the properly symmetrized wave function averaged over the total pair spin $S$ and over the distribution of the relative distances (${\bf r}^*$) of the particle emission points in the pair rest frame, assuming 1/4 singlet and 3/4 triplet states and a simple Gaussian distribution $dN/d^3{\bf r}^* \sim \exp(-{\bf r}^{*2}/4R_{pp}^2)$. The starting point is the FSI weight of nucleons emitted with the separation ${\bf r}^*$ and detected with the relative momentum ${\bf k}^*$, \begin{eqnarray*} w({\bf k}^*,{\bf r}^*)= |\psi_{-{\bf k}^*}^{S(+)}({\bf r}^*) + (-1)^S \psi_{{\bf k}^*}^{S(+)}({\bf r}^*)|^2/2, \end{eqnarray*} where $\psi_{-{\bf k}^*}^{S(+)}({\bf r}^*)$ is the equal-time ($t^*=0$) reduced Bethe-Salpeter amplitude, which can be approximated by the outer solution of the scattering problem~\cite{Landau1974}.
This is \begin{eqnarray*} \psi_{-{\bf k}^*}^{S(+)}({\bf r}^*)= e^{i\delta_c}\sqrt{A_c(\eta)} [e^{-i{\bf k}^*{\bf r}^*}F(-i\eta,1,i\xi) + f_c(k^*) \frac{\widetilde{G}(\rho,\eta)}{{r}^{*}}], \end{eqnarray*} where $\eta=(k^* a_c)^{-1}$, $a_c = 57.5$ fm is the Bohr radius for two protons, $\rho= k^* r^*$, $\xi= {\bf k^*}{\bf r^*}+\rho$, $A_c(\eta)$ is the Coulomb penetration factor given by $A_c(\eta)=2\pi\eta[\exp(2\pi\eta)-1]^{-1}$, $F$ is the confluent hypergeometric function, and $\widetilde{G}(\rho,\eta)=\sqrt{A_c(\eta)}[G_0(\rho,\eta)+iF_0(\rho,\eta)]$ is a combination of the regular ($F_0$) and singular ($G_0$) s-wave Coulomb functions. Furthermore, \begin{eqnarray*} f_c(k^*)=[\frac{1}{f_0}+\frac{1}{2}d_0k^{*2}- \frac{2}{a_c}h(\eta) - ik^*A_c(\eta)]^{-1} \end{eqnarray*} is the s-wave scattering amplitude renormalized by the Coulomb interaction, with $h(\eta)=\eta^2\sum\limits_{n=1}^{ \infty}[n(n^2+\eta^2)]^{-1} -C -\ln|\eta|$, where $C \doteq 0.5772$ is the Euler constant. The dependence of the scattering parameters on the total pair spin $S$ is omitted, since only the singlet ($S=0$) s-wave FSI contributes in the case of identical nucleons. The theoretical CF at a given $k^*$ can be calculated as the average FSI weight $\langle w({\bf k}^*,{\bf r}^*)\rangle$, with the separation $r^*$ simulated according to the Gaussian law and the angle between the vectors ${\bf k}^*$ and ${\bf r}^*$ simulated according to a uniform cosine distribution. This CF is subject to the integral correction~\cite{Lednicky81} $-A_c(\eta)|f_c(k^*)|^2 d_0/(8\sqrt{\pi}R_{pp}^3)$, which accounts for the deviation of the outer solution from the true wave function in the inner potential region. In addition, in Au+Au collisions the emitting source has a net positive charge, which influences the CF differently for proton and antiproton pairs.
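The Coulomb factors entering the Lednick\'{y}-Lyuboshitz formulas above can be evaluated directly. A sketch, with the $h(\eta)$ series truncated numerically; the default $f_0 = 7.8$ fm and $d_0 = 2.77$ fm are commonly quoted singlet $pp$ values, used here only for illustration:

```python
import math

A_C = 57.5  # two-proton Bohr radius a_c in fm (from the text)
EULER_C = 0.5772156649015329

def a_c_factor(eta):
    """Coulomb penetration factor A_c(eta) = 2*pi*eta / (exp(2*pi*eta) - 1)."""
    x = 2.0 * math.pi * eta
    return x / math.expm1(x)

def h_of_eta(eta, nmax=100000):
    """h(eta) = eta^2 * sum_{n>=1} [n (n^2 + eta^2)]^-1 - C - ln|eta|,
    with the series truncated at nmax terms."""
    s = sum(1.0 / (n * (n * n + eta * eta)) for n in range(1, nmax + 1))
    return eta * eta * s - EULER_C - math.log(abs(eta))

def f_c(kstar, f0=7.8, d0=2.77):
    """Coulomb-renormalized s-wave scattering amplitude (fm),
    kstar in fm^-1, following the formula quoted in the text."""
    eta = 1.0 / (kstar * A_C)
    denom = complex(
        1.0 / f0 + 0.5 * d0 * kstar ** 2 - (2.0 / A_C) * h_of_eta(eta),
        -kstar * a_c_factor(eta),
    )
    return 1.0 / denom
```

Note that $A_c(\eta)\to 1$ as $\eta\to 0$ (large $k^*$), recovering the free amplitude, and is strongly suppressed for like-sign pairs at small $k^*$.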
The effect of this net charge is included in the calculation following refs.~\cite{Led09,Erazmus95}. \subsection{Systematic uncertainties.} The systematic uncertainties include variations due to track-wise and pair-wise cuts, the uncertainty in describing the $C_{p\Lambda}$ correlation function~\cite{BodmerAndUsmani}, and the uncertainty from the $C_{\Lambda\Lambda}$ measurement. The latter dominates the systematic error of $d_0$ and $f_0$, and it affects $d_0$ more than it does $f_0$ because the shape of the CF is sensitive to $d_0$, in particular at low $k^*$. As a consistency check, when fitting the proton-proton CF, both $f_0$ and $d_0$ are also allowed to vary freely, and the fitted $f_0$ and $d_0$ agree with the results from fitting the antiproton-antiproton CF. Assuming the measurements from the different systematic checks follow a uniform distribution, the final systematic error is given by (maximum $-$ minimum)/$\sqrt{12}$. In our calculations, we consider the two-proton wave function, taking into account the Coulomb interaction between point-like protons in all orbital angular momentum waves and the strong interaction in the s-wave only. We neglect the small non-Coulomb electromagnetic contributions due to magnetic interactions, vacuum polarization, and the finite proton size~\cite{Mathelitsch1984,Heller1967,Bergervoet1988}. This approximation changes the scattering parameters at the level of a few percent~\cite{Mathelitsch1984,Heller1967,Bergervoet1988}. \end{methods}
\section{Introduction} Gravitational lensing measurements provide the only direct method to probe the non-luminous matter component of lensing systems. Weak lensing measurements of cluster masses and mass distributions are becoming routine, however the measurement of the dark matter components of individual galaxies is a new and exciting prospect. By analysing gravitational microlensing signals in some strongly lensed quasars, we are able to infer the ratio of clumpy to continuously distributed matter along the line of sight to the background source. This gives us a probe of the dark matter content in a lensing galaxy at projected radii of $\sim2$--10kpc from the centre of the galaxy (\citealt{sw04}; \citealt{pooley+09}). Recent years have seen considerable advancement in the mapping of dark and stellar mass in lensing galaxies (e.g. \citealt*{keeton+98}; \citealt*{ferreras+05}; \citealt{barnabe+09}). Usually, the total mass within the Einstein Radius is constrained by modelling the lensing galaxy to fit the observed lensed image positions. Photometry of the lensing galaxy, in combination with stellar population synthesis models, provides the distribution of stellar mass. Finally, observations of stellar velocity dispersions can be used to break the mass-sheet and mass-anisotropy degeneracies, and thus constrain the overall mass density profile (see for example \citealt{kt03}; \citealt{tk04}; \citealt{ferreras+05}; \citealt*{ferreras+08}; \citealt{barnabe+09}; \citealt{auger+09}, and references therein). These analyses can provide a very detailed picture of the structure of lensing galaxies. However, they are quite complex, relying on often difficult observations and detailed modelling. A complementary method exists, in which observations of microlensing in the lensed quasar images are used to constrain the dark matter percentage along those lines of sight directly. 
Cosmological microlensing occurs when the light path to a lensed quasar image intersects a starfield in a foreground lensing galaxy. The lensing galaxy as a whole magnifies the lensed image; microlensing by individual stars induces variations about this macro-magnification. This is most readily detected in lightcurves of quasar images, where relative motion between observer, lens and source causes uncorrelated fluctuations in brightness between lensed images. This effect was first observed in the lensed quasar Q2237+0305 \citep{irwin+89}. In some lensing systems we observe a close pair of quasar images. Basic lensing theory suggests that these two images should have approximately equal magnifications (\citealt{cr79}; \citealt{bn86}). On the contrary, in eight out of ten known cases we find that the brightness of the image located at the saddle point in the time delay surface is suppressed relative to the image at the minimum in the time delay surface \citep{pooley+07}. Microlensing is one possible explanation for such flux ratio anomalies. However, microlensing by a purely stellar component is not sufficient. \citet{sw02} showed that this discrepancy could be accounted for by adding a significant smooth matter component to the lens at the image positions, since minimum and saddle point images are microlensed differently when a smooth matter component is added. We have previously developed a technique for using single-epoch multi-wavelength observations of anomalous lensed quasars to constrain the radius and radial profile of the background quasar accretion discs (\citealt{bfww08}; \citealt{fbw09}). In those analyses, we marginalised over the smooth matter percentage in the lens as a nuisance parameter. Here, we turn the problem around and instead marginalise over the quasar parameters to obtain constraints on the smooth matter percentage in the lens at the image positions. Rough microlensing measurements of smooth matter percentages have been reported previously. 
Spectroscopy of SDSS J0924+0219 undertaken by \citet{keeton+06} suggested a smooth matter percentage of 80 to 85 per cent in that lens at the location of the $D$ and $A$ images. Using X-ray monitoring of HE 1104+1805, \citet{chartas+09} reported that a smooth matter percentage of $\sim80$ per cent is favoured. \citet{pooley+09} measured the smooth matter percentage in PG 1115+080 to be $\sim90$ per cent, using X-ray observations. \citet{metal08} found a weak trend supporting this result. Most recently, \citet{dai+09} favoured a smooth matter fraction of $\sim70$ per cent using X-ray and optical monitoring of RXJ 1131-1231. Microlensing analyses consistently predict a significant smooth matter percentage in the lensing galaxy at the position of anomalous images. In this paper, we present constraints on the dark matter percentages in three lensing galaxies: MG 0414+0534, SDSS J0924+0219 and Q2237+0305. MG 0414+0534 and SDSS J0924+0219 are both lensed by early-type galaxies, and consist of close image pairs displaying a flux ratio anomaly. MG 0414+0534 is moderately anomalous, whereas SDSS J0924+0219 is the most anomalous lensed quasar currently known. Q2237+0305 differs from the previous sources in two key ways: it is lensed by a barred spiral galaxy, and it does not contain a close image pair. Nevertheless, it is known to be affected by microlensing (e.g. \citealt{irwin+89}). This paper is laid out as follows: in Section \ref{sec:obs} we discuss the observational data on the three systems of interest. The simulation technique is briefly described in Section \ref{sec:sims}. We present our results and discussion in Section \ref{sec:results}, and conclude in Section \ref{sec:conclusions}. Throughout this paper we use a cosmology with $H_0=70\,{\rm km\,s^{-1}\,Mpc^{-1}}$, $\Omega_m=0.3$ and $\Omega_{\Lambda}=0.7$. \section{Observational data} \label{sec:obs} \subsection{MG 0414+0534} MG 0414+0534 was discovered by \citet{hewitt+92}.
It consists of a background quasar at $z_s=2.64$ \citep{lejt95} and a foreground early-type lensing galaxy at $z_l=0.96$ \citep{tk99}. Four images of the quasar are observed, with the close image pair (images $A_1$ and $A_2$) displaying a flux ratio anomaly. This anomaly is weak in both the mid-infrared ($A_2/A_1 = 0.90 \pm 0.04$ on 2005 October 10, \citealt{minezaki+09}) and the radio ($A_2/A_1 = 0.90 \pm 0.02$ on 1990 April 2, \citealt{kh93}), but somewhat stronger in the optical ($A_2/A_1 = 0.45\pm 0.06$ on 1991 November 2-4, \citealt{sm93}). In our analysis, we used three epochs of multi-wavelength MG 0414+0534 observations, presented in Table \ref{0414obs}. The first two epochs were archival HST data, obtained from the CASTLES Survey webpage\footnote{http://cfa-www.harvard.edu/castles/} \citep{fls97}. The third epoch was obtained by us using the Magellan 6.5-metre Baade telescope. These data were first presented in \citet{bfww08}. \begin{deluxetable}{lrll} \tablecaption{Observed flux ratios in MG 0414+0534\label{0414obs}} \tablehead{ \colhead{Filter} & \colhead{$\lambda_c$ (\AA)} & \colhead{$F_{obs} = A_2/A_1$} & \colhead{Date}} \startdata $H$ & 16500 & $0.67\pm0.05$ & 2007 November 3\\ $J$ & 12500 & $0.60\pm0.2$ & 2007 November 3\\ $z^\prime$ & 9134 & $0.34\pm0.1$ & 2007 November 3\\ $i^\prime$ & 7625 & $0.26\pm0.1$ & 2007 November 3\\ $r^\prime$ & 6231 & $0.21\pm0.1$ & 2007 November 3\\ F205W & 20650 & $0.83\pm0.03$ & 1997 August 14 \\ F110W & 11250 & $0.64\pm0.04$ & 1997 August 14 \\ F814W & 7940 & $0.47\pm0.01$ & 1994 November 8 \\ F675W & 6714 & $0.40\pm0.01$ & 1994 November 8 \\ \enddata \tablecomments{Central wavelengths $\lambda_c$ and observed (anomalous) flux ratios $F_{obs}$ between images $A_2$ and $A_1$ in MG 0414+0534, in each of nine filters. The 2007 November 3 observations were taken with the IMACS and PANIC instruments on the Magellan 6.5-m Baade telescope \citep{bfww08}. 
The 1997 August 14 observations were taken with the NICMOS instrument on \textit{HST} (obtained from the CASTLES Survey web page). The 1994 November 8 observations were taken with the WFPC2 instrument on \textit{HST} (\citealt{fls97}).} \end{deluxetable} \subsection{SDSS J0924+0219} SDSS J0924+0219 is the most anomalous lensed quasar currently known. The minimum image $A$ has been observed to be a factor of $\sim20$ brighter than the saddle point image $D$ in the optical \citep{keeton+06}. The quasar was discovered by \citet{inada03} in Sloan Digital Sky Survey (SDSS) imaging, and consists of an early-type lensing galaxy at $z_l = 0.394$ \citep{eigenbrod06a} and a background quasar at $z_s=1.524$ \citep{inada03}. Again, we use three epochs of observational data. These are presented in Table \ref{0924obs}. The 2008 March 21 data were obtained by us using the Magellan 6.5-metre Baade telescope \citep{fbw09}. The 2003 November 18-23 data were taken using the HST/NICMOS and WFPC2 instruments as part of the CASTLES Survey \citep{keeton+06}. The 2001 December 15 data were obtained by \citet{inada03} using the MagIC instrument on the Baade telescope, and re-reduced by us (details can be found in \citealt{fbw09}).
\begin{deluxetable}{lrll} \tablecaption{Observed flux ratios in SDSS J0924+0219\label{0924obs}} \tablehead{ \colhead{Filter} & \colhead{$\lambda_c$ (\AA)} & \colhead{$F_{obs}={D}/{A}$} & \colhead{Date}} \startdata $H$ & $16500\pm1450$ & $0.23\pm0.05$ & 2008 March 21 \\ $J$ & $12500\pm800$ & $0.15\pm0.05$ & 2008 March 21 \\ $Y$ & $10200\pm500$ & $0.14\pm0.05$ & 2008 March 21 \\ $z^\prime$ & $9134\pm800$ & $0.19\pm0.10$ & 2008 March 21 \\ $i^\prime$ & $7625\pm650$ & $0.16\pm0.10$ & 2008 March 21 \\ $r^\prime$ & $6231\pm650$ & $0.10\pm0.10$ & 2008 March 21 \\ $g^\prime$ & $4750\pm750$ & $0.08\pm0.08$ & 2008 March 21 \\ $i^\prime$ & $7625\pm650$ & $0.08\pm0.05$ & 2001 December 15 \\ $r^\prime$ & $6231\pm650$ & $0.07\pm0.05$ & 2001 December 15 \\ $g^\prime$ & $4750\pm750$ & $0.06\pm0.05$ & 2001 December 15 \\ $u^\prime$ & $3540\pm310$ & $<$0.09 & 2001 December 15 \\ F160W & $15950\pm2000$ & $0.08\pm0.01$ & 2003 November 18 \\ F814W & $8269\pm850$ & $0.05\pm0.005$ & 2003 November 18-19 \\ F555W & $5202\pm600$ & $0.05\pm0.01$ & 2003 November 23 \\ \enddata \tablecomments{Central wavelengths $\lambda_c$ and observed (anomalous) flux ratios $F_{obs}$ between images $A$ and $D$ in SDSS J0924+0219, in each filter. The 2008 March 21 observations were taken with the IMACS and PANIC instruments on the Magellan 6.5-m Baade telescope \citep{fbw09}. The 2001 December 15 data were taken using MagIC on Baade \citep{inada03}. The 2003 data were taken using HST/NICMOS and WFPC2 for the CASTLES Survey \citep{keeton+06}.} \end{deluxetable} \subsection{Q2237+0305} Q2237+0305 is perhaps the most well-studied gravitationally lensed quasar. It was discovered by \citet{huchra+85}, and consists of a lensing galaxy at $z_l=0.0394$ and a background quasar at $z_s=1.695$. The two previous sources had early type lensing galaxies; the lens in Q2237+0305 is a barred spiral. 
Near-perfect alignment between observer, lens and quasar results in four virtually symmetric images of the background source, located in the bulge of the lensing galaxy. The optical depth to stars is therefore quite high, making the system an excellent target for microlensing analyses. Typically, this is taken to mean that the smooth matter percentage in microlensing simulations can be set to zero. We will test this assumption here. Q2237+0305 also differs from the two previous sources in that it does not contain a close image pair displaying a flux ratio anomaly. Nevertheless, its images are known to vary in brightness due to microlensing. We choose to examine images $A$ and $B$ both because they are well modelled and because they are roughly equidistant from the centre of the lensing galaxy. Our observational data were obtained from \citet{eigenbrod+08b}, in which 43 epochs of Q2237+0305 spectroscopic data from the FORS1 spectrograph on the Very Large Telescope (VLT) were analysed. Eigenbrod and collaborators deconvolved their Q2237+0305 spectra into a broad emission line component, a continuum emission component, and an iron pseudo-continuum. The continuum emission was fit with a power-law of the form $f_\nu \propto \nu^{\alpha_\nu}$. The power-law fit was then split into six wavelength bands, each with a width of 250\AA~in the quasar rest frame, and integrated in each band. The result is pseudo-broadband photometry in six wavebands, with contamination from broad emission lines and the iron continuum removed. We selected two epochs from these data for our analysis, separated by approximately a year. The flux ratios are presented in Table \ref{2237obs}. Following the \citet{eigenbrod+08b} numbering, the 2005 November 11 dataset is epoch number 17, and the 2006 November 10 dataset is epoch number 28.
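As an illustration of the band-integration step, the sketch below integrates a power-law continuum $f_\nu \propto \nu^{\alpha_\nu}$ over one observed band and checks the result against the analytic antiderivative. The normalisation and slope are hypothetical illustration values, not the parameters fitted by \citet{eigenbrod+08b}.

```python
# Toy band integration of a power-law quasar continuum f_nu = A * nu**alpha.
# A and alpha below are hypothetical, not the fitted FORS1 values.

C_ANGSTROM = 2.998e18  # speed of light in Angstrom/s


def band_flux(A, alpha, lam_lo, lam_hi, n=10000):
    """Numerically integrate f_nu over an observed band [lam_lo, lam_hi] (Angstrom)."""
    nu_lo = C_ANGSTROM / lam_hi   # longer wavelength -> lower frequency
    nu_hi = C_ANGSTROM / lam_lo
    dnu = (nu_hi - nu_lo) / n
    # midpoint rule over the band
    return sum(A * (nu_lo + (i + 0.5) * dnu) ** alpha for i in range(n)) * dnu


def band_flux_analytic(A, alpha, lam_lo, lam_hi):
    """Closed-form integral of A*nu**alpha over the same band (alpha != -1)."""
    nu_lo = C_ANGSTROM / lam_hi
    nu_hi = C_ANGSTROM / lam_lo
    return A * (nu_hi ** (alpha + 1) - nu_lo ** (alpha + 1)) / (alpha + 1)


# Band 4 of the 2005 epoch spans 6401 +/- 337 Angstrom in the observed frame.
A, alpha = 1.0e-10, -0.5  # hypothetical continuum parameters
f_num = band_flux(A, alpha, 6401 - 337, 6401 + 337)
f_ana = band_flux_analytic(A, alpha, 6401 - 337, 6401 + 337)
```

Note that for two images whose continua share the same slope $\alpha_\nu$, the band-integrated flux ratio reduces to the ratio of normalisations, so any wavelength dependence in the observed $B/A$ ratios is attributable to microlensing rather than to the integration step.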
\begin{deluxetable*}{lrrll} \tablecaption{Observed flux ratios in Q2237+0305\label{2237obs}} \tablehead{ \colhead{Band} & \colhead{Emitted $\lambda_c$ (\AA)} & \colhead{Observed $\lambda_c$ (\AA)} & \colhead{$B/A$} & \colhead{Date}} \startdata 1 & $1625\pm125$ & $4379\pm337$ & $0.52\pm0.02$ & 2005 November 11 \\ 2 & $1875\pm125$ & $5053\pm337$ & $0.51\pm0.02$ & 2005 November 11 \\ 3 & $2125\pm125$ & $5727\pm337$ & $0.50\pm0.01$ & 2005 November 11 \\ 4 & $2375\pm125$ & $6401\pm337$ & $0.49\pm0.01$ & 2005 November 11 \\ 5 & $2625\pm125$ & $7074\pm337$ & $0.48\pm0.01$ & 2005 November 11 \\ 6 & $2875\pm125$ & $7748\pm337$ & $0.47\pm0.01$ & 2005 November 11 \\ 1 & $1625\pm125$ & $4379\pm337$ & $0.33\pm0.02$ & 2006 November 10 \\ 2 & $1875\pm125$ & $5053\pm337$ & $0.34\pm0.02$ & 2006 November 10 \\ 3 & $2125\pm125$ & $5727\pm337$ & $0.35\pm0.01$ & 2006 November 10 \\ 4 & $2375\pm125$ & $6401\pm337$ & $0.36\pm0.02$ & 2006 November 10 \\ 5 & $2625\pm125$ & $7074\pm337$ & $0.37\pm0.02$ & 2006 November 10 \\ 6 & $2875\pm125$ & $7748\pm337$ & $0.37\pm0.01$ & 2006 November 10 \\ \enddata \tablecomments{Two epochs of observational $B/A$ flux ratios for Q2237+0305, obtained from \citet{eigenbrod+08b}. Photometry was extracted from spectra obtained with the FORS1 spectrograph on the Very Large Telescope (VLT) at the European Southern Observatory (ESO). Filter wavelengths are in the rest frame of the source quasar, located at a redshift of $z_s=1.695$. Following the \citet{eigenbrod+08b} numbering, the 2005 November 11 dataset is epoch number 17, and the 2006 November 10 dataset is epoch number 28.} \end{deluxetable*} \section{Microlensing simulations} \label{sec:sims} The simulation technique used in this work has been presented previously in \citet{bfww08} and \citet*{fbw09}. In those papers, we marginalised over the smooth matter percentage $s$ as a nuisance parameter.
Here, we instead marginalise over the accretion disc radius $\sigma_0$ and the power-law index $\zeta$ relating the radius of the accretion disc to the observed wavelength. Our microlensing simulations were conducted using an inverse ray-shooting technique (\citealt{krs86}; \citealt{wpk90}). The key parameters in these simulations are the convergence $\kappa_{tot}$, which is a measure of the focussing power of the lens, and the shear $\gamma$, which is a measure of the distortion introduced by the lens. The lensing parameters used in this analysis are presented in Table \ref{tab:lensparams}. \begin{deluxetable}{lcccc} \tablecaption{Lensing parameters for the images of interest in this analysis\label{tab:lensparams}} \tablehead{ \colhead{Quasar} & \colhead{Image} & \colhead{$\kappa_{tot}$} & \colhead{$\gamma$} & \colhead{$\mu_{tot}$}} \startdata MG 0414+0534 & $A_1$ & 0.472 & 0.488 & 24.2 \\ MG 0414+0534 & $A_2$ & 0.485 & 0.550 & -26.8 \\ SDSS J0924+0219 & $A$ & 0.502 & 0.458 & 26.2 \\ SDSS J0924+0219 & $D$ & 0.476 & 0.565 & -22.4 \\ Q2237+0305 & $A$ & 0.413 & 0.382 & 5.03 \\ Q2237+0305 & $B$ & 0.410 & 0.384 & 4.98 \\ \enddata \tablecomments{Convergence $\kappa_{tot}$, shear $\gamma$ and magnification $\mu_{tot}$ for each of the lensed images examined in this paper. A negative total magnification is interpreted as a parity flip. MG 0414+0534 parameters were obtained from \citet{wms95}. SDSS J0924+0219 parameters were obtained from \citet{keeton+06}. Q2237+0305 parameters were obtained via private communication with Cathryn Trott, based on modelling in \citet{trott+09}.} \end{deluxetable} The convergence can be split into two components $\kappa_{tot} = \kappa_* + \kappa_s$, where $\kappa_*$ describes a clumpy stellar component and $\kappa_s$ a smoothly distributed component.
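The essence of inverse ray-shooting can be sketched in a few dozen lines: rays are propagated through the lens equation (smooth convergence and shear, plus point-mass deflectors) and binned in the source plane, where the ray count per pixel traces the magnification. The sketch below is a deliberately small toy, with illustrative parameters, equal-mass stars rather than a Salpeter mass function, and a coarse grid; it is not the production code used for the maps in this paper.

```python
import math
import random

# Toy inverse ray-shooting: shoot a regular grid of rays through a field of
# unit-mass point lenses plus a smooth matter component, and bin the deflected
# rays in the source plane. All parameters are illustrative only.

random.seed(1)

kappa_tot, gamma, s = 0.47, 0.49, 0.5      # convergence, shear, smooth fraction
kappa_s = s * kappa_tot                    # smoothly distributed component
kappa_star = kappa_tot - kappa_s           # convergence in stars

L = 6.0                                    # half-size of shooting region (Einstein radii)
# number of unit-mass stars needed to realise kappa_star over the region
n_stars = int(kappa_star * (2 * L) ** 2 / math.pi)
stars = [(random.uniform(-L, L), random.uniform(-L, L)) for _ in range(n_stars)]

n_side = 200                               # rays per side in the image plane
half_src, n_pix = 2.0, 50                  # source-plane half-size and resolution
counts = [[0] * n_pix for _ in range(n_pix)]

for i in range(n_side):
    for j in range(n_side):
        x1 = -L + (i + 0.5) * 2 * L / n_side
        x2 = -L + (j + 0.5) * 2 * L / n_side
        # smooth matter and shear terms of the lens equation
        y1 = (1 - kappa_s - gamma) * x1
        y2 = (1 - kappa_s + gamma) * x2
        # point-mass deflections
        for sx, sy in stars:
            dx, dy = x1 - sx, x2 - sy
            r2 = dx * dx + dy * dy + 1e-12
            y1 -= dx / r2
            y2 -= dy / r2
        # bin the ray in the source-plane magnification map
        if -half_src <= y1 < half_src and -half_src <= y2 < half_src:
            u = int((y1 + half_src) / (2 * half_src) * n_pix)
            v = int((y2 + half_src) / (2 * half_src) * n_pix)
            counts[u][v] += 1

total = sum(map(sum, counts))  # rays landing in the mapped source region
```

The per-pixel magnification follows from the ray counts relative to the unlensed ray density; increasing the smooth fraction $s$ at fixed $\kappa_{tot}$ reduces the number of stars and visibly changes the caustic structure, which is the effect exploited in this paper.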
We define the smooth matter percentage $s$ to be the ratio of the continuously distributed component to the total convergence: \begin{equation} s = \kappa_s / \kappa_{tot} \end{equation} We allowed the smooth matter percentage to vary from 0 to 99 per cent, in 10 per cent increments. The smooth matter percentage is thus relatively coarsely sampled; our simulations were optimised to probe the accretion disc sources in each system. The microlenses in our simulations were drawn from a Salpeter mass function $dN/dM \propto M^{-2.35}$ with a mass range $M_{max}/M_{min} = 50$. Physical sizes are therefore scaled by the average Einstein Radius projected on to the source plane $\eta_0$, which varies from system to system. Magnification maps were generated covering an area of $24\eta_0 \times 24\eta_0$, with a resolution of $2048\times2048$ pixels. Ten maps were generated for each image and smooth matter percentage. As discussed in \citet{bfww08}, we randomly selected source positions in each combination of magnification maps, to build up a simulated library of flux ratio curves as a function of wavelength. Comparing these with the observations allows us to construct a three-dimensional likelihood distribution for the observed flux ratio spectrum $F^{obs}$ given three model parameters: the radius of the quasar source in the bluest filter $\sigma_0$, the power-law index relating observed wavelength to radius of the source $\zeta$, and the smooth matter percentage $s$. We can convert these likelihoods to an \textit{a posteriori} probability distribution for the three model parameters given the observations using Bayes' theorem: \begin{equation} \frac{\rm{d}^3P}{\rm{d}\sigma_0\rm{d}\zeta\rm{d}s} \propto L(F^{obs}|\sigma_0,\zeta,s) \frac{\rm{d}P_{prior}}{\rm{d}\sigma_0} \frac{\rm{d}P_{prior}}{\rm{d}\zeta} \frac{\rm{d}P_{prior}}{\rm{d}s} \end{equation} Uniform priors were used for the two dimensionless quantities, smooth matter percentage $s$ and power-law index $\zeta$.
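A grid-based sketch of this posterior construction, and of the subsequent marginalisation down to a distribution for $s$ alone, might look as follows. The Gaussian "likelihood" is a placeholder standing in for the comparison of simulated and observed flux ratio spectra, and the grids and preferred parameter values are invented for illustration.

```python
import math

# Grid posterior on (sigma0, zeta, s), marginalised to a distribution in s.
# The likelihood below is a toy placeholder peaked at invented values
# (sigma0 = 0.08 eta_0, zeta = 4/3, s = 0.8), not a fit to real data.

sigma0_grid = [0.01 * 2 ** k for k in range(8)]   # log-spaced source radii (eta_0)
zeta_grid = [0.5 + 0.25 * k for k in range(8)]    # radius-wavelength power-law index
s_grid = [0.1 * k for k in range(10)]             # smooth matter fraction


def likelihood(sigma0, zeta, s):
    """Placeholder for L(F_obs | sigma0, zeta, s)."""
    return math.exp(-0.5 * (math.log(sigma0 / 0.08) ** 2
                            + ((zeta - 4.0 / 3.0) / 0.5) ** 2
                            + ((s - 0.8) / 0.15) ** 2))


# Uniform priors in zeta and s; a logarithmic prior in sigma0 corresponds to
# uniform weighting on the log-spaced sigma0_grid used here.
post_s = []
for sv in s_grid:
    p = sum(likelihood(sigma0, zeta, sv)
            for sigma0 in sigma0_grid for zeta in zeta_grid)
    post_s.append(p)

norm = sum(post_s)
post_s = [p / norm for p in post_s]  # normalised marginal dP/ds on the grid
```

Summing the likelihood over the $\sigma_0$ and $\zeta$ grids at each $s$ is the discrete analogue of the marginalisation integral, and the normalised `post_s` plays the role of the differential distributions shown in the figures.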
A logarithmic prior was used for the radius $\sigma_0$. We note that this differs slightly from the analyses in \citet{bfww08} and \citet{fbw09}, where a uniform prior was also used for $\sigma_0$. We will briefly discuss prior dependence in Section \ref{sec:results}. We marginalise over the accretion disc parameters $\sigma_0$ and $\zeta$ to obtain a probability distribution for the smooth matter percentage $s$: \begin{equation} \frac{\rm{d}P}{\rm{d}s} = \int \int \frac{\rm{d}^3P}{\rm{d}\sigma_0\rm{d}\zeta\rm{d}s} \rm{d}\sigma_0 \rm{d}\zeta \end{equation} Our analysis focusses on two lensed images in each system. By dealing with flux ratios between images only, we remove the intrinsic quasar flux from the problem, provided the difference in light travel time between images is short. We assume that the smooth matter percentages at the two lensed image positions are identical. This is reasonable for the two anomalous systems, MG 0414+0534 and SDSS J0924+0219, as the anomalous images lie very close to each other. In Q2237+0305, where the images are widely separated, we choose to analyse images $A$ and $B$ only, as they are essentially equidistant from the centre of the lensing galaxy and do not lie atop any obvious spiral features. The probability distributions we obtain for smooth matter percentage are presented for MG 0414+0534 (Figure \ref{0414smooth}), SDSS J0924+0219 (Figure \ref{0924smooth}) and Q2237+0305 (Figure \ref{2237smooth}). The dashed histograms show the differential probability distributions, and the solid lines show the cumulative probability distributions. \section{Results and discussion} \label{sec:results} We obtain the following formal constraints on the smooth matter percentage at the image positions in each system: $50^{+30}_{-40}$ per cent in MG 0414+0534, $80^{+10}_{-10}$ per cent in SDSS J0924+0219, and $\leq50$ per cent in Q2237+0305 (68 per cent confidence limits are quoted).
Our simulations are not optimised to probe the smooth matter percentage; we sample smooth matter parameter space only sparsely, in order to reduce simulation time. These results should therefore be considered estimates, rather than exact measurements. Nevertheless, they provide an interesting, and currently poorly explored, measurement of smooth matter content within only a few effective radii of early-type galaxies. In MG 0414+0534, the differential probability distribution (Figure \ref{0414smooth}, dashed line) does not particularly favour any single smooth matter percentage. This leads to a quite broad formal constraint on the smooth matter percentage. \begin{figure} \plotone{fig1.eps} \caption{Probability distribution for smooth matter percentage in MG 0414+0534. The differential distribution (dashed line) and cumulative distribution (solid line) are both provided.\label{0414smooth}} \end{figure} In contrast, our measured smooth matter percentage in SDSS J0924+0219 is high, as we would expect for such an anomalous system. There is one other measurement of the smooth matter percentage in this system in the literature: \citet{keeton+06} estimated it to be 80 to 85 per cent, based on the estimated size of the broad emission line region in the system. Our analysis is focussed on the accretion disc, and finds essentially identical results. We do use the \citet{keeton+06} observational data in obtaining our constraints; however, excluding it and working only with the Magellan data does not significantly alter our results. \begin{figure} \plotone{fig2.eps} \caption{Probability distribution for smooth matter percentage in SDSS J0924+0219. The differential distribution (dashed line) and cumulative distribution (solid line) are both provided.\label{0924smooth}} \end{figure} As discussed earlier, the lensed images in Q2237+0305 lie in the bulge of the lensing galaxy. Stars are therefore expected to dominate the microlensing signal, rather than a smooth matter component.
Indeed, we find a smooth matter percentage that is consistent with zero in this system. There is a peak in the differential probability distribution (Figure \ref{2237smooth}, dashed line) at $\sim20$ per cent. Though we are reluctant to suggest that this peak is real, we do note that such a feature could be evidence of additional absorbing material along the line of sight (see for example \citealt{foltz+92}, who reported Mg II absorption features in the spectrum of Q2237+0305 at a redshift of 0.97). \begin{figure} \plotone{fig3.eps} \caption{Probability distribution for smooth matter percentage in Q2237+0305 at the image $A$ and $B$ positions. The $B/A$ flux ratio was used as those images are at roughly the same projected distance from the centre of the lensing galaxy ($\sim0\farcs95$). The differential distribution (dashed line) and cumulative distribution (solid line) are both provided.\label{2237smooth}} \end{figure} To confirm that our results are not dominated by the choice of prior probability for the radius of the background quasar accretion disc $\sigma_0$, we repeated our analysis using a uniform prior rather than a logarithmic prior. Within their errors, we found no significant variation in our constraints on the smooth matter percentage in any of our systems. We can perform a simple calculation to obtain a rough theoretical prediction of the percentage of dark matter we expect to see at the image positions in each source. The method is briefly described by \citet{sw02}, but is repeated here for clarity. It is only applicable to MG 0414+0534 and SDSS J0924+0219, as it assumes an early-type lensing galaxy. We begin by working out a stellar surface mass density $\Sigma_s$ at the image positions.
To do this, we take the observed effective radius of the lensing galaxy $R_e$, and compare it with Figure 10 in \citet{bernardi03} to obtain a $g$-band surface brightness in magnitudes per square arcsecond (assuming no evolution in early-type galaxies between redshift $z=0$ and the redshift of the lensing galaxy). \citet{kauffmann03} provide a relationship between mass-to-light ratio $M/L$ and $g$-band magnitude derived from $10^5$ Sloan galaxies in their Figure 14. Using the $g$-band surface brightness obtained from \citet{bernardi03}, we can get a rough mass-to-light ratio for our lensing galaxy. We convert our surface brightness from magnitudes per square arcsecond into solar luminosities per square parsec using the following relationship: \begin{equation} S[{\rm mag/arcsec^2}] = M_{\odot g} + 21.572 - 2.5\log_{10}S[{\rm L_\odot/pc^2}] \end{equation} where $S$ is the surface brightness and $M_{\odot g}=5.45$ is the solar absolute magnitude in the $g$-band. Now that we have the lens galaxy surface brightness in solar luminosities per square parsec, we can use the mass-to-light ratio to convert it into a stellar surface mass density in solar masses per square parsec. This is the stellar surface mass density at the effective radius $R_e$ -- we use the de Vaucouleurs profile to extrapolate this value out to the Einstein Radius, which is the approximate location of the lensed images. The final piece of information we need is the total surface mass density at the image positions. The critical surface mass density $\Sigma_{cr}$ for lensing is obtained from the following relationship: \begin{equation} \Sigma_{cr} = \frac{c^2}{4\pi G} \frac{D_s}{D_d D_{ds}} \end{equation} where $D_d$ is the angular diameter distance to the lens, $D_s$ is the angular diameter distance to the source, and $D_{ds}$ is the angular diameter distance between lens and source. The other symbols have their usual meanings.
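The critical surface mass density can be evaluated with a short numerical integration for the angular diameter distances. The sketch below assumes the flat cosmology adopted in this paper ($H_0=70\,{\rm km\,s^{-1}\,Mpc^{-1}}$, $\Omega_m=0.3$, $\Omega_\Lambda=0.7$) and, for the MG 0414+0534 redshifts, lands close to the $2189\,M_\odot/{\rm pc^2}$ quoted below.

```python
import math

# Sigma_cr = c^2 / (4 pi G) * D_s / (D_d * D_ds) for a flat LCDM cosmology.

H0, OM, OL = 70.0, 0.3, 0.7   # cosmology used throughout the paper
C_KMS = 299792.458            # speed of light, km/s
G = 6.674e-11                 # m^3 kg^-1 s^-2
MPC_M = 3.0857e22             # metres per Mpc
PC_M = 3.0857e16              # metres per parsec
MSUN_KG = 1.989e30            # kg per solar mass


def comoving_distance(z, n=2000):
    """Line-of-sight comoving distance in Mpc (Simpson's rule, n even)."""
    h = z / n
    total = 0.0
    for i in range(n + 1):
        zi = i * h
        w = 1 if i in (0, n) else (4 if i % 2 else 2)
        total += w / math.sqrt(OM * (1 + zi) ** 3 + OL)
    return (C_KMS / H0) * total * h / 3


def sigma_cr(zl, zs):
    """Critical surface density in Msun / pc^2 (flat universe)."""
    dc_l, dc_s = comoving_distance(zl), comoving_distance(zs)
    dd = dc_l / (1 + zl) * MPC_M              # angular diameter distances, m
    ds = dc_s / (1 + zs) * MPC_M
    dds = (dc_s - dc_l) / (1 + zs) * MPC_M    # flat-universe lens-source distance
    sig = C_KMS ** 2 * 1e6 / (4 * math.pi * G) * ds / (dd * dds)  # kg/m^2
    return sig * PC_M ** 2 / MSUN_KG


# MG 0414+0534: lens z = 0.96, source z = 2.64 -> roughly 2189 Msun/pc^2
mg0414 = sigma_cr(0.96, 2.64)
```

The same function with $z_l=0.39$, $z_s=1.524$ gives the SDSS J0924+0219 value used in the worked calculation.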
For an isothermal sphere, the surface mass density at the Einstein Radius is half the critical surface mass density $\Sigma_{cr}$. The smooth matter percentage $s$ is simply obtained as follows: \begin{equation} s = 1 - \frac{\Sigma_s}{0.5\Sigma_{cr}} \end{equation} MG 0414+0534 has a de Vaucouleurs effective radius $R_e = 0\farcs78$ \citep{ketal00}, which gives a $g$-band surface brightness of $\sim21.75\,{\rm mag/arcsec^2}$ and a mass-to-light ratio $M/L \sim4$. This gives a stellar surface mass density at the effective radius of $514 M_\odot/pc^2$. For an Einstein Radius of $1\farcs15$ \citep{trotter00}, the stellar surface mass density on the Einstein Ring is smaller by a factor of 0.46, giving $\Sigma_s=236 M_\odot/pc^2$. For a source at $z=2.64$ and a lens at $z=0.96$, the critical surface mass density for lensing is $\Sigma_{cr} = 2189 M_\odot/pc^2$. This gives a theoretically predicted smooth matter percentage of 78 per cent. Note that this number differs slightly from the figure quoted in \citet{sw02}, as they were using preprints of \citet{bernardi03} and \citet{kauffmann03}. This prediction is on the high side of our measured smooth matter percentage of $50^{+30}_{-40}$ per cent, although it is formally consistent. SDSS J0924+0219 has two measured effective radii in the literature: $R_e=0\farcs31\pm0\farcs02$ \citep{metal06} and $R_e=0\farcs50\pm0\farcs05$ \citep{eigenbrod06a}. Both results were obtained using {\it HST} data. \citet{metal06} fit a de Vaucouleurs profile to the lensing galaxy, whereas \citet{eigenbrod06a} chose a two-dimensional exponential disc. Since an exponential profile is shallower than a de Vaucouleurs profile, it would tend to give a larger effective radius. This may allow the two effective radii to be reconciled; however, we will deal with each case separately. The system has a lensing galaxy at $z=0.39$ and a source at $z=1.524$, giving a critical surface mass density of $\Sigma_{cr} = 2323 M_\odot/pc^2$.
For an effective radius $R_e=0\farcs50$ we expect a $g$-band surface brightness of $\sim20.5\,{\rm mag/arcsec^2}$, and a mass-to-light ratio $M/L=5.6$. At the effective radius, the stellar surface mass density is $2285 M_\odot/pc^2$. For an Einstein Radius of $0\farcs85$ \citep{inada03} and a de Vaucouleurs profile, the stellar surface mass density at the Einstein Radius is lower by a factor of 0.35, giving $\Sigma_s = 791 M_\odot/pc^2$. Thus, we predict a smooth matter percentage of 32 per cent. This is significantly lower than our $1\sigma$ measured value of $80^{+10}_{-10}$ per cent. Even at the 95 per cent level, our measured smooth matter percentage is not consistent with the prediction of this rough calculation. The situation is somewhat different for an effective radius $R_e = 0\farcs31$. Here, an early-type galaxy with no evolution between $z=0$ and $z=0.39$ should have a $g$-band surface brightness of $\sim19.75\,{\rm mag/arcsec^2}$ and a mass-to-light ratio $M/L=5$. This gives a stellar surface mass density of $4052 M_\odot/pc^2$ at the effective radius. Again assuming a de Vaucouleurs profile, the stellar surface mass density is reduced by a factor of 0.13 at the Einstein Radius of $0\farcs85$, giving $539 M_\odot/pc^2$. We therefore expect 46 per cent of the mass to be in stars, and 54 per cent in smooth matter. This is consistent with our measured smooth matter percentage at the 95 per cent level ($80^{+10}_{-30}$ per cent). In Figure \ref{smooth_compare} we show a comparison between the results of this rough theoretical calculation and our microlensing measurements for MG 0414+0534 (circle) and SDSS J0924+0219 (square). No errors are provided for the rough theoretical calculations, although we emphasise that they should only be considered estimates. With only two data points, there does not seem to be any systematic difference between the microlensing measurements and the rough calculation.
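As a concrete check, the chain of conversions described above can be reproduced end-to-end for MG 0414+0534. The surface brightness, mass-to-light ratio and critical surface density are the worked values quoted in the text; the de Vaucouleurs constant $b\simeq7.669$ is the standard value and is an assumption of this sketch.

```python
import math

# Rough smooth-matter prediction for MG 0414+0534, following the steps in the
# text: g-band surface brightness -> stellar surface density at R_e ->
# de Vaucouleurs extrapolation to the Einstein Radius -> s = 1 - Sigma_s / (0.5 Sigma_cr).

M_SUN_G = 5.45            # solar absolute magnitude in g
SB_G = 21.75              # mag/arcsec^2 at R_e (read off Bernardi et al. 2003)
ML = 4.0                  # mass-to-light ratio (from Kauffmann et al. 2003)
R_E, R_EIN = 0.78, 1.15   # effective and Einstein radii (arcsec)
SIGMA_CR = 2189.0         # critical surface density, Msun/pc^2 (from the text)

# Invert S[mag/arcsec^2] = M_sun_g + 21.572 - 2.5 log10 S[Lsun/pc^2]
lum_density = 10 ** ((M_SUN_G + 21.572 - SB_G) / 2.5)   # Lsun/pc^2
sigma_star_re = ML * lum_density                        # ~514 Msun/pc^2 at R_e

# de Vaucouleurs profile: I(R) = I(R_e) exp(-b [(R/R_e)^(1/4) - 1]), b ~ 7.669
dev_factor = math.exp(-7.669 * ((R_EIN / R_E) ** 0.25 - 1))  # ~0.46
sigma_star = sigma_star_re * dev_factor                      # ~236 Msun/pc^2

# Isothermal sphere: total surface density at the Einstein Radius is Sigma_cr / 2
s = 1 - sigma_star / (0.5 * SIGMA_CR)                        # ~0.78
```

The recovered factors ($\sim$0.46 for the de Vaucouleurs extrapolation, $\sim$78 per cent smooth matter) match the worked numbers in the text to rounding precision.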
\begin{figure} \plotone{fig4.eps} \caption{Comparison between smooth matter percentages obtained from microlensing measurements and rough theoretical calculations in this paper. No errors are given for the theoretical calculations (see Section \ref{sec:results}), although they should only be considered rough estimates. The dashed line shows 1:1 correspondence. Results for MG 0414+0534 (circle) and SDSS J0924+0219 (square) are shown, with 68 per cent confidence limits.\label{smooth_compare}} \end{figure} It is, however, not correct to connect the smooth matter percentage in our simulations directly with the dark matter content of the lens. Intervening systems (which may not be detected otherwise) can contribute smooth matter to the surface mass density, as can gas and dust in the lens. As the lensing galaxies in both MG 0414+0534 and SDSS J0924+0219 are early-type galaxies, we would expect this contribution to be very small. More importantly, \citet{lg06} showed that small compact masses can mimic a smooth matter component. This was first demonstrated analytically in \citet{dr81}. For a source with a radius $0.1\eta_0$, \citet{lg06} find that compact masses smaller than $\sim0.01\rm{M}_\odot$ can mimic a smoothly distributed mass component. This may help to explain why our measured microlensing smooth matter percentage in SDSS J0924+0219 appears to be slightly higher than we would expect from a simple de Vaucouleurs stellar profile embedded in an isothermal dark matter halo. We note that the three lensing systems analysed in this paper do not represent a fair statistical sample of gravitationally lensed quasars. The single-epoch imaging technique used here requires short time delays between images, and so we preferentially select lenses with close image pairs (or high symmetry in the case of Q2237+0305). We have chosen close image pairs that exhibit strongly anomalous flux ratios for our analysis.
Since, a priori, anomalous flux ratios are driven by a smooth matter fraction \citep{sw02}, this selection biases our estimates towards larger smooth dark matter fractions. Thus our results cannot be used to make global statements regarding smooth dark matter in galaxies. However, eight out of ten close image systems display anomalous flux ratios \citep{pooley+07}, and so we expect the smooth matter percentage at those image positions to be high. This indicates that any bias in our analysis of individual systems would be relatively minor. In the future, our analysis could be extended to a statistical sample through the use of monitoring data and the light curve technique discussed in \citet{k04}. \section{Conclusions} \label{sec:conclusions} We have presented estimates of the smooth matter percentage in three lensing galaxies along the lines of sight to the lensed images. We find a smooth matter percentage of $50^{+30}_{-40}$ per cent in MG 0414+0534, $80^{+10}_{-10}$ per cent in SDSS J0924+0219, and $\leq50$ per cent in Q2237+0305, with 68 per cent confidence. In the two systems where the lensed images lie in the outer regions of the lensing galaxies (5 to 10 kpc from their centres), these measurements are inconsistent with zero smooth dark matter in the lensing galaxies. In Q2237+0305, where the lensed images lie in the central bulge of the lensing galaxy and so stars are expected to dominate the microlensing signal, our result is consistent with zero per cent smooth matter, as expected. These measurements were obtained using a single-epoch imaging technique that is free from the need for long-term monitoring campaigns, which are required for detailed analysis of the lens mass profile. Our results also do not depend upon the typically unknown velocities of the stars in the lensing galaxy, and of the lensing galaxy and background source themselves, which enter into analyses of microlensing lightcurves.
It is, however, only appropriate for systems with time delays between images of less than a day, so that we can ensure we are observing the background source in the same state along each line of sight. It also does not provide us with any information on the slope of the dark matter profile; for that, stellar velocity dispersions in the lensing galaxy are required. Nevertheless, our technique does provide an observationally inexpensive method for estimating the dark matter fraction in lensing galaxies at the location of lensed images. Gravitational lensing remains the only method for directly probing the dark matter content of galaxies. \acknowledgments NFB acknowledges the support of an Australian Postgraduate Award. DJEF acknowledges the support of a Magellan Fellowship from Astronomy Australia Limited. We are indebted to Joachim Wambsganss for the use of his inverse ray-shooting code. We thank the referee for comments which helped improve the final version of this manuscript. {\it Facilities:} \facility{Magellan:Baade}, \facility{HST}, \facility{VLT:Kueyen}.